Augmented reality viewer with automatic surface selection placement and content orientation placement


Reading note: This technology, "Augmented reality viewer with automatic surface selection placement and content orientation placement", was created by V. Ng-Thow-Hing on 2019-06-10. Its main content: An augmented reality viewer is described. A user orientation determination module determines a user orientation. A content vector calculator calculates a content orientation vector relative to a near edge and a far edge of the content, determines a dot product of the user orientation vector and the content orientation vector, and positions the content based on the magnitude of the dot product. A surface region vector calculator calculates a surface region orientation vector for each of a plurality of surface regions. A surface selection module determines a dot product of the user orientation vector with each surface region orientation vector and selects a preferred surface based on the relative magnitudes of the dot products.

1. An augmented reality viewer comprising:

a display that allows a user to see real world objects;

a data channel for holding content;

a user orientation determination module to determine a first user orientation of a user relative to a first display area and to determine a second user orientation of the user relative to the first display area;

a projector connected to the data channel to display the content to the user through the display within a boundary of the first display area while the user views the real world objects; and

a content orientation selection module connected to the surface extraction module and the user orientation determination module to display the content in a first content orientation relative to the first display area, such that a near edge of the content is proximate to the user when the user is in the first user orientation, and to display the content in a second content orientation relative to the first display area, such that the near edge is rotated closer to the user when the user is in the second user orientation, wherein the content is rotated from the first content orientation to the second content orientation relative to the first display area.

2. An augmented reality viewer according to claim 1, wherein the user orientation determination module determines a user orientation vector indicative of an orientation of a user, the augmented reality viewer further comprising:

a content vector calculator to calculate a content orientation vector relative to the near edge of the content, wherein the content orientation selection module determines a dot product of the user orientation vector and the content orientation vector and rotates the content from the first content orientation to the second content orientation based on the magnitudes of the dot product in the first and second content orientations.

3. An augmented reality viewer according to claim 2, wherein the content orientation vector extends from the near edge of the content and the content rotates from the first content orientation to the second content orientation when the dot product becomes larger in the second content orientation than in the first content orientation.

4. The augmented reality viewer of claim 1, further comprising:

a sizing module that adjusts a size of the content to fit the first display area in the first content orientation and in the second content orientation.

5. An augmented reality viewer according to claim 4, wherein the content has the same aspect ratio in the first and second orientations.

6. The augmented reality viewer of claim 1, further comprising:

a surface region extraction module to determine the first display area.

7. An augmented reality viewer according to claim 6, wherein the surface region extraction module determines a second surface region and the user orientation determination module determines the first user orientation of the user relative to the first and second surface regions, the augmented reality viewer further comprising:

a surface region selection module to select a preferred surface region between the first surface region and the second surface region based on a normal of the respective surface region being oriented more directly opposite to the first user orientation of the user, wherein the projector displays the content to the user through the display within a boundary of the preferred surface region while the user views the real world objects.

8. An augmented reality viewing method, comprising:

determining, by a processor, a first user orientation of a user relative to a first display area;

determining, by the processor, a first content orientation relative to the display when the user is in the first user orientation;

displaying, by the processor, content to the user in the first content orientation through a display within a boundary of the first display area while the user views real world objects through the display while in the first user orientation;

determining, by the processor, a second user orientation of the user relative to the first display area;

determining, by the processor, a second content orientation relative to the display when the user is in the second user orientation; and

displaying, by the processor, content to the user in the second content orientation through the display within a boundary of the first display area while the user views the real world objects through the display while in the second user orientation, wherein the content is rotated from the first content orientation to the second content orientation relative to the first display area.

9. An augmented reality viewer comprising:

a display that allows a user to view real world objects;

a data channel for holding content;

a surface region extraction module for determining a first surface region and a second surface region;

a user orientation determination module for determining a first orientation of a user relative to the first and second surface regions;

a surface region selection module that selects a preferred surface region between the first surface region and the second surface region based on a normal of the respective surface region being oriented more directly opposite to the first user orientation of the user; and

a projector that displays the content to the user through the display within the boundaries of the preferred surface region while the user views the real world objects.

10. The augmented reality viewer of claim 9, wherein the user orientation determination module determines a first user orientation vector indicative of the first user orientation of the user, the augmented reality viewer further comprising:

a surface region vector calculator to calculate a first surface region orientation vector indicative of an orientation of the first surface region and a second surface region orientation vector indicative of an orientation of the second surface region, wherein the surface region selection module determines a dot product of the first user orientation vector and the first surface region orientation vector and a dot product of the first user orientation vector and the second surface region orientation vector, and selects the preferred surface region based on a relative size of the dot product of the first user orientation vector and the first surface region orientation vector and the dot product of the first user orientation vector and the second surface region orientation vector.

11. An augmented reality viewer according to claim 10, wherein the first surface region orientation vector is perpendicular to the first surface region, the second surface region orientation vector is perpendicular to the second surface region, and the preferred surface region is selected based on which dot product is most negative.

12. An augmented reality viewer according to claim 10, wherein the user orientation determination module determines a second user orientation vector indicative of a second orientation of the user, and the surface region selection module determines a dot product of the second user orientation vector and the first surface region orientation vector and a dot product of the second user orientation vector and the second surface region orientation vector, and selects the preferred surface region based on the relative magnitudes of the dot product of the second user orientation vector and the first surface region orientation vector and the dot product of the second user orientation vector and the second surface region orientation vector.

13. An augmented reality viewer according to claim 12, wherein the user remains in the same position relative to the first and second surface regions when the first user orientation is changed to the second user orientation.

14. An augmented reality viewer according to claim 12, wherein the user moves from a first position to a second position relative to the first and second surface regions when the first user orientation changes to the second user orientation.

15. An augmented reality viewer according to claim 12, wherein the preferred surface area remains the same when the user orientation vector changes from the first user orientation vector to the second user orientation vector.

16. An augmented reality viewer according to claim 12, wherein the preferred surface changes from the first surface to the second surface when the user orientation vector changes from the first user orientation vector to the second user orientation vector.

17. The augmented reality viewer of claim 12, further comprising:

a sizing module that adjusts a size of the content on the preferred surface region to fit the second surface region.

18. An augmented reality viewer according to claim 17, wherein the content has the same aspect ratio in the first and second surface regions.

19. An augmented reality viewing method, comprising:

determining, by a processor, a first surface region and a second surface region;

determining, by the processor, a first orientation of a user relative to the first surface region and the second surface region;

selecting, by the processor, a preferred surface region between the first surface region and the second surface region based on a normal of the respective surface region being oriented more directly opposite to the first orientation of the user; and

displaying, by the processor, content to the user through a display within the boundary of the preferred surface region while the user views real world objects through the display while in the first orientation.

20. An augmented reality viewer comprising:

an environment calculation unit for determining a first vector indicative of an orientation of a user;

a vector calculator for calculating a second vector;

a selection module to calculate a dot product of the first vector and the second vector;

a data channel for holding content;

a content rendering module to determine placement of the content based on the dot product;

a display that allows the user to see real world objects; and

a projector that displays the content to the user through the display while the user views the real world object through the display, the content being displayed based on the placement determined by the content rendering module.

21. An augmented reality viewing method, comprising:

determining, by a processor, a first vector indicative of an orientation of a user;

calculating, by the processor, a second vector;

calculating, by the processor, a dot product of the first vector and the second vector;

determining, by the processor, a placement of content based on the dot product; and

displaying, by the processor, the content to the user through a display while the user views real world objects through the display, the content being displayed based on the determined placement.

Technical Field

The invention relates to an augmented reality viewer and an augmented reality viewing method.

Background

Modern computing and display technologies have facilitated the development of "augmented reality" viewers. An augmented reality viewer is a wearable device that presents the user with two images, one for the left eye and one for the right eye. The objects in the two images are rendered from slightly different viewpoints, which allows the brain to perceive them as three-dimensional objects. Because the viewpoints of the images continually change as the viewer moves, movement around synthetic three-dimensional content can be simulated.

Augmented reality viewers typically include technology that allows digital or virtual image information to be presented as an augmentation of the visualization of the real world around the user. In one embodiment, the virtual image information is presented in a static position relative to the augmented reality viewer, so that if the user moves their head, and the augmented reality viewer with it, the user is presented with an image that remains stationary in front of them while the real world objects shift in their line of sight. This makes it appear to the user that the virtual image information is fixed not relative to the real world objects but relative to the viewer's perspective. In other embodiments, there are techniques to keep the virtual image information in a fixed position relative to the real world objects as the user moves their head. In the latter scenario, the user may be given some control over the initial placement of the virtual image information relative to the real world objects.

Disclosure of Invention

The present invention provides an augmented reality viewer, comprising: a display that allows a user to see real world objects; a data channel for holding content; a user orientation determination module to determine a first user orientation of a user relative to a first display area and to determine a second user orientation of the user relative to the first display area; a projector connected to the data channel to display the content to the user through the display within a boundary of the first display area while the user views the real world objects; and a content orientation selection module connected to the surface extraction module and the user orientation determination module to display the content in a first content orientation relative to the first display area, such that a near edge of the content is proximate to the user when the user is in the first user orientation, and to display the content in a second content orientation relative to the first display area, such that the near edge is rotated closer to the user when the user is in the second user orientation, wherein the content is rotated from the first content orientation to the second content orientation relative to the first display area.

The invention further provides an augmented reality viewing method, which comprises the following steps: determining, by a processor, a first user orientation of a user relative to a first display area; determining, by the processor, a first content orientation relative to the display when the user is in the first user orientation; displaying, by the processor, content to the user in the first content orientation through a display within a boundary of the first display area while the user views real world objects through the display while in the first user orientation; determining, by the processor, a second user orientation of the user relative to the first display area; determining, by the processor, a second content orientation relative to the display when the user is in the second user orientation; and displaying, by the processor, content to the user in the second content orientation through the display within a boundary of the first display area while the user views the real world objects through the display while in the second user orientation, wherein the content is rotated from the first content orientation to the second content orientation relative to the first display area.

The present invention also provides an augmented reality viewer, comprising: a display that allows a user to view real world objects; a data channel for holding content; a surface region extraction module for determining a first surface region and a second surface region; a user orientation determination module for determining a first orientation of a user relative to the first and second surface regions; a surface region selection module that selects a preferred surface region between the first surface region and the second surface region based on a normal of the respective surface region being oriented more directly opposite to the first user orientation of the user; and a projector that displays the content to the user through the display within the boundaries of the preferred surface region while the user views the real world objects.

The invention further provides an augmented reality viewing method, which comprises the following steps: determining, by a processor, a first surface region and a second surface region; determining, by the processor, a first orientation of a user relative to the first surface region and the second surface region; selecting, by the processor, a preferred surface region between the first surface region and the second surface region based on a normal of the respective surface region being oriented more directly opposite to the first orientation of the user; and displaying, by the processor, content to the user through a display within the boundaries of the preferred surface region while the user views real world objects through the display while in the first orientation.

The invention also provides an augmented reality viewer comprising: an environment calculation unit for determining a first vector indicative of an orientation of a user; a vector calculator for calculating a second vector; a selection module to calculate a dot product of the first vector and the second vector; a data channel for holding content; a content rendering module to determine placement of the content based on the dot product; a display that allows the user to see real world objects; and a projector that displays the content to the user through the display while the user views the real world object through the display, the content being displayed based on the placement determined by the content rendering module.

The invention further provides an augmented reality viewing method, which comprises the following steps: determining, by a processor, a first vector indicative of an orientation of a user; calculating, by the processor, a second vector; calculating, by the processor, a dot product of the first vector and the second vector; determining, by the processor, a placement of content based on the dot product; and displaying, by the processor, the content to the user through a display while the user views real world objects through the display, the content being displayed based on the determined placement.

Drawings

The invention is further described, by way of example, with reference to the accompanying drawings, in which:

FIG. 1A is a block diagram of an augmented reality viewer used by a user to view real-world objects augmented with content from a computer;

FIG. 1B is a perspective view of an augmented reality viewer;

FIG. 2 is a perspective view illustrating a user wearing an augmented reality viewer while viewing two-dimensional content in a three-dimensional environment;

FIG. 3 is a perspective view showing a three-dimensional data map created with an augmented reality viewer;

FIG. 4 is a perspective view showing the determination of a user orientation vector, the extraction of a surface region, and the calculation of a surface region orientation vector;

FIG. 5 is a view similar to FIG. 4 showing placement of content renderings on one of the surface regions;

FIG. 6 is a view similar to FIG. 5 showing the variation of the user orientation vector;

FIG. 7 is a view similar to FIG. 6 showing the placement of content renderings due to changes in user orientation vectors;

FIG. 8 is a view similar to FIG. 7 showing the change in the user orientation vector due to the user's movement;

FIG. 9 is a view similar to FIG. 8 showing rotation of the content rendering due to a change in the user orientation vector;

FIG. 10 is a view similar to FIG. 9 showing the change in the user orientation vector due to the user's movement;

FIG. 11 is a view similar to FIG. 10 showing rotation of the content rendering due to a change in the user orientation vector;

FIG. 12 is a view similar to FIG. 11 showing the change in the user orientation vector due to the user's movement;

FIG. 13 is a view similar to FIG. 12 showing rotation of the content rendering due to a change in the user orientation vector;

FIG. 14 is a view similar to FIG. 13 showing the change in the user orientation vector due to the user looking up;

FIG. 15 is a view similar to FIG. 14 showing placement of a rendering of content on another surface area due to a change in the user orientation vector;

FIG. 16 is a flow chart showing the function of an algorithm to perform the method of the previous figures;

FIG. 17 is a perspective view illustrating a user wearing an augmented reality viewer while viewing three-dimensional content in a three-dimensional environment;

FIG. 18 is a top view of FIG. 17;

FIG. 19 is a view similar to FIG. 18, wherein the user has rotated in a clockwise direction about the display surface;

FIG. 20 is a view similar to FIG. 19, wherein the content has been rotated in a clockwise direction;

FIG. 21 is a perspective view showing a user viewing content on a vertical surface;

FIG. 22 is a view similar to FIG. 21, wherein the user has rotated in a counterclockwise direction;

FIG. 23 is a view similar to FIG. 22, wherein the content has been rotated in a counterclockwise direction; and

FIG. 24 is a block diagram of a machine in the form of a computer that may find application in the system of the present invention, according to one embodiment of the present invention.

Detailed Description

The terms "surface" and "surface area" are used herein to describe a two-dimensional area suitable for use as a display area. Aspects of the invention may also find application with other display areas, for example display areas on a three-dimensional surface or display areas representing slices within a three-dimensional volume.

FIG. 1A of the accompanying drawings shows an augmented reality viewer 12 with which a user views a direct view of a real world scene, comprising real world surfaces and objects 14, augmented by content 16 that is stored on, received by, or otherwise generated by a computer or computer network.

The augmented reality viewer 12 includes a display 18, a data channel 20, a content rendering module 22, a projector 24, a depth sensor 28, a position sensor such as an accelerometer 30, a camera 32, an environment computing unit 34, and a content placement and content orientation unit 36.

The data channel 20 may be connected to a storage device that holds the content 16 or may be connected to a service that provides the content 16 in real time. The content 16 may be, for example: still images, such as photographs; images that remain static and can be manipulated by a user over a period of time, such as web pages; text documents or other data displayed on a computer display; or a moving image such as a video or animation. The content 16 may be two-dimensional, three-dimensional, static, dynamic, text, images, video, and the like. Content 16 may include games, books, movies, video clips, advertisements, avatars, pictures, applications, web pages, decorations, sports games, replays, 3D models, or any other type of content as will be understood by those skilled in the art.

A content rendering module 22 is connected to the data channel 20 to receive the content 16 from the data channel 20. Content rendering module 22 converts content 16 into a form suitable for three-dimensional viewing. Various techniques exist to view a two-dimensional plane in three-dimensional space, or a three-dimensional volume in three-dimensional space, depending on the orientation of the user.

The projector 24 is connected to the content rendering module 22. The projector 24 converts the data generated by the content rendering module 22 into light and passes the light to the display 18. Light travels from the display 18 to the user's eye 26. Various techniques exist to provide a three-dimensional experience for a user. Each eye is provided with a different image, and the user perceives the objects in the image as being constructed in three dimensions. There are also techniques for a user to focus on objects located at a depth field that does not necessarily lie in the plane of the display 18 and is typically located some distance behind the display 18. One way in which virtual content can be made to appear to be at a certain depth is by diverging light rays and forming a curved wavefront in a way that mimics how light from a real physical object reaches the eye. The eye then focuses the diverging beam onto the retina by changing the shape of the anatomical lens in a process called accommodation. The different divergence angles represent different depths and are created using diffraction gratings on the exit pupil expander on the waveguide.

The display 18 is a transparent display. The display 18 allows a user to see the real world objects 14 through the display 18. Thus, the user perceives an augmented reality view 40 in which the real world objects 14 seen by the user in three dimensions are augmented by a three-dimensional image that is provided to the user from the projector 24 via the display 18.

The depth sensor 28 and camera 32 are mounted in positions to capture the real world objects 14. The depth sensor 28 typically detects electromagnetic waves in the infrared range, and the camera 32 detects electromagnetic waves in the visible spectrum. As shown more clearly in FIG. 1B, a plurality of cameras 32 may be mounted on the frame 13 of the augmented reality viewer 12 in world-facing positions. In a particular embodiment, four cameras 32 are mounted on the frame 13, two of which face forward and two of which face obliquely to the left and right. The fields of view of the multiple cameras 32 may overlap. The depth sensor 28 and cameras 32 are mounted in static positions relative to the frame 13 of the augmented reality viewer 12, so the center points of the images they capture always lie in the same forward direction relative to the augmented reality viewer 12.

The accelerometer 30 is mounted in a fixed position on the frame of the augmented reality viewer 12. The accelerometer 30 detects the direction of gravity and may be used to determine the orientation of the augmented reality viewer relative to the earth's gravitational field. The depth sensor 28 and the accelerometer 30, in combination with a head pose algorithm that relies on visual simultaneous localization and mapping ("SLAM") and inertial measurement unit ("IMU") inputs, allow the augmented reality viewer 12 to determine the position of the real world objects 14 relative to the direction of gravity and relative to the augmented reality viewer 12.

The camera 32 captures an image of the real world object 14 and further processing of the image on a continuous basis provides data indicative of the motion of the augmented reality viewer 12 relative to the real world object 14. Because the depth sensor 28, the world camera 32, and the accelerometer 30 continuously determine the position of the real world object 14 relative to gravity, the motion of the augmented reality viewer 12 relative to gravity and the mapped real environment may also be calculated.

In FIG. 1A, environment calculation unit 34 includes an environment mapping module 44, a surface extraction module 46, and a viewer orientation determination module 48. The environment mapping module 44 may receive input from one or more sensors, including, for example, the depth sensor 28, one or more world cameras 32, and the accelerometer 30, to determine the location of real world surfaces and objects 14. Surface extraction module 46 may receive data from environment mapping module 44 and determine flat surfaces in the environment. The viewer orientation determination module 48 is connected to the depth sensor 28, the cameras 32, and the accelerometer 30 and receives their inputs to determine a user orientation of the user relative to the real world objects 14 and the surfaces identified by the surface extraction module 46.

Content placement and content orientation unit 36 includes a surface vector calculator 50, a surface selection module 52, a content size determination module 54, a content vector calculator 56, and a content orientation selection module 58. The surface vector calculator 50, the surface selection module 52, and the content size determination module 54 may be sequentially connected to one another. Surface selection module 52 is coupled to viewer orientation determination module 48 and receives input from it. The content vector calculator 56 is connected to the data channel 20 so as to receive the content 16. The content orientation selection module 58 is connected to the content vector calculator 56 and the viewer orientation determination module 48 and receives input from both. The content size determination module 54 is coupled to the content orientation selection module 58 and receives input from it. The content rendering module 22 is connected to the content size determination module 54 and receives input from it.

Fig. 2 shows a user 60 wearing the augmented reality viewer 12 in a three-dimensional environment.

The vector 62 represents the direction of gravity detected by one or more sensors on the augmented reality viewer 12. Vector 64 represents a direction to the right from the perspective of user 60. User orientation vector 66 represents the user orientation, in this example, the forward direction in the middle of the field of view of user 60. User orientation vector 66 also points in the direction of the center point of the images captured by depth sensor 28 and camera 32 in fig. 1. FIG. 1B shows another coordinate system 63 that includes a vector 64 to the right, a user orientation vector 66, and a device vertical vector 67 that are orthogonal to each other.

By way of example, the three-dimensional environment includes a table 68 having a horizontal surface 70, surfaces 72 and 74, and an object 76 that provides an obstacle that may make the surfaces 72 and 74 unsuitable for placing content. For example, the object 76 disrupting the continuous surfaces 72 and 74 may include a picture frame, a mirror, a crack in a wall, a rough texture, a different colored area, a hole in a surface, a protrusion of a surface, or any other non-uniformity relative to the flat surfaces 72, 74. Conversely, surfaces 78 and 80 may be more suitable for placement of content because of their relatively large size and proximity to user 60. Depending on the type of content displayed, it may also be advantageous to find a surface having rectangular dimensions, although other shapes, such as square, triangular, circular, elliptical, or polygonal shapes may also be used.

FIG. 3 illustrates the functionality of the depth sensor 28, accelerometer 30, and environment mapping module 44 of FIG. 1. Depth sensor 28 captures the depth of all features including objects and surfaces in the three-dimensional environment. The environment mapping module 44 receives data directly or indirectly from one or more sensors on the augmented reality viewer 12. For example, depth sensor 28 and accelerometer 30 may provide inputs to environment mapping module 44 to map the depth of a three-dimensional environment into three dimensions.

Fig. 3 also illustrates the functionality of the camera 32 and the viewer orientation determination module 48. Camera 32 captures images of object 76 and surface 78. The viewer orientation determination module 48 receives the images from the camera 32 and processes the images to determine that the orientation of the augmented reality viewer 12 worn by the user 60 is represented by a user orientation vector 66.

Other methods of mapping a three-dimensional environment may be employed, for example, using one or more cameras located at fixed positions within a room. However, the integration of the depth sensor 28 and the environment mapping module 44 within the augmented reality viewer 12 provides a more mobile application.

Fig. 4 illustrates the function of the surface extraction module 46 in fig. 1. Surface extraction module 46 processes the three-dimensional map created in fig. 3 to determine if there are any surfaces suitable for placement and viewing of the content (two-dimensional content in this example). The surface extraction module 46 determines a horizontal surface area 82 and two vertical surface areas 84 and 86. Surface regions 82, 84, and 86 are not real surfaces, but rather electronically represent two-dimensional flat surfaces oriented in a three-dimensional environment. Surface regions 82, 84, and 86, which are representations of data, correspond to the real surfaces 70, 78, and 80, respectively, of fig. 2, which form part of the real-world object 14 of fig. 1.

FIG. 4 shows a cube 88 and a shadow 90 of the cube 88. These elements are included to assist the reader in tracking changes in the user orientation vector 66 and the movement of the user 60 and the augmented reality viewer 12 through the three-dimensional space of FIG. 2.

FIG. 4 also shows the function of the surface vector calculator 50 in FIG. 1A. The surface vector calculator 50 calculates a surface region orientation vector for each extracted surface of the mapped three-dimensional environment. For example, surface vector calculator 50 calculates a surface region orientation vector 92 that is perpendicular to the plane of surface region 82. Similarly, surface vector calculator 50 calculates a surface region orientation vector 94 that is perpendicular to surface region 84 and a surface region orientation vector 96 that is perpendicular to surface region 86.
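By way of illustration, a surface region orientation vector can be computed from any three non-collinear points known to lie on the extracted plane. The following is a minimal numpy sketch; the three-point interface is an assumption for illustration, not the actual interface of surface vector calculator 50.

```python
import numpy as np

def surface_region_orientation_vector(p0, p1, p2):
    """Unit vector perpendicular to the plane through three non-collinear
    points sampled from an extracted surface region (a sketch; the real
    module operates on the mapped surface geometry)."""
    n = np.cross(np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float),
                 np.asarray(p2, dtype=float) - np.asarray(p0, dtype=float))
    return n / np.linalg.norm(n)

# Example: three points on a horizontal table top give an upward normal,
# analogous to surface region orientation vector 92 for surface region 82.
print(surface_region_orientation_vector((0, 0, 1), (1, 0, 1), (0, 1, 1)))
# -> [0. 0. 1.]
```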

The surface on which the virtual content is displayed is selected by the surface selection module 52, which calculates the relationship between each surface and the user. Surface selection module 52 in FIG. 1A calculates the dot product of user orientation vector 66 and surface region orientation vector 92. The dot product of unit vectors a and b is given by the following equation:

a · b = |a||b| cos θ    [1]

where |a| = 1, |b| = 1, and θ is the angle between the unit vectors a and b.

User orientation vector 66 and surface region orientation vector 92 are orthogonal to each other, meaning that their dot product is zero.

Surface selection module 52 also calculates a dot product of user orientation vector 66 and surface region orientation vector 94. Because user orientation vector 66 and surface region orientation vector 94 are orthogonal, their dot product is zero.

Surface selection module 52 also calculates a dot product of user orientation vector 66 and surface region orientation vector 96. Because user orientation vector 66 and surface region orientation vector 96 are oriented at 180° to each other, their dot product is -1. Because the dot product involving surface region orientation vector 96 is the most negative of the three dot products, surface selection module 52 determines that surface region 86 is the preferred surface region among surface regions 82, 84, and 86 for displaying content. The more negative the dot product, the more directly the content will face the viewer. Because surface area 86 is a vertical surface area, content placement and content orientation unit 36 does not invoke content orientation selection module 58 of FIG. 1A.

The dot product is one of many surface properties that can be prioritized by the system, or by the needs of the virtual content, to select the best surface. For example, if a surface with a dot product of -1.0 is small and far from the user, it may not be preferred over a surface with a dot product of -0.8 that is larger and closer to the user. The system can also select a surface with good contrast characteristics when placing the content, so that the content is easier for the user to see.

Next, content sizing module 54 determines the appropriate size of the content to be displayed on surface area 86. The content has an optimal aspect ratio, for example 16 units along the near edge to 9 units along the side edge. The content sizing module 54 uses this ratio of near edge to side edge to determine the size and shape of the content, maintaining the aspect ratio at all viewing angles so as not to distort the content. Content sizing module 54 calculates the largest height and width at the optimal aspect ratio that will fit within surface area 86. In the given example, the distance between the left and right edges of surface area 86 determines the size of the content.
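The selection and sizing logic described above can be sketched as follows, assuming unit-length vectors. `select_preferred_surface` applies only the most-negative-dot-product rule (the additional size, distance, and contrast weighting mentioned above is omitted), and `fit_to_aspect` is a hypothetical helper that fits content into a region while preserving a 16:9 aspect ratio.

```python
import numpy as np

def select_preferred_surface(user_orientation, surface_normals):
    """Return the key of the surface whose orientation vector (normal)
    yields the most negative dot product with the user orientation
    vector, per equation [1]."""
    return min(surface_normals,
               key=lambda s: float(np.dot(user_orientation, surface_normals[s])))

def fit_to_aspect(region_width, region_height, aspect=16 / 9):
    """Largest width and height at the content's optimal aspect ratio
    (16:9 here) that fit inside the surface region, so the content is
    never distorted."""
    if region_width / region_height > aspect:
        return region_height * aspect, region_height
    return region_width, region_width / aspect

# Mirroring FIG. 4 (axes chosen arbitrarily for the example): the user
# orientation vector 66 directly faces surface region 86.
user_fwd = np.array([1.0, 0.0, 0.0])
normals = {
    "surface 82": np.array([0.0, 0.0, 1.0]),   # vector 92: dot = 0
    "surface 84": np.array([0.0, -1.0, 0.0]),  # vector 94: dot = 0
    "surface 86": np.array([-1.0, 0.0, 0.0]),  # vector 96: dot = -1
}
print(select_preferred_surface(user_fwd, normals))  # -> surface 86
print(fit_to_aspect(2.0, 1.5))                      # -> (2.0, 1.125)
```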

FIG. 5 illustrates the functionality of the content rendering module 22 and the projector 24 in FIG. 1A. The content rendering module 22 provides the content 16 to the projector 24 in its calculated orientation, based on the sizing from content sizing module 54 and the surface chosen by surface selection module 52. The user views content 16 as a rendering 98 placed in three-dimensional space on surface area 86 and coplanar with surface area 86. Content 16 is not rendered on surface areas 82 and 84. All other surface characteristics being equal, surface region 86 provides the best area for rendering 98 when compared with surface regions 82 and 84, due to the user orientation represented by user orientation vector 66. When the user orientation vector changes only to a small degree, the rendering 98 remains static on surface area 86. If viewer orientation determination module 48 in FIG. 1A senses that the user orientation vector has changed by more than a predetermined threshold, for example 5 degrees, the system automatically proceeds as described above to recalculate all dot products and, if necessary, reposition and resize the content rendered for display to the user. Alternatively, the system may routinely (e.g., every 15 seconds) recalculate all dot products and re-place the content as described above.
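That re-evaluation trigger might be sketched as follows, assuming unit-length orientation vectors and the 5-degree figure from the example:

```python
import numpy as np

def orientation_changed(old_vec, new_vec, threshold_deg=5.0):
    """True when the user orientation vector has rotated by more than the
    predetermined threshold (5 degrees in the example above), signalling
    that the dot products should be recalculated."""
    cos_angle = float(np.clip(np.dot(old_vec, new_vec), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) > threshold_deg
```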

Alternatively, the user may select area 86 for content even when the user changes their orientation.

In FIG. 6, the user 60 changes the tilt of his head. As a result, user orientation vector 66 rotates in a downward direction 100. The new user orientation is represented by a new user orientation vector 102. The cameras 32 in FIGS. 1A and 1B continuously capture images of the real world objects 14. Additional sensors, such as depth sensor 28 and accelerometer 30, may also continuously capture and provide updated information. The viewer orientation determination module 48 processes the images and other data captured by the sensors on the augmented reality viewer 12 to determine the relative motion of the real world objects 14 within the field of view of the cameras 32. Viewer orientation determination module 48 then processes this motion to determine the change in the user orientation vector from user orientation vector 66 in FIG. 5 to user orientation vector 102 in FIG. 6. The system typically selects the surface with the best dot product, although some tolerance may be allowed in the dot product so that jitter and processing are reduced. By way of example, the system may move the content to another surface only when that surface has a more optimal dot product and the more optimal dot product is at least 5% better than the dot product of the surface on which the currently displayed content is located.
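Such a tolerance can be expressed as a simple hysteresis test. In this sketch, the 5% margin mirrors the example in the text, and "better" means a more negative dot product:

```python
def should_move_content(current_dot, candidate_dot, margin=0.05):
    """Move the rendering to a new surface only when the candidate
    surface's dot product is at least `margin` better (more negative)
    than that of the surface currently displaying the content, which
    reduces jitter from small head movements."""
    improvement = current_dot - candidate_dot  # > 0 when candidate is more negative
    return improvement >= margin * abs(current_dot)

# With the content on a surface scoring -0.8, a candidate must score
# -0.84 or better before the content is moved.
print(should_move_content(-0.8, -0.83))  # False
print(should_move_content(-0.8, -0.85))  # True
```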

Assume that the user has not manually selected surface 86 for the content after changing their orientation. Surface selection module 52 again calculates three dot products, namely the dot product between user orientation vector 102 and surface region orientation vector 92, the dot product between user orientation vector 102 and surface region orientation vector 94, and the dot product between user orientation vector 102 and surface region orientation vector 96. Surface selection module 52 then determines which of the three dot products is the most negative. In this example, the dot product between user orientation vector 102 and surface region orientation vector 92 is the most negative. Surface selection module 52 determines that surface region 82 is the preferred surface because its associated dot product is more negative than the dot products for surface regions 84 and 86. The system may also take other factors into account, as described above.

Content placement and content orientation unit 36 in fig. 1A invokes content vector calculator 56 and content orientation selection module 58. After operation of the content orientation selection module 58, the content sizing module 54 is invoked again.

The functionality of the content vector calculator 56, the content orientation selection module 58 and the content size determination module 54 is better illustrated with the aid of fig. 7.

Fig. 7 shows content rendering module 22 and projector 24 creating a rendering 104 of content 16 that is within surface area 82 and coplanar with surface area 82. The rendering on surface area 86 is no longer displayed to user 60.

The rendering 104 has a far edge 106, a near edge 108, a right edge 110, and a left edge 112. Content vector calculator 56 in fig. 1A may calculate content orientation vector 114. The content orientation vector 114 extends from the proximal edge 108 to the distal edge 106 and is orthogonal to both the proximal edge 108 and the distal edge 106.

The calculations performed by the content vector calculator depend on the content provided on the data channel. Some content may already have a content orientation vector extending from the near edge to the far edge of the content, in which case the content vector calculator 56 simply identifies and isolates the content orientation vector in the content code. In other examples, a content orientation vector may be associated with the content, but the content vector calculator 56 may have to redirect it so that it extends from the near edge to the far edge of the content. In still other examples, no content orientation vector exists, and the content vector calculator 56 may generate one based on other data, such as image analysis or the placement of tools in the content.

Content orientation selection module 58 calculates a dot product between user orientation vector 102 and content orientation vector 114. The dot products are calculated for four scenarios, i.e., when content orientation vector 114 is oriented in the direction shown in FIG. 7, when content orientation vector 114 is oriented 90° to the right, when content orientation vector 114 is oriented at 180°, and when content orientation vector 114 is oriented 90° to the left. Content orientation selection module 58 then selects the highest of the four dot products and places rendering 104 such that content orientation vector 114 is aligned in the direction having the highest associated dot product. The near edge 108 is then positioned closer to the user 60 than the far edge 106, and the right edge 110 and left edge 112 are positioned to the right and to the left, respectively, from the orientation of the user 60 as represented by user orientation vector 102. Content 16 is thus oriented in a manner that is readily viewable by the user 60. For example, a photograph of a person's head and torso is displayed with the head farthest from the user 60 and the torso closest to the user 60, and a text document is displayed with the first line farthest from the user 60 and the last lines closest to the user 60.
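A minimal sketch of that four-way test follows; it assumes the content orientation vector lies in the surface plane and that all vectors are unit length, so rotation by multiples of 90° about the surface normal reduces to cross products:

```python
import numpy as np

def select_content_orientation(user_orientation, content_vec, surface_normal):
    """Evaluate the content orientation vector at 0°, 90° right, 180°, and
    90° left about the surface normal, and return the rotation whose dot
    product with the user orientation vector is most positive, which
    places the near edge of the content closest to the user."""
    n = np.asarray(surface_normal, dtype=float)
    v = np.asarray(content_vec, dtype=float)
    candidates = [v, np.cross(n, v), -v, np.cross(v, n)]
    return max(candidates, key=lambda c: float(np.dot(user_orientation, c)))

# User looking down at a table (normal +z) while facing +x: the winning
# candidate points along +x, so the near edge 108 ends up facing the user.
u = np.array([0.7, 0.0, -0.7])  # user orientation vector, tilted downward
print(select_content_orientation(u, np.array([0.0, 1.0, 0.0]),
                                 np.array([0.0, 0.0, 1.0])))
# -> [1. 0. 0.]
```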

Content sizing module 54 has determined the appropriate size of rendering 104, where right edge 110 and left edge 112 define the width of rendering 104 within surface region 82, and the distance between far edge 106 and near edge 108 is determined by the desired aspect ratio.

In FIG. 8, the user 60 has moved counterclockwise in direction 116 around the surface area 82. The user 60 has also rotated his body 90° counterclockwise. The user 60 has now established a new orientation represented by a new user orientation vector 118. The user's head is still tilted downward toward surface area 82, and surface areas 84 and 86 are now located to the rear and right of user 60, respectively.

Surface selection module 52 again calculates the dot products associated with each of surface region orientation vectors 92, 94, and 96. The dot product of user orientation vector 118 and surface area orientation vector 94 now becomes positive. The dot product between user orientation vector 118 and surface area orientation vector 96 is approximately zero. The dot product between user orientation vector 118 and surface area orientation vector 92 is most negative. Surface selection module 52 in FIG. 1A selects surface region 82 associated with surface region orientation vector 92 as the preferred surface for positioning the rendering of content 16.

Content orientation selection module 58 in FIG. 1A again calculates four dot products, each associated with a respective direction of the content orientation vector: the dot product between user orientation vector 118 and content orientation vector 114 in the direction shown in FIG. 8, and additionally the dot products between user orientation vector 118 and the content orientation vector rotated 90° to the right, 180°, and 90° to the left with respect to content orientation vector 114 in FIG. 8. Content orientation selection module 58 determines that the dot product associated with the content orientation vector rotated 90° to the left with respect to the direction of content orientation vector 114 shown in FIG. 8 is the most positive of the four dot products.

With content orientation vector 114 rotated 90° to the left, content size determination module 54 then determines the appropriate size for the rendering.

FIG. 9 illustrates how content rendering module 22 creates rendering 104 based on the user orientation represented by user orientation vector 118. The rendering 104 is rotated 90° counterclockwise so that the content orientation vector 114 points 90° to the left as compared with FIG. 8. The near edge 108 is now positioned closest to the user 60. Because of the available proportions of surface area 82, content sizing module 54 in FIG. 1A makes rendering 104 smaller than in FIG. 8. The rendering may snap between positions, rotate smoothly, or fade in and out, as selected by the content creator or by user preference.

In FIG. 10, the user 60 has moved further around the surface region 82 in a direction 120 and a new user orientation is established as represented by a new user orientation vector 122. The dot product between user orientation vector 122 and surface area orientation vector 96 is now positive. The dot product between user orientation vector 122 and surface area orientation vector 94 is approximately zero. The dot product between user orientation vector 122 and surface region orientation vector 92 is most negative. Thus, surface region 82 is a preferred surface for displaying content.

As shown in FIG. 10, the dot product between the user orientation vector 122 and the content orientation vector 114 is approximately zero. If content orientation vector 114 is rotated 90° clockwise, 180°, and 90° counterclockwise, the respective dot products differ in magnitude, with the dot product for content orientation vector 114 rotated 90° to the left being the most positive. Therefore, the rendering 104 should be rotated 90° counterclockwise and resized based on the proportions of surface area 82. FIG. 11 shows how rendering 104 rotates and is resized due to the change to user orientation vector 122 while remaining on surface area 82.

In FIG. 12, user 60 has moved in direction 124 around surface region 82 and established a new user orientation as represented by new user orientation vector 126. The dot product of user orientation vector 126 and surface area orientation vector 94 is now negative. However, the dot product between user orientation vector 126 and surface area orientation vector 92 is more negative. Thus, surface area 82 is a preferred surface area for creating a rendering of content 16.

As shown in FIG. 12, the dot product between user orientation vector 126 and content orientation vector 114 is approximately zero. If content orientation vector 114 is rotated 90° to the left, the dot product between user orientation vector 126 and content orientation vector 114 is positive. Therefore, the rendering 104 should be rotated counterclockwise while remaining on surface area 82. FIG. 13 illustrates the placement, orientation, and size of rendering 104 modified based on the new user orientation vector 126.

FIG. 14 shows a new user orientation vector 132 established when the user 60 rotates his head in an upward direction 134. The dot product between user orientation vector 132 and surface region orientation vector 92 is approximately zero. The dot product between user orientation vector 132 and surface region orientation vector 96 is also approximately zero. The dot product between user orientation vector 132 and surface region orientation vector 94 is -1, or near -1, and is thus the most negative of the three surface-based dot products. Surface area 84 is now the preferred surface area for placing the rendering of content 16. FIG. 15 illustrates a rendering 136 displayed to the user 60 on surface area 84. The rendering on surface area 82 is no longer displayed to user 60. On vertical surface areas such as surface area 84 and surface area 86, the near edge 108 is always at the bottom.

Fig. 16 shows an algorithm for performing the method as described above. At 150, the three-dimensional space is mapped as described with reference to fig. 3. At 152A, B and C, surface regions are extracted as described with reference to fig. 4. At 154A, B and C, a surface vector is calculated as described with reference to FIG. 4. At 156, a user orientation vector is determined as described with reference to fig. 1-4. At 158A, B and C, respective dot products between the user orientation vector and each respective surface area orientation vector are calculated as described with reference to fig. 4. At 160, a preferred surface area is determined as described with reference to fig. 4.

At 162, it is determined whether the preferred surface area is vertical. If the preferred surface area is not vertical, then at 164 the direction of the content orientation vector is determined relative to the far, near, right, and left edges of the content, as described with reference to FIG. 7. After 164, content vectors at 0°, 90° to the right, 180°, and 90° to the left are calculated at 166A, B, C, and D, as described with reference to FIG. 7. At 168A, B, C, and D, a dot product is calculated between the user orientation vector and each content orientation vector calculated at 166A, B, C, and D, respectively. At 170, a content orientation is selected, as described with reference to FIG. 7.

At 172, the content is sized as described with reference to fig. 5 and 7. At 174, the content is displayed as described with reference to fig. 5 and 7.

Following 174, a new user orientation vector may be determined at 156 as described with reference to fig. 6, 8, 9, 10, and 12. The process may then be repeated without again calculating the surface region orientation vector at 154A, B and C.
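Composing the sketches above, one pass through steps 156-174 of FIG. 16 might look like the following. Gravity is assumed to point along -z, so a vertical surface is one whose normal has no z component; the helper is a sketch, not the actual module interface:

```python
import numpy as np

def placement_pass(user_orientation, surface_normals, content_vec):
    """One pass of steps 156-170: pick the preferred surface region and,
    unless it is vertical, pick the content rotation. Returns the chosen
    surface key and the winning content orientation vector (None for a
    vertical surface, where the near edge is simply kept at the bottom)."""
    # Steps 158A-C and 160: the most negative dot product wins.
    preferred = min(
        surface_normals,
        key=lambda s: float(np.dot(user_orientation, surface_normals[s])),
    )
    n = surface_normals[preferred]
    # Step 162: a vertical surface has a horizontal normal (no z component
    # with gravity along -z), so content orientation selection is skipped.
    if abs(n[2]) < 1e-6:
        return preferred, None
    # Steps 164-170: evaluate the content orientation vector at 0°, 90°,
    # 180°, and 270° about the normal; the most positive dot product wins.
    v = np.asarray(content_vec, dtype=float)
    candidates = [v, np.cross(n, v), -v, np.cross(v, n)]
    best = max(candidates, key=lambda c: float(np.dot(user_orientation, c)))
    return preferred, best
```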

Referring to FIGS. 17 and 18, an embodiment is shown in perspective and top views, respectively, in which three-dimensional virtual content 180 is rendered on a mapping surface 182 within an environment 184 for viewing by the user 60. In such embodiments, the principles described above are used to position the three-dimensional virtual content 180 so that it can be viewed as easily and naturally as possible by the user 60.

The user orientation vector 66 is the same as the forward vector of the device 12 and is therefore referred to as the "device forward vector 66". Determining the surface on which to place the three-dimensional virtual content 180 may depend, at least in part, on the dot product relationship between the device forward vector 66 and the surface normal vector 186 of a mapping surface in the environment 184. For optimal viewing of the three-dimensional virtual content 180, one of many dot product relationships may be considered optimal, depending on the content. For example, if the content is to be viewed from the side, it is desirable for the dot product relationship between device forward vector 66 and surface normal vector 186 to be close to zero, indicating that the user's view direction is nearly parallel to mapping surface 182. In such an embodiment, the three-dimensional virtual content 180 placed on the mapping surface 182 would be viewed by the user from the side. Alternatively, as described herein with respect to other embodiments, a dot product relationship at or near -1 may be more desirable if the three-dimensional virtual content 180 is intended to be viewed from above. The ideal dot product relationship may be an attribute set by the creator of the three-dimensional virtual content 180, may be selected by the user according to preference, or may be determined by the augmented reality viewing system based on the type of content to be displayed.

Once the placement surface is determined, by the system or by the user's choice, the orientation of the three-dimensional virtual content 180 on the mapping surface 182 is determined relative to the user. In the illustrated example, the three-dimensional virtual content 180 is provided with a content orientation vector 188, which may be used to align the three-dimensional virtual content 180 with a reference vector of the user's device. Here, the three-dimensional virtual content 180 is a character's head, and the near edge of the character is where its mouth is located. The far edge of the character is typically not rendered for viewing by the user 60 because it is on the side of the character that is not visible to the user. The content orientation vector 188 is aligned parallel to the near edge of the character. The content orientation vector 188 may be used to align the three-dimensional virtual content 180 with the augmented reality viewer 12 such that the dot product between the content orientation vector 188 and the device right vector 64 is equal to or close to 1, indicating that the two vectors point in substantially the same direction.

Referring to FIGS. 19 and 20, an example of three-dimensional content reorientation based on movement of the user is shown. In FIG. 19, the user 60 has moved some distance and angle clockwise around the table relative to FIG. 18. As a result, the dot product relationship between content orientation vector 188 and device right vector 64 is less than 1. In some embodiments, such a change in position may not require reorientation of the three-dimensional virtual content 180. For example, the content creator, the user, or software within the augmented reality viewer 12 may specify that reorientation of the three-dimensional virtual content 180 is required only when the dot product between the content orientation vector 188 and the device reference vector falls below a predetermined threshold. A larger or smaller threshold tolerance may be set depending on the type of content to be displayed.
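That trigger might be sketched as follows; the 0.9 threshold is a placeholder for whatever tolerance the content creator, user, or viewer software sets:

```python
import numpy as np

def needs_reorientation(content_orientation_vec, device_reference_vec,
                        threshold=0.9):
    """True when the dot product between the content orientation vector
    (e.g. vector 188) and the device reference vector (e.g. device right
    vector 64) falls below the predetermined threshold, so the content
    should be re-rendered to restore a dot product at or near 1."""
    return float(np.dot(content_orientation_vec, device_reference_vec)) < threshold
```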

If a change in the position of the user 60 from the position of FIG. 18 to the position of FIG. 19 triggers reorientation of the three-dimensional virtual content 180, the orientation module may re-render the three-dimensional virtual content 180 such that the content orientation vector 188 is aligned with the device right vector 64, as shown in FIG. 20, such that the dot product of the two vectors is equal to or close to 1. As discussed above, reorientation of the three-dimensional virtual content 180 may also allow resizing of the content; however, the content may also remain the same size such that as the user moves through the environment, the content appears to reorient only about an axis normal to the mapping surface 182.

Referring to FIGS. 21, 22, and 23, an example of the reorientation of virtual content 196 on a vertical surface 198 is shown. In FIG. 21, the user 60 is shown viewing virtual content 196 located on a vertically oriented surface 198 in the environment. Virtual content 196 may have at least one of a content right orientation vector 200 and a content vertical orientation vector 202, which may be used to measure alignment relative to device right vector 64 and device vertical vector 67, respectively. In FIG. 21, alignment between one of the content orientation vectors (200, 202) and the corresponding device orientation vector (64, 67) results in a dot product value of approximately 1. As discussed above, a dot product value close to 1 indicates close alignment between the two vectors being compared.

If the user 60 changes position, such as by lying on a couch as shown in FIG. 22, without the virtual content 196 being reoriented, the alignment between the content orientation vectors (200, 202) and the corresponding device orientation vectors (64, 67) may be near zero, indicating a less optimal alignment between the user 60 and the virtual content 196 than that shown in FIG. 21. If this dot product relationship of zero falls short of the desired dot product relationship for the virtual content and the user's relative orientation, the virtual content 196 may be re-rendered in a new orientation, as shown in FIG. 23, such that the dot product relationship is within the predetermined threshold. In some embodiments, re-rendering the virtual content 196 in the new orientation may re-establish the optimal dot product relationship between a content orientation vector (200, 202) and the corresponding device orientation vector (64, 67).

FIG. 24 shows a schematic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The exemplary computer system 900 includes a processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both); main memory 904 (e.g., read only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.); and static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), which communicate with each other via bus 908.

The computer system 900 may also include a disk drive unit 916 and a network interface device 920.

The disk drive unit 916 includes a machine-readable medium 922 on which is stored one or more sets of instructions 924 (e.g., software) embodying any one or more of the methodologies or functions described herein. The software may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900, the main memory 904 and the processor 902 also constituting machine-readable media.

The software may further be transmitted or received over a network 928 via the network interface device 920.

While the machine-readable medium 922 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present invention. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.

While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since modifications may occur to those ordinarily skilled in the art.
