Non-planar computing display

Document No.: 1722489 | Published: 2019-12-17

Abstract: This technology, "non-planar computing display," was created by John D. Perreault and Patrick Llull on 2018-05-15. The near-eye display system [100] includes one or more non-planar display panels [110, 112] and a lenslet array [124] to display a near-eye light field frame [120]. The near-eye display system also includes a rendering component [104] that renders an array of elemental images in the near-eye light field frame based on a stereoscopic focal volume [316] associated with a set of display geometry data for the one or more non-planar display panels, such that objects within the stereoscopic focal volume are perceived by a user's eyes as being in focus.

1. In a near-eye display system [100], a method comprising:

receiving display geometry data for a non-planar display [102], the non-planar display comprising one or more display panels [110, 112] of the near-eye display system;

rendering an array [120] of elemental images [122] at locations within a near-eye light field frame based on a stereoscopic focal volume [316] associated with the display geometry data, wherein the non-planar display renders objects within the stereoscopic focal volume in focus; and

transmitting the near-eye light field frame for display at the one or more display panels of the non-planar display of the near-eye display system.

2. The method of claim 1, wherein receiving display geometry data comprises:

receiving data indicating that the non-planar display includes a plurality of display panel segments [118a, 118b].

3. The method of claim 2, wherein determining the stereoscopic focal volume further comprises:

determining a focal volume for each of the plurality of display panel segments, wherein each of the plurality of display panel segments presents objects within its corresponding focal volume as being in focus; and

determining the stereoscopic focal volume based at least in part on an overlap between the focal volumes of each of the plurality of display panel segments.

4. The method of any preceding claim, wherein receiving display geometry data comprises:

receiving data indicative of a curvature geometry of the non-planar display.

5. The method of any preceding claim, wherein receiving display geometry data comprises:

receiving data indicative of a set of display boundary data representing a location of a display boundary of the one or more display panels.

6. The method of claim 5, further comprising:

determining a pose of an eye of a user using an eye tracking component [108] of the near-eye display system; and

modifying rendering of the array of elemental images, based on the pose of the user's eyes and the set of display boundary data, to prevent the user's eyes from perceiving the display boundary of the one or more display panels.

7. The method of claim 6, wherein determining the pose of the user's eyes comprises:

capturing an image of the user's eyes using an imaging camera disposed between the non-planar display and the user's eyes.

8. The method of any of claims 1 to 7, further comprising:

rotating a position of a virtual plane [402] within the stereoscopic focal volume by shifting a display position of the array of elemental images.

9. The method of any of claims 1 to 7, further comprising:

rotating a position of a virtual plane [402] within the stereoscopic focal volume by changing a fold angle between two or more display panels.

10. A near-eye display system [100], comprising:

a non-planar display [102] comprising one or more display panels [110, 112] to display a near-eye light field frame comprising an array [120] of elemental images [122];

a rendering component [104] for rendering the array of elemental images in the near-eye light field frame based on a stereoscopic focal volume [316] associated with a set of display geometry data of the non-planar display, such that objects within the stereoscopic focal volume are perceived by a user's eyes as being in focus; and

a lenslet array [124] for presenting the near-eye light field frame to an eye of the user.

11. The near-eye display system of claim 10, further comprising a processor that determines the stereoscopic focal volume by:

receiving data indicating that the non-planar display includes a plurality of display panel segments;

determining a focal volume for each of the plurality of display panel segments, wherein each of the plurality of display panel segments presents objects within its corresponding focal volume as being in focus; and

determining the stereoscopic focal volume based at least in part on an overlap between the focal volumes of each of the plurality of display panel segments.

12. The near-eye display system of claim 10, further comprising a processor that determines the stereoscopic focal volume by:

receiving data indicative of a set of display boundary data representing a location of a display boundary of the one or more display panels;

determining, using an eye tracking component of the near-eye display system, a pose of an eye of a user; and

modifying rendering of the array of elemental images, based on the pose of the user's eyes and the set of display boundary data, to prevent the user's eyes from perceiving the display boundary of the one or more display panels.

13. The near-eye display system of claim 10 or claim 11, further comprising:

an eye tracking component to track a pose of the user's eye, wherein the eye tracking component includes a set of one or more infrared (IR) illuminators to project light onto the user's eye and an imaging camera disposed between the lenslet array [124] and the non-planar display and oriented toward the user's eye through the lenslet array.

14. The near-eye display system of claim 10, wherein the non-planar display comprises:

a single continuous display panel comprising different lateral portions having different degrees of curvature.

15. The near-eye display system of claim 10, wherein the non-planar display comprises:

a plurality of flat panel displays positioned in a non-planar orientation relative to one another.

16. A rendering system, comprising:

at least one processor [136, 138, 140];

an input for receiving data indicative of a set of display geometry data for a non-planar display [102], the non-planar display comprising one or more display panels [110, 112] of a near-eye display system [100]; and

a storage component [142] for storing a set of executable instructions [144, 146] configured to manipulate the at least one processor to render an array [120] of elemental images [122] at locations within a near-eye light field frame based on a stereoscopic focal volume [316] associated with the set of display geometry data, wherein the non-planar display renders objects within the stereoscopic focal volume in focus.

17. The rendering system of claim 16, wherein the set of executable instructions is further configured to determine the stereoscopic focal volume by:

receiving data indicative of a set of display boundary data representing a location of a display boundary of the one or more display panels;

determining a pose of an eye of a user using an eye tracking component [108] of the near-eye display system; and

modifying rendering of the array of elemental images, based on the pose of the user's eyes and the set of display boundary data, to prevent the user's eyes from perceiving the display boundary of the one or more display panels.

18. A rendering system according to claim 16 or claim 17, wherein the set of display geometry data comprises data indicating that the non-planar display comprises a plurality of display panel segments [118a, 118b].

19. The rendering system of claim 18, wherein the set of executable instructions is further configured to determine the stereoscopic focal volume by:

determining a focal volume for each of the plurality of display panel segments, wherein each of the plurality of display panel segments presents objects within its corresponding focal volume as being in focus; and

determining the stereoscopic focal volume based at least in part on an overlap between the focal volumes of each of the plurality of display panel segments.

20. The rendering system of any of claims 16 to 19, wherein the set of executable instructions is further configured to render the array of elemental images by:

rotating a position of a virtual plane [402] within the stereoscopic focal volume by shifting a display position of the array of elemental images.

Background

Immersive Virtual Reality (VR) and Augmented Reality (AR) systems typically utilize Head Mounted Displays (HMDs) and other near-eye display systems to present stereoscopic images to a user, giving the user the sense of being present in a three-dimensional (3D) scene. Conventional HMDs may utilize a near-eye light field display or other computational display to provide 3D graphics. Typically, a near-eye light field display employs one or more display panels and a number of lenses, pinholes, or other optical elements overlying the one or more display panels. The rendering system renders an array of elemental images, each representing an image or view of an object or scene from a corresponding perspective or virtual camera position.

Drawings

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.

Fig. 1 is a diagram illustrating a near-eye display system incorporating a non-planar display for generating a stereoscopic focal volume according to some embodiments.

Fig. 2 illustrates a perspective view of the near-eye display system of fig. 1 incorporating a non-planar display for generating a stereoscopic focal volume, in accordance with some embodiments.

Fig. 3 is a diagram illustrating an example of a non-planar computing display in the near-eye display system of fig. 1 for generating a stereoscopic focal volume according to some embodiments.

Fig. 4 is a diagram illustrating an example of virtual plane rotation in the near-eye display system of fig. 1, in accordance with some embodiments.

Fig. 5 is a diagram illustrating an additional example of a non-planar computing display in the near-eye display system of fig. 1 for generating a stereoscopic focal volume according to some embodiments.

Fig. 6 is a diagram illustrating yet another example of a non-planar computing display in the near-eye display system of fig. 1 for generating a stereoscopic focal volume according to some embodiments.

Fig. 7 is a diagram illustrating an example of hiding a gap in a non-planar computing display according to some embodiments.

Fig. 8 is a flow diagram illustrating a method for generating a stereoscopic focal volume according to some embodiments.

Detailed Description

Figs. 1-8 illustrate example methods and systems for incorporating a non-planar display and generating a stereoscopic focal volume in a near-eye display system. In at least one embodiment, the near-eye display system employs a non-planar computing display to display near-eye light field frames of imagery to a user, thereby providing the user with an immersive VR or AR experience. Each near-eye light field frame is composed of an array of elemental images, each representing a view of an object or scene from a different corresponding viewpoint.

Many conventional HMD devices implement either a single flat display that is divided into two separate display areas (one for the user's left eye and one for the right eye) or a pair of independent flat displays, one for each eye. Such devices also typically include a single lens for each eye in order to focus the entire image of the display into the user's eye. However, the use of a flat panel display and a single lens for each eye typically results in a bulky HMD form factor, which in turn results in a high moment of inertia when in use. In addition, flat panel displays and lenses typically limit the total lateral field of view (FOV) to 110 degrees or less. The bulk and limited field of view of these conventional HMD devices can severely diminish the user's sense of presence in the displayed imagery, inhibiting the sensation of immersion in the presented scene.

To provide an improved HMD form factor without reducing the field of view (FOV) and/or depth of field, in at least one embodiment, the near-eye display systems described herein utilize a non-planar computing display configuration in which objects within a stereoscopic focal volume associated with the non-planar computing display are perceived as being in focus. By way of contrast, conventional near-eye display systems typically have a planar HMD form factor that requires the left and right image planes (i.e., the image planes for the left-eye and right-eye display panels of the HMD) to be coplanar in order for an object to be perceived as in focus; maintaining a sufficient FOV with such coplanar image planes requires large display panels, resulting in a "diving mask" product housing. The near-eye display systems described herein instead achieve a wrap-around, non-planar form factor by pairing one or more display panels with a lenslet array having a large depth of field, so that each panel renders a focal volume in three-dimensional (3D) space. The region where these focal volumes overlap represents a stereoscopic focal volume in which both of the user's eyes can perceive objects as being in focus, without reducing the FOV of the near-eye display system.

Fig. 1 illustrates a near-eye display system 100 incorporating a non-planar display for generating a stereoscopic focal volume in accordance with at least one embodiment. In some embodiments, the near-eye display system 100 may be an HMD device with a form factor that mounts the HMD to the user's face. In the depicted example, the near-eye display system 100 includes a computing display subsystem 102, a rendering component 104, and one or more eye tracking components, such as one or both of an eye tracking component 106 for tracking the user's left eye and an eye tracking component 108 for tracking the user's right eye. The computing display subsystem 102 is a non-planar display and includes a left-eye display 110 and a right-eye display 112 mounted in a device 114 (e.g., goggles, glasses, etc.) that positions the displays 110, 112 in front of the user's left and right eyes, respectively.

Each of the displays 110, 112 includes at least one display panel 118 to display a series or succession of near-eye light field frames (hereinafter, for ease of reference, "light field frames"), each of which includes an array 120 of elemental images 122. For ease of reference, the array 120 of elemental images 122 may also be referred to herein as a light field frame 120. Each of the displays 110, 112 also includes an array 124 of lenslets 126 (also commonly referred to as "microlenses") overlying the display panel 118. In general, the number of lenslets 126 in the lenslet array 124 is equal to the number of elemental images 122 in the array 120, but in other embodiments the number of lenslets 126 may be fewer or greater than the number of elemental images 122. Note that while the example of fig. 1 shows a 10x4 array 120 of elemental images 122 and a corresponding 10x4 lenslet array 124 for ease of illustration, in a typical implementation the number of elemental images 122 in a light field frame 120 and the number of lenslets 126 in the lenslet array 124 are generally much higher. Further, in some embodiments, a separate display panel 118 is implemented for each of the displays 110, 112, while in other embodiments the left-eye display 110 and the right-eye display 112 share a single display panel 118, with the left half of the display panel 118 used for the left-eye display 110 and the right half used for the right-eye display 112.

Cross-sectional view 128 of fig. 1 depicts a cross-section along line A-A of the lenslet array 124 overlying the display panel 118, such that the lenslet array 124 overlies the display surface 130 of each display panel 118 and is thereby disposed between the display surface 130 and the corresponding eye 132 of the user. In this configuration, each lenslet 126 focuses a corresponding region of the display surface 130 onto the pupil 134 of the eye, with each such region at least partially overlapping one or more adjacent regions. Thus, when the array 120 of elemental images 122 is displayed at the display surface 130 of the display panel 118 and then viewed by the eye 132 through the lenslet array 124, the user perceives the array 120 of elemental images 122 as a single image of a scene. When this process is performed in parallel for both the left and right eyes of the user, with appropriate parallax implemented therebetween, the result is the presentation of autostereoscopic three-dimensional (3D) imagery to the user.

Further, as shown in cross-sectional view 128, the display panel 118 of the left-eye display 110 and the display panel 118 of the right-eye display 112 are positioned in a non-planar orientation relative to each other (as opposed to a conventional VR/AR display, in which the left-eye and right-eye image planes presented by the display panels are coplanar). In other words, the computing display subsystem 102 (including the display panels 118) is a non-planar display. As shown in cross-sectional view 128, the display panel 118 of the left-eye display 110 and the display panel 118 of the right-eye display 112 may each be planar; however, the two display panels 118 are not coplanar. Instead, the display panels 118 are angled relative to each other so that they partially wrap around the wearer's face in use. Although each of the left-eye display 110 and the right-eye display 112 is depicted in this example as having a single display panel 118, in other embodiments each of the displays 110, 112 may include any number "N" of display panel segments (each also referred to herein as a "display panel tile"). For example, in some embodiments (such as described below with respect to fig. 5), each of the displays 110, 112 includes two display panel segments. Those skilled in the art will recognize that as the number of display panel segments increases, the display surface of the displays 110, 112 more closely approximates a curved surface. In some embodiments, each display panel 118 may itself be non-planar (i.e., curved).

In other embodiments, rather than having N display panel segments, each of the displays 110, 112 includes a single continuous display panel having different lateral portions with different degrees of curvature (or substantially no curvature), different orientations, or combinations thereof, such that each portion represents a separate logical portion or "tile" of the display 110, 112 (such as described below with respect to figs. 2 and 5). That is, although each of the left-eye display 110 and the right-eye display 112 includes a set of pixel rows that extend across the entire lateral extent of the display panel and are driven by the same display driver hardware, the display panel may be logically organized into a set of adjacent lateral portions based on the change in curvature of the display panel in each portion or on the orientation of that portion relative to the user's corresponding eye. The curved left-eye and right-eye displays 110, 112 may be implemented using any of a variety of display technologies capable of providing a varying curvature or orientation configuration to the display panel, such as thin-film flexible organic light-emitting diode (OLED) displays that can be bent to a desired curvature and cross-sectional orientation and held in place by a support frame. Further, the lenslet array 124 includes a plurality of lenslets 126, each focused on a corresponding portion of the associated display panel. That is, the optical axis of each lenslet 126 intersects the display surface 130 of the corresponding display panel segment (referred to herein as a "display panel tile"), and in some embodiments the optical axis is perpendicular to the face of that display panel tile.

As also shown in fig. 1, the rendering component 104 includes a set of one or more processors, such as the illustrated Central Processing Unit (CPU) 136 and Graphics Processing Units (GPUs) 138, 140, and one or more storage components, such as system memory 142, for storing software programs or other executable instructions that are accessed and executed by the processors 136, 138, 140 so as to manipulate one or more of the processors 136, 138, 140 to perform the various tasks described herein. Such software programs include, for example, a rendering program 144 that includes executable instructions for a light field frame rendering process, and an eye tracking program 146 that includes executable instructions for a stereoscopic focal volume generation process, each as described below.

In operation, the rendering component 104 receives rendering information 148 from a local or remote content source 150, where the rendering information 148 represents graphics data, video data, or other data representing an object or scene that is the subject of imagery to be rendered and displayed at the display subsystem 102. Executing the rendering program 144, the CPU 136 uses the rendering information 148 to send drawing instructions to the GPUs 138, 140, which in turn use the drawing instructions to render, in parallel, a series of light field frames 151 for display at the left-eye display 110 and a series of light field frames 153 for display at the right-eye display 112, using any of a variety of well-known VR/AR computational/light field rendering processes. As part of this rendering process, the CPU 136 may receive pose information 152 from an Inertial Measurement Unit (IMU) 154, whereby the pose information 152 represents a pose of the display subsystem 102, and control the rendering of one or more pairs of light field frames 151, 153 to reflect the viewpoint of the object or scene from that pose.

To this end, the eye tracking components 106, 108 each may include one or more infrared (IR) light sources (referred to herein as "IR illuminators") to illuminate the corresponding eye with IR light, one or more imaging cameras to capture the IR light reflected back from the corresponding eye as a corresponding eye image (eye image information 156), one or more mirrors, waveguides, beam splitters, and the like to direct the reflected IR light to the imaging cameras, and one or more processors to execute the eye tracking program 146 so as to determine a current position, a current orientation, or both (singularly or collectively referred to herein as a "pose") of the corresponding eye from the captured eye image. Any of a variety of well-known eye tracking apparatuses and techniques may be employed as the eye tracking components 106, 108 to track one or both eyes of the user.

In at least one embodiment, the near-eye display system 100 may determine the eye pose as a past eye pose, a current eye pose, or a predicted (future) eye pose, or a combination thereof. In particular, a prediction of a future eye pose may provide improved performance or response time, and any of a variety of eye-movement prediction algorithms may be implemented to predict a future eye pose. Moreover, in some instances, the eye tracking components 106, 108 may utilize scene information (e.g., the locations of faces or saliency heuristics within the imagery to be rendered) as input in the prediction of a future gaze of the user's eyes for eye pose calculation. As such, the term "eye pose," as used herein, may refer to a previous, current, or predicted eye pose, or some combination thereof.

As described in more detail herein, in at least one embodiment the near-eye display system 100 generates a stereoscopic focal volume by determining the volume in which objects appear in focus to both the left and right eyes 132 of the user. By using display panel segments having different curvatures and/or orientations relative to the user's eyes, the near-eye display system 100 may be implemented as an HMD with a body that conforms more closely to the user's head, thereby reducing its moment of inertia, while also providing a wider lateral field of view and a more aesthetically pleasing appearance.

Fig. 2 illustrates a perspective view of a non-planar computing display, such as one used for generating a stereoscopic focal volume in the near-eye display system 100, in accordance with some embodiments. Fig. 2 shows the near-eye display system 100 as an HMD device 200 with an "eyeglasses" form factor, in which the HMD device 200 is mounted to the user's face via temples 202, 204 that rest on or behind the user's ears when worn. However, in other embodiments, the HMD device 200 may be implemented with a "face mask" form factor, in which the HMD device 200 is mounted to the user's face via one or more straps, harnesses, or other attachment devices. Further, although omitted for ease of illustration, the HMD device 200 may include one or more face gaskets to seal against the user's face for limiting the intrusion of ambient light. In the depicted example, the HMD device 200 has a housing 206 such that display devices mounted on or within the housing 206 (e.g., the left-eye and right-eye displays 110, 112 of fig. 1) are disposed in front of the user's eyes. As further described herein, a processor coupled to or embedded within the housing 206 generates AR/VR content for display at the display devices to immerse the user in an AR/VR environment associated with the AR/VR content.

Fig. 3 is a diagram illustrating a cross-sectional view 300 of a non-planar computing display, such as one used in the near-eye display system 100, taken along line A-A of fig. 1, in accordance with some embodiments. As shown, when the near-eye display system 100 is worn, the display panels 118 and the lenslets 126 are substantially symmetric about a medial plane 302 corresponding to the midsagittal plane of the user. In addition, the display panels 118 are non-planar with respect to each other and with respect to plane 304. Plane 304 is generally parallel to the coronal plane of the user and further corresponds generally to the plane in which the display of a conventional HMD would be located.

As shown in this figure, the user's eyes 132 are directed at a point 306 within a virtual image 308, the virtual image 308 including a plurality of objects (not shown) intended to be perceived by the eyes 132 at different depths. The depth of field 310 due to the lenslets 126 (i.e., the distance between the nearest and farthest objects that the eye 132 will perceive as being in focus) results in a volume within the virtual image 308 in which objects appear in focus.

In particular, when rendered for display by display panel 118 of left-eye display 110, objects within left-eye focal volume 312 will appear to be in focus to the user's left eye 132. Similarly, objects within the right-eye focal volume 314 will appear to be in focus to the user's right eye 132 when rendered for display by the display panel 118 of the right-eye display 112. In various embodiments, the depth of field of the left-eye focal volume 312 and the right-eye focal volume 314 may be determined using the following equation:

dPhi = 2c / (d*f) (1)

where dPhi represents the depth of field in diopters, c represents the display pixel size in meters, d represents the lenslet diameter in meters, and f represents the lenslet focal length in meters.
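As a quick numeric check of equation (1), the following minimal Python sketch simply evaluates the formula; the example pixel size, lenslet diameter, and focal length are illustrative assumptions, not values from this disclosure.

```python
def depth_of_field_diopters(pixel_size_m: float,
                            lenslet_diameter_m: float,
                            focal_length_m: float) -> float:
    """Equation (1): dPhi = 2c / (d * f), in diopters (1/m)."""
    return 2.0 * pixel_size_m / (lenslet_diameter_m * focal_length_m)

# Assumed example values: 8 um pixels, 1 mm lenslets, 5 mm focal length.
dphi = depth_of_field_diopters(8e-6, 1e-3, 5e-3)
print(f"depth of field: {dphi:.2f} diopters")  # -> 3.20 diopters
```

Note that, per equation (1), larger pixels widen the depth of field, while larger or longer-focal-length lenslets narrow it.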

As shown in fig. 3, the left-eye focal volume 312 and the right-eye focal volume 314 overlap at a stereoscopic focal volume 316. Thus, when the light field frames are rendered based on the non-planar configuration of the display panels 118, objects within the stereoscopic focal volume 316 appear in focus to both the user's left and right eyes 132.

In some embodiments, a set of display geometry data is provided to the rendering component 104, describing, for example, the physical dimensions and geometry of the non-planar display (i.e., the computing display subsystem 102). For example, the set of display geometry data may include the physical dimensions and geometry of the one or more display panels 118 and the lenslets 126, such as the width of the display panel 118, the viewing distance between the eye 132 and the panel 118, the angle between the panel 118 and plane 304, the angle between the panel and the medial plane 302, and the like. Those skilled in the art will recognize that the positions of the left-eye focal volume 312 and the right-eye focal volume 314 in 3D space are determined by the dimensions/geometry of the lenslets 126 and the display panels 118. However, in various embodiments, the position of a virtual plane within the left-eye and right-eye focal volumes 312, 314, and thus within the stereoscopic focal volume 316, may be rotated within the focal volume.
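The exact contents of the display geometry data are implementation-specific; the following dataclass is a hypothetical sketch of the fields named in this paragraph (all field names are invented for illustration, not this system's API):

```python
from dataclasses import dataclass

@dataclass
class DisplayGeometry:
    """Hypothetical container for the display geometry data described above."""
    panel_width_m: float        # width of the display panel 118
    viewing_distance_m: float   # distance between the eye 132 and the panel
    panel_to_plane_rad: float   # angle between the panel and plane 304
    panel_to_medial_rad: float  # angle between the panel and medial plane 302
    num_segments: int = 1       # >1 for segmented displays (e.g., 118a, 118b)
```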

For example, fig. 4 depicts a cross-sectional view 400 of a non-planar computing display with a rotated virtual plane 402. In various embodiments, the virtual plane 402 may be rotated by shifting the display positions of the elemental images by an amount dx given by the following equation:

dx = n*Φ*d*f + (n*Φ*d)² * f * tan(θ) (2)

where n ∈ [-N/2, N/2] denotes the lenslet index (N being the number of lenslets), Φ = 1/z denotes the distance to the virtual plane in diopters, and θ denotes the tilt of the virtual plane 402 with respect to the lenslets (i.e., the angle between the display lenslet tangent and the rendered virtual plane 402). Additionally, in some embodiments, the fold angle (not shown) between the display panels is adjustable, and sensors are used to provide θ to the rendering component 104 so that it can determine the shift in the display positions of the elemental images. Note that eye tracking is not necessary for such embodiments, but may optionally be used to determine other viewing parameters (e.g., exit pupil distance and pupil position) if the fold angle changes or if the near-eye display system 100 moves relative to the user's eyes 132.
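For concreteness, a minimal Python sketch of equation (2) follows; variable names mirror the equation, and the example values (virtual plane at 1 m, 1 mm lenslets, 5 mm focal length, 10 degree tilt) are assumptions for illustration only:

```python
import math

def elemental_image_shift(n: int, z_m: float, d_m: float, f_m: float,
                          theta_rad: float) -> float:
    """Equation (2): dx = n*Phi*d*f + (n*Phi*d)^2 * f * tan(theta)."""
    phi = 1.0 / z_m  # Phi = 1/z, distance to the virtual plane in diopters
    return (n * phi * d_m * f_m
            + (n * phi * d_m) ** 2 * f_m * math.tan(theta_rad))

# Shifts across the lenslet indices n in [-N/2, N/2] for N = 10 (assumed).
N = 10
shifts = [elemental_image_shift(n, z_m=1.0, d_m=1e-3, f_m=5e-3,
                                theta_rad=math.radians(10.0))
          for n in range(-N // 2, N // 2 + 1)]
```

The linear term recenters each elemental image for the virtual plane distance, while the quadratic term grows toward the edges of the lenslet array, producing the tilt of the rendered plane.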

Fig. 5 depicts a cross-sectional view 500 of another implementation of a non-planar computing display for generating a stereoscopic focal volume, according to some embodiments. As previously discussed with respect to fig. 1, in some embodiments each of the left-eye display 110 and the right-eye display 112 may include any number "N" of display panel segments. For example, as shown in view 500, each of the displays 110, 112 includes two display panel segments (i.e., display panel segments 118a and 118b). As shown in this figure, the user's eyes 132 are directed at a point 502 within a virtual image 504, the virtual image 504 including a plurality of objects (not shown) intended to be perceived by the eyes 132 at different depths. The depth of field 506 due to the lenslets 126 (i.e., the distance between the nearest and farthest objects that will be perceived by the eye 132 as being in focus) results in a volume within the virtual image 504 in which objects appear in focus.

Similar to the example embodiment discussed with respect to fig. 3, each display panel segment (e.g., display panel segments 118a and 118b of the displays 110, 112) is associated with a corresponding focal volume within which objects appear in focus when presented for display to the respective eye 132. As shown in fig. 5, the focal volumes overlap at a stereoscopic focal volume 508. Thus, when the light field frames are rendered based on the non-planar configuration of the display panel segments 118a, 118b, objects within the stereoscopic focal volume 508 appear in focus to both the user's left and right eyes 132. Further, similar to the example embodiment discussed with respect to fig. 4, based on equation (2) above, a virtual plane (not shown) in the stereoscopic focal volume 508 may be rotated by shifting the display positions of the elemental images along the respective display segments 118a and 118b.

Fig. 6 depicts a cross-sectional view 600 of yet another implementation of a non-planar computing display for generating a stereoscopic focal volume, according to some embodiments. Similar to the example embodiment of fig. 3, the lenslets 126 are substantially symmetric about a medial plane 602 corresponding to the midsagittal plane of the user when the near-eye display system 100 is worn. Here the display is a curved display panel 604 and is generally non-planar relative to the plane in which a conventional HMD display would be located. Whereas fig. 5 has N display panel segments and N overlapping focal volumes (i.e., 4 segments and 4 overlapping focal volumes), a polygon of N segments approaches a curved/circular surface as N becomes large. As shown in fig. 6, there is then a correspondingly large number of overlapping rectangular focal volumes 606, each rotated by a small angle relative to its neighbors, which forms the elliptical overlap of focal volumes shown. However, similar to the example embodiment of fig. 5, the stereoscopic focal volume 608 is still limited by the maximum display panel tilt at the edges and is generally diamond shaped. A virtual plane may be rendered at any plane within the focal volumes 606, and stereoscopic fusion can be achieved within the stereoscopic focal volume 608. The virtual plane may also be displaced along the z-axis. Furthermore, the same rendering equation (2) discussed above may be used to change the rendering of the image to accommodate the curved display panel 604, where θ is determined by the local slope (or angle) of the display panel 604 relative to the rendered virtual plane (i.e., the local spatial derivative, or tangent, of the curved surface).
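As a sketch of that last point (an assumed numerical approach for illustration, not this system's code), θ for a curved panel can be obtained as the local tangent angle of a sampled panel profile:

```python
import numpy as np

def local_tilt(x: np.ndarray, y: np.ndarray,
               virtual_plane_angle_rad: float = 0.0) -> np.ndarray:
    """Local tilt theta: panel tangent angle minus virtual plane angle."""
    slope = np.gradient(y, x)  # local spatial derivative dy/dx along the panel
    return np.arctan(slope) - virtual_plane_angle_rad

# Assumed example: a circular arc of radius 0.1 m spanning +/- 30 degrees.
a = np.linspace(-np.pi / 6, np.pi / 6, 64)
x, y = 0.1 * np.sin(a), 0.1 * (1.0 - np.cos(a))
theta = local_tilt(x, y)  # equals `a` for this arc, as expected
```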

Those skilled in the art will recognize that, due to the segmentation of the display panel, in some embodiments only a portion of the total surface area of the left-eye display 110 and the right-eye display 112 is visible to the user's eyes. To illustrate, fig. 7 depicts a cross-sectional view 700 of a computing display utilizing lenslets and display panel segments, such as utilized in the near-eye display systems discussed above. As shown in this view 700, there is a gap 702 between the display panel segments 118a and 118b due to, for example, the display bezel/outer frame of the housing that holds the display panel segments 118a, 118b.

Each lenslet 126 of the lenslet array 124 acts as a separate "projector" onto the eye 132, with each "projector" overlapping one or more adjacent projectors in forming a composite virtual image 704 from the array of elemental images displayed at the display panel 118. To illustrate, lenslet 126-2 projects a corresponding elemental image (represented by region 706) from region 710 of the virtual image 704, and lenslet 126-4 projects a corresponding elemental image (represented by region 708) from region 712 of the virtual image 704. As shown in fig. 7, regions 710 and 712 overlap in sub-region 714. Accordingly, image data from this overlapping sub-region 714 may be used in rendering the elemental images displayed by the display panel segments 118a, 118b so as to hide the gap 702 between them, such that the user's eye 132 does not detect the presence of the gap 702 in the perceived composite virtual image 704.

In some embodiments, if the display boundary were simply projected into the plane of the virtual image 704, there would be dark spots where the display bezel/edges are located. To compensate, the intensity of the replicated (i.e., overlapping) pixels within the surrounding elemental images is scaled by (N+1)/N, where N is the number of elemental images sharing the blocked pixel area; with each of the N visible replicas carrying an extra 1/N of intensity, the summed brightness matches what the occluded contribution would have provided. That is, the intensities of the elemental images corresponding to regions 706 and 708 may be adjusted to compensate for the gap 702.
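A minimal sketch of this compensation, under the stated assumption that the blocked region is replicated in N surrounding elemental images (array names and shapes are illustrative):

```python
import numpy as np

def compensate_gap(replicas: np.ndarray) -> np.ndarray:
    """Scale N overlapping replicas (shape (N, H, W)) by (N + 1) / N."""
    n = replicas.shape[0]
    return replicas * (n + 1) / n

# Example: 3 elemental images share the blocked pixel area behind the bezel.
region = np.random.rand(3, 8, 8)
boosted = compensate_gap(region)  # each replica carries 4/3 of its intensity
```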

Further, in various embodiments, the eye tracking components 106, 108 of fig. 1 may track changes in the pose of the user's eyes 132 and provide pose information to the rendering component 104 to account for any gaze in which the user's eyes 132 are directed at a portion of the left-eye and right-eye displays 110, 112 containing a gap between display panel segments.

Fig. 8 illustrates a method 800 of operation of the near-eye display system 100 with a non-planar computing display for rendering light field frames based on a stereoscopic focal volume, according to some embodiments. For ease of understanding, the method 800 is described below with frequent reference to the example scenarios illustrated in figs. 1-7. Method 800 illustrates one iteration of the process for rendering and displaying a light field frame for one of the left-eye display 110 or the right-eye display 112, and thus the illustrated process is repeatedly performed in parallel for each of the displays 110, 112 to generate and display a different stream or sequence of light field frames for each eye at different points in time, thereby providing a 3D autostereoscopic VR or AR experience to the user.

For a light field frame to be generated and displayed, method 800 starts at block 802, whereby the rendering component 104 identifies the image content to be displayed to the corresponding eye of the user as a light field frame. In at least one embodiment, the rendering component 104 receives IMU information 152 representing data from various pose-related sensors, such as gyroscopes, accelerometers, magnetometers, Global Positioning System (GPS) sensors, and the like, and determines from the IMU information 152 a pose of the device 114 (e.g., an HMD) used to mount the displays 110, 112 near the user's eyes. From this pose, the CPU 136, executing the rendering program 144, can determine a corresponding current viewpoint of the subject scene or object, and from this viewpoint and a graphical and spatial description of the scene or object provided as rendering information 148, determine the imagery to be rendered for the pose.

At block 804, the CPU 136 receives the set of display geometry data for the non-planar display of the near-eye display system 100. In at least one embodiment, the set of display geometry data includes data representing the geometric configuration (e.g., optical axis angles) of the one or more display panels relative to the user's eyes, such as shown in fig. 3. In other embodiments, the set of display geometry data includes data indicating that the one or more non-planar display panels are segmented and include a plurality of display panel segments, such as shown in fig. 5. If the near-eye display system 100 includes multiple display panel segments, the set of display geometry data may further include data representing the locations of the display boundaries/bezels of the display panel segments. In yet other embodiments, the set of display geometry data includes data indicative of a curvature geometry of the non-planar display, such as shown in figs. 5 and 6.

At optional block 806, the CPU 136, executing the eye tracking program 146, determines the pose of the corresponding eye of the user. As described herein, the pose of the eye may be determined using any of a variety of eye tracking techniques. Typically, such techniques include the capture of one or more images of IR light reflected from the pupil and cornea of the eye. The eye tracking program 146 then may manipulate the CPU 136 or the GPUs 138, 140 to analyze the images to determine the pose of the eye based on the corresponding positions of one or both of the pupil reflection or corneal reflection. For example, in some embodiments, monocular eye tracking is performed to obtain region-of-interest information and to calculate where in the rendered scene the user's eyes are attempting to fixate (e.g., which objects in the scene the eyes are gazing at). By performing monocular eye tracking for each eye, the relative angular displacement between the two eyes is measured to determine vergence, and the fixation point is then calculated based on the determined vergence (e.g., differential eye tracking). In other embodiments, binocular eye tracking is performed to determine fixation independent of the rendered scene content, and/or the orientation of the pupil relative to the cornea is determined, which in turn may be used to determine the direction of the eye (i.e., the gaze direction of the eye). Note that while block 806 is illustrated in fig. 8 as following blocks 802 and 804, the process of block 806 may be performed before, during, or after the processes of blocks 802 and 804.
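As an illustration of the vergence step (assumed geometry for illustration, not this system's algorithm), the relative angular displacement between the two tracked gaze directions yields a fixation depth given an interpupillary distance:

```python
import math

def fixation_depth_m(left_gaze_rad: float, right_gaze_rad: float,
                     ipd_m: float = 0.063) -> float:
    """Depth of the fixation point from vergence: z = (ipd/2) / tan(v/2)."""
    vergence = left_gaze_rad - right_gaze_rad  # relative angular displacement
    return (ipd_m / 2.0) / math.tan(vergence / 2.0)

# Assumed example: each eye rotated 1 degree inward -> roughly 1.8 m depth.
z = fixation_depth_m(math.radians(1.0), math.radians(-1.0))
```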

With the geometry of the non-planar display (and, in some embodiments, the pose of the user's eyes) determined, at block 808 the rendering program 144 manipulates the CPU 136 to instruct the corresponding one of the GPUs 138, 140 to render a light field frame with array 120, using the image content identified at block 802, whereby the light field frame includes an array of elemental images. In some embodiments, as part of this process, the CPU 136 computes a stereoscopic focal volume within the image content (i.e., virtual image) to be displayed. In particular, the CPU 136 computes the stereoscopic focal volume such that the non-planar display renders objects within the stereoscopic focal volume in focus. For example, in the context of fig. 3, objects within the left-eye focal volume 312 will appear in focus to the user's left eye 132 when rendered for display by the display panel 118 of the left-eye display 110. Similarly, objects within the right-eye focal volume 314 will appear in focus to the user's right eye 132 when rendered for display by the display panel 118 of the right-eye display 112. Accordingly, the CPU 136 determines that the left-eye focal volume 312 and the right-eye focal volume 314 overlap at the stereoscopic focal volume 316 and renders the light field frames such that objects within the stereoscopic focal volume 316 will appear in focus to both the user's left and right eyes 132.
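Conceptually, the overlap computation at block 808 reduces to intersecting the two eyes' focal ranges; the sketch below is a one-dimensional version (an assumption for illustration, with depths expressed in diopters), whereas the actual stereoscopic focal volume is a 3D region:

```python
def stereo_focal_interval(left: tuple[float, float],
                          right: tuple[float, float]) -> tuple[float, float] | None:
    """Intersect per-eye focal ranges given as (near, far) in diopters.

    Larger diopter values are nearer; returns None if there is no overlap.
    """
    near = min(left[0], right[0])
    far = max(left[1], right[1])
    return (near, far) if near > far else None

# Assumed example ranges: overlap spans 2.5 D (0.4 m) to 1.5 D (0.67 m).
print(stereo_focal_interval((3.0, 1.5), (2.5, 1.0)))  # -> (2.5, 1.5)
```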

In some embodiments, such as in the context of fig. 5, each of the left-eye display 110 and the right-eye display 112 may include any number "N" of display panel segments. For example, as shown in view 500, each of the displays 110, 112 includes two display panel segments (i.e., display panel segments 118a and 118b). Each display panel segment is associated with a corresponding focal volume within which objects will appear in focus when presented for display to the respective eye 132. Accordingly, the CPU 136 determines that these focal volumes overlap at the stereoscopic focal volume 508 and renders the light field frames such that objects within the stereoscopic focal volume 508 will appear in focus to both the user's left and right eyes 132.

Moreover, in other embodiments, such as in the context of fig. 7, there is a gap 702 between the display panel segments 118a and 118b due to, for example, the display bezel/outer frame of the housing that holds the display panel segments 118a, 118b. Accordingly, the CPU 136 provides the GPU with data representing the locations of the display boundaries/bezels of the display panel segments (from block 804) and the pose of the user's eye (from block 806), and instructs the GPU to render the light field frame such that the elemental images are rendered to hide the gap 702 between the display panel segments 118a and 118b, so that the user's eye 132 does not detect the presence of the gap 702 in the perceived composite virtual image 704. That is, the intensities of the elemental images corresponding to regions 706 and 708 may be adjusted to compensate for, and prevent perception of, the gap 702 caused by the display boundaries of the one or more display panels. The GPU then renders the light field frame at block 810 and provides the light field frame to the corresponding one of the computing displays 110, 112 for display to the user's eye 132.

A benefit of the non-planar computing display configurations shown in figs. 1-7 is that they provide an "eyeglasses" form factor that sits closer to the eyes while preserving a large field of view. That is, the embodiments described herein allow a more compact, lower-weight form factor (relative to the "diving mask" form factor of conventional HMDs). By using display panel segments with different curvatures and/or orientations relative to the user's eyes, an HMD device may be constructed with a form factor that keeps a larger portion of the HMD device closer to the user's head, thereby reducing its moment of inertia while providing a wider lateral field of view and a more aesthetically pleasing appearance. In addition, relative to conventional HMD devices that use one or more planar display panels, the use of a non-planar display having a cross-section with different curvatures and angles allows an HMD device that better conforms to the user's head, while still providing more uniform color and brightness throughout the field of view and a simplified display and optical assembly configuration.

In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer-readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as flash memory, a cache, random access memory (RAM), or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer-readable storage medium may be in source code, assembly language code, object code, or another instruction format that is interpreted or otherwise executable by one or more processors.

A computer-readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but are not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer-readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).

Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.

benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
