Multiple focal planes with varying positions

Document No.: 958984  Publication date: 2020-10-30

Description: This technology, "Multiple focal planes with varying positions," was devised by Seppo T. Valli and Pekka K. Siltanen on 2019-01-16. Abstract: Systems and methods for displaying depth images (depth plus texture) using multiple focal planes are described. In one embodiment, a depth image (which may be a frame of a depth video, consisting of a video-plus-depth sequence) is mapped to a first set of image planes. The depth image (or a subsequent frame of the depth video) is mapped to a second set of image planes. Each image plane in the first set and the second set has a specified depth, and the first set and the second set differ in at least one depth. Each image plane in the first set is displayed at its respective depth, and subsequently each image plane in the second set is displayed at its respective depth. The display of the first set and the second set may be cyclically alternated at a rate high enough to avoid perceptible flicker.

1. A method of displaying an image having corresponding depth information, the method comprising:

mapping the image to a first set of at least two image planes;

mapping the image to a second set of at least two image planes, wherein each image plane in the first set and the second set has a specified depth, and wherein the first set and the second set differ in at least one depth;

displaying each of the image planes in the first set at a respective depth of that image plane;

after displaying all of the image planes in the first set, displaying each of the image planes in the second set at a respective depth of that image plane.

2. The method of claim 1, wherein mapping the image to the first set of image planes is performed using a first set of blending functions and mapping the image to the second set of image planes is performed using a second set of blending functions.

3. The method of any one of claims 1-2, wherein at least one of the blending functions has the form 0.5 + 0.5·cos(Ax + B) for at least a selected portion of its domain.

4. The method of claim 3, wherein the at least one blending function has a value of zero or one outside the selected portion.

5. The method of any of claims 1-4, wherein all of the image planes in the first set are displayed simultaneously, and subsequently, all of the image planes in the second set are displayed simultaneously.

6. The method of any of claims 1-4, wherein no two image planes are displayed simultaneously.

7. The method of any of claims 1-6, wherein all depths of image planes in the second set are different from all depths of image planes in the first set.

8. The method of any of claims 1-7, wherein at least one depth of an image plane in the second set is between two consecutive depths of image planes in the first set.

9. The method of any of claims 1-8, wherein the displaying of the image planes in the first set and the displaying of the image planes in the second set are performed cyclically.

10. The method of claim 9, wherein the cyclical display of the image planes in the first set and the cyclical display of the image planes in the second set are performed at a rate of at least 30 Hz.

11. The method of any of claims 1-10, wherein displaying the image plane at its respective depth comprises:

displaying an image on a display assembly; and

adjusting optics of a display device to form a virtual image of the display assembly at the respective depth.

12. The method of any of claims 1-10, wherein the image is displayed on a display device, the display device including a display assembly and a plurality of electronically controllable eyepiece lenses at different locations along an optical path from the display assembly, and wherein displaying the image plane at respective depths of the image plane comprises:

displaying an image on the display assembly; and

controlling the eyepiece lenses to form a virtual image of the display assembly at the respective depth.

13. The method of claim 12, wherein controlling the eyepiece lenses comprises: controlling a selected one of the eyepiece lenses to have a predetermined positive optical power, and controlling the remaining eyepiece lenses to have substantially zero optical power.

14. The method according to any of claims 12-13, wherein the display device is an optical see-through display comprising an objective lens and at least one inverse lens system.

15. A display device for displaying images with corresponding depth information, the device comprising:

an image plane formation module operative to map an image to a first set of at least two image planes and a second set of at least two image planes, wherein each image plane in the first set and the second set has a specified depth, and wherein the first set and the second set differ in at least one depth;

display optics operative to display image planes at respective associated depths; and

a multiplexer operative to cause the display optics to: (i) display each of the image planes in the first set at a respective depth of that image plane, and (ii) after displaying all of the image planes in the first set, display each of the image planes in the second set at a respective depth of that image plane.

Background

A multi-focal plane (MFP) display creates a stack of focal planes, composing a 3D scene from layers along the viewer's visual axis. A view of the 3D scene is formed by projecting to the user those pixels of the focal planes, at different depths and spatial angles, that are visible from the user's eyepoint.

The multiple focal planes may be implemented by spatially multiplexing a stack of 2D displays, or by sequentially switching the focal lengths of individual 2D displays in a time-multiplexed manner using a high-speed varifocal element (VFE) while spatially rendering the visible portions of the respective multi-focal image frames. An example of an MFP near-eye display is shown in fig. 2. Fig. 2 shows the display viewed by the left eye 202 and the right eye 204 of the user. A respective eyepiece 206, 208 is provided for each eye. The eyepieces focus the images formed by the respective image stacks 210, 212. The image stacks form different images at different distances from the eyepieces. To the user's eyes, the images appear to originate from different virtual image planes, such as image planes 214, 216, 218.

Multi-focal plane (MFP) displays are an attractive way to support natural accommodation in rendered 3D scenes. For various technical reasons, near-eye displays (NED) are typically only capable of supporting a relatively small number of MFPs, thereby limiting image quality. In many existing approaches, the positions of the focal planes are fixed, creating a permanent average error distribution that favors information at or near the focal planes over information between focal planes.

With respect to any viewing direction from the eyepoint, the multiple focal planes are primarily complementary rather than additive. However, when viewing views compiled from discrete focal planes, additive effects can smooth the visible quantization steps and contours.

Note that each image in the stack of (virtual) focal planes is rendered at a different depth, and the eye blurs those focal plane images that are not being fixated. This means that an MFP display does not need to simulate blur based on eye tracking (used to capture the accommodation depth), which is a considerable benefit of this approach.

Box filter.

An approximation of the focal plane images may be formed by slicing the depth map corresponding to each image into narrow depth regions (slices) and projecting the corresponding pixels onto (flat) focal planes in the middle of each depth region.
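As a rough illustration (not taken from this disclosure), the sketch below slices a normalized depth map into equal-width regions and copies each texture pixel onto the focal plane at the center of its region; the function name, NumPy usage, and linear plane spacing are illustrative assumptions.

```python
import numpy as np

def box_filter_planes(texture, depth, num_planes):
    """Slice a texture-plus-depth image into focal-plane images (box filter).

    texture: (H, W) luminance array; depth: (H, W) array normalized to [0, 1].
    Each pixel is copied onto exactly one plane, the one whose depth region
    (slice) contains the pixel's depth value.
    """
    edges = np.linspace(0.0, 1.0, num_planes + 1)   # boundaries of depth regions
    centers = 0.5 * (edges[:-1] + edges[1:])        # depth of each focal plane
    planes = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i == num_planes - 1:
            mask = (depth >= lo) & (depth <= hi)    # include the far boundary
        else:
            mask = (depth >= lo) & (depth < hi)
        planes.append(np.where(mask, texture, 0.0))
    return centers, planes
```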

When viewing the stack of focal planes, a composite view is formed from the information on the different focal planes visible from the viewer's eyepoint. Slicing into depth regions results in MFPs that complement each other in the spatial (x-y) direction, rather than adding up along the depth (z) dimension.

As a result, smooth 3D surfaces are quantized in the depth dimension, as shown in fig. 3.

The box filter separates the image information in a strict way in the spatial (x-y) and depth (z) dimensions. Since only a discrete number of focal planes are used, the depth dimension is heavily quantized, resulting in a low precision of rendering 3D shapes.

A larger number of focal planes means better depth accuracy but is also more difficult to achieve. For technical reasons, the number of focal planes is in fact limited to only a few.

The accuracy of the focal plane is generally optimal for pixels at the same depth as the focal plane. Between the focal planes, the accuracy is low, resulting in blurring of the displayed image content, even when depth-based blending is used to interpolate depth values and reduce the depth quantization effect in the rendered view.

Tent filter.

So-called depth-based blending may be used to reduce quantization errors in the depth dimension, which may otherwise be visible to the human eye. Depth blending involves weighting the pixels used to construct each focal plane using a depth-based function.

One known depth blending function is the so-called tent filter, which is a piecewise linear, sawtooth-like blending function (fig. 4B). The corresponding function for a box filter is shown in fig. 4A. In addition to these blending filters, other variations have been proposed, including those described in Xinda Hu, "Development of the Depth-Fused Multi-Focal-Plane Display Technology," PhD dissertation, University of Arizona (2014).
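For illustration, a minimal sketch of a tent (piecewise linear) blending weight is given below, assuming equally spaced planes on a normalized depth scale; the function name and the example values are not from this disclosure.

```python
import numpy as np

def tent_weight(depth, plane_depth, spacing):
    """Tent-filter blending weight of one focal plane at the given depth.

    The weight is 1 at the plane's own depth, falls linearly to 0 at the
    neighboring plane depths, and is 0 beyond them; for equally spaced
    planes the weights of the two nearest planes sum to 1 at every depth.
    """
    return np.clip(1.0 - np.abs(depth - plane_depth) / spacing, 0.0, 1.0)

plane_depths = np.linspace(0.0, 1.0, 4)         # four planes on a 0..1 depth scale
spacing = plane_depths[1] - plane_depths[0]
weights = [tent_weight(0.4, p, spacing) for p in plane_depths]
assert abs(sum(weights) - 1.0) < 1e-9           # total luminance is preserved
```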

The number of focal planes.

For a person with very sharp vision, up to twenty-eight focal planes are considered sufficient to cover a depth range from infinity to 4 diopters (25 cm), corresponding to a 1/7-diopter spacing of the focal planes. For a person with average vision, fourteen focal planes may be sufficient.

Thus, for a high quality depth perception, the ideal number of focal planes is rather high. On the other hand, displaying a large number of focal planes is limited for various technical reasons. However, intelligent generation and positioning of focal planes allows for fewer focal planes to reconstruct a high fidelity view.

When only a few focal planes are used, it is beneficial to position them well in view of the characteristics of the human eye. Since the accuracy of depth perception decreases inversely with distance from the viewer, higher accuracy is typically obtained by placing the depth planes at regular intervals on a dioptric scale. In addition, the apparent number of focal planes can be increased by depth blending.

The number of focal planes in practical applications is limited for various technical reasons. In MFP displays based on stacked physical displays, increasing the number of displays can cause problems in transparency (due to display material properties) and increase the thickness of the display structure. In implementations based on time multiplexing (of physical or virtual displays), increasing the number of multiplexed displays reduces the brightness of each MFP (by reducing their on-off ratio), again limiting the maximum number of MFPs.

Regardless of the implementation, the number of focal planes is actually limited to be relatively small (e.g., 4 to 6). The exemplary embodiments described herein allow for good quality depth imaging even when a relatively small number of focal planes are available.

Disclosure of Invention

In an exemplary embodiment, a multi-focal plane (MFP) display is provided in which the positions of the focal planes vary over time. The depth blending functions and focal plane positions vary in a time-dependent but content-independent manner. As an example, a class of sinusoidal depth blending functions may be used that provides good spatial separation and supports easy formation of the focal planes at varying positions.

Some embodiments provide a method for rendering a video sequence of focal plane images to a multi-focal plane display using time-varying focal plane positions. In one such method, a set of focal planes is selected for displaying a video sequence on a multi-focal plane display, wherein each focal plane in the set of focal planes is associated with a focal length. The set of focal planes is divided into a plurality of subsets, and the subsets are selected one at a time, in a cycle, to render consecutive frames of the video sequence. A temporal sequence of video frames with associated depth information is rendered using the multi-focal plane display. To render the temporal sequence of video frames, a method comprising the following steps may be performed. For each video frame, one of the subsets is selected based on a circular ordering of the subsets. A set of blending functions is selected or generated based on the selected subset. A focal plane image is generated for each of the focal planes in the selected subset based on the video frame, the associated depth information of the video frame, and the set of blending functions. The video frame is displayed on the multi-focal-plane display using the generated focal plane images. The generated focal plane images may be displayed in a time-multiplexed manner.
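In outline, the per-frame loop might look like the sketch below. The `subsets` structure (lists of (focal distance, blending function) pairs) and the `show_plane` display interface are hypothetical stand-ins for implementation details not specified here.

```python
from itertools import cycle

def render_sequence(frames, depth_maps, subsets, show_plane):
    """Render video-plus-depth frames with cyclically alternating MFP subsets.

    frames and depth_maps are sequences of (H, W) arrays; subsets is a list of
    focal-plane subsets, each a list of (focal_distance, weight_fn) pairs.
    """
    subset_cycle = cycle(subsets)                      # circular ordering of subsets
    for texture, depth in zip(frames, depth_maps):
        subset = next(subset_cycle)                    # e.g. five planes, then four
        for distance, weight_fn in subset:
            plane_image = texture * weight_fn(depth)   # depth-blended focal plane
            show_plane(plane_image, distance)          # time-multiplexed display
```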

In some such embodiments, the display of the video frame includes providing the generated focal plane images and associated focal lengths to a multi-focal plane display.

In some embodiments, the step of displaying the video frame comprises adjusting the variable focus lens to allow each of the generated focal plane images to be displayed, wherein the adjustment is based on the focal distance associated with the respective focal plane of the selected subset of focal planes.

In some embodiments, a method of displaying an image with corresponding depth information is provided. The image is mapped to a first set of at least two image planes and a second set of at least two image planes. Each image plane in the first set and the second set has a specified depth, and the first set and the second set differ in at least one depth. Each image plane in the first set is displayed at its respective depth. After all image planes in the first set are displayed, each image plane in the second set is displayed at its respective depth. In some embodiments, all depths of the image planes in the second set are different from all depths of the image planes in the first set. The depths of the image planes in the second set may be at least partially interleaved with the depths of the image planes in the first set. In some embodiments, all image planes in the first set are displayed simultaneously, and subsequently all image planes in the second set are displayed simultaneously.

In some embodiments, a method of displaying a video comprising a sequence of image frames with corresponding depth information is provided. A first one of the frames is mapped to a first set of at least two image planes. A subsequent second one of the frames is mapped to a second set of at least two image planes, wherein each image plane in the first set and the second set has a specified depth, and wherein the first set and the second set differ in at least one depth. Each image plane in the first set is displayed at its respective depth. After all image planes in the first set are displayed, each image plane in the second set is displayed at its respective depth. In some embodiments, odd-numbered frames are mapped to a first set of image planes and even-numbered frames are mapped to a second set of image planes. In some embodiments, all depths of the image planes in the second set are different from all depths of the image planes in the first set. In some embodiments, the depths of the image planes in the second set are at least partially interleaved with the depths of the image planes in the first set.

Drawings

Fig. 1 is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used as a display driver in accordance with an embodiment.

Fig. 2 is a schematic diagram of a multi-focal near-eye display.

Fig. 3 shows a schematic example of the quantization of depth when describing a view by five focal planes. The arrows show the viewing direction.

FIGS. 4A-4B are schematic diagrams of basic depth blending functions for four MFPs: unblended depth slicing, called a box filter (fig. 4A), and a linear filter, called a tent filter (fig. 4B).

Fig. 5 shows the steps performed in the process of time multiplexing MFPs in shifted positions.

Fig. 6 schematically shows an example of two sinusoidal functions that produce the weights for five focal planes.

Fig. 7 shows two sinusoidal functions which are segmented to produce the weights for three focal planes (illustrated by different line patterns).

Fig. 8 shows sinusoidal functions that generate the weights of five MFPs (shown by different line patterns).

9A-9B illustrate the alternation between five focal planes (FIG. 9A) and four focal planes (FIG. 9B), such that the four MFPs are located between the five.

FIG. 10 shows an example of alternating between stacks of five and four MFPs as a function of time.

11A-11B show alternating sets of MFPs in shifted positions.

Fig. 12 shows an example of interleaving two stacks of MFPs as a function of time.

Fig. 13 shows an example of a binocular display operating to display two sets of interlaced MFPs in opposite phase to each eye.

FIG. 14 illustrates a set of B-spline basis functions used as a mixing function in some embodiments.

Fig. 15A-15C are schematic block diagrams illustrating multiplexing of MFPs in shifted positions.

Fig. 16 is a schematic diagram of an optical configuration for generating two virtual MFP planes.

Fig. 17 is a message flow diagram illustrating a method performed in an exemplary embodiment.

Fig. 18A is a schematic cross-sectional view of an optical see-through (OST) display capable of displaying a single focal plane.

Fig. 18B is a schematic cross-sectional view of an optical see-through (OST) display capable of displaying multiple focal planes.

19A-19C are schematic cross-sectional views of three different configurations of optical see-through displays for displaying images at three different focal planes, according to some embodiments.

Fig. 20 is a schematic cross-sectional view of optics of an optical see-through display showing a perceived eyepoint shift, in accordance with some embodiments.

Fig. 21 is a schematic cross-sectional view of optics of an optical see-through display showing reduced or zero eyepoint offset, in accordance with some embodiments.

Fig. 22 is a schematic cross-sectional view of optics of an optical see-through display according to some embodiments.

Fig. 23 is a schematic cross-sectional view of optics of an optical see-through display according to some embodiments.

Example apparatus for implementation of embodiments

Fig. 1 is a system diagram illustrating an example Wireless Transmit Receive Unit (WTRU)102 that may be used to drive a display using the techniques described herein. As shown in fig. 1, the WTRU102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a Global Positioning System (GPS) chipset 136, and/or other peripherals 138, among others. It should be appreciated that the WTRU102 may include any subcombination of the foregoing elements while maintaining consistent embodiments.

The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of Integrated Circuit (IC), a state machine, or the like. The processor 118 may perform signal decoding, data processing, power control, input/output processing, and/or any other functions that enable the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to a transceiver 120, and the transceiver 120 may be coupled to a transmit/receive element 122. Although fig. 1 depicts processor 118 and transceiver 120 as separate components, it should be understood that processor 118 and transceiver 120 may be integrated together in one electronic package or chip.

The transmit/receive element 122 may be configured to transmit or receive signals to or from a base station via the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. As an example, in an embodiment, the transmit/receive element 122 may be a radiator/detector configured to transmit and/or receive IR, UV or visible light signals. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive RF and optical signals. It should be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.

Although transmit/receive element 122 is depicted in fig. 1 as a single element, WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may use MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) that transmit and receive wireless signals over the air interface 116.

Transceiver 120 may be configured to modulate signals to be transmitted by transmit/receive element 122 and to demodulate signals received by transmit/receive element 122. As described above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers that allow the WTRU 102 to communicate via multiple RATs (e.g., NR and IEEE 802.11).

The processor 118 of the WTRU 102 may be coupled to and may receive user input data from a speaker/microphone 124, a keypad 126, and/or a display/touch pad 128, such as a Liquid Crystal Display (LCD) display unit or an Organic Light Emitting Diode (OLED) display unit. The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. Further, the processor 118 may access information from and store data in any suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include Random Access Memory (RAM), Read Only Memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a Subscription Identity Module (SIM) card, a memory stick, a Secure Digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from and store data in memory that is not physically located in the WTRU 102, such memory may be located, for example, in a server or a home computer (not shown).

The processor 118 may receive power from the power source 134 and may be configured to distribute and/or control power for other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (Ni-Cd), nickel-zinc (Ni-Zn), nickel metal hydride (NiMH), lithium ion (Li-ion), etc.), solar cells, and fuel cells, among others.

The processor 118 may also be coupled to a GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) related to the current location of the WTRU 102. In addition to or in lieu of information from the GPS chipset 136, the WTRU 102 may receive location information from base stations via the air interface 116 and/or determine its location based on the timing of signals received from two or more nearby base stations. It should be appreciated that the WTRU 102 may acquire location information via any suitable positioning method while maintaining consistent embodiments.

The processor 118 may be further coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripherals 138 may include accelerometers, electronic compasses, satellite transceivers, digital cameras (for photos and/or video), Universal Serial Bus (USB) ports, vibration devices, television transceivers, hands-free headsets, Bluetooth® modules, Frequency Modulation (FM) radio units, digital music players, media players, video game modules, internet browsers, virtual reality and/or augmented reality (VR/AR) devices, and activity trackers, among others. The peripherals 138 may include one or more sensors, which may be one or more of the following: gyroscopes, accelerometers, hall effect sensors, magnetometers, orientation sensors, proximity sensors, temperature sensors, time sensors, geographic position sensors, altimeters, light sensors, touch sensors, barometers, gesture sensors, biometric sensors, humidity sensors, and the like.

The WTRU 102 may include a full duplex radio for which reception or transmission of some or all signals (e.g., associated with particular subframes for UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent or simultaneous, etc. The full-duplex radio may include an interference management unit that reduces and/or substantially eliminates self-interference via signal processing by hardware (e.g., a choke coil) or by a processor (e.g., a separate processor (not shown) or by the processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio that transmits or receives some or all signals, such as associated with a particular subframe for UL (e.g., for transmission) or downlink (e.g., for reception).

Detailed Description

Parameters characterizing a multi-focal plane (MFP) display typically include the number of focal planes (which may be linearly assigned on a dioptric scale) and the properties of the depth mixing function. Both affect the amount and nature of quantization error in the 3D shape (depth) approximated by the MFP. Some optimization principles are described below.

Optimizing MFPs by multi-focus capture.

In MFP rendering, the focal planes may be formed using textures captured with some selected aperture and focal length. For example, high spatial frequencies may be suppressed in unfocused image regions due to the blur caused by the camera optics. Correspondingly, the MFPs may lack accuracy at the corresponding depth/accommodation distance. Several focal lengths may be used to capture the texture of a view.

In "Optimal Presentation of imaging with Focus multimedia Multi-Plane Displays" by Rahul Narain et al, ACM graphical transaction processing (TOG), Vol.34, No. 4, article 59, month 8 2015, "a method for forming a dioptric placement MFP is described that uses multiple scene captures with varying focal lengths as input. Using multi-focus capture, the MFP can be more accurately optimized according to the Human Visual System (HVS). This applies, for example, to the effects of refraction, reflection, and other non-lambertian phenomena in the captured view. In addition to cameras with different focal lengths, a set of input images may be derived from a light field captured from a scene, for example with a camera such as Lytro Illum.

Optimizing MFPs based on the displayed content.

One method of optimizing MFP rendering is to derive and position the focal planes from the displayed content, using methods such as those in W. Wu et al., "Content-adaptive focus configuration for near-eye multi-focal displays," IEEE International Conference on Multimedia and Expo (ICME), July 2016, pp. 1-6. For example, if the pixels of the input image are clustered around certain depth levels or regions, it may be beneficial for quality to position the focal planes around these clusters.

Problems addressed by the exemplary embodiments.

One way to form the multiple focal planes is to quantize (discretize) each view in the depth dimension and map each pixel to its closest focal/depth level. Whether the multiple focal planes are represented by a stack of physical displays or rendered in a time-multiplexed manner, the result tends to suffer from two significant types of distortion, namely flattening (or cardboarding) and contouring (or banding).

These distortions are caused by the quantization of depth values, which projects the pixels of a depth range onto a single plane. The reconstructed view consists of a stack of planes with noticeable depth separation. Often, objects at different distances are each mapped to one of the depth planes and may appear and move like paper cutouts in the reconstructed view.

Abrupt changes in depth can also result in contours and steps when viewing objects that span adjacent focal planes. These abrupt changes are caused by discontinuous retinal blur as the gaze moves across the two focal planes. The phenomenon is visible even though the viewer sees correct and undistorted texture.

In existing systems, the number and placement of focal planes is typically fixed and does not change over time. The fixed number and positions of the focal planes result in a fixed error distribution that is, on average, higher between the focal planes. This tends to interfere with the perceived quality of accommodation between focal planes. Note that depth accuracy also affects the quality of objects moving in depth, as the amount of blur varies over time.

Common optimization criteria for MFP displays are the number of focal planes (linearly distributed on the dioptric scale) and the nature of the depth-blending function(s) used to reduce the quantization effect. Two approaches to MFP optimization are optimization based on multi-focus capture (Narain et al.) and optimization according to the rendered content (Wu et al.). These methods require complex scene capture (e.g., of light fields) or complex modeling and computation of subjective quality (e.g., of perceived spatial frequencies and retinal blur) and MFP placement.

Summary of exemplary embodiments.

When only a few focal planes are used to reconstruct a 3D view, it is beneficial to position the focal planes well. Basic optimization criteria include, for example, dioptric spacing of the focal planes following the properties of the human eye, and reduction of (objective and/or subjective) depth quantization distortion by optimizing the depth blending functions.

The exemplary embodiments operate to select beneficial depth blending functions and focal plane positions in a time-dependent manner. Exemplary embodiments also provide depth blending functions that provide good spatial separation and support the formation of focal planes at varying positions.

The MFPs are time multiplexed in shifted positions.

The accuracy of the focal plane is generally optimal for pixels at the same depth as the focal plane. Between focal planes, the accuracy is lower, even when depth-based blending is used to interpolate pixel values and reduce the depth quantization effect in the rendered view. In many current approaches, the position of the focal planes is fixed, resulting in a permanent mean error distribution, supporting information at or near the focal planes more than information between focal planes.

In view of perceived quality, it is advantageous to vary the positions of the focal planes over time, so that the average error distribution is less structured and less permanent (i.e., it varies over time). These changes, referred to herein as multiplexing of the focal planes, are preferably performed at a rate high enough not to cause perceptible flicker artifacts.

One feature of exemplary embodiments of the method is that the display system allows the MFPs to be rendered at varying depths. In addition, the focal planes are formed at varying positions, so that the depth blending functions must be shifted along the depth (z) axis.

In an exemplary embodiment, a sinusoidal mixing function is employed. Such functions are easy to form and their positions are easy to change by changing their phase by a control variable.

Although the positions vary from one rendered input image to the next, the selected positions are used for all of the MFPs composing the image in question. In this way, the brightness distribution remains substantially unchanged for each rendered MFP stack.

Notably, for a time multiplexed MFP, the time varying MFP method does not necessarily require a change in refresh rate.

Fig. 5 shows the steps performed in the time multiplexing of MFPs in shifted positions. In step 502, an image of a scene is captured, and in step 504, a depth map of the scene is generated. In step 506, shifted focal plane stacks are formed, and in step 508, the shifted focal plane stacks are rendered for display. The process of forming the shifted MFPs and rendering the focal stacks is described in more detail below.

Potential benefits of some embodiments.

A general advantage of MFP near-eye displays is that they support natural accommodation and vergence. Each image in the stack of focal planes is rendered at a different depth, and the eye blurs those focal planes that are not being fixated. This enables the focal planes to be rendered into a volume without tracking the user's eye accommodation. Thus, an MFP display does not require simulation of retinal blur. In contrast, blur rendering based on eye tracking is often an inaccurate and computationally demanding process.

The quality of MFP displays generally increases with the number of focal planes used for rendering. However, displaying a large number of focal planes is limited for various technical reasons. In practice, the maximum number of focal planes is limited to only a few, typically four to six. High quality rendering would require approximately double that number.

In some embodiments, a high fidelity view may be reconstructed with a relatively small number of focal planes (e.g., five). This is achieved by interleaving a smaller number of focal planes in order to increase their apparent number. In exemplary embodiments, the rendering quality may be improved with a selected number of MFPs (selected complexity), or, by reducing the number of MFPs required, the system complexity may be reduced without reducing the rendering quality.

In embodiments using physical display stacks, the reduced number of focal planes results in better transparency. If the physical or virtual display is time multiplexed, fewer displays result in a higher on-off ratio and higher brightness per focal plane.

The reduced number of focal planes may enable thinner display structures. This is particularly beneficial for optical see-through (OST) displays, which support the option of viewing the real world through the display structure without distortion. Examples of optical see-through displays are described in more detail below.

The exemplary sinusoidal depth blending functions may have various benefits over existing depth blending functions. The basic approach of slicing image information using box filters produces flattening or cardboarding distortion, and step or contour distortion on surfaces extending over several depth slices. Tent filters, which are commonly used to reduce these distortions, may also cause discontinuities in the brightness distribution, which appear as folds and contours in the focal planes.

The sinusoidal blending functions used in some embodiments provide good spatial separation between focal planes, so that the eye can fixate and accommodate to different depths. The sinusoidal functions are continuous throughout the depth range, and two closed-form functions of opposite phase can be used to obtain all of the weight values required for any set (number) of blending functions. The closed form can also be used to form dioptrically positioned MFPs directly from linear depth information, so that no intermediate mapping between linear and dioptric scales is required.

The techniques disclosed herein for increasing the apparent number of focal planes, such as patching or doubling, may also be applied with box filters, tent filters, or other known ways of forming focal planes and/or other blending functions.

Some exemplary embodiments are computationally much less demanding than known optimization methods. Increasing the apparent number of focal planes using techniques such as patching or doubling can give considerable benefit with a more cost-effective system.

In exemplary embodiments, the disclosed system may also be optimized using multi-focus capture (e.g., focal stack or lightfield) or content-based placement of a set of MFPs.

Exemplary blending functions.

In order not to cause brightness variations in the 3D view, the focal plane luminance weights should preferably sum to 1. In this respect, sinusoidal blending functions (sin(x) and/or cos(x)) are particularly advantageous. With a suitable offset of the ordinate (adding 0.5), scaling (multiplying by 0.5), and phase shift (multiples of π), the sum of the sinusoidal functions reaches the value 1 and can be defined over the desired depth range.

Fig. 6 shows an example of two opposite-phase sinusoidal functions that can be used to generate blending functions for five focal planes. In this example, a typical depth range from 0 to 255 is used. The solid line ("series 1") may be generated using the expression

0.5 + 0.5·cos(4πx/255)

Where x represents depth. The dashed line ("series 2") may be generated using an expression

0.5 − 0.5·cos(4πx/255)

For suitable values of A and B, such functions have the general form 0.5 + 0.5·sin(Ax + B). The weighting functions for the different MFPs can be obtained by selecting appropriate portions of the graph. In fig. 6, the horizontal scale runs from 0 to 255, which is the range of depth map values used in the simulations. Accordingly, the argument of the sinusoid is chosen as a function of depth so as to produce the desired number of sinusoids (MFPs) on the depth scale used.

Fig. 7 shows an example of sinusoidal functions that are segmented to generate three blending functions for three focal planes.

Fig. 8 accordingly shows how five image planes are formed using the two sinusoids shown in fig. 6. It should be noted that in the graph of fig. 8 (and subsequently in fig. 9A-9B and 11A-11B), a small offset is introduced in the vertical direction in order to make the different functions visible where their values overlap, in particular along the x-axis. In the example of fig. 8, the different series may be represented by the following expression:

Series 2: w2(x) = 0.5 + 0.5·cos(4πx/255) for x < (1/4)·255, otherwise w2(x) = 0

Series 3: w3(x) = 0.5 − 0.5·cos(4πx/255) for x < (1/2)·255, otherwise w3(x) = 0

Series 4: w4(x) = 0.5 + 0.5·cos(4πx/255) for (1/4)·255 < x < (3/4)·255, otherwise w4(x) = 0

Series 5: w5(x) = 0.5 − 0.5·cos(4πx/255) for x > (1/2)·255, otherwise w5(x) = 0

Series 6: w6(x) = 0.5 + 0.5·cos(4πx/255) for x > (3/4)·255, otherwise w6(x) = 0

Using the weight values in these series, the following technique may be used to display a pixel having a perceived brightness L0 at depth x. The total perceived luminance L0 is mapped to the perceived luminance of the corresponding pixel at each of the five image planes using the following formula.

L2=w2(x)·L0

L3=w3(x)·L0

L4=w4(x)·L0

L5=w5(x)·L0

L6=w6(x)·L0

At each image plane, the appropriate pixel is illuminated with the corresponding calculated perceived brightness. From the perspective of the viewer, the corresponding pixels in each image plane overlap each other, giving the perception of a single pixel having a perceived brightness L0 at depth x.
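The sketch below simply restates the series and luminance formulas above in code form; the variable names (w2..w6, L0) follow the labels used here, and the function name and example values are illustrative rather than part of the disclosure.

```python
import numpy as np

def five_plane_weights(x):
    """Weights w2..w6 (series 2..6 above) for a depth value x on a 0..255 scale."""
    c = np.cos(4.0 * np.pi * x / 255.0)
    w2 = 0.5 + 0.5 * c if x < 255 / 4 else 0.0
    w3 = 0.5 - 0.5 * c if x < 255 / 2 else 0.0
    w4 = 0.5 + 0.5 * c if 255 / 4 < x < 3 * 255 / 4 else 0.0
    w5 = 0.5 - 0.5 * c if x > 255 / 2 else 0.0
    w6 = 0.5 + 0.5 * c if x > 3 * 255 / 4 else 0.0
    return np.array([w2, w3, w4, w5, w6])

x, L0 = 100.0, 0.8                       # depth and total perceived luminance
weights = five_plane_weights(x)
plane_luminances = weights * L0          # L2..L6 for the five image planes
assert abs(weights.sum() - 1.0) < 1e-9   # the weights always sum to one
```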

In some embodiments, the MFPs are computed using a linear depth scale, which corresponds to the output metric of most depth capture devices. To apply a dioptric spacing of the MFPs, the linear depth can be mapped to a dioptric scale prior to forming the MFPs. To map the scales accurately, the nearest and farthest depths (distances from the eye) are determined, as described in more detail below.

MFPs formed with sinusoidal blending functions are well separated spatially. Their blending properties share some of the advantages of the linear tent filter.

Unlike the tent filter, sinusoidal blending does not show folds or other abrupt visible changes in the MFP brightness distribution. Furthermore, unlike the various forms of the tent filter, the sinusoidal depth functions are smooth, so that their first derivatives are continuous over the entire depth range.

Embodiments using intermediate focal planes ("patched" MFPs).

A fixed composition of the focal planes is usually not optimal. Fixed placement of the MFP focal planes results in a fixed quantization step size and a fixed average error distribution along the depth scale. To address this issue, some embodiments improve MFP rendering quality by time multiplexing MFPs at alternating positions as a way to reduce quantization effects. In some such embodiments, the alternation is performed between two different focal plane allocations for every two input images.

For example, if the maximum supported and/or selected number of MFPs is five, then five focal planes are used to compose the first image, displayed in a first time interval. Subsequently, for the second image, four MFPs are rendered at depths interleaved between the five previously rendered MFPs. In total, nine MFP positions are used for rendering, which reduces the average quantization error and improves the perceived image quality. Figs. 9A-9B illustrate this principle.

The weights for the intermediate focal planes may be obtained by phase shifting the basic sinusoidal weighting functions by a quarter wavelength. In some embodiments, for outermost focal plane positions that are not at either end of the depth scale, the weighting values are completed by repeating (extrapolating) the values 0 or 1.

Exemplary embodiments using this "patching" method operate to increase the apparent number and the perceived quality of the focal planes without exceeding a selected maximum number of focal planes (five MFPs in this example).

FIG. 10 shows patching a set of five MFPs with a set of four intermediate MFPs as a function of time. The image is mapped to a set of five image planes 1002 (closest to the user's eyes), 1004, 1006, 1008, and 1010. These five image planes are displayed to the user at a first time t1. The image is also mapped to a set of four image planes 1003, 1005, 1007, and 1009. The set of four image planes is interleaved in distance with the set of five image planes. Specifically, the distance of plane 1003 is between the distances of planes 1002 and 1004, the distance of plane 1005 is between the distances of planes 1004 and 1006, the distance of plane 1007 is between the distances of planes 1006 and 1008, and the distance of plane 1009 is between the distances of planes 1008 and 1010. At a second time t2, after time t1, the set of four image planes is displayed to the user. The display of the set of five image planes and the set of four image planes may alternate, with the set of five image planes displayed again at time t3 and the set of four image planes displayed again at time t4. The display of the set of five image planes and the set of four image planes may alternate at a rate high enough that the change is not noticeable to the user, for example at least twenty-four times per second. Different numbers of planes may be used in different embodiments.

To avoid flicker, the shifting of the MFP stack may be performed between/for each rendered frame. The temporal characteristics of the human visual system make it slower to perceive accommodation (depth) changes than spatial (or angular) changes, so in some embodiments the display frame rate is unchanged despite the changes in MFP position.

Embodiments using interleaving (e.g., doubling) of the focal planes.

In some embodiments, the quality of MFP rendering can be improved by maintaining a selected maximum number of focal planes but alternating between two interleaved positions, doubling the apparent number of MFPs. Figs. 11A-11B show exemplary weighting functions when four MFPs are used. The weights for the interleaved focal planes may again be obtained by phase shifting the basic sinusoidal weighting functions by a quarter wavelength. For outermost focal plane positions that are not at the ends of the depth scale, the weighting values can be completed by repeating (extrapolating) the values 0 or 1.

An example of interleaved MFPs, in which five MFPs are time-multiplexed with five MFPs at intermediate positions, is shown in fig. 12. The display of a first set of image planes 1201, 1203, 1205, 1207, 1209 alternates with the display of a second, interleaved set of image planes 1202, 1204, 1206, 1208, 1210. The apparent number of focal planes is doubled (to ten in this example) and the perceived accuracy is increased without increasing the selected maximum number of focal planes (five in this example).

Binocular viewing: alternating MFP stacks for each eye.

In some embodiments, the quantization effect in the depth dimension can be further reduced by alternating the two sets of MFPs in opposite phases for each eye. Fig. 13 shows one such method in the case of five interleaved MFPs. At time t1, image planes 1302a, 1304a, 1306a, 1308a, and 1310a are displayed to the left eye 1300a of the user, and image planes 1301b, 1303b, 1305b, 1307b, and 1309b are displayed to the right eye 1300b of the user. At time t2, image planes 1301a, 1303a, 1305a, 1307a, and 1309a are displayed to the left eye, and image planes 1302b, 1304b, 1306b, 1308b, and 1310b are displayed to the right eye. The display may alternate rapidly between the configuration at time t1 and the configuration at time t2.

One benefit of alternating between the two stack positions is that the averaging properties of the human visual system are exploited, in a similar way to so-called binocular fusion or monovision techniques. Using this property, the perceived depth of field of a stereoscopic image can be extended by capturing the image pair at different focal distances (near and far). The same phenomenon can also improve the vision of presbyopic viewers through eyeglasses whose two lenses have different optical powers.

Selecting the number of MFPs.

In "adaptation to multiple-focal-plane displays" by k.j. mackenzie et al, vision journal (2010)10(8):22, pages 1-20 ", using linear depth mixing (tent filter), the change in focal length results in a continuous near-linear Accommodation response for image plane separations up to 10/9D, indicating that five MFPs evenly distributed between 4 and 1/14 on the dioptric scale (corresponding to a measured distance between 0.25m and 14 m) may be sufficient for practical 3D displays.

Fig. 8 shows the blending functions of five MFPs evenly distributed on the depth scale. If the corresponding depth scale is mapped between, for example, 4 and 1/14 on the dioptric scale (corresponding metric distances between 0.25 m and 14 m), the separation between focal planes is 0.98 D. Existing studies thus suggest that a sufficient number of focal planes is on the order of five MFPs. The embodiments disclosed herein bring the accuracy well to the safe side without the need to use more complex methods.
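As a quick check of the separation figure quoted above (five planes evenly spaced in diopters between 4 D and 1/14 D), the following illustrative snippet reproduces the arithmetic:

```python
d_near, d_far, n = 4.0, 1.0 / 14.0, 5        # diopters (0.25 m and 14 m); five planes
separation = (d_near - d_far) / (n - 1)      # spacing between adjacent planes
print(round(separation, 2))                  # 0.98 D
```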

Alternative blending functions.

Known blending functions include the so-called box and tent filters. In addition to these, several other variations of blending filters may be used, including those described in Xinda Hu, "Development of the Depth-Fused Multi-Focal-Plane Display Technology," PhD dissertation, University of Arizona (2014), and in Hu, X. and Hua, H. (2014), "Design and assessment of a depth-fused multi-focal-plane display prototype," IEEE/OSA Journal of Display Technology, 10(4), 308-316.

A beneficial property of a set of blending functions is that the functions sum to one, so as not to cause a change in the total brightness level of the rendered MFP stack. This property, referred to as "uniform partitioning" (a partition of unity), can be achieved with any number of functions.

As an example of blending functions that may be employed in an embodiment as an alternative to the described sinusoidal functions, a set of blending functions may be constructed, for example, from a series of so-called impulse functions Ψ: R → R together with one or more complementary functions that sum to one. Furthermore, the blending functions may consist of various smooth transition functions between 0 and 1, or between 1 and 0.

Furthermore, a so-called Friedrichs mollifier (also called an approximation of the identity) can be used to create a sequence of smooth functions for depth-weighted blending.

The above are examples only, and exemplary embodiments may employ alternative sets of blending functions.

Aligning the linear scale and the dioptric scale.

The characteristics of the human visual system favor placing the focal planes at regular distances on the dioptric scale. However, depth information is typically most easily captured on a linear scale. Ideally, the position and extent of the linear depth range are known. In practice, however, linear scales are usually relative, varying between some minimum and maximum distance in the scene, without information on the actual metric span.

On the other hand, the depth perception of the human eye is more absolute, starting from the eye position of the viewer and continuing to infinity. When using linearly captured depth information in an MFP near-eye display, it is helpful to identify the nearest and farthest rendering distances (on the dioptric scale) and to map/align the linearly captured depth to this range.

Without information about the absolute scale and span of the linear depth, depth perception is generally not true and accurate. This is especially the case when content (e.g., video plus depth) is received from different sources. Nevertheless, the above-described alignment and mapping between the linear and dioptric depth scales may be performed on an assumed basis in order to optimize the placement of the discrete focal planes.

The relationship between the dioptric depth D(x) and the linear normalized depth x can be expressed as follows:

D(x) = 1/(x·zmin + (1 − x)·zmax), where x ∈ [0, 1], Dmin = 1/zmax, and Dmax = 1/zmin.

Here, x = 0 corresponds to the maximum depth in the scene, and x = 1 to the minimum depth in the scene. For a depth map with 8-bit resolution, the depth values can simply be scaled from 0 to 255.
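The sketch below restates this mapping and uses it to place a chosen number of focal planes at equal dioptric intervals; the zmin and zmax values are arbitrary example figures, not values prescribed by this disclosure.

```python
import numpy as np

def dioptric_depth(x, z_min, z_max):
    """D(x) = 1/(x*z_min + (1 - x)*z_max) for normalized linear depth x in [0, 1].

    x = 0 corresponds to the farthest point (z_max), x = 1 to the nearest (z_min).
    """
    return 1.0 / (x * z_min + (1.0 - x) * z_max)

z_min, z_max = 0.25, 14.0                     # assumed scene span in metres
x = np.linspace(0.0, 1.0, 256)                # e.g. 8-bit normalized depth values
D = dioptric_depth(x, z_min, z_max)           # diopters, from 1/14 D up to 4 D
planes_D = np.linspace(D.min(), D.max(), 5)   # five planes, equal dioptric spacing
planes_m = 1.0 / planes_D                     # corresponding metric distances
```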

Note that the examples of MFPs in fig. 10, 12, and 13 are formed using linear depth scales for simplicity, although other depth scales may alternatively be used.

Forming and rendering the MFPs at content-dependent positions.

In some embodiments utilizing time-varying focal planes, the MFPs are positioned according to the displayed content. The positioning of the focal planes then depends on the characteristics of each input image to be displayed.

For example, if the input image is clustered around certain depth levels or regions, it may be beneficial for rendering accuracy to position the focal planes around these clusters. When using, for example, a set of sinusoidal blending functions, their mutual spacing is essentially constant. Accordingly, content-based optimization can be performed for the entire set of MFPs at once.

Both the number and the positions of the focal planes may vary depending on the content. Applying the above procedure, for example, segments of the sinusoidal blending functions (each essentially containing the weights within one half wavelength) can be extracted and moved to any position on the depth scale. Correspondingly, adjacent portions of the waves may be stretched (by repeating weight values) to preserve the uniform partitioning property.

Instead of the sinusoidal blending functions used in most of the examples described above, the exemplary embodiments can also be applied with other blending functions (e.g., tent, non-linear, impulse, polynomial, or other filters). Fig. 14 illustrates a set of B-spline basis functions that satisfy the uniform partitioning condition and may be used as blending functions in some embodiments. As is apparent from fig. 14, a blending function need not be symmetric or reach a maximum weight value of 1.

Different techniques for forming content-adaptive MFPs may be used in different embodiments. In some embodiments, histogram analysis may be performed to derive content attributes. Various techniques may be used to specify the metrics and rules for optimizing the focal plane positions. The rules may also utilize models of visual perception, such as those described in W. Wu et al., "Content-adaptive focus configuration for near-eye multi-focal displays," IEEE International Conference on Multimedia and Expo (ICME), July 2016. However, in other embodiments, the selection of the image plane distances is independent of the content being displayed.

The MFPs are time multiplexed in shifted positions.

Fig. 15A shows a method of time multiplexing MFPs at shifted positions. In step 1502, image content is captured with a camera (e.g., a depth camera). In step 1504, a depth map of the image content is created. In step 1506, the image content is mapped to different stacks of image planes based on the depth map. The different stacks of image planes are time multiplexed (step 1508) and rendered (step 1510) for display to a user.

Fig. 15B shows another method of time multiplexing MFPs at shifted positions. 3D content is captured (step 1512) and reconstructed (step 1514). In step 1516, the image content is mapped to different stacks of image planes based on the depth map. The different stacks of image planes are time multiplexed (step 1518) and rendered (step 1520) for display to a user. Fig. 15C shows the case in which 3D modeled content 1522 (e.g., a complete VR scene or an AR object) is used as the input for forming the MFPs (step 1524). When forming MFPs from virtually modeled content, the corresponding 3D content is available without the capture and reconstruction steps (compare fig. 15B). The different stacks of image planes are time multiplexed (step 1526) and rendered (step 1528) for display to a user.

In some embodiments, instead of texture and depth video, image information may be captured and transmitted as real-time 3D data. This may affect the formation of the MFP at the receiving location.

As described in more detail above, the formation of the MFP stack can be accomplished by "patching" or "interleaving".

One technique for forming the blending functions, which may be used in different embodiments, includes the following steps.

Choosing opposite-phase continuous sinusoidal functions as basis functions.

Adjusting the basis functions to the desired depth range.

Adjusting the wavelength of the basis functions to produce the selected number of focal planes.

Computing a weight table from the two complementary basis functions, yielding a weight for each depth value.

Segmenting the values in the weight table according to the selected number of MFPs.

Completing the values at the ends of the depth scale by extrapolating the values of the outermost focal planes.

In the case of "patch", the technique results in a set of n MFPs, patched with another set of (n-1) MFPs at an intermediate location, for a total of 2n-1 MFPs. In the case of "interleaving", this technique results in two sets of n MFPs, interleaved with each other (shifted by a quarter wavelength in the depth dimension). In either case, in some embodiments, the weights of the (segmented and completed) blending functions add up to one over the entire depth range, i.e., they form a "unified partition".

Exemplary optical structures for use in the varifocal time-multiplexed embodiments.

The display device on which the exemplary embodiments are implemented may take a variety of forms in different embodiments. One such display device is described in S. Liu et al., "A Novel Prototype for an Optical See-Through Head-Mounted Display with Addressable Focus Cues," IEEE Transactions on Visualization and Computer Graphics, Vol. 16, No. 3, May/June 2010, pp. 381-393. Liu et al. describe a solution for optical see-through AR glasses. This solution avoids the problem of transparency by placing the physical display beside the light path of the viewer. With this structure, the displayed images are virtual images that do not block each other the way stacked physical displays do. In the Liu et al. device, the focal length of a controllable (liquid) lens is adjusted to provide different virtual focal planes. Fig. 16 shows an optical configuration that can be used to display different image planes. In the optical configuration of fig. 16, a microdisplay 1602 displays an image. Light from the display passes through the adjustable lens 1604 and through a half-silvered mirror 1606 before being reflected by a mirror 1608 (which may be a concave mirror). The light reflected by the mirror 1608 is again reflected by the half-silvered mirror 1606 into the user's eye 1610. The user can view the external environment through the half-silvered mirror 1606. The lens 1604 and the mirror 1608 form an image (e.g., image 1612) of the microdisplay 1602 at a position determined by the optical power of the adjustable lens 1604 and the mirror 1608.

With a display device such as that of fig. 16, rendering of any number (e.g., five) of focal planes can be performed in a time-multiplexed manner given appropriate speed and brightness of the display and lenses. The variable focus lens 1604 has a continuous focal length range and renders multiple focal planes at varying distances (with varying optical power of the lens). In alternative embodiments, multiple focal planes may be implemented using, for example, free-form lenses/waveguides in order to achieve a sufficiently compact display structure, e.g., using the techniques described in "Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism" by Cheng et al., Applied Optics 48, 2009, pp. 2655-2668.

In some embodiments, a pair of variable focus lenses is placed in series such that one lens renders a basic set of MFPs (e.g., five), while the other (e.g., birefringent) lens alternates the stack between the basic and intermediate positions.

An exemplary method.

An exemplary method is shown in FIG. 17, in which the renderer control module 1702 selects N focal planes (step 1704). The number may be time varying. The renderer control module also selects the position (depth) of each focal plane (step 1706). These selections may be based on the content of the depth map for the image to be rendered, or the selections may be content independent (e.g., based on the physical capabilities of the respective display device). The number and location of the focal planes are provided to a renderer 1708. The renderer also receives a depth map and an image to be rendered. In step 1710, the renderer uses the depth map (and appropriate blending functions) to form the weights used for forming each individual image plane (focal plane). The renderer forms (step 1712) and renders (step 1714) the various image planes and provides them to the MFP display. The MFP display cycles through the display planes, adjusting the lenses (or other adjustable display optics) for each display plane (step 1716) and displaying the image planes at the corresponding respective depths (step 1718). Note that in this exemplary method the image planes are displayed at different times (step 1718) rather than simultaneously as previously described, and sequential sets of focal planes (patched or interleaved) are used. The texture and depth captures used to form the respective sequential sets of focal planes may also be performed sequentially for optimal precision.
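As an illustration of steps 1710-1712, the following sketch forms the individual focal-plane images by weighting the texture with depth-dependent weights. It reuses the hypothetical mfp_weights helper from the earlier sketch and assumes a depth map normalized to [0, 1]; it is not the renderer's actual implementation.

```python
import numpy as np

def decompose_to_focal_planes(texture, depth, plane_depths):
    """Split a texture into focal-plane images using per-pixel blending weights.

    texture:       H x W x C image (e.g., RGB values in [0, 1]).
    depth:         H x W depth map, normalized to [0, 1].
    plane_depths:  depths of the focal planes chosen by the renderer control.
    Returns a list of H x W x C focal-plane images that sum back to the texture.
    """
    weights = mfp_weights(depth.ravel(), plane_depths)   # (N, H*W), see earlier sketch
    planes = []
    for k in range(len(plane_depths)):
        w_k = weights[k].reshape(depth.shape)[..., np.newaxis]
        planes.append(texture * w_k)                     # depth-weighted copy of the texture
    return planes
```

The MFP display then cycles through these planes, adjusting the optics to the corresponding depth before showing each one (steps 1716-1718).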

An optical see-through display device.

Also disclosed herein are multi-focal plane (MFP) displays that use a zoom method and time multiplexing to present multiple focal planes. In some embodiments, the focal length is varied by using an electronically controllable variable focus lens. The use of an electronic lens avoids the need for a mechanical actuator and enables structural multiplexing of several lens systems.

The term structural multiplexing is used herein to refer to the use of multiple variable focus lens configurations or layouts that occupy the same physical portion of the optical pipeline. At any given time, the selected lens system is configured by activating and deactivating the electronic lenses as required.

In some embodiments, most components of the rendering system may be shared and held in fixed positions, potentially simplifying implementation.

Some embodiments provide coincidence (i.e., no offset) of the real and perceived eyepoints. Variations are also described in which this property is traded off to achieve a better form factor for the implementation (e.g., by allowing some eye offset, omitting background occlusion, or using direct occlusion).

Stereoscopic 3D.

Stereoscopic displays are a common way of displaying 3D information (commonly referred to as stereoscopic 3D or S3D). Stereoscopic viewing is based on capturing parallel views with two cameras (a stereo pair) separated by a small distance called the stereo baseline. The capture setup mimics the binocular images perceived by the two human eyes. This technology has gained popularity through its use in 3D cinema, 3DTV, and augmented and virtual reality applications (AR and VR). In AR/VR, wearable near-eye displays (sometimes referred to as glasses) are commonly used.

In real-world space, the human eyes can freely scan the scene and pick up information by focusing and accommodating to different distances/depths. The vergence of the eyes varies between nearly parallel viewing directions (for distant objects) and strongly crossed directions (for objects close to the eyes). Vergence and accommodation are strongly coupled, so that most of the time the accommodation/focus distance and the convergence point of the two eyes naturally meet at the same 3D point.

In traditional stereoscopic viewing, the eyes are always focused on the same image/display plane, while the Human Visual System (HVS) and the brain form a 3D percept by detecting the disparity between the images (i.e., the small displacement of corresponding points in the two 2D projections). In stereoscopic viewing, the vergence and accommodation distances may therefore differ, which leads to the vergence-accommodation conflict (VAC). Although VAC is known to cause visual fatigue and other types of discomfort, conventional stereoscopy remains the most commonly used approach in near-eye displays due to its convenience and cost effectiveness.

A multi-focal plane display.

In a multi-focal plane (MFP) display, the viewer is able to focus on objects at different depths, which avoids the VAC typical of stereoscopic displays. A stack of (natural or virtual) focal plane images is rendered at different depths; the plane being observed is seen in focus, while the others are naturally blurred by the human visual system. The MFP display thus shows a stack of discrete focal planes, composing the 3D scene from layers along the viewer's visual axis.

The multiple focal planes are primarily complementary, rather than additive, in the direction transverse to the viewing axis. However, additive effects can smooth out the quantization steps and contours that might otherwise be perceived when viewing a scene compiled from discrete focal planes.

Multiple focal planes may be displayed by spatially multiplexing a stack of 2-D displays, or by sequentially switching the focal length of a single 2-D display with a high-speed variable focus element (VFE) in a time-multiplexed manner while spatially rendering the visible portions of the corresponding multifocal image frames. Each image in the stack of (virtual) focal planes is rendered at a different depth, and the eye naturally blurs those focal planes that are not being observed.

Near-eye binocular viewing using two MFP stacks.

As with conventional stereoscopic near-eye displays that show side-by-side stereoscopic images, two MFP stacks are used to support stereoscopy in near-eye MFP eyewear. The two stacks may be formed from a stereoscopic input signal or synthesized from a monoscopic input of texture and depth (video plus depth).

Using a monoscopic input signal, one MFP stack is first generated and then separated into two MFP stacks corresponding to two slightly different (stereoscopic) viewpoints. The separation is done by shifting the monoscopic MFP stack from its nominal rendering direction to two new viewpoints, one for each eye. This is comparable to synthesizing stereoscopic viewpoints by 3D warping in a Depth Image Based Rendering (DIBR) system.
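The viewpoint separation can be approximated very simply by shifting each focal plane laterally in proportion to its inverse depth. The sketch below is only illustrative: it assumes metric plane depths and a pinhole-camera disparity model, and it uses an integer np.roll shift in place of a proper reprojection.

```python
import numpy as np

def stereo_from_mono_mfp(mfp_stack, plane_depths_m, baseline_m, focal_px):
    """Shift a monoscopic MFP stack to left/right eye viewpoints (DIBR-like warp).

    mfp_stack:       list of H x W x C focal-plane images.
    plane_depths_m:  depth of each plane in meters.
    baseline_m:      stereo baseline in meters.
    focal_px:        focal length of the assumed virtual camera, in pixels.
    """
    left, right = [], []
    for plane, z in zip(mfp_stack, plane_depths_m):
        disparity_px = int(round(focal_px * baseline_m / z))  # pinhole disparity at this depth
        left.append(np.roll(plane, disparity_px // 2, axis=1))
        right.append(np.roll(plane, -(disparity_px - disparity_px // 2), axis=1))
    return left, right
```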

Some problems with MFP display stacks.

MFP displays may in principle be realized by stacking focal plane displays as shown in fig. 2. Using this approach for an optical see-through display may have one or more of the following problems. First, the eyepiece optics change the geometry of the real-world view and should be optically compensated to provide the optical see-through effect; this increases the complexity and size of the implementation. Second, ideally each focal plane would be associated with a respective occluder for blocking the background view; otherwise, occlusion leakage may occur between the augmented elements and the real elements. Unless additional optics (at the cost of form factor) are used to make the occluder appear farther away, the occluder is usually positioned too close, resulting in occlusion leakage, and non-ideal placement of the occluder blurs the edges of the masked area. Third, each occluder or display element adds complexity, reduces brightness and contrast, and may cause distortion, for example due to mutual interference.

Temporal multiplexing of the focal planes eliminates cross-talk between display elements, but it can lead to flicker and loss of brightness. Time multiplexing may nevertheless reduce complexity, since a smaller number of components needs to be managed.

A zoom near-eye display.

A zoom (varifocal) display method avoids the VAC problem by dynamically adjusting the focal length of a single-plane display to match the convergence depth of the eyes. Focus adjustment may be achieved with mechanical actuators that zoom the eyepiece of the display or adjust the distance between the microdisplay and the eyepiece. Instead of adjusting the eyepiece focus by mechanical means, a series of electronically controlled active optical elements may be used, including liquid lenses, deformable mirrors, and/or liquid crystal lenses.

Eye tracking has been used to determine the appropriate focal distance and adjust the position of the focal plane accordingly. However, eye tracking typically requires additional hardware, is computationally intensive, requires high accuracy, and is a challenging operation to implement.

In some devices, zooming enables rendering the focal plane at different distances in a time multiplexed manner. In such devices, the focal length is sequentially adjusted and a respective focal plane is rendered at each distance. Time multiplexing makes implementation easy, but may suffer from a loss of brightness. The benefit of the time multiplexed zoom method compared to many other MFP methods is the simplicity of the display structure.

An electronically controllable zoom lens.

Zoom optics may be implemented using a movable lens inside the optical system. For time multiplexing several focal planes, mechanically moving parts may not be fast enough. However, electronically controllable optical elements avoid the need to mechanically move components inside the optical system.

There are several techniques available for implementing a variable focus lens with electronically controllable optical properties. One is the liquid lens, in which a transparent liquid-like substance is placed between two mechanically deformable membranes. A mechanical actuator controls the membrane tension and thereby adjusts the optical power of the lens. Although lenses of this type have been used successfully in prototype implementations of near-eye displays, their use is limited by their typically large mechanical dimensions and by the high power needed to control the tension that defines the optical power. Deformable mirrors may be constructed and used in a manner similar to liquid lenses.

Other techniques utilize the properties of liquid crystal materials and apply control voltages to orient a plurality of elementary liquid crystal lenses. Video see-through (virtual reality type) lenses are more demanding in practice, because larger lenses are typically required to support a sufficiently wide field of view (FoV). In augmented reality glasses, only a portion of the view typically needs to be supported to display the augmented object or content, which may be implemented using smaller lenses.

The configuration of the lens determines its speed and overall power range. For example, liquid crystal lenses may be arranged as a Fresnel-type lens to increase the speed of changing focus, and liquid crystal lenses may be stacked to increase the available adjustment range.

Video see-through displays and optical see-through displays.

A video see-through near-eye display (NED) is used to view virtual or captured content, or a combination thereof (AR content), in applications where the content is intended to fill most of the user's field of view and replace the user's real-world view. Virtual gaming and stored or streamed 360° panoramas are examples of this category. Typically, the content is displayed in one focal plane, which may cause a VAC. Supporting multiple focal planes allows the VAC to be reduced or avoided, which is a significant benefit.

Supporting see-through displays is a considerable challenge, and there are two significantly different levels in achieving this goal. Many current methods add virtual information onto the real-world background without occluding (replacing) it, resulting in ghost-like transparency and color distortion in the presentation. It is more desirable to support occlusion by blocking light from the desired parts of the real view and adding the virtual information onto these occluded areas. In most existing systems, occlusion is achieved for only one focal plane.

Note that background occlusion, in which virtual information blocks real-world objects, may not be sufficient to seamlessly incorporate virtual components into a real-world view. In addition, foreground occlusion may be useful. In foreground occlusion, virtual information rendered at a specified depth is occluded by those real-world objects in front of it. Background and foreground occlusion together may be referred to as mutual occlusion.

Augmentations may be rendered by detecting markers or feature sets with a camera. Furthermore, background occlusion may be implemented by using an occluding element, such as a Spatial Light Modulator (SLM), to block the view of the real world. Note that supporting background occlusion does not require depth sensing of the view by a depth sensor. However, if the virtual information is in turn expected to be occluded by real-world objects, it is beneficial to capture more 3D properties of the view than are needed merely for determining the augmentation pose. Therefore, to support foreground occlusion, it is beneficial to use a depth sensor.

Example optical see-through (OST) display implementations.

An example of an optical see-through near-eye display is shown in fig. 18A; this example renders only one focal plane. The implementation includes an objective lens 1802, inverting lenses 1804 and 1806, and an eyepiece lens 1808. A blocking layer 1810 (e.g., a spatial light modulator) is located between the inverting lenses and the eyepiece lens. A display component 1812, such as an LCD display or OLED display, is provided for displaying virtual content, and an optical combiner 1814 (e.g., a partially silvered mirror) is provided to combine images from the real world (as received through lenses 1802, 1804, and 1806) with images generated by the display component 1812.

The eyepiece lens is provided for folding (collapsing) the occlusion (formed at the shutter) and the augmentation (formed at the focal plane display) into the real view.

A shutter, which may be a Liquid Crystal (LC) element, is provided for occluding the areas to be replaced on the focal plane.

An augmented display component 1812 is provided for displaying the focal plane of the augmented object.

An optical combiner 1814 (e.g., a dichroic mirror) is provided to reflect the enhanced information.

Lenses 1802, 1804, 1806, and 1808 maintain the natural orientation of the view of the real world. In fig. 18A, the user's eye 1816 is at what is referred to herein as the true eyepoint. In practice, the user perceives the virtual viewpoint to be further forward along the optical path than the real viewpoint, as discussed in more detail below. The separation between the real and virtual viewpoints is referred to herein as the viewpoint offset or eye offset. Especially for viewing nearby objects, a small offset is preferred.

Preferably, in the optical see-through solution, the real scene is not zoomed, i.e., the magnification of the system is 1:1. In the example of fig. 18A, lenses with the same optical power are shown; however, in some embodiments the two inverting/erecting lenses may have different optical powers (and distances) than the two lenses near the eye, or the two inverting lenses may be replaced with a single inverting lens.

Having collimated sections of parallel rays, such as the section between the inverting lenses 1804 and 1806, provides flexibility in positioning the occlusion and augmentation elements, as well as in selecting the physical length of the entire optical system.

The display device of fig. 18A forms only one focal plane that can be occluded. In addition to lacking support for multi-MFP rendering, this simplified implementation uses a relatively long optical pipeline. This results in a large viewpoint/eye offset, which is particularly disruptive when viewing and interacting with nearby objects.

Problems with some display implementations.

Some current MFP solutions for optical see-through near-eye displays do not support natural occlusion from multiple focal planes.

Current solutions for occlusion-capable optical see-through NEDs, even those supporting only one focal plane, typically suffer from an offset between the real eye position and the perceived eye position (the virtual eye position determined by the NED optics).

The inflexibility of the system architecture often prevents a satisfactory form factor from being achieved for the system implementation. Combining parallel optical structures for rendering multiple focal planes works in principle, but it makes achieving a compact result challenging.

An OST display supporting occlusion.

Embodiments of the displays disclosed herein are described by setting forth structures for a single eye, but it should be understood that in many embodiments, the optical structures provided for one eye are replicated for the other eye of the user to generate a fully stereoscopic display. For implementations that include two parallel pipelines and structures, information can be captured, processed, and displayed separately for each eye.

Note that virtual viewpoint generation may also be chosen to support stereoscopic imagery (rather than capturing real stereoscopic imagery), or to save bit rate if the augmented 3D object/information is not local but received over a network. The received 3D information may also be a 3D reconstruction of a remote person or natural view, or parts thereof. This may be the case, for example, in an immersive telepresence system that brings participants virtually into the same conference space.

The systems and methods disclosed herein use a varifocal approach for the rendering of multiple focal planes, such that the focal planes are time multiplexed to form a complete scene. While temporal multiplexing tends to sacrifice some brightness in scene rendering, it simplifies the optical and mechanical structure and helps, in part, to achieve a more satisfactory form factor for the implementation.

Mechanical adjustment of a lens may be too slow for high-frequency rendering of the focal planes. Instead, an electronically controllable variable focus Liquid Crystal (LC) lens, for example, may be used to achieve sufficient speed for changing the focal length.

One feature of some embodiments is the ability to support multi-focal-plane rendering by multiplexing several optical arrangements within the same physical pipeline, a method referred to herein as structural multiplexing. In particular, no mechanical actuators or movements are required, since a separate set of electronically controllable LC lenses is activated for each rendered focal plane. Structural multiplexing reduces the need to combine parallel optical structures, and thus the size of the implementation can be reduced.

Furthermore, changing the optical configuration does not affect the positioning of the main system components (display, SLM, mirror elements, light combiners, etc.), which reduces the number of components needed, simplifies the solution, and keeps the physical dimensions of the implementation reasonable despite supporting multiple occludable focal planes.

Occlusion capability is implemented in many of the display embodiments described herein. This avoids the transparency and color distortion that might otherwise result from an augmentation process without occlusion support.

The optical structure tends to cause an offset between the true and effective eyepoints. A small offset is acceptable, but any variation during use is undesirable. In order to keep the effective eyepoint fixed, the optical length of the implementation is preferably kept constant during the varifocal rendering of the focal planes.

Some embodiments disclosed herein provide no offset between the real and virtual viewpoints. Other embodiments trade off some eyepoint accuracy for a better form factor. Still other variations relax the form factor requirements by trading off occlusion capability.

FIG. 18B illustrates a display structure capable of displaying images at multiple focal planes according to some embodiments.

In the system of fig. 18B, an Augmented Reality (AR) tracking camera 1851 and a depth sensor 1852 provide input to an AR pose tracking module 1853. The camera 1851 may detect AR markers or other features associated with the augmentation in the AR content production stage. In some embodiments, the depth sensor 1852 and the camera 1851 may be combined into a single sensor, such as an RGBD camera. The depth sensor may be, for example, a structured light sensor or a time-of-flight sensor. The image plane forming module 1854 generates the images to be displayed at the different focal planes. The images may be based on the received content 1849 and the user's pose as determined by the pose tracking module 1853. The image plane forming module 1854 further operates to determine which regions (e.g., which pixels) within each image plane should be completely or partially occluded. In some embodiments, it may be desirable to manipulate the 3D data to be augmented, for example for foreground occlusion, color correction, and transparency effects such as shadows. The different image planes generated by the image plane forming module are provided to a multiplexer 1863, which provides the different image planes to the augmentation display assembly 1812 and the shadow mask 1810 at the appropriate times, in a manner synchronized with the control of the optics.

In this example, the inverse lenses 1806a, 1806b, 1806c and eyepiece lenses 1808a, 1808b, 1808c are adjustable lenses (e.g., liquid crystal lenses) controlled by respective control voltages received from the zoom control module 1862. In this example, the control voltage switches each lens between a state in which it functions as a converging lens having a predetermined positive power and a state in which it has zero power (functioning as a transparent sheet). It will be appreciated that in other embodiments the optical power may be controllable between different non-zero powers or even negative powers, with the lens arrangement being adjusted accordingly.

The zoom control module 1862 operates such that at any one time, one pair of lenses consisting of one inverse lens and one eyepiece lens is active. Three pairs are used in this example. The first pair is lenses 1806a and 1808a. The second pair is lenses 1806b and 1808b. The third pair is lenses 1806c and 1808c. In the state shown in fig. 18B, the lenses 1806b and 1808b are active. The focal plane visible to the user is determined by which lens pair is active. Zoom control module 1862 reports information (such as an index) to multiplexer 1863 indicating which focal plane is visible. In response, multiplexer 1863 provides the appropriate image to display component 1812 and provides the appropriate occlusion information to occlusion mask 1810. The optical components within block 1865 are referred to herein as a structural multiplexer. The structural multiplexer 1865 multiplexes (overlays) several optical pipelines and produces the rendering of the occludable focal planes, as described in more detail below.

In an example embodiment, the number of focal planes may be limited by the total attenuation of the LC lenses, in addition to the normal limitations of temporal multiplexing (e.g., reduced brightness). Thus, 3-5 focal planes is typically a good target for an occlusion-capable near-eye display.

In some embodiments, a head mounted display system (e.g., glasses) utilizes a camera mounted to the head mounted display to capture feature data (e.g., markers). The system detects and tracks the pose of the feature data (e.g., markers) from the user's viewpoint. The system receives 3D data (e.g., video plus depth) of the virtual content to be augmented over a network. For each eye, focal planes corresponding to the virtual object to be augmented are formed. Using the pose data, for each eye and each focal plane, the system forms a mask for occluding the optical see-through background view. For each focal plane, the system (1) selects the appropriate lens pair to be activated and controls the optical power of the selected and deselected lenses accordingly, (2) occludes the optical see-through view with the corresponding occlusion mask by controlling the pixel transparency of the occluding element, (3) displays the corresponding focal plane data on the augmentation display, compensating for the loss of brightness as needed, and (4) forms the augmented view by combining the displayed augmentation with the occluded optical see-through view.
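The per-focal-plane sequence (1)-(4) can be summarized as a control loop. In the sketch below, lens_pairs, shutter, and display are hypothetical stand-ins for the hardware drivers (not an actual API), and simple value scaling stands in for brightness compensation.

```python
import time
import numpy as np

def render_one_cycle(focal_planes, shadow_masks, lens_pairs, shutter, display, frame_period_s):
    """Display N occludable focal planes once, in a time-multiplexed manner.

    focal_planes: list of N focal-plane images (numpy arrays) of the augmentation.
    shadow_masks: list of N masks (1 = occlude the see-through view, 0 = keep it).
    lens_pairs:   N controllable lens pairs with activate()/deactivate() methods.
    shutter:      occluding element (e.g., SLM) with a show(mask) method.
    display:      augmentation display with a show(image) method.
    """
    n = len(focal_planes)
    gain = float(n)                                   # crude compensation for the 1/N duty cycle
    for i in range(n):
        for j, pair in enumerate(lens_pairs):
            pair.activate() if j == i else pair.deactivate()      # (1) select one lens pair
        shutter.show(shadow_masks[i])                             # (2) occlude the see-through view
        display.show(np.clip(focal_planes[i] * gain, 0.0, 1.0))   # (3) show the focal plane
        # (4) the optical combiner merges the occluded view and the augmentation
        time.sleep(frame_period_s / n)                # hold this plane for its share of the frame
```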

In the step of capturing the marker(s) or features of the environment, a camera embedded in the eyewear structure captures video of the user's surroundings. In a subsequent step, the video data is searched for a set of distinctive features, a marker, or a point cloud as part of the captured view. When the AR content is produced, the selected marker or feature set (its origin and pose) is associated with the desired augmentation.

In the step of detecting and tracking features, the captured video is searched for distinctive features of different orientations and scales, such as markers or point clouds. Previous tracking results are typically used to reduce the computational load of the search (avoiding the need to perform an exhaustive search). Detection and tracking of features (e.g., markers) can be performed with known techniques. Marker tracking is a traditional method in AR and is well supported by existing technologies. Tracking natural features may be preferred because it is less intrusive than visible markers. In both approaches, a set of captured features is used to define the viewpoint and real-world coordinates in order to locate the virtual information or objects. The detection and tracking may be assisted by electronics in the glasses (IMU sensors, etc.) and/or by data communicated between an external tracking module and the glasses. The coordinates and dimensions of the tracked markers or other feature sets are used for positioning and scaling the virtual object, which is decomposed into focal planes, and for generating the shadow masks that occlude the incoming optical see-through (OST) view and replace it with the corresponding focal planes.

In the step of receiving the data to be augmented, the 3D data may be obtained by the glasses through a local and/or external network. The data may be, for example, in a depth-plus-texture format, with a pre-selected location, scale, and orientation relative to a set of features (markers or point clouds) that are potentially somewhere in the user's environment. Performing the augmentation may be conditioned on the presence/detection of that set of features in the environment.

In the step of forming focal planes for the object to be augmented, the 3D object is decomposed into focal planes using knowledge of its distance and shape. This step is performed based on information about the user's position relative to a set of known features (markers or point clouds), and thus based on the position (distance), orientation, and scale of the object to be augmented. The decomposition may use any of a variety of MFP formation techniques, such as those described above. Virtual viewpoint generation may be selected to generate the stereoscopic imagery and thereby save bit rate when receiving the augmented 3D objects/information over a network.

In the step of forming a shadow mask, a shadow mask is generated for each focal plane to occlude selected pixels/regions of the real-world view before the augmented object (decomposed into the corresponding focal planes) is added. Formation of the shadow mask may be performed based on information about the user's position relative to the known set of features (markers or point clouds) and the pose and scale of the augmented object. The mask may be a planar (binary) silhouette of the augmented 3D object at the corresponding distance/depth, indicating whether a pixel is replaced by the corresponding pixel of the augmented object. More generally, a shadow mask is a spatial mask that specifies more general image processing operations. Accordingly, the occlusion mask values may have arbitrary weights between 0 and 1, enabling the real-world view and the augmentation information to be combined at an arbitrary ratio rather than simply replacing the real-world view. This is beneficial, for example, for occlusion leakage compensation or color correction when occlusion is not actually supported. Continuous weight values may also be used to add virtual shadows.
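The effect of a non-binary shadow mask can be written as a per-pixel blend. In the optical implementation the shutter and the combiner perform this combination physically; the small numpy sketch below only illustrates the arithmetic, and the function name is illustrative.

```python
import numpy as np

def combine_views(real_view, augmentation, shadow_mask):
    """Blend the see-through view and the augmentation with a weighted mask.

    shadow_mask values: 0 keeps the real view, 1 fully replaces it, and
    intermediate weights mix the two (e.g., for leakage compensation,
    color correction, or virtual shadows).
    """
    w = np.clip(shadow_mask, 0.0, 1.0)[..., np.newaxis]   # H x W -> H x W x 1
    return (1.0 - w) * real_view + w * augmentation
```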

Structural multiplexing is performed by activating and deactivating lens pairs as needed. It is achieved using a varifocal approach in which the optical components within the shared structure are time multiplexed sequentially, such that the desired number of focal planes is rendered at the selected distances. In an example embodiment, the optical components performing this rendering are a pair of electronically controllable lenses on either side of the optically combined (e.g., using a half-silvered mirror) occlusion and augmentation elements.

For each focal plane (e.g., each focal plane FPi, for i = 1 to N), the step of occluding the real-world view may be performed. In this step, the occlusion mask generated in the shadow-mask forming step is used to occlude parts of the real-world view. For example, the occlusion may be achieved by using a transmissive (LC) or reflective (SLM) mask. When used with a polarizing light combiner, the reflective option may produce sharper, higher-contrast results, although the use of a reflective shutter requires appropriate rearrangement of the mechanical and/or optical structure of the display. The shutter and the augmentation display are preferably at substantially the same virtual distance from the viewer's viewpoint.

For each focal plane, a step of displaying the augmentation data is performed. In this step, the virtual 3D information decomposed into focal planes is displayed in synchronization with the corresponding shadow mask. The 2D display element is used to sequentially display the augmented objects/information one focal plane at a time. When forming the focal planes, the user's position and (in AR content production) the selected object pose and size are used to obtain the focal planes at the desired distances. At any time, the displayed focal plane data is synchronized with the shadow mask data for the corresponding distance. Different numbers of reflecting elements (mirrors and/or prisms) may be used to obtain the desired form of the optical pipeline. Depending on the number of reflections, it may be necessary to flip/erect the orientation of the displayed image and the shadow mask in order to see the augmentation in the correct orientation. Similarly, some scaling of the content may be employed in order to present the occlusion and augmentation at the correct size.

For each focal plane, the steps of combining and displaying the occluded background with the augmentation data are performed. An optical combiner is used to combine the occluded real-time view with the aligned, corresponding augmentation focal plane. The optical combiner may be a half-silvered mirror. The combiner may be oriented at a 45° angle relative to the occluding and augmentation display elements. The combiner may have a polarizing effect to improve image quality by increasing the separation between the transmitted and reflected image components. Other optical components of the system (mirrors and lenses) may be used to deliver the combined result to the user's eye. An example selection based on a thin-lens approximation is shown in the specification, although other components may be selected to optimize quality, for example to reduce chromatic and other aberrations and to accommodate the form factor. Other options include freeform/wedge waveguides as a more compact replacement for the eyepiece/near-eye mirrors and lenses.

After all focal planes (e.g., all N focal planes) have been processed and displayed, the process is repeated, cycling through the focal planes. The cycling through the focal planes may be performed at a rate high enough that the cycle is not visible to the user, in order to avoid flicker effects. For a fluent perception of the augmentation information (possibly animated) without flicker, the frame rate of the augmentation is preferably on the order of at least 30 Hz. For N time-multiplexed focal planes, the rendering frequency is preferably on the order of at least N × 30 Hz to avoid flicker of the individual focal planes, since each focal plane is illuminated only 1/N of the time.
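The rate requirement can be stated explicitly. For example, with $N = 5$ time-multiplexed focal planes:

$$ f_{\text{render}} \;\ge\; N \cdot 30\,\mathrm{Hz} \;=\; 5 \times 30\,\mathrm{Hz} \;=\; 150\,\mathrm{Hz}, \qquad \text{duty cycle per focal plane} \;=\; \tfrac{1}{N} \;=\; 20\,\%. $$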

The display system may be calibrated prior to use. One mechanical calibration is adjusting the interpupillary distance (IPD) to meet the needs of each individual user. Additional calibration steps associated with the mechanical and electronic implementation include: tracking camera calibration to compensate for geometric distortion; depth sensor calibration for correct depth sensing results (if in use); spatial alignment between the occlusion mask (occluder), the augmented object, and the real-world view; calibration of the control voltages (optical powers) of the LC lenses; and calibration (minimization) of the lag between the virtual and real-world views. Various manual and automatic techniques may be used to assist calibration.

Content-dependent calibration methods may also be relevant during system operation. Such methods may include color calibration in OST implementations without occlusion capability, and occlusion leakage calibration/compensation in (non-ideal) direct occlusion control methods.

The disclosed embodiments support rendering of multiple focal planes, thereby reducing the vergence-accommodation conflict (VAC) that prevents natural focusing on 3D content. Structural multiplexing enables several focal planes to be rendered using the same optical pipeline. This avoids the need to implement (replicate) and combine parallel optical structures. Example embodiments support multi-focal-plane rendering by multiplexing several optical arrangements within the same physical pipeline (structural multiplexing). In particular, since the optical system is configured in a time-multiplexed manner by the LC lenses, no mechanical actuators or movements are required.

In an example embodiment, the optical pipeline is substantially symmetrical to keep the dimensions constant (i.e., to provide a magnification of 1:1). However, symmetry may double the optical path length and may increase the resulting viewpoint/eye offset compared to a non-occluding version.

In some embodiments, at any one time, one of the LC lens pairs (focal lengths) is set active and the other lenses are deactivated, i.e., set to a transparent mode with no optical power or optical effect. No mechanical actuators or movements are required to select or change the lens configuration in these embodiments.

Structural multiplexing.

Example display systems disclosed herein support multi-focal-plane rendering by multiplexing several optical arrangements within the same physical pipeline, a technique referred to herein as structural multiplexing. Switching between the optical configurations is performed electronically by disabling and enabling the lens pairs in turn to select the desired focal length for each focal plane. No mechanical actuators or movements are used to change the lens positions.

Using the basic structure of fig. 18B, the positions of the shutter element (SLM), the augmentation display, and the optical combiner can remain fixed while the pair of lenses used for changing the rendering distance is selected. This can be achieved by adjusting the lengths of the collimated sections around the two lenses.

Figs. 19A-19C provide schematic diagrams of three focal planes supported by three pairs of variable focus LC lenses. Three options for focal-plane rendering (focal length) are shown, keeping the length of the optics constant. In figs. 19A-19C, the three optical pipelines are shown separately, but in practice all the optical structures overlap in the same physical section using structural multiplexing. In figs. 19A-19C, the active lenses are shown with solid lines, while the inactive lenses are shown with dashed lines. Fig. 19A shows a configuration in which the active inverting lens and eyepiece lens are farthest from the user's eye; this configuration is useful for generating a focal plane that appears relatively close to the user. Fig. 19B shows a configuration in which the active inverting lens and eyepiece lens are at an intermediate distance from the user's eye; this configuration is useful for generating a focal plane at an intermediate distance from the user. Fig. 19C shows a configuration in which the active inverting lens and eyepiece lens are at the minimum distance from the user's eye; this configuration is useful for generating a focal plane at a greater distance from the user. Note that in the examples of figs. 19A-19C, the positions of the shutter (1810), combiner (1814), and display assembly (1812) remain unchanged.

Structural multiplexing is performed by electronically controlling (activating and deactivating) the variable focus LC lenses for each focal plane. The optical structures overlap in physical space, so that the multiplexing does not necessarily increase the size of the display device.

Note that the above lens powers (and the corresponding three focal-plane distances) are merely examples and are not selected to optimize results with respect to the human visual system. Following these examples, values may be selected for any feasible number of focal planes.

Focal plane distance and viewpoint offset.

As shown in figs. 19A-19C, the positions of the shutter element (for background occlusion) and the augmentation display remain fixed while the lens pairs are electronically switched between different physical positions. Since each lens is either active or inactive at its preset position, no mechanical movement is required. The position of the active lens pair is changed so that the occlusion of the real view and the augmentation information are rendered at the desired distance. Note that the shadow mask and AR object sizes may also be adjusted to obtain the desired size of the augmentation.

FIG. 20 shows the presentation distance resulting from the display and shutter positions in a system with equal lens powers and a selected inverse lens pair separation (left). Note that changing the separation of the inverting lens pair (within certain limits) does not affect the perceived distance of the MFP plane (the object image).

For simplicity of illustration, the optical configuration of FIG. 20 is shown without the blocking layer, combiner, or display component; instead, the display assembly is shown at position 2012, i.e., the position of the reflected image of the display in the combiner. The displayed object at 2012 is indicated by a vertical arrow. Eyepiece 2008 forms a virtual image 2013 of the displayed object at position 2012. The distance 2001 between the user's eye 2016 and the virtual image 2013 of the displayed object is the distance of the active focal plane ("MFP distance"). The virtual image 2013 also corresponds to position 2015 in real space, because a real object at position 2015 would appear to the user's eye to be in the same position as the virtual image 2013. Thus, the user is given the illusion of viewing the real world not from the real position 2016 of the eye, but from the perceived eyepoint 2017. The disparity between the true eye position and the perceived eyepoint is referred to as the eye offset distance 2003. Note that the image displayed at position 2012 forms equally sized images at positions 2013 and 2015, indicating the 1:1 magnification of the system. Furthermore, if light from a real object at position 2015 is occluded by the shutter (multiplexed to the same position as the display 2012), the reflected image on the display 2012 occludes the real object in the correct way. The above thus describes how the system can be used to implement optical see-through AR glasses.
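As a hedged illustration of how the rendered MFP distance 2001 follows from the active eyepiece, under the thin-lens approximation and using magnitudes (the symbols $d$, $f$, and $v$ are not labels from the figure): if the reflected display image lies a distance $d$ from an eyepiece of focal length $f$, with $d < f$, the eyepiece forms a virtual image at a distance

$$ \frac{1}{d} - \frac{1}{v} = \frac{1}{f} \quad\Longrightarrow\quad v = \frac{f\,d}{f - d} $$

measured from the eyepiece on the display side. The MFP distance is then approximately $v$ plus the eye-to-eyepiece separation, so switching the active lens pair (changing $d$ and/or $f$) changes the rendered focal-plane distance without moving any component.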

A reduction in eye offset distance.

Reflective elements, such as mirrors and/or prisms, may be used to reduce the eye offset by folding the optical path of the AR/VR glasses. A prism may be used as a mirror element to reflect incident light laterally from the viewing line into the optics.

In addition to reducing the eye offset, a better form factor for the glasses can be achieved by using mirrors. High-quality mirror reflectors are also easy to manufacture.

In some embodiments, several mirror elements are used to reduce the eye offset and/or improve the form factor of the setup. Furthermore, a half-silvered mirror element may be used as the optical combiner for the background-occluded real-world view and the focal planes of the (decomposed) virtual object to be augmented. The mirror elements may also be polarizing, which may result in a clearer combined image. In some embodiments, instead of a transmissive shutter, a reflective SLM may be used for occlusion, with appropriate rearrangement of the optical path.

FIG. 21 illustrates an embodiment of a display device with near-zero eyepoint offset. Light from the real-world scene 2100 is reflected by the double-sided mirror 2101 to the objective lens 2102. From the objective lens, the light is directed via mirror 2103, through the first inverting lens 2104, and via mirror 2105 into the structural multiplexer 2165. The structural multiplexer includes the controllable lenses. In the configuration shown in fig. 21, the deactivated lenses are shown with dashed lines, while the active lenses are shown with solid lines. Within the structural multiplexer 2165, the light passes through the second inverting lens 2106, the blocking layer 2110, and the combiner 2114. At combiner 2114, light from the real-world scene is combined with light from display component 2112. The combined light is reflected by mirror 2115 and passes through the active eyepiece lens 2108 before being reflected by the double-sided mirror 2101 into the user's eye 2116.

Optically, the structure of fig. 21 is similar to that of fig. 18B, except that, to avoid any eye offset between the real and virtual/effective eyepoints, the virtual eyepoint is looped back to the real eyepoint. This is done using four single-sided mirrors, one double-sided mirror in front of the user's eye, and a combiner for combining the augmentation with the correspondingly occluded real view. Collimated sections are used to provide space for the focal length options implemented by the electronic lens pairs. In addition, the four reflectors are used to obtain the looped-back (zero offset) shape of the system. The incident view is reflected several times by the mirrors so that the net effect is an upright view, and the optical tunnel is equivalent to the simplified, unfolded version of fig. 18B.

Implementations omitting background occlusion.

Other embodiments, such as the embodiment shown in fig. 22, do not include components for obscuring a real world view. These embodiments may be implemented with fewer components. The reduced number of components increases the transparency of the system, potentially allowing more focal planes.

In the embodiment of fig. 22, light from the external scene 2300 passes directly through the combiner 2314 to the user's eye. Light from display assembly 2312 is reflected by mirror 2315 through the active eyepiece lens 2308. The combiner 2314 combines the light from the external scene with the light from the display for presentation to the user's eye. In some embodiments, the reflector 2315 inside the structural multiplexing unit 2365 is not used, and light is provided directly from the display assembly 2312 to the eyepiece lenses.

In embodiments without occlusion, a display method such as that of fig. 22 may be implemented as described above, with the steps involving occlusion omitted as appropriate.

The embodiment of fig. 22 provides zero eye offset at the cost of omitting background occlusion.

Implementations using direct occlusion.

In some embodiments, non-optimal direct occlusion of the real-world view is performed by an SLM element in front of the eyepiece. One such embodiment is shown in fig. 23, in which light from the external scene 2400 passes through the occlusion layer 2410 (e.g., a spatial light modulator) before passing through the combiner 2414 to the user's eye. Light from the display assembly 2412 is reflected by the mirror 2415 through the active eyepiece lens 2408. The combiner 2414 combines the light from the external scene with the light from the display for presentation to the user's eye. In some embodiments, light is provided directly from the display assembly 2412 to the eyepiece lens without using the reflector 2415 inside the structural multiplexing unit 2465.

The AR tracking camera of the system (or a separate camera dedicated to this purpose) is used to capture the real-world view, which can then be used to compensate for the occlusion leakage around the (non-optimally placed) occlusion mask. The compensation information is added to the augmentation before the augmentation is displayed on the augmentation display. After compensation, the augmentation thus contains the virtual information or objects plus modified portions of the real-world view that compensate for the occlusion leakage caused by using a direct occlusion mask (possibly at a non-optimal distance).

In embodiments using direct occlusion, the occlusion leakage may be compensated for by increasing the brightness of the portions of the focal-plane image corresponding to the leakage regions. Because of the position of the occluding layer, the amount by which the region outside the mask is affected by the defocused occluder depends on its distance from the focal plane to be occluded. The amount and extent of this diffusion depend on the human visual system and eye parameters (ideally measured from the viewer's eye) and can be modeled in order to modify/compensate the view. The modification is most feasible for the information to be augmented, which is added to the optical see-through view. Using a model of the occlusion leakage based on the human visual system (the brightness variation outside the occluded area), a compensation is calculated and added to the real view. In practice, the compensation may be added to the information to be augmented (e.g., to the see-through view in the optical combiner).
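One plausible reading of this compensation is sketched below, using a Gaussian blur of the mask as a crude stand-in for the eye/HVS defocus model the text refers to; the camera-based estimate of the real view is added back to the augmentation in the dimmed region just outside the mask. The function name and the choice of blur are assumptions, not the specified algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def leakage_compensation(real_capture, shadow_mask, blur_sigma_px):
    """Estimate the dimming halo around a defocused direct-occlusion mask.

    real_capture:  H x W x C camera image of the real-world view.
    shadow_mask:   H x W binary mask (1 = region intended to be blocked).
    blur_sigma_px: assumed defocus blur of the occluder, in pixels.
    Returns an image to add to the augmentation so that the real view just
    outside the mask regains (approximately) its intended brightness.
    """
    blurred = gaussian_filter(shadow_mask.astype(float), blur_sigma_px)   # mask as seen defocused
    dimming = np.clip(blurred - shadow_mask, 0.0, 1.0)[..., np.newaxis]   # unintended attenuation outside the mask
    return dimming * real_capture
```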

Additional discussion is provided.

The display systems and methods described herein allow for the display of multiple focal planes and for occlusion in different focal planes, even in embodiments with only a single display and a single shutter (per eye). Because light does not have to pass through multiple displays and multiple blockers, transparency can be improved and interference between stacked components and other possible stray effects can be avoided.

It is noted that the various hardware elements of the described embodiment(s) are referred to as "modules," which perform (i.e., execute, perform, etc.) the various functions described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Arrays (FPGAs), one or more memory devices) deemed suitable by one of ordinary skill in the relevant art for a given implementation. Each described module may also include instructions executable to implement one or more functions described as being performed by the respective module, and it is noted that these instructions may take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, and/or software instructions, etc., and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.

Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of the computer readable storage medium include, but are not limited to, Read Only Memory (ROM), Random Access Memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and Digital Versatile Disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
