Light field display system for vehicle enhancement

Document No.: 1836179 · Publication date: 2021-11-12

Note: This technology, "Light field display system for vehicle enhancement," was created by J. S. Karafin, B. E. Bevensee, and J. Dohm on 2019-12-19. Abstract: A Light Field (LF) display system for enhancing a vehicle. The LF display system includes LF display modules that form a surface (e.g., interior and/or exterior) of the vehicle. The LF display modules each have a display area and are tiled together to form a seamless display surface having an effective display area larger than the individual display areas. The LF display modules present holographic content from the effective display area.

1. A Light Field (LF) display system, comprising:

an LF display assembly including at least one LF display module mounted on an interior surface of a vehicle, the at least one LF display module configured to present one or more holographic objects at a plurality of locations relative to a display surface of the at least one LF display module, wherein the locations include a location between the display surface and a viewing volume of the at least one display surface.

2. The LF display system of claim 1, wherein a holographic object of the one or more holographic objects is selected from the group consisting of: a two-dimensional object, a three-dimensional object, a control button, a control switch, a control dial, a steering control interface, a dashboard, a music control interface, an entertainment video control interface, a climate control interface, a vehicle window control interface, a vehicle door control interface, a map control interface, a computer interface, a gear shifter, some other control interface for the vehicle, or some combination thereof.

3. The LF display system of claim 1, wherein:

a holographic object of the one or more holographic objects is selected from the group consisting of: a navigation screen showing information about a location of the vehicle; a navigation assistance device; a navigation direction indicator; navigation information; a proposed vehicle route; an image associated with the vehicle's surroundings; navigation content projected in front of a driver to help the driver focus on a road; navigation or information content overlaid on actual objects in the vicinity of the vehicle; or some combination thereof.

4. The LF display system of claim 1, wherein the display surface includes a plurality of surface locations, and each surface location is configured to project a portion of holographic content along an optical axis defining an axis of symmetry for light exiting the display surface at the location, the plurality of surface locations including a first subset of the surface locations having optical axes tilted at a first angle relative to a normal of the display surface.

5. The LF display system of claim 4, wherein a second subset of the plurality of surface locations project different portions of the holographic content along an optical axis that is tilted at a second angle relative to the normal to the display surface.

6. The LF display system of claim 1, further comprising:

an at least partially transparent two-dimensional (2D) display, the 2D display being placed between the display surface and the viewing volume such that light from the display surface passes through the 2D display and at least some of the light forms the one or more holographic objects.

7. The LF display system of claim 1, wherein at least one of the one or more holographic objects changes dynamically in response to instructions from a controller.

8. The LF display system of claim 7, wherein the instructions are generated by the controller in response to an event selected from the group consisting of: user input, a change in vehicle state, a navigational alert, a cell phone call, some other event that can be pre-programmed, and some combination thereof.

9. The LF display system of claim 1, further comprising:

a tracking system configured to track movement of the user; and

wherein the controller is configured to:

determine that the tracked movement is a gesture interacting with the at least one holographic object, and

perform an action based on the gesture.

10. The LF display system of claim 9, wherein the action includes adjusting a holographic object, adjusting an operating state of the vehicle, adjusting a control interface of the vehicle, adjusting an interior configuration of the vehicle, adjusting a placement of the one or more holographic objects, transmitting data over a network, performing a phone call, performing a navigation update, or some combination thereof.

11. The LF display system of claim 9, wherein the tracking system is part of the at least one LF display module, and the at least one LF display module has a bidirectional LF display surface that simultaneously projects holographic objects and senses light from local areas adjacent to the display surface.

12. The LF display system of claim 11, wherein the sensed light is a light field.

13. The LF display system of claim 1, further comprising a controller configured to:

in response to user input, change one of: an operational state of the vehicle, a control interface of the vehicle, an interior configuration of the vehicle, at least one of the one or more holographic objects, an arrangement of the one or more holographic objects, or some combination thereof.

14. The LF display system of claim 1, wherein the LF display assembly is further configured to generate a tactile surface in a localized area of the display.

15. The LF display system of claim 14, wherein the tactile surface coincides with a surface of at least one of the one or more holographic objects.

16. The LF display system of claim 14, further comprising:

a tracking system configured to track movement of the user; and

wherein the controller is configured to:

determine that the tracked movement is a gesture associated with the tactile surface, and

perform an action based on the gesture.

17. The LF display system of claim 16, wherein the action includes adjusting a holographic object, adjusting the tactile surface, adjusting an operating state of the vehicle, adjusting a control interface of the vehicle, adjusting an interior configuration of the vehicle, adjusting a placement of the one or more holographic objects, transmitting data over a network, performing a phone call, performing a navigation update, a preprogrammed event, or some combination thereof.

18. The LF display system of claim 14, wherein the at least one LF display module includes dual energy surfaces and projects both light and ultrasound.

19. The LF display system of claim 1, wherein the display surface is part of a roof in the interior of the vehicle, and the one or more holographic objects simulate a sunroof and include at least one holographic object presented within a holographic object volume of the display surface.

20. The LF display system of claim 1, wherein the display surface is part of an exterior wall of the vehicle, and the one or more holographic objects simulate a windowed view and include at least one holographic object presented within a holographic object volume of the display surface.

21. The LF display system of claim 20, further comprising:

one or more cameras configured to capture images of a local area outside the vehicle; and

wherein the at least one LF display module is configured to update the one or more holographic objects based in part on the captured images such that the exterior wall of the vehicle appears as a transparent window.

22. The LF display system of claim 1, further comprising:

one or more cameras configured to capture images of a local area outside the vehicle; and

wherein the LF display assembly is configured to update the one or more holographic objects based in part on the captured images.

23. The LF display system of claim 1, further comprising:

one or more cameras configured to capture images of a local area around the vehicle that is at least partially invisible from the vehicle interior;

wherein the LF display assembly is configured to determine a rendering perspective of holographic content including the one or more holographic objects as a representation of the local area, the local area rendered as if the vehicle were transparent.

24. The LF display system of claim 1, further comprising:

one or more cameras configured to capture images of a local area around the vehicle;

a tracking system configured to track eye positions of a user in the interior of the vehicle;

wherein the at least one LF display module is configured to determine a rendering perspective of holographic content containing the one or more holographic objects based on the determined eye positions relative to locations within the local area, and the holographic content is a representation of the local area, the local area rendered as if the vehicle were transparent.

25. The LF display system of claim 1, further comprising:

a tracking system configured to track a location of a user within the interior of the vehicle;

wherein the one or more holographic objects include a first holographic object for the user and a second holographic object for a second user also in the interior of the vehicle, and the first holographic object is directed to a first location associated with the user and is not directed to a second location associated with the second user.

26. The LF display system of claim 25, wherein the first holographic object is different from the second holographic object.

27. The LF display system of claim 1, comprising:

a plurality of LF display modules including the at least one LF display module, each LF display module having a module display surface area, the plurality of LF display modules tiled together to form a seamless display surface having a combined area larger than the module display surface area.

28. The LF display system of claim 1, wherein the holographic object of the at least one LF display module is relayed from the display surface to an offset location around a virtual display surface that is closer to a passenger of the vehicle than the display surface.

29. The LF display system of claim 28, wherein the holographic object of the at least one LF display module is relayed using at least one dihedral corner reflector.

30. The LF display system of claim 28, wherein the holographic object of the at least one LF display module is relayed using a beam splitter and a retro reflector.

31. The LF display system of claim 28, wherein the vehicle includes a first passenger location and a second passenger location, and the virtual display surface presents a holographic object that is visible from the first passenger location and not visible from the second passenger location.

32. The LF display system of claim 1, wherein the one or more holographic objects include a first holographic object, and the first holographic object is directed to a first viewing volume associated with a first location within the vehicle, and the first viewing volume does not include a second viewing volume associated with a second location within the vehicle.

33. The LF display system of claim 1, wherein the one or more holographic objects includes a first holographic object, and the first holographic object is directed at a first viewing volume associated with a first location within the vehicle, and a portion of the first viewing volume overlaps a second viewing volume associated with a second location within the vehicle.

34. The LF display system of claim 1, wherein the one or more holographic objects include a holographic driver.

35. The LF display system of claim 1, wherein the light field display assembly is further configured to have one or more energy surfaces, each energy surface containing a plurality of light source locations, and the LF display system further comprises:

a plurality of waveguides, wherein each waveguide in the plurality of waveguides is configured to receive light from a corresponding subset of the plurality of light source locations, and each waveguide guides light along a plurality of propagation paths, wherein each propagation path corresponds to a light source location and the direction of each propagation path is determined at least by the location of the corresponding light source location relative to the waveguide.

36. An LF display system, comprising:

at least one LF display module mounted on an exterior surface of a vehicle and configured to project holographic objects at a plurality of configurable physical locations relative to a display surface of the at least one LF display module, wherein the locations include a location between the display surface and a viewing volume of the at least one display surface, and

wherein the holographic object changes an appearance of the vehicle.

37. The LF display system of claim 36, wherein the holographic object is configured to camouflage the vehicle against its surroundings.

38. The LF display system of claim 36, wherein the exterior surface of the vehicle includes a first side on a portion of the vehicle and a second side on a second portion of the vehicle opposite the first side, and the LF display system further comprises:

one or more cameras configured to capture images of a local area around the vehicle; and

a controller module configured to:

determine, based on the captured images, a rendering perspective of a viewer within a viewing volume on the first side of the vehicle,

wherein at least some of the holographic objects form a representation of a portion of the local area around the second side of the vehicle such that at least a portion of the vehicle appears transparent.

Background

The present disclosure relates to vehicles, and in particular, to light field display systems for vehicle enhancement.

Conventional vehicles (e.g., personal transportation, commercial transportation, government transportation, etc.) are manufactured to have a particular appearance both inside and outside. In addition, this appearance is fixed and not easily changed. For example, owners must repaint their cars to change the color of the car. Likewise, the interior layout of the controls and the cabin space are fixed. For example, the positions of the instrument panel, steering wheel, shifter, window controls, door locks, etc., are fixed in place and have a particular appearance. Because the appearance of a vehicle is fixed during manufacture and may be difficult to change after manufacture, vehicle manufacturers typically provide various options (paint colors, trim colors, etc.) from which consumers may select. However, providing options in this manner is not only expensive for the vehicle manufacturer, but at best provides only limited customization of the vehicle for the user.

Disclosure of Invention

A Light Field (LF) display system for enhancing a vehicle (e.g., automobile, airplane, etc.). The LF display system includes an LF display assembly that includes at least one LF display module that forms a surface (e.g., interior, exterior, etc.) of the vehicle. The at least one LF display module is configured to present one or more holographic objects (e.g., door control interfaces, dashboards, lines, etc.) at a plurality of locations relative to a display surface of the at least one LF display module. The locations include a location between the display surface and a viewing volume of the at least one display surface.

In some embodiments, the holographic content may contain holographic objects with which a user may interact to provide instructions to the vehicle. For example, in some embodiments, the LF display system contains multiple ultrasonic speakers (e.g., as part of the LF display module) and a tracking system. The plurality of ultrasonic speakers is configured to generate a tactile surface that is coincident with at least a portion of a holographic object. The tracking system is configured to track user interaction with the holographic object (e.g., via images captured by an imaging sensor of the LF display module and/or some other camera). The LF display system is then configured to provide instructions to the vehicle based on the interaction.

In some embodiments, the LF display system includes at least one LF display module mounted on an exterior surface of the vehicle. The at least one LF display module projects holographic objects at a plurality of configurable physical locations relative to a display surface of the at least one LF display module. The locations include a location between the display surface and a viewing volume of the at least one display surface, and the holographic objects change an appearance of the vehicle.

Drawings

FIG. 1 is a diagram of a light field display module to render holographic objects in accordance with one or more embodiments.

FIG. 2A is a cross-section of a portion of a light field display module in accordance with one or more embodiments.

FIG. 2B is a cross-section of a portion of a light field display module in accordance with one or more embodiments.

FIG. 3A is a perspective view of a light field display module in accordance with one or more embodiments.

FIG. 3B is a cross-sectional view of a light field display module including an interleaved energy relay device in accordance with one or more embodiments.

Fig. 4A is a perspective view of a portion of a light field display system tiled in two dimensions to form a single-sided, seamless surface environment in accordance with one or more embodiments.

FIG. 4B is a perspective view of a portion of a light field display system in a multi-faceted, seamless surface environment in accordance with one or more embodiments.

FIG. 5 is a block diagram of a light field display system in accordance with one or more embodiments.

FIG. 6A is a perspective view of a vehicle enhanced with a light field display system in accordance with one or more embodiments.

FIG. 6B is a perspective view of the vehicle of FIG. 6A presenting holographic content in accordance with one or more embodiments.

FIG. 7A is a perspective view of an interior of a vehicle enhanced with a light field display system in accordance with one or more embodiments.

FIG. 7B is a perspective view of the interior of FIG. 7A presenting holographic content in accordance with one or more embodiments.

FIG. 8 is a perspective view of an interior of a vehicle enhanced with a light field display system including an enhanced window in accordance with one or more embodiments.

FIG. 9 is a perspective view of an interior of a vehicle enhanced with a light field display system including an enhanced sunroof, in accordance with one or more embodiments.

FIG. 10 is a perspective view of a vehicle enhanced with a light field display system to reduce blind spots in accordance with one or more embodiments.

FIG. 11 illustrates an example system that uses a transflector to relay holographic objects projected by a light field display in accordance with one or more embodiments.

FIG. 12 illustrates an overlap of passenger fields of view within a vehicle in accordance with one or more embodiments.

FIG. 13A illustrates an example view of a light field display having a substantially uniform projection direction in accordance with one or more embodiments.

FIG. 13B illustrates an example view of a light field display with variable projection directions in accordance with one or more embodiments.

The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

Detailed Description

Overview

A Light Field (LF) display system is implemented in a vehicle. The LF display system may create a multi-faceted, seamless surface environment on some or all of one or more surfaces (interior and/or exterior) of the vehicle. The LF display system may present holographic content to a user of the vehicle and, in some embodiments, to a user outside the vehicle. The user is typically a viewer of the holographic content and may be a driver, a passenger (a person inside the vehicle, but not the driver), an occupant (a person inside the vehicle), or a person outside the vehicle. The LF display system includes an LF display assembly configured to present holographic content including one or more holographic objects that are visible to one or more users in a viewing volume of the LF display system. The holographic objects may also be enhanced with other sensory stimuli (e.g., tactile and/or audio). For example, an ultrasonic transmitter in the LF display system may emit ultrasonic pressure waves that provide a tactile surface for some or all of the holographic objects. The holographic content may include additional visual content (i.e., 2D or 3D visual content). In a multi-emitter embodiment, the system coordinates the emitters to ensure a cohesive experience (i.e., a holographic object provides the correct tactile sensation and sensory stimuli at any given point in time). The LF display assembly may include one or more LF display modules for generating holographic content.

In some embodiments, the LF display system includes a plurality of LF display modules that are part of an exterior surface of the vehicle. The LF display modules along the exterior surface may be configured to project holographic content to change the appearance of the vehicle. In this way, a user of the LF display system may modify the manner in which the vehicle is presented to viewers outside the vehicle. For example, the LF display system may change the color of some or all of the vehicle, the shape of some or all of the vehicle, or some combination thereof. The LF display system may alter the shape of the vehicle using holographic objects (e.g., spoilers, hoods, etc.) presented by some or all of the LF display modules along the exterior surface of the vehicle.

In some embodiments, the LF display system includes a plurality of LF display modules that are part of the interior surface of the vehicle. The LF display modules along the interior surface may be configured to project holographic content to change the appearance of the vehicle interior and provide vehicle control customization. For example, drivers may customize the gauges that they wish to see in the instrument panel, the position of the steering wheel, whether the vehicle presents automatic or manual transmission controls, the position of the window control interface, the position of the door control interface, and the like. Additionally, the vehicle may include one or more enhanced windows (e.g., windshield, sunroof, etc.). An enhanced window is a window that contains at least some LF display modules.

In some embodiments, the LF display system may contain elements that enable the system to simultaneously emit at least one type of energy and absorb at least one type of energy in order to respond to a user and provide instructions to the vehicle. For example, an LF display system may emit both holographic objects for viewing and ultrasonic waves for tactile perception, while simultaneously absorbing imaging information for tracking the viewer and other scene analysis, as well as absorbing ultrasonic waves to detect the user's touch response. As an example, such a system may project a holographic steering wheel that rotates according to a touch stimulus when virtually "touched" by a user. Display system components that perform ambient energy sensing may be integrated into the display surface through bidirectional energy elements that both emit and absorb energy, or they may be dedicated sensors separate from the display surface, such as an ultrasonic speaker and an image capture device such as a camera.
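As a concrete illustration of the sense-and-respond behavior described above, the sketch below folds sensed touch gestures into the rotation state of a holographic steering wheel. The event format and the `gain` parameter are hypothetical stand-ins for illustration, not interfaces from this disclosure:

```python
def update_wheel_angle(angle_deg, touch_events, gain=0.5):
    """Fold a batch of sensed touch interactions into the rotation angle
    of a holographic steering wheel.

    angle_deg: current wheel rotation in degrees.
    touch_events: tangential finger displacements (degrees of arc) sensed
        at the wheel's tactile surface; positive values are clockwise.
    gain: how strongly sensed touch motion drives the hologram
        (hypothetical tuning parameter).
    """
    for displacement in touch_events:
        angle_deg += gain * displacement
    return angle_deg

# Two sensed swipes: 10 degrees clockwise, then 4 degrees counterclockwise.
print(update_wheel_angle(0.0, [10.0, -4.0]))  # 3.0
```

In a full system this update would run once per sensing frame, with the tracking system (or absorbed ultrasound) supplying the touch events and the renderer redrawing the wheel at the new angle.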

In some embodiments, both the interior and exterior surfaces of the vehicle contain LF display modules. One advantage of this arrangement is that blind spots for the driver of the vehicle can be substantially reduced. For example, the vehicle may contain a camera (e.g., as part of the LF display module) that captures images of a local area around the vehicle. The LF display system uses the captured images to generate holographic content, which is then presented to the driver using an LF display module inside the vehicle. The LF display module inside the vehicle renders holographic content corresponding to the captured images, effectively allowing the driver to "see through" portions of the vehicle that are normally opaque to view objects in the driver's blind spot.
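The "see through" mapping described above can be sketched as a ray extension: from the driver's eye, through a point on the opaque panel, out to the exterior scene captured by the cameras. The flat scene-plane assumption below is a simplification for illustration; a real system would use depth sensed from the local area:

```python
def see_through_point(eye, panel_point, scene_depth):
    """Return the exterior scene point that would be visible through
    panel_point if that part of the vehicle were transparent.

    Coordinates are (x, y, z) in a vehicle frame whose z axis points
    outward through the panel. scene_depth is the assumed z of a flat
    exterior scene plane (an illustrative simplification).
    """
    # Extend the ray eye -> panel_point until it reaches scene_depth.
    t = (scene_depth - eye[2]) / (panel_point[2] - eye[2])
    return tuple(e + t * (p - e) for e, p in zip(eye, panel_point))

# An eye at the origin looking through a panel point 1 m out sees the
# scene point 5x farther along the same ray.
print(see_through_point((0.0, 0.0, 0.0), (1.0, 0.5, 1.0), 5.0))  # (5.0, 2.5, 5.0)
```

Repeating this lookup for every point on the interior display surface yields the exterior imagery that must be rendered there so the panel appears transparent from that eye position.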

Overview of Light Field Display System

Fig. 1 is a diagram 100 of a Light Field (LF) display module 110 presenting a holographic object 120 in accordance with one or more embodiments. The LF display module 110 is part of a Light Field (LF) display system. The LF display system uses one or more LF display modules to render holographic content containing at least one holographic object. The LF display system may present holographic content to one or more viewers. In some embodiments, the LF display system may also enhance the holographic content with other sensory content (e.g., touch, audio, smell, temperature, etc.). For example, as discussed below, the projection of focused ultrasound waves may generate an aerial haptic sensation that may simulate the surface of some or all of the holographic objects. The LF display system includes one or more LF display modules 110 and is discussed in detail below with respect to fig. 2 through 5.

LF display module 110 is a holographic display that presents holographic objects (e.g., holographic object 120) to one or more viewers (e.g., viewer 140). LF display module 110 includes an energy device layer (e.g., an emissive electronic display or an acoustic projection device) and an energy waveguide layer (e.g., an array of optical lenses). In addition, the LF display module 110 may contain an energy relay layer for combining multiple energy sources or detectors together to form a single surface. At a high level, the energy device layer generates energy (e.g., holographic content) which is then guided to a region in space using an energy waveguide layer according to one or more four-dimensional (4D) light-field functions. LF display module 110 may also project and/or sense one or more types of energy simultaneously. For example, LF display module 110 may be capable of projecting a holographic image and an ultrasonic tactile surface in a viewing volume while detecting imaging data from the viewing volume. The operation of the LF display module 110 is discussed in more detail below with respect to fig. 2A through 3B.
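The relationship between a light source location beneath the energy waveguide layer and the direction in which its energy is guided can be pictured with an idealized thin-lens model of a single waveguide element. This paraxial sketch is an illustrative assumption, not the disclosed optics:

```python
import math

def propagation_direction(source_xy, lens_center_xy, focal_length):
    """Unit direction of light leaving a waveguide element, modeled as an
    ideal lens of the given focal length placed over a plane of light
    source locations. A source offset from the lens axis exits as a
    collimated beam angled away from the display normal (the z axis).
    """
    dx = lens_center_xy[0] - source_xy[0]
    dy = lens_center_xy[1] - source_xy[1]
    norm = math.sqrt(dx * dx + dy * dy + focal_length * focal_length)
    return (dx / norm, dy / norm, focal_length / norm)

# A source directly under the lens center exits along the display normal.
print(propagation_direction((0.0, 0.0), (0.0, 0.0), 2.0))  # (0.0, 0.0, 1.0)
```

Under this model, selecting which source location to illuminate selects the outgoing ray direction, which is the essence of steering energy with a 4D light field function.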

LF display module 110 uses one or more 4D light field functions (e.g., derived from plenoptic functions) to generate holographic objects within holographic object volume 160. A holographic object may be three-dimensional (3D), two-dimensional (2D), or some combination thereof. Furthermore, a holographic object may be polychromatic (e.g., full color). A holographic object may be projected in front of the screen plane, behind the screen plane, or straddling the screen plane. The holographic object 120 may be rendered such that it is perceivable anywhere within the holographic object volume 160. A holographic object within the holographic object volume 160 may appear to the viewer 140 to be floating in space.

Holographic object volume 160 represents the volume in which viewer 140 may perceive a holographic object. The holographic object volume 160 may extend in front of the surface of the display area 150 (i.e., toward the viewer 140) so that a holographic object may be presented in front of the plane of the display area 150. Additionally, the holographic object volume 160 may extend behind the surface of the display area 150 (i.e., away from the viewer 140), allowing a holographic object to be rendered as if it were behind the plane of the display area 150. In other words, the holographic object volume 160 contains all light rays originating from (e.g., projected from) the display area 150 that may converge to create a holographic object. Here, the light rays may converge at a point in front of, at, or behind the display surface. More simply, the holographic object volume 160 encompasses the entire volume from which a viewer can perceive a holographic object.
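Under the frustum picture above, whether a point can host a holographic object reduces to whether some location on the display area can send a ray to it within the display's maximum deflection from the normal. The rectangular-aperture sketch below assumes a single deflection-angle limit, an illustrative parameter rather than a value from this disclosure:

```python
import math

def in_holographic_object_volume(point, half_w, half_h, max_deflection_deg):
    """Check whether a 3D point lies in the holographic object volume of
    a rectangular display centered at the origin in the z = 0 plane.

    A point is addressable when some display location can reach it with
    a ray deflected no more than max_deflection_deg from the display
    normal. Positive z is in front of the display; negative z is behind
    it (virtual objects behind the screen plane).
    """
    x, y, z = point
    reach = abs(z) * math.tan(math.radians(max_deflection_deg))
    return abs(x) <= half_w + reach and abs(y) <= half_h + reach

# 1 m in front of a 2 m x 1 m display with a 45-degree deflection limit:
print(in_holographic_object_volume((0.5, 0.0, 1.0), 1.0, 0.5, 45.0))  # True
print(in_holographic_object_volume((5.0, 0.0, 1.0), 1.0, 0.5, 45.0))  # False
```

Note the symmetric treatment of positive and negative z: the volume forms a double frustum extending both in front of and behind the display plane, consistent with the description above.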

Viewing volume 130 is the volume of space from which holographic objects (e.g., holographic object 120) presented within holographic object volume 160 by the LF display system are fully visible. A holographic object may be rendered in the holographic object volume 160 and viewed from the viewing volume 130 such that the holographic object is indistinguishable from an actual object. A holographic object is formed by projecting light rays as they would be emitted from the surface of the object if the object were actually present.

In some cases, holographic object volume 160 and corresponding viewing volume 130 may be relatively small, such that they are designed for a single viewer. In other embodiments, as discussed in detail below with respect to, for example, figs. 4A, 4B, and 6A through 13B, the LF display modules may be enlarged and/or tiled to create larger holographic object volumes and corresponding viewing volumes that can accommodate a wide range of viewers (e.g., 1 to thousands). The LF display modules presented in this disclosure can be constructed such that the entire surface of the LF display contains holographic imaging optics, with no dead space, and no bezel is required. In these embodiments, the LF display modules may be tiled such that the imaging area is continuous across the seams between LF display modules, and the bond lines between tiled modules are barely detectable using the visual acuity of the eye. It is noted that in some configurations, although not described in detail herein, some portions of the display surface may not contain holographic imaging optics.

The flexible size and/or shape of the viewing volume 130 allows a viewer to be unconstrained within the viewing volume 130. For example, the viewer 140 may move to different positions within the viewing volume 130 and see different views of the holographic object 120 from the corresponding viewing angles. To illustrate, referring to fig. 1, the viewer 140 is positioned at a first location relative to the holographic object 120 such that the holographic object 120 appears to be a frontal view of a dolphin. The viewer 140 can move to other positions relative to the holographic object 120 to see different views of the dolphin. For example, the viewer 140 may move so that he/she sees the left side of the dolphin, the right side of the dolphin, etc., much as if the viewer 140 were watching an actual dolphin and changing his/her position relative to it to see different views. In some embodiments, the holographic object 120 is visible to all viewers within the viewing volume 130 who have an unobstructed line of sight (i.e., not blocked by objects or people) to the holographic object 120. These viewers may be unconstrained such that they may move around within the viewing volume to see different perspectives of the holographic object 120. Thus, the LF display system can render holographic objects such that multiple unconstrained viewers may simultaneously see different perspectives of a holographic object in real-world space, as if the holographic object were physically present.

In contrast, conventional displays (e.g., stereoscopic, virtual reality, augmented reality, or mixed reality) typically require each viewer to wear some external device (e.g., 3D glasses, a near-eye display, or a head-mounted display) to see content. Additionally and/or alternatively, conventional displays may require that the viewer be constrained to a particular viewing position (e.g., in a chair having a fixed position relative to the display). For example, when viewing an object shown by a stereoscopic display, the viewer always focuses on the display surface rather than on the object, and the display always presents only two views of the object, which follow a viewer who tries to move around the perceived object, resulting in perceived distortion of the object. With light field displays, however, viewers of holographic objects presented by LF display systems do not need to wear external devices, nor are they necessarily restricted to particular locations to see the holographic objects. The LF display system presents holographic objects in a manner visible to viewers in much the same way that viewers see physical objects, without the need for special goggles, glasses, or head-mounted accessories. Further, the viewer may view holographic content from any location within the viewing volume.

Notably, the size of the holographic object volume 160 determines the set of potential locations for holographic objects. To increase the size of the holographic object volume 160, the size of the display area 150 of the LF display module 110 may be increased, and/or multiple LF display modules may be tiled together in a manner that forms a seamless display surface. The effective display area of the seamless display surface is larger than the display area of each individual LF display module. Some embodiments related to tiling LF display modules are discussed below with respect to fig. 4A, 4B, and 6A through 13B. As illustrated in fig. 1, the display area 150 is rectangular, resulting in a holographic object volume 160 that is pyramidal in shape. In other embodiments, the display area may have some other shape (e.g., hexagonal), which also affects the shape of the corresponding viewing volume.

Additionally, although the discussion above focuses on presenting holographic object 120 within a portion of the holographic object volume 160 located between the LF display module 110 and the viewer 140, the LF display module 110 may additionally present content in the portion of the holographic object volume 160 behind the plane of the display area 150. For example, the LF display module 110 may make the display area 150 appear to be the surface of an ocean out of which holographic object 120 is jumping. And the displayed content may enable the viewer 140 to look through the displayed ocean surface to see marine life underwater. Furthermore, the LF display system can generate content that moves seamlessly throughout the holographic object volume 160, both behind and in front of the plane of the display area 150.

Fig. 2A illustrates a cross-section 200 of a portion of an LF display module 210 in accordance with one or more embodiments. The LF display module 210 may be the LF display module 110. In other embodiments, the LF display module 210 may be another LF display module having a display area with a different shape than the display area 150. In the illustrated embodiment, the LF display module 210 includes an energy device layer 220, an energy relay layer 230, and an energy waveguide layer 240. Some embodiments of LF display module 210 have different components than those described herein. For example, in some embodiments, LF display module 210 does not include energy relay layer 230. Similarly, functionality may be distributed among components in a different manner than described herein.

The display system described herein presents energy emissions that replicate the energy of typical surrounding objects in the real world. Here, the emitted energy is directed from each coordinate on the display surface toward a particular direction. In other words, each coordinate on the display surface serves as a projection location for emitted energy. The directed energy from the display surface causes many energy rays to converge, which can thereby create a holographic object. For example, for visible light, the LF display projects from its projection locations a great many light rays that may converge at any point in the holographic object volume, so that from the perspective of a viewer positioned farther away than the projected object, the rays appear to come from the surface of a real object located in that region of space. In this way, the LF display generates light rays that, from the viewer's perspective, appear to reflect off the object's surface. As the viewer's viewing angle onto a given holographic object varies, the viewer will see different views of the holographic object.
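The convergence described above can be sketched with simple vector geometry: each projection location on the display surface emits a ray aimed at a point on the holographic object, so rays from many locations meet there. A minimal Python sketch, assuming a flat display plane at z = 0 and pinhole-style projection (the function name and geometry are illustrative assumptions, not the patented implementation):

```python
import math

def ray_direction(sx, sy, target):
    """Unit direction from a projection location (sx, sy, 0) on the display
    plane toward a convergence point on a holographic object.
    (Pinhole-style geometry and all names are illustrative assumptions.)"""
    tx, ty, tz = target
    vx, vy, vz = tx - sx, ty - sy, tz
    n = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx / n, vy / n, vz / n)

# Rays from opposite edges of a 20 cm display aimed at a point 0.5 m in front
# of the display surface; both rays converge at (0, 0, 0.5).
d_left = ray_direction(-0.1, 0.0, (0.0, 0.0, 0.5))
d_right = ray_direction(0.1, 0.0, (0.0, 0.0, 0.5))
```

Because rays from every projection location with a clear path to the target converge there, a viewer positioned beyond the convergence point perceives light apparently emanating from a surface at that location.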

As described herein, energy device layer 220 includes one or more electronic displays (e.g., emissive displays such as OLEDs) and one or more other energy projecting and/or energy receiving devices. One or more electronic displays are configured to display content according to display instructions (e.g., from a controller of the LF display system). One or more electronic displays comprise a plurality of pixels, each pixel having an independently controlled intensity. Many types of commercial displays can be used in LF displays, such as emissive LED and OLED displays.

The energy device layer 220 may also contain one or more acoustic projection devices and/or one or more acoustic receiving devices. The acoustic projection devices generate one or more pressure waves that complement the holographic object 250. The generated pressure waves may be, for example, audible, ultrasonic, or some combination thereof. An array of ultrasonic pressure waves may be used for volumetric haptics (e.g., at the surface of the holographic object 250). Audible pressure waves are used to provide audio content (e.g., immersive audio) that can complement the holographic object 250. For example, assuming that holographic object 250 is a dolphin, one or more acoustic projection devices may be used to (1) generate a tactile surface that coincides with the surface of the dolphin so that a viewer may touch holographic object 250; and (2) provide audio content corresponding to the noises emitted by a dolphin, such as clicks, chirps, or squeaks. Acoustic receiving devices (e.g., a microphone or microphone array) may be configured to monitor ultrasonic and/or audible pressure waves within a local area of the LF display module 210.

The energy device layer 220 may also contain one or more imaging sensors. The imaging sensor may be sensitive to light in the visible wavelength band and, in some cases, may be sensitive to light in other wavelength bands (e.g., infrared). The imaging sensor may be, for example, a Complementary Metal Oxide Semiconductor (CMOS) array, a Charge Coupled Device (CCD), an array of photodetectors, some other sensor that captures light, or some combination thereof. The LF display system may use data captured by one or more imaging sensors for locating and tracking the position of the viewer.

In some configurations, the energy relay layer 230 relays energy (e.g., electromagnetic energy, mechanical pressure waves, etc.) between the energy device layer 220 and the energy waveguide layer 240. Energy relay layer 230 includes one or more energy relay elements 260. Each energy relay element includes a first surface 265 and a second surface 270 and relays energy between the two surfaces. The first surface 265 of each energy relay element may be coupled to one or more energy devices (e.g., an electronic display or an acoustic projection device). An energy relay element may be constructed of, for example, glass, carbon, optical fiber, optical film, plastic, polymer, or some combination thereof. Additionally, in some embodiments, an energy relay element may adjust the magnification (increase or decrease) of the energy passing between the first surface 265 and the second surface 270. If the relay provides magnification, it may take the form of an array of bonded tapered relays, known as tapers, where the area of one end of each taper may be substantially larger than the area of the opposite end. The large ends of the tapers may be bonded together to form a seamless energy surface 275. One advantage is that space is created at the small end of each taper to accommodate the mechanical envelopes of multiple energy sources, such as the bezels of multiple displays. This additional room allows the energy sources to be placed side by side at the small taper ends, with the active area of each energy source directing energy into the small taper surface to be relayed to the large, seamless energy surface. Another advantage of using tapered relays is that there is no non-imaging dead space on the combined seamless energy surface formed by the large ends of the tapers. Because there are no bezels or borders, the seamless energy surfaces can then be tiled together to form a larger surface with seams that are imperceptible, subject to the visual acuity of the eye.
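As a rough illustration of why tapering accommodates display bezels, consider a simplified one-dimensional model: the taper's large end must span the display's full footprint (active area plus bezels) while its small end matches only the active area. The widths below are illustrative assumptions, not values from this document:

```python
def taper_magnification(active_width_mm, bezel_mm):
    """Linear magnification a tapered relay needs so that its large ends can
    tile seamlessly while its small ends leave room for each display's bezel.
    Simplified 1D model; the widths used below are illustrative assumptions."""
    total_width = active_width_mm + 2.0 * bezel_mm  # active area plus both bezels
    return total_width / active_width_mm

# A display with a 50 mm active area and a 5 mm bezel on each side requires a
# taper that magnifies by 60/50 = 1.2x for the large ends to abut seamlessly.
m = taper_magnification(50.0, 5.0)
```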

The second surfaces of adjacent energy relay elements come together to form the energy surface 275. In some embodiments, the spacing between the edges of adjacent energy relay elements is less than the minimum perceivable profile defined by the visual acuity of a human eye having, e.g., 20/40 vision, such that the energy surface 275 is effectively seamless from the perspective of a viewer 280 within the viewing volume 285. In other embodiments, the second surfaces of adjacent energy relay elements are fused together without a seam between them by a treatment step, which may include one or more of pressure, heat, and a chemical reaction. And in still other embodiments, the array of energy relay elements is formed by molding one side of a continuous block of relay material into an array of small taper ends, each energy relay element configured to transmit energy from an energy device attached to a small taper end to a larger area of a single combined surface that is never subdivided.
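The seam-visibility criterion can be estimated from visual acuity: a seam narrower than the arc subtended by the eye's resolution limit at the viewing distance is effectively invisible. A back-of-the-envelope sketch, assuming a 20/40 eye resolves roughly 2 arcminutes (twice the 1-arcminute rule of thumb for 20/20 vision; this figure is an assumption, not a value from this document):

```python
import math

# Assumed resolution limit of a 20/40 eye, in arcminutes (rule of thumb).
ACUITY_ARCMIN = 2.0

def max_invisible_seam(viewing_distance_m):
    """Largest gap (in meters) between adjacent relay elements that stays
    below the minimum perceivable profile at the given viewing distance."""
    theta = math.radians(ACUITY_ARCMIN / 60.0)
    return viewing_distance_m * math.tan(theta)

# At a 2 m viewing distance, seams narrower than roughly 1.2 mm would be
# effectively invisible under this assumption.
seam_mm = max_invisible_seam(2.0) * 1000.0
```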

In some embodiments, one or more of the energy relay elements exhibit energy localization, wherein the energy transfer efficiency in a longitudinal direction substantially perpendicular to surfaces 265 and 270 is much higher than the transfer efficiency in a perpendicular transverse plane, and wherein the energy density is highly localized in this transverse plane as the energy wave propagates between surface 265 and surface 270. This localization of energy allows the energy distribution (e.g., image) to be efficiently relayed between these surfaces without any significant loss of resolution.

Energy waveguide layer 240 uses waveguide elements to guide energy from locations (e.g., coordinates) on the energy surface 275 along specific energy propagation paths extending outward from the display surface into the holographic viewing volume 285. Each energy propagation path is defined by at least two angular dimensions determined by the position of the energy surface coordinate relative to its waveguide. The waveguide itself is associated with a 2D spatial coordinate. Together, these four coordinates form a four-dimensional (4D) energy field. As an example, for electromagnetic energy, the waveguide elements in energy waveguide layer 240 direct light from locations on the seamless energy surface 275 through the viewing volume 285 along different propagation directions. In various examples, the light is directed according to a 4D light field function to form holographic object 250 within holographic object volume 255.
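The 4D parameterization above can be illustrated with a discrete lookup structure: two spatial coordinates select a waveguide element and two angular coordinates select a propagation direction through that waveguide. A minimal Python sketch (the dimensions, function names, and RGB payload are illustrative assumptions):

```python
def make_light_field(nx, ny, n_theta, n_phi):
    """Discrete 4D light field: two spatial coordinates (x, y) select a
    waveguide element and two angular coordinates (theta, phi) select a
    propagation direction; each cell stores an RGB radiance sample.
    (Dimensions and the RGB payload are illustrative assumptions.)"""
    return [[[[(0, 0, 0) for _ in range(n_phi)]
              for _ in range(n_theta)]
             for _ in range(ny)]
            for _ in range(nx)]

def sample(lf, x, y, theta, phi):
    """Radiance leaving waveguide (x, y) along angular direction (theta, phi)."""
    return lf[x][y][theta][phi]

# A 4x4 array of waveguides, each emitting 8x8 distinct directions.
lf = make_light_field(4, 4, 8, 8)
lf[1][2][3][4] = (255, 128, 0)  # one orange ray from waveguide (1, 2)
```

Because each waveguide emits many distinct directional samples, two eyes at different positions intercept different rays and therefore see different perspectives.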

Each waveguide element in the energy waveguide layer 240 may be, for example, a lenslet composed of one or more optical elements. In some configurations, the lenslet may be a positive lens. The positive lens may have a spherical, aspherical, or freeform surface profile. Additionally, in some embodiments, some or all of the waveguide elements may contain one or more additional optical components. An additional optical component may be, for example, an energy-inhibiting structure such as a baffle, a positive lens, a negative lens, a spherical lens, an aspherical lens, a freeform lens, a liquid crystal lens, a liquid lens, a refractive element, a diffractive element, or some combination thereof. In some embodiments, at least one of the lenslets and/or additional optical components is capable of dynamically adjusting its optical power. For example, a lenslet may be a liquid crystal lens or a liquid lens. Dynamic adjustment of the surface profile of the lenslet and/or at least one additional optical component may provide additional directional control of the light projected from the waveguide element.

In the example shown, the holographic object volume 255 of the LF display has a boundary formed by ray 256 and ray 257, but it could be bounded by other rays. Holographic object volume 255 is a continuous volume that extends both in front of (i.e., toward viewer 280) and behind (i.e., away from viewer 280) the energy waveguide layer 240. In the illustrated example, rays 256 and 257 that are perceivable by the user are projected from opposite edges of the LF display module 210 at the maximum angle with respect to the normal of the display surface 277, though these boundaries could be defined by other projected rays. These rays define the field of view of the display and therefore the boundaries of the holographic viewing volume 285. In some cases, the rays define a holographic viewing volume in which the entire display can be viewed without vignetting (e.g., an ideal viewing volume). As the field of view of the display increases, the point where rays 256 and 257 converge moves closer to the display. Thus, a display with a larger field of view allows the viewer 280 to see the entire display at a closer viewing distance. In addition, rays 256 and 257 define an ideal holographic object volume: holographic objects presented within this ideal holographic object volume can be seen from anywhere in the viewing volume 285.
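The relationship between field of view and the closest full-view distance follows from the edge-ray geometry: rays leaving opposite edges of a display of width W at half-angle theta cross at a distance of (W/2)/tan(theta). A simplified 2D sketch (an illustrative model with assumed names and numbers, not the patent's exact optics):

```python
import math

def full_view_distance(display_width_m, fov_degrees):
    """Distance at which edge rays projected at the maximum half-angle from
    opposite edges of the display cross, i.e., the closest position from
    which the entire display is visible without vignetting. Simplified 2D
    geometry; an illustrative model, not the patent's exact optics."""
    half_angle = math.radians(fov_degrees / 2.0)
    return (display_width_m / 2.0) / math.tan(half_angle)

# Doubling the field of view pulls the ideal viewing volume much closer.
d_narrow = full_view_distance(1.0, 60.0)   # about 0.87 m
d_wide = full_view_distance(1.0, 120.0)    # about 0.29 m
```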

In some instances, a holographic object may be presented to only a portion of the viewing volume 285. In other words, the viewing volume may be divided into any number of viewing sub-volumes (e.g., viewing sub-volume 290). In addition, holographic objects may be projected outside of the holographic object volume 255. For example, holographic object 251 is presented outside of holographic object volume 255. Because holographic object 251 is presented outside of holographic object volume 255, it cannot be viewed from every location in the viewing volume 285. For example, the holographic object 251 may be visible from a position in the viewing sub-volume 290, but not from the position of the viewer 280.

For example, turning to FIG. 2B, viewing holographic content from different viewing sub-volumes is illustrated. Fig. 2B illustrates a cross-section 200 of a portion of an LF display module in accordance with one or more embodiments. The cross-section of fig. 2B is the same as the cross-section of fig. 2A. However, FIG. 2B illustrates a different set of light rays projected from LF display module 210. Rays 256 and 257 still form holographic object volume 255 and viewing volume 285. However, as shown, the rays projected from the top of LF display module 210 and the rays projected from the bottom of LF display module 210 overlap to form respective viewing sub-volumes (e.g., viewing sub-volumes 290A, 290B, 290C, and 290D) within viewing volume 285. Viewers in a first viewing sub-volume (e.g., 290A) may be able to perceive holographic content presented in holographic object volume 255 that viewers in other viewing sub-volumes (e.g., 290B, 290C, and 290D) may not be able to perceive.

More simply, as illustrated in fig. 2A, holographic object volume 255 is a volume in which holographic objects may be rendered by an LF display system such that the holographic objects may be perceived by a viewer (e.g., viewer 280) in viewing volume 285. In this sense, viewing volume 285 is an example of an ideal viewing volume, and holographic object volume 255 is an example of an ideal holographic object volume. However, in various configurations, a viewer may perceive holographic objects presented by LF display system 200 in other holographic object volumes, from positions within other viewing volumes. More generally, a "line-of-sight rule" applies when viewing holographic content projected from the LF display module: the line formed by the viewer's eye location and the holographic object being viewed must intersect the LF display surface.
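The line-of-sight rule lends itself to a simple geometric test: extend the line through the eye and the object and check whether it crosses the display surface within the display's extent. A minimal 2D illustration, assuming a flat display on the plane z = 0 with z increasing toward the viewer (the names and geometry are assumptions, not the patent's formulation):

```python
def line_of_sight_ok(eye, obj, half_width):
    """Line-of-sight rule in a 2D sketch: the line through the viewer's eye
    and the holographic object must intersect the display surface, modeled
    here as the segment |x| <= half_width on the plane z = 0, with z
    increasing toward the viewer. (A simplified illustration, not the
    patent's geometry.)"""
    ex, ez = eye
    ox, oz = obj
    if ez == oz:
        return False  # line parallel to the display plane
    t = ez / (ez - oz)  # parameter where the eye->object line hits z = 0
    if t < 0:
        return False  # display plane lies behind the eye along this line
    x_hit = ex + t * (ox - ex)
    return abs(x_hit) <= half_width

# Viewer 1.5 m from a 1 m-wide display, object floating 0.3 m in front of it.
visible = line_of_sight_ok((0.0, 1.5), (0.2, 0.3), 0.5)
```

The same test handles objects behind the display plane (the segment itself crosses the surface) and objects in front of it (the line must be extended past the object to reach the surface).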

Because the holographic content is rendered according to the 4D light field function, each eye of the viewer 280 sees a different viewing angle of the holographic object 250 when viewing the holographic content rendered by the LF display module 210. Furthermore, as viewer 280 moves within viewing volume 285, he/she will also see different viewing angles for holographic object 250, as will other viewers within viewing volume 285. As will be appreciated by those of ordinary skill in the art, 4D light field functions are well known in the art and will not be described in further detail herein.

As described in more detail herein, in some embodiments, the LF display may project more than one type of energy. For example, an LF display may project two types of energy, e.g., mechanical energy and electromagnetic energy. In this configuration, the energy relay layer 230 may contain two separate energy relays that are interleaved together at the energy surface 275 but separate such that the energy is relayed to two different energy device layers 220. Here, one relay may be configured to transmit electromagnetic energy, while the other relay may be configured to transmit mechanical energy. In some embodiments, mechanical energy may be projected from locations on the energy waveguide layer 240 between the electromagnetic waveguide elements, thereby facilitating the formation of structures that inhibit light from being transmitted from one electromagnetic waveguide element to another. In some embodiments, the energy waveguide layer 240 may also include waveguide elements that transmit focused ultrasound along particular propagation paths according to display instructions from the controller.

It should be noted that in an alternative embodiment (not shown), the LF display module 210 does not contain an energy relay layer 230. In this case, the energy surface 275 is an emitting surface formed using one or more adjacent electronic displays within the energy device layer 220. And in some embodiments without an energy relay layer, the spacing between the edges of adjacent electronic displays is less than the minimum perceivable profile defined by the visual acuity of a human eye having 20/40 vision, such that the energy surface is effectively seamless from the perspective of a viewer 280 within the viewing volume 285.

LF display module

Fig. 3A is a perspective view of an LF display module 300A in accordance with one or more embodiments. LF display module 300A may be the LF display module 110 and/or the LF display module 210. In other embodiments, the LF display module 300A may be some other LF display module. In the illustrated embodiment, the LF display module 300A includes an energy device layer 310, an energy relay layer 320, and an energy waveguide layer 330. LF display module 300A is configured to render holographic content from a display surface 365, as described herein. For convenience, the display surface 365 is illustrated in dashed outline on the frame 390 of the LF display module 300A, but it is more precisely the surface directly in front of the waveguide elements, bounded by the inner edges of the frame 390. Some embodiments of LF display module 300A have different components than those described here. For example, in some embodiments, the LF display module 300A does not include an energy relay layer 320. Similarly, functionality may be distributed among components in a different manner than described here.

Energy device layer 310 is an embodiment of energy device layer 220. The energy device layer 310 includes four energy devices 340 (three are visible in the figure). The energy devices 340 may all be of the same type (e.g., all electronic displays) or may comprise one or more different types (e.g., comprising an electronic display and at least one acoustic energy device).

Energy relay layer 320 is an embodiment of energy relay layer 230. The energy relay layer 320 includes four energy relay devices 350 (three are visible in the figure). The energy relay devices 350 may all relay the same type of energy (e.g., light) or may relay one or more different types (e.g., light and sound). Each of the relay devices 350 includes a first surface and a second surface, the second surfaces of the energy relay devices 350 being arranged to form a single seamless energy surface 360. In the illustrated embodiment, each of the energy relay devices 350 is tapered such that the first surface has a smaller surface area than the second surface, which allows the mechanical envelope of an energy device 340 to be accommodated at the small end of the taper. This also leaves the seamless energy surface free of borders, since its entire area can project energy. This means that the seamless energy surface can be tiled by placing multiple instances of LF display module 300A together without dead space or bezels, so that the entire combined surface is seamless. In other embodiments, the surface areas of the first and second surfaces are the same.

Energy waveguide layer 330 is an embodiment of energy waveguide layer 240. The energy waveguide layer 330 comprises a plurality of waveguide elements 370. As discussed above with respect to fig. 2A, the energy waveguide layer 330 is configured to direct energy from the seamless energy surface 360 along specific propagation paths according to a 4D light field function to form a holographic object. It should be noted that in the illustrated embodiment, the energy waveguide layer 330 is bounded by a frame 390. In other embodiments, the frame 390 is not present and/or the thickness of the frame 390 is reduced. Removing the frame 390 or reducing its thickness may facilitate tiling the LF display module 300A with additional LF display modules.

It should be noted that in the illustrated embodiment, the seamless energy surface 360 and the energy waveguide layer 330 are planar. In an alternative embodiment not shown, the seamless energy surface 360 and the energy waveguide layer 330 may be curved in one or more dimensions.

The LF display module 300A may be configured with additional energy sources residing on the seamless energy surface 360 that allow the projection of energy fields other than light fields. In one embodiment, an acoustic energy field may be projected from electrostatic speakers (not shown) mounted at any number of locations on the seamless energy surface 360. Further, an electrostatic speaker of the LF display module 300A may be positioned within the light field display module 300A such that the dual-energy surface simultaneously projects a sound field and holographic content (e.g., light). For example, an electrostatic speaker may be formed with one or more diaphragm elements that transmit certain wavelengths of electromagnetic energy and are driven by one or more conductive elements (e.g., planes that sandwich the one or more diaphragm elements). The electrostatic speaker may be mounted on the seamless energy surface 360 such that the diaphragm elements cover some of the waveguide elements. The conductive electrodes of the speaker may be positioned at the same locations as structures designed to inhibit light transmission between electromagnetic waveguides, and/or at locations between electromagnetic waveguide elements (e.g., at frame 390). In various configurations, the speaker may project audible sound and/or generate many sources of focused ultrasonic energy for tactile surfaces.

In some configurations, the energy devices 340 may sense energy. For example, an energy device may be a microphone, a light sensor, an acoustic transducer, or the like. Accordingly, the energy relay devices may also relay energy from the seamless energy surface 360 to the energy device layer 310. That is, the seamless energy surface 360 of the LF display module forms a bi-directional energy surface when the energy devices 340 and energy relay devices 350 are configured to simultaneously emit and sense energy (e.g., emit a light field and sense sound).

More broadly, an energy device 340 of the LF display module 300A may be an energy source or an energy sensor. LF display module 300A may contain various types of energy devices that act as energy sources and/or energy sensors to facilitate the projection of high-quality holographic content to a user. Other sources and/or sensors may include thermal sensors or sources, infrared sensors or sources, image sensors or sources, mechanical energy transducers that generate acoustic energy, feedback sources, and the like. Many other sensors or sources are possible. Further, LF display modules may be tiled such that they form an assembly that projects and senses multiple types of energy from a large aggregate seamless energy surface.

In various embodiments of LF display module 300A, the seamless energy surface 360 may have various surface portions, where each surface portion is configured to project and/or emit a particular type of energy. For example, when the seamless energy surface is a dual-energy surface, the seamless energy surface 360 includes one or more surface portions that project electromagnetic energy and one or more other surface portions that project ultrasonic energy. The surface portions that project ultrasonic energy may be positioned on the seamless energy surface 360 between the electromagnetic waveguide elements and/or co-located with structures designed to inhibit light transmission between the electromagnetic waveguide elements. In examples where the seamless energy surface is a bi-directional energy surface, the energy relay layer 320 may comprise two types of energy relay devices that are interleaved at the seamless energy surface 360. In various embodiments, the seamless energy surface 360 may be configured such that the portion of the surface below any particular waveguide element 370 is all energy sources, all energy sensors, or a mixture of energy sources and energy sensors.

Fig. 3B is a cross-sectional view of an LF display module 300B containing an interleaved energy relay in accordance with one or more embodiments. Energy relay device 350A transfers energy between energy relay first surface 345A connected to energy device 340A and seamless energy surface 360. Energy relay 350B transfers energy between energy relay first surface 345B connected to energy device 340B and seamless energy surface 360. The two relays are interleaved at an interleaved energy relay 352 connected to a seamless energy surface 360. In this configuration, the surface 360 contains interleaved energy locations of both energy devices 340A and 340B, which may be energy sources or energy sensors. Thus, the LF display module 300B may be configured as a dual energy projection device for projecting more than one type of energy, or as a bi-directional energy device for projecting one type of energy and sensing another type of energy simultaneously. LF display module 300B may be LF display module 110 and/or LF display module 210. In other embodiments, the LF display module 300B may be some other LF display module.

The LF display module 300B contains many components configured similarly to those of the LF display module 300A in fig. 3A. For example, in the illustrated embodiment, the LF display module 300B includes an energy device layer 310, a seamless energy surface 360, and an energy waveguide layer 330 that provide at least the same functionality as those components described with respect to fig. 3A. Additionally, LF display module 300B may present and/or receive energy from the display surface 365. Notably, the components of LF display module 300B may be connected and/or oriented differently than the components of LF display module 300A in fig. 3A. Some embodiments of LF display module 300B have different components than those described here. Similarly, functionality may be distributed among components in a different manner than described here. Fig. 3B illustrates a design of a single LF display module 300B that may be tiled to produce a dual-energy projection surface or bi-directional energy surface with a larger area.

In one embodiment, the LF display module 300B is an LF display module of a bi-directional LF display system. A bi-directional LF display system can simultaneously project energy from the display surface 365 and sense energy through it. The seamless energy surface 360 contains both energy projection locations and energy sensing locations that are closely interleaved on the seamless energy surface 360. Thus, in the example of fig. 3B, the energy relay layer 320 is configured differently than the energy relay layer of fig. 3A. For convenience, the energy relay layer of the LF display module 300B will be referred to herein as an "interleaved energy relay layer."

The interleaved energy relay layer 320 contains two legs: a first energy relay 350A and a second energy relay 350B. In fig. 3B, each of the legs is shown as a lightly shaded area. Each of the legs may be made of a flexible relay material and formed with sufficient length to be used with various sizes and shapes of energy devices. In some areas of the interleaved energy relay layer, the two legs are tightly interleaved together as they approach the seamless energy surface 360. In the illustrated example, interleaved energy relay 352 is shown as a dark shaded area.

When interleaved at the seamless energy surface 360, the energy relay device is configured to relay energy to/from different energy devices. The energy devices are located at the energy device layer 310. As illustrated, energy device 340A is connected to energy relay 350A, and energy device 340B is connected to energy relay 350B. In various embodiments, each energy device may be an energy source or an energy sensor.

The energy waveguide layer 330 includes waveguide elements 370 to guide energy waves from the seamless energy surface 360 along a projected path toward a series of convergence points. In this example, holographic object 380 is formed at a series of convergence points. Notably, as illustrated, the convergence of energy at holographic object 380 occurs at the viewer side (i.e., front side) of display surface 365. However, in other examples, the convergence of energy may extend anywhere in the holographic object volume, both in front of display surface 365 and behind display surface 365. The waveguide element 370 can simultaneously guide incoming energy to an energy device (e.g., an energy sensor), as described below.

In one example embodiment of LF display module 300B, the emissive display is used as an energy source (e.g., energy device 340A) and the imaging sensor is used as an energy sensor (e.g., energy device 340B). In this way, LF display module 300B can simultaneously project holographic content and detect light from a volume in front of display surface 365. Also in this embodiment, the LF display module 300B acts as both an LF display and an LF sensor.

In an embodiment, the LF display module 300B is configured to simultaneously project a light field in front of the display surface 365 and capture a light field from in front of the display surface 365. In this embodiment, energy relay device 350A connects a first set of locations at the seamless energy surface 360 positioned below the waveguide elements 370 to energy device 340A. In one example, the energy device 340A is an emissive display having an array of source pixels. Energy relay device 350B connects a second set of locations at the seamless energy surface 360 positioned below the waveguide elements 370 to energy device 340B. In one example, energy device 340B is an imaging sensor having an array of sensor pixels. The LF display module 300B may be configured such that the locations at the seamless energy surface 360 below a particular waveguide element 370 are all emissive display locations, all imaging sensor locations, or some combination of these locations. In other embodiments, the bi-directional energy surface may project and receive various other forms of energy.

In another example embodiment of the LF display module 300B, the LF display module is configured to project two different types of energy. For example, the energy device 340A is an emissive display configured to emit electromagnetic energy, and the energy device 340B is an ultrasonic transducer configured to emit mechanical energy. Thus, both light and sound may be projected from various locations at the seamless energy surface 360. In this configuration, energy relay device 350A connects the energy device 340A to the seamless energy surface 360 and relays the electromagnetic energy. Energy relay device 350A is configured with properties (e.g., a varying refractive index) that enable it to transmit electromagnetic energy efficiently. Energy relay device 350B connects the energy device 340B to the seamless energy surface 360 and relays the mechanical energy. Energy relay device 350B is configured with properties (e.g., a distribution of materials having different acoustic impedances) for efficient transmission of ultrasonic energy. In some embodiments, the mechanical energy may be projected from locations between the waveguide elements 370 on the energy waveguide layer 330. The locations that project mechanical energy may form structures that inhibit light transmission from one electromagnetic waveguide element to another. In one example, a spatially separated array of locations projecting ultrasonic mechanical energy may be configured to form three-dimensional haptic shapes and surfaces in air. The surfaces may coincide with a projected holographic object (e.g., holographic object 380). In some instances, phase delays and amplitude variations across the array may help form the haptic shapes.
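The delays mentioned above can be sketched for a simple linear array: each transducer is delayed so that all wavefronts arrive at the focal point at the same instant, causing constructive interference there. A minimal Python illustration (the array layout, names, and the use of time rather than phase delays are assumptions for illustration):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(transducer_xs, focus):
    """Per-transducer emission delays (in seconds) so that ultrasound from a
    linear array arrives at the focal point simultaneously, e.g., to place a
    tactile point on the surface of a holographic object. (Array layout,
    names, and the use of time rather than phase delays are assumptions.)"""
    fx, fz = focus
    dists = [math.hypot(fx - x, fz) for x in transducer_xs]
    d_max = max(dists)
    # The farthest transducer fires first (zero delay); nearer ones wait so
    # that every wavefront reaches the focus at the same instant.
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]

# Eight transducers spaced 1 cm apart, focusing 20 cm above the array.
xs = [i * 0.01 for i in range(-4, 4)]
delays = focus_delays(xs, (0.0, 0.2))
```

Sweeping the focal point over a surface, while modulating amplitude, is one way such an array can trace out a tactile shape in air.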

In various embodiments, the LF display module 300B with interleaved energy relay devices may contain multiple energy device layers, where each energy device layer contains a particular type of energy device. In these examples, the energy relay layer is configured to relay the appropriate type of energy between the seamless energy surface 360 and the energy device layer 310.

Tiled LF display module

Fig. 4A is a perspective view of a portion of an LF display system 400 tiled in two dimensions to form a single-sided seamless surface environment in accordance with one or more embodiments. LF display system 400 includes a plurality of LF display modules tiled to form an array 410. More specifically, each of the tiles in array 410 represents a tiled LF display module 412. The LF display module 412 may be the same as the LF display module 300A or 300B. The array 410 may cover, for example, some or all of a surface (e.g., a wall) of a room. The array 410 may also cover other surfaces, for example, portions of the interior and/or exterior of an automobile as discussed below with respect to fig. 6A through 13B.

The array 410 may project one or more holographic objects. For example, in the illustrated embodiment, array 410 projects holographic object 420 and holographic object 422. Tiling of LF display modules 412 allows for a larger viewing volume and allows objects to be projected at a greater distance from array 410. For example, in the illustrated embodiment, the viewing volume is approximately the entire area in front of and behind the array 410, rather than a partial volume in front of (and behind) the LF display module 412.

In some embodiments, LF display system 400 presents holographic object 420 to viewer 430 and viewer 434. Viewer 430 and viewer 434 receive different viewing angles for holographic object 420. For example, viewer 430 is presented with a direct view of holographic object 420, while viewer 434 is presented with a more oblique view of holographic object 420. As viewer 430 and/or viewer 434 moves, they are presented with different perspectives of holographic object 420. This allows the viewer to visually interact with the holographic object by moving relative to the holographic object. For example, as viewer 430 walks around holographic object 420, viewer 430 sees different sides of holographic object 420 as long as holographic object 420 remains in the holographic object volume of array 410. Thus, viewer 430 and viewer 434 may simultaneously see holographic object 420 in real world space as if the holographic object were actually present. In addition, viewer 430 and viewer 434 do not need to wear an external device in order to view holographic object 420, because holographic object 420 is visible to the viewer in much the same way that a physical object would be visible. Further, here, holographic object 422 is shown behind the array, as the viewing volume of the array extends behind the surface of the array. In this way, holographic object 422 may be presented to viewer 430 and/or viewer 434.

In some embodiments, the LF display system 400 may include a tracking system that tracks the location of the viewer 430 and the viewer 434. In some embodiments, the tracked location is the location of the viewer. In other embodiments, the tracked location is a location of the viewer's eyes. Eye location tracking is different from gaze tracking, which tracks where the eye is looking (e.g., using orientation to determine gaze location). The eyes of viewer 430 and the eyes of viewer 434 are located at different positions.

In various configurations, the LF display system 400 may include one or more tracking systems. For example, in the illustrated embodiment of fig. 4A, the LF display system includes a tracking system 440 external to the array 410. Here, the tracking system may be a camera system coupled to the array 410. The external tracking system is described in more detail with respect to fig. 5. In other example embodiments, the tracking system may be incorporated into the array 410 as described herein. For example, an energy device (e.g., energy device 340) of one or more LF display modules 412 containing a bi-directional energy surface included in the array 410 may be configured to capture an image of a viewer in front of the array 410. In any case, one or more tracking systems of LF display system 400 determine tracking information about viewers (e.g., viewer 430 and/or viewer 434) viewing holographic content presented by array 410.

The tracking information describes a location of the viewer or a location of a portion of the viewer (e.g., one or both eyes of the viewer, or limbs of the viewer) in space (e.g., relative to the tracking system). The tracking system may use any number of depth determination techniques to determine tracking information. The depth determination techniques may include, for example, structured light, time-of-flight, stereo imaging, some other depth determination technique, or some combination thereof. The tracking system may include various systems configured to determine tracking information. For example, the tracking system may include one or more infrared sources (e.g., structured light sources), one or more imaging sensors (e.g., red-blue-green-infrared cameras) that may capture infrared images, and a processor that executes a tracking algorithm. The tracking system may use depth estimation techniques to determine the viewer's location and/or eye movement. In some embodiments, LF display system 400 generates holographic objects based on the tracked positioning, motion, or gestures of viewer 430 and/or viewer 434 as described herein. For example, LF display system 400 may generate holographic objects in response to a viewer coming within a threshold distance of the array 410 and/or reaching a particular location.
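Of the depth determination techniques named above, time-of-flight is the simplest to state numerically: depth is the round-trip pulse time multiplied by the speed of light, halved. The sketch below is illustrative only; the function name and timing value are assumptions.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_seconds):
    """Depth from a time-of-flight pulse: the light travels out to the
    viewer and back, so the round-trip path is divided by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after about 13.3 ns corresponds to a viewer about 2 m away.
depth_m = tof_depth(13.342e-9)
```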

LF display system 400 may present one or more holographic objects tailored to each viewer based in part on the tracking information. For example, holographic object 420 may be presented to viewer 430 but not holographic object 422. Similarly, holographic object 422 may be presented to viewer 434 but not holographic object 420. For example, LF display system 400 tracks the location of each of viewer 430 and viewer 434. LF display system 400 determines the viewing angle at which a holographic object should be visible to a viewer based on the viewer's location relative to where the holographic object is to be rendered. LF display system 400 then selectively projects light from the particular pixels corresponding to the determined viewing angle. Thus, viewer 434 and viewer 430 may have potentially disparate experiences at the same time. In other words, LF display system 400 can present holographic content to a viewing subvolume of the viewing volume (i.e., similar to viewing subvolumes 290A, 290B, 290C, and 290D shown in fig. 2B). For example, as illustrated, the viewing volume is represented by all space in front of and behind the array. In this example, because LF display system 400 may track the location of viewer 430, LF display system 400 may present spatial content (e.g., holographic object 420) to a viewing subvolume around viewer 430 and wildlife content (e.g., holographic object 422) to a viewing subvolume around viewer 434. In contrast, conventional systems would have to use individual head-mounted displays to provide a similar experience.
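The per-viewer selection described above can be reduced to a simple geometric rule: compute the angle from a point on the display surface to each tracked viewer, then steer different content into different angular ranges. The sketch below is a minimal illustration; the function names, the +z surface normal, and the angular ranges are assumptions, not the disclosed method.

```python
import math

def viewing_angle(display_point, viewer_position):
    """Angle (degrees) between the display-surface normal (assumed +z)
    and the ray from a point on the surface to the viewer's eyes."""
    dx = viewer_position[0] - display_point[0]
    dz = viewer_position[2] - display_point[2]
    return math.degrees(math.atan2(dx, dz))

def content_for_viewer(display_point, viewer_position, subvolumes):
    """Pick which holographic content a ray toward this viewer should
    carry, based on which tracked viewing subvolume the angle falls in.
    `subvolumes` maps (min_deg, max_deg) angular ranges to content ids."""
    angle = viewing_angle(display_point, viewer_position)
    for (lo, hi), content in subvolumes.items():
        if lo <= angle <= hi:
            return content
    return None  # no content steered toward this direction

# One viewer nearly head-on, another well off to the side.
ranges = {(-15.0, 15.0): "spatial_content", (25.0, 60.0): "wildlife_content"}
head_on = content_for_viewer((0, 0, 0), (0.1, 0.0, 2.0), ranges)
oblique = content_for_viewer((0, 0, 0), (1.5, 0.0, 2.0), ranges)
```

A real display would apply this selection per waveguide element and per projected ray, but the angular-binning idea is the same.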

In some embodiments, LF display system 400 may include one or more sensory feedback systems. The sensory feedback system provides other sensory stimuli (e.g., tactile, audio, or scent) that enhance holographic objects 420 and 422. For example, in the illustrated embodiment of fig. 4A, LF display system 400 includes a sensory feedback system 442 external to array 410. In one example, the sensory feedback system 442 can be an electrostatic speaker coupled to the array 410. The external sensory feedback system is described in more detail with respect to fig. 5. In other example embodiments, a sensory feedback system may be incorporated into the array 410, as described herein. For example, the energy device (e.g., energy device 340A in fig. 3B) of the LF display module 412 included in the array 410 may be configured to project ultrasound energy to and/or receive imaging information from a viewer in front of the array. In any case, the sensory feedback system presents sensory content to and/or receives sensory input from a viewer (e.g., viewer 430 and/or viewer 434) viewing holographic content (e.g., holographic object 420 and/or holographic object 422) presented by array 410.

LF display system 400 may include a sensory feedback system 442 including one or more acoustic projection devices external to the array. Alternatively or additionally, LF display system 400 may contain one or more acoustic projection devices integrated into the array 410, as described herein. The acoustic projection devices may include an array of ultrasonic sources configured to project a volumetric tactile surface. In some embodiments, the tactile surface may coincide with one or more surfaces of a holographic object (e.g., a surface of the holographic object 420) when a portion of the viewer is within a threshold distance of the one or more surfaces. In other embodiments, the tactile surface may be separate and/or independent from the holographic object. The volumetric haptic sensation may allow a user to touch and feel the surface of the holographic object. The acoustic projection devices may also project audible pressure waves that provide audio content (e.g., immersive audio) to the viewer. Thus, the ultrasonic pressure waves and/or the audible pressure waves may act to complement the holographic object.
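The threshold-distance condition for activating the tactile surface can be sketched as a simple proximity test. This is an assumed formulation for illustration; the function name and the 5 cm default threshold are not from the disclosure.

```python
def tactile_surface_active(body_part, surface_point, threshold_m=0.05):
    """Enable the ultrasonic tactile surface only when a tracked part of
    the viewer (e.g., a hand) comes within a threshold distance of a
    point on the holographic object's surface."""
    dx = body_part[0] - surface_point[0]
    dy = body_part[1] - surface_point[1]
    dz = body_part[2] - surface_point[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 <= threshold_m

# A hand 2 cm from the surface triggers haptics; one 50 cm away does not.
near = tactile_surface_active((0.0, 0.0, 0.52), (0.0, 0.0, 0.50))
far = tactile_surface_active((0.0, 0.0, 1.00), (0.0, 0.0, 0.50))
```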

In various embodiments, the LF display system 400 may provide other sensory stimuli based in part on the tracked location of the viewer. For example, holographic object 422 illustrated in fig. 4A is a lion, and LF display system 400 may cause holographic object 422 to growl both visually (i.e., holographic object 422 appears to growl) and aurally (i.e., one or more acoustic projection devices project pressure waves) so that viewer 430 perceives the lion's growl as coming from holographic object 422.

It should be noted that in the illustrated configuration, the holographic viewing volume may be limited in a manner similar to viewing volume 285 of LF display system 200 in fig. 2. This may limit the perceived immersion that a viewer will experience with a single wall display unit. One way to address this problem is to use multiple LF display modules tiled along multiple sides, as described below with respect to fig. 4B-4F.

Fig. 4B is a perspective view of a portion of LF display system 402 in a multi-faceted seamless surface environment in accordance with one or more embodiments. LF display system 402 is substantially similar to LF display system 400 except that multiple LF display modules are tiled to create a multi-faceted seamless surface environment. More specifically, the LF display modules are tiled to form an array that is a six-sided aggregated seamless surface environment. In fig. 4B, multiple LF display modules cover all the walls, the ceiling, and the floor of a room. In some cases, a room may be defined as an interior portion of an automobile. In other embodiments, multiple LF display modules may cover some, but not all, of the walls, floor, ceiling, or some combination thereof. In other embodiments, multiple LF display modules are tiled to form some other aggregated seamless surface. For example, the walls may be curved such that a cylindrical aggregated energy environment is formed. Further, as described below with respect to figs. 6A through 13B, in some embodiments, LF display modules may be tiled to form surfaces in the interior and/or exterior of an automobile.

LF display system 402 may project one or more holographic objects. For example, in the illustrated embodiment, LF display system 402 projects holographic object 420 into an area enclosed by the six-sided aggregated seamless surface environment. In this example, the viewing volume of the LF display system is also contained within the six-sided aggregated seamless surface environment. It should be noted that in the illustrated configuration, viewer 434 may be positioned between holographic object 420 and LF display module 414, which projects energy (e.g., light and/or pressure waves) used to form holographic object 420. Thus, the positioning of viewer 434 may prevent viewer 430 from perceiving holographic object 420 as formed by energy from LF display module 414. However, in the illustrated configuration, there is at least one other LF display module, such as LF display module 416, that is not blocked (e.g., by viewer 434) and that can project energy to form holographic object 420 observable by viewer 430. In this way, occlusion by viewers in the space may cause some parts of the holographic projection to disappear, but this effect is much less pronounced than if only one side of the volume were covered with holographic display panels. Holographic object 422 is illustrated as being "outside" the walls of the closed six-sided aggregated seamless surface environment, since the holographic object volume extends behind the aggregated surface. Accordingly, viewer 430 and/or viewer 434 may perceive holographic object 422 as being "outside" the six-sided environment in which they can move.

As described above with reference to fig. 4A, in some embodiments, LF display system 402 actively tracks the location of the viewer and may dynamically instruct different LF display modules to render holographic content based on the tracked location. Thus, the multi-faceted configuration may provide a more robust environment (e.g., relative to fig. 4A) to provide a holographic object in which an unconstrained viewer may freely move throughout the area encompassed by the multi-faceted seamless surface environment.

It is noted that various LF display systems may have different configurations. Further, each configuration may have a particular arrangement of surfaces that combine to form a seamless display surface (an "aggregated surface"). That is, the LF display modules of the LF display system may be tiled to form various aggregated surfaces. For example, in fig. 4B, LF display system 402 contains LF display modules tiled to form a six-sided aggregated surface that conforms to the walls of a room. In some other examples, the aggregated surface may occupy only a portion of a surface (e.g., half of a wall) rather than the entire surface (e.g., the entire wall). Some examples are described herein.

In some configurations, the aggregated surface of the LF display system may be configured to project energy toward a local viewing volume. Projecting energy to a local viewing volume allows for a higher-quality viewing experience by, for example, increasing the density of projected energy in a particular viewing volume, increasing the FOV of a viewer in that viewing volume, and bringing the viewing volume closer to the display surface.

Control of LF display system

Fig. 5 is a block diagram of an LF display system 500 in accordance with one or more embodiments. LF display system 500 includes LF display assembly 510 and controller 520. LF display assembly 510 includes one or more LF display modules 512 that project a light field. LF display module 512 may include a source/sensor system 514 that includes one or more integrated energy sources and/or one or more energy sensors that project and/or sense other types of energy. Controller 520 includes data storage 522, network interface 524, LF processing engine 530, and vehicle interface 532. The controller 520 may also include a tracking module 526 and a viewer profiling module 528. In some embodiments, LF display system 500 also includes a sensory feedback system 570 and a tracking system 580. The LF display system described in the context of fig. 1 through 4B is an embodiment of the LF display system 500. In other embodiments, the LF display system 500 includes more or fewer modules than described herein. Similarly, functionality may be distributed among modules and/or different entities in a manner different from that described herein. The application of the LF display system 500 will also be discussed in detail below with respect to fig. 6A through 13B.

LF display assembly 510 provides holographic content in a holographic object volume that may be visible to a viewer positioned within the viewing volume. LF display assembly 510 may provide holographic content by executing display instructions received from controller 520. The holographic content may include one or more holographic objects projected in front of the aggregated surface of LF display assembly 510, behind the aggregated surface of LF display assembly 510, or some combination thereof. The generation of display instructions with controller 520 is described in more detail below.

LF display assembly 510 provides holographic content using one or more LF display modules (e.g., any of LF display module 110, LF display system 200, and LF display module 300) included in LF display assembly 510. For convenience, the one or more LF display modules may be described herein as LF display module 512. The LF display modules 512 may be tiled to form the LF display assembly 510. The LF display modules 512 may be configured for various seamless surface environments (e.g., single-sided, multi-sided, the walls of a vehicle, etc.). That is, the tiled LF display modules form an aggregated surface. As previously described, LF display module 512 includes an energy device layer (e.g., energy device layer 220) and an energy waveguide layer (e.g., energy waveguide layer 240) that render holographic content. LF display module 512 may also include an energy relay layer (e.g., energy relay layer 230) that transfers energy between the energy device layer and the energy waveguide layer when rendering holographic content.

LF display module 512 may also contain other integrated systems configured for energy projection and/or energy sensing as previously described. For example, the LF display module 512 may include any number of energy devices (e.g., energy device 340) configured to project and/or sense energy. For convenience, the integrated energy projection systems and the integrated energy sensing systems of the LF display module 512 may be collectively described herein as a source/sensor system 514. The source/sensor system 514 is integrated within the LF display module 512 such that the source/sensor system 514 shares the same seamless energy surface with the LF display module 512. In other words, the aggregated surface of LF display assembly 510 includes the functionality of both the LF display module 512 and the source/sensor system 514. Thus, an LF display assembly 510 including an LF display module 512 with a source/sensor system 514 may project energy and/or sense energy while simultaneously projecting a light field. For example, LF display assembly 510 may include an LF display module 512 and source/sensor system 514 configured as a dual-energy surface or a bi-directional energy surface as previously described.

In some embodiments, LF display system 500 enhances the generated holographic content with other sensory content (e.g., coordinated touch, audio, or smell) using sensory feedback system 570. Sensory feedback system 570 may enhance the projection of holographic content by executing display instructions received from controller 520. In general, sensory feedback system 570 includes any number of sensory feedback devices (e.g., sensory feedback system 442) external to LF display assembly 510. Some example sensory feedback devices include coordinated acoustic projection and reception devices, fragrance projection devices, temperature adjustment devices, force actuation devices, pressure sensors, transducers, and the like. In some cases, sensory feedback system 570 may have functionality similar to that of light field display assembly 510, and vice versa. For example, both the sensory feedback system 570 and the light field display assembly 510 can be configured to produce a sound field. As another example, the sensory feedback system 570 can be configured to generate a tactile surface without the light field display assembly 510.

To illustrate, in an example embodiment of the light field display system 500, the sensory feedback system 570 may include an acoustic projection device (e.g., an ultrasonic speaker). The acoustic projection device is configured to generate one or more pressure waves that supplement the holographic content upon execution of display instructions received from the controller 520. The generated pressure waves may be, for example, audible (for sound), ultrasonic (for touch), or some combination thereof. Similarly, sensory feedback system 570 may comprise a fragrance projection device. The fragrance projection device may be configured to provide fragrance to some or all of the target area when executing display instructions received from the controller. The fragrance projection device may be connected to the air circulation system of the vehicle to coordinate the air flow in the target area. In addition, sensory feedback system 570 may include a temperature adjustment device. The temperature adjustment device is configured to increase or decrease the temperature in some or all of the target area when executing display instructions received from the controller 520.

In some embodiments, sensory feedback system 570 is configured to receive input from a viewer of LF display system 500. In this case, the sensory feedback system 570 includes various sensory feedback devices for receiving input from a viewer. The sensory feedback devices may include devices such as acoustic receiving devices (e.g., microphones), pressure sensors, joysticks, motion detectors, transducers, and the like. The sensory feedback system may transmit the detected input to controller 520 to coordinate the generation of holographic content and/or sensory feedback.

To illustrate, in an example embodiment of the light field display assembly, the sensory feedback system 570 includes a microphone. The microphone is configured to record audio (e.g., gasping, screaming, laughing, etc.) produced by one or more viewers. The sensory feedback system 570 provides the recorded audio as viewer input to the controller 520. The controller 520 may generate holographic content using the viewer input. Similarly, sensory feedback system 570 may comprise a pressure sensor. The pressure sensor is configured to measure a force applied to the pressure sensor by a viewer. Sensory feedback system 570 can provide the measured force as viewer input to controller 520.

In some embodiments, the LF display system 500 includes a tracking system 580. The tracking system 580 includes any number of tracking devices configured to determine the location, movement, and/or characteristics of viewers in the target area. Typically, the tracking devices are external to the LF display assembly 510. Some example tracking devices include camera assemblies ("cameras"), depth sensors, structured light, LIDAR systems, card scanning systems, or any other tracking device that can track a viewer within a target area (e.g., inside a vehicle).

The tracking system 580 may include one or more energy sources that illuminate some or all of the target area with light. However, in some cases, the target area is illuminated by ambient light and/or by light from the LF display assembly 510 when rendering holographic content. The energy sources project light when executing instructions received from the controller 520. The light may be, for example, a structured light pattern, a light pulse (e.g., an IR flash), or some combination thereof. The tracking system may project light in the following wavelength bands: the visible band (about 380 nm to 750 nm), the infrared (IR) band (about 750 nm to 1700 nm), the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof. The sources may comprise, for example, light emitting diodes (LEDs), micro LEDs, laser diodes, TOF depth sensors, tunable lasers, etc.

The tracking system 580 may adjust one or more emission parameters when executing instructions received from the controller 520. The emission parameters are parameters that affect how light is projected from the sources of the tracking system 580. The emission parameters may include, for example, brightness, pulse rate (including continuous illumination), wavelength, pulse length, some other parameter that affects how light is projected from the source assembly, or some combination thereof. In one embodiment, a source projects pulses of light for time-of-flight operation.

The camera of tracking system 580 captures images of the light (e.g., a structured light pattern) reflected from the target area. The camera captures images when executing tracking instructions received from the controller 520. As previously described, the light may be projected by a source of the tracking system 580. The camera may comprise one or more cameras, e.g., an array of photodiodes (1D or 2D), a CCD sensor, a CMOS sensor, some other device that detects some or all of the light projected by the tracking system 580, or some combination thereof. In one embodiment, tracking system 580 may contain a light field camera external to LF display assembly 510. In other embodiments, the camera is included as part of an LF display module included in the LF display assembly 510. For example, as previously described, if the energy relay element of the LF display module 512 is a bi-directional energy layer that interleaves both an emissive display and an imaging sensor at the energy device layer 220, the LF display assembly 510 may be configured to simultaneously project a light field and record imaging information from the viewing area in front of the display. In one embodiment, the images captured from the bi-directional energy surface form a light field camera. The camera provides the captured images to the controller 520.

When executing the tracking instructions received from controller 520, the camera of tracking system 580 may adjust one or more imaging parameters. Imaging parameters are parameters that affect how the camera captures an image. The imaging parameters may include, for example, frame rate, aperture, gain, exposure length, frame timing, rolling shutter or global shutter capture mode, some other parameter that affects how the camera captures images, or some combination thereof.

Controller 520 controls LF display assembly 510 and any other components of LF display system 500. The controller 520 includes a data storage 522, a network interface 524, a tracking module 526, a viewer profiling module 528, and a light field processing engine 530. In other embodiments, controller 520 includes more or fewer modules than described herein. Similarly, functionality may be distributed among modules and/or different entities in a manner different from that described herein. For example, the tracking module 526 may be part of the LF display assembly 510 or the tracking system 580.

The data storage device 522 is a memory that stores information for the LF display system 500. The stored information may include display instructions, tracking instructions, emission parameters, imaging parameters, virtual models of the target area, tracking information, images captured by the camera, one or more viewer profiles, calibration data for the light field display assembly 510, configuration data for the LF display system 500 (including the resolution and orientation of the LF modules 512), desired viewer geometry, content for graphics creation (including 3D models, scenes, environments, and textures), other information that the LF display system 500 may use, or some combination thereof. The data storage device 522 is a memory, such as a read only memory (ROM), dynamic random access memory (DRAM), static random access memory (SRAM), or some combination thereof.

The network interface 524 allows the light field display system to communicate with other systems or environments over a network. In one example, LF display system 500 receives holographic content from a remote light-field display system through network interface 524. In another example, LF display system 500 uses network interface 524 to transmit holographic content to a remote data storage device.

Tracking module 526 tracks viewers viewing content presented by LF display system 500. To this end, the tracking module 526 generates tracking instructions that control the operation of one or more sources and/or one or more cameras of the tracking system 580 and provides the tracking instructions to the tracking system 580. The tracking system 580 executes the tracking instructions and provides tracking inputs to the tracking module 526.

The tracking module 526 may determine the location of one or more viewers within the target area (e.g., sitting in a seat of the automobile, standing outside the automobile). The determined location may be relative to, for example, some reference point (e.g., a display surface). In other embodiments, the determined location may be within a virtual model of the target area. The tracked location may be, for example, a tracked location of the viewer and/or a tracked location of a portion of the viewer (e.g., eye location, hand location, etc.). The tracking module 526 uses one or more captured images from the cameras of the tracking system 580 to determine these locations. The cameras of tracking system 580 may be distributed around LF display system 500 and may capture stereo images, allowing tracking module 526 to passively track the viewer. In other embodiments, the tracking module 526 actively tracks the viewer. That is, tracking system 580 illuminates some portion of the target area, images the target area, and tracking module 526 determines the location using time-of-flight and/or structured light depth determination techniques. The tracking module 526 uses the determined locations to generate tracking information.
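The passive stereo tracking mentioned above rests on the standard rectified-stereo relation Z = f·B / d, where f is the focal length in pixels, B the camera baseline, and d the pixel disparity of a matched feature. The sketch below illustrates that relation only; the function name and the example numbers are assumptions.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth of a feature from a rectified stereo pair: Z = f*B / d,
    where d is the pixel disparity of the same feature between the
    left and right camera images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# f = 800 px, cameras 10 cm apart, 40 px disparity -> viewer at 2 m.
z = stereo_depth(800.0, 0.10, 40.0)
```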

The tracking module 526 may also receive tracking information as input from a viewer of the LF display system 500. The tracking information may include body movements (i.e., gestures) corresponding to various input options provided to the viewer by LF display system 500. A gesture is a body movement that may be mapped to a particular input (e.g., a request to make a change to a holographic object). A gesture may include, for example, making a fist, sliding a finger in a particular direction, a pinching motion, a releasing motion, a reverse-pinching motion, any other motion of a portion of the body, or some combination thereof. For example, the tracking module 526 may track the viewer's body movements and assign any of the various movements as input to the LF processing engine 530. The tracking module 526 may provide the tracking information to the data storage 522, the LF processing engine 530, the viewer profiling module 528, any other component of the LF display system 500, or some combination thereof.
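The gesture-to-input assignment described above amounts to a dispatch table from recognized movements to engine inputs. The sketch below is illustrative; the gesture names, the returned input strings, and the function name are all assumptions rather than part of the disclosure.

```python
def dispatch_gesture(gesture, handlers):
    """Map a recognized body movement to an LF-processing input,
    returning None for movements with no assigned meaning."""
    handler = handlers.get(gesture)
    return handler() if handler else None

# Hypothetical mapping from gestures to inputs for the LF processing engine.
handlers = {
    "pinch": lambda: "scale_holographic_object_down",
    "reverse_pinch": lambda: "scale_holographic_object_up",
    "swipe_left": lambda: "rotate_holographic_object_left",
}
result = dispatch_gesture("pinch", handlers)
```

Keeping the mapping as data (rather than branching code) makes it easy to load per-viewer gesture preferences from a viewer profile.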

The LF display system 500 includes a viewer profiling module 528 configured to identify and profile viewers. Viewer profiling module 528 generates a profile of a viewer (or viewers) viewing holographic content displayed by LF display system 500. The viewer profiling module 528 generates a viewer profile based in part on viewer inputs and monitored viewer behaviors, actions, and reactions. The viewer profiling module 528 may access information obtained from the tracking system 580 (e.g., recorded images, videos, sounds, etc.) and process that information to determine various information. In various examples, the viewer profiling module 528 may use any number of machine vision or machine hearing algorithms to determine viewer behaviors, actions, and reactions. Monitored viewer behavior may include, for example, smiling, cheering, applauding, laughing, flinching, screaming, expressing excitement, backing away, other changes in the viewer's posture or movement, and so on.

More generally, the viewer profile may contain any information received and/or determined about the viewer viewing the holographic content from the LF display system. For example, each viewer profile may record the viewer's actions or responses to content displayed by the LF display system 500. Some example information that may be included in a viewer profile is provided below.

The viewer profiling module 528 generates a viewer profile for the user of the vehicle. The viewer profiling module 528 builds a viewer profile based in part on user input and monitored behavior. The viewer profile describes the behavior of the viewer with respect to the LF display system 500 and one or more characteristics of the viewer. In some embodiments, the viewer profiling module 528 may receive characteristics from the user and/or infer characteristics based on monitored behavior. The monitored behavior may indicate, for example, whether the user is typically a driver or a passenger. The characteristics may include, for example, the name of the viewer, the age of the viewer, vehicle controller preferences, vehicle interior preferences, vehicle exterior preferences, residence, any other demographic information, or some combination thereof.

The vehicle controller preferences describe a user preferred configuration of holographic content that can be used to provide instructions to the vehicle. Holographic content that may be used to provide some instructions to a vehicle may include, for example, a two-dimensional object, a three-dimensional object, a control switch, a control dial, a steering control interface (e.g., steering wheel), a dashboard, a music control interface (e.g., radio, music player), a climate control interface (heater and/or air conditioning control), a window control interface, a door control interface (allowing locking/unlocking of a door and/or opening of a door), a map control interface, a computer interface, a shifter, a control button, an entertainment video control interface (e.g., allowing vehicle occupants to control and/or present video content), some other control interface for a vehicle, or some combination thereof.

The vehicle interior preferences describe a user preferred configuration of holographic content presented in the vehicle interior. Interior preferences may include, for example, the position of the instrument panel gauges (e.g., speedometer, tachometer, etc.), instrument panel color, door panel color, window tint, and other interior holographic content (including holographic content that may be used to provide instructions to the vehicle).

The vehicle exterior preferences describe a user preferred configuration of holographic content presented along the vehicle exterior using the LF display module. For example, the exterior preferences may describe the vehicle color, what holographic objects are used to enhance the appearance of the vehicle (e.g., rearview mirrors, spoilers, hoods, etc.), the location of the holographic content along the exterior, or some combination thereof.

In some embodiments, the viewer profiling module 528 may update the viewer profile directly based on the user's actions and/or responses to the holographic content displayed by the LF display system 500. In some embodiments, the viewer profiling module 528 provides the identification of the viewer to the controller 520 to build and store a viewer profile.

In some embodiments, the data store 522 includes a viewer profile store that stores viewer profiles generated, updated, and/or maintained by the viewer profiling module 528. The viewer profile may be updated in the data store at any time by the viewer profiling module 528. For example, in one embodiment, when a particular viewer views holographic content provided by LF display system 500, the viewer profile store receives and stores information about the particular viewer in its viewer profile. In this example, the viewer profiling module 528 contains a facial recognition algorithm that can identify viewers and positively identify them when they view the presented holographic content. To illustrate, the tracking system 580 obtains an image of the viewer as the viewer enters the target area of the LF display system 500. The viewer profiling module 528 takes the captured image as input and uses a facial recognition algorithm to identify the viewer's face. The identified face is associated with a viewer profile in the profile store, and as such, all of the obtained input information about the viewer may be stored in that viewer's profile. The viewer profiling module may also positively identify the viewer using a card identification scanner, a voice identifier, a Radio Frequency Identification (RFID) chip scanner, a bar code scanner, or the like.

Because viewer profiling module 528 may positively identify viewers, viewer profiling module 528 may determine each viewer's visits to LF display system 500. The viewer profiling module 528 may then store the time and date of each visit in the viewer profile of each viewer. Similarly, the viewer profiling module 528 may store input received from the viewer during each visit from any combination of the sensory feedback system 570, the tracking system 580, and/or the LF display assembly 510. The viewer profiling module 528 may additionally receive other information about the viewer from other modules or components of the controller 520, which may then be stored with the viewer profile. Other components of controller 520 may then also access the stored viewer profile to determine subsequent content to provide to the viewer.
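The visit logging described above can be sketched as a small profile store; the class, method, and field names below are illustrative assumptions, not the disclosure's schema.

```python
import datetime

class ViewerProfileStore:
    """Minimal sketch of a viewer profile store that logs visits."""
    def __init__(self):
        self._profiles = {}

    def record_visit(self, viewer_id, when=None):
        """Append the time and date of a visit to the viewer's profile."""
        when = when or datetime.datetime.now()
        profile = self._profiles.setdefault(viewer_id, {"visits": []})
        profile["visits"].append(when.isoformat())
        return profile

store = ViewerProfileStore()
store.record_visit("viewer-42", datetime.datetime(2021, 11, 12, 9, 30))
store.record_visit("viewer-42", datetime.datetime(2021, 11, 13, 18, 5))
print(len(store._profiles["viewer-42"]["visits"]))  # 2
```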

LF processing engine 530 generates 4D coordinates in a rasterized format ("rasterized data") that, when executed by LF display assembly 510, cause LF display assembly 510 to render holographic content. LF processing engine 530 may access rasterized data from data store 522. Additionally, the LF processing engine 530 may construct rasterized data from a vectorized data set. Vectorized data is described below. LF processing engine 530 may also generate the sensory instructions needed to provide the sensory content that enhances the holographic objects. As described above, when executed by LF display system 500, the sensory instructions may generate tactile surfaces, sound fields, and other forms of sensory energy supported by LF display system 500. The LF processing engine 530 may access sensory instructions from the data store 522 or build them from a vectorized data set. In general, the 4D coordinates and sensory data represent display instructions executable by the LF display system to generate holographic and sensory content.

The amount of rasterized data describing the flow of energy through the various energy sources in the LF display system 500 is very large. Although rasterized data may be displayed on the LF display system 500 when accessed from the data store 522, rasterized data may not be efficiently transmitted, received (e.g., via the network interface 524), and subsequently displayed on the LF display system 500. Take, for example, rasterized data representing a short segment of a holographic projection by LF display system 500. In this example, the LF display system 500 includes a display that contains billions of pixels, and the rasterized data contains information for each pixel location of the display. The corresponding size of the rasterized data is huge (e.g., several gigabytes per second of display time) and is unmanageable for efficient delivery over commercial networks via the network interface 524. For real-time streaming applications involving holographic content, the problem of efficient delivery is magnified. When an interactive experience is required using input from the sensory feedback system 570 or the tracking module 526, storing only rasterized data on the data store 522 presents an additional problem. To enable an interactive experience, the light field content generated by LF processing engine 530 may be modified in real time in response to sensory or tracking inputs. In other words, in some cases, the LF content cannot simply be read from the data store 522.

Thus, in some configurations, data representing holographic content displayed by LF display system 500 may be passed to LF processing engine 530 in a vectorized data format ("vectorized data"). Vectorized data may be orders of magnitude smaller than rasterized data. Furthermore, vectorized data can provide high image quality while having a data set size that enables efficient sharing of data. For example, the vectorized data may be a sparse data set derived from a denser data set. Thus, depending on how sparsely the vectorized data is sampled from the dense rasterized data, the vectorized data may strike an adjustable balance between image quality and data transfer size. The adjustable sampling used to generate vectorized data enables optimization of image quality for a given network speed. Vectorized data thus enables efficient transmission of holographic content via the network interface 524, and also enables real-time streaming of holographic content over commercial networks.
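The adjustable sampling described above can be sketched as a simple decimation of a dense data set. Real vectorized data would be derived far more selectively; this only illustrates the size-versus-quality trade-off controlled by the sampling ratio.

```python
def sparsify(rasterized, keep_ratio):
    """Derive a sparse sample from a dense data set by keeping roughly
    every n-th element; keep_ratio trades payload size for quality."""
    if not 0 < keep_ratio <= 1:
        raise ValueError("keep_ratio must be in (0, 1]")
    step = max(1, round(1 / keep_ratio))
    return rasterized[::step]

dense = list(range(1000))       # stand-in for dense rasterized data
sparse = sparsify(dense, 0.1)   # roughly 10x smaller network payload
print(len(dense), len(sparse))  # 1000 100
```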

In summary, the LF processing engine 530 may generate holographic content derived from rasterized data accessed from the data store 522, vectorized data accessed from the data store 522, or vectorized data received via the network interface 524. In various configurations, the vectorized data may be encoded prior to transmission and decoded after reception by the LF controller 520. In some examples, the vectorized data is encoded for additional data security and for performance improvements related to data compression. For example, the vectorized data received over the network interface may be encoded vectorized data received from a holographic streaming application. In some instances, accessing the information content encoded in the vectorized data may require a decoder, the LF processing engine 530, or both. The encoder and/or decoder system may be made available to consumers or licensed to third-party vendors.

The vectorized data contains all the information for each sensory domain that the LF display system 500 supports in a manner that supports an interactive experience. For example, the vectorized data for an interactive holographic experience contains any vectorized features that may provide an accurate physical effect for each sensory domain supported by LF display system 500. Vectorized features may include any feature that may be synthetically programmed, captured, computationally evaluated, and the like. The LF processing engine 530 may be configured to convert vectorized features in the vectorized data into rasterized data. The LF processing engine 530 may then project the holographic content converted from the vectorized data from the LF display assembly 510. In various configurations, vectorized features may include: one or more red/green/blue/alpha (RGBA) + depth images; multiple view images with or without depth information at different resolutions, which may contain one high-resolution center image and other views at lower resolution; material characteristics such as albedo and reflectance; surface normals; other optical effects; surface identification; geometric object coordinates; virtual camera coordinates; display plane positions; illumination coordinates; surface tactile stiffness; tactile malleability; touch intensity; amplitude and coordinates of the sound field; environmental conditions; somatosensory energy vectors associated with mechanoreceptors for texture or temperature; audio; and any other sensory domain characteristics. Many other vectorized features are possible.
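As a sketch of how a subset of the vectorized features listed above might be carried as a structured record, assuming hypothetical field names and types (the disclosure does not define a concrete format):

```python
from dataclasses import dataclass, field

@dataclass
class VectorizedFeatures:
    """Illustrative subset of vectorized features; names are assumptions."""
    rgba_depth: list = field(default_factory=list)      # RGBA + depth image
    surface_normals: list = field(default_factory=list)
    albedo: float = 1.0                                 # material characteristic
    haptic_stiffness: float = 0.0                       # surface tactile stiffness
    sound_field_amplitude: float = 0.0

features = VectorizedFeatures(albedo=0.8, haptic_stiffness=0.5)
print(features.albedo, features.haptic_stiffness)  # 0.8 0.5
```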

LF display system 500 may also dynamically update one or more holographic objects in response to instructions from controller 520. For example, in response to an event occurring, controller 520 may instruct LF display assembly 510 to render and/or update one or more holographic objects. The events may include, for example, a user input (e.g., a gesture, a verbal command, a button press, etc.), a navigational alert, a cell phone call, a change in vehicle status, some other event that may be preprogrammed, or some combination thereof. In some embodiments, in response to a user input (e.g., a gesture), controller 520 may adjust one or more of: an operating state of the vehicle (e.g., engine on/off, active gear, drive configuration, etc.), a control interface of the vehicle, a data indicator of the vehicle, an internal configuration of the vehicle (e.g., holographic object arrangement with respect to the driver, interior lighting, interior display information, radio volume, etc.), at least one of the one or more holographic objects, an arrangement of the one or more holographic objects, communicating data over a network, performing a cell phone call (e.g., which may also be a distress call), performing a navigation update, or some combination thereof. The operating state of the vehicle describes some aspect of the vehicle's operation. For example, the operating state may describe a change in vehicle speed, a transmission gear (e.g., drive, park, reverse, etc.), engine operation (on/off), a drive configuration (e.g., two-wheel drive, four-wheel drive, etc.), a headlight configuration (running lights on/off, high beams on/off, low beams on/off), a rearview mirror configuration, some other aspect of vehicle operation, or some combination thereof. A data indicator informs the occupant of some aspect of the vehicle.
The data indicators may include, for example, speedometers, turn signals, tachometers, oil pressure gauges, low tire pressure warnings, low fuel warnings, heads-up display messages, navigation reminders, and the like.
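The event-driven updates described above can be sketched as a dispatch table; the event names and handler actions below are hypothetical placeholders.

```python
# Hypothetical event handlers; each appends a display update for the
# controller to execute.
def on_navigation_alert(ctrl):
    ctrl["updates"].append("show_navigation_reminder")

def on_low_tire_pressure(ctrl):
    ctrl["updates"].append("show_tire_warning")

EVENT_HANDLERS = {
    "navigation_alert": on_navigation_alert,
    "low_tire_pressure": on_low_tire_pressure,
}

def handle_event(controller_state, event):
    """Dispatch a preprogrammed event to its handler, if one exists."""
    handler = EVENT_HANDLERS.get(event)
    if handler:
        handler(controller_state)
    return controller_state

state = {"updates": []}
handle_event(state, "low_tire_pressure")
print(state["updates"])  # ['show_tire_warning']
```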

The LF display system 500 may also produce an interactive viewing experience. That is, the holographic content may be responsive to input stimuli containing information about viewer location, gestures, interactions with the holographic content, or other information originating from viewer profiling module 528 and/or tracking module 526. For example, in an embodiment, LF display system 500 produces an interactive viewing experience using vectorized data representing a real-time performance received via network interface 524. In another example, if a holographic object needs to be moved in a particular direction immediately in response to a viewer interaction, LF processing engine 530 may update the rendering of the scene so that the holographic object moves in the desired direction. This may require the LF processing engine 530 to use the vectorized data set to render the light field in real time based on a 3D graphical scene with the appropriate object placement and movement, collision detection, occlusion, color, shading, lighting, etc., to correctly respond to viewer interactions. The LF processing engine 530 converts the vectorized data into rasterized data for rendering by the LF display assembly 510.

The rasterized data contains holographic content instructions and sensory instructions (display instructions) that represent real-time performance. LF display assembly 510 simultaneously projects real-time performance holographic and sensory content by executing display instructions. The LF display system 500 monitors viewer interactions (e.g., voice responses, touches, etc.) with the presented real-time performance through the tracking module 526 and the viewer profiling module 528. In response to viewer interaction, the LF processing engine produces an interactive experience by generating additional holographic and/or sensory content for display to the viewer.

To illustrate, consider an example embodiment of an LF display system 500 that includes an LF processing engine 530 that generates a plurality of holographic objects. For example, a plurality of holographic objects may be used to represent a shifter that appears as a series of buttons corresponding to various transmission states (e.g., drive, reverse, neutral, etc.). The user may move to touch a holographic object representing an actuation button. Accordingly, tracking system 580 tracks the movement of the viewer's finger relative to the holographic object. The user's movements are recorded by tracking system 580 and sent to controller 520 and LF processing engine 530. In some embodiments, LF processing engine 530 instructs LF display assembly 510 to generate a tactile surface (e.g., using an ultrasonic speaker) that corresponds to at least a portion of the holographic object and occupies substantially the same space as some or all of the outer surface of the holographic object. The LF processing engine 530 uses the tracking information to dynamically instruct the LF display assembly 510 to move the location of the haptic surface and the location of the rendered holographic object such that the user is provided with both visual and tactile perception of pressing a physical button.
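A minimal sketch of the finger-versus-button test implied above, modeling the holographic button's interactive region as a sphere (an assumption for illustration; the disclosure does not specify a geometry):

```python
def finger_presses_button(finger_pos, button_center, button_radius):
    """True when the tracked fingertip lies within the holographic
    button's interactive volume (a sphere, for simplicity)."""
    dist_sq = sum((f - c) ** 2 for f, c in zip(finger_pos, button_center))
    return dist_sq <= button_radius ** 2

# Tracked fingertip approaching a holographic "Drive" button 15 cm
# in front of the display surface (coordinates in meters).
pressed = finger_presses_button((0.02, 0.01, 0.15), (0.0, 0.0, 0.15), 0.03)
print(pressed)  # True
```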

The holographic content track may also contain spatial rendering information. That is, the holographic content track may indicate a spatial location for rendering holographic content inside the car and/or along the outside of the car. For example, a holographic content segment may indicate that some holographic content is to be presented in some holographic viewing volumes, but not in others. To illustrate, the LF processing engine 530 may present instrument controls along the dashboard of the automobile. Similarly, a holographic content track may indicate that holographic content is presented to some viewing volumes, but not to others. For example, the LF processing engine may present instrument controls along the dashboard to the driver, but not to the passenger.

LF processing engine 530 may also create holographic content for display by LF display system 500. Importantly, creating holographic content for display here is different from accessing or receiving holographic content for display. That is, when creating content, the LF processing engine 530 generates entirely new content for display, rather than accessing previously generated and/or received content. The LF processing engine 530 may use information from the tracking system 580, the sensory feedback system 570, the viewer profiling module 528, the tracking module 526, or some combination thereof to create holographic content for display. In some instances, LF processing engine 530 may access information from elements of LF display system 500 (e.g., tracking information and/or viewer profiles), create holographic content based on the information, and in response, display the created holographic content using LF display system 500. The created holographic content may be enhanced with other sensory content (e.g., touch, audio, or scent) when displayed by LF display system 500. Further, the LF display system 500 may store the created holographic content so that it may be displayed in the future.

In some embodiments, LF processing engine 530 incorporates Artificial Intelligence (AI) models to create holographic content for display by LF display system 500. The AI model may include supervised or unsupervised learning algorithms, including but not limited to regression models, neural networks, classifiers, or any other AI algorithm. The AI model may be used to determine viewer preferences based on viewer information recorded by LF display system 500 (e.g., by tracking system 580), which may contain information about the behavior of the viewer.

The AI model may access information from data store 522 to create holographic content. For example, the AI model may access viewer information from one or more viewer profiles in data store 522 or may receive viewer information from various components of LF display system 500. To illustrate, the AI model may determine that a vehicle occupant prefers to see certain car trims (exterior and/or interior). The AI model may determine the preferences based on a set of positive reactions or responses by the viewer to previously viewed automobile trims. That is, the AI model may create holographic content that is personalized for a group of viewers according to the learned preferences of those viewers. The AI model may also store the learned preferences of each viewer in the viewer profile store of the data store 522. In some instances, the AI model may create holographic content for an individual viewer rather than a group of viewers.

One example of an AI model that may be used to identify characteristics of a viewer, identify responses, and/or generate holographic content based on the identified information is a convolutional neural network model having layers of nodes, where the values at the nodes of a current layer are transformations of the values at the nodes of a previous layer. The characteristics may include, for example, the name of the viewer, the age of the viewer, vehicle controller preferences, vehicle interior preferences, vehicle exterior preferences, residence, any other demographic information, or some combination thereof. A transformation in the model is determined by a set of weights and parameters that connect the current layer and the previous layer. For example, the AI model may contain five layers of nodes: layers A, B, C, D, and E. The transformation from layer A to layer B is given by a function W1, the transformation from layer B to layer C is given by W2, the transformation from layer C to layer D is given by W3, and the transformation from layer D to layer E is given by W4. In some instances, a transformation may also be determined by the set of weights and parameters used to transform between previous layers in the model. For example, the transformation W4 from layer D to layer E may be based on the parameters of the transformation W1 used to complete the transformation from layer A to layer B.

The input to the model may be an image acquired by tracking system 580 and encoded onto convolutional layer A, and the output of the model is the holographic content decoded from output layer E. Alternatively or additionally, the output may be a determined characteristic of the viewer in the image. In this example, the AI model identifies latent information in the image that represents the viewer characteristics in identification layer C. The AI model reduces the dimensionality of convolutional layer A to the dimensionality of identification layer C to identify any features, actions, responses, etc., in the image. In some instances, the AI model then expands the dimensionality of identification layer C to generate the holographic content.

The image from the tracking system 580 is encoded into convolutional layer A. The image input in convolutional layer A may be related to various characteristics and/or reaction information, etc., in identification layer C. The relevant information between these elements can be retrieved by applying a set of transformations between the corresponding layers. That is, convolutional layer A of the AI model represents the encoded image, and identification layer C of the model represents a smiling viewer. A smiling viewer in a given image can be identified by applying the transformations W1 and W2 to the pixel values of the image in the space of convolutional layer A. The weights and parameters used for the transformations may indicate the relationship between the information contained in the image and the identification of a smiling viewer. For example, the weights and parameters may be quantifications of shapes, colors, sizes, etc., contained in information representing a smiling viewer in an image. The weights and parameters may be based on historical data (e.g., previously tracked viewers).

The smiling viewer in the image is identified in identification layer C, which represents the smiling viewer identified based on latent information about the smiling viewer in the image.

The identified smiling viewer in the image may be used to generate holographic content. To generate the holographic content, the AI model starts at identification layer C and applies the transformations W3 and W4 to the values of the identified smiling viewer in layer C. The transformations produce a set of nodes in output layer E. The weights and parameters used for the transformations may indicate the relationship between the identified smiling viewer and particular holographic content and/or preferences. In some cases, the holographic content is output directly from the nodes of output layer E, while in other cases, the content generation system decodes the nodes of output layer E into the holographic content. For example, if the output is a set of identified characteristics, LF processing engine 530 may use the characteristics to generate holographic content.

In addition, the AI model may contain layers referred to as intermediate layers. An intermediate layer is a layer that does not correspond to an image, does not identify a characteristic or reaction, and does not generate holographic content. For example, in the given example, layer B is an intermediate layer between convolutional layer A and identification layer C, and layer D is an intermediate layer between identification layer C and output layer E. These hidden layers are latent representations of different aspects of the identification that are not observable in the data, but that can govern the relationships between image elements when identifying characteristics and generating holographic content. For example, a node in a hidden layer may have strong connections (e.g., large weight values) with input values and identification values that share the commonality of "a happy person smiling." As another example, another node in a hidden layer may have strong connections with input values and identification values that share the commonality of "a frightened person screaming." Of course, there may be any number of connections in the neural network. In addition, each intermediate layer may be a combination of functions, such as residual blocks, convolutional layers, pooling operations, skip connections, concatenations, and the like. Any number of intermediate layers B may be used to reduce the convolutional layer to the identification layer, and any number of intermediate layers D may be used to expand the identification layer to the output layer.
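The five-layer A-to-E structure described above can be sketched with randomly initialized transformations W1 through W4; the layer sizes and the ReLU nonlinearity are illustrative assumptions, not parameters from this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five layers A..E connected by transformations W1..W4; the network
# narrows to identification layer C, then widens to output layer E.
sizes = {"A": 64, "B": 32, "C": 8, "D": 32, "E": 64}
W1 = rng.standard_normal((sizes["B"], sizes["A"])) * 0.1
W2 = rng.standard_normal((sizes["C"], sizes["B"])) * 0.1
W3 = rng.standard_normal((sizes["D"], sizes["C"])) * 0.1
W4 = rng.standard_normal((sizes["E"], sizes["D"])) * 0.1

def relu(x):
    return np.maximum(0.0, x)

image = rng.standard_normal(sizes["A"])        # encoded onto layer A
identification = relu(W2 @ relu(W1 @ image))   # reduced to layer C
output = W4 @ relu(W3 @ identification)        # expanded to layer E
print(identification.shape, output.shape)      # (8,) (64,)
```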

In one embodiment, the AI model contains a deterministic method that has been trained with reinforcement learning (thereby creating a reinforcement learning model). The model is trained to use measurements from tracking system 580 as input and changes in the created holographic content as output to improve the quality of performance.

Reinforcement learning is a machine learning approach in which the machine learns what to do (how to map situations to actions) in order to maximize a numerical reward signal. Rather than being told which actions to take (e.g., generating specified holographic content), the learner (e.g., LF processing engine 530) tries to discover which actions yield the highest reward (e.g., improving the quality of the holographic content so that more people cheer). In some cases, an action may affect not only the immediate reward but also the next situation and, therefore, all subsequent rewards. These two features, trial-and-error search and delayed reward, are two significant features of reinforcement learning.

Reinforcement learning is defined not by characterizing a learning method, but by characterizing a learning problem. Basically, a reinforcement learning system captures those important aspects of the problem facing a learning agent interacting with its environment to achieve a goal. For instance, in the example of generating a song for a holographic performer, the reinforcement learning system captures information about viewers in the venue (e.g., age, personality, etc.). The agent senses the state of the environment and takes actions that affect the state in order to achieve one or more goals (e.g., creating a popular song that the viewers cheer). In its most basic form, the formulation of reinforcement learning encompasses three aspects for the learner: sensation, action, and goal. Continuing with the song example, LF processing engine 530 senses the state of the environment through sensors of tracking system 580, displays holographic content to viewers in the environment, and pursues a goal measured by how well the viewers receive the song.

One of the challenges that arises in reinforcement learning is the tradeoff between exploration and exploitation. To increase rewards in the system, a reinforcement learning agent prefers actions that have been tried in the past and found to be effective in generating rewards. However, to discover such actions, the learning agent must select actions that it has not previously selected. The agent "exploits" the information it already knows to obtain rewards, but it also "explores" in order to make better action selections in the future. The learning agent attempts various actions and gradually favors those that appear best while continuing to attempt new actions. On a stochastic task, each action is typically tried many times to obtain a reliable estimate of its expected reward. For example, if the LF processing engine creates holographic content that it knows causes the viewer to laugh only after a long delay, the LF processing engine may vary the holographic content until the time it takes the viewer to laugh decreases.
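The exploration/exploitation tradeoff described above is commonly handled with an epsilon-greedy rule, a standard technique offered here as an illustration rather than one named in this disclosure.

```python
import random

def epsilon_greedy(estimated_rewards, epsilon, rng=None):
    """With probability epsilon, explore a random action; otherwise
    exploit the action with the highest estimated reward."""
    rng = rng or random.Random(7)
    if rng.random() < epsilon:
        return rng.randrange(len(estimated_rewards))   # explore
    return max(range(len(estimated_rewards)),
               key=lambda a: estimated_rewards[a])     # exploit

rewards = [0.1, 0.7, 0.3]   # running reward estimates per content variant
choice = epsilon_greedy(rewards, epsilon=0.0)
print(choice)  # 1 (with epsilon 0, the agent always exploits)
```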

In addition, reinforcement learning considers the whole problem of a goal-directed agent interacting with an uncertain environment. A reinforcement learning agent has explicit goals, can sense aspects of its environment, and can choose actions that yield high rewards. Moreover, the agent typically operates despite significant uncertainty about the environment it faces. When reinforcement learning involves planning, the system must address the interplay between planning and real-time action selection, as well as the question of how models of the environment are acquired and improved. For reinforcement learning to make progress, important sub-problems must be isolated and studied, sub-problems that play clear roles in the complete, interactive, goal-seeking agent.

The reinforcement learning problem is a framing of the machine learning problem in which learning from interaction is used to achieve a goal. The learner and decision maker is referred to as the agent (e.g., LF processing engine 530). The thing the agent interacts with (comprising everything outside the agent) is referred to as the environment (e.g., viewers in a venue, etc.). The two interact continually: the agent selects actions (e.g., creating holographic content), and the environment responds to these actions and presents new situations to the agent. The environment also gives rise to rewards, special values that the agent tries to maximize over time. In one context, the rewards serve to maximize the viewers' positive responses to the holographic content. A complete specification of an environment defines a task, which is one instance of the reinforcement learning problem.

To provide more context, the agent (e.g., content generation system 350) and the environment interact at each of a series of discrete time steps, i.e., t = 0, 1, 2, 3, etc. At each time step t, the agent receives some representation of the environment's state s_t (e.g., measurements from the tracking system 580). The state s_t is within S, where S is the set of possible states. Based on the state s_t and the time step t, the agent selects an action a_t (e.g., changing the holographic content). The action a_t is within A(s_t), where A(s_t) is the set of actions available in state s_t. One time step later (in part as a consequence of its action), the agent receives a numerical reward r_(t+1). The reward r_(t+1) is within R, where R is the set of possible rewards. Once the agent receives the reward, the agent finds itself in a new state s_(t+1).

At each time step, the agent implements a mapping from states to probabilities of selecting each possible action. This mapping is called the agent's policy and is denoted π_t, where π_t(s, a) is the probability that a_t = a if s_t = s. Reinforcement learning methods specify how the agent changes its policy as a result of the states and rewards produced by its actions. The agent's objective is to maximize the total reward it receives over time.
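The interaction loop described above can be sketched with a simple tabular Q-learning agent. This is a minimal, self-contained illustration, not the actual learning method of the LF processing engine 530; the toy environment (holographic "content" actions matched against a hypothetical viewer mood) is an invented stand-in for tracking measurements and viewer responses.

```python
import random
from collections import defaultdict

def q_learning_step(Q, state, env_step, actions, alpha=0.1, gamma=0.9, eps=0.1):
    """One agent-environment interaction: select action a_t via an
    epsilon-greedy policy, receive reward r_{t+1} and state s_{t+1},
    and update the action-value table Q."""
    if random.random() < eps:
        action = random.choice(actions)                      # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])   # exploit
    next_state, reward = env_step(state, action)             # environment responds
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return next_state

# Hypothetical environment: reward 1.0 when the presented content
# matches the viewer's current mood, which then changes at random.
def env_step(state, action):
    reward = 1.0 if action == state else 0.0
    return random.choice(["calm", "lively"]), reward

random.seed(0)
Q = defaultdict(float)
state = "calm"
for _ in range(2000):
    state = q_learning_step(Q, state, env_step, ["calm", "lively"])

# The learned policy prefers content matching the viewer's state.
assert Q[("calm", "calm")] > Q[("calm", "lively")]
```

The policy π_t here is the epsilon-greedy rule over the Q table; maximizing total reward over time corresponds, in the patent's context, to maximizing the viewers' positive responses to the holographic content.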

This reinforcement learning framework is very flexible and can be applied to many different problems in many different ways (e.g., generating holographic content). The framework suggests that, whatever the details of the sensory, memory, and control apparatus, any problem (or purpose) of learning goal-directed behavior can be reduced to three signals passed back and forth between the agent and its environment: one signal indicates the choices made by the agent (actions), one signal indicates the basis on which the choices are made (states), and one signal defines the agent's goal (rewards).

Of course, the AI model may include any number of machine learning algorithms. Some other AI models that may be employed are linear and/or logistic regression, classification and regression trees, k-means clustering, vector quantization, and the like. Regardless, the LF processing engine 530 typically takes input from the tracking module 526 and/or the viewer profiling module 528 and in response the machine learning model creates holographic content. Similarly, the AI model may direct the rendering of holographic content.

The foregoing examples of creating content are not limiting. Most broadly, LF processing engine 530 creates holographic content for display to a viewer of LF display system 500. Holographic content may be created based on any of the information contained in LF display system 500.

The vehicle interface 532 provides instructions to the vehicle based on user interaction with the holographic content. The vehicle interface 532 monitors interactions with the holographic content and identifies instructions to provide to the vehicle based on the monitored interactions. The vehicle interface 532 applies, for example, a lookup table (LUT), machine learning, a neural network, or some combination thereof, to the tracking information and the generated holographic content to determine whether the user is interacting with the holographic content. For example, the vehicle interface 532 monitors user interaction with a holographic object corresponding to the holographic drive button of a shifter. When the user presses the holographic drive button (as described above), the vehicle interface 532 determines (e.g., via machine learning) that the user intends to put the vehicle in drive and instructs the vehicle to engage its transmission accordingly. In another example, the vehicle interface 532 may monitor user interaction with a holographic object corresponding to a steering wheel. When the user rotates the holographic object in a particular direction, the vehicle interface 532 determines (e.g., via machine learning) that the user intends to turn the vehicle in that direction and instructs the vehicle to turn accordingly.
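The lookup-table approach mentioned above can be sketched as follows. The control names, gesture names, and vehicle commands are hypothetical placeholders for illustration only, not the actual interface of the vehicle interface 532.

```python
from typing import Optional

# Hypothetical LUT mapping a detected interaction with a holographic
# control to an instruction for the vehicle.
INSTRUCTION_LUT = {
    ("shifter", "press_drive"):         "engage_drive",
    ("shifter", "press_reverse"):       "engage_reverse",
    ("steering_wheel", "rotate_left"):  "turn_left",
    ("steering_wheel", "rotate_right"): "turn_right",
    ("door_panel", "press_lock"):       "lock_doors",
}

def instruction_for(control: str, gesture: str) -> Optional[str]:
    """Return the vehicle instruction for a monitored interaction,
    or None when the interaction maps to no instruction."""
    return INSTRUCTION_LUT.get((control, gesture))

assert instruction_for("shifter", "press_drive") == "engage_drive"
assert instruction_for("shifter", "press_park") is None
```

In practice, the LUT key would be produced by whatever tracking or machine-learning stage classifies the user's gesture against the rendered holographic controls.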

In some embodiments, the vehicle interface 532 interfaces with the autonomous driving functions of the vehicle. An autonomous driving function is a process by which the vehicle drives itself with little or no human intervention. For example, a vehicle occupant may provide a destination address via the LF display system 500, and the vehicle will then automatically drive itself to that destination. A person skilled in the art of autonomous vehicles may implement this autonomous function in a vehicle. In some embodiments, the LF display system may present a holographic driver to one or more passengers of the autonomous vehicle. The holographic driver may be, for example, an image of a person with which one or more of the vehicle occupants may interact. When a passenger interacts with the holographic driver, LF display system 500 may receive input (e.g., a destination address) from the passenger. LF display system 500 may dynamically update the holographic driver (e.g., cause the holographic driver to look at the passenger speaking to it) and/or other holographic content presented to the passengers. The LF display system 500 may provide one or more inputs from the passengers to the autonomous driving functions of the vehicle.

Fig. 6A is a perspective view of a vehicle 600 enhanced with an LF display system in accordance with one or more embodiments. In the illustrated embodiment, the vehicle 600 is an automobile. In other embodiments, vehicle 600 is some other type of machine for transporting people, cargo, sensor equipment, and/or weapons. The vehicle 600 may be, for example, an automobile (e.g., car, truck, etc.), an airplane, a drone, an unmanned aerial vehicle, a tank, a boat, a submarine, some other machine for transportation, or some combination thereof. The LF display system is an embodiment of the LF display system 500.

In the illustrated embodiment, the LF display system includes a plurality of LF display modules that are part of an exterior surface portion of the vehicle 600. Each LF display module is shown as a dashed polygon. It should be noted that in practice, the size of the LF display modules, the number of LF display modules, the location of the LF display modules, or some combination thereof may be different than shown. The LF display modules may be tiled to form a seamless display surface across some or all of the exterior surfaces of the vehicle. The exterior surface of the vehicle is the surface of the vehicle that is visible to a viewer outside the vehicle. The exterior surface may comprise, for example, a window, a body, a wheel housing, some other portion of the exterior surface, or some combination thereof. For example, the vehicle 600 includes a plurality of LF display modules on the body (e.g., LF display module 610), a plurality of LF display modules on the wheelhouse (e.g., LF display module 620), and a plurality of LF display modules on the windows (e.g., LF display module 630). It should be noted that in some embodiments, the LF display module on the glazing may be transparent to visible light.

The LF display modules along the exterior surface are configured to project holographic content to change the appearance of the vehicle 600. In this way, a user of the LF display system (e.g., a vehicle driver or vehicle manufacturer) may modify how the vehicle appears to viewers outside the vehicle. For example, the LF display system may change the texture, color, and features of some or all of the vehicle and/or project decorative objects. In the case of a bus, the LF display system may be used as a billboard, projecting holographic objects behind and floating in front of the display surface to attract the attention of pedestrians on the roadway. In the case of a tank, an LF display system may be used to project bushes and foliage (possibly as real images) to disguise the vehicle by blending it with its surroundings. In another embodiment, a light field camera on the right side of the vehicle, opposite LF display module 630, may capture images that are then projected on the left side of the vehicle by displays 610, 620, and 630, making the vehicle appear transparent to an observer outside the vehicle looking at its left side.

FIG. 6B is a perspective view of the vehicle 600 of FIG. 6A presenting holographic content in accordance with one or more embodiments. The LF display module projects the holographic object at a plurality of configurable physical locations relative to a display surface of the LF display module. In the illustrated embodiment, the LF display system of the vehicle 600 is rendering holographic content such that the appearance of the vehicle 600 is altered. For example, the LF display system is rendering holographic objects 640, 650, 660, and 670 at respective configurable physical locations. Holographic objects 640 and 650 appear as decorative rear turn signal devices projected from LF display module 610. The holographic objects 640, 650 appear outside the LF display module in the holographic object volume of the LF display system at certain vantage points. In some embodiments, an ultrasonic speaker within the LF display module may be used to generate a tactile surface that coincides with at least a portion of the holographic object. For example, the ultrasonic speaker may generate a tactile surface that coincides with the outer surface of the holographic object 650.

Holographic object 660 is an image that appears as a line along the outer surface of vehicle 600. Holographic object 670 changes the color of some of the exterior surfaces of vehicle 600. It should be noted that the illustrated holographic objects 640, 650, 660, and 670 are merely examples, and in other embodiments, other holographic objects (e.g., decorative fins, brake lights projected from the vehicle 600, etc.) may be presented by the LF display system and they may be presented at other physical locations.

Fig. 7A is a perspective view of an interior 700 of a vehicle enhanced with an LF display system in accordance with one or more embodiments. In the illustrated embodiment, the vehicle is an automobile. In other embodiments, the vehicle is some other type of machine for transporting people, cargo, sensor devices, and/or weapons. The vehicle may be, for example, an automobile, an airplane, a drone, an unmanned aerial vehicle, a tank, a boat, a submarine, some other machine for transportation, or some combination thereof. In some embodiments, the vehicle is vehicle 600. The LF display system is an embodiment of the LF display system 500.

The LF display system includes an LF display assembly. The LF display assembly includes at least one LF display module mounted on an interior surface of the interior 700 of the vehicle. The at least one LF display module is configured to present one or more holographic objects at a plurality of locations relative to a display surface of the at least one LF display module. The location includes a location between the display surface and a viewing volume of the at least one display surface.

In the illustrated embodiment, the LF display system includes a plurality of LF display modules that are part of the interior surface of the vehicle. Each LF display module is shown as a dashed polygon. It should be noted that in practice, the size of the LF display modules, the number of LF display modules, the location of the LF display modules, or some combination thereof may be different than shown. The LF display modules may be tiled to form a seamless display surface across some or all of interior 700. In some embodiments, the LF display modules are formed as panels that are specific to different locations of the interior. The interior surface of the vehicle is a vehicle surface in the interior 700 of the vehicle. The interior surface may comprise, for example, a window, a seat, a door panel, a roof, an instrument panel, a floor, some other portion of interior 700, or some combination thereof.

For example, the vehicle includes multiple LF display modules (e.g., LF display module 710) on the dashboard 720 and multiple LF display modules (e.g., LF display module 730) on the door panels. The LF display modules along the interior surfaces are configured to project holographic content to change the appearance of portions of the interior 700. In this way, users of the LF display system may modify how the vehicle interior appears to them. For example, the holographic content may include a steering control interface (e.g., a steering wheel), an instrument panel (e.g., speedometer, tachometer, etc.), a music control interface (e.g., radio, music player), a climate control interface (e.g., heater and/or air conditioning controls), a window control interface (e.g., for controlling the opening/closing of a window), a door control interface (e.g., for locking/unlocking or opening a door), a map control interface (also referred to as a navigation control interface), a computer interface (e.g., to make a call or interface with a cell phone), a shifter interface (e.g., corresponding to a manual or automatic transmission), some other control interface for the vehicle, or some combination thereof. It should be noted that some or all of the holographic content may contain tactile buttons, knobs, joysticks, or some other type of interface through which a user may interact with the holographic content. Further, the LF display system may display holographic content customized for one or more users of the vehicle. For example, the holographic content (e.g., dashboard, etc.) presented by LF display module 710 may be customized for the driver, whereas the window control interface presented by LF display module 730 is customized for the user of seat 740.

FIG. 7B is a perspective view 750 of the interior 700 of FIG. 7A presenting holographic content in accordance with one or more embodiments. In the illustrated embodiment, the LF display system of the vehicle is rendering holographic content such that the appearance of the interior 700 is altered. For example, the LF display system is rendering holographic object 760, holographic object 765, holographic object 770, holographic object 775, holographic object 780, and holographic object 785. It should be noted that the illustrated holographic objects 760, 765, 770, 775, 780, and 785 are merely examples, and in other embodiments other holographic content and/or holographic objects may be rendered by the LF display system. In some embodiments, the configuration of the holographic content presented within interior 700 is based in part on viewer profiles of the vehicle occupants. For example, the driver's viewer profile may indicate that the driver prefers driving an automatic transmission rather than a manual transmission, and the LF display system may present the shifter as a holographic automatic shifter. In a similar manner, a viewer profile of a vehicle occupant may be used to customize the layout, color, etc. of the holographic content for that occupant.

In FIG. 7B, holographic object 760 appears as a steering wheel (e.g., a steering interface controller). Holographic object 765 is a real image of the dashboard. Holographic object 770 may be, for example, a music control interface, a map control interface, a communications interface (e.g., making a wireless call), a computer control interface, or some combination thereof. Holographic object 780 is a real image of the shifter. The holographic object 785 is a real image of the window control interface and/or the door control interface.

In some embodiments, the light field display modules 710 and 730 may be dual-energy projection devices that use ultrasonic pressure waves to project volumetric haptic surfaces. As an example, a haptic surface may feel similar to a textured control and coincide with at least a portion of the projected holographic control. For example, an ultrasonic speaker may generate a tactile surface that coincides with the outer surface of the holographic object 760.

The LF display system may also contain one or more cameras that are part of the tracking system and generate tracking information describing the movement of the vehicle users. These cameras may be internal to the light field display assembly 510, or exist outside of the assembly 510 as separate cameras. The LF display system may use the tracking information to monitor hand gestures and/or positions relative to holographic content (including holographic objects) within interior 700. For example, holographic object 770 may contain a real image of a button. The LF display system may project ultrasonic energy to generate a haptic surface corresponding to the holographic object 770, occupying substantially the same space as some or all of the outer surface of the object (i.e., the button). The LF display system may use the tracking information to dynamically move the position of the tactile surface and the button as the user "presses" the button. The LF display system provides instructions to the vehicle based on the tracking information. For example, in response to tracking information indicating that the user has "pressed" a button, the LF display system may instruct the vehicle to open a navigation interface or lock a door. In a similar manner, the LF display system may use the tracking information to provide instructions to the vehicle when the user interacts with other holographic content (e.g., dashboard, music control interface, window control interface, door control interface, etc.). Further, in some embodiments, some users may interact with the holographic content while other users cannot. For example, in a law enforcement vehicle, it may be advantageous to not allow a passenger in the rear seat to access the door controls (e.g., holographic object 785).
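One simple way the tracking information could be used to detect a "press" of a holographic button is a proximity test between the tracked fingertip and the button's volume. This is an illustrative sketch; the coordinates, button radius, and press depth below are assumed values, not parameters from the described system.

```python
import math

def is_pressing(fingertip, button_center, button_radius=0.02, press_depth=0.005):
    """Return True when the tracked fingertip (x, y, z) has penetrated the
    spherical holographic button's surface by at least press_depth.
    All units are in meters."""
    dx, dy, dz = (f - b for f, b in zip(fingertip, button_center))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return dist <= button_radius - press_depth

# A fingertip 1 cm from the center of a 2 cm-radius button counts as a press;
# a fingertip 5 cm away does not.
assert is_pressing((0.0, 0.0, 0.01), (0.0, 0.0, 0.0))
assert not is_pressing((0.0, 0.0, 0.05), (0.0, 0.0, 0.0))
```

On a positive detection, the system could relocate the tactile surface to follow the depressed button and emit the corresponding vehicle instruction.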

In some embodiments, in response to user input, the LF display system may adjust one or more of the following: an operating state of the vehicle (e.g., engine on/off, active gear, drive configuration, etc.), an interior configuration of the vehicle, at least one of the one or more holographic objects, an arrangement of the one or more holographic objects, or some combination thereof.

One potential advantage of augmenting interior 700 with holographic content is that it allows the interior 700 of a vehicle to be dynamically customized. The interior configuration of the vehicle refers to the holographic content depicting the vehicle interior 700. For example, the driver may customize the interior configuration by adjusting the gauges (e.g., holographic object 765) they wish to see in the dashboard, the position of the steering wheel (e.g., holographic object 760), whether the vehicle appears to have an automatic transmission or a manual transmission (e.g., via holographic object 780), the position of the window control interface, the position of the door control interface, and so forth. Further, in some embodiments, the actual position of the driver may be in seat 790 or seat 795. It should be noted that the vehicle controls are holographic content. Thus, in addition to the embodiment shown in fig. 7B, in an alternative embodiment the LF display system (e.g., via LF display module 710) renders holographic object 760 and holographic object 765 in front of seat 790 rather than seat 795. Likewise, other holographic content (e.g., holographic object 780) may be adjusted for a driver located in seat 790.

In some embodiments, the LF display system may present one or more holographic objects customized for each user based in part on the tracking information. In this way, viewers at least a threshold distance (e.g., a few feet) from each other can see disparate holographic content. For example, the LF display system tracks the position of the driver (e.g., eye position) and the position of the passenger within the vehicle (e.g., eye position), and determines the perspective of the holographic content (e.g., holographic object) that should be visible to the passenger of the vehicle from its tracked position relative to the position where the holographic object will be rendered. The LF display system selectively emits light from particular pixels of the LF display module, corresponding to the determined viewing angles, toward the tracked eye positions. Thus, completely different holographic content can be presented to different viewers at the same time. For example, the LF display module may present the door panels as red leather and simultaneously present holographic content to different passengers indicating that the door panels are white canvas. And neither passenger can perceive what is being presented to the other passenger.
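The per-viewer rendering described above depends on knowing the direction along which each display pixel must emit light to reach a tracked eye position. A geometric sketch of that computation follows; all positions are assumed 3D coordinates in a shared display frame, invented for illustration.

```python
import math

def emission_direction(pixel_pos, eye_pos):
    """Unit vector along which a display pixel must emit to reach a
    tracked eye position (both given as (x, y, z) tuples)."""
    d = [e - p for e, p in zip(eye_pos, pixel_pos)]
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

# The same pixel reaches the driver's and a passenger's eyes along
# different rays, so each viewer can be shown different content.
pixel = (0.0, 0.0, 0.0)
driver_dir = emission_direction(pixel, (-0.4, 0.3, 0.6))
passenger_dir = emission_direction(pixel, (0.4, 0.3, 0.6))
assert driver_dir != passenger_dir
assert abs(sum(c * c for c in driver_dir) - 1.0) < 1e-9  # unit length
```

Selecting, per pixel, the subset of emission angles that land on each tracked eye is what lets the display present, e.g., red leather to one passenger and white canvas to another simultaneously.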

In some embodiments, the LF display system may present one or more holographic objects related to navigation and/or maps. For example, the holographic object may comprise: a navigation screen showing information about a position of the vehicle; a navigation assistance device; a navigation direction indicator; navigation information; a proposed vehicle route; an image associated with a vehicle surroundings; navigation content projected in front of the driver to help the driver focus on the road; navigation or information content that is overlaid on actual objects in the vicinity of the vehicle, or some combination thereof. In some embodiments, these holographic objects may be presented as part of a holographic heads-up display.

Fig. 8 is a perspective view 800 of an interior of a vehicle enhanced with an LF display system including enhanced windows in accordance with one or more embodiments. In the illustrated embodiment, the vehicle is an automobile. In other embodiments, the vehicle is some other type of machine for transporting people, cargo, sensor devices, and/or weapons. The vehicle may be, for example, an automobile, an airplane, a drone, an unmanned aerial vehicle, a tank, a boat, a submarine, some other machine for transportation, or some combination thereof. In some embodiments, the vehicle is vehicle 600. The LF display system is an embodiment of the LF display system 500.

In the illustrated embodiment, the LF display system includes an enhancement window 810, an enhancement window 820, and an enhancement side window 830. In other embodiments, the LF display system may contain more or fewer enhancement windows. An enhancement window is a window that contains at least some LF display modules, shows views of the exterior of the vehicle from the interior of the vehicle, and can modify these views or superimpose other content onto them. This may be used, as examples, for navigation, entertainment, or environmental control purposes. An enhancement window may overlay exterior views captured by one or more 2D or light field cameras, where a 2D or 3D navigation assistance device is projected at a location far in front of the driver so that the driver does not have to repeatedly refocus on the console while following the navigation assistance device. In another embodiment, an enhancement window may show holographic objects to passengers to highlight features of interest outside the vehicle (e.g., as a tour-guide feature). And an enhancement window may tint itself to achieve a comfortable light level for environmental control. In the illustrated embodiment, an LF display module (not shown) spans each of enhancement window 810, enhancement window 820, and enhancement window 830. In other embodiments, the LF display module of at least one of the enhancement windows forms part of that enhancement window, and at least some of the remaining portion comprises a material transparent to visible light (e.g., tempered glass).

The LF display modules forming some or all of the enhancement windows (e.g., 810, 820, and 830) of the vehicle may be able to adjust their tint by dynamically attenuating visible light, either according to some programming or according to instructions from a user (e.g., a driver and/or passenger). The LF display modules may tint some or all of the enhancement windows according to a viewer profile of a vehicle occupant. In some embodiments, the attenuation levels may differ between windows. For example, as illustrated in fig. 8, enhancement window 820 has a darker tint than enhancement window 810 and enhancement window 830.
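Per-window tinting could be driven by a simple mapping from measured ambient light to an attenuation fraction. The target illuminance and attenuation limit below are illustrative assumptions, not values from the described system.

```python
def tint_level(ambient_lux, target_lux=500.0, max_attenuation=0.9):
    """Fraction of visible light to attenuate so the transmitted light
    approaches target_lux, clamped to the window's attainable range
    [0.0, max_attenuation]."""
    if ambient_lux <= target_lux:
        return 0.0  # already at or below the comfortable level: leave clear
    attenuation = 1.0 - target_lux / ambient_lux
    return min(attenuation, max_attenuation)

assert tint_level(250.0) == 0.0              # dim conditions: no tint
assert abs(tint_level(1000.0) - 0.5) < 1e-9  # bright: cut half the light
assert tint_level(100000.0) == 0.9           # direct sun: clamp at max tint
```

Running this per window (with per-occupant targets drawn from viewer profiles) would naturally yield different attenuation levels for different windows, as in the figure.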

The LF display modules of an enhancement window may render holographic content. For example, in fig. 8, enhancement window 810 presents holographic content 840 and enhancement window 820 presents holographic content 850. Holographic content 840 and/or holographic content 850 may contain one or more holographic objects that are captured images, or enhanced captured images, from outside the vehicle. In one embodiment, the holographic content 840 acts as a holographic rearview mirror. In this embodiment, there may be one or more 2D cameras positioned to capture images of a local area behind the vehicle, and the holographic content 840 may present this view as a 2D image on a viewing plane projected comfortably in front of the driver when the driver is backing up. The LF display system may include one or more cameras positioned to capture images of the local area behind the vehicle (e.g., within one or more LF display modules). In another embodiment, there may be one or more light field cameras positioned behind the vehicle, and the holographic content 840 may present this view as a holographic image in front of the driver. The image is rendered within the holographic object volume of enhancement window 810. Thus, the image may be rendered as a portion of a holographic object in real space (e.g., between the enhancement window 810 and a passenger of the vehicle), a portion of a holographic object in the plane of the LF display module, a portion of a holographic object behind and/or outside the plane of the enhancement window 810, or some combination thereof. In a similar manner, holographic content 850 may be rendered. For example, a captured image of a portion of the local area may be rendered as a portion of a holographic object in front of enhancement window 820 (e.g., between the enhancement window 820 and an occupant of the vehicle), a portion of a holographic object in the plane of the LF display modules making up the enhancement window 820, a portion of a holographic object outside the plane of the enhancement window 820, or some combination thereof.

It should be noted that the enhancement window (and more generally, the LF display module and/or multiple LF display modules tiled together) may act as a window. It is important to note that this is different from merely presenting the image from the camera on the electronic display. In electronic display screens, the presented image is static and does not change based on the position of the viewer. In contrast, holographic objects presented behind the display surface of the enhancement window support full parallax (e.g., recreate the same beam as the physical light at the surface of the holographic object, etc.), such that the holographic object changes based on the viewer's position relative to the holographic object.

Fig. 9 is a perspective view 900 of an interior of a vehicle enhanced with an LF display system including an enhanced skylight 910 in accordance with one or more embodiments. In the illustrated embodiment, the vehicle is an automobile. In other embodiments, the vehicle is some other type of machine for transporting people, cargo, sensor devices, and/or weapons. The vehicle may be, for example, an automobile, an airplane, a drone, an unmanned aerial vehicle, a tank, a boat, a submarine, some other machine for transportation, or some combination thereof. In some embodiments, the vehicle is vehicle 600. The LF display system is an embodiment of the LF display system 500.

In the illustrated embodiment, the LF display system includes an enhanced skylight 910. An enhanced skylight is an enhancement window, as described above with reference to fig. 8, positioned along the roof of the vehicle interior. Enhanced skylight 910 is a window that contains at least some LF display modules (e.g., LF display module 920). In the illustrated embodiment, LF display modules span the enhanced skylight 910 (tiled to form a seamless display surface). In other embodiments, the LF display modules form part of the enhanced skylight 910, and at least some of the remaining portion comprises a material transparent to visible light (e.g., tempered glass).

Enhanced skylight 910 presents holographic content to users (e.g., passengers) of the vehicle. The LF display system may include one or more cameras configured to capture images of a local area above the vehicle (e.g., as part of the LF display module, or external to it). The LF display system may generate and update the holographic content presented by enhanced skylight 910 based in part on the captured images. In some embodiments, enhanced skylight 910 presents holographic content according to a viewer profile of at least one of the vehicle occupants.

Enhanced skylight 910 presents holographic content within its holographic object volume. As illustrated in fig. 9, the enhanced skylight 910 is presenting a holographic object 930, which is a shark. Holographic object 930 may be presented anywhere within the holographic object volume of the display surface of enhanced skylight 910. And, as discussed above with respect to, e.g., fig. 2, the holographic object volume includes a region between the display surface and the user as well as a region behind the display surface. A user within the viewing volume of enhanced skylight 910 may move to other positions relative to holographic object 930 to see different views of the shark. Furthermore, different users viewing the shark from different locations will see different perspectives of it. For example, a user to the left of the shark will see one side of it, and a user to the right will see the other side. In some embodiments, holographic object 930 is visible to all users within the viewing volume of enhanced skylight 910 who have an unobstructed line of sight (i.e., not blocked by an object or person) to holographic object 930. These users may be unconstrained, such that they may move around within the viewing volume to see different perspectives of the holographic object 930. Thus, the LF display system can render a holographic object such that multiple unconstrained users simultaneously see different perspectives of it in real-world space, as if the holographic object were physically present.

In some embodiments, the LF display system may: (1) generate a haptic surface that coincides with the surface of the holographic object 930 so that a user may touch the holographic object 930; and (2) provide audio content corresponding to the rendered holographic object. The light field display assembly may include an acoustic emitter and a dual-energy surface including a volumetric tactile projection system, resulting in a projected tactile surface. In another embodiment, an external tactile projection system may be used to project the tactile surface. Further, similar to figs. 7A, 7B, and 8, the LF display module of enhanced skylight 910 and/or other LF display modules may contain one or more cameras as part of a tracking system to generate tracking information describing the movement of the vehicle users. The LF display system may use the tracking information to monitor hand gestures and/or positions relative to holographic content (including holographic objects) within the interior. For example, a user may attempt to touch the holographic object 930 as if it were a physical shark. The LF display system may use the tracking information to dynamically move the location of the tactile surface and to dynamically move the shark as the user "touches" it.

In some embodiments, the LF display system may present one or more holographic objects tailored to each viewer based in part on the tracking information. In this way, viewers at least a threshold distance (e.g., a few feet) from each other can see entirely different holographic content. For example, the LF display system tracks the position of the driver (e.g., eye position) and the position of a passenger within the vehicle (e.g., eye position), and determines the perspective of the holographic object that should be visible to each occupant based on their tracked positions relative to the position where the holographic object will be rendered. The LF display system selectively emits light toward the tracked eye positions from the particular pixels of the LF display modules (e.g., of enhanced skylight 910) that correspond to the determined viewing angles. Thus, completely different holographic content can be presented to different viewers at the same time. For example, enhanced skylight 910 and/or other LF display modules may present space-related holographic content to one passenger and simultaneously present sea-related holographic content to a different passenger. And neither passenger can perceive what is being presented to the other. In another example, enhanced skylight 910 and/or other LF display modules may present holographic content to a passenger but not the driver (e.g., holographic object 930 is visible to the passenger for a certain period of time, but not to the driver for the same period). The LF display system may also present content unique to each seat of the vehicle without requiring passenger tracking. In another embodiment, a holographic object highlighting features of interest visible outside each window may be projected to the passenger in the seat closest to that window.

It should be noted that the enhanced skylight 910 is an enhanced window and may serve as a window. As discussed above with respect to fig. 8, this is different from merely presenting a camera image on an electronic display. On an electronic display screen, the presented image is static and does not change based on the position of the viewer. In contrast, holographic objects presented behind the display surface of enhanced skylight 910 support full parallax (e.g., by recreating the same light rays that would emanate from the surface of a physical object), such that the holographic object changes based on the viewer's position relative to it.

Fig. 10 is a perspective view 1000 of a vehicle 1010 enhanced with an LF display system to reduce blind spots in accordance with one or more embodiments. In the illustrated embodiment, the vehicle 1010 is an automobile. In other embodiments, vehicle 1010 is some other type of machine for transporting people, cargo, sensor equipment, and/or weapons. Vehicle 1010 may be, for example, an automobile, an airplane, a drone, an unmanned aerial vehicle, a tank, a boat, a submarine, some other machine for transportation, or some combination thereof. The LF display system is an embodiment of the LF display system 500.

In the illustrated embodiment, the LF display system includes a plurality of cameras as part of an exterior surface portion of the vehicle 1010 (e.g., described above with respect to figs. 6A-6B) and LF display modules as part of an interior surface of the vehicle 1010 (e.g., described above with respect to figs. 7A-B). In FIG. 10, some, but not all, of the cameras are visible (e.g., cameras 105A-G). Additionally, in some embodiments, more or fewer cameras may be used (e.g., in some cases, there is a single camera). Preferably, the cameras together have a field of view covering all sides of the vehicle, and may also cover portions of the local area above and/or below the vehicle 1010. As illustrated, there are no LF display modules along the exterior surface of the vehicle 1010. In some embodiments (not shown), the cameras are part of LF display modules along the exterior of the vehicle 1010, and these LF display modules may be tiled to form a seamless display surface across some or all of the exterior surfaces of the vehicle 1010. Additionally, the interior of the vehicle 1010 contains multiple LF display modules that may be tiled to form a seamless display surface across some or all of the interior surfaces of the vehicle 1010.

Vehicle 1010 includes cameras that capture images of a local area around vehicle 1010. The LF display system uses the captured images to generate holographic content corresponding to those images, which is then presented to the driver 1020 using the LF display modules along the interior of the vehicle 1010. The LF display system presents the generated holographic content to the driver 1020 such that at least a portion of the vehicle between the driver 1020 and a blind spot appears transparent. For example, if the captured images contain an object 1030 (illustrated as a dog), the holographic content contains a holographic object that is located where the object 1030 is located and that moves with the object 1030. Further, the holographic object may be rendered to appear as a dog or as some other object (e.g., a dog overlaid in a flashing red color to alert the driver 1020 to the object 1030 in the blind spot). The holographic content may be a holographic scene if the capture camera outside the vehicle is a light field camera, or a 2D view of the scene projected at a distance from the driver if the capture camera is not a light field camera. The view of the exterior may be enhanced with a flashing red warning, a flashing light, or any other indicator.

In this way, the LF display system can greatly increase the field of view of the driver 1020, and possibly of other passengers as well. The LF display system increases the field of view by making some or all of the vehicle 1010 appear substantially transparent from the perspective of the driver 1020 (and/or passengers). Thus, the LF display system provides the driver 1020 and/or other passengers with an experience of "looking" through portions of the vehicle that are opaque to visible light. It should be noted that this is different from simply displaying an image on a screen. The holographic content is presented such that it appears at the same depth as the corresponding physical object (e.g., similar to a user looking through a window and seeing an object on the other side of the window). For example, object 1030 is located in a blind spot of driver 1020. A blind spot is a location that is outside the driver's field of view when the driver 1020 looks through the windshield 1040 and/or other windows of the vehicle 1010. As described above, the LF display system expands the field of view of the driver 1020 such that the field of view contains an enhanced line of sight to the object 1030. In a similar manner, the driver 1020 can look around and see holographic content corresponding to other blind spots. This alleviates the need for backup cameras and the like, because the LF display system presents holographic content to the driver 1020 that simulates looking "through" the vehicle 1010 to see objects in the local area.
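A blind spot of the kind described can be modeled in a simplified top-down sketch. The function name, gaze direction, and field-of-view parameter below are illustrative assumptions, not the patented method: an object is in a blind spot if its bearing from the driver falls outside the angular field of view available through the windows.

```python
import math

def in_blind_spot(driver_pos, object_pos, gaze_deg, window_fov_deg):
    """Top-down 2D test: the object is in a blind spot if its bearing from
    the driver lies outside the angular field of view available through the
    windows. Such objects are candidates for holographic rendering at
    their true depth."""
    dx = object_pos[0] - driver_pos[0]
    dy = object_pos[1] - driver_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the bearing and the gaze direction.
    offset = (bearing - gaze_deg + 180.0) % 360.0 - 180.0
    return abs(offset) > window_fov_deg / 2.0

# Driver at the origin facing +x with an assumed 120-degree window FOV.
ahead = in_blind_spot((0, 0), (5, 0), 0.0, 120.0)    # directly ahead
behind = in_blind_spot((0, 0), (-5, 0), 0.0, 120.0)  # directly behind
```

An object directly ahead is visible through the windshield, while an object directly behind triggers the blind-spot path and would be rendered holographically at its physical depth.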

In some embodiments, the LF display system may overlay a wireframe of some or all of the body of the vehicle 1010 to help the driver 1020 understand the position of the vehicle 1010 relative to objects in the local area around the vehicle 1010.

The captured images may cover some or all of the local area around the vehicle 1010. In some embodiments, the LF display system uses the captured images of the local area to generate holographic content that is then presented to a viewer (e.g., viewer 1050) outside the vehicle 1010 using LF display modules along the outside of the vehicle 1010. For example, the LF display system may make the car appear transparent so that the viewer 1050 may see the object 1030 through it. To do so, the LF display system may determine a rendering perspective of the viewer 1050 within a viewing volume of the LF display modules along a first side of the vehicle 1010, based on a captured image of the portion of the local area containing the object 1030. The LF display system generates holographic content based in part on the determined rendering perspective, and presents the generated holographic content to the viewer 1050 such that at least a portion of the vehicle appears transparent.

FIG. 11 illustrates an example system 1100 that uses a transflector 1110 to relay a holographic object projected by a light field display 1105 in accordance with one or more embodiments.

In some embodiments, transflector 1110 is a Dihedral Corner Reflector Array (DCRA). The DCRA is an optical imaging element composed of a plurality of dihedral corner reflectors. In some embodiments, a DCRA is formed using two thin layers of closely spaced parallel mirrors oriented such that the planes are orthogonal to each other. In other embodiments, transflector 1110 is an array of corner reflector micromirrors.

The function of a transflector can also be achieved using a beam splitter and a retroreflector. The beam splitter is positioned in an orientation similar to that of transflector 1110 in fig. 11, and the retroreflector is positioned to the left of transflector 1110 in fig. 11, with the plane of the retroreflector perpendicular to screen plane 1125. In this configuration, the beam splitter reflects some of the divergent light rays from the light field display 1105 toward the retroreflector, which reflects each ray back along a direction opposite to its incidence, converging the rays to form a holographic object on the viewer side of the beam splitter. An example of a retroreflector is a pyramidal retroreflector array, which may be composed of microstructures.

The light field display 1105 projects an off-screen holographic object 1115 on a viewer side 1120 of the screen plane 1125 and an in-screen holographic object 1130 on a display side 1135 of the screen plane 1125. Projected light rays 1140, which converge on the surface of holographic object 1115, and projected light rays 1145, which converge at in-screen holographic object 1130 (see virtual light rays 1150), all diverge as they approach transflector 1110. Incident light rays 1140 enter transflector 1110, undergo reflection, and exit as light rays 1155, which converge to form relayed holographic object 1160. Similarly, incident light rays 1145 reflect within transflector 1110 and emerge as converging light rays 1165, forming relayed holographic object 1170.

The effect is that the holographic object centered on the screen plane 1125 has now been relayed to be centered on the virtual display surface 1175.

Note that after the holographic objects have been relayed, holographic object 1130, which was farther from viewer 1180 than holographic object 1115, is now closer to the viewer. The vertical distance D1 between holographic object 1115 and transflector 1110 is the same as the horizontal distance D1 between relayed holographic object 1160 and transflector 1110. Similarly, the vertical distance D2 between holographic object 1130 and transflector 1110 is the same as the horizontal distance between relayed holographic object 1170 and transflector 1110. Viewer 1180 therefore sees holographic object 1170 floating in space slightly in front of holographic object 1160. However, for holographic objects relayed through transflector 1110 and observed by viewer 1180, the angular light field coordinates U-V are inverted, so for some images and scenes the depth appears reversed. To correct for this, in one embodiment, the light field rays projected by the light field display 1105 have their U-V polarity reversed by corrective optics. In another embodiment, the light field rendering includes a step of reversing the polarity of the U-V coordinates before display by the light field display 1105. This technique of relaying holographic objects may be used in a vehicle to bring holographic objects closer to its passengers. Holographic object relaying can be combined with other techniques, for example using one or more mirrors to create a folded optical system in which the optical path length required for relaying is achieved in a limited physical space.
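The depth reversal and U-V inversion can be summarized in a short sketch. The distances assigned to D1 and D2 below are illustrative assumptions:

```python
def relayed_distance(source_to_transflector):
    """A transflector images a point source to a symmetric point on the
    viewer side: the relayed image lies at the same distance from the
    transflector as the source. An object farther from the transflector
    (e.g., the in-screen object 1130) therefore relays farther out on the
    viewer side, reversing the apparent depth order."""
    return source_to_transflector

D1 = 0.10  # assumed distance from off-screen object 1115 to the transflector
D2 = 0.15  # assumed distance from in-screen object 1130 to the transflector

# The object that was farther from the viewer now relays closer to them.
depth_reversed = relayed_distance(D2) > relayed_distance(D1)

def pre_invert_uv(u, v):
    """The relay inverts the angular U-V light field coordinates, so the
    renderer can pre-invert them before projection to restore depth."""
    return (-u, -v)
```

Pre-inverting the U-V coordinates in the rendering step, as in `pre_invert_uv`, corresponds to the second correction embodiment described above.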

FIG. 12 illustrates an overlap of passenger fields of view within a vehicle 1205 in accordance with one or more embodiments. The LF display system is an embodiment of the LF display system 500. In some embodiments, the LF display system of fig. 12 uses the configuration system 1100 of fig. 11 to relay holographic objects such that they appear closer to one or more passengers of the vehicle. In other embodiments, holographic object relay is not used.

Vehicle 1205 contains passenger locations 1210, 1215, 1220, 1225 and 1230. Passenger positions 1210, 1215, 1220, 1225 and 1230 are positions within vehicle 1205 that passengers typically occupy. For example, passenger position 1210 corresponds to the position of a passenger sitting in the driver seat of vehicle 1205, passenger position 1215 corresponds to the position of a passenger sitting in the front passenger seat of vehicle 1205, and passenger positions 1220, 1225, 1230 correspond to different positions of a passenger sitting in the rear seat of vehicle 1205. In some embodiments, passenger locations 1210, 1215, 1220, 1225 and 1230 are fixed. In other embodiments, the LF display system may track the actual physical location of one or more passengers. The LF display system may dynamically update the respective passenger locations of one or more passengers to match the tracked physical locations of the passengers.

In the illustrated embodiment, the passenger locations 1210 and 1215 are closer to the display surface 1235 of the LF display system. Each passenger position 1210, 1215, 1220, 1225, and 1230 has its own field of view relative to the display surface 1235. In fig. 12, each field of view is represented by a dashed line and contains a respective holographic object volume and a respective viewing volume. The fields of view overlap within vehicle 1205 to form area 1240, area 1245, area 1250, and area 1255. The area 1240 is a volume in which the fields of view of all of the passenger locations 1210, 1215, 1220, 1225, and 1230 overlap, such that holographic objects presented within the area 1240 are visible from each of those passenger locations. The area 1245 is a volume in which the fields of view of the passenger positions 1220, 1225, and 1230 overlap, such that the holographic object 1246 presented within the area 1245 is visible from passenger positions 1220, 1225, and 1230, but not from passenger positions 1210 and 1215. Area 1250 is a volume in which the holographic object 1251 presented within area 1250 is visible from passenger location 1210, but not from passenger locations 1215, 1220, 1225, and 1230. Area 1255 is a volume in which the holographic object 1256 presented within area 1255 is visible from passenger location 1215, but not from passenger locations 1210, 1220, 1225, and 1230.

The vehicle 1205 may also include an optional 2D display 1236, which is at least partially transparent to visible light. The 2D display 1236 may be, for example, an OLED, an LCD, some other display that is at least partially transparent, or some combination thereof. The 2D display 1236 may be placed in front of the light field display surface 1235, directly in the optical path of the rays projected from the display surface 1235. The 2D display 1236 may be turned off while the light field display is operating, with holographic objects projected directly through its transparent surface, which remains almost invisible. The 2D display 1236 can be used for various purposes when the light field display system 500 is off, including vehicle configuration and control, security access, emergency use, and the like.

The LF display system can be configured such that respective viewers are associated with at least a portion of one or more of the passenger locations 1210, 1215, 1220, 1225, and 1230. In this way, the LF display system may selectively present holographic objects to one or more of the passengers of the vehicle 1205. For example, the LF display system may present one or more holographic objects in the area 1240 such that they are visible from all passenger locations 1210, 1215, 1220, 1225, and 1230. Similarly, the LF display system can present one or more holographic objects 1246 in the area 1245 such that they are visible from the passenger locations 1220, 1225, and 1230, but not from the passenger locations 1210 and 1215. In another example, the LF display system may present one or more holographic objects (e.g., holographic object 1251) in area 1250 such that they are visible from passenger location 1210 but not from the other passenger locations of vehicle 1205. Likewise, the LF display system may present one or more holographic objects (e.g., holographic object 1256) in area 1255 such that they are visible from passenger location 1215 but not from the other passenger locations of vehicle 1205.
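The area-based visibility above reduces, in a simplified top-down model (an illustrative sketch, not the patented geometry), to testing whether the ray from an eye through a holographic object passes through the extent of the display surface:

```python
def sees_object(eye, obj, disp_x0, disp_x1):
    """2D top-down test with the display surface on the z = 0 line,
    spanning disp_x0..disp_x1. A passenger at `eye` sees the off-screen
    holographic object at `obj` only if the ray from the eye through the
    object, extended toward the display, hits the surface within its
    extent -- the geometric condition that defines areas 1240-1255."""
    eye_x, eye_z = eye
    obj_x, obj_z = obj
    if eye_z == obj_z:
        return False
    t = eye_z / (eye_z - obj_z)   # parameter where the ray crosses z = 0
    if t <= 0:
        return False              # the display is not beyond the object
    hit_x = eye_x + t * (obj_x - eye_x)
    return disp_x0 <= hit_x <= disp_x1

# Assumed geometry: display from x = -0.5 to 0.5 m, object 0.5 m in front.
front = sees_object((0.0, 1.5), (0.0, 0.5), -0.5, 0.5)  # centered passenger
side = sees_object((2.0, 1.0), (0.0, 0.5), -0.5, 0.5)   # far off-axis passenger
```

A passenger whose sight line through the object misses the display extent falls outside the object's visibility area, which is how the same object can be visible from one seat and not another.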

In some embodiments, the LF display system may also generate a virtual display surface 1260 using a transflector such as transflector 1110 shown in fig. 11. The transflector is not shown in fig. 12, but may be placed over display surface 1235 and used to relay holographic objects so that they are offset from the display surface and centered on virtual display surface 1260. The virtual display surface 1260 is similar to the virtual display surface 1175 described above with respect to fig. 11. In this configuration, holographic objects appear closer to one or more passenger locations of the vehicle 1205 by way of the virtual display surface 1260 positioned between the display and the vehicle passengers. In the illustrated embodiment, virtual display surface 1260 may be present in one or more of areas 1245, 1250, and 1255, but is not present in area 1240 (and likewise is not part of a holographic object volume behind display surface 1235).

It may be advantageous to adjust the light projected from the display surface in order to achieve a larger field of view for the vehicle occupants. Fig. 13A illustrates an example view of a light field display having a substantially uniform projection direction in accordance with one or more embodiments. The display surface 1305 contains a plurality of surface locations, and each surface location emits a number of individual light projection paths (or rays) grouped substantially within a solid angle around a central light projection path, which we refer to as the optical axis of that display surface location. Each ray group is centered on this optical axis, which defines the propagation direction of the ray bundle exiting the display surface at a given position on the display surface. The optical axis is a line of symmetry that defines the direction of propagation of the central ray, since it coincides with the approximate midpoint of the angular range of the rays projected from the display surface in both the horizontal and vertical dimensions. Thus, the optical axis for a location on the display surface is typically substantially aligned with the average energy vector of all light rays exiting the display surface at that location. A substantially uniform projection direction occurs when the optical axes of all the ray groups projected from the display surface 1305 are parallel.
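The statement that the optical axis aligns with the average energy vector can be illustrated with a small sketch; the ray directions below are assumed for illustration:

```python
import math

def optical_axis(ray_dirs):
    """Approximate the optical axis of one surface location as the
    normalized mean of the unit direction vectors (x, z) of all rays
    projected from that location -- i.e., the average energy vector."""
    n = len(ray_dirs)
    mean_x = sum(d[0] for d in ray_dirs) / n
    mean_z = sum(d[1] for d in ray_dirs) / n
    norm = math.hypot(mean_x, mean_z)
    return (mean_x / norm, mean_z / norm)

# A ray bundle symmetric about the display normal (0, 1) yields an
# optical axis along the normal itself.
axis = optical_axis([(-0.2, 0.98), (0.0, 1.0), (0.2, 0.98)])
```

For a symmetric bundle the mean direction coincides with the central ray, matching the description of the optical axis as the midpoint of the angular range.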

In the illustrated embodiment, each optical axis is parallel to a normal of the display surface 1305. In other embodiments, each optical axis may be at a respective angle to the normal (e.g., such that all axes are tilted in a particular direction). And as discussed below with respect to fig. 13B, in some embodiments, the optical axes of different ray groups may be at different angles to the normal of the display surface 1305. Display surface 1305 is located between passenger position 1310 in passenger seat 1315 and passenger position 1320 in passenger seat 1325. The display surface 1305 may be a virtual display surface similar to virtual display surface 1260 in fig. 12. Display surface 1305 emits multiple sets of light rays, such as ray set 1330, ray set 1335, and ray set 1340. Ray sets 1330, 1335, and 1340 are projected by display surface 1305 at three different locations on the display surface, and each of ray sets 1330, 1335, and 1340 has an optical axis; these are collectively referred to as optical axes 1345.

In the illustrated embodiment, for each position on the display surface 1305, all optical axes 1345 are substantially perpendicular to the display surface 1305. It should be noted that light from ray set 1330 does not reach passenger position 1310, while light from ray set 1340 does not reach passenger position 1320. This means that neither passenger position is fully within the viewing volume of the display surface 1305 (i.e., each passenger will not be able to see the vertical edge of the display furthest from that passenger). This may be corrected by introducing a deflection to the set of rays projected from each location on the display surface 1305. This deflection of the optical axis may be applied at different points on the display surface, with different magnitudes and directions, to optimize the viewing volume of the display, in this case for a given seating arrangement of the vehicle occupants.

FIG. 13B illustrates an example view of a light field display with variable projection directions in accordance with one or more embodiments. In fig. 13B, the respective groups of projected light rays at each position on the display surface are deflected such that the optical axis is not always perpendicular to the display surface 1305. Display surface 1305 emits multiple groups of light rays, such as ray set 1345, ray set 1350, and ray set 1355. Ray sets 1345, 1350, and 1355 are projected by display surface 1305 at three different locations on the display surface and have optical axes 1360, 1365, and 1370, respectively.

The ray sets 1345 and 1355, originating at the vertical edges of the display and defined by the central rays along optical axes 1360 and 1370, respectively, are projected horizontally toward the center of the display, such that optical axes 1360 and 1370 have substantial deflection angles with respect to the normal of the display surface. Thus, the plurality of surface locations includes a first subset of surface locations having optical axes tilted at a first angle relative to a normal of the display surface. For example, optical axis 1360 forms a non-zero angle 1375 with a normal 1380 of display surface 1305 (i.e., the surface location associated with ray set 1345 is such that optical axis 1360 is tilted at angle 1375 with respect to normal 1380). In contrast, optical axis 1365 of ray set 1350 is substantially parallel to normal 1380. It should be noted that in the configuration of fig. 13B, some rays from ray set 1345 reach passenger position 1310, and some rays from ray set 1355 reach passenger position 1320. Thus, in contrast to the case of fig. 13A, in which there is no deflection of the optical axes from the normal of display surface 1305, the passengers at positions 1310 and 1320 are now within the viewing volume of display surface 1305 and can see the entire display surface 1305. The deflection angle shown in fig. 13B is a horizontal deflection angle that may change gradually from zero at the center of the display to a substantially non-zero value as the emission position moves horizontally toward one edge of the display. In other embodiments, only one or a few discrete deflection angles may be used across the display surface. Different configurations of deflection angles can be used to optimize the light field display to obtain a desired viewing volume geometry. Furthermore, although the illustrated embodiment shows the deflection varying with the emission position along a horizontal axis (i.e., parallel to the x-axis), in some embodiments the deflection angle may also vary with the emission position along a vertical axis (i.e., parallel to the y-axis). For example, as the emission locations of the ray groups move vertically, their corresponding deflection angles may change to help direct light from those emission locations toward the passenger location 1310.
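A linear ramp of the kind described, zero at the display center and maximal at the edges with edge bundles steered inward, can be sketched as follows; the ramp shape and maximum angle are illustrative assumptions:

```python
def deflection_deg(x, half_width, max_deflection):
    """Horizontal deflection of the optical axis at emission position x
    (meters from the display center). The angle ramps linearly from zero
    at the center to max_deflection at the edges, with the sign chosen so
    that edge bundles are steered back toward the display center."""
    return -(x / half_width) * max_deflection

# Assumed geometry: a 1 m wide display with a 20-degree maximum deflection.
center = deflection_deg(0.0, 0.5, 20.0)
right_edge = deflection_deg(0.5, 0.5, 20.0)   # steered toward -x (inward)
left_edge = deflection_deg(-0.5, 0.5, 20.0)   # steered toward +x (inward)
```

Replacing the linear ramp with a step function would model the discrete-deflection embodiments, and adding a second ramp in y would model vertical variation of the deflection.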

In one embodiment, the deflection angle may be achieved through the detailed configuration of the waveguides that project energy from the electromagnetic energy surface into the propagation paths. In other embodiments, an optical layer placed over the light field display surface deflects the projected light rays once they exit the light field display surface. In various embodiments, the optical layer may comprise a refractive optical layer containing prisms with different characteristics, glass layers with different refractive indices, mirror layers, thin films, diffraction gratings, and the like. The optical layer may be optimized for a particular intended viewer geometry, allowing the viewing volume to be customized at a relatively low cost.
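For the refractive prism layer, the thin-prism approximation gives the deviation each prism element imparts. This is a standard optics relation used here as an illustrative sketch, not a description of a specific embodiment:

```python
def thin_prism_deviation_deg(apex_angle_deg, refractive_index):
    """Small-angle deviation of a thin prism: delta ~= (n - 1) * A.
    Varying the apex angle A of the prism elements across the optical
    layer yields the position-dependent deflection described above."""
    return (refractive_index - 1.0) * apex_angle_deg

# A 10-degree prism of n = 1.5 glass deflects rays by about 5 degrees.
dev = thin_prism_deviation_deg(10.0, 1.5)
```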

Additional configuration information

The foregoing description of embodiments of the present disclosure has been presented for purposes of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. One skilled in the relevant art will appreciate that many modifications and variations are possible in light of the above disclosure.

Some portions of this description describe embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Moreover, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.

Any of the steps, operations, or processes described herein may be performed or carried out using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented in a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor to perform any or all of the described steps, operations, or processes.

Embodiments of the present disclosure may also relate to apparatuses for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general purpose computing device selectively activated or reconfigured by a computer program stored in the computer. This computer program may be stored in a non-transitory tangible computer readable storage medium or any type of medium suitable for storing electronic instructions, which may be coupled to a computer system bus. Moreover, any computing system referred to in the specification may contain a single processor, or may be an architecture that employs a multi-processor design for increased computing capability.

Embodiments of the present disclosure may also relate to products produced by the computing processes described herein. This product may include information resulting from a computing process, where the information is stored on a non-transitory tangible computer readable storage medium and may include any embodiment of the computer program product or other data combination described herein.

Finally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue from an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
