Light field display system for performance events


This technology, "Light field display system for performance events," was designed and created by J·S·卡拉夫, B·E·比弗森, and J·多姆 on 2020-04-30. Abstract: A Light Field (LF) display system for displaying holographic performance content (e.g., live performances) to viewers in a venue. The LF display system in the venue includes LF display modules tiled together to form an array of LF modules. The array of LF modules creates a performance volume (e.g., a stage) for displaying the performance content in the venue, and displays the performance content to viewers in a viewing volume. The LF display system may be included in an LF presentation network. The LF presentation network allows holographic performance content to be recorded at one location and displayed (simultaneously or non-simultaneously) at another location. The LF presentation network includes a network system that manages digital rights to the holographic performance content.

1. A Light Field (LF) display system, comprising:

a network interface configured to receive encoded holographic content via a network;

a decoder configured to decode the encoded holographic content into a format that can be rendered by an LF display component; and

a light field display component configured to present the decoded holographic content to a viewer in the venue.

2. The LF display system of claim 1, wherein the decoder decodes the encoded holographic content in approximately real time.

3. The LF display system of claim 1, wherein the holographic content is a live stream of events and the holographic content is presented in approximately real time.

4. The LF display system of claim 1, wherein the encoded holographic content is in a first format and the decoded holographic content is in a second format.

5. The LF display system of claim 4, wherein the first format is a vectorized data format and the second format is a rasterized data format.

6. The LF display system of claim 1, wherein the decoder employs a proprietary codec to decode the encoded streaming holographic content.

7. The LF display system of claim 1, wherein the holographic content is encoded in a format decodable by a proprietary codec.

8. The LF display system of claim 1, further comprising:

an LF processing engine configured to determine a hardware configuration of the LF display system; and

wherein the decoder decodes the encoded holographic content based on the determined hardware configuration of the LF display system.

9. The LF display system of claim 8, wherein the hardware configuration includes any one of:

a resolution,

a number of rays projected per degree,

a field of view,

a deflection angle on the display surface, and

a dimension of the display surface.

10. The LF display system of claim 1, further comprising:

an LF processing engine configured to determine a geometric orientation of the LF display system; and

wherein the decoder decodes the encoded holographic content based on the determined geometric orientation of the LF display system.

11. The LF display system of claim 10, wherein the geometric orientation includes any one of:

a number of display panels of the LF display assembly,

a relative orientation of the display panels,

a height of the display panels,

a width of the display panels, and

a layout of the display panels.

12. The LF display system of claim 1, further comprising:

an LF processing engine configured to determine a configuration of the venue; and

wherein the decoder decodes the encoded holographic content based on the determined configuration of the venue.

13. The LF display system of claim 12, wherein the configuration of the venue includes any one of:

one or more holographic object volumes,

one or more viewing volumes, and

a location of the viewer relative to the LF display assembly.

14. The LF display system of claim 1, further comprising:

a digital rights management system configured to manage digital rights of received encoded holographic content, the digital rights management system allowing the LF display component to project decoded holographic content, wherein the digital rights management system provides a digital key for the decoded holographic content.

15. The LF display system of claim 1, wherein the holographic content is encrypted and the LF display system is configured to decrypt the holographic content.

16. The LF display system of claim 1, wherein the encoded holographic content is received via the network from a holographic content repository connected to the LF display system.

17. The LF display system of claim 1, wherein the holographic content is received in response to the LF display system transmitting a transaction fee to a holographic content repository for the holographic content.

18. The LF display system of claim 1, wherein the holographic content is received from a holographic content generation system configured to:

record the performance live,

encode a recording of the performance into the encoded holographic content, and

transmit the encoded holographic content to the LF display system via the network.

19. The LF display system of claim 1, wherein the network is a public network.

20. The LF display system of claim 1, wherein the network is a private network configured for transmitting holographic content.

21. The LF display system of claim 1, wherein

the LF display component is configured to present acoustic data, and

the encoded holographic content contains acoustic data such that the LF display system renders the decoded holographic content and the decoded acoustic data when the holographic content is decoded.

22. The LF display system of claim 1, wherein the rendered holographic content includes a first type of energy and a second type of energy.

23. The LF display system of claim 22, wherein the first type of energy is electromagnetic energy and the second type of energy is ultrasonic energy.

24. The LF display system of claim 23, wherein the first type of energy and the second type of energy are presented at the same location such that the LF display component presents a volumetric tactile surface.

25. The LF display system of claim 1, wherein

the LF display component is configured to record image data, and

the LF display component simultaneously renders holographic content and records image data.

26. The LF display system of claim 1, wherein

the LF display component is configured to record holographic content, and

the LF display component simultaneously renders and records holographic content.

27. The LF display system of claim 1, further comprising:

a tracking system configured to obtain information about one or more of the viewers viewing the holographic content.

28. The LF display system of claim 27, wherein the LF display component presents holographic content to the viewer based on information obtained by the tracking system.

29. The LF display system of claim 27, wherein the information obtained by the tracking system includes:

a response of the one or more viewers in the audience to the presented holographic content, and

characteristics of the one or more viewers in the audience.

30. The LF display system of claim 27, wherein the information about the viewer includes any of a location of the viewer, a movement of the viewer, a gesture of the viewer, an expression of the viewer, an age of a viewer, a gender of the viewer, and clothing worn by the viewer.

31. The LF display system of claim 1, further comprising:

a viewer profiling system configured to:

identify viewers in the audience viewing the holographic content, and

generate a viewer profile for each of the identified viewers.

32. The LF display system of claim 31, wherein the LF display component presents holographic content to the viewer based on one of the identified viewers.

33. The LF display system of claim 31, wherein the LF display system presents holographic content to the viewer based on a profile of the identified viewer.

34. The LF display system of claim 31, wherein the viewer profiling system is configured to identify viewer responses to the holographic content or characteristics of viewers viewing the holographic content, and include the identified responses or characteristics in a viewer profile.

35. The LF display system of claim 31, wherein the viewer profiling system identifies characteristics of the viewer, and the characteristics include any of:

a position of the viewer,

a movement of the viewer,

a gesture of the viewer,

a facial expression of the viewer,

a gender of the viewer,

an age of the viewer, and

clothing of the viewer.

36. The LF display system of claim 31, wherein the viewer profiling system accesses social media accounts of one or more identified viewers to generate a viewer profile.

37. The LF display system of claim 1, wherein the venue is a theater.

38. The LF display system of claim 1, wherein the venue includes a plurality of viewing locations substantially surrounding a substantially horizontal display surface of the LF display system.

39. The LF display system of claim 38, wherein the display surface is at least a portion of a floor of the venue.

40. The LF display system of claim 38, wherein the display surface is at least a portion of a stage in the venue.

41. The LF display system of claim 38, wherein the display surface is at least a portion of an elevated viewing platform in the venue.

42. The LF display system of claim 1, wherein the venue includes a plurality of viewing locations arranged generally in front of the LF display assembly.

43. The LF display system of claim 1, wherein the venue includes a plurality of viewing positions positioned such that their viewing of the presented holographic content is unobstructed.

44. The LF display system of claim 1, wherein the venue has a slope and one or more viewing positions are located along the slope.

45. The LF display system of claim 1, wherein the venue includes:

one or more viewing layers, each viewing layer configured to accommodate a portion of the viewers.

46. The LF display system of claim 45, wherein the LF display system presents the same decoded holographic content to each layer of the venue.

47. The LF display system of claim 45, wherein the LF display system presents different decoded holographic content to one or more layers of the venue.

48. The LF display system of claim 45, wherein each viewing layer is generally circular and positioned such that it surrounds the LF display assembly.

49. The LF display system of claim 1, wherein

the venue includes a first wall, and

the LF display assembly is positioned on the first wall such that the holographic content is presented to the viewer from the first wall.

50. The LF display system according to claim 49, wherein

the venue includes a second wall, and

the LF display assembly is positioned on the first wall and the second wall such that the holographic content is presented to the viewer from the first wall and the second wall.

51. The LF display system of claim 1, wherein

the venue includes a stage, and

The LF display component is positioned on the stage such that the holographic content is presented to the audience from the stage.

52. The LF display system of claim 1, wherein

the venue includes a floor, and

the LF display assembly is positioned on the floor such that the holographic content is presented from the floor to the viewer.

53. The LF display system of claim 1, wherein the LF display component is a substantially flat surface.

54. The LF display system of claim 1, wherein the LF display component is a curved surface.

55. A Light Field (LF) display system, comprising:

a network interface configured to receive a live stream comprising real-time holographic content via a network;

an LF processing engine configured to generate additional holographic content; and

an LF display component configured to present the real-time holographic content and the additional holographic content to viewers in a venue.

56. The LF display system of claim 55, wherein the live stream includes live holographic content representing a simulcast of one or more of:

a concert,

a performance,

a program,

a scene, and

an event.

57. The LF display system of claim 55, wherein the live stream includes additional holographic content representing one or more of:

a concert;

a performance;

a program;

a scene; and

an event.

58. The LF display system of claim 57, wherein the additional holographic content is generated by one or more of:

a neural network,

a procedural generation algorithm,

a machine learning algorithm, and

artificial intelligence.

59. The LF display system of claim 55, wherein the additional holographic content includes one or more of:

computer-generated rendering of objects, events or scenes, and

previously recorded live-action holographic content.

60. The LF display system of claim 55, wherein the live stream includes holographic content representing one or more of:

a singer,

a band,

an actor,

a dancer,

a comedian, and

a performer.

61. The LF display system of claim 55, wherein the additional holographic content enhances the live stream including real-time holographic content.

62. The LF display system of claim 55, wherein the additional holographic content replaces real-time holographic content contained in the live stream.

63. The LF display system of claim 55, wherein the additional holographic content represents one or more of:

a background,

a stage,

a prop,

an article of clothing,

a costume, and

a musical instrument.

64. The LF display system of claim 55, wherein the additional holographic content represents one or more of:

a person, object, event or scene from another time or location,

computer-generated rendering of a person, object, event or scene, and

previously recorded holographic content.

65. The LF display system of claim 55, wherein the additional holographic content represents one or more of:

an advertisement, and

a product placement.

66. The LF display system of claim 55, wherein the additional holographic content is generated by one or more of:

a neural network,

a procedural generation algorithm,

a machine learning algorithm, and

artificial intelligence.

67. The LF display system of claim 55, wherein viewers in the audience are able to interact with the additional holographic content.

68. The LF display system of claim 55, wherein the viewer is able to interact with the additional holographic content instead of the real-time holographic content.

69. The LF display system of claim 55, wherein a viewer is able to interact with the real-time holographic content.

70. The LF display system of claim 55, wherein the processing engine is configured to generate additional holographic content based on the viewer's interaction with the holographic content presented by the LF display component.

71. The LF display system of claim 55, wherein a viewer in the audience interacts with the presented holographic content, wherein interaction includes any one of:

a physical interaction,

an auditory interaction, and

a visual interaction.

72. The LF display system of claim 55, further comprising:

a tracking system configured to obtain information about one or more of the viewers viewing the holographic content.

73. The LF display system of claim 72, wherein the additional holographic content generated by the processing engine is based on the information about the one or more of the viewers obtained by the tracking system.

74. The LF display system of claim 72, wherein the information obtained by the tracking system includes:

a response to the presented holographic content by the one or more viewers in the audience, and

characteristics of the one or more viewers in the audience.

75. The LF display system of claim 72, wherein the information about the viewer includes any of a location of the viewer, a movement of the viewer, a gesture of the viewer, an expression of the viewer, an age of a viewer, a gender of the viewer, and clothing worn by the viewer.

76. The LF display system according to claim 72, wherein

the tracking system is configured to determine aggregated information about the one or more viewers in the audience, and

the additional holographic content generated by the processing engine is based on the aggregated information.

77. The LF display system of claim 55, further comprising:

a viewer profiling system configured to:

identify one or more of the viewers viewing the holographic content, and

generate a viewer profile for one or more of the identified viewers.

78. The LF display system of claim 77, wherein the additional holographic content generated by the processing engine is based on the one or more viewers identified by the viewer profiling system.

79. The LF display system of claim 77, wherein the additional holographic content generated by the processing engine is based on the viewer profile generated by the viewer profiling system.

80. The LF display system of claim 77, wherein the viewer profiling system is configured to identify viewer responses to the holographic content or characteristics of viewers viewing the holographic content, and include the identified responses or characteristics in a viewer profile.

81. The LF display system of claim 77, wherein the viewer profiling system identifies characteristics including any one of:

a position of the viewer,

a movement of the viewer,

a gesture of the viewer,

a facial expression of the viewer,

a gender of the viewer,

an age of the viewer, and

clothing of the viewer.

82. The LF display system of claim 77, wherein

the viewer profiling system is configured to generate an aggregated profile for the one or more viewers in the audience, and

the additional holographic content generated by the processing engine is based on the aggregated profile.

83. The LF display system of claim 55, further comprising:

an LF processing engine configured to determine a hardware configuration of the LF display system; and

wherein the LF display component renders the holographic content based on the determined hardware configuration of the LF display system.

84. The LF display system of claim 83, wherein the hardware configuration includes any one of:

a resolution,

a number of rays projected per degree,

a field of view,

a deflection angle on the display surface, and

a dimension of the display surface.

85. The LF display system of claim 55, further comprising:

an LF processing engine configured to determine a geometric orientation of the LF display system; and

wherein the holographic content is presented to the viewer based on the determined geometric orientation of the LF display system.

86. The LF display system of claim 85, wherein the geometric orientation includes any one of:

a number of display panels of the LF display assembly,

a relative orientation of the display panels,

a height of the display panels,

a width of the display panels, and

a layout of the display panels.

87. The LF display system of claim 55, further comprising:

an LF processing engine configured to determine a configuration of the venue; and

wherein the LF display component renders holographic content based on the determined configuration of the venue.

88. The LF display system of claim 87, wherein the configuration of the venue includes any one of:

one or more holographic object volumes,

one or more viewing volumes, and

a location of the viewer relative to the LF display assembly.

89. The LF display system according to claim 55, wherein

the LF display component is configured to present acoustic data, and

the holographic content contains acoustic data such that the LF display system presents the holographic content and the acoustic data.

90. The LF display system of claim 55, wherein the rendered holographic content includes a first type of energy and a second type of energy.

91. The LF display system of claim 90, wherein the first type of energy is electromagnetic energy and the second type of energy is ultrasonic energy.

92. The LF display system of claim 91, wherein the first type of energy and the second type of energy are presented at the same location such that the LF display component presents a volumetric tactile surface.

93. The LF display system according to claim 55, wherein

the LF display component is configured to record image data, and

the LF display component simultaneously renders holographic content and records image data.

94. The LF display system according to claim 55, wherein

the LF display component is configured to record holographic content, and

the LF display component simultaneously renders and records holographic content.

95. The LF display system of claim 55, wherein the live stream including real-time holographic content is received in a first format and the LF processing engine decodes the holographic content into a second format.

96. The LF display system of claim 95, wherein the first format is a vectorized data format and the second format is a rasterized data format.

97. The LF display system according to claim 55, wherein the processing engine employs a dedicated codec to decode streaming holographic content encoded by the same codec.

98. A Light Field (LF) generation system, comprising:

a light field recording component configured to record one or more types of energy representing events in a venue;

a processing engine configured to convert the recorded energy into holographic content representing a performance; and

a network interface configured to transmit the holographic content to one or more LF display systems via a network.

99. The LF generation system of claim 98, wherein the event is one or more of:

a concert,

a program,

a performance,

a scene, and

an event.

100. The LF generation system of claim 98, wherein the events include one or more of:

a singer,

a band,

a magician,

an actor,

a dancer,

a comedian, and

a performer.

101. The LF generation system of claim 98, wherein the venue is any one of:

a performance hall,

an event space,

a movie theater,

a concert hall,

a studio, and

a stage.

102. The LF generation system of claim 98, wherein the network is a public network.

103. The LF generation system of claim 98, wherein the network is a private network configured for transmitting holographic content.

104. The LF generation system of claim 98, wherein the processing engine employs a dedicated codec to encode the recorded energy as holographic content.

105. The LF generation system of claim 104, wherein the holographic content is encoded in a vectorized format.

106. The LF generation system of claim 104, wherein the holographic content is encoded in a format that is decodable by an LF display system using the dedicated codec.

107. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more light field modules comprising one or more energy sensors configured to record electromagnetic energy as light field content, and

wherein the holographic content comprises the light field content.

108. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more acoustic recording devices configured to record acoustic energy as audio content, and

wherein the holographic content comprises the audio content.

109. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more pressure sensors configured to record mechanical energy, and

wherein the holographic content comprises instructions to generate mechanical energy to produce one or more tactile surfaces.

110. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more recording modules positioned around the venue such that the recording modules capture energy from the performance at multiple viewpoints.

111. The LF generation system of claim 110, wherein the processing engine converts the energy from multiple viewpoints into the holographic content.

112. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more two-dimensional cameras, each two-dimensional camera recording energy from the performance as one or more two-dimensional images.

113. The LF generation system of claim 112, wherein the processing engine converts the one or more two-dimensional images into holographic content.

114. The LF generation system of claim 113, wherein the processing engine employs a machine learning algorithm to generate the holographic content.

115. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more depth sensors for determining a depth of an object.

116. The LF generation system according to claim 98, wherein the light field recording component includes:

one or more plenoptic camera systems for recording light field data.

117. The LF generation system of claim 98, wherein the LF generation system transmits the holographic content to an LF display system via the network in near real time.

118. The LF generation system of claim 98, wherein the LF generation system stores the holographic content on a local storage device.

119. The LF generation system of claim 98, wherein the LF generation system transmits the holographic content to a network system for storage on a network storage system.

120. The LF generation system of claim 98, wherein the LF generation system transmits the holographic content to an LF display system in response to receiving a transaction fee.

Background

The present disclosure relates to performances in venues, and in particular, to light field display systems for displaying performances in venues.

Traditionally, performance venues (e.g., theaters, concert halls, etc.) are configured to allow viewers (e.g., fans, patrons, etc.) to watch live performances (e.g., concerts, plays, comedy shows, etc.) by performers in real time at the venue. Unfortunately, in some cases, holding a performance at a venue may limit the ability of viewers who want to view the performance in this manner. For example, the performance may be sold out, may be at an inconvenient time, or may be far from the viewer. Sometimes a performance is recorded and later reproduced on a two-dimensional surface, such as a movie screen or television, but such reproductions struggle to convey the atmosphere and energy of the live performance in the venue. It would therefore be beneficial to configure a performance venue so that viewers can perceive a performance as if they were attending it live.

Disclosure of Invention

A Light Field (LF) display system displays holographic content of a performance in a venue (e.g., a theater, concert hall, etc.). The LF display system includes LF display modules that form surfaces (e.g., a stage) in the venue; the LF display modules each have a display area and are tiled together to form a seamless display surface whose effective display area is greater than the display area of any individual module. The LF display modules display holographic content of the performance in a performance volume so that viewers in the venue can perceive the performance.
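To make the tiling arithmetic concrete, the following is a minimal Python sketch of how an array of LF display modules could be represented; the module dimensions, class names, and bezel-free assumption are illustrative and not taken from this disclosure.

```python
from dataclasses import dataclass


@dataclass
class LFDisplayModule:
    width_m: float   # width of a single module's display area, in meters
    height_m: float  # height of a single module's display area, in meters


def effective_display_area(module: LFDisplayModule, rows: int, cols: int) -> float:
    """Area of the seamless surface formed by tiling `rows` x `cols` identical modules.

    Assumes the modules are bezel-free, so the effective display area is simply
    the sum of the individual display areas.
    """
    return rows * cols * module.width_m * module.height_m


# Example: a 4 x 8 array of hypothetical 0.5 m x 0.5 m modules forms a 2 m x 4 m
# stage surface with an effective display area of 8 m^2.
print(effective_display_area(LFDisplayModule(0.5, 0.5), rows=4, cols=8))
```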

In some embodiments, the holographic content of the performance may be a rendition of the performance that occurs simultaneously at another venue, created by the content creation system for display in the venue, and/or accessed from the data storage device for display in the venue. The holographic content may be managed by a network system responsible for managing the digital rights of the holographic content. For example, a viewer in the venue may pay a transaction fee to access the holographic content for display in the venue.

In some embodiments, the LF display system includes a tracking system and/or a viewer profiling system. The tracking system and viewer profiling system may monitor and store characteristics of viewers in the venue, viewer profiles describing the viewers, and/or viewer responses to holographic content in the venue. The holographic content created for display in the venue may be based on any of the monitored or stored information.
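As an illustration of the kind of record a tracking and viewer profiling system might maintain, here is a minimal Python sketch; the field names (`characteristics`, `responses`) and the helper function are hypothetical and not part of the described system.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ViewerProfile:
    """Hypothetical per-viewer record built from tracking-system observations."""
    viewer_id: str
    characteristics: Dict[str, str] = field(default_factory=dict)  # e.g. {"location": "row 3"}
    responses: List[str] = field(default_factory=list)             # reactions to presented content


def update_profile(profile: ViewerProfile, tracked: Dict[str, str], response: str) -> None:
    """Fold newly tracked characteristics and an observed response into the profile."""
    profile.characteristics.update(tracked)
    profile.responses.append(response)


profile = ViewerProfile(viewer_id="viewer-042")
update_profile(profile, {"location": "row 3, seat 7", "expression": "smiling"}, "applauded")
```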

In some embodiments, a viewer may interact with the holographic content, and the interaction may serve as an input to the holographic content creation system. For example, in some embodiments, some or all of the LF display system contains multiple ultrasonic speakers. The ultrasonic speakers are configured to generate a tactile surface coincident with at least a portion of the holographic content. The tracking system is configured to track viewer interaction with the holographic object (e.g., through images captured by the imaging sensors of the LF display modules and/or some other camera), and the LF display system is configured to create holographic content based on the interaction.
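A minimal sketch of how such an interaction might be handled is shown below; the event fields and the `haptics`/`content_engine` interfaces are assumptions for illustration, not an API described in this disclosure.

```python
def on_viewer_interaction(event, haptics, content_engine):
    """Hypothetical handler for a tracked interaction with a holographic object.

    `event` is assumed to carry the interaction type, the touched object, and the
    contact point; `haptics` stands in for the ultrasonic speakers; `content_engine`
    stands in for the holographic content creation system.
    """
    if event.kind == "touch":
        # Project an ultrasonic tactile surface coincident with the touched object.
        haptics.emit_tactile_surface(event.object_id, event.contact_point)
        # Ask the content creation system for responsive content (e.g., the object
        # reacting to being touched) and queue it for display.
        content_engine.create_response(event.object_id, event.contact_point)
```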

Drawings

FIG. 1 is a diagram of a light field display module to render holographic objects in accordance with one or more embodiments.

FIG. 2A is a cross-section of a portion of a light field display module in accordance with one or more embodiments.

FIG. 2B is a cross-section of a portion of a light field display module in accordance with one or more embodiments.

FIG. 3A is a perspective view of a light field display module in accordance with one or more embodiments.

Fig. 3B is a cross-sectional view of a light field display module including an interleaved energy relay device in accordance with one or more embodiments.

Fig. 4A is a perspective view of a portion of a light field display system tiled in two dimensions to form a single-sided, seamless surface environment in accordance with one or more embodiments.

FIG. 4B is a perspective view of a portion of a light field display system in a multi-faceted, seamless surface environment in accordance with one or more embodiments.

FIG. 4C is a top view of a light field display system having an aggregated surface in a wing-like configuration according to one or more embodiments.

FIG. 4D is a side view of a light field display system with an aggregated surface in an oblique configuration in accordance with one or more embodiments.

Fig. 4E is a top view of a light field display system having an aggregated surface on a front wall of a room in accordance with one or more embodiments.

Fig. 4F is a side view of an LF display system having an aggregated surface on a front wall of a room in accordance with one or more embodiments.

Fig. 5A is a block diagram of a light field display system in accordance with one or more embodiments.

Fig. 5B illustrates an example LF movie network 550 in accordance with one or more embodiments.

Fig. 6 illustrates a side view of a venue 600 that is a conventional theater that has been enhanced with an LF display system in accordance with one or more embodiments.

Fig. 7A illustrates a cross-section of a first venue incorporating an LF display system for displaying performance content to viewers at viewing positions in a viewing volume in accordance with one or more embodiments.

Fig. 7B illustrates a cross-section of a second venue incorporating an LF display system for displaying performance content to viewers at viewing positions in a viewing volume in accordance with one or more embodiments.

Fig. 8 illustrates a venue that also functions as a home theater in a viewer's living room in accordance with one or more embodiments.

Fig. 9 is a flow diagram illustrating a method for displaying holographic content of a performance within an LF performance network.

The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

Detailed Description

Overview

A Light Field (LF) display system is implemented in a venue configured for a performance. For example, the performance may be a concert, a play, a comedy show, a dance performance, or the like. The LF display system includes an LF display assembly configured to present holographic content, including one or more holographic objects, that is visible to one or more viewers in a viewing volume of the LF display system. The LF display assembly may form a multi-faceted, seamless surface on some or all of one or more surfaces (e.g., a stage) in the venue. The LF display system may present the holographic content to viewers in the venue. A viewer is typically someone watching the performance at the venue, but may be anyone in the venue who is able to view the holographic content. The holographic objects may also be enhanced with other sensory stimuli (e.g., tactile and/or audio). For example, ultrasonic transmitters in the LF display system may emit ultrasonic pressure waves that provide a tactile surface for some or all of the holographic objects. The holographic content may include additional visual content (i.e., 2D or 3D visual content). In multi-emitter embodiments, the system coordinates the emitters to ensure a cohesive experience (i.e., a holographic object provides the correct tactile sensation and sensory stimulus at any given point in time).

In some embodiments, the LF display system includes a plurality of LF display modules forming a stage in the venue. The LF display modules forming the stage may be configured to project holographic content of the performance to viewers in the venue. In this way, viewers in the venue can perceive the holographic performance on the stage. For example, the LF display system may display a band playing a concert, a ballet company dancing, or a troupe performing a play. In some embodiments, the LF display system may create holographic content for display to viewers in the venue. For example, the LF display system may create a virtual performer that performs a song created for it to viewers in the venue.

In some embodiments, the LF display system may contain elements that enable the system to simultaneously emit at least one type of energy and absorb at least one type of energy for the purpose of creating responsive holographic content. For example, an LF display system may emit both holographic objects for viewing and ultrasonic waves for tactile perception, while simultaneously absorbing imaging information and other scene data for tracking viewers and absorbing ultrasonic waves to detect a viewer's touch response. As an example, such a system may project a holographic performer who crowd-surfs, giving viewers the illusion that the performer is in their hands when they virtually "touch" the performer. The display system components that perform ambient energy sensing may be integrated into the display surface through bidirectional energy elements that both emit and absorb energy, or they may be dedicated sensors separate from the display surface, such as ultrasonic speakers and imaging capture devices such as cameras.

The LF display system may be part of an LF performance network. The LF performance network allows LF data to be recorded at one location, encoded, transmitted to a different location, decoded, and displayed as holographic content to viewers in a venue. This allows viewers at multiple venues to perceive live performances occurring at other venues. In some embodiments, the LF performance network includes a network system that manages digital rights of the holographic content.
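The record, encode, transmit, decode, and display flow can be sketched as two loops, one at each end of the network; the object interfaces below (`recorder`, `encoder`, `network`, `decoder`, `lf_display`) are assumptions for illustration, not an API defined by this disclosure.

```python
def record_and_transmit(recorder, encoder, network):
    """Source venue: capture energy from the performance, encode it
    (e.g., into a vectorized holographic format), and stream it out."""
    for frame in recorder.capture():
        network.send(encoder.encode(frame))


def receive_and_present(network, decoder, lf_display):
    """Destination venue: decode each received packet (e.g., rasterize it for the
    local display geometry and hardware configuration) and present it to viewers."""
    for packet in network.receive():
        lf_display.present(decoder.decode(packet))
```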

Light field display system

Fig. 1 is a diagram 100 of a Light Field (LF) display module 110 presenting a holographic object 120 in accordance with one or more embodiments. The LF display module 110 is part of a Light Field (LF) display system. The LF display system uses one or more LF display modules to render holographic content containing at least one holographic object. The LF display system may present holographic content to one or more viewers. In some embodiments, the LF display system may also enhance the holographic content with other sensory content (e.g., touch, audio, smell, temperature, etc.). For example, as discussed below, the projection of focused ultrasound waves may generate an aerial haptic sensation that may simulate the surface of some or all of the holographic objects. The LF display system includes one or more LF display modules 110 and is discussed in detail below with respect to fig. 2-5.

LF display module 110 is a holographic display that presents holographic objects (e.g., holographic object 120) to one or more viewers (e.g., viewer 140). LF display module 110 includes an energy device layer (e.g., an emissive electronic display or an acoustic projection device) and an energy waveguide layer (e.g., an array of optical lenses). In addition, the LF display module 110 may contain an energy relay layer for combining multiple energy sources or detectors together to form a single surface. At a high level, the energy device layer generates energy (e.g., holographic content), which is then directed to a region in space by the energy waveguide layer according to one or more four-dimensional (4D) light field functions. LF display module 110 may also project and/or sense one or more types of energy simultaneously. For example, LF display module 110 may be capable of projecting a holographic image and an ultrasonic tactile surface into a viewing volume while detecting imaging data from the viewing volume. A holographic image between the display area 150 of the LF display module 110 and the viewer 140 is a real image, in which the set of focal points forming the image is created by converging light rays. Further, some images generated by the LF display may be virtual images (e.g., images displayed behind the display area 150), in which a set of focal points is formed by the extension of diverging light rays. The operation of the LF display module 110 is discussed in detail below with respect to fig. 2-3.

LF display module 110 uses one or more 4D light field functions (e.g., derived from a plenoptic function) to generate holographic objects within holographic object volume 160. A holographic object may be three-dimensional (3D), two-dimensional (2D), or some combination thereof. Furthermore, a holographic object may be polychromatic (e.g., full color). Holographic objects may be projected in front of the screen plane, behind the screen plane, or straddling the screen plane. The holographic object 120 may be rendered such that it is perceivable anywhere within the holographic object volume 160. Here, the holographic object 120 is a real image 122 formed where light rays converge, and it is present between the display area 150 and the viewer 140. A holographic object within holographic object volume 160 may appear to viewer 140 to be floating in space.
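For reference, one common way to write a 4D light field function is sketched below in LaTeX; this is an illustrative convention, not the specific parameterization used in this disclosure.

```latex
% The plenoptic function restricted to the display plane (fixed z), with
% wavelength and time handled per color channel and per frame, leaves a
% four-dimensional light field: each ray is indexed by its surface
% coordinate (x, y) and its propagation direction (\theta, \phi).
L_{\mathrm{display}} = L(x,\, y,\, \theta,\, \phi)
```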

Holographic object volume 160 represents the volume in which viewer 140 may perceive a holographic object. The holographic object volume 160 may extend in front of the surface of the display area 150 (i.e., toward the viewer 140) so that a holographic object may be presented in front of the plane of the display area 150. Additionally, holographic object volume 160 may extend behind the surface of display area 150 (i.e., away from viewer 140), allowing a holographic object to be rendered as if it were behind the plane of display area 150. In other words, holographic object volume 160 contains all light rays projected from display area 150 that may converge to create a holographic object. Here, the light rays may converge at a point in front of, at, or behind the display surface. More simply, the holographic object volume 160 encompasses all locations at which a holographic object can be perceived by a viewer.

Viewing volume 130 is the volume of space from which holographic objects (e.g., holographic object 120) presented within holographic object volume 160 by the LF display system are fully visible. A holographic object may be rendered in the holographic object volume 160 and viewed from the viewing volume 130 such that the holographic object is indistinguishable from an actual object. A holographic object is formed by projecting the same light rays that would be generated from the surface of the object if it were physically present.

In some cases, holographic object volume 160 and the corresponding viewing volume 130 may be relatively small, such that they are designed for a single viewer. In other embodiments, as discussed in detail below with respect to, for example, fig. 4, 6, 7, 8, and 9, the LF display modules may be enlarged and/or tiled to create larger holographic object volumes and corresponding viewing volumes that can accommodate a wide range of viewers (e.g., one to thousands). The LF display modules presented in this disclosure can be constructed such that the entire surface of the LF display contains holographic imaging optics, with no dead space and no bezel required. In these embodiments, the LF display modules may be tiled such that the imaging area is continuous across the seams between LF display modules, and the bond lines between tiled modules are barely detectable by the visual acuity of the eye. It is noted that in some configurations, although not described in detail herein, some portions of the display surface may not contain holographic imaging optics.

The flexible size and/or shape of the viewing volume 130 allows a viewer to be unconstrained within the viewing volume 130. For example, the viewer 140 may move to different positions within the viewing volume 130 and see different views of the holographic object 120 from the corresponding viewing angles. To illustrate, referring to fig. 1, the viewer 140 is positioned at a first location relative to the holographic object 120 such that the holographic object 120 appears as a frontal view of a dolphin. The viewer 140 can move to other positions relative to the holographic object 120 to see different views of the dolphin. For example, the viewer 140 may move so that he/she sees the left side of the dolphin, the right side of the dolphin, etc., much as if the viewer 140 were watching an actual dolphin and changing his/her position relative to it to see different views. In some embodiments, holographic object 120 is visible to all viewers within viewing volume 130 who have an unobstructed line of sight (i.e., not blocked by objects or people) to holographic object 120. These viewers may be unconstrained such that they can move around within the viewing volume to see different perspectives of the holographic object 120. Thus, the LF display system can render holographic objects such that multiple unconstrained viewers simultaneously see different perspectives of a holographic object in real-world space, as if the holographic object were physically present.

In contrast, conventional displays (e.g., stereoscopic, virtual reality, augmented reality, or mixed reality) typically require each viewer to wear some external device (e.g., 3-D glasses, a near-eye display, or a head-mounted display) to see content. Additionally and/or alternatively, conventional displays may require that the viewer be constrained to a particular viewing position (e.g., a chair at a fixed location relative to the display). For example, when viewing an object shown by a stereoscopic display, the viewer always focuses on the display surface rather than on the object, and the display only ever presents two views of the object, which follow the viewer as he or she tries to move around the perceived object, resulting in perceived distortion of the object. With a light field display, however, viewers of holographic objects presented by the LF display system do not need to wear external devices, nor are they restricted to particular locations in order to see the holographic objects. The LF display system presents holographic objects in a manner visible to viewers in much the same way they would see physical objects, without the need for special goggles, glasses, or head-mounted accessories. Further, the viewer may view holographic content from any location within the viewing volume.

Notably, the size of the holographic object volume 160 bounds the potential locations of holographic objects. To increase the size of the holographic object volume 160, the size of the display area 150 of LF display module 110 may be increased and/or multiple LF display modules may be tiled together in a manner that forms a seamless display surface. The effective display area of the seamless display surface is larger than the display area of each individual LF display module. Some embodiments related to tiling LF display modules are discussed below with respect to fig. 4 and 6-9. As shown in fig. 1, the display area 150 is rectangular, resulting in a holographic object volume 160 that is pyramidal in shape. In other embodiments, the display area may have some other shape (e.g., hexagonal), which also affects the shape of the corresponding viewing volume.

Additionally, although the discussion above focuses on presenting holographic object 120 within the portion of holographic object volume 160 located between LF display module 110 and viewer 140, LF display module 110 may also present content in the portion of holographic object volume 160 behind the plane of display area 150. For example, LF display module 110 may make display area 150 appear to be the surface of an ocean out of which holographic object 120 is jumping, and the displayed content may enable the viewer 140 to look through the displayed surface to see marine life underwater. Furthermore, the LF display system can generate content that moves seamlessly throughout the holographic object volume 160, both behind and in front of the plane of the display area 150.

Fig. 2A illustrates a cross-section 200 of a portion of an LF display module 210 in accordance with one or more embodiments. The LF display module 210 may be the LF display module 110. In other embodiments, the LF display module 210 may be another LF display module having a display area with a different shape than the display area 150. In the illustrated embodiment, the LF display module 210 includes an energy device layer 220, an energy relay layer 230, and an energy waveguide layer 240. Some embodiments of LF display module 210 have different components than those described herein. For example, in some embodiments, LF display module 210 does not include energy relay layer 230. Similarly, functionality may be distributed among components in a different manner than described herein.

The display systems described herein produce energy emissions that replicate the energy of typical objects in the real world. Here, the emitted energy is directed from each coordinate on the display surface toward a particular direction. In other words, each coordinate on the display surface serves as a projection location for emitted energy. The directed energy from the display surface causes many energy rays to converge, which can thereby create a holographic object. For visible light, for example, the LF display projects from its projection locations a very large number of rays that may converge at any point in the holographic object volume, so that from the perspective of a viewer positioned farther away than the projected object, the rays appear to come from the surface of a real object located at that point in space. In this way, the LF display generates, from the viewer's perspective, the reflected light rays that would leave the surface of such an object. The viewer's perspective on any given holographic object may change, and the viewer will see a correspondingly different view of the holographic object.
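A minimal Python sketch of the geometry described above follows: given a projection location on the display surface and a target point of a holographic object, it returns the direction a ray must travel so that rays from many projection locations converge at that point. The z = 0 display plane and the function name are illustrative assumptions.

```python
import math


def ray_direction(surface_xy, target_xyz):
    """Unit vector from a projection location on the display surface (assumed to lie
    in the z = 0 plane) toward a target point of a holographic object."""
    sx, sy = surface_xy
    tx, ty, tz = target_xyz
    dx, dy, dz = tx - sx, ty - sy, tz
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)


# Rays from two different projection locations converge at the same point 0.3 m in
# front of the display, so a viewer beyond that point sees light that appears to be
# emitted by a surface located there.
print(ray_direction((0.0, 0.0), (0.1, 0.0, 0.3)))
print(ray_direction((0.2, 0.0), (0.1, 0.0, 0.3)))
```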

As described herein, energy device layer 220 includes one or more electronic displays (e.g., emissive displays such as OLEDs) and one or more other energy projecting and/or energy receiving devices. One or more electronic displays are configured to display content according to display instructions (e.g., from a controller of the LF display system). One or more electronic displays comprise a plurality of pixels, each pixel having an independently controlled intensity. Many types of commercial displays can be used in LF displays, such as emissive LED and OLED displays.

The energy device layer 220 may also contain one or more acoustic projection devices and/or one or more acoustic receiving devices. The acoustic projection devices generate one or more pressure waves that complement the holographic object 250. The generated pressure waves may be, for example, audible, ultrasonic, or some combination thereof. An array of ultrasonic pressure waves may be used for volumetric haptics (e.g., at the surface of the holographic object 250). Audible pressure waves are used to provide audio content (e.g., immersive audio) that can complement the holographic object 250. For example, assuming that holographic object 250 is a dolphin, one or more acoustic projection devices may be used to (1) generate a tactile surface collocated with the surface of the dolphin so that a viewer may touch holographic object 250; and (2) provide audio content corresponding to sounds made by a dolphin (e.g., clicks, chirps, or squeaks). Acoustic receiving devices (e.g., a microphone or microphone array) may be configured to monitor ultrasonic and/or audible pressure waves within a localized area of LF display module 210.

The energy device layer 220 may also contain one or more imaging sensors. The imaging sensor may be sensitive to light in the visible wavelength band and, in some cases, may be sensitive to light in other wavelength bands (e.g., infrared). The imaging sensor may be, for example, a Complementary Metal Oxide Semiconductor (CMOS) array, a Charge Coupled Device (CCD), an array of photodetectors, some other sensor that captures light, or some combination thereof. The LF display system may use data captured by one or more imaging sensors for locating and tracking the position of the viewer.

In some configurations, the energy relay layer 230 relays energy (e.g., electromagnetic energy, mechanical pressure waves, etc.) between the energy device layer 220 and the energy waveguide layer 240. Energy relay layer 230 includes one or more energy relay elements 260. Each energy relay element comprises a first surface 265 and a second surface 270 and relays energy between the two surfaces. The first surface 265 of each energy relay element may be coupled to one or more energy devices (e.g., an electronic display or an acoustic projection device). The energy relay elements may be constructed of, for example, glass, carbon, optical fiber, optical film, plastic, polymer, or some combination thereof. Additionally, in some embodiments, the energy relay elements may adjust the magnification (increase or decrease) of the energy passing between the first surface 265 and the second surface 270. If the relay provides magnification, it may take the form of an array of bonded cone-shaped relays, known as cones, where the area at one end of the cone may be substantially larger than the area at the opposite end. The large ends of the cones may be bonded together to form a seamless energy surface 275. One advantage is that space is created at the small end of each cone to accommodate the mechanical envelopes of multiple energy sources, such as the bezels of multiple displays. This additional room allows energy sources to be placed side by side at the small cone ends, with the active area of each energy source directing energy into the small cone end to be relayed to the large seamless energy surface. Another advantage of using cone-shaped relays is that there is no non-imaging dead space on the combined seamless energy surface formed by the large ends of the cones. There are no borders or bezels, and the seamless energy surfaces can therefore be tiled together to form a larger surface with seams that are barely detectable given the visual acuity of the eye.

The second surfaces of adjacent energy relay elements come together to form an energy surface 275. In some embodiments, the spacing between the edges of adjacent energy relay elements is less than the minimum perceivable contour defined by the visual acuity of a human eye with, e.g., 20/40 vision, such that the energy surface 275 is effectively seamless from the perspective of a viewer 280 within the viewing volume 285.
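As an illustrative calculation of what "effectively seamless" implies (the numbers below are assumptions, not taken from this disclosure): 20/40 visual acuity corresponds to resolving roughly 2 arcminutes, so at a viewing distance d the smallest perceivable gap is approximately

```latex
s_{\min} \approx d\,\theta, \qquad
\theta \approx 2\ \mathrm{arcmin} \approx 5.8\times10^{-4}\ \mathrm{rad}, \qquad
s_{\min} \approx (2\ \mathrm{m})(5.8\times10^{-4}) \approx 1.2\ \mathrm{mm},
```

so inter-element spacing well under a millimeter would be imperceptible to a 20/40 viewer standing about 2 m from the energy surface.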

In some embodiments, the second surfaces of adjacent energy relay elements are fused together, using a processing step that may involve one or more of pressure, heat, and a chemical reaction, in such a way that no seams exist between them. In still other embodiments, the array of energy relay elements is formed by molding one side of a continuous block of relay material into an array of small cone ends, each energy relay element configured to transmit energy from an energy device attached to the small cone end to a larger area of a single combined surface that is never subdivided.

In some embodiments, one or more of the energy relay elements exhibit energy localization, wherein the energy transfer efficiency in a longitudinal direction substantially perpendicular to surfaces 265 and 270 is much higher than the transfer efficiency in a perpendicular transverse plane, and wherein the energy density is highly localized in this transverse plane as the energy wave propagates between surface 265 and surface 270. This localization of energy allows the energy distribution (e.g., image) to be efficiently relayed between these surfaces without any significant loss of resolution.

Energy waveguide layer 240 uses waveguide elements to guide energy from locations (e.g., coordinates) on energy surface 275 into specific energy propagation paths that extend outward from the display surface into holographic viewing volume 285. Each energy propagation path is defined by at least two angular dimensions determined by the position of the energy surface coordinate relative to the waveguide, and the waveguide is associated with a 2D spatial coordinate. Together these four coordinates form a four-dimensional (4D) energy field. As an example, for electromagnetic energy, waveguide elements in energy waveguide layer 240 direct light from locations on seamless energy surface 275 through viewing volume 285 along different propagation directions. In various examples, the light is directed according to a 4D light field function to form holographic object 250 within holographic object volume 255.

Each waveguiding element in the energy waveguiding layer 240 may be, for example, a lenslet comprised of one or more elements. In some configurations, the lenslets may be positive lenses. The positive lens may have a spherical, aspherical or free-form surface profile. Additionally, in some embodiments, some or all of the waveguide elements may contain one or more additional optical components. The additional optical component may be, for example, an energy-suppressing structure such as a baffle, a positive lens, a negative lens, a spherical lens, an aspherical lens, a free-form lens, a liquid crystal lens, a liquid lens, a refractive element, a diffractive element, or some combination thereof. In some embodiments, at least one of the lenslets and/or additional optical components is capable of dynamically adjusting its optical power. For example, the lenslets may be liquid crystal lenses or liquid lenses. Dynamic adjustment of the surface profile of the lenslets and/or at least one additional optical component may provide additional directional control of the light projected from the waveguide element.

In the example shown, holographic object volume 255 of the LF display has a boundary formed by ray 256 and ray 257, but it may be bounded by other rays. Holographic object volume 255 is a continuous volume that extends both in front of (i.e., toward viewer 280) and behind (i.e., away from viewer 280) energy waveguide layer 240. In the illustrated example, rays 256 and 257, which are perceivable by the user, are projected from opposite edges of LF display module 210 at the maximum angle with respect to the normal to display surface 277, although other projected rays could bound these volumes. These rays define the field of view of the display and therefore the boundaries of holographic viewing volume 285. In some cases, the rays define a holographic viewing volume in which the entire display can be viewed without vignetting (e.g., an ideal viewing volume). As the field of view of the display increases, the convergence point of rays 256 and 257 moves closer to the display. Thus, a display with a larger field of view allows the viewer 280 to see the entire display at a closer viewing distance. In addition, rays 256 and 257 may bound an ideal holographic object volume. Holographic objects presented in an ideal holographic object volume can be seen from anywhere in viewing volume 285.
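
The relationship between the field of view and the distance at which the edge rays converge can be illustrated with a short calculation; the following sketch assumes a flat display of a given width with symmetric edge rays, and the function name and example numbers are hypothetical.

```python
import math

def ideal_viewing_distance(display_width_m, half_fov_deg):
    """Distance from the display at which rays projected inward from the two
    opposite edges (each at the maximum angle from the surface normal) cross.
    Beyond this distance the entire display can be seen without vignetting."""
    half_fov = math.radians(half_fov_deg)
    return (display_width_m / 2.0) / math.tan(half_fov)

# A wider field of view moves the crossing point closer to the display surface.
for half_fov in (15.0, 30.0, 45.0):
    d = ideal_viewing_distance(2.0, half_fov)   # 2 m wide display surface
    print(f"half-FOV {half_fov:4.1f} deg -> rays converge {d:.2f} m from the display")
```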

In some instances, the holographic object may be presented to only a portion of viewing volume 285. In other words, the viewing volume may be divided into any number of viewing sub-volumes (e.g., viewing sub-volume 290). In addition, a holographic object may be projected outside of holographic object volume 255. For example, holographic object 251 is presented outside of holographic object volume 255. Because holographic object 251 is presented outside of holographic object volume 255, it cannot be viewed from every location in viewing volume 285. For example, holographic object 251 may be visible from a position in viewing sub-volume 290, but not from the position of viewer 280.

Turning to FIG. 2B, for example, viewing holographic content from different viewing sub-volumes is illustrated. FIG. 2B illustrates a cross-section 200 of a portion of an LF display module in accordance with one or more embodiments. The cross-section of FIG. 2B is the same as the cross-section of FIG. 2A. However, FIG. 2B illustrates a different set of light rays projected from LF display module 210. Rays 256 and 257 still bound holographic object volume 255 and viewing volume 285. However, as shown, the rays projected from the top of LF display module 210 and the rays projected from the bottom of LF display module 210 overlap to form respective viewing sub-volumes (e.g., viewing sub-volumes 290A, 290B, 290C, and 290D) within viewing volume 285. Viewers in a first viewing sub-volume (e.g., 290A) may be able to perceive holographic content presented in holographic object volume 255, while viewers in other viewing sub-volumes (e.g., 290B, 290C, and 290D) may not be able to perceive that content.

More simply, as illustrated in FIG. 2A, holographic object volume 255 is a volume in which holographic objects may be presented by an LF display system such that they can be perceived by a viewer (e.g., viewer 280) in viewing volume 285. In this way, viewing volume 285 is an example of an ideal viewing volume, and holographic object volume 255 is an example of an ideal holographic object volume. However, in various configurations, a viewer may perceive holographic objects presented by LF display system 200 in other example holographic object volumes. More generally, a "line-of-sight guideline" applies when viewing holographic content projected from the LF display module: the line formed by the viewer's eye location and the holographic object being viewed must intersect the LF display surface.

Because the holographic content is rendered according to the 4D light field function, each eye of the viewer 280 sees a different viewing angle of the holographic object 250 when viewing the holographic content rendered by the LF display module 210. Furthermore, as viewer 280 moves within viewing volume 285, he/she will also see different viewing angles for holographic object 250, as will other viewers within viewing volume 285. As will be appreciated by those of ordinary skill in the art, 4D light field functions are well known in the art and will not be described in further detail herein.

As described in more detail herein, in some embodiments, the LF display may project more than one type of energy. For example, an LF display may project two types of energy, e.g., mechanical energy and electromagnetic energy. In this configuration, the energy relay layer 230 may contain two separate energy relays that are interleaved together at the energy surface 275, but separated such that energy is relayed to two different energy device layers 220. Here, one relay may be configured to transmit electromagnetic energy, while the other relay may be configured to transmit mechanical energy. In some embodiments, mechanical energy may be projected from locations on energy waveguide layer 240 between the electromagnetic waveguide elements, thereby facilitating the formation of structures that inhibit light from being transmitted from one electromagnetic waveguide element to another. In some embodiments, the energy waveguide layer 240 may also include waveguide elements that transmit focused ultrasound along particular propagation paths according to display instructions from the controller.

It should be noted that in an alternative embodiment (not shown), the LF display module 210 does not contain an energy relay layer 230. In this case, energy surface 275 is an emitting surface formed using one or more adjacent electronic displays within energy device layer 220. And in some embodiments without an energy relay layer, the spacing between the edges of adjacent electronic displays is less than the minimum perceivable profile defined by the visual acuity of a human eye having 20/40 vision, such that the energy surface is effectively seamless from the perspective of a viewer 280 within the viewing volume 285.

LF display module

Fig. 3A is a perspective view of an LF display module 300A in accordance with one or more embodiments. LF display module 300A may be LF display module 110 and/or LF display module 210. In other embodiments, the LF display module 300A may be some other LF display module. In the illustrated embodiment, the LF display module 300A includes an energy device layer 310, an energy relay layer 320, and an energy waveguide layer 330. LF display module 300A is configured to render holographic content from display surface 365, as described herein. For convenience, display surface 365 is shown in dashed outline on frame 390 of LF display module 300A, but it is more precisely the surface directly in front of the waveguide elements that is defined by the inner edge of frame 390. The display surface 365 includes a plurality of projection locations from which energy may be projected. Some embodiments of the LF display module 300A have different components than those described herein. For example, in some embodiments, the LF display module 300A does not include an energy relay layer 320. Similarly, functionality may be distributed among components in a different manner than described herein.

Energy device layer 310 is an embodiment of energy device layer 220. The energy device layer 310 includes four energy devices 340 (three are visible in the figure). The energy devices 340 may all be of the same type (e.g., all electronic displays) or may comprise one or more different types (e.g., comprising an electronic display and at least one acoustic energy device).

Energy relay layer 320 is an embodiment of energy relay layer 230. The energy relay layer 320 includes four energy relay devices 350 (three are visible in the figure). Energy relay devices 350 may all relay the same type of energy (e.g., light) or may relay one or more different types (e.g., light and sound). Each of the relay devices 350 includes a first surface and a second surface, the second surfaces of the energy relay devices 350 being arranged to form a single seamless energy surface 360. In the illustrated embodiment, each of the energy relay devices 350 is tapered such that the first surface has a smaller surface area than the second surface, which allows the mechanical envelope of the energy device 340 to be accommodated on the small end of the taper. This also leaves the seamless energy surface free of any border, since the entire area can project energy. This means that the seamless energy surface can be tiled by placing multiple instances of LF display module 300A together without dead space or borders, so that the entire combined surface is seamless. In other embodiments, the surface areas of the first and second surfaces are the same.
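
The role of the taper can be illustrated with simple arithmetic: if the energy device's mechanical envelope is wider than its active area, the taper's magnification must at least make up that difference for the large ends to tile seamlessly. The sketch below is only an illustration under that assumption, with hypothetical dimensions.

```python
def minimum_taper_magnification(active_width_mm, total_width_mm):
    """Smallest taper magnification (large end / small end) that lets the large
    ends of adjacent relays abut seamlessly while the energy devices -- whose
    mechanical envelope (bezel, drive electronics) is wider than their active
    area -- still fit side by side behind the small ends."""
    return total_width_mm / active_width_mm

# Hypothetical emissive display: 50 mm active area inside a 60 mm mechanical envelope.
m = minimum_taper_magnification(active_width_mm=50.0, total_width_mm=60.0)
print(f"taper magnification >= {m:.2f}")  # 1.20: a 50 mm emitter maps to a >= 60 mm relay face
```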

Energy waveguide layer 330 is an embodiment of energy waveguide layer 240. The energy waveguide layer 330 comprises a plurality of waveguide elements 370. As discussed above with respect to fig. 2, the energy waveguide layer 330 is configured to direct energy from the seamless energy surface 360 along a particular propagation path according to a 4D light field function to form a holographic object. It should be noted that in the illustrated embodiment, the energy waveguide layer 330 is defined by a frame 390. In other embodiments, the frame 390 is not present and/or the thickness of the frame 390 is reduced. The removal or reduction of the thickness of the frame 390 may facilitate tiling the LF display module 300A with additional LF display modules.

It should be noted that in the illustrated embodiment, the seamless energy surface 360 and the energy waveguide layer 330 are planar. In an alternative embodiment not shown, the seamless energy surface 360 and the energy waveguide layer 330 may be curved in one or more dimensions.

The LF display module 300A may be configured with additional energy sources residing on the seamless energy surface and allowing the projection of energy fields other than light fields. In one embodiment, an acoustic energy field may be projected from electrostatic speakers (not shown) mounted at any number of locations on the seamless energy surface 360. Further, the electrostatic speakers of the LF display module 300A are positioned within the light field display module 300A such that the dual energy surface simultaneously projects the sound field and the holographic content. For example, an electrostatic speaker may be formed with one or more diaphragm elements that transmit electromagnetic energy at some wavelengths and that are driven by one or more conductive elements (e.g., planes that sandwich the one or more diaphragm elements). The electrostatic speaker may be mounted on the seamless energy surface 360 such that the diaphragm elements cover some of the waveguide elements. The conductive electrodes of the speaker may be positioned at the same locations as structures designed to inhibit light transmission between electromagnetic waveguides, and/or at locations between electromagnetic waveguide elements (e.g., frame 390). In various configurations, the speaker may project audible sound and/or generate many sources of focused ultrasonic energy for creating tactile surfaces.

In some configurations, the energy devices 340 may sense energy. For example, an energy device may be a microphone, a light sensor, an acoustic transducer, or the like. Thus, the energy relay devices may also relay energy from the seamless energy surface 360 to the energy device layer 310. That is, the seamless energy surface 360 of the LF display module forms a bi-directional energy surface when the energy devices 340 and the energy relay devices 350 are configured to simultaneously transmit and sense energy (e.g., transmit a light field and sense sound).

More broadly, an energy device 340 of the LF display module 300A may be an energy source or an energy sensor. LF display module 300A may contain various types of energy devices that act as energy sources and/or energy sensors to facilitate the projection of high-quality holographic content to a user. Other sources and/or sensors may include thermal sensors or sources, infrared sensors or sources, image sensors or sources, mechanical energy transducers that generate acoustic energy, feedback sources, and the like. Many other sensors or sources are possible. Further, LF display modules may be tiled such that the LF display modules form an assembly that projects and senses multiple types of energy from a large aggregate seamless energy surface.

In various embodiments of LF display module 300A, the seamless energy surface 360 may have various surface portions, where each surface portion is configured to project and/or emit a particular type of energy. For example, when the seamless energy surface is a dual energy surface, the seamless energy surface 360 includes one or more surface portions that project electromagnetic energy and one or more other surface portions that project ultrasonic energy. The surface portions that project ultrasonic energy may be positioned on the seamless energy surface 360 between the electromagnetic waveguide elements and/or co-located with structures designed to inhibit light transmission between the electromagnetic waveguide elements. In examples where the seamless energy surface is a bi-directional energy surface, the energy relay layer 320 may comprise two types of energy relay devices that are interleaved at the seamless energy surface 360. In various embodiments, the seamless energy surface 360 may be configured such that the portion of the surface below any particular waveguide element 370 is all energy sources, all energy sensors, or a mixture of energy sources and energy sensors.

Fig. 3B is a cross-sectional view of an LF display module 300B containing an interleaved energy relay in accordance with one or more embodiments. Energy relay device 350A transfers energy between energy relay first surface 345A connected to energy device 340A and seamless energy surface 360. Energy relay 350B transfers energy between energy relay first surface 345B connected to energy device 340B and seamless energy surface 360. The two relays are interleaved at an interleaved energy relay 352 connected to a seamless energy surface 360. In this configuration, the surface 360 contains interleaved energy locations of both energy devices 340A and 340B, which may be energy sources or energy sensors. Thus, the LF display module 300B may be configured as a dual energy projection device for projecting more than one type of energy, or as a bi-directional energy device for projecting one type of energy and sensing another type of energy simultaneously. LF display module 300B may be LF display module 110 and/or LF display module 210. In other embodiments, the LF display module 300B may be some other LF display module.

LF display module 300B contains many components configured similarly to the components of LF display module 300A in fig. 3A. For example, in the illustrated embodiment, the LF display module 300B includes an energy device layer 310, an energy relay layer 320, a seamless energy surface 360, and an energy waveguide layer 330, including at least the same functions as described with respect to fig. 3A. Additionally, LF display module 300B may present and/or receive energy from display surface 365. Notably, the components of LF display module 300B may be alternatively connected and/or oriented as compared to the components of LF display module 300A in fig. 3A. Some embodiments of the LF display module 300B have different components than those described herein. Similarly, functionality may be distributed among components in a different manner than described herein. Fig. 3B illustrates a design of a single LF display module 300B that may be tiled to produce a dual-energy projection surface or bi-directional energy surface with a larger area.

In one embodiment, the LF display module 300B is an LF display module of a bi-directional LF display system. A bi-directional LF display system can simultaneously project energy from the display surface 365 and sense energy. The seamless energy surface 360 contains both energy projection locations and energy sensing locations that are closely interleaved on the seamless energy surface 360. Thus, in the example of fig. 3B, energy relay layer 320 is configured differently than the energy relay layer of fig. 3A. For convenience, the energy relay layer of the LF display module 300B will be referred to herein as an "interleaved energy relay layer."

The interleaved energy relay layer 320 contains two legs: a first energy relay 350A and a second energy relay 350B. In fig. 3B, each of the legs is shown as a lightly shaded area. Each of the legs may be made of a flexible relay material and formed with sufficient length to be used with various sizes and shapes of energy devices. In some areas of the interleaved energy relay layer, the two legs are tightly interleaved together as they approach the seamless energy surface 360. In the illustrated example, interleaved energy relay 352 is shown as a dark shaded area.

When interleaved at the seamless energy surface 360, the energy relay device is configured to relay energy to/from different energy devices. The energy devices are located at the energy device layer 310. As illustrated, energy device 340A is connected to energy relay 350A, and energy device 340B is connected to energy relay 350B. In various embodiments, each energy device may be an energy source or an energy sensor.

The energy waveguide layer 330 includes waveguide elements 370 to guide energy waves from the seamless energy surface 360 along a projected path toward a series of convergence points. In this example, holographic object 380 is formed at a series of convergence points. Notably, as illustrated, the convergence of energy at holographic object 380 occurs at the viewer side (i.e., front side) of display surface 365. However, in other examples, the convergence of energy may extend anywhere in the holographic object volume, both in front of display surface 365 and behind display surface 365. The waveguide element 370 can simultaneously guide incoming energy to an energy device (e.g., an energy sensor), as described below.

In one example embodiment of LF display module 300B, an emissive display is used as the energy source (e.g., energy device 340A) and an imaging sensor is used as the energy sensor (e.g., energy device 340B). In this way, LF display module 300B can simultaneously project holographic content and detect light from the volume in front of display surface 365. Thus, this embodiment of the LF display module 300B functions as both an LF display and an LF sensor.

In one embodiment, the LF display module 300B is configured to simultaneously project a light field from projection locations on the display surface into the volume in front of the display surface, and to capture a light field from in front of the display surface at those same projection locations. In this embodiment, energy relay device 350A connects a first set of locations at the seamless energy surface 360 positioned below the waveguide elements 370 to energy device 340A. In one example, the energy device 340A is an emissive display having an array of source pixels. Energy relay device 350B connects a second set of locations at the seamless energy surface 360 positioned below the waveguide elements 370 to energy device 340B. In one example, energy device 340B is an imaging sensor having an array of sensor pixels. The LF display module 300B may be configured such that the locations at the seamless energy surface 360 below a particular waveguide element 370 are all emissive display locations, all imaging sensor locations, or some combination of these locations. In other embodiments, the bi-directional energy surface may project and receive various other forms of energy.
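
One simple way to picture the interleaving of emissive and sensing locations beneath the waveguide elements is a checkerboard assignment; the sketch below is purely illustrative and does not reflect the actual interleaving geometry of any embodiment.

```python
def interleave_energy_locations(rows, cols):
    """Assign each energy-surface location under a waveguide element to either the
    emissive-display relay ('source') or the imaging-sensor relay ('sensor') in a
    checkerboard pattern, one simple way to interleave a bi-directional surface."""
    return [["source" if (r + c) % 2 == 0 else "sensor" for c in range(cols)]
            for r in range(rows)]

layout = interleave_energy_locations(4, 4)
for row in layout:
    print(" ".join(f"{cell:>6}" for cell in row))
```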

In another example embodiment of the LF display module 300B, the LF display module is configured to project two different types of energy. For example, in one embodiment, energy device 340A is an emissive display configured to emit electromagnetic energy, and energy device 340B is an ultrasonic transducer configured to emit mechanical energy. Thus, both light and sound may be projected from various locations at the seamless energy surface 360. In this configuration, energy relay device 350A connects the energy device 340A to the seamless energy surface 360 and relays the electromagnetic energy. Energy relay device 350A is configured to have properties (e.g., a varying refractive index) that enable it to efficiently transmit electromagnetic energy. Energy relay device 350B connects energy device 340B to the seamless energy surface 360 and relays mechanical energy. Energy relay device 350B is configured to have properties (e.g., a distribution of materials having different acoustic impedances) for efficient transmission of ultrasonic energy. In some embodiments, the mechanical energy may be projected from locations between the waveguide elements 370 on the energy waveguide layer 330. The locations that project mechanical energy may form structures that inhibit light transmission from one electromagnetic waveguide element to another electromagnetic waveguide element. In one example, a spatially separated array of locations projecting ultrasonic mechanical energy may be configured to form three-dimensional haptic shapes and surfaces in air. The surfaces may coincide with one or more projected holographic objects (e.g., holographic object 380). In some instances, phase delays and amplitude variations across the array may help form the haptic shapes.
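
The phase (or time) delays mentioned above can be illustrated with a standard phased-array focusing calculation: each ultrasonic source is delayed by the path-length difference to the focal point so that all contributions arrive in phase. The sketch below assumes an idealized point-source array in air; element positions and the focal point are hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focusing_delays(element_positions, focal_point):
    """Per-element emission delays (seconds) so that ultrasound from every element
    arrives at `focal_point` simultaneously: the element farthest from the focus
    fires first (zero delay), nearer elements are delayed by the path difference."""
    distances = [math.dist(p, focal_point) for p in element_positions]
    d_max = max(distances)
    return [(d_max - d) / SPEED_OF_SOUND for d in distances]

# Hypothetical 1D strip of transducers along x, focusing 0.3 m in front of the surface.
elements = [(x * 0.01, 0.0, 0.0) for x in range(-8, 9)]   # 17 elements, 10 mm pitch
delays = focusing_delays(elements, focal_point=(0.0, 0.0, 0.3))
print([f"{d * 1e6:.1f} us" for d in delays])
```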

In various embodiments, the LF display module 300B with interleaved energy relay devices may contain multiple energy device layers, where each energy device layer contains a particular type of energy device. In these examples, the energy relay layer is configured to relay the appropriate type of energy between the seamless energy surface 360 and the energy device layer 310.

Tiled LF display module

Fig. 4A is a perspective view of a portion of an LF display system 400 tiled in two dimensions to form a single-sided seamless surface environment in accordance with one or more embodiments. LF display system 400 includes a plurality of LF display modules tiled to form an array 410. More specifically, each of the tiles in array 410 represents a tiled LF display module 412. The LF display module 412 may be the same as the LF display module 300A or 300B. The array 410 may cover, for example, some or all of a surface (e.g., a wall) of a room. The LF array may cover other surfaces such as table tops, billboards, round buildings, etc.

The array 410 may project one or more holographic objects. For example, in the illustrated embodiment, array 410 projects holographic object 420 and holographic object 422. Tiling of the LF display modules 412 allows for a larger viewing volume and allows objects to be projected at greater distances from the array 410. For example, in the illustrated embodiment, the holographic object volume is approximately the entire volume in front of and behind the array 410, rather than a partial volume in front of (and behind) a single LF display module 412.

In some embodiments, LF display system 400 presents holographic object 420 to viewer 430 and viewer 434. Viewer 430 and viewer 434 receive different perspectives of holographic object 420. For example, viewer 430 is presented with a direct view of holographic object 420, while viewer 434 is presented with a more oblique view of holographic object 420. As viewer 430 and/or viewer 434 move, they are presented with different perspectives of holographic object 420. This allows a viewer to visually interact with a holographic object by moving relative to it. For example, as viewer 430 walks around holographic object 420, viewer 430 sees different sides of holographic object 420 as long as holographic object 420 remains in the holographic object volume of array 410. Thus, viewer 430 and viewer 434 may simultaneously see holographic object 420 in real-world space as if it were actually present. In addition, viewer 430 and viewer 434 do not need to wear an external device in order to view holographic object 420, because holographic object 420 is visible to the viewers in much the same way that a physical object would be. Further, here, holographic object 422 is shown behind the array, as the holographic object volume of the array extends behind the surface of the array. In this way, holographic object 422 may be presented to viewer 430 and/or viewer 434. Because the image originates behind the plane of the LF display, holographic object 422 may be a virtual image 424.

In some embodiments, the LF display system 400 may include a tracking system that tracks the location of the viewer 430 and the viewer 434. In some embodiments, the tracked location is the location of the viewer. In other embodiments, the tracked location is a location of the viewer's eyes. Eye location tracking is different from gaze tracking, which tracks where the eye is looking (e.g., using orientation to determine gaze location). The eyes of viewer 430 and the eyes of viewer 434 are located at different positions.

In various configurations, the LF display system 400 may include one or more tracking systems. For example, in the illustrated embodiment of fig. 4A, the LF display system includes a tracking system 440 external to the array 410. Here, the tracking system may be a camera system coupled to the array 410. The external tracking system is described in more detail with respect to FIG. 5A. In other example embodiments, the tracking system may be incorporated into the array 410 as described herein. For example, an energy device (e.g., energy device 340) of one or more LF display modules 412 containing a bi-directional energy surface included in the array 410 may be configured to capture an image of a viewer in front of the array 410. In any case, one or more tracking systems of LF display system 400 determine tracking information about viewers (e.g., viewer 430 and/or viewer 434) viewing holographic content presented by array 410.

The tracking information describes the location of a viewer or the location of a portion of a viewer (e.g., one or both eyes of the viewer, or the viewer's limbs) in space (e.g., relative to the tracking system). The tracking system may use any number of depth determination techniques to determine tracking information. The depth determination techniques may include, for example, structured light, time-of-flight, stereo imaging, some other depth determination technique, or some combination thereof. The tracking system may include various systems configured to determine tracking information. For example, the tracking system may include one or more infrared sources (e.g., structured light sources), one or more imaging sensors that can capture infrared images (e.g., red-green-blue-infrared cameras), and a processor that executes a tracking algorithm. The tracking system may use depth estimation techniques to determine the location of a viewer. In some embodiments, LF display system 400 generates holographic objects based on the tracked positioning, motion, or gestures of viewer 430 and/or viewer 434 as described herein. For example, LF display system 400 may generate a holographic object in response to a viewer coming within a threshold distance of the array 410 and/or to a particular location.
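
As one example of the depth determination techniques listed above, a time-of-flight measurement converts the round-trip time of a light pulse into distance. The sketch below is a minimal illustration of that conversion; the example timing value is hypothetical.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def time_of_flight_depth(round_trip_seconds):
    """Depth of a reflecting surface from the round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~20 ns corresponds to a viewer roughly 3 m away.
print(f"{time_of_flight_depth(20e-9):.2f} m")
```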

LF display system 400 may present one or more holographic objects tailored to each viewer based in part on the tracking information. For example, holographic object 420 may be presented to viewer 430 instead of holographic object 422. Similarly, holographic object 422 may be presented to viewer 434 instead of holographic object 420. For example, LF display system 400 tracks the location of each of viewer 430 and viewer 434. LF display system 400 determines the perspective of a holographic object that should be visible to a viewer based on the viewer's position relative to where the holographic object is to be presented. LF display system 400 selectively projects light from the particular pixels corresponding to the determined perspective. Thus, viewer 434 and viewer 430 may have potentially very different experiences at the same time. In other words, LF display system 400 can present holographic content to a viewing sub-volume of the viewing volume (i.e., similar to viewing sub-volumes 290A, 290B, 290C, and 290D shown in fig. 2B). For example, as illustrated, because LF display system 400 may track the location of viewer 430, LF display system 400 may present certain content (e.g., holographic object 420) to the viewing sub-volume that surrounds viewer 430 and different content, such as wildlife content (e.g., holographic object 422), to the viewing sub-volume that surrounds viewer 434. In contrast, conventional systems would have to use separate headsets to provide a similar experience.
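
The selective projection described above can be pictured with the line-of-sight guideline: for a tracked eye position and a holographic object point, the content is carried by the ray whose path passes through both, and that ray crosses the display surface at one specific location and angle. The sketch below is a simplified geometric illustration that assumes a flat display in the z = 0 plane; the coordinates and names are hypothetical.

```python
import math

def ray_for_viewer(eye_pos, object_point):
    """For a holographic object point and a tracked eye position, find where the
    line of sight crosses the display plane (z = 0) and the projection angles the
    waveguide element at that location must use, following the line-of-sight
    guideline that the eye-to-object line must intersect the display surface."""
    ex, ey, ez = eye_pos
    ox, oy, oz = object_point
    if ez == oz:
        raise ValueError("eye and object point must lie at different depths")
    t = ez / (ez - oz)                        # parameter where the line hits z = 0
    x = ex + t * (ox - ex)
    y = ey + t * (oy - ey)
    lx, ly, lz = ex - ox, ey - oy, ez - oz    # direction of the projected light (toward the eye)
    theta = math.degrees(math.atan2(lx, lz))  # horizontal angle from the display normal
    phi = math.degrees(math.atan2(ly, lz))    # vertical angle from the display normal
    return (x, y), (theta, phi)

# Hypothetical viewer 2 m in front of the display, object floating 0.5 m in front of it.
location, angles = ray_for_viewer(eye_pos=(0.3, 1.6, 2.0), object_point=(0.0, 1.5, 0.5))
print(location, angles)
```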

In some embodiments, LF display system 400 may include one or more sensory feedback systems. The sensory feedback systems provide other sensory stimuli (e.g., tactile, audio, or scent) that enhance holographic objects 420 and 422. For example, in the illustrated embodiment of fig. 4A, LF display system 400 includes a sensory feedback system 442 external to array 410. In one example, the sensory feedback system 442 can be an electrostatic speaker coupled to the array 410. The external sensory feedback system is described in more detail with respect to fig. 5A. In other example embodiments, a sensory feedback system may be incorporated into the array 410, as described herein. For example, an energy device (e.g., energy device 340A in fig. 3B) of an LF display module 412 included in the array 410 may be configured to project ultrasonic energy to and/or receive imaging information from viewers in front of the array. In any case, the sensory feedback system presents sensory content to and/or receives sensory content from a viewer (e.g., viewer 430 and/or viewer 434) viewing holographic content (e.g., holographic object 420 and/or holographic object 422) presented by array 410.

LF display system 400 may include a sensory feedback system 442 that includes one or more acoustic projection devices external to the array. Alternatively or additionally, LF display system 400 may include one or more acoustic projection devices integrated into array 410, as described herein. An acoustic projection device may be composed of an array of ultrasonic sources configured to project a volumetric tactile surface. In some embodiments, the tactile surface may coincide with one or more surfaces of a holographic object (e.g., a surface of holographic object 420) if a portion of the viewer is within a threshold distance of the one or more surfaces. The volumetric tactile surface may allow a user to touch and feel the surface of the holographic object. The acoustic projection devices may also project audible pressure waves that provide audio content (e.g., immersive audio) to the viewer. Thus, the ultrasonic pressure waves and/or the audible pressure waves may act to complement the holographic object.
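
A coincident tactile surface of the kind described above might be gated on proximity, for example by activating the ultrasonic sources only when a tracked hand comes within a threshold distance of the holographic object's surface. The sketch below illustrates such a check for a spherically approximated object; the threshold and geometry are assumptions made for illustration.

```python
import math

TOUCH_THRESHOLD_M = 0.05  # hypothetical activation distance

def should_project_tactile_surface(hand_pos, object_center, object_radius):
    """Return True when the tracked hand is within the threshold distance of a
    (spherically approximated) holographic object surface, so the ultrasonic
    sources can project a tactile surface coincident with the visible one."""
    distance_to_surface = abs(math.dist(hand_pos, object_center) - object_radius)
    return distance_to_surface <= TOUCH_THRESHOLD_M

print(should_project_tactile_surface((0.02, 0.0, 0.48), (0.0, 0.0, 0.40), 0.10))
```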

In various embodiments, the LF display system 400 may provide other sensory stimuli based in part on the tracked location of a viewer. For example, holographic object 422 illustrated in fig. 4A is a lion, and LF display system 400 may cause holographic object 422 to growl both visually (i.e., holographic object 422 appears to growl) and aurally (i.e., one or more acoustic projection devices project pressure waves) so that viewer 430 perceives the growl of the lion as coming from holographic object 422.

It should be noted that in the illustrated configuration, the holographic viewing volume may be limited in a manner similar to viewing volume 285 of LF display system 200 in fig. 2. This may limit the perceived immersion that a viewer will experience with a single wall display unit. One way to address this problem is to use multiple LF display modules tiled along multiple sides, as described below with respect to fig. 4B-4F.

Fig. 4B is a perspective view of a portion of LF display system 402 in a multi-faceted, seamless surface environment in accordance with one or more embodiments. LF display system 402 is substantially similar to LF display system 400 except that multiple LF display modules are tiled to create a multi-faceted seamless surface environment. More specifically, the LF display modules are tiled to form an array that is a six-sided aggregated seamless surface environment. In fig. 4B, the LF display modules cover all the walls, the ceiling, and the floor of the room. In other embodiments, the LF display modules may cover some, but not all, of the walls, the floor, the ceiling, or some combination thereof. In other embodiments, the LF display modules are tiled to form some other aggregate seamless surface. For example, the walls may be curved such that a cylindrical aggregated energy environment is formed. Further, as described below with respect to figs. 6-9, in some embodiments, LF display modules may be tiled to form a surface in a venue (e.g., a wall, etc.).

LF display system 402 may project one or more holographic objects. For example, in the illustrated embodiment, LF display system 402 projects holographic object 420 into an area surrounded by the six-sided aggregated seamless surface environment. In this example, the viewing volume of the LF display system is also contained within the six-sided aggregated seamless surface environment. It should be noted that in the illustrated configuration, viewer 434 may be positioned between holographic object 420 and LF display module 414, which projects energy (e.g., light and/or pressure waves) used to form holographic object 420. Thus, the positioning of viewer 434 may prevent viewer 430 from perceiving holographic object 420 as formed by energy from LF display module 414. However, in the illustrated configuration, there is at least one other LF display module, e.g., LF display module 416, that is unobstructed (e.g., by viewer 434) and that can project energy to form holographic object 420 for observation by viewer 430. In this way, occlusion by viewers in the space may cause some parts of a holographic projection to disappear, but this effect is much smaller than if only one side of the volume were covered with holographic display panels. Holographic object 422 is shown "outside" the walls of the six-sided aggregated seamless surface environment because the holographic object volume extends behind the aggregate surface. Accordingly, viewer 430 and/or viewer 434 may perceive holographic object 422 as being "outside" of the enclosed six-sided environment throughout which they may move.

As described above with reference to fig. 4A, in some embodiments, LF display system 402 actively tracks the location of the viewer and may dynamically instruct different LF display modules to render holographic content based on the tracked location. Thus, the multi-faceted configuration may provide a more robust environment (e.g., relative to fig. 4A) to provide a holographic object in which an unconstrained viewer may freely move throughout the area encompassed by the multi-faceted seamless surface environment.

It is noted that various LF display systems may have different configurations. Further, each configuration may have a particular arrangement of surfaces that come together to form a seamless display surface (an "aggregate surface"). That is, the LF display modules of the LF display system may be tiled to form various aggregate surfaces. For example, in fig. 4B, LF display system 402 contains LF display modules tiled to form a six-sided aggregate surface that approximates the walls of a room. In some other examples, the aggregate surface may be present on only a portion of a surface (e.g., half of a wall) rather than the entire surface (e.g., the entire wall). Some examples are described herein.

In some configurations, the aggregate surface of the LF display system may be configured to project energy toward a localized viewing volume. Projecting energy toward a localized viewing volume allows for a higher-quality viewing experience by, for example, increasing the density of projected energy in a particular viewing volume, increasing the FOV for viewers in that viewing volume, and bringing the viewing volume closer to the display surface.

For example, fig. 4C illustrates a top view of LF display system 450A with an aggregate surface in a "winged" configuration. In this example, the LF display system 450A is positioned in a room having a front wall 452, a rear wall 454, a first side wall 456, a second side wall 458, a ceiling (not shown), and a floor (not shown). The first side wall 456, the second side wall 458, the rear wall 454, the floor, and the ceiling are all orthogonal. LF display system 450A includes LF display modules tiled to form an aggregate surface 460 covering the front wall. The front wall 452, and thus the aggregate surface 460, comprises three portions: (i) a first portion 462 that is substantially parallel to the rear wall 454 (i.e., the center surface), (ii) a second portion 464 that connects the first portion 462 to the first side wall 456 and is angled to project energy toward the center of the room (i.e., the first side surface), and (iii) a third portion 466 that connects the first portion 462 to the second side wall 458 and is angled to project energy toward the center of the room (i.e., the second side surface). The first portion lies in a vertical plane in the room and has a horizontal axis and a vertical axis. The second portion and the third portion are angled along the horizontal axis toward the center of the room.

In this example, the viewing volume 468A of LF display system 450A is located in the center of the room and is partially surrounded by the three portions of the aggregate surface 460. An aggregate surface that at least partially surrounds a viewer (an "enclosing surface") increases the immersive experience for the viewer.

For illustration, consider an aggregate surface having only a central surface. Referring to FIG. 2A, rays projected from either end of the display surface create an ideal holographic object volume and an ideal viewing volume, as described above. Now consider a configuration in which the central surface is supplemented by two side surfaces angled toward the viewer. In this case, rays 256 and 257 will be projected at a greater angle from the normal to the central surface. Thus, the field of view of the viewing volume will increase. Similarly, the holographic viewing volume will be closer to the display surface. In addition, because the second and third portions are tilted closer to the viewing volume, a holographic object projected at a fixed distance from the display surface is closer to the viewing volume.

For simplicity, a display surface with only a central surface has a planar field of view, a planar threshold spacing between the (central) display surface and the viewing volume, and a planar proximity between the holographic object and the viewing volume. The addition of one or more side surfaces angled toward the viewer increases the field of view relative to a planar field of view, decreases the separation between the display surface and the viewing volume relative to a planar separation, and increases the proximity between the display surface and the holographic object relative to a planar proximity. Further angling the side surfaces toward the viewer further increases the field of view, reduces the separation and increases proximity. In other words, the angled placement of the side surfaces increases the immersive experience for the viewer.

In addition, as described below with respect to fig. 6, the deflection optics may be used to optimize the size and positioning of the viewing volume for LF display parameters (e.g., size and FOV).

In a similar example, fig. 4D shows a side view of an LF display system 450B with an aggregate surface in a "tilted" configuration. In this example, the LF display system 450B is positioned in a room having a front wall 452, a rear wall 454, a first side wall (not shown), a second side wall (not shown), a ceiling 472, and a floor 474. The first side wall, second side wall, rear wall 454, floor 474, and ceiling 472 are all orthogonal. LF display system 450B includes LF display modules tiled to form an aggregate surface 460 covering the front wall. The front wall 452, and thus the aggregate surface 460, comprises three portions: (i) a first portion 462 that is substantially parallel to the rear wall 454 (i.e., the center surface), (ii) a second portion 464 that connects the first portion 462 to the ceiling 472 and is angled to project energy toward the center of the room (i.e., the first side surface), and (iii) a third portion 466 that connects the first portion 462 to the floor 474 and is angled to project energy toward the center of the room (i.e., the second side surface). The first portion lies in a vertical plane in the room and has a horizontal axis and a vertical axis. The second and third portions are angled along the vertical axis toward the center of the room.

In this example, the viewing volume 468B of the LF display system 450B is located in the center of the room and is partially surrounded by the three portions of the aggregate surface 460. Similar to the configuration shown in fig. 4C, the two side portions (e.g., second portion 464 and third portion 466) are angled to partially enclose the viewer and form an enclosing surface. The enclosing surface increases the viewing FOV from the perspective of any viewer in the holographic viewing volume 468B. In addition, the enclosing surface allows the viewing volume 468B to be closer to the surface of the display, so that projected objects appear closer. In other words, the angled placement of the side surfaces increases the field of view, reduces the spacing, and increases the proximity of the aggregate surface, thereby increasing the immersive experience for the viewer. Further, as will be discussed below, deflection optics may be used to optimize the size and positioning of the viewing volume 468B.

The angled configuration of the side portions of the aggregate surface 460 enables holographic content to be presented closer to the viewing volume 468B than if the third portion 466 were not angled. For example, the lower extremities (e.g., legs) of a character presented by an LF display system in the tilted configuration may appear closer and more realistic than if an LF display system with a flat front wall were used.

In addition, the configuration of the LF display system and the environment in which it is located may inform the shape and location of the viewing volume and of the viewing sub-volumes.

Fig. 4E, for example, illustrates a top view of an LF display system 450C with an aggregate surface 460 on the front wall 452 of a room. In this example, the LF display system 450C is positioned in a room having a front wall 452, a rear wall 454, a first side wall 456, a second side wall 458, a ceiling (not shown), and a floor (not shown).

LF display system 450C projects various rays from the aggregate surface 460. The light rays are projected from each location on the display surface into a range of angles centered on the viewing volume. Rays projected from the left side of the aggregate surface 460 have a horizontal angular extent 481, rays projected from the right side of the aggregate surface have a horizontal angular extent 482, and rays projected from the center of the aggregate surface 460 have a horizontal angular extent 483. Between these points, the projected rays may take on intermediate angular ranges, as described below with respect to fig. 6. Having such a gradient of deflection angles in the rays projected across the display surface creates a viewing volume 468C. Furthermore, this configuration avoids wasting display resolution by projecting rays into the side walls 456 and 458.
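
The gradient of deflection angles described above can be illustrated by aiming the center of each column's angular range at a viewing volume located on the display axis; the sketch below assumes a flat display and a single on-axis viewing distance, and all names and numbers are hypothetical.

```python
import math

def deflection_angles(display_width_m, num_columns, viewing_distance_m):
    """Center deflection angle (degrees from the display normal) for each column of
    projection locations so that its angular range is centered on a viewing volume
    on the display axis at `viewing_distance_m`: columns near the left edge deflect
    right, columns near the right edge deflect left, and the center column projects
    straight ahead, producing a gradient of deflection angles across the surface."""
    angles = []
    for i in range(num_columns):
        x = (i / (num_columns - 1) - 0.5) * display_width_m  # column position, 0 at center
        angles.append(math.degrees(math.atan2(-x, viewing_distance_m)))
    return angles

for label, angle in zip(("left edge", "center", "right edge"),
                        deflection_angles(4.0, 3, 3.0)):
    print(f"{label:>10}: {angle:6.1f} deg")
```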

Fig. 4F illustrates a side view of an LF display system 450D with an aggregate surface 460 on the front wall 452 of a room. In this example, the LF display system 450D is positioned in a room having a front wall 452, a rear wall 454, a first side wall (not shown), a second side wall (not shown), a ceiling 472, and a floor 474. In this example, the floor is layered such that each layer steps up from the front wall to the rear wall. Here, each layer of the floor includes a viewing sub-volume (e.g., viewing sub-volumes 470A and 470B). The layered floor allows viewing sub-volumes that do not overlap: each viewing sub-volume has a line of sight to the aggregate surface 460 that does not pass through another viewing sub-volume. This orientation creates a "stadium seating" effect in which the vertical offset between the layers provides an unobstructed sight line, so that each layer can "see" over the viewing sub-volumes of the other layers. An LF display system with non-overlapping viewing sub-volumes may provide a higher-quality viewing experience than an LF display system with overlapping viewing volumes. For example, in the configuration shown in fig. 4F, different holographic content may be projected to viewers in the viewing sub-volumes 470A and 470B.

Control of LF display system

Fig. 5A is a block diagram of an LF display system 500 in accordance with one or more embodiments. LF display system 500 includes an LF display assembly 510 and a controller 520. LF display assembly 510 includes one or more LF display modules 512 that project a light field. An LF display module 512 may include a source/sensor system 514 that includes one or more integrated energy sources and/or one or more energy sensors that project and/or sense other types of energy. Controller 520 includes data storage 522, a network interface 524, and an LF processing engine 530. The controller 520 may also include a tracking module 526 and a viewer profiling module 528. In some embodiments, the LF display system 500 also includes a sensory feedback system 570 and a tracking system 580. The LF display systems described in the context of figs. 1, 2, 3, and 4 are embodiments of the LF display system 500. In other embodiments, the LF display system 500 includes additional or fewer modules than those described herein. Similarly, functionality may be distributed among the modules and/or different entities in a manner different from that described herein. Applications of the LF display system 500 are also discussed in detail below with respect to figs. 2 through 5.

LF display assembly 510 provides holographic content in a holographic object volume that may be visible to viewers positioned within a viewing volume. LF display assembly 510 may provide holographic content by executing display instructions received from controller 520. The holographic content may include one or more holographic objects projected in front of the aggregate surface of LF display assembly 510, behind the aggregate surface of LF display assembly 510, or some combination thereof. The generation of display instructions by controller 520 is described in more detail below.

LF display assembly 510 provides holographic content using one or more LF display modules included in LF display assembly 510 (e.g., any of LF display module 110, LF display system 200, and LF display module 300). For convenience, the one or more LF display modules are referred to herein as LF display modules 512. The LF display modules 512 may be tiled to form the LF display assembly 510. The LF display modules 512 may be structured into various seamless surface environments (e.g., single-sided, multi-sided, the walls of a venue, curved surfaces, etc.). That is, the tiled LF display modules form an aggregate surface. As previously described, an LF display module 512 includes an energy device layer (e.g., energy device layer 220) and an energy waveguide layer (e.g., energy waveguide layer 240) that render holographic content. The LF display module 512 may also include an energy relay layer (e.g., energy relay layer 230) that transfers energy between the energy device layer and the energy waveguide layer when rendering holographic content.

An LF display module 512 may also contain other integrated systems configured for energy projection and/or energy sensing as previously described. For example, the light field display module 512 may include any number of energy devices (e.g., energy device 340) configured to project and/or sense energy. For convenience, the integrated energy projection systems and the integrated energy sensing systems of the LF display module 512 are collectively described herein as a source/sensor system 514. The source/sensor system 514 is integrated within the LF display module 512 such that the source/sensor system 514 shares the same seamless energy surface as the LF display module 512. In other words, the aggregate surface of LF display assembly 510 contains the functionality of both the LF display modules 512 and the source/sensor systems 514. That is, LF display assembly 510, including LF display modules 512 with source/sensor systems 514, may project energy and/or sense energy while simultaneously projecting a light field. For example, LF display assembly 510 may contain an LF display module 512 and a source/sensor system 514 configured as a dual energy surface or a bi-directional energy surface as previously described.

In some embodiments, LF display system 500 enhances the generated holographic content with other sensory content (e.g., coordinated touch, audio, or smell) using sensory feedback system 570. Sensory feedback system 570 may augment the projection of holographic content by executing display instructions received from controller 520. In general, sensory feedback system 570 includes any number of sensory feedback devices (e.g., sensory feedback system 442) external to LF display assembly 510. Some example sensory feedback devices include coordinated acoustic projection and reception devices, fragrance projection devices, temperature adjustment devices, force actuation devices, pressure sensors, transducers, and the like. In some cases, the sensory feedback system 570 may have functionality similar to that of the light field display assembly 510, and vice versa. For example, both the sensory feedback system 570 and the light field display assembly 510 may be configured to produce a sound field. As another example, the sensory feedback system 570 may be configured to generate a tactile surface without using the light field display assembly 510.

To illustrate, in an example embodiment of the light field display system 500, the sensory feedback system 570 may comprise one or more acoustic projection devices. The one or more acoustic projection devices are configured to generate one or more pressure waves complementary to the holographic content upon execution of display instructions received from the controller 520. The generated pressure waves may be, for example, audible (for sound), ultrasonic (for touch), or some combination thereof. Similarly, sensory feedback system 570 may comprise a fragrance projection device. The fragrance projection device may be configured to provide a fragrance to some or all of the target area when executing display instructions received from the controller. The fragrance projection device may be connected to an air circulation system (e.g., a duct, fan, or vent) to coordinate the air flow within the target area. In addition, sensory feedback system 570 may include a temperature adjustment device. The temperature adjustment device is configured to increase or decrease the temperature in some or all of the target area when executing display instructions received from the controller 520.

In some embodiments, sensory feedback system 570 is configured to receive input from viewers of LF display system 500. In this case, the sensory feedback system 570 includes various sensory feedback devices for receiving input from viewers. The sensory feedback devices may include devices such as acoustic receiving devices (e.g., microphones), pressure sensors, joysticks, motion detectors, transducers, and the like. The sensory feedback system may transmit the detected inputs to controller 520 to coordinate the generation of holographic content and/or sensory feedback.

To illustrate, in an example embodiment of a light field display component, the sensory feedback system 570 includes a microphone. The microphone is configured to record audio (e.g., wheezing, screaming, laughing, etc.) produced by one or more viewers. The sensory feedback system 570 provides the recorded audio as viewer input to the controller 520. The controller 520 may generate holographic content using the viewer input. Similarly, sensory feedback system 570 may comprise a pressure sensor. The pressure sensor is configured to measure a force applied to the pressure sensor by a viewer. Sensory feedback system 570 can provide the measured force as a viewer input to controller 520.

In some embodiments, the LF display system 500 includes a tracking system 580. The tracking system 580 includes any number of tracking devices configured to determine the location, movement, and/or characteristics of viewers in the target area. Typically, the tracking devices are external to the LF display assembly 510. Some example tracking devices include a camera component ("camera"), a depth sensor, a structured light system, a LIDAR system, a card scanning system, or any other tracking device that can track viewers within a target area.

The tracking system 580 may include one or more energy sources that illuminate some or all of the target area with light. In some cases, however, the target area is illuminated with natural light, ambient light, and/or light from LF display assembly 510 when holographic content is being presented. The energy sources project light when executing instructions received from the controller 520. The light may be, for example, a structured light pattern, a light pulse (e.g., an IR flash), or some combination thereof. The tracking system may project light in the visible band (about 380 nm to 750 nm), the infrared (IR) band (about 750 nm to 1700 nm), the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof. A source may comprise, for example, a light-emitting diode (LED), a micro LED, a laser diode, a TOF depth sensor, a tunable laser, etc.

The tracking system 580 may adjust one or more transmit parameters when executing instructions received from the controller 520. The emission parameters are parameters that affect how light is projected from the source of the tracking system 580. The emission parameters may include, for example, brightness, pulse rate (including continuous illumination), wavelength, pulse length, some other parameter that affects how light is projected from the source assembly, or some combination thereof. In one embodiment, the source projects a pulse of light in time-of-flight operation.

The cameras of tracking system 580 capture images of the light (e.g., a structured light pattern) reflected from the target area. The cameras capture images when executing tracking instructions received from the controller 520. As previously described, the light may be projected by a source of the tracking system 580. The cameras may comprise one or more cameras. That is, a camera may be, for example, an array of photodiodes (1D or 2D), a CCD sensor, a CMOS sensor, some other device that detects some or all of the light projected by the tracking system 580, or some combination thereof. In one embodiment, tracking system 580 may contain a light field camera external to LF display assembly 510. In other embodiments, a camera is included as part of the LF display source/sensor system 514 included in LF display assembly 510. For example, if the energy relay layer of light field module 512 is a bi-directional energy layer that interleaves both emissive displays and imaging sensors at energy device layer 220, as previously described, then LF display assembly 510 may be configured to simultaneously project a light field and record imaging information from the viewing area in front of the display. In one embodiment, the bi-directional energy surface, together with the images captured from it, functions as a light field camera. The cameras provide the captured images to the controller 520.

When executing the tracking instructions received from controller 520, the camera of tracking system 580 may adjust one or more imaging parameters. Imaging parameters are parameters that affect how the camera captures an image. The imaging parameters may include, for example, frame rate, aperture, gain, exposure length, frame timing, rolling shutter or global shutter capture mode, some other parameter that affects how the camera captures images, or some combination thereof.

Controller 520 controls LF display assembly 510 and any other components of LF display system 500. The controller 520 includes a data storage 522, a network interface 524, a tracking module 526, a viewer profiling module 528, and a light field processing engine 530. In other embodiments, controller 520 includes more or fewer modules than described herein. Similarly, functionality may be distributed among modules and/or different entities in a manner different from that described herein. For example, the tracking module 526 may be part of the LF display component 510 or the tracking system 580.

The data storage device 522 is a memory that stores information for the LF display system 500. The stored information may include display instructions, tracking instructions, emission parameters, imaging parameters, a virtual model of the target area, tracking information, images captured by the camera, one or more viewer profiles, calibration data for the LF display component 510, configuration data for the LF display component 510 (including the resolution and orientation of the LF modules 512), desired viewer geometry, content for graphics creation including 3D models, scenes, environments, and textures, other information that the LF display system 500 may use, or some combination thereof. The data storage 522 is a memory, such as a read only memory (ROM), dynamic random access memory (DRAM), static random access memory (SRAM), or some combination thereof.

The network interface 524 allows the LF display system 500 to communicate with other systems or environments over a network. In one example, LF display system 500 receives holographic content from a remote LF display system through network interface 524. In another example, LF display system 500 uses network interface 524 to transmit holographic content to a remote data storage device.

Tracking module 526 tracks viewers viewing content presented by LF display system 500. To this end, the tracking module 526 generates tracking instructions that control the operation of one or more sources and/or one or more cameras of the tracking system 580 and provides the tracking instructions to the tracking system 580. The tracking system 580 executes the tracking instructions and provides tracking inputs to the tracking module 526.

The tracking module 526 may determine the location of one or more viewers within the target area (e.g., sitting in the seats of the venue). The determined position may be relative to, for example, some reference point (e.g., the display surface). In other embodiments, the determined location may be within a virtual model of the target area. The tracked position may be, for example, a tracked position of the viewer and/or a tracked position of a portion of the viewer (e.g., eye position, hand position, etc.). The tracking module 526 uses one or more captured images from the cameras of the tracking system 580 to determine these positions. The cameras of tracking system 580 may be distributed around LF display system 500 and may capture stereo images, allowing tracking module 526 to passively track the viewer. In other embodiments, the tracking module 526 actively tracks the viewer. That is, tracking system 580 illuminates some portion of the target area, images the target area, and tracking module 526 determines positions using time-of-flight and/or structured light depth determination techniques. The tracking module 526 uses the determined positions to generate tracking information.
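
As a minimal sketch of active, depth-based tracking, the snippet below back-projects a tracked pixel and its time-of-flight depth sample into a 3D position relative to the camera; the pinhole intrinsics and the detected pixel are hypothetical values used only for illustration.

```python
import numpy as np

def pixel_to_position(u: int, v: int, depth_m: float,
                      fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a tracked pixel (u, v) with a time-of-flight depth sample
    into a 3D position relative to the tracking camera (which could itself be
    referenced to the display surface)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: a viewer's eye detected at pixel (640, 360) at 2.5 m depth,
# with hypothetical camera intrinsics.
eye_position = pixel_to_position(640, 360, 2.5, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
```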

The tracking module 526 may also receive tracking information as input from a viewer of the LF display system 500. The tracking information may contain body movements corresponding to various input options provided to the viewer by LF display system 500. For example, the tracking module 526 may track the viewer's body movements and assign any of the various movements as input to the LF processing engine 530. Tracking module 526 may provide tracking information to data store 522, LF processing engine 530, viewer profiling module 528, any other component of LF display system 500, or some combination thereof.

To provide context for the tracking module 526, consider an example embodiment of an LF display system 500 that displays a drama in which an actor successfully defeats an enemy character. In response to the scene, a viewer waves a fist in the air to show excitement. The tracking system 580 may record the movement of the viewer's hand and transmit the recording to the tracking module 526. As previously described, this may be accomplished by a tracking system 580 that includes a camera, depth sensor, or other device external to light field display assembly 510, by a display surface that simultaneously projects a light field image and records images (where the images recorded from the display surface may be light field images), or by any combination of these devices. The tracking module 526 tracks the movement of the viewer's hand in the recording and sends the input to the LF processing engine 530. As described below, the viewer profiling module 528 determines that the information in the images indicates that the motion of the viewer's hand is associated with a positive response. Thus, if enough viewers are identified with positive responses, LF processing engine 530 generates appropriate holographic content celebrating the defeat of the enemy character. For example, LF processing engine 530 may project confetti into the scene.

The LF display system 500 includes a viewer profiling module 528 configured to identify and profile viewers. Viewer profiling module 528 generates a profile of a viewer (or viewers) viewing holographic content displayed by LF display system 500. The viewer profiling module 528 generates a viewer profile based in part on viewer inputs and on monitored viewer behaviors, actions, and reactions. The viewer profiling module 528 may access information obtained from the tracking system 580 (e.g., recorded images, videos, sounds, etc.) and process that information to determine various information about the viewer. In various examples, the viewer profiling module 528 may use any number of machine vision or machine hearing algorithms to determine viewer behaviors, actions, and responses. Monitored viewer behaviors may include, for example, smiling, cheering, applauding, laughing, being frightened, screaming, showing excitement, recoiling, other changes in posture or movement of the viewer, and so on.

More generally, the viewer profile may contain any information received and/or determined about the viewer viewing the holographic content from the LF display system. For example, each viewer profile may record the viewer's actions or responses to content displayed by the LF display system 500. Some example information that may be included in a viewer profile is provided below.

In some embodiments, the viewer profile may describe the viewer's responses with respect to displayed performers (e.g., actors, singers, etc.), venues (e.g., stages, concert halls, etc.), and so on. For example, the viewer profile may indicate that the viewer typically responds positively to performances by groups of handsome male singers between 18 and 25 years of age.

In some embodiments, the viewer profile may indicate characteristics of viewers watching the performance. For example, a viewer in the venue may be wearing a jersey showing a university logo. In this case, the viewer profile may indicate that the viewer is wearing the jersey and may prefer holographic content associated with the university logo on the jersey. More broadly, the viewer characteristics that may be indicated in the viewer profile may include, for example, age, gender, race, clothing, viewing location in the venue, and the like.

In some embodiments, the viewer profile may indicate preferences of the viewer with respect to desired performance and/or venue characteristics. For example, the viewer profile may indicate that the viewer only likes to view holographic content that is appropriate for the ages of everyone in their family. In another example, the viewer profile may indicate a holographic object volume in which the viewer prefers holographic content to be displayed (e.g., on a wall) and a holographic object volume in which the viewer prefers holographic content not to be displayed (e.g., above their head). The viewer profile may also indicate whether the viewer prefers to have a tactile interface presented in their vicinity or prefers to avoid the tactile interface.

In another example, the viewer profile indicates a history of performances watched by a particular viewer. For example, the viewer profiling module 528 determines that a viewer or group of viewers has previously viewed a performance. Accordingly, the LF display system 500 can display holographic content that is different from the performance that the viewer previously viewed. As an example, a performance containing holographic content may have three different endings, and LF display system 500 may display a different ending based on the viewers present. In another example, each of the three endings may be presented to a different viewing volume in the same venue.

In some embodiments, a viewer profile may also describe the characteristics and preferences of a group of viewers rather than a particular viewer. For example, the viewer profiling module 528 may generate a viewer profile for an audience viewing a performance in a venue. In one example, the viewer profiling module 528 creates a viewer profile for viewers watching a performance by a teen idol. The profile indicates that 86.3% of the viewers are women between the ages of 20 and 35 who have a positive response to the performance. The profile also indicates that the remaining 13.7% of the viewers are men between the ages of 20 and 35 who have a negative reaction to the performance. Any of the information and characteristics described previously may be applied to a group of viewers.

The viewer profiling module 528 may also access profiles associated with a particular viewer (or viewers) from one or more third-party systems to establish a viewer profile. For example, a viewer purchases a ticket for a performance using a third-party provider linked to the viewer's social media account. Thus, the viewer's ticket is linked to their social media account. When the viewer enters the performance venue using their ticket, the viewer profiling module 528 may access information from their social media account to establish (or enhance) a viewer profile.
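
The sketch below shows one possible shape for such a viewer profile record; every field name and the `record_response` helper are illustrative assumptions rather than a schema defined by the system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ViewerProfile:
    """A minimal viewer profile record; field names are illustrative only."""
    viewer_id: str
    characteristics: Dict[str, str] = field(default_factory=dict)  # e.g., {"clothing": "jersey"}
    preferences: Dict[str, str] = field(default_factory=dict)      # e.g., {"haptics": "avoid"}
    responses: List[dict] = field(default_factory=list)            # timestamped reactions
    visit_history: List[str] = field(default_factory=list)         # ISO dates of visits

    def record_response(self, timestamp: str, reaction: str, intensity: float) -> None:
        """Append one monitored reaction (e.g., 'smile', 'cheer') to the profile."""
        self.responses.append({"t": timestamp, "reaction": reaction, "intensity": intensity})
```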

In some embodiments, the data store 522 includes a viewer profile store that stores viewer profiles generated, updated, and/or maintained by the viewer profiling module 528. The viewer profiles may be updated in the data store at any time by the viewer profiling module 528. For example, in one embodiment, when a particular viewer views holographic content provided by LF display system 500, the viewer profile store receives and stores information about that viewer in their viewer profile. In this example, the viewer profiling module 528 contains a facial recognition algorithm that can identify a viewer and positively identify the viewer as they view the presented holographic content. To illustrate, the tracking system 580 obtains an image of the viewer as the viewer enters the target area of the LF display system 500. The viewer profiling module 528 receives the captured image and uses the facial recognition algorithm to identify the viewer's face. The identified face is associated with a viewer profile in the profile store, and as such, all of the obtained input information about the viewer may be stored in their profile. The viewer profiling module may also positively identify the viewer using a card identification scanner, a voice identifier, a radio frequency identification (RFID) chip scanner, a bar code scanner, or the like.
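
A minimal sketch of the association step is given below: assuming some recognition model has already produced face embeddings, the viewer is matched to the enrolled profile with the most similar embedding. The cosine-similarity comparison, the threshold, and all names are hypothetical and stand in for whatever recognition pipeline a deployment actually uses.

```python
import numpy as np
from typing import Dict, Optional

def identify_viewer(face_embedding: np.ndarray,
                    enrolled_embeddings: Dict[str, np.ndarray],
                    threshold: float = 0.6) -> Optional[str]:
    """Return the viewer_id whose enrolled embedding best matches the captured
    face, or None if no enrolled profile is similar enough."""
    best_id, best_score = None, -1.0
    for viewer_id, enrolled in enrolled_embeddings.items():
        score = float(np.dot(face_embedding, enrolled) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_id, best_score = viewer_id, score
    return best_id if best_score >= threshold else None
```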

In embodiments where the viewer profiling module 528 can positively identify a viewer, the viewer profiling module 528 can determine each viewer's visits to the LF display system 500. The viewer profiling module 528 may then store the time and date of each visit in the viewer profile of each viewer. Similarly, the viewer profiling module 528 may store input received from the viewer at each of their visits from any combination of the sensory feedback system 570, the tracking system 580, and/or the LF display component 510. The viewer profiling module 528 may additionally receive other information about the viewer from other modules or components of the controller 520, which may then be stored with the viewer profile. Other components of controller 520 may then also access the stored viewer profile to determine subsequent content to provide to the viewer.

LF processing engine 530 generates holographic content that includes light field data as well as data for all of the sensory domains supported by LF display system 500. For example, LF processing engine 530 may generate 4D coordinates in a rasterized format ("rasterized data"), which, when executed by LF display component 510, cause LF display component 510 to render holographic content. LF processing engine 530 may access rasterized data from data store 522. Additionally, the LF processing engine 530 may construct rasterized data from a vectorized data set. Vectorized data is described below. LF processing engine 530 may also generate the sensory instructions needed to provide the sensory content that enhances the holographic objects. As described above, when executed by LF display system 500, the sensory instructions may generate tactile surfaces, sound fields, and other forms of sensory energy supported by LF display system 500. The LF processing engine 530 may access sensory instructions from the data store 522 or construct sensory instructions from a vectorized data set. The 4D coordinates and sensory data collectively represent the holographic content as display instructions executable by the LF display system to generate holographic and sensory content. More generally, the holographic content may take the form of: CG content having ideal light field coordinates, live action content, rasterized data, vectorized data, electromagnetic energy transmitted by a set of relays, instructions sent to a set of energy devices, energy locations on one or more energy surfaces, a set of energy propagation paths projected from a display surface, holographic objects visible to a viewer or audience, and many other similar forms.

The amount of rasterized data describing the flow of energy through the various energy sources in the LF display system 500 is very large. Although rasterized data may be displayed on the LF display system 500 when accessed from the data store 522, rasterized data may not be efficiently transmitted, received (e.g., via the network interface 524), and subsequently displayed on the LF display system 500. For example, consider rasterized data representing a short performance of holographic projections by LF display system 500. In this example, the LF display system 500 includes a display that contains billions of pixels, and the rasterized data contains information for each pixel location of the display. The corresponding size of the rasterized data is huge (e.g., several gigabytes per second of display time) and is not manageable for efficient transmission over a commercial network via the network interface 524. For real-time streaming applications involving holographic content, the problem of efficient delivery is magnified. When an interactive experience is required using input from the sensory feedback system 570 or the tracking module 526, storing only rasterized data on the data storage device 522 poses an additional problem. To enable an interactive experience, the light field content generated by LF processing engine 530 may need to be modified in real time in response to sensory or tracking inputs. In other words, in some cases, the LF content cannot simply be read from the data storage 522.

Thus, in some configurations, data representing holographic content displayed by LF display system 500 may be passed to LF processing engine 530 in a vectorized data format ("vectorized data"). Vectorized data may be orders of magnitude smaller than rasterized data. Furthermore, vectorized data provides high image quality while having a data set size that enables efficient sharing of data. For example, the vectorized data may be a sparse data set derived from a denser data set. Thus, based on how sparse vectorized data is sampled from dense rasterized data, the vectorized data may have an adjustable balance between image quality and data transfer size. The adjustable sampling for generating vectorized data enables optimization of image quality at a given network speed. Thus, efficient transmission of holographic content via the network interface 524 is achieved for the vectorized data. The vectorized data also enables real-time streaming of the holographic content over commercial networks.
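
As a rough illustration of the adjustable balance between image quality and transfer size, the sketch below picks a per-frame sample budget for vectorized data from an assumed network speed; the bytes-per-sample figure and the cap are placeholders, not characteristics of any actual codec or display.

```python
def choose_sampling_rate(network_mbps: float, frame_rate_hz: float,
                         bytes_per_sample: int = 64,
                         max_samples: int = 2_000_000) -> int:
    """Pick how many vectorized samples per frame can be streamed at a given
    network speed; the numbers here are placeholders, not measured figures."""
    budget_bytes_per_frame = (network_mbps * 1e6 / 8) / frame_rate_hz
    samples = int(budget_bytes_per_frame // bytes_per_sample)
    return min(samples, max_samples)

# Example: a 100 Mbps link at 60 frames per second.
print(choose_sampling_rate(100.0, 60.0))   # ~3255 samples per frame under these assumptions
```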

In summary, the LF processing engine 530 may generate holographic content derived from rasterized data accessed from the data storage 522, vectorized data accessed from the data storage 522, or vectorized data received via the network interface 524. In various configurations, the vectorized data may be encoded by an encoder prior to data transmission and decoded by a decoder within LF controller 520 after reception. The encoder and decoder pair may be part of the same proprietary system codec. In some examples, the vectorized data is encoded for additional data security and for performance improvements related to data compression. For example, the vectorized data received over the network interface may be encoded vectorized data received from a holographic streaming application. In some instances, accessing the information content encoded in the vectorized data may require a decoder, the LF processing engine 530, or both. The encoder and/or decoder system may be made available to consumers or licensed to third-party vendors. Other example encoding and/or decoding schemes may be employed to transmit and/or render holographic content.

The vectorized data contains all of the information for each sensory domain that the LF display system 500 supports in a way that can support an interactive experience. For example, vectorized data for an interactive holographic experience may include any vectorized feature that can provide an accurate physical effect for each sensory domain supported by LF display system 500. Vectorized features may include any feature that can be synthetically programmed, captured, computationally evaluated, and the like. The LF processing engine 530 may be configured to convert vectorized features in the vectorized data into rasterized data. LF processing engine 530 may then project the holographic content converted from the vectorized data using LF display component 510. In various configurations, vectorized features may include: one or more red/green/blue/alpha (RGBA) channels plus depth images; multiple view images with or without depth information at different resolutions, which may contain one high-resolution center image and other views at lower resolution; material characteristics such as albedo and reflectance; surface normals; other optical effects; surface identifications; geometric object coordinates; virtual camera coordinates; display plane positions; illumination coordinates; tactile stiffness of surfaces; tactile malleability; tactile strength; amplitudes and coordinates of sound fields; environmental conditions; somatosensory energy vectors associated with mechanoreceptors for texture or temperature; audio; as well as any other sensory domain characteristics. Many other vectorized features are possible.

The LF display system 500 may also generate an interactive viewing experience. That is, the holographic content may be responsive to input stimuli containing information about viewer location, gestures, interactions with the holographic content, or other information originating from viewer profiling module 528 and/or tracking module 526. For example, in an embodiment, LF display system 500 produces an interactive viewing experience using vectorized data representing a real-time performance received via network interface 524. In another example, if a holographic object needs to be moved in a particular direction immediately in response to a viewer interaction, LF processing engine 530 may update the rendering of the scene so that the holographic object moves in the desired direction. This may require the LF processing engine 530 to use the vectorized data set to render the light field in real time based on a 3D graphics scene with appropriate object placement and movement, collision detection, occlusion, color, shading, lighting, etc., in order to respond correctly to the viewer interaction. The LF processing engine 530 converts the vectorized data into rasterized data for rendering by the LF display component 510. LF display system 500 may employ various other encoding/decoding techniques that allow the LF display system to render holographic content in near real time.

The rasterized data contains holographic content instructions and sensory instructions (display instructions) that represent real-time performance. LF display component 510 simultaneously projects real-time performance holographic and sensory content by executing display instructions. The LF display system 500 monitors viewer interactions (e.g., voice responses, touches, etc.) with the presented real-time performance through the tracking module 526 and the viewer profiling module 528. In response to the viewer interaction, the LF processing engine may create an interactive experience by generating additional holographic and/or sensory content for display to the viewer.

To illustrate, consider an example embodiment of an LF display system 500 that includes an LF processing engine 530 that generates a plurality of holographic objects representing balloons that fall from the ceiling of a venue during a performance. The viewer may move to touch the holographic object representing the balloon. Accordingly, tracking system 580 tracks the movement of the viewer's hand relative to the holographic object. The movement of the viewer is recorded by the tracking system 580 and sent to the controller 520. The tracking module 526 continuously determines the movement of the viewer's hand and sends the determined movement to the LF processing engine 530. The LF processing engine 530 determines the placement of the viewer's hand in the scene, adjusting the real-time rendering of the graphics to include any desired changes (such as positioning, color, or occlusion) in the holographic object. LF processing engine 530 instructs LF display component 510 (and/or sensory feedback system 570) to generate a tactile surface using a volumetric tactile projection system (e.g., using an ultrasonic speaker). The generated tactile surface corresponds to at least a portion of the holographic object and occupies substantially the same space as some or all of the external surfaces of the holographic object. The LF processing engine 530 uses the tracking information to dynamically instruct the LF display component 510 to move the location of the haptic surface along with the location of the rendered holographic object, so that the viewer is given both visual and tactile perceptions of touching the balloon. More simply, when a viewer sees his hand touching the holographic balloon, the viewer simultaneously feels tactile feedback indicating that his hand touched the holographic balloon, and the balloon changes position or motion in response to the touch. In some examples, rather than presenting an interactive balloon in a performance accessed from data storage 522, the interactive balloon may be received as part of holographic content received from a live streaming application via network interface 524. In other words, the holographic content displayed by LF display system 500 may be a live stream of holographic content.
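
Purely as an illustration of the tracking-to-haptics loop in this balloon example, the sketch below checks whether a tracked hand position intersects a spherical holographic object, nudges the object away from the hand, and reports where a tactile surface should be projected; the geometry, gain, and function names are assumptions for this sketch.

```python
import numpy as np

def update_balloon(hand_pos: np.ndarray, balloon_center: np.ndarray,
                   balloon_radius: float, push_gain: float = 0.2):
    """One frame of the interactive loop: detect contact between the tracked
    hand and a spherical holographic balloon, move the balloon away from the
    hand, and return the surface point where a tactile surface is projected."""
    offset = balloon_center - hand_pos            # vector from hand to balloon center
    distance = float(np.linalg.norm(offset))
    if distance >= balloon_radius:
        return balloon_center, None               # no contact this frame
    direction = offset / max(distance, 1e-6)
    contact_point = balloon_center - direction * balloon_radius   # surface point nearest the hand
    new_center = balloon_center + push_gain * direction           # push the balloon away from the hand
    return new_center, contact_point

# Example: hand just inside a 0.3 m balloon centered 2 m in front of the display.
center, contact = update_balloon(np.array([0.0, 1.2, 1.8]),
                                 np.array([0.0, 1.2, 2.0]), 0.3)
```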

LF processing engine 530 may provide holographic content for display to viewers in the venue before, during, and/or after the performance to enhance the venue experience. The holographic content may be provided by the publisher of the performance, by the venue, or by an advertiser, or may be generated by the LF processing engine 530, etc. The holographic content may be content associated with the show, the genre of the show, the location of the venue, an advertisement, and the like. In any case, the holographic content may be stored in data storage 522 or streamed to LF display system 500 in a vectorized format over network interface 524. For example, a show may be displayed on a wall in a venue augmented with an LF display module. The publisher of the performance may provide holographic content presented on the wall display before the performance begins. The LF processing engine 530 accesses the holographic content and renders the accessed content onto the wall of the venue before the performance begins. In another example, a venue having LF display system 500 is located in San Francisco. If show-specific content is not provided, the venue stores a holographic representation of the Golden Gate Bridge to be presented in the venue prior to the show. Here, since show-specific holographic content is not provided, LF processing engine 530 accesses and renders the Golden Gate Bridge in the venue. In another example, an advertiser has provided holographic content of its products as advertisements to the venue for display after performances. After the show ends, the LF processing engine 530 presents the viewers with the advertisement as they leave the venue. In other examples, the LF processing engine may dynamically generate holographic content for display on a wall of a theater, as described below.

The LF processing engine 530 may also modify the holographic content to suit the venue where the holographic content is being rendered. For example, not every venue may have the same size, the same number of seats, or the same technical configuration. Thus, LF processing engine 530 may modify the holographic content so that it will be properly displayed in the venue. In one embodiment, the LF processing engine 530 may access a configuration file for the venue, including the layout, resolution, field of view, other technical specifications, etc. of the venue. LF processing engine 530 may render and present holographic content based on the information contained in the configuration file.
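
One possible, simplified way to consume such a venue configuration file is sketched below; the JSON keys, the default values, and the scaling rule are illustrative assumptions rather than a defined file format.

```python
import json

def load_venue_config(path: str) -> dict:
    """Read a venue configuration file; the keys shown here are examples of
    the kind of information the engine might expect, not a defined schema."""
    with open(path) as f:
        config = json.load(f)
    # Fall back to conservative defaults when a venue omits a field.
    config.setdefault("resolution", [3840, 2160])
    config.setdefault("field_of_view_deg", 120)
    config.setdefault("seat_count", 500)
    return config

def fit_content_to_venue(content_extent_m: float, config: dict) -> float:
    """Return a scale factor so holographic content fits the venue's stage width."""
    stage_width_m = config.get("stage_width_m", 10.0)
    return min(1.0, stage_width_m / content_extent_m)
```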

LF processing engine 530 may also create holographic content for display by LF display system 500. Importantly, here, creating holographic content for display is different from accessing or receiving holographic content for display. In other words, when creating content, the LF processing engine 530 generates entirely new content for display, rather than accessing previously generated and/or received content. LF processing engine 530 may use information from tracking system 580, sensory feedback system 570, viewer profiling module 528, tracking module 526, or some combination thereof, to create holographic content for display. In some instances, LF processing engine 530 may access information from elements of LF display system 500 (e.g., tracking information and/or viewer profiles), create holographic content based on the information, and in response, display the created holographic content using LF display system 500. The created holographic content may be enhanced with other sensory content (e.g., touch, audio, or scent) when displayed by LF display system 500. Further, the LF display system 500 may store the created holographic content so that it may be displayed in the future.

Dynamic content generation for LF display systems

In some embodiments, LF processing engine 530 incorporates Artificial Intelligence (AI) models to create holographic content for display by LF display system 500. The AI model may include supervised or unsupervised learning algorithms, including but not limited to regression models, neural networks, classifiers, or any other AI algorithm. The AI model may be used to determine viewer preferences based on viewer information recorded by LF display system 500 (e.g., by tracking system 580), which may contain information about the behavior of the viewer.

The AI model may access information from data storage 522 to create holographic content. For example, the AI model may access viewer information from one or more viewer profiles in data storage 522, or may receive viewer information from various components of LF display system 500. To illustrate, the AI model may determine that a group of viewers likes to view holographic content in which performers wear bow ties. The AI model may determine this preference based on the positive reactions or responses of the group of viewers to previously viewed holographic content that included a performer wearing a bow tie. In other words, the AI model may create holographic content personalized for a group of viewers according to the learned preferences of those viewers. Thus, for example, the AI model may incorporate a bow tie into a performer displayed in holographic content viewed by a group of viewers using LF display system 500. The AI model may also store the learned preferences of each viewer in the viewer profile store of the data store 522. In some instances, the AI model may create holographic content for a single viewer rather than a group of viewers.

One example of an AI model that may be used to identify characteristics of a viewer, identify responses, and/or generate holographic content based on the identified information is a convolutional neural network model having layers of nodes, where the values at the nodes of a current layer are transforms of the values at the nodes of a previous layer. A transformation in the model is determined by a set of weights and parameters that connect the current layer and the previous layer. For example, the AI model may contain five layers of nodes: layers A, B, C, D, and E. The transformation from layer A to layer B is given by W1, the transformation from layer B to layer C is given by W2, the transformation from layer C to layer D is given by W3, and the transformation from layer D to layer E is given by W4. In some instances, a transformation may also be determined by the set of weights and parameters used to transform between previous layers in the model. For example, the transformation W4 from layer D to layer E may be based on the parameters used to complete the transformation W1 from layer A to layer B.

The input to the model may be an image acquired by tracking system 580 and encoded onto convolutional layer A, and the output of the model is holographic content decoded from output layer E. Alternatively or additionally, the output may be a determined characteristic of the viewer in the image. In this example, the AI model identifies latent information in the image that represents the viewer characteristics in identification layer C. The AI model reduces the dimensionality of convolutional layer A to the dimensionality of identification layer C to identify any features, actions, responses, etc. in the image. In some instances, the AI model then expands the dimensionality of identification layer C to generate the holographic content.

The image from the tracking system 580 is encoded into convolutional layer A. The image input in convolutional layer A may be related to various characteristics, reaction information, and the like in identification layer C. The relevant information between these elements can be retrieved by applying a set of transformations between the corresponding layers. That is, the convolutional layer A of the AI model represents the encoded image, and the identification layer C of the model represents a smiling viewer. A smiling viewer in a given image can be identified by applying the transformations W1 and W2 to the pixel values of the image in the space of convolutional layer A. The weights and parameters used for the transformations may indicate relationships between the information contained in the image and the identification of the smiling viewer. For example, the weights and parameters may be quantifications of the shapes, colors, sizes, etc. contained in information representing a smiling viewer in an image. The weights and parameters may be based on historical data (e.g., previously tracked viewers).

The smiling viewer in the image is identified in identification layer C. The identification layer C represents the smiling viewer identified based on the latent information about the smiling viewer in the image.

The identified smiling viewer in the image may be used to generate holographic content. To generate holographic content, the AI model starts at identification layer C and applies the transformations W3 and W4 to the values representing the identified smiling viewer in layer C. The transformations produce a set of nodes in the output layer E. The weights and parameters of the transformations may indicate a relationship between the identified smiling viewer and particular holographic content and/or preferences. In some cases, the holographic content is output directly from the nodes of output layer E, while in other cases, the content generation system decodes the nodes of output layer E into the holographic content. For example, if the output is a set of identified characteristics, the LF processing engine may use the characteristics to generate holographic content.

In addition, the AI model may contain layers referred to as intermediate layers. An intermediate layer is a layer that does not correspond to the input image, does not identify a characteristic or reaction, and does not generate holographic content. For example, in the given example, layer B is an intermediate layer between convolutional layer A and identification layer C, and layer D is an intermediate layer between identification layer C and output layer E. These hidden layers are latent representations of different aspects of the identification that are not observable in the data, but that can govern the relationships between image elements when identifying characteristics and generating holographic content. For example, a node in a hidden layer may have strong connections (e.g., large weight values) to input values and identification values that share the commonality of "happy person smiling". As another example, another node in a hidden layer may have strong connections to input values and identification values that share the commonality of "frightened person screaming". Of course, there can be any number of connections in the neural network. In addition, each intermediate layer may be a combination of functions, such as residual blocks, convolutional layers, pooling operations, skip connections, concatenations, and the like. Any number of intermediate layers B may be used to reduce the convolutional layer to the identification layer, and any number of intermediate layers D may be used to expand the identification layer to the output layer.
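
The numpy sketch below mirrors the A-through-E structure described above using fully connected stand-ins for the transformations W1 through W4; the layer sizes, random weights, and ReLU activations are arbitrary assumptions used only to show the reduce-then-expand data flow, not a trained or convolutional model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrices standing in for the transformations W1..W4 described above;
# the sizes are arbitrary placeholders.
W1 = rng.standard_normal((256, 1024))   # layer A (flattened image) -> intermediate layer B
W2 = rng.standard_normal((64, 256))     # layer B -> identification layer C (reduced dimensionality)
W3 = rng.standard_normal((256, 64))     # layer C -> intermediate layer D
W4 = rng.standard_normal((1024, 256))   # layer D -> output layer E

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def forward(image_patch: np.ndarray) -> np.ndarray:
    """Map an encoded image (layer A) through the intermediate layers to the
    output layer E, from which content or characteristics would be decoded."""
    a = image_patch.reshape(-1)          # layer A: flattened pixel values
    b = relu(W1 @ a)                     # intermediate layer B
    c = relu(W2 @ b)                     # identification layer C
    d = relu(W3 @ c)                     # intermediate layer D
    e = W4 @ d                           # output layer E
    return e

output = forward(rng.standard_normal((32, 32)))   # 32 x 32 = 1024 input values
```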

In one embodiment, the AI model contains a deterministic method that has been trained with reinforcement learning (thereby creating a reinforcement learning model). The model is trained to use measurements from tracking system 580 as input and changes in the created holographic content as output to improve the quality of performance.

Reinforcement learning is a machine learning approach in which the machine learns "what to do", that is, how to map situations to actions, in order to maximize a numerical reward signal. Rather than being told which actions to take (e.g., generating specified holographic content), the learner (e.g., LF processing engine 530) must discover which actions yield the highest reward (e.g., improving the quality of the holographic content so that more people cheer) by trying them. In some cases, an action may affect not only the immediate reward but also the next situation, and therefore all subsequent rewards. These two features, trial-and-error search and delayed reward, are two significant features of reinforcement learning.

Reinforcement learning is defined not by characterizing a learning method, but by characterizing a learning problem. Basically, a reinforcement learning system captures the important aspects of the problem facing a learning agent interacting with its environment to achieve a goal. That is, in the example of generating a song for a performer, the reinforcement learning system captures information about the viewers in the venue (e.g., age, personality, etc.). The agent senses the state of the environment and takes actions that affect the state in order to achieve one or more goals (e.g., creating a popular song that viewers cheer for). In its most basic form, the formulation of reinforcement learning encompasses three aspects of the learner: sensation, action, and goal. Continuing with the song example, LF processing engine 530 senses the state of the environment through the sensors of tracking system 580, displays holographic content to viewers in the environment, and pursues a goal that is a measure of how well the viewers receive the song.

One of the challenges that arises in reinforcement learning is the tradeoff between exploration and exploitation. To increase rewards in the system, a reinforcement learning agent prefers actions that have been tried in the past and found to be effective in generating rewards. However, to discover such actions, the learning agent must sometimes select actions that it has not selected before. The agent "exploits" the information it already knows in order to obtain rewards, but it also "explores" in order to make better action selections in the future. The learning agent tries a variety of actions and progressively favors those that appear best while continuing to try new actions. On a stochastic task, each action is typically tried multiple times to obtain a reliable estimate of its expected reward. For example, if the LF processing engine creates holographic content that the LF processing engine knows will cause the viewer to laugh after a long period of time, the LF processing engine may change the holographic content such that the time until the viewer laughs is reduced.

In addition, reinforcement learning considers the whole problem of a goal-directed agent interacting with an uncertain environment. A reinforcement learning agent has explicit goals, can sense aspects of its environment, and can choose actions that receive high rewards. Furthermore, the agent typically operates despite significant uncertainty about the environment it faces. When reinforcement learning involves planning, the system addresses the interplay between planning and real-time action selection, as well as how models of the environment are acquired and improved. For reinforcement learning to make progress, important sub-problems must be isolated and studied, sub-problems that play clear roles in the complete interactive goal-seeking agent.

The reinforcement learning problem is a framing of the problem of learning from interaction to achieve a goal. The learner and decision maker is referred to as the agent (e.g., LF processing engine 530). The things the agent interacts with (comprising everything outside the agent) are referred to as the environment (e.g., viewers in the venue, etc.). The two interact continually: the agent selects actions (e.g., creating holographic content), and the environment responds to those actions and presents new situations to the agent. The environment also gives rise to rewards, special values that the agent tries to maximize over time. In this context, the reward serves to maximize the viewers' positive responses to the holographic content. A complete specification of an environment defines a task, which is one instance of the reinforcement learning problem.

To provide more context, the agent (e.g., LF processing engine 530) interacts with the environment at each step in a series of discrete time steps (i.e., t = 0, 1, 2, 3, etc.). At each time step t, the agent receives some representation of the environment's state s_t (for example, measurements from the tracking system 580), where s_t is within S and S is the set of possible states. Based on the state s_t at time step t, the agent selects an action a_t (e.g., having the actor do the splits), where a_t is within A(s_t) and A(s_t) is the set of actions available in state s_t. One time step later (in part as a result of its action), the agent receives a numerical reward r_(t+1), where r_(t+1) is within R and R is the set of possible rewards, and the agent finds itself in a new state s_(t+1).

At each time step, the agent implements a mapping from states to the probabilities of selecting each possible action. This mapping is called the agent's policy and is denoted π_t, where π_t(s, a) is the probability that a_t = a if s_t = s. Reinforcement learning methods specify how the agent changes its policy as a result of the states and rewards produced by its actions. The agent's objective is to maximize the total amount of reward it receives over time.

This reinforcement learning framework is very flexible and can be applied to many different problems in many different ways (e.g., generating holographic content). The framework suggests that, whatever the details of the sensory, memory, and control apparatus, any problem (or purpose) of learning goal-directed behavior can be reduced to three signals passed back and forth between the agent and its environment: one signal represents the choices made by the agent (actions), one signal represents the basis on which the choices are made (states), and one signal defines the agent's goal (rewards).
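
A minimal, generic sketch of this three-signal loop is shown below; `env` and `policy` are placeholders for whatever a deployment provides (for example, tracked audience measurements as states and content changes as actions), and the interfaces assumed here (`reset`, `step`, a policy returning actions with probabilities) are illustrative, not part of the described system.

```python
import random

def run_episode(env, policy, steps: int = 100) -> float:
    """Generic agent-environment loop: observe a state, select an action from
    the policy's probabilities, receive a reward, and accumulate total reward."""
    total_reward = 0.0
    state = env.reset()                       # initial state s_0
    for t in range(steps):
        actions, probs = policy(state)        # pi_t(s, a) over the available actions
        action = random.choices(actions, weights=probs, k=1)[0]
        state, reward = env.step(action)      # next state s_(t+1) and reward r_(t+1)
        total_reward += reward
    return total_reward
```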

Of course, the AI model may include any number of machine learning algorithms. Some other AI models that may be employed are linear and/or logistic regression, classification and regression trees, k-means clustering, vector quantization, and the like. Regardless, the LF processing engine 530 typically takes input from the tracking module 526 and/or the viewer profiling module 528 and in response the machine learning model creates holographic content. Similarly, the AI model may direct the rendering of holographic content.

In an example, LF processing engine 530 creates a virtual pop singer. LF processing engine 530 may use information contained in the viewer profiles stored in data storage 522 to create the virtual pop singer. For example, the information contained in the stored viewer profiles indicates that a large number of viewers like pop music sung by female singers in their mid-20s. Thus, the LF processing engine 530 creates a virtual pop singer that is displayed by the LF display system 500 as a female singer in her mid-20s. More specifically, the LF processing engine 530 accesses the viewer profiles of viewers in the venue. The LF processing engine 530 parameterizes (e.g., quantifies) the information in each viewer profile. For example, the LF processing engine 530 may quantify characteristics such as the viewer's age, location, gender, and so on. In addition, the LF processing engine 530 may parameterize other information contained in the viewer profile. For example, if the viewer profile indicates that the viewer has watched four performances by a female singer, the LF processing engine 530 may quantify this trend (e.g., generate a score indicating the viewer's interest in the female singer). The LF processing engine 530 inputs the parameterized user profiles into an AI model (e.g., a neural network) configured to generate characteristics of a virtual performer based on the input parameters and, in response, receives the characteristics of the performer. The LF processing engine 530 then inputs the characteristics of the virtual performer into an AI model (e.g., a procedural generation algorithm) configured to generate a performer from a given set of characteristics and, in response, receives a virtual performer that is a female singer in her mid-20s. In addition, LF processing engine 530 may create holographic content (e.g., songs, performances, etc.) that follows the persona of the virtual pop singer. For example, the LF processing engine 530 may author a song about a jealous ex-boyfriend for the virtual female singer in her mid-20s. More specifically, the LF processing engine 530 may access the characteristics of the virtual performer and information about the viewers and input that information into an AI model (e.g., a recurrent neural network, "RNN"). Likewise, the characteristics and information can be parameterized (e.g., using classification and regression trees) and input into the RNN. Here, the RNN may be trained using real-world songs with similar input parameters. Thus, the RNN generates, for the viewers in the venue, a virtual song for the virtual singer with characteristics similar to one or more songs created by other female singers in their mid-20s.
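
A toy sketch of the parameterization and aggregation step is shown below; the profile fields, the scoring, and the idea of averaging features across the audience before querying a generative model are illustrative assumptions rather than the system's actual method.

```python
from statistics import mean
from typing import Dict, List

def parameterize_profile(profile: Dict) -> Dict[str, float]:
    """Quantify one viewer profile into numeric features; the fields and the
    scoring here are illustrative stand-ins for whatever the profile store holds."""
    return {
        "age": float(profile.get("age", 30)),
        "pop_interest": float(profile.get("pop_shows_attended", 0)) / 10.0,
        "female_singer_interest": float(profile.get("female_singer_shows", 0)) / 10.0,
    }

def aggregate_features(profiles: List[Dict]) -> Dict[str, float]:
    """Average per-viewer features across the audience before handing them to
    the model that proposes virtual-performer characteristics."""
    feature_sets = [parameterize_profile(p) for p in profiles]
    return {key: mean(fs[key] for fs in feature_sets) for key in feature_sets[0]}

audience = [{"age": 24, "pop_shows_attended": 6, "female_singer_shows": 4},
            {"age": 27, "pop_shows_attended": 8, "female_singer_shows": 7}]
print(aggregate_features(audience))
```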

LF processing engine 530 may create holographic content based on the show being presented in the venue. For example, a show presented in a venue may be associated with a set of metadata describing the characteristics of the show. The metadata may include, for example, setting, genre, performers, show type, subject matter, title, run time, and the like. LF processing engine 530 may access any of the metadata describing the performance and, in response, generate holographic content that is presented in the venue. For example, a movie titled "The Last Merman" will be played in a venue enhanced with the LF display system 500. The LF processing engine 530 accesses the metadata of the show to create holographic content for the walls of the venue before the show begins. Here, the metadata indicates that the setting is underwater and the genre is romance. LF processing engine 530 inputs the metadata into the AI model and, in response, receives holographic content to be displayed on the walls of the venue. In this example, LF processing engine 530 creates a seaside sunset that is displayed on the walls of the venue before the show begins playing.

In an example, LF processing engine 530 creates holographic content based on the viewers present at a venue containing LF display system 500. For example, a group of viewers enters the venue to view a performance enhanced by the holographic content displayed by LF display system 500. The viewer profiling module 528 generates viewer profiles for the viewers in the venue, as well as an aggregate viewer profile representing all viewers in the venue. The LF processing engine 530 accesses the aggregate viewer profile and creates holographic content for display to the viewers in the venue. For example, the viewers in the venue are a group of couples watching a performer known for romantic ballads, and thus the aggregate viewer profile contains information (e.g., obtained by parameterization and input to the AI model) indicating that they may like holographic content suited to couples on a date. Thus, the LF processing engine 530 generates holographic content that gives the venue a more romantic atmosphere (e.g., candles, dim lighting, Marvin Gaye's music, etc.).

In some instances, LF processing engine 530 may create holographic content based on pre-existing content. Here, pre-existing content may be previously recorded songs, existing real-world performers, and the like. For example, a viewer's favorite band is "The Town peoples" and they wish to view a concert of their favorite Town peoples songs. Thus, the LF processing engine 530 creates a performance of the Town peoples singing "d.m.c.a." and "In the grafy". In this example, the LF processing engine 530 may access existing songs and models of the Town peoples (e.g., stored in the data store 522) and use that content to create a new performance for display by the LF display system 500. More specifically, the content creation system may access the songs and models from data storage 522. The LF processing engine 530 inputs the songs and models into an AI model (e.g., a procedural generation algorithm) configured to create a performance of, for example, a rock band, and in response the AI model outputs holographic content of the performance. In some cases, the viewer and/or LF processing engine 530 may pay a fee to the copyright holder because LF processing engine 530 creates holographic content based on copyrighted content.

In an example, LF processing engine 530 creates holographic content based on the responses of viewers watching the performance. For example, a viewer in a venue is watching a show in a venue enhanced by LF display system 500. The tracking module 526 and the viewer profiling module 528 monitor the viewer's responses to the performance. For example, the tracking module 526 may obtain images of the viewer while the viewer is watching the performance. The tracking module 526 identifies the viewer, and the viewer profiling module 528 may use machine vision algorithms to determine the viewer's response based on the information contained in the images. For example, an AI model may be used to identify whether a viewer watching the performance is smiling, and the viewer profiling module 528 may accordingly indicate in the viewer profile whether the viewer is responding positively or negatively to the performance. Other reactions may also be determined. The tracking module may determine information about the viewer including the location of the viewer, the movement of the viewer, the gestures of the viewer, the expression of the viewer, the age of the viewer, the gender of the viewer, the race of the viewer, or the clothing worn by the viewer. This information may be shared with the viewer profiling module 528 to generate a viewer profile. By way of illustration, LF processing engine 530 generates a comedy routine performed by a virtual comedian. The tracking system 580 and viewer profiling module 528 monitor the viewers' responses as the LF display system 500 displays the performance. In this case, the viewers do not laugh at a corny joke told by the virtual comedian. In response, LF processing engine 530 modifies the comedy show such that the virtual comedian begins to banter with a particular viewer. Here, LF processing engine 530 may input the reactions into an AI model (e.g., a reinforcement learning model) configured to elicit laughter from viewers viewing the holographic content. Based on the viewers' responses and characteristics, LF processing engine 530 changes the way the jokes are told. For example, the virtual comedian may begin to joke about the leopard-print pants worn by a particular viewer. Here, the model may be trained using previously performed comedy performances and recorded viewer responses to those performances.

In a similar example, LF processing engine 530 may create holographic content based on pre-existing or provided advertising content. In other words, for example, LF processing engine 530 may request an advertisement from a network system through network interface 524; in response, the network system provides holographic content, and LF processing engine 530 creates holographic content for display containing the advertisement. Some examples of advertisements may include products, text, video, and the like. Advertisements may be presented to particular viewers based on the characteristics of the viewers in the audience. Similarly, the holographic content may enhance a performance with advertisements (e.g., in-line advertisements). Most generally, as previously described, LF processing engine 530 may create advertising content based on any of the characteristics and/or responses of the viewers in the venue.

The foregoing examples of creating content are not limiting. Most broadly, LF processing engine 530 creates holographic content for display to a viewer of LF display system 500. Holographic content may be created based on any of the information contained in LF display system 500.

Holographic content distribution network

Fig. 5B illustrates an example LF performance network 550 in accordance with one or more embodiments. One or more LF display systems may be included in LF performance network 550. LF show network 550 contains any number of LF display systems (e.g., 500A, 500B, and 500C), LF generation system 554, and networking system 556 coupled to one another via network 552. In other embodiments, LF performance network 550 includes additional or fewer entities than those described herein. Similarly, functionality may be distributed among different entities in a different manner than described herein.

In the illustrated embodiment, the LF performance network 550 includes LF display systems 500A, 500B, and 500C that can receive holographic content via network 552 and display the holographic content to viewers. LF display systems 500A, 500B, and 500C are collectively referred to as LF display system 500.

LF generation system 554 is a system that generates holographic content for display in a performance venue containing an LF display system. In other words, the LF generation system is configured to generate any of the LF content described herein. The holographic content may be a performance or may be holographic content that enhances a traditional performance. LF generation system 554 may include a light field recording component comprised of any number of sensors and/or processors to record energy data of an event, and a processing engine configured to convert this recorded energy data into holographic content. For example, the sensors of the light field recording component may include a camera for recording images, a microphone for recording audio, a pressure sensor for recording interactions with objects, and the like. In some examples, the light field recording component of the LF generation system 554 includes one or more recording modules (e.g., an LF display module configured to record energy data from an event, or a simple 2D camera for capturing images of an event) positioned around an area (e.g., a performance venue) to record events from multiple viewpoints. In this case, the processing engine of LF generation system 554 is configured to convert energy from multiple viewpoints into holographic content. In some examples, the light field recording component includes two or more two-dimensional recording systems used by the processing engine to convert multiple viewpoints of an event into three-dimensional holographic content. The light field recording component may also include other sensors, such as a depth sensor and/or a plenoptic camera.

The processing engine of LF generation system 554 may also generate synthetic light field data from Computer Generated Imagery (CGI) animation. The synthetic light field data may come from, for example, a virtual world or an animated movie, and may be used to augment the sensed data from the light field recording component. The processing engine of LF generation system 554 combines the recorded sensory information with the synthetic data and encodes the information into holographic content and sensory content. LF generation system 554 may transmit the encoded holographic content to one or more of LF display systems 500 for display to viewers. As previously discussed, to achieve an effective transfer speed, the data for the LF display systems 500A, 500B, 500C, etc. may be transferred as vectorized data over the network 552.

More broadly, the LF generation system 554 generates holographic content for display in the venue by using any recorded sensed data or synthetic data of an event that may be projected by the LF display system while exhibiting the performance. For example, the sensed data may include recorded audio, recorded images, recorded interactions with objects, and so forth. Many other types of sensed data may be used. To illustrate, the recorded visual content may include: 3D graphical scenes, 3D models, object placement, texture, color, shading, and lighting; 2D performance data converted into holographic form using AI models and larger data sets of similar performance transformations; multi-view camera data from a camera rig of many cameras with or without depth channels; plenoptic camera data; CG content; or other types of recorded sensed data of events as described herein.

In various examples, the event recorded with one or more sensors on one or more energy domains may be a concert, a program, a scene, a single performance, a band performance, an actor performance, a dancer performance, a magic performance, or another type of event that may be exhibited in a performance venue. The performance venue may comprise one or more of a performance hall, event space, theater, concert hall, studio, or stage.

In some configurations, LF generation system 554 may use a dedicated encoder to perform encoding operations that reduce the sensed data recorded for a performance to a vectorized data format as described above. That is, encoding the data as vectorized data may include image processing, audio processing, or any other computation that results in a reduced data set that is easier to transmit over the network 552. The encoder may support a format used by professionals in the performance production industry. In other configurations, the LF generation system may transmit the performance content to the network system 556 and/or the LF display systems without encoding the content.
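
One way to picture this kind of reduction is the toy "vectorization" below, which collapses a dense per-pixel depth raster into a sparse vertex grid plus triangle indices. The function, the grid step, and the use of NumPy are assumptions for illustration only and do not describe the dedicated encoder itself.

    import numpy as np

    def vectorize_depth_map(depth: np.ndarray, step: int = 8):
        # Reduce a dense depth raster to a sparse grid of 3D vertices plus triangle
        # indices; the result is far smaller to transmit and can be re-rasterized
        # at the receiving display.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h:step, 0:w:step]
        verts = np.stack([xs, ys, depth[ys, xs]], axis=-1).reshape(-1, 3)
        gh, gw = ys.shape
        faces = []
        for r in range(gh - 1):
            for c in range(gw - 1):
                i = r * gw + c
                faces.append((i, i + 1, i + gw))            # upper triangle of the cell
                faces.append((i + 1, i + gw + 1, i + gw))   # lower triangle of the cell
        return verts.astype(np.float32), np.asarray(faces, dtype=np.int32)

    # A 1080x1920 depth raster (about 2 million samples) reduces to roughly 32,000 vertices.
    depth = np.random.rand(1080, 1920).astype(np.float32)
    vertices, triangles = vectorize_depth_map(depth)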

Each LF display system (e.g., 500A, 500B, 500C) may receive encoded data from the network 552 via the network interface 524. In this example, each LF display system includes a decoder to decode the encoded LF display data. More specifically, LF processing engine 530 generates rasterized data for LF display component 510 by applying a decoding algorithm provided by the decoder to the received encoded data. In some examples, the LF processing engine may additionally generate the rasterized data for the LF display component using input from the tracking module 526, viewer profiling module 528, and sensory feedback system 570 as described herein. The holographic content recorded by LF generation system 554 is thus reproduced by LF display component 510 from the generated rasterized data. Importantly, each LF display system 500A, 500B, and 500C generates rasterized data that is appropriate for the particular configuration of its LF display components in terms of geometry, resolution, and the like. In some configurations, the encoding and decoding processes are part of a proprietary encoding/decoding system pair (or 'codec') that may be provided to a display customer or licensed by a third party. In some cases, the encoding/decoding system pair may be implemented as a proprietary API that provides a common programming interface to content creators.
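
A minimal sketch of this receive, decode, and rasterize path is shown below; the Decoder and LFProcessingEngine classes are hypothetical stand-ins, not the proprietary codec or the actual engine API.

    class Decoder:
        def decode(self, encoded: bytes, display_config: dict) -> dict:
            # A proprietary codec would reconstruct the vectorized scene here,
            # tailored to the receiving display's configuration.
            return {"scene": encoded, "config": display_config}

    class LFProcessingEngine:
        def __init__(self, decoder: Decoder, display_config: dict):
            self.decoder = decoder
            self.display_config = display_config

        def to_rasterized(self, encoded: bytes, viewer_profile: dict = None):
            scene = self.decoder.decode(encoded, self.display_config)
            # Tracking and profiling inputs may adjust what is rendered for whom.
            if viewer_profile is not None:
                scene["filters"] = viewer_profile.get("content_filters", [])
            return self.rasterize(scene)

        def rasterize(self, scene: dict):
            # Placeholder: emit per-panel rendering instructions for the LF display assembly.
            return [("panel", idx, scene) for idx in range(self.display_config.get("num_panels", 1))]

    engine = LFProcessingEngine(Decoder(), {"num_panels": 4})
    instructions = engine.to_rasterized(b"<encoded holographic content>")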

In some configurations, the various systems in LF performance network 550 (e.g., LF display systems 500, LF generation system 554, etc.) may have different hardware configurations. A hardware configuration may include an arrangement of physical systems, energy sources, energy sensors, haptic interfaces, sensory capabilities, resolution, LF display module configuration, or any other hardware description of a system in the LF performance network 550. Each hardware configuration may generate or use sensed data in a different data format. Thus, the decoder system may be configured to decode encoded data so that it can be presented on the receiving LF display system. For example, an LF display system having a first hardware configuration (e.g., LF display system 500A) receives encoded data from an LF generation system having a second hardware configuration (e.g., LF generation system 554). The decoding system accesses information describing the first hardware configuration of the LF display system 500A. The decoding system decodes the encoded data using the accessed hardware configuration so that the decoded data can be processed by the LF processing engine 530 of the receiving LF display system 500A. Although the content was recorded with the second hardware configuration, LF processing engine 530 generates and renders rasterized content for the first hardware configuration. In a similar manner, regardless of hardware configuration, holographic content recorded by LF generation system 554 may be rendered by any LF display system (e.g., LF display system 500B, LF display system 500C). Other aspects that may be included in a hardware configuration include: resolution, number of rays projected per degree, field of view, angle of deflection on the display surface, and dimensions of the display surface, among others. Additionally, the hardware configuration may also include: the number of display panels of the LF display assembly, the relative orientation of the display panels, the height of the display panels, the width of the display panels, and the layout of the display panels.
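
The hardware parameters listed above could be captured in a small record that the decoder consults, as in the hypothetical sketch below; the field names, example values, and resampling factors are assumptions, not a defined data format.

    from dataclasses import dataclass

    @dataclass
    class HardwareConfig:
        resolution: tuple          # (horizontal, vertical) addressable samples
        rays_per_degree: float
        field_of_view_deg: float
        deflection_angle_deg: float
        surface_size_m: tuple      # (width, height) of the display surface
        num_panels: int
        panel_layout: str          # e.g., "6x4"

    def decode_for_display(scene: dict, recorder: HardwareConfig, display: HardwareConfig) -> dict:
        # Annotate the decoded scene with the resampling needed to move from the
        # recording hardware configuration to the displaying one.
        return {
            **scene,
            "ray_resample": display.rays_per_degree / recorder.rays_per_degree,
            "fov_scale": display.field_of_view_deg / recorder.field_of_view_deg,
            "panel_split": (display.num_panels, display.panel_layout),
        }

    studio = HardwareConfig((16384, 8640), 8.0, 160.0, 70.0, (20.0, 10.0), 48, "12x4")
    theater = HardwareConfig((8192, 4320), 4.0, 120.0, 60.0, (12.0, 6.0), 24, "6x4")
    adapted = decode_for_display({"scene": b"..."}, recorder=studio, display=theater)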

Similarly, the various systems in LF performance network 550 may have different geometric orientations. The geometric orientation reflects the physical size, layout, and arrangement of the various modules and systems included in an LF display system. Thus, the decoder system may be configured to decode encoded data so that it can be presented on an LF display system having a given geometric configuration. For example, an LF display system having a first geometric configuration (e.g., LF display system 500A) receives encoded data from an LF generation system having a second geometric configuration (e.g., LF generation system 554). The decoding system accesses information describing the first geometric configuration of the LF display system 500A. The decoding system decodes the encoded data using the accessed geometric configuration so that the decoded data can be processed by the LF processing engine 530 of the receiving LF display system 500A. Although the content was recorded with the second geometric configuration, LF processing engine 530 generates and renders rasterized content for the first geometric configuration. In a similar manner, regardless of geometric configuration, holographic content recorded by LF generation system 554 may be rendered by any LF display system (e.g., LF display system 500B, LF display system 500C). Other aspects that may be included in the geometric configuration include the number of display panels (or surfaces) of the LF display assembly and the relative orientation of the display panels.

Similarly, the various venues in the LF performance network 550 may have different configurations. The venue configuration reflects any of a number and/or location of holographic object volumes, a number and/or location of viewing volumes, and a number and/or location of viewing positions relative to the LF display system. Thus, the decoder system may be configured to decode encoded data so that it can be presented by the LF display system installed in a given venue. For example, an LF display system (e.g., LF display system 500A) present in a first venue receives encoded data from an LF generation system (e.g., LF generation system 554) recorded in a different venue (or some other space). The decoding system accesses information describing the first venue. The decoding system decodes the encoded data using the accessed venue configuration so that the decoded data can be processed by the LF processing engine 530 installed in the venue. LF processing engine 530 generates and presents content for the venue even though the content was recorded in a different location.
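
The sketch below shows one hypothetical way a venue configuration could be represented and used to fit content recorded elsewhere into the local performance volume; the schema and the uniform scaling approach are illustrative assumptions only.

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]
    Box = Tuple[Vec3, Vec3]   # axis-aligned (min, max) corners, in metres

    @dataclass
    class VenueConfig:
        performance_volumes: List[Box]
        viewing_volumes: List[Box]
        viewing_positions: List[Vec3]

    def fit_to_performance_volume(recorded_extent: Box, target: Box) -> Vec3:
        # Per-axis scale factors that fit content recorded in one space into the
        # performance volume of the local venue.
        (rmin, rmax), (tmin, tmax) = recorded_extent, target
        return tuple((tmax[i] - tmin[i]) / max(rmax[i] - rmin[i], 1e-6) for i in range(3))

    venue = VenueConfig(
        performance_volumes=[((0, 0, 0), (10, 6, 8))],
        viewing_volumes=[((0, 0, 9), (10, 4, 20))],
        viewing_positions=[(5.0, 1.5, 12.0)],
    )
    scale = fit_to_performance_volume(((0, 0, 0), (14, 7, 10)), venue.performance_volumes[0])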

Network system 556 is any system configured to manage holographic content transmission between systems in LF performance network 550. For example, network system 556 may receive a request for holographic content from LF display system 500A and facilitate the transmission of that holographic content from LF generation system 554 to LF display system 500A. The network system 556 may also store holographic content, viewer profiles, etc. for transmission to and/or storage by other LF display systems 500 in the LF performance network 550. The network system 556 may also contain an LF processing engine 530 that can create holographic content as previously described.

The network system 556 may include a Digital Rights Management (DRM) module to manage the digital rights of the holographic content. For example, LF generation system 554 may transmit the holographic content to network system 556, and the DRM module may encrypt the holographic content using a digital encryption format. In other examples, LF generation system 554 encodes the recorded light field data into a holographic content format that can be managed by a DRM module. Network system 556 may provide digital decryption keys to the LF display systems so that each LF display system 500 can decrypt and subsequently display the holographic content to viewers. Most generally, the network system 556 and/or LF generation system 554 encode the holographic content, and the LF display systems can decode the holographic content.
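
The flow below sketches the encrypt, distribute, and decrypt pattern described in this paragraph. It uses the Fernet recipe from the Python 'cryptography' package purely as a stand-in; the disclosure does not specify a cipher, key format, or key-exchange scheme.

    from cryptography.fernet import Fernet

    # Generated by the network system's DRM module for a specific piece of content.
    content_key = Fernet.generate_key()
    encrypted_content = Fernet(content_key).encrypt(b"<encoded holographic content>")

    # The network system releases the key only to LF display systems authorized to
    # present the content; each display decrypts before decoding and rasterizing.
    def display_side_decrypt(blob: bytes, key: bytes) -> bytes:
        return Fernet(key).decrypt(blob)

    assert display_side_decrypt(encrypted_content, content_key) == b"<encoded holographic content>"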

Network system 556 may act as a repository for previously recorded and/or created holographic content. Each piece of holographic content may be associated with a transaction fee that, when received, causes network system 556 to transmit the holographic content to LF display system 500 that provides the transaction fee. For example, LF display system 500A may request access to holographic content via network 552. The request includes a transaction fee for the holographic content. In response, network system 556 transmits the holographic content to the LF display system for display to the viewer. In other examples, the network system 556 may also serve as a subscription service for holographic content stored in the network system. In another example, LF generation system 554 records light field data of a performance in real-time and generates holographic content representative of the performance. LF display system 500 transmits a request for holographic content to LF generation system 554. The request includes a transaction fee for the holographic content. In response, LF generation system 554 transmits the holographic content for simultaneous display on LF display system 500. Network system 556 can act as a mediator to exchange transaction fees and/or manage the holographic content data stream across network 552.
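
A toy request/response exchange along these lines is sketched below; the catalog entry, fee amount, and return fields are fabricated for illustration and do not reflect any actual pricing or protocol in the disclosure.

    # Hypothetical repository held by the network system; the content identifier
    # and fee are fictional placeholders.
    CATALOG = {"example-live-show": {"fee": 500.00, "content": b"<encoded performance>"}}

    def request_content(content_id: str, offered_fee: float) -> dict:
        entry = CATALOG.get(content_id)
        if entry is None:
            return {"status": "not_found"}
        if offered_fee < entry["fee"]:
            return {"status": "payment_required", "fee": entry["fee"]}
        # On sufficient payment, the content (or a live stream handle) is released.
        return {"status": "ok", "content": entry["content"]}

    response = request_content("example-live-show", offered_fee=500.00)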

Network 552 represents the communication paths between systems in LF performance network 550. In one embodiment, the network is the internet, but may be any network including, but not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired, or wireless network, a cloud computing network, a private network, or a virtual private network, and any combination thereof. In addition, all or some of the links may be encrypted using conventional encryption techniques, such as Secure Sockets Layer (SSL), encrypted HTTP, and/or Virtual Private Network (VPN). In another embodiment, the entities may use custom and/or dedicated data communication techniques instead of or in addition to those described above.

Example site

Fig. 6-8 illustrate several example venues where holographic content may be displayed using an LF display system (e.g., LF display system 500). The holographic content of the show displayed in the venue ("show content") may be any of the holographic content described herein. The performance content may be presented in place of a live performance by real performers. Within a venue, any number of viewers are located at viewing positions within any number of viewing volumes. The LF display system is configured to display performance content in a holographic object volume ("performance volume") such that viewers in the viewing volumes perceive the performance content. Typically, an LF display system in a venue includes an array of light field modules 210 surrounding a performance volume that generates a seamless multi-sided LF display. Thus, the performance volume may be the aggregate holographic object volume of an array of LF display modules 210.

Fig. 6 illustrates a side view of a venue 600 that is a conventional theater that has been enhanced with an LF display system in accordance with one or more embodiments. In the theater, both the stage 602 and the wall 604 surrounding the stage are lined with an array 640 of LF display modules such that the area above the stage is the performance volume 610 of the LF display system. For clarity, performance volume 610 (e.g., holographic object volume 255) is shown as a bounded square, but the performance volume 610 shown is only a portion of the actual performance volume. For example, the performance volume may extend into the wall 604 or stage 602. In fig. 6, the LF display system is an embodiment of LF display system 500. Further, the LF display modules in array 640 are embodiments of LF display assembly 510.

Here, the theater house has three tiers, but may contain any number of tiers. Each tier contains a plurality of viewing positions (e.g., viewing positions 622A, 622B) for viewers to view performance content 630 in the performance volume 610. As shown, the performance content is a real image 632, but may also be a virtual image. The viewing positions 622 in each tier are contained in a viewing volume (e.g., 620A, 620B, and 620C) of the LF display system. The LF display system may display the same or different performance content 630 to viewers in each viewing volume 620. For example, as described below, viewers at viewing positions 622A in the bottom-tier viewing volume 620A may see different performance content 630 than viewers at viewing positions 622B in the middle-tier viewing volume 620B.

In other embodiments, the theater may be configured differently. For example, the stage may be raked, the theater may contain an orchestra pit or a standing-room area, etc. Any surface in the theater (e.g., proscenium arches, ceilings, backdrops, curtains, etc.) can be included in the array 640 of LF display modules. Further, the theater may have additional tiers and viewing volumes, and these tiers and viewing volumes may be arranged in any number of configurations.

As a background example, the venue 600 shown is a movie theater in san francisco showing, for a fee, live performance content of a famous performer "n.j.". Note, however, that n.j. is performing at a venue in new york city. The venue in new york city contains an LF generation system 554 for recording the performance of n.j. and transmitting it as performance content 630 to other venues via a network (e.g., network 552). Venue 600 in san francisco includes LF display system 500 configured to display the received performance content 630 to viewers at viewing positions 622 in viewing volumes 620. A transaction fee is paid (e.g., to network system 556 or LF generation system 554) to receive the performance content. In various examples, an owner of venue 600, a presenter at venue 600, a performance manager, or any other agent may pay the fee. Performance content 630 allows viewers in san francisco to perceive n.j. as if n.j. were performing live in the performance volume 610 in front of them. This allows viewers in san francisco to view the live performance without having to travel to new york city.

In some embodiments, venue 600 charges an entrance fee to view a live performance of n.j. through LF display system 500. Each viewing position 622 may have a different entrance fee, and typically the entrance fee for the bottom viewing volume 620A is higher than the entrance fee for the top viewing volume 620C. The LF display system 500 may display different show content 630 to each viewing volume 620 based on the entrance fee of the viewing volume 620. For example, fully rendering the n.j. performance in san francisco may cost more (e.g., processing power, energy, etc.) than partially rendering it. Thus, the LF processing engine 530 can display only a portion of the performance content 630 to the theater's top-tier viewing volume 620C while presenting all of the performance content 630 to the theater's bottom-tier viewing volume 620A. For example, when only a portion of the performance content 630 is displayed, the LF processing engine 530 may display the performance content 630 in only a portion of the performance volume 610 rather than the entire performance volume 610, the LF processing engine 530 may remove aspects of the performance content 630 (e.g., backup dancers, props, etc.), the LF processing engine 530 may present the performance content 630 to the top viewing volume 620C at a lower resolution than to the bottom viewing volume 620A, and so on.

Alternatively or additionally, LF processing engine 530 may create holographic content to enhance performance content 630 based on the entrance fee for a viewing volume 620. For example, LF processing engine 530 may create (or access from network system 556) advertisements for display concurrently with performance content 630. The advertisements may be based on information obtained by the viewer profiling system 590 or the tracking system 580. For example, LF processing engine 530 may access a viewer profile that contains viewer characteristics and responses to holographic content. The LF processing engine 530 accesses and displays advertisements from the data store 522 associated with the viewer's characteristics and responses. When show content 630 is displayed, the venue may charge the advertiser a fee for displaying the advertisement. In another example, LF processing engine 530 may augment performance content 630 with sponsored holographic content (e.g., product placement) rather than overtly displayed advertisements. For example, the LF processing engine 530 may replace clothing worn during the new york city performance with clothing from an advertiser. Thus, a viewer watching the performance content 630 in the movie theater in san francisco would perceive that n.j. is wearing clothing from the advertiser instead of the clothing actually worn in new york city.
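
A simple way to express the tiered presentation and sponsored augmentation described in the two preceding paragraphs is a per-viewing-volume policy table, as in the sketch below; the tier names, settings, and content "layers" are assumptions for illustration only.

    PRESENTATION_POLICY = {
        "bottom": {"resolution_scale": 1.0, "include_backup_dancers": True,  "ads": False},
        "middle": {"resolution_scale": 0.6, "include_backup_dancers": True,  "ads": False},
        "top":    {"resolution_scale": 0.4, "include_backup_dancers": False, "ads": True},
    }

    def render_for_viewing_volume(performance_content: dict, tier: str) -> dict:
        policy = PRESENTATION_POLICY[tier]
        rendered = dict(performance_content)
        rendered["resolution_scale"] = policy["resolution_scale"]
        if not policy["include_backup_dancers"]:
            # Drop optional layers (e.g., backup dancers, props) for cheaper tiers.
            rendered["layers"] = [l for l in rendered.get("layers", []) if l != "backup_dancers"]
        if policy["ads"]:
            rendered.setdefault("overlays", []).append("sponsored_content")
        return rendered

    show = {"layers": ["performer", "backup_dancers", "props"]}
    top_tier_view = render_for_viewing_volume(show, "top")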

Fig. 7A illustrates a cross-section of another venue 700A incorporating an LF display system for displaying performance content to viewers at viewing positions in a viewing volume in accordance with one or more embodiments. The venue allows the LF display system to display real-time holographic content, recorded holographic content, and/or some other additional holographic content. The holographic content may include content representing concerts, shows, programs, scenes, events, etc. Holographic content may also include artists, bands, actors, dancers, comedians, performers, and the like. Holographic content may also include representations of any backdrop, stage, prop, piece of clothing, costume, musical instrument, etc. All venues described herein may be configured to display similar holographic content.

Here, venue 700A is designed and constructed for displaying show content 730, rather than being an existing venue augmented with an LF display system. As shown, fig. 7A is a cross-section of venue 700A, which is shaped as a cylindrical ring, with an array 740 of LF display modules covering the outer surface of the ring's inner cylinder. In fig. 7A, the array 740 of LF display modules is an embodiment of LF display component 510. Additionally, the LF display system in venue 700A is an embodiment of LF display system 500.

Venue 700A includes a plurality of viewing positions arranged in tiers. Here, each tier is located on the inner surface of an outer cylinder with a radius larger than the radius of the inner cylinder. The viewing positions 722 in each tier are within a viewing volume (e.g., 720A, 720B, 720C, 720D, 720E, and 720F). Here, the performance volumes (e.g., 710A, 710B, 710C, 710D, 710E, and 710F) are located between the inner wall of the outer cylinder and the outer wall of the inner cylinder. The performance volumes are positioned such that a viewer at a viewing position 722 can perceive show content 730 displayed in a performance volume 710. Likewise, for clarity, a performance volume 710 (e.g., holographic object volume 255) is shown as a bounded square, but the performance volume 710 shown is only a portion of the actual performance volume. For example, the performance volume may extend into the inner cylinder of venue 700A.

In the illustrated example, a portion of each show volume 710 and viewing volume 720 spatially overlap. Although shown as partially overlapping, the show volume and the viewing volume may completely overlap in space. As previously described, the spatial overlap is the region in which viewers can interact with the performance content 730. Further, viewers interacting with the performance content 730 in the spatially overlapping region can be monitored by the sensory feedback component 570, and the performance content 730 can change in response. In some cases, venue 700A may charge a higher entrance fee for viewing positions 722 in the region where viewers may interact with show content 730.

Venues designed for displaying show content using an array of LF display modules have several advantages over venues merely enhanced with an array of LF display modules. For example, each viewing volume of venue 700A has its own performance volume, so viewers in different viewing volumes can be shown different performances (or different sub-portions of a performance), while viewers in every viewing volume 620 of venue 600 watch the same performance. Thus, the LF display system in venue 700A may generate a better viewing experience for the viewers in each viewing volume 720. For example, a viewer in viewing volume 720A of venue 700A may be provided with the same view of performance content 730 as a viewer in viewing volume 720C. However, in venue 600, a viewer in viewing volume 620C views performance content 630 from farther away and from above compared with a viewer in viewing volume 620A. In this manner, each viewer in venue 700A may have the best seat in the venue.

Further, when considering LF display parameters, rendering performance content 730 for the viewing volumes 720 in fig. 7A is easier to manage than rendering performance content 630 for the viewing volumes 620 in fig. 6. Some LF display parameters may include, for example, resolution, field of view, projection distance, and viewing subvolumes. To illustrate, the viewing volumes 720 in the venue of fig. 7A may be designed such that each is substantially equal in shape and substantially similarly spaced from its performance volume 710. Thus, since each viewing volume 720 is effectively identical, the rendering of performance content 730 for the viewing volumes 720 may be simplified (e.g., a single rendering may serve each viewing volume 720). However, in the venue of fig. 6, each viewing volume 620 is different, and a separate rendering of performance content 630 is required for each viewing volume 620.
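
Because the viewing volumes in venue 700A are designed to be effectively identical, a single render pass can serve all of them. The cache below is a minimal sketch of that idea, keyed on an assumed geometric signature of the viewing volume; it is not the actual rendering pipeline.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def render_for_geometry(volume_signature: tuple) -> str:
        # The expensive per-viewing-volume render would run here; identical
        # signatures hit the cache instead of re-rendering.
        return f"rendered frame set for geometry {volume_signature}"

    # Six viewing volumes with the same shape and spacing from their performance volumes.
    identical_volumes = [("ring-segment", 12.0, 3.0)] * 6
    frames = [render_for_geometry(sig) for sig in identical_volumes]   # rendered once, reused six times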

The venue may take other shapes such that the array of light field display modules may display performance content in the performance volume to viewers in viewing positions within the viewing volume 720. Some example venues having different shapes may include domes, awnings, tunnels, spherical theaters, and the like.

For example, fig. 7B illustrates a cross-section of another venue 700B incorporating an LF display system for displaying performance content to viewers at viewing positions in a viewing volume in accordance with one or more embodiments. Venue 700B is designed and constructed for displaying show content 730, rather than being an existing venue augmented with an LF display system. As shown, fig. 7B is a cross-section of venue 700B. In fig. 7B, the array 740 of LF display modules is an embodiment of LF display component 510. Additionally, the LF display system in venue 700B is an embodiment of LF display system 500.

In the example shown, venue 700B resembles a theater in the round, with viewing positions 722 surrounding a stage. The floor of the stage 702 is covered with an array 740 of LF display modules such that the area above the stage forms a performance volume (e.g., performance volume 710G). The LF display system presents performance content 730 in performance volume 710G so that viewers in venue 700B can perceive the performance content. In venue 700B, the viewing positions 722 are arranged on an incline such that the line of sight from each viewing position allows unobstructed viewing of show content 730 from the viewing volume (e.g., viewing volume 720G). Here, the venue includes a single viewing volume 720G so that all viewers are presented with the same performance. In other configurations, there may be more than one viewing volume.

More generally, LF display systems may have a substantially horizontal or near-horizontal display surface. In several examples, an LF display system may include the following display surfaces: (i) at least some portion of a floor of the venue, (ii) at least some portion of a stage (e.g., stage 702) in the venue, and/or (iii) at least some portion of a raised viewing platform in the venue. Other types of horizontal surfaces are also possible. For these configurations, the viewers may be elevated relative to the display surface and looking down to view the holographic performance content projected from the display surface, and the viewers may partially or completely surround the display surface. There are many other configurations of light field display surfaces, including vertically mounted display surfaces with viewing positions disposed generally in front of the LF display surface, as described elsewhere in this disclosure (e.g., display surface 450A shown in fig. 4C, 450B shown in fig. 4D, and 450C shown in fig. 4E), and curved LF display surfaces.

For clarity, performance volume 710G is shown as a bounded square, but the performance volume 710G shown may be only a portion of the actual performance volume. For example, show volume 710G may extend further toward the top of venue 700B. Further, a portion of the show volume 710G and the viewing volume 720G spatially overlap. Although shown as partially overlapping, the show volume and the viewing volume may completely overlap in space. As previously described, the spatial overlap is the region in which viewers can interact with the performance content 730.

The venue may also be a much smaller location. For example, fig. 8 shows a venue 800 that also functions as a home theater in a viewer's living room 802 in accordance with one or more embodiments. Here, the home theater contains an LF display system 810 comprised of an array of LF display modules on one wall. In fig. 8, LF display system 810 is an embodiment of LF display system 500.

LF display system 810 may be configured such that a show volume (e.g., holographic object volume 255) and a viewing volume completely overlap within living room 802. That is, at any viewing location within living room 802 (e.g., viewing location 822), a viewer can view and interact with performance content 830. While the viewing location 822 is shown as a chair, the viewer can move about the living room and still view the performance content 830. That is, the viewing location 822 may be wherever the viewer is located in the living room 802.

In some embodiments, LF display system 810 creates (or modifies) performance content 830 based on viewer interactions with the performance content 830. For example, the viewer may fist bump a performer in the performance content 830. In this case, the viewer may say "Awesome, n.j.!" and move his hand as if to fist bump the performer in the performance content 830. The tracking system 580 monitors the viewer and identifies (e.g., via machine hearing, machine vision, neural networks, etc.) that the viewer wishes to fist bump the performer in the performance content 830. The LF processing engine 530 creates performance content (e.g., using machine learning, neural networks, or some combination thereof) that represents the performer returning the fist bump, based on the monitored viewer information. The tracking system 580 monitors the position of the viewer's hand, and when the viewer's fist and the performer's fist spatially overlap, the sensory feedback component 570 creates for the viewer the sensation of bumping fists with the performer in the performance content.
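
A toy version of the interaction loop in this example is sketched below: when the tracked position of the viewer's hand comes within a threshold of the rendered performer's hand, the sensory feedback component is triggered. The threshold, the haptics interface, and the FakeHaptics stand-in are assumptions for illustration.

    import math

    def update_interaction(viewer_hand, performer_hand, haptics, threshold_m=0.05) -> bool:
        # Trigger tactile feedback when the two hands spatially overlap.
        if math.dist(viewer_hand, performer_hand) < threshold_m:
            haptics.emit(position=viewer_hand, effect="contact_pulse")
            return True
        return False

    class FakeHaptics:
        def emit(self, position, effect):
            print(f"haptic {effect} at {position}")

    update_interaction((0.10, 1.20, 0.50), (0.12, 1.21, 0.52), FakeHaptics())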

In some cases, a viewer may interact with the network system 556 to obtain performance content 830 for his living room 802. Continuing with the example described with reference to fig. 5B, the viewer may send a transaction fee to the network system 556, and the LF generation system 554 sends the holographic content of the n.j. performance in new york city to venue 800.

Displaying performance content to viewers in a venue

Fig. 9 is a flow diagram of a method 900 for displaying show content to viewers in a venue (e.g., venue 600) in the context of an LF show network (e.g., LF show network 550). The method 900 may include additional or fewer steps, and the steps may be performed in a different order. Further, various steps or combinations of steps may be repeated any number of times during the performance of the method.

First, a venue incorporating an LF display system (e.g., LF display system 500) transmits 910 a request for a live stream (or a previously recorded version) of performance content (e.g., performance content 630) to a network system (e.g., network system 556) via a network (e.g., network 552). The request includes a transaction fee sufficient to pay royalties to the copyright owners of the performance content.

An LF generation system (e.g., LF generation system 554) records LF data for live performances and transmits corresponding performance content to the network system. The network system transmits the performance content to the LF display system so that the performance content can be displayed at approximately the same time that the LF generation system is recording.

The LF display system receives 920 holographic performance content 630 from the network system via the network.

The LF display system determines 930 a configuration of the LF display system and/or the performance space. For example, the LF display system may access a configuration file containing a number of parameters describing the hardware configuration of the LF display, including resolution, number of rays projected per degree, field of view, deflection angle on the display surface, or dimensions of the LF display surface. The configuration file may also contain information about the geometric orientation of the LF display components, including the number, relative orientation, width, height, and layout of the LF display panels. In addition, the configuration file may contain configuration parameters for the venue, including the holographic object volumes, the viewing volumes, and the viewing positions relative to the display panels.
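
An example of the kind of configuration file step 930 might consult is shown below; the JSON schema and every value in it are assumptions made for illustration, not a format defined by the disclosure.

    import json

    EXAMPLE_CONFIG = """
    {
      "hardware": {"resolution": [8192, 4320], "rays_per_degree": 4.0,
                   "field_of_view_deg": 120.0, "deflection_angle_deg": 60.0,
                   "surface_size_m": [12.0, 6.0]},
      "geometry": {"num_panels": 24, "panel_layout": "6x4", "panel_size_m": [2.0, 1.5]},
      "venue":    {"performance_volumes": 1, "viewing_volumes": 3,
                   "viewing_positions_per_volume": [400, 350, 300]}
    }
    """

    config = json.loads(EXAMPLE_CONFIG)
    hardware, geometry, venue = config["hardware"], config["geometry"], config["venue"]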

To illustrate by way of example, LF display system 500 determines 930 the viewing volumes (e.g., viewing volumes 620A, 620B, and 620C) for displaying performance content. For example, the LF display system 500 may access information in the LF display system that describes the layout, geometric configuration, and/or hardware configuration of the venue. To illustrate, the layout may include the location, spacing, and size of viewing positions (e.g., viewing position 622) in the venue. Thus, the LF display system may determine that the viewing positions in the first tier of the venue are in a first viewing volume, the viewing positions in the second tier of the venue are in a second viewing volume, and the viewing positions in the third tier of the venue are in a third viewing volume. In various other embodiments, the LF display system may determine any number and configuration of viewing volumes at any location within the venue.

The LF display system generates 940 holographic content (and other sensory content) for presentation on the LF display system based on the hardware configuration of the LF display system within the performance venue and the particular layout and configuration of the performance venue. Generating the performance content for display may include rendering the performance content appropriately for the venue or viewer. For example, the LF display system may: (i) add advertisements to, and remove aspects of the live performance from, the performance content for display to the viewing volume in the third tier, (ii) reduce the fidelity of the performance content for display to the viewing volume in the second tier of the venue, and (iii) fully render the performance content for display to the viewing volume in the first tier of the venue.

The LF display system presents 950 performance content in the performance volumes in the venue such that viewers at the viewing positions in each viewing volume perceive the appropriate performance content. That is, viewers of the top viewing volume perceive performance content with advertisements, viewers of the middle viewing volume perceive performance content at a lower resolution, and viewers of the bottom viewing volume perceive fully rendered performance content.

The LF display system can determine information about viewers in the viewing volume at any time while the viewers are viewing the performance content. For example, the tracking system may monitor a facial response of a viewer in the viewing volume, and the viewer profiling system may access information about characteristics of the viewer in the viewing volume.

The LF display system can create (or modify) performance content for simultaneous display based on the determined information. For example, the LF processing engine creates a light show for simultaneous display by the LF display system based on the determined information about the viewer.

Additional configuration information

The foregoing description of embodiments of the present disclosure has been presented for purposes of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. One skilled in the relevant art will appreciate that many modifications and variations are possible in light of the above disclosure.

Some portions of this description describe embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Moreover, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.

Any of the steps, operations, or processes described herein may be performed or carried out using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor to perform any or all of the described steps, operations, or processes.

Embodiments of the present disclosure may also relate to apparatuses for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general purpose computing device selectively activated or reconfigured by a computer program stored in the computer. This computer program may be stored in a non-transitory tangible computer readable storage medium or any type of medium suitable for storing electronic instructions, which may be coupled to a computer system bus. Moreover, any computing system referred to in the specification may contain a single processor, or may be an architecture that employs a multi-processor design for increased computing capability.

Embodiments of the present disclosure may also relate to products produced by the computing processes described herein. This product may include information resulting from a computing process, where the information is stored on a non-transitory tangible computer readable storage medium and may include any embodiment of the computer program product or other data combination described herein.

Finally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
