Apparatus, method and computer program for providing notifications

Document No.: 1358585 Publication date: 2020-07-24

Note: This technology, "Apparatus, method and computer program for providing notifications", was created by S. S. Mate, A. Lehtiniemi, A. Eronen and J. A. Leppänen on 2018-12-14. Its main content comprises: an apparatus, method and computer program, the apparatus comprising: means for determining that perspective-mediated content is available within content provided to a rendering device; and means for adding a notification to the content indicating that the perspective-mediated content is available; wherein the notification includes a spatial audio effect added to the content.

1. An apparatus, comprising:

means for determining that perspective-mediated content is available within content provided to a rendering device; and

means for adding a notification to the content indicating that perspective-mediated content is available;

wherein the notification comprises a spatial audio effect added to the content.

2. The apparatus of claim 1, wherein the spatial audio effect of the notification is temporarily added to the content.

3. The apparatus of any preceding claim, wherein the notification comprises a spatial audio effect that changes spatialization of the perspective-mediated content.

4. The apparatus of any preceding claim, wherein the notification comprises application of an artificial audio object to the perspective-mediated content.

5. The apparatus of any preceding claim, wherein the spatial audio effect added to the content comprises one or more of ambient noise and reverberation.

6. The apparatus of any preceding claim, wherein the notification is added to the content by applying a room impulse response to the content.

7. The apparatus of claim 6, wherein the applied room impulse response is independent of the room in which the perspective-mediated content was captured and the room in which the content is to be rendered.

8. The apparatus of any preceding claim, wherein the perspective-mediated content comprises content that has been captured within a three-dimensional space that supports different audio and/or visual scenes being rendered via the rendering device, wherein the audio and/or visual scenes being rendered depend on a location of a user of the rendering device.

9. The apparatus of claim 8, wherein the notification added to the content produces an audio effect different from the audio scene corresponding to the user's location.

10. The apparatus of any of claims 8-9, wherein the notification added to the content comprises adding reverberation to the content to create the audio effect that one or more audio objects are moving within the three-dimensional space.

11. The apparatus of any preceding claim, wherein the perspective-mediated content comprises audio content.

12. The apparatus of any preceding claim, wherein the perspective-mediated content comprises content captured by a plurality of devices.

13. A content rendering device comprising the apparatus of any preceding claim and means for rendering perspective-mediated content.

14. A content capture device comprising the apparatus of any preceding claim and means for capturing perspective-mediated content.

15. A method, comprising:

determining that perspective-mediated content is available within content provided to a rendering device; and

adding a notification to the content indicating that perspective-mediated content is available;

wherein the notification comprises a spatial audio effect added to the content.

16. The method of claim 15, wherein the spatial audio effect of the notification is temporarily added to the content.

17. The method of any of claims 15-16, wherein the spatial audio effect added to the content comprises one or more of ambient noise and reverberation.

18. The method of any of claims 15-17, wherein the notification is added to the content by applying a room impulse response to the content.

19. A computer program comprising computer program instructions which, when executed by processing circuitry, cause:

determining that perspective-mediated content is available within content provided to a rendering device; and

adding a notification to the content indicating that perspective-mediated content is available;

wherein the notification comprises a spatial audio effect added to the content.

20. The computer program of claim 19, wherein the spatial audio effect of the notification is temporarily added to the content.

Technical Field

Examples of the present disclosure relate to an apparatus, method and computer program for providing notifications. In particular, they relate to an apparatus, method and computer program for providing notifications relating to perspective-mediated content.

Background

The perspective-mediated content may include audio and/or visual content representing an audio space and/or a visual space having a plurality of dimensions. When perspective-mediated content is rendered, the rendered audio scene and/or visual scene depends on the user's location. This enables different audio scenes and/or different visual scenes to be rendered, wherein the audio scenes and/or the visual scenes correspond to different positions of the user.

The perspective-mediated content may be used in virtual reality or augmented reality applications or any other suitable type of application.

Disclosure of Invention

According to various, but not necessarily all, examples of the disclosure there is provided an apparatus comprising: means for determining that perspective-mediated content is available within content provided to a rendering device; and means for adding a notification to the content indicating that the perspective-mediated content is available; wherein the notification includes a spatial audio effect added to the content.

The spatial audio effect of the notification may be temporarily added to the content.

The spatial audio effect added to the content may include one or more of ambient noise and reverberation.

The notification may be added to the content by applying a room impulse response to the content. The applied room impulse response may be independent of the room in which the perspective-mediated content was captured and the room in which the content is to be rendered.

The perspective-mediated content may include content that has been captured within a three-dimensional space that enables different audio and/or visual scenes to be rendered via a rendering device, where the rendered audio and/or visual scenes depend on a location of a user of the rendering device. The notification added to the content may produce an audio effect different from the audio scene corresponding to the user's location.

The notification added to the content may include adding reverberation to the content to create an audio effect that one or more audio objects are moving within the three-dimensional space.

The perspective-mediated content may include audio content.

The perspective-mediated content may include content captured by multiple devices.

According to various, but not necessarily all, examples of the disclosure there is provided an apparatus comprising: processing circuitry; and memory circuitry comprising computer program code, the memory circuitry and the computer program code configured to, with the processing circuitry, cause the apparatus to: determine that perspective-mediated content is available within content provided to a rendering device; and add a notification to the content indicating that the perspective-mediated content is available; wherein the notification comprises a spatial audio effect added to the content.

According to various, but not necessarily all, examples of the disclosure there is provided a method comprising: determining that perspective-mediated content is available within content provided to a rendering device; and adding a notification to the content indicating that the perspective-mediated content is available; wherein the notification comprises a spatial audio effect added to the content.

The spatial audio effect of the notification may be temporarily added to the content.

The spatial audio effect added to the content may include one or more of ambient noise and reverberation.

The notification may be added to the content by applying a room impulse response to the content. The applied room impulse response may be independent of the room in which the perspective-mediated content was captured and the room in which the content is to be rendered.

The perspective-mediated content may include content that has been captured within a three-dimensional space that enables different audio and/or visual scenes to be rendered via a rendering device, where the rendered audio and/or visual scenes depend on a location of a user of the rendering device. The notification added to the content may produce an audio effect different from the audio scene corresponding to the user's location.

The notification added to the content may include adding reverberation to the content to create an audio effect that one or more audio objects are moving within the three-dimensional space.

The perspective-mediated content may include audio content.

The perspective-mediated content may include content captured by multiple devices.

According to various, but not necessarily all, examples of the disclosure there is provided a computer program comprising computer program instructions that, when executed by processing circuitry, cause: determining that perspective-mediated content is available within content provided to a rendering device; and adding a notification to the content indicating that the perspective-mediated content is available; wherein the notification comprises a spatial audio effect added to the content.

According to various, but not necessarily all, examples of the disclosure there is provided a physical entity embodying the computer program described above.

According to various, but not necessarily all, examples of the disclosure there is provided an electromagnetic carrier signal carrying the computer program described above.

In accordance with various, but not necessarily all, examples of the disclosure, there are provided examples as claimed in the appended claims.

Drawings

For a better understanding of various examples that are useful for understanding the detailed description, reference will now be made, by way of example only, to the accompanying drawings in which:

FIG. 1 illustrates an apparatus;

FIG. 2 illustrates a method;

FIGS. 3A and 3B illustrate an example system;

FIGS. 4A-4C illustrate example systems for generating different types of perspective-mediated content;

FIGS. 5A-5B illustrate a system that provides a first type of perspective-mediated content;

FIGS. 6A-6B illustrate a system that provides a second type of perspective-mediated content;

FIGS. 7A-7B illustrate a system that provides a third type of perspective-mediated content;

FIGS. 8A-8B illustrate a system that provides a fourth type of perspective-mediated content; and

FIG. 9 illustrates another example system.

Detailed Description

The following description describes an apparatus 1, method and computer program 9 that control how content, which may include perspective-mediated content, is rendered to a user. In particular, they control how a user may be informed that perspective-mediated content is available or that new types of perspective-mediated content have become available. The perspective-mediated content may include an audio space and/or a visual space, where the rendered audio scene and/or visual scene depends on the location of the user.

Fig. 1 schematically illustrates an apparatus 1 according to an example of the present disclosure. The apparatus 1 illustrated in fig. 1 may be a chip or a chipset. In some examples, the apparatus 1 may be provided within a device such as a content capture device, a content processing device, a content rendering device, or any other suitable type of device.

The apparatus 1 comprises control circuitry 3. The control circuitry 3 may provide means for controlling an electronic device such as a content capture device, a content processing device, a content rendering device or any other suitable type of device. The control circuitry 3 may also provide means for performing the methods or at least a portion of the methods of examples of the present disclosure.

The apparatus 1 comprises processing circuitry 5 and memory circuitry 7. The processing circuitry 5 may be configured to read from the memory circuitry 7 and write to the memory circuitry 7. The processing circuitry 5 may comprise one or more processors. The processing circuitry 5 may further comprise: an output interface via which the processing circuitry 5 outputs data and/or commands; and an input interface via which data and/or commands are input to the processing circuitry 5.

The memory circuitry 7 may be configured to store a computer program 9, the computer program 9 comprising computer program instructions (computer program code 11) which, when loaded into the processing circuitry 5, control the operation of the apparatus 1. The computer program instructions of the computer program 9 provide the logic and routines that enable the apparatus 1 to perform the example methods described above. By reading the memory circuitry 7, the processing circuitry 5 is able to load and execute the computer program 9.

The computer program 9 may arrive at the apparatus 1 via any suitable delivery mechanism, which may be, for example, a non-transitory computer readable storage medium, a computer program product, a memory device, a recording medium such as a compact disc read only memory (CD-ROM) or Digital Versatile Disc (DVD), or an article of manufacture that tangibly embodies the computer program 9. In some examples, the computer program 9 may be transmitted to the apparatus 1 via a wireless protocol, such as ZigBee, ANT+, Near Field Communication (NFC), radio frequency identification, wireless local area network (wireless LAN) or any other suitable protocol.

Although the memory circuitry 7 is illustrated in the figures as a single component, it will be appreciated that it may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.

Although the processing circuitry 5 is illustrated in the figures as a single component, it is to be understood that it may be implemented as one or more separate components, some or all of which may be integrated/removable.

References to "computer-readable storage medium", "computer program product", "tangibly embodied computer program", etc. or a "controller", "computer", "processor", etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures, Reduced Instruction Set Computing (RISC) and sequential (von neumann)/parallel architectures, but also specialized circuits such as Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor or configuration settings for a fixed-function device, gate array or programmable logic device etc.

As used in this application, the term "circuitry" refers to all of the following:

(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); and

(b) combinations of circuits and software (and/or firmware), such as (where applicable): (i) a combination of processor(s); or (ii) portions of processor(s)/software (including digital signal processor(s)), software and memory(ies) that work together to cause a device, such as a mobile phone or server, to perform various functions; and

(c) circuits, such as microprocessor(s) or portions of microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.

Fig. 2 illustrates an example method that may be used in examples of the present disclosure. The method may be implemented using the apparatus 1 shown in fig. 1. The method may be implemented by the apparatus 1 within a content capture device, within a content processing device, within a content rendering device or within any other suitable device. In some examples, the blocks of the method may be distributed among one or more different devices.

The method comprises: at block 21, determining that perspective-mediated content is available within content provided to a rendering device.

The content provided to the rendering device may include audio content. The audio content may be generated by one or more audio objects that may be located at different locations within the space.

In some examples, the content provided to the rendering device may include visual content. The visual content may include images corresponding to objects within the space. In some examples, the visual content may correspond to audio content such that images in the visual content correspond to the audio content.

At block 21, the content provided to the rendering device may be perspective-mediated content or non-perspective-mediated content. In some examples, the content may be volumetric content or non-volumetric content.

The non-perspective mediated content may include audio or visual content, wherein the audio scene and/or the visual scene rendered by the rendering device is independent of a location of a user of the rendering device. The same audio scene and/or visual scene may be provided even if the user changes their orientation or position.

The audio perspective mediated content may represent an audio space. The audio space may be a multi-dimensional space. In an example of the present disclosure, the audio space may be a three-dimensional space. The audio space may comprise one or more audio objects. The audio objects may be located at different positions within the audio space. In some examples, the audio object may be moved within the audio space.

Different audio scenes may be available within the audio space. Different audio scenes may comprise different representations of the audio space when listened to from a particular angle within the audio space.

For example, the audio perspective-mediated content may include audio generated by a band or a plurality of musicians located at different locations around a room. This enables users to listen to different audio scenes depending on how they rotate their head when the audio perspective-mediated content is rendered. The audio scene that the user listens to may also depend on the position of the audio objects relative to the user. If the user moves within the audio space, this may change the audio objects that the user can listen to, as well as the volume and other parameters of the audio objects. For example, if a user starts from a first position located next to a musician playing the drums, they will hear primarily the audio provided by the drums, while if they move towards another musician playing the guitar, the guitar's sound will increase relative to the sound provided by the drums. It is to be appreciated that this example is intended to be illustrative, and that other examples for rendering audio perspective-mediated content may be used in examples of the present disclosure.
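As an illustration of the drum-and-guitar example above, the sketch below (not part of the patent; the inverse-distance gain law and the object positions are assumptions) shows how the relative levels of two audio objects might change as a listener moves between them in six-degrees-of-freedom content:

```python
# A minimal sketch, assuming a simple inverse-distance attenuation model.
import math

def object_gain(listener_pos, object_pos, ref_distance=1.0):
    """Gain of one audio object at the listener position (inverse-distance law)."""
    dx, dy, dz = (o - l for o, l in zip(object_pos, listener_pos))
    distance = max(math.sqrt(dx * dx + dy * dy + dz * dz), ref_distance)
    return ref_distance / distance

# Hypothetical object positions: drums at the origin, guitar 4 m away.
objects = {"drums": (0.0, 0.0, 0.0), "guitar": (4.0, 0.0, 0.0)}

# The listener walks from near the drums towards the guitar.
for listener in [(0.5, 0.0, 0.0), (3.5, 0.0, 0.0)]:
    gains = {name: round(object_gain(listener, pos), 2) for name, pos in objects.items()}
    print(listener, gains)  # the guitar gain rises while the drum gain falls
```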

The visual perspective-mediated content may represent a visual space. The visual space may be a multi-dimensional space. In examples of the present disclosure, the visual space may be a three-dimensional space. The space represented by the visual space may be the same as the space represented by the audio space.

Different visual scenes may be available within the visual space. Different visual scenes may include different representations of a visual space when viewed from a particular angle within the visual space. As with audio perspective-mediated content, a user can change the rendered visual perspective-mediated content by changing their position and/or orientation within the visual space.

In some examples, the content may include mediated reality content. This may be content that enables a user to visually experience a fully or partially man-made environment, such as a virtual visual scene or a virtual audio scene. The mediated reality content may include interactive content such as video games or non-interactive content such as motion video or audio recordings. The mediated reality content may be augmented reality content, virtual reality content, or any other suitable type of content.

The content may be perspective-mediated content such that a viewpoint of the user within a space represented by the content changes an audio and/or visual scene rendered to the user. For example, if a user of the rendering device rotates his head, this will change the audio scene and/or the visual scene that is rendered to the user.

At block 21, perspective-mediated content may be determined to be available using any suitable means. The apparatus may comprise control circuitry 3, which may be as described above. In some examples, perspective-mediated content may be obtained by a plurality of different capture devices. In such an example, it may be determined that perspective-mediated content is available for a period of time during which content is captured by multiple capture devices. This determination may be made by control circuitry 3 provided within a capture device, by control circuitry 3 provided within a communication system comprising the capture devices, or by any other suitable means.

In some examples, a content file including perspective-mediated content includes metadata indicating that the content is perspective-mediated content. The metadata may indicate the number of degrees of freedom that the user has within the perspective-mediated content; for example, it may indicate whether the user has three degrees of freedom or six degrees of freedom. In some examples, it may indicate the size of the volume available in the perspective-mediated content. For example, it may indicate a virtual space where perspective-mediated content is available. In such an example, the metadata may be used to determine whether perspective-mediated content is available.
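As a hedged illustration of how block 21 might use such metadata (the metadata keys below are hypothetical; the patent only says that the metadata may indicate these properties):

```python
# A minimal sketch with assumed metadata keys.
def perspective_content_available(metadata: dict) -> bool:
    """True if the content file declares perspective-mediated content."""
    return bool(metadata.get("perspective_mediated", False))

def degrees_of_freedom(metadata: dict) -> int:
    """Declared degrees of freedom, e.g. 3 (orientation only) or 6 (orientation plus translation)."""
    return int(metadata.get("degrees_of_freedom", 0))

# Hypothetical example file metadata.
meta = {"perspective_mediated": True,
        "degrees_of_freedom": 6,
        "volume_size_m": (10.0, 8.0, 3.0)}  # size of the available virtual space
print(perspective_content_available(meta), degrees_of_freedom(meta))  # True 6
```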

In some examples, different content files may be available that include different types of content. For example, a first file may contain non-perspective-mediated content, while a second file may contain perspective-mediated content that allows three degrees of freedom, and a third file may contain perspective-mediated content that allows six degrees of freedom. In such an example, it may be determined that perspective-mediated content is available when an additional content file becomes available.

In some examples, a single capture device may obtain perspective-mediated content. In such an example, the control circuitry 3 of the capture device may be arranged to provide an indication that perspective-mediated content has been captured, or the processing device may provide an indication that captured content has been processed to provide perspective-mediated content. In such an example, the indication may provide a trigger that enables the apparatus 1 to determine that perspective-mediated content is available.

The content may be provided to a rendering device. The rendering device may include any component that enables content to be rendered for a user. Rendering of the content may include providing the content in a form perceptible to the user. Rendering of the content may include rendering the content as perspective-mediated content. The content may be rendered by any suitable rendering device, such as one or more headphones, one or more speakers, one or more display units, or any other suitable rendering device. Rendering devices may be provided within more complex devices. For example, a virtual reality headset may include a headset and one or more displays, and a handheld device such as a mobile phone or tablet computer may include a display and one or more speakers.

In some examples, when content is provided to a rendering device, it may be rendered immediately. For example, a user may be live streaming audiovisual content. In such an example, the capturing of the content and the rendering of the content may occur simultaneously or with very little delay. In other examples, the content may be stored in one or more memories of the rendering device when provided to the rendering device. This may enable the user to download the content and use it at a later point in time. In such an example, the rendering of the content and the capturing of the content would not be simultaneous.

The method further comprises: at block 23, adding a notification to the content indicating that perspective-mediated content is available. The added notification includes a spatial audio effect added to the content. Thus, the notification comprises a modification of the content, rather than a separate notification provided in addition to the content.

The spatial audio effect added to the content may include any audio effect that may be used to provide an indication to the user that perspective-mediated content is now available. In some examples, the spatial audio effect may include the addition of ambient noise or reverberation, or any other suitable audio effect that enables a user to perceive the notification that has been added to the content.

The spatial audio effect added to the content may alter the spatialization of the audio content. The user may perceive this change, which acts as a notification that perspective-mediated content is available. In the case where the content being rendered is non-perspective-mediated content, the addition of the spatial effect to the content may be perceived by the user and serve as an indication that perspective-mediated content is now available. In the case where the content being rendered is perspective-mediated content, the addition of the spatial effect of the notification may change the spatial audio being rendered so that the user may perceive that the audio has changed. This may serve as a notification that other types of perspective-mediated content are now available.

In some examples, the content provided to the rendering device may not include audio content. For example, when perspective-mediated content becomes available, the content may be only visual content, or the audio content may be very quiet. In such an example, the notification may include applying an artificial audio object to the content. A spatial audio effect may then be added to the artificial audio object.
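A minimal sketch of such an artificial audio object follows, assuming 48 kHz mono PCM; the tone frequency, level and fade times are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def artificial_audio_object(freq_hz=440.0, duration_s=1.0, sr=48000, level=0.1):
    """A short synthetic tone to which a spatial effect can then be applied."""
    t = np.arange(int(sr * duration_s)) / sr
    tone = level * np.sin(2.0 * np.pi * freq_hz * t)
    # 50 ms linear fade in and out so the tone does not click.
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.05)
    return tone * fade
```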

In some examples, adding a spatial effect, such as reverberation, to content may create an audio effect in which one or more audio objects within an audio space are moving. In some examples, the spatial effect may create an audio effect in which the audio object is moving away from the user. This may give an indication that the size of the audio space is increasing, which intuitively indicates that perspective-mediated content is available.

The spatial audio effect added to the content may produce a different audio effect than the captured spatial audio content. That is, the notification does not attempt to reconstruct a real-world audio experience for the user, but rather provides a deviation from the provided audio content, such that the user is alerted to the fact that the availability of perspective-mediated content has changed. Thus, the notification provides an audio effect that is at least temporarily different from the audio scene corresponding to the user's position within the audio space.

In some examples, the notification may be added to the content by applying a room impulse response to the content. The applied room impulse response may be independent of the room in which the perspective-mediated content was captured and the room in which the content is to be rendered to the user. That is, the room impulse response is not added to provide a realistic effect, but rather an audio alert for the user.
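Applying a room impulse response is a convolution. The sketch below (not from the patent) assumes mono PCM in NumPy arrays, and the wet/dry mix is an assumed parameter chosen for audibility rather than realism, in line with the text above:

```python
import numpy as np
from scipy.signal import fftconvolve

def add_notification_rir(content, rir, wet=0.5):
    """Convolve the content with an arbitrary, alert-like RIR and mix it in."""
    wet_signal = fftconvolve(content, rir)[: len(content)]
    # Match the wet signal's peak to the dry content before mixing.
    peak = np.max(np.abs(wet_signal))
    if peak > 0.0:
        wet_signal = wet_signal / peak * np.max(np.abs(content))
    return (1.0 - wet) * content + wet * wet_signal
```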

When the user hears a notification that perspective-mediated content is available, they can then choose whether or not to access the perspective-mediated content. For example, the user may be able to make a user input to switch from the original content to the newly available perspective-mediated content.

In some examples, the notification added to the content may be added temporarily. For example, the notification may be added to the content for a predetermined period of time. In some examples, the effects included within the notification may be adjusted such that they disappear within a predetermined period of time. The predetermined period of time may be a few seconds or any other suitable length of time. In other examples, the notification may be added permanently. That is, the notification may be added until removed by a user input. The user input may be a selection of whether or not to use the perspective-mediated content.
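The temporary variant might be implemented with a decaying gain envelope; a sketch follows, where the few-seconds duration follows the text above and the linear shape is an assumption:

```python
import numpy as np

def notification_envelope(num_samples, sr=48000, duration_s=3.0):
    """Gain envelope that makes the notification effect disappear after duration_s."""
    n_active = min(num_samples, int(sr * duration_s))
    env = np.zeros(num_samples)
    env[:n_active] = np.linspace(1.0, 0.0, n_active)  # fade the effect out
    return env

# The notification (wet) signal would be multiplied by this envelope before
# being mixed with the original content.
```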

Fig. 3A illustrates an example system 29 that can be used to implement examples of the present disclosure. The example system 29 includes a plurality of capture devices 35A, 35B, 35C, and 35D, the apparatus 1, and a rendering device 40.

The apparatus 1 may comprise control circuitry 3 as described above, which may be arranged to implement a method according to an example of the present disclosure. For example, the apparatus 1 may be arranged to implement the method or at least part of the method shown in fig. 2. In some examples, the apparatus 1 may be provided within the capture devices 35A, 35B, 35C, and 35D. In some examples, the apparatus 1 may be provided within the rendering device 40. In some examples, the apparatus 1 may be provided by one or more devices within a communications network, such as one or more remote servers or one or more remote processing devices.

In the example of fig. 3A, the capture devices 35A, 35B, 35C and 35D, the apparatus 1 and the rendering device 40 may be arranged to communicate via a communication network, which may be a wireless communication network. The capture devices 35A, 35B, 35C and 35D, the apparatus 1 and the rendering device 40 may be located at positions remote from each other. In the example of fig. 3A, the capture devices 35A, 35B, 35C and 35D, the apparatus 1 and the rendering device 40 are shown as different entities. As mentioned above, in other examples, the apparatus 1 may be provided within one or more of the capture devices 35A, 35B, 35C and 35D or within the rendering device 40.

The capture devices 35A, 35B, 35C, and 35D may comprise any device that may be arranged to capture audio content and/or visual content. The capture devices 35A, 35B, 35C, and 35D may include one or more microphones for capturing audio content, one or more cameras for capturing visual content, or any other suitable component. In the example of fig. 3A, the capture devices 35A, 35B, 35C, and 35D include a plurality of communication devices, such as cellular telephones. Other types of capture devices 35A, 35B, 35C, and 35D may be used in other examples of the disclosure.

In the example of FIG. 3A, each of the capture devices 35A, 35B, 35C, and 35D is operated by a different user 33A, 33B, 33C, and 33D. Users 33A, 33B, 33C, and 33D are located at different positions and may capture the same audio object 37A, 37B from different perspectives.

In the example system 29 of FIG. 3A, multiple users 33A, 33B, 33C, and 33D are capturing an audio space 31 using capture devices 35A, 35B, 35C, and 35D. The audio space 31 comprises two audio objects 37A and 37B. The first audio object 37A comprises a singer and the second audio object 37B comprises a dancer. One or both of audio objects 37A and 37B may be moved within audio space 31 while audio content is being captured. Users 33A, 33B, 33C, and 33D and capture devices 35A, 35B, 35C, and 35D are spatially distributed around audio space 31 to enable perspective mediated content to be generated.

In the example system of FIG. 3A, four capture devices 35A, 35B, 35C, and 35D are used to capture audio content. It is to be appreciated that in other examples of the disclosure, any number of capture devices 35A, 35B, 35C, and 35D may be used to capture content. The capture devices 35A, 35B, 35C, and 35D may capture audio content independently of each other. No direct connection is required between any of the capture devices 35A, 35B, 35C and 35D.

Each of the capture devices 35A, 35B, 35C, and 35D may provide the content being captured to the apparatus 1. The apparatus 1 may be as shown in fig. 1. The apparatus 1 may be provided within one of the capture devices 35A, 35B, 35C and 35D, within a remote server provided within a communication network, within the rendering device 40, or within any other suitable type of device.

Once the apparatus 1 obtains the content, the apparatus 1 may perform the method illustrated in fig. 3A. In block 30, the apparatus 1 processes the captured content. The processing of the captured content may include: synchronizing content captured by the different capture devices 35A, 35B, 35C, and 35D and/or any other suitable type of processing.

In some examples, the processing of the captured content as performed in block 30 may include determining a location of one or more of the capture devices 35A, 35B, 35C, and 35D. This may enable the extent of the audio space 31 covered by the capture devices 35A, 35B, 35C and 35D to be determined.
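One common way to synchronise recordings from independent capture devices in the processing at block 30 (an assumption for illustration; the patent does not specify the method) is to estimate the sample offset between two recordings by cross-correlation:

```python
import numpy as np

def estimate_offset_samples(reference, other):
    """Lag (in samples) of `other` relative to `reference`, via cross-correlation."""
    corr = np.correlate(other, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# A positive result indicates that `other` lags `reference` by that many
# samples and should be shifted accordingly before the streams are combined.
```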

Once the captured content has been processed, the apparatus 1 creates perspective-mediated content at block 32, and the apparatus 1 creates non-perspective-mediated content at block 34. In the example of fig. 3A, the creation of perspective-mediated content and non-perspective-mediated content has been shown as separate blocks. It is to be appreciated that in other examples, they may be provided as a single block.

Perspective-mediated content may be created if there are a sufficient number of spatially distributed capture devices 35A, 35B, 35C, and 35D recording the audio space 31 to support reconstruction of a three-dimensional space. Depending on the content that has been captured by the capture devices 35A, 35B, 35C, and 35D, different types of perspective-mediated content may be created.

In some examples, the perspective-mediated content may include a space in which the user has three degrees of freedom. In such an example, the audio scene rendered by the rendering device 40 may depend on the angular orientation of the user's head. If the user rotates or changes the angular position of their head, this will result in a different audio scene being rendered for the user. The user may be able to rotate their head around three different perpendicular axes to enable different audio scenes to be rendered.

The angular position of the user's head may be detected using one or more accelerometers, one or more microelectromechanical devices, one or more gyroscopes, or any other suitable means. The means for detecting the angular position of the user's head may be located within the rendering device 40.
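For three degrees of freedom, rendering might compensate head yaw so that audio objects stay fixed in the world as the head turns. The sketch below assumes object directions stored as azimuth angles in degrees (an illustrative representation, not from the patent):

```python
def rendered_azimuths(object_azimuths, head_yaw_deg):
    """Azimuth of each object relative to the current head orientation, wrapped to +/-180 degrees."""
    return [((az - head_yaw_deg + 180.0) % 360.0) - 180.0
            for az in object_azimuths]

# Example: an object straight ahead appears 30 degrees to the left after the
# user turns their head 30 degrees to the right.
print(rendered_azimuths([0.0], 30.0))  # [-30.0]
```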

In some examples, the perspective-mediated content may include a space with six degrees of freedom for the user. In such an example, the audio scene rendered by the rendering device 40 may depend on the angular orientation of the user's head as described above. The audio scene rendered by the rendering device 40 may also depend on the location of the user. If the user changes their position by moving along any of the three perpendicular axes, this will result in a different audio scene being rendered for the user. The user may be able to move along three different perpendicular axes to enable different audio scenes to be rendered.

In some examples, the perspective-mediated content may include a space with three degrees of freedom + for the user. In such an example, as with perspective-mediated content having three degrees of freedom, the audio scene rendered by the rendering device 40 may depend on the angular orientation of the user's head. In the case where the user has three degrees of freedom +, the audio scene rendered by the rendering device 40 may also depend on the position of the user, but to a limited extent compared to content with six degrees of freedom. This may allow small movements of the user to cause a change in the audio scene; for example, it may allow a seated user to change their position in the seat and cause a change in the audio scene.

The location of the user may be detected using a location sensor such as a GPS (global positioning system) sensor, a HAIP (high accuracy indoor location) sensor, or any other suitable type of sensor. The means for detecting the position of the user may be located within the rendering device 40.

In some examples, the size of the audio space in which perspective-mediated content may be provided may vary. For example, if more capture devices 35A, 35B, 35C, and 35D are used, this may enable a larger audio space 31 to be captured. This may increase the volume available to a user with six degrees of freedom; it may increase the distance that the user may move along the three axes while still enabling different audio scenes to be rendered. The type of perspective-mediated content may also change, from content where the user has three degrees of freedom + to content where the user has six degrees of freedom.

The type of perspective-mediated content available may depend on the number of capture devices 35A, 35B, 35C, and 35D used to capture the audio space 31 and the spatial distribution of the capture devices 35A, 35B, 35C, and 35D.
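An illustrative heuristic for this dependence follows; the thresholds and the 2D device positions are assumptions for the sketch, not taken from the patent:

```python
import math

def available_content_type(device_positions, min_spread_m=2.0):
    """Classify the achievable content type from capture device count and spatial spread."""
    n = len(device_positions)
    if n < 2:
        return "non-perspective-mediated"
    # Spread: maximum distance of any device from the centroid of all devices.
    cx = sum(x for x, _ in device_positions) / n
    cy = sum(y for _, y in device_positions) / n
    spread = max(math.hypot(x - cx, y - cy) for x, y in device_positions)
    if n >= 4 and spread > min_spread_m:
        return "six degrees of freedom"
    return "three degrees of freedom"

print(available_content_type([(0, 0), (5, 0), (5, 5), (0, 5)]))  # six degrees of freedom
```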

The non-perspective-mediated content may include content for which the rendered audio scene is independent of the location of the user 38 of the rendering device 40. The non-perspective-mediated content may include content captured by a single capture device 35. The non-perspective-mediated content may always be available regardless of the number and respective locations of the capture devices 35A, 35B, 35C, and 35D used to capture the audio space 31. The non-perspective-mediated content may include non-volumetric content.

If a new type of perspective-mediated content becomes available, at block 36, a notification is added to the content currently being provided to the rendering device 40. The content currently provided to the rendering device 40 may include non-perspective-mediated content or a first type of perspective-mediated content.

The notification provides an indication that a new type of perspective-mediated content is available. The added notification may indicate which new type of perspective-mediated content has become available. For example, it may indicate whether the content supports three degrees of freedom, three degrees of freedom +, six degrees of freedom, or any other type.

The added notification includes a spatial audio effect. The added spatial audio effect is not intended to reconstruct the captured audio space 31 and therefore does not need to provide a realistic representation of the audio space 31. Instead, the notification may include adding reverberation or other sound effects to the audio content, which may create the perception that the audio space 31 has changed. For example, adding reverberation to one or more audio objects may create the perception that the audio objects have moved further away.

Once the notification is added to the content, the content with the notification is provided to the rendering device 40. The rendering device 40 then renders the content and notifications so that they may be perceived by the user 38 of the rendering device 40.

Fig. 3B illustrates another example system 29 that can be used to implement examples of the present disclosure. The example system 29 of fig. 3B also includes a plurality of capture devices 35A, 35B, 35C, and 35D, the apparatus 1, and the rendering device 40, which may be similar to the capture devices 35A, 35B, 35C, and 35D, the apparatus 1, and the rendering device 40 shown in fig. 3A. In the example of fig. 3B, system 29 also includes server 44.

The server 44 may comprise control circuitry 3 as described above, which may be arranged to implement methods and portions of methods according to examples of the present disclosure. For example, server 44 may be arranged to implement at least a portion of the method or methods illustrated in FIG. 2. The server 44 may be located remotely from the capturing devices 35A, 35B, 35C and 35D, the apparatus 1 and the rendering device 40. The server 44 may be arranged to communicate with the capturing devices 35A, 35B, 35C and 35D, the apparatus 1 and the rendering device 40 via a wireless communication network or via any other suitable means.

In some examples, the server 44 may be arranged to store content that may be perspective-mediated content. Perspective-mediated content may be provided from the server 44 to the apparatus 1 and the rendering device 40 to enable the perspective-mediated content to be rendered to the user 38.

In the example of FIG. 3B, each of the capture devices 35A, 35B, 35C, and 35D is operated by a different user 33A, 33B, 33C, and 33D. Users 33A, 33B, 33C, and 33D are located at different positions and may capture the same audio object 37A, 37B from different perspectives.

In the example system 29 of FIG. 3B, multiple users 33A, 33B, 33C, and 33D are capturing an audio space 31 using capture devices 35A, 35B, 35C, and 35D. The audio space 31 comprises two audio objects 37A and 37B. The first audio object 37A comprises a singer and the second audio object 37B comprises a dancer. One or both of audio objects 37A and 37B may be moved within audio space 31 while audio content is being captured. Users 33A, 33B, 33C, and 33D and capture devices 35A, 35B, 35C, and 35D are spatially distributed around audio space 31 to enable perspective-mediated content to be generated.

In the example system of FIG. 3B, four capture devices 35A, 35B, 35C, and 35D are used to capture audio content. It is to be appreciated that in other examples of the disclosure, any number of capture devices 35A, 35B, 35C, and 35D may be used to capture content. The capture devices 35A, 35B, 35C, and 35D may capture audio content independently of each other. No direct connection is required between any of the capture devices 35A, 35B, 35C and 35D.

Each of the capture devices 35A, 35B, 35C and 35D may provide the content being captured to the apparatus 1. The apparatus 1 may be as shown in fig. 1. The apparatus 1 may be provided within one of the capture devices 35A, 35B, 35C and 35D, within a remote server 44 provided within a communications network, within the rendering device 40, or within any other suitable type of device.

Once the apparatus 1 obtains the content, the apparatus 1 may perform the method illustrated in fig. 3B. At block 45, the apparatus 1 processes the captured content. The processing of the captured content may include: synchronizing content captured by the different capture devices 35A, 35B, 35C, and 35D and/or any other suitable type of processing.

Once the captured content has been processed, the apparatus 1 determines the type of content available at block 47. At block 47, the apparatus 1 may determine whether the available content is non-perspective-mediated content or perspective-mediated content. In some examples, the apparatus 1 may determine the type of perspective-mediated content available. For example, the apparatus 1 may determine the degrees of freedom available to the user when rendering the perspective-mediated content.

Determining the type of content available may include determining the type of content that has been captured by the capture devices 35A, 35B, 35C, and 35D and/or determining the type of content available on the server 44. For example, the content captured by the capture devices 35A, 35B, 35C, and 35D may be non-perspective mediated content, however, there may be perspective-mediated content that is related to the same audio space 31 stored on the server 44. In such an example, server 44 may add metadata to the perspective-mediated content stored therein. The metadata may indicate a type of perspective-mediated content. The server 44 may provide the content and the metadata to the apparatus 1. The apparatus 1 may use the metadata to determine the type of perspective-mediated content available.

If a new type of perspective-mediated content becomes available, at block 49, a notification is added to the content currently being provided to the rendering device 40. The content currently provided to the rendering device 40 may include non-perspective-mediated content or a first type of perspective-mediated content.

The notification provides an indication that a new type of perspective-mediated content is available. The added notification may indicate which new type of perspective-mediated content has become available. For example, it may indicate whether the content supports three degrees of freedom, three degrees of freedom +, six degrees of freedom, or any other type of content.

The added notification includes a spatial audio effect similar to the effect provided in the system 29 of fig. 3A. Other types of audio effects may be used in other examples of the disclosure.

Once the notification is added to the content, the content with the notification is provided to the rendering device 40. The rendering device 40 then renders the content and notifications so that they may be perceived by the user 38 of the rendering device 40.

In the example system of fig. 3A and 3B, the rendering device 40 comprises a set of headphones arranged to provide an audio output to the user 38. It is to be appreciated that in other examples, other types of rendering devices 40 may be used. For example, the rendering device 40 may comprise a communication device, such as a mobile phone, a headset comprising a display, or any other suitable type of rendering device 40.

Once the user 38 of the rendering device 40 has received a notification that a new type of perspective-mediated content is available, they may ignore the notification and continue to use the original content, or may make a user input to switch to the new type of perspective-mediated content.

In some examples, different types of perspective-mediated content may be used. For example, a first type of perspective-mediated content may be a stereo audio output that may be provided to a set of headphones. This may give the end user three degrees of freedom, because they may rotate their head to different orientations, and the different orientations of the user's head provide them with different audio scenes.

In some examples, the perspective-mediated content may support six degrees of freedom for the user. This may enable the user not only to rotate their head about three different axes, but also to move their position in space. That is, this may enable the user to move sideways, up and down, and/or back and forth in order to change the sound scene provided to them. The notification added to the non-perspective-mediated content may provide an indication of the type of perspective-mediated content that has become available. In some examples, the amount of spatial audio effect added to the non-perspective-mediated content may provide an indication of the type of perspective-mediated content that has become available. For example, if the perspective-mediated content supports six degrees of freedom, more spatial audio effect may be added than in the case where the perspective-mediated content supports three degrees of freedom. This may enable a user not only to determine that perspective-mediated content is available, but also to distinguish between the different types of perspective-mediated content that have become available. Additionally, if the rendering device is currently rendering perspective-mediated content of a first type, a notification may be added to provide an indication that perspective-mediated content of a second, different type has become available. For example, if the rendering device is currently rendering content that enables three degrees of freedom, a notification may be added when perspective-mediated content that enables six degrees of freedom becomes available.
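One way to realise "more spatial audio effect for more degrees of freedom" is a simple mapping from the newly available content type to the strength of the added effect; the specific values below are assumptions for illustration:

```python
# Hypothetical mapping from content type to a reverberation wet/dry mix.
NOTIFICATION_WET_MIX = {
    "3dof": 0.2,    # modest effect: head rotation becomes available
    "3dof+": 0.35,  # stronger effect: limited translation also available
    "6dof": 0.5,    # strongest effect: full translation available
}

def notification_strength(new_content_type):
    """Amount of spatial audio effect used to signal the new content type."""
    return NOTIFICATION_WET_MIX.get(new_content_type, 0.0)
```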

In the examples of figs. 3A and 3B, the created perspective-mediated content includes audio content. Specifically, the perspective-mediated content includes an audio space 31. It is to be appreciated that other types of content may be used in other examples of the disclosure. For example, in some cases, the content may include visual content, and some examples of content may include both audio content and visual content. In some examples, the audio content may be perspective-mediated content while the visual content is non-perspective-mediated content. The content may include real-time content that is rendered simultaneously with, or with very little delay after, capture. In other examples, the content may include stored content that may be stored in the rendering device 40 or at a remote device. The content may include a plurality of different content files, which may correspond to different virtual spaces and/or different points in time. Different types of perspective-mediated content may be available for different portions of the content.

Figs. 4A-4C illustrate an example system 29 in which different types of perspective-mediated content are available. Each example system 29 comprises one or more capture devices 35 arranged to capture the audio space 31, the apparatus 1 and at least one rendering device 40. The system 29 shown in figs. 4A-4C may represent the same system at different points in time as different capture devices 35 are used.

The audio space 31 captured in figs. 4A to 4C is the same as the audio space 31 shown in figs. 3A and 3B. The example audio space 31 includes two audio objects: a singer 37A and a dancer 37B. It is to be appreciated that other audio spaces 31 and other audio objects 37 may be used in other examples of the disclosure.

In the example system of FIG. 4A, only one capture device 35A is used to capture the audio space 31. The capture device 35A may be operated by the first user 33A. Audio content captured by the single capture device 35A is provided to the apparatus 1 to enable the apparatus 1 to process 30 the audio content.

In the example system 29 of fig. 4A, only a single viewpoint is used to capture audio content, and thus perspective-mediated content is not available. In this example, the apparatus 1 creates some non-perspective mediated content, but does not create any perspective mediated content. Accordingly, the content provided from the apparatus 1 to the rendering device 40 includes non-view-angle-mediated content. The non-perspective mediated content may be mono audio content or stereo audio content or any other suitable type of content.

The rendering device 40 includes a set of headphones that enable audio content to be provided to the user 38 of the rendering device. Other types of rendering devices 40 may be used in other examples of the present disclosure.

In the example system 29 of FIG. 4B, two capture devices 35A, 35B are used to capture the audio space 31. The capture devices 35A, 35B may be operated by two different users 33A, 33B. For example, the second user 33B may have joined the first user 33A to capture the audio space 31. This now provides two different positions from which to capture the audio space 31.

The audio content captured by the capture devices 35A, 35B is provided to the apparatus 1 to enable the apparatus to process 30 the audio content. Processing of the audio content may include synchronizing the two captured audio streams, determining the location of the capture devices 35A, 35B, or any other suitable process. The apparatus 1 may also create perspective-mediated content and non-perspective-mediated content using the two captured audio streams.

The apparatus 1 may perform any suitable processing to create the perspective-mediated content. For example, the process of providing perspective-mediated content may include adding a room impulse response, applying a head-related transfer function, or applying any other suitable spatial audio effect. The processing performed on the captured audio content to create the perspective-mediated content may be designed so that the audio content rendered by the rendering device 40 reconstructs, as closely as possible, the audio space 31 that has been captured by the capture devices 35A and 35B. That is, the captured content is processed to provide perspective-mediated content intended to provide realistic spatial audio effects.

When the perspective-mediated content becomes available, the apparatus 1 adds a notification to the content being provided to the rendering device 40. In the example of fig. 4B, the notification is added to non-perspective mediated content, which may correspond to content as recorded by the first capture device 35A.

In the example of fig. 4B, the perspective mediated content includes binaural content. Binaural content provides three degrees of freedom of movement to the user 38 of the rendering device 40. When binaural content is rendered, the orientation of the user's head will determine the audio scene rendered by the rendering device 40. By moving their head to different angular orientations, user 38 may thereby change the audio scene that is drawn to them.

In the example system 29 of FIG. 4C, five capture devices 35A, 35B, 35C, 35D, and 35E are used to capture the audio space 31. The capture devices 35A, 35B, 35C, 35D, and 35E may be operated by five different users 33A, 33B, 33C, 33D, and 33E. For example, three more users 33C, 33D, and 33E may have joined the first user 33A and the second user 33B to capture the audio space 31. This now provides five different positions from which the audio space 31 is captured.

Audio content captured from all five capture devices 35A, 35B, 35C, 35D, and 35E is provided to the apparatus 1 to enable the apparatus to process 30 the audio content. Processing of the audio content may include synchronizing the multiple captured audio streams, determining the locations of the capture devices 35A, 35B, 35C, 35D, and 35E, or any other suitable processing. The apparatus 1 may also create perspective-mediated content and non-perspective-mediated content using the plurality of captured audio streams. Perspective-mediated content may be created using a similar process as used in the example of fig. 4B, or any other suitable process.

In the example of fig. 4C, the increased number of capture devices 35A, 35B, 35C, 35D, and 35E may support different types of perspective-mediated content being created. For example, it may enable the distance between the audio objects 37A, 37B and the angular positions of the audio objects 37A, 37B to be taken into account. This may support the creation of perspective-mediated content with six degrees of freedom. In some examples, an increase in the number of capture devices 35A, 35B, 35C, 35D, and 35E may increase the size of the audio space 31 for which perspective-mediated content may be created.

When a new type of perspective-mediated content becomes available, the apparatus 1 adds a notification to the content being provided to the rendering device 40. In the example of fig. 4C, the notification may be added to the non-perspective-mediated content or the binaural content, depending on the type of content that the user 38 of the rendering device 40 has selected to consume.

The notification added to the content in the example of fig. 4C may be a different notification than the notification added in the example of fig. 4B. This may enable different notifications to be used to indicate that different types of perspective-mediated content are available. For example, more spatial audio effect may be added to the content in fig. 4C than would be added to the content in fig. 4B. The greater amount of spatial audio effect provides an indication that more degrees of freedom are available or that perspective-mediated content is now available for a larger audio space 31.

In the example systems of figs. 4A-4C, as more users 33A, 33B, 33C, 33D, and 33E and their capture devices 35A, 35B, 35C, 35D, and 35E become available to capture the audio space 31, different types of perspective-mediated content become available. It is to be appreciated that in other examples, other reasons may result in perspective-mediated content being available or unavailable. For example, in some cases, perspective-mediated content may be obtained by a single capture device 35. In such cases, the capture device 35 may not always be operated in a manner that enables perspective-mediated content to be created, so there may be times when perspective-mediated content is available and times when it is not. Examples of the present disclosure may be used to notify the user 38 of the rendering device 40 of a change in the availability of perspective-mediated content.

Figs. 5A and 5B illustrate an example in which perspective-mediated content is not available. Fig. 5A shows a real audio space 31 that has been captured by one or more capture devices, and fig. 5B shows how it is represented to the user 38 of a rendering device 40.

The real audio space 31 comprises a plurality of audio objects 37A, 37B, 37C and 37D. The audio objects 37A, 37B, 37C, and 37D are positioned at different angular positions and at different distances from the listening position of the user 38 of the rendering device 40. In the example of fig. 5A, a first audio object 37A is positioned at an angle θA and a distance dA, a second audio object 37B is positioned at an angle θB and a distance dB, a third audio object 37C is positioned at an angle θC and a distance dC, and a fourth audio object 37D is positioned at an angle θD and a distance dD.
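For illustration, such a captured scene could be represented by a simple data structure associating each audio object with its angle θ and distance d. The names and numeric values below are hypothetical placeholders; the disclosure does not specify concrete positions.

from dataclasses import dataclass

@dataclass
class AudioObject:
    label: str
    angle_rad: float   # angular position θ relative to the listening position
    distance_m: float  # distance d from the listening position

# Placeholder values for the four objects of fig. 5A.
scene = [
    AudioObject("37A", angle_rad=0.5, distance_m=2.0),
    AudioObject("37B", angle_rad=1.6, distance_m=3.5),
    AudioObject("37C", angle_rad=2.8, distance_m=1.5),
    AudioObject("37D", angle_rad=4.2, distance_m=2.5),
]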

In the example of figs. 5A and 5B, perspective-mediated content is not available. There may be many reasons why perspective-mediated content is not available. For example, the audio space 31 may have been captured by a single capture device 35, a capture device arranged to obtain spatial audio may not be functioning correctly, or any other suitable reason may apply.

Fig. 5B represents the audio content being rendered to the user 38 of the rendering device 40. This shows that the audio objects 37A, 37B, 37C and 37D are rendered without any differences in angle or distance, so that the same audio scene is provided to the user 38 regardless of the position of the user 38 or the angular orientation of their head.

Fig. 6A and 6B illustrate examples where perspective-mediated content becomes available. Fig. 6A shows a real audio space 31 that has been captured by one or more capture devices, and fig. 6B shows how it is represented to a user 38 of a rendering device 40.

The real audio space 31 comprises a plurality of audio objects 37A, 37B, 37C and 37D. The audio objects 37A, 37B, 37C, and 37D are positioned at different angular positions relative to the listening position of the user 38 of the rendering device 40. In the example of fig. 6A, a first audio object 37A is positioned at an angle θA and a distance dA, a second audio object 37B is positioned at an angle θB and a distance dB, a third audio object 37C is positioned at an angle θC and a distance dC, and a fourth audio object 37D is positioned at an angle θD and a distance dD. In the example of fig. 6A, all of the audio objects 37A, 37B, 37C, and 37D are located at equal distances from the listening position of the user 38. It is to be appreciated that in other examples the audio objects 37A, 37B, 37C, and 37D may be located at different distances from the listening position.

In the example of figs. 6A and 6B, the audio scene 31 is captured so that the apparatus 1 can determine the angle θ of each audio object 37A, 37B, 37C and 37D. This may enable the direction of arrival for each audio object 37A, 37B, 37C and 37D to be determined when the apparatus 1 creates perspective-mediated content. This may enable the creation of perspective-mediated content in which the angular position of each of the audio objects 37A, 37B, 37C, and 37D can be reconstructed.
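A minimal sketch of how such a direction of arrival might be estimated, assuming a capture device with two microphones a known distance apart and a far-field source; the function and all names are illustrative only, not the claimed method.

import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, in air at room temperature

def direction_of_arrival(left: np.ndarray, right: np.ndarray,
                         mic_spacing_m: float, sample_rate: int) -> float:
    """Estimate the arrival angle in radians (0 = directly ahead) from the
    inter-microphone time delay found by cross-correlation."""
    correlation = np.correlate(left, right, mode="full")
    lag_samples = int(np.argmax(correlation)) - (len(right) - 1)
    delay_s = lag_samples / sample_rate
    # Far-field model: delay = spacing * sin(angle) / speed of sound.
    sin_angle = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(sin_angle))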

Fig. 6B represents the audio content rendered to the user 38 of the rendering device 40. This shows that the audio objects 37A, 37B, 37C and 37D are rendered such that the user 38 may perceive a different angular position for each audio object 37A, 37B, 37C and 37D.

The user 38 may be able to rotate their head about three perpendicular axes x, y and z. The rendering device 40 may detect the angular position of the user's head about these three axes and use this information to control the audio scene rendered by the rendering device 40. Different audio scenes may be rendered for different angular orientations of the user's head.
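As a simple illustration of this three-degrees-of-freedom behaviour, assuming the head tracker reports a yaw angle, each object's rendered angle can be counter-rotated so that the object appears fixed in the scene while the head turns. The function below is a hypothetical sketch, not the claimed implementation.

import math

def rendered_angle(object_angle_rad: float, head_yaw_rad: float) -> float:
    """Angle at which an audio object should be rendered relative to the
    listener's head: counter-rotate by the tracked head yaw."""
    angle = object_angle_rad - head_yaw_rad
    # Wrap into (-pi, pi] for the renderer.
    return math.atan2(math.sin(angle), math.cos(angle))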

When the perspective-mediated content shown in fig. 6A and 6B becomes available, a notification may be added to the content provided to the rendering device 40 to indicate that the perspective-mediated content has become available.

Fig. 7A and 7B illustrate examples where new types of perspective-mediated content become available. Fig. 7A shows a real audio space 31 that has been captured by one or more capture devices, and fig. 7B shows how it is represented to a user 38 of a rendering device 40.

In the example of figs. 7A and 7B, the audio scene 31 is captured so that the apparatus 1 can determine the angle θ of each audio object 37A, 37B, 37C, and 37D and the distance between each audio object 37A, 37B, 37C, and 37D and the listening position of the user 38. This may enable both the direction of arrival for each audio object 37A, 37B, 37C and 37D and the distance between the user 38 and each audio object 37A, 37B, 37C and 37D to be determined when the apparatus 1 is creating perspective-mediated content. This may enable the creation of perspective-mediated content in which the angular position and the relative distance of each of the audio objects 37A, 37B, 37C, and 37D can be reconstructed.

Fig. 7B represents the audio content rendered to the user 38 of the rendering device 40. This shows that the audio objects 37A, 37B, 37C and 37D are rendered such that the user 38 may perceive a different angular position for each audio object 37A, 37B, 37C and 37D, and may also move within the virtual audio space 71.

The virtual audio space 71 is indicated by the grey area in fig. 7B. In the example of fig. 7B, the virtual audio space 71 includes an elliptical region. Other shapes for the virtual audio space 71 may be used in other examples of the disclosure.

The user 38 may be able to move within the virtual audio space 71 by moving along three perpendicular axes x, y and z. For example, the user 38 may move left and right, back and forth, or up and down, or in any combination of these directions. The rendering device 40 may detect the position of the user 38 within the virtual audio space 71 and may use this information to control the audio scene rendered by the rendering device 40. Different audio scenes may be rendered for different locations within the virtual audio space 71.
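A sketch of this six-degrees-of-freedom behaviour, under the assumptions of a two-dimensional layout and a simple 1/distance gain law: once the listener can translate, the rendered angle and level of each object are recomputed from the tracked listener position. All names are illustrative.

import math

def relative_angle_and_gain(object_xy: tuple[float, float],
                            listener_xy: tuple[float, float],
                            head_yaw_rad: float) -> tuple[float, float]:
    """Return the angle (radians, relative to the head) and a distance-based
    gain for one audio object, given the tracked listener position."""
    dx = object_xy[0] - listener_xy[0]
    dy = object_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - head_yaw_rad
    gain = 1.0 / max(distance, 0.1)   # clamp to avoid a singularity
    return math.atan2(math.sin(angle), math.cos(angle)), gain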

When the perspective-mediated content having six degrees of freedom shown in figs. 7A and 7B becomes available, a notification may be added to the content provided to the rendering device 40 to indicate that a new type of perspective-mediated content has become available.

Figs. 8A and 8B illustrate an example in which perspective-mediated content becomes available for a larger audio space 31. Fig. 8A shows a real audio space 31 that has been captured by one or more capture devices, and fig. 8B shows how it is represented to the user 38 of a rendering device 40.

In the example of figs. 8A and 8B, the audio scene 31 is captured so that the apparatus 1 can determine the angle θ of each audio object 37A, 37B, 37C, and 37D and the distance between each audio object 37A, 37B, 37C, and 37D and the listening position of the user 38. This may enable both the direction of arrival for each audio object 37A, 37B, 37C and 37D and the distance between the user 38 and each audio object 37A, 37B, 37C and 37D to be determined when the apparatus 1 is creating perspective-mediated content. This may enable the creation of perspective-mediated content in which the angular position and the relative distance of each of the audio objects 37A, 37B, 37C, and 37D can be reconstructed. The audio scene 31 in fig. 8A may be similar to the audio scene shown in fig. 7A. In the example of fig. 8A, however, the capture devices 35 capture audio content covering a larger audio space 31.

Fig. 8B represents the audio content rendered to the user 38 of the rendering device 40. This shows that the audio objects 37A, 37B, 37C, and 37D are rendered such that the user 38 may perceive a different angular position for each audio object 37A, 37B, 37C, and 37D, and may also move within the virtual audio space 81.

The virtual audio space 81 is indicated by the grey area in fig. 8B. In the example of fig. 8B, the virtual audio space 81 comprises an elliptical region similar to the virtual audio space 71 shown in fig. 7B. However, in the example of fig. 8B, the virtual audio space 81 covers a larger volume. This may enable the user 38 to move a greater distance while still enabling perspective-mediated content to be rendered.

When the perspective-mediated content having a larger virtual audio space 81 as shown in figs. 8A and 8B becomes available, a notification may be added to the content provided to the rendering device 40 to indicate that the volume in which perspective-mediated content is available has increased.

Fig. 9 illustrates another example system 29 in which different types of perspective-mediated content are available. The example system 29 of fig. 9 comprises a plurality of capture devices 35F, 35G, 35H, 35I, 35J, a server 44, and at least one rendering device 40. The apparatus 1 for adding a notification indicating the type of perspective-mediated content available may be provided within the rendering device 40. In other examples, the apparatus 1 may be provided within the server 44 or within any other suitable device within the system 29.

In the example system 29 of fig. 9, the capture devices 35F, 35G, 35H, 35I, 35J may comprise image capture devices. The image capture devices may be arranged to capture video images or any other suitable type of image. The image capture devices may be further arranged to capture audio corresponding to the captured images.

The system 29 of fig. 9 comprises a plurality of capture devices 35F, 35G, 35H, 35I, 35J. Different capture devices within the plurality of capture devices 35F, 35G, 35H, 35I, 35J are arranged to capture different types of perspective-mediated content. The first capture device 35F is arranged to capture perspective-mediated content having three degrees of freedom plus (3DoF+), the second capture device 35G is arranged to capture perspective-mediated content having three degrees of freedom (3DoF), the third capture device 35H is arranged to capture perspective-mediated content having three degrees of freedom, the fourth capture device 35I is arranged to capture perspective-mediated content having three degrees of freedom, and the fifth capture device 35J is arranged to capture perspective-mediated content having three degrees of freedom plus. Other numbers and arrangements of capture devices 35F, 35G, 35H, 35I, 35J may be used in other examples of the disclosure.

The content captured by the plurality of capture devices 35F, 35G, 35H, 35I, 35J is provided to the server 44. Once the server 44 receives content from the plurality of capture devices 35F, 35G, 35H, 35I, 35J, the server may perform the method illustrated in fig. 9. At block 90, the server 44 processes the content. The processing of the captured content may include synchronizing the content captured by the different capture devices 35F, 35G, 35H, 35I, and 35J, and/or any other suitable type of processing.

Once the captured content has been processed, the server creates a content file comprising the perspective-mediated content at block 93. In some examples, the server 44 may create a plurality of different content files, where the different content files comprise different types of perspective-mediated content. In some examples, the content file may include metadata indicating that the content is perspective-mediated content. The metadata may indicate the number of degrees of freedom that the user has within the perspective-mediated content; for example, it may indicate whether the user has three degrees of freedom or six degrees of freedom. In some examples, it may indicate the size of the volume in which the perspective-mediated content is available. For example, it may indicate the virtual space in which perspective-mediated content is available. In such examples, the metadata may be used to determine whether perspective-mediated content is available. In some examples, the metadata may indicate the period of time over which the perspective-mediated content has been captured.
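The disclosure does not fix a metadata format, but purely for illustration such metadata might take the following shape; every field name and value below is a hypothetical placeholder.

import json

metadata = {
    "perspective_mediated": True,
    "degrees_of_freedom": 6,          # three or six, as described above
    "available_volume": {             # virtual space where it can be rendered
        "shape": "ellipsoid",
        "semi_axes_m": [4.0, 3.0, 2.0],
    },
    "capture_period": {               # time span that has been captured
        "start": "2020-01-01T12:00:00Z",
        "end": "2020-01-01T12:05:00Z",
    },
}

def is_perspective_mediated(meta: dict) -> bool:
    """Use the metadata to determine whether perspective-mediated content
    is available in a content file."""
    return bool(meta.get("perspective_mediated", False))

print(json.dumps(metadata, indent=2))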

The content file may be created simultaneously with the capture of the content. This may enable real-time streaming of perspective-mediated content. In other examples, the content file may be created at a later point in time. This may enable perspective-mediated content to be stored for rendering at a later point in time.

At block 95, the server 44 receives an input selecting a content file. The input may be received in response to a selection made by the user 38 via the rendering device 40. The input may be a selection of a particular content file, a selection of content captured by a particular capture device 35, or any other suitable type of selection.

In the example of fig. 9, the user may select to render content captured by a particular capture device 35. For example, the user 38 may choose to switch between content captured by the first capture device 35F and content captured by the second capture device 35G.

In response to the input at block 95, the selected content is provided from the server to the rendering device 40 at block 97. At block 99, the apparatus 1 within the rendering device 40 determines the type of content available. If the type of perspective-mediated content available has changed, the apparatus 1 adds an audio notification indicating that the type of perspective-mediated content available has changed.

For example, in the example of fig. 9, when the user 38 switches between content captured by the first capture device 35F and content captured by the second capture device 35G, this changes the type of perspective-mediated content available from three degrees of freedom plus (3DoF+) to three degrees of freedom (3DoF). The apparatus 1 may use the metadata within the corresponding content file to detect the change. The audio notification added to the content may provide an indication that the available degrees of freedom have been reduced by switching to the new content file. In response to the audio notification, the user 38 may decide to continue rendering the content captured by the second capture device 35G, or may select a different content file.
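As an illustrative sketch of block 99, the apparatus could compare a degrees-of-freedom field in the metadata of the newly selected content file with that of the previous one, and trigger an audio notification on any change. The field values and ranking below are assumptions for illustration only.

from typing import Optional

DOF_RANK = {"3dof": 0, "3dof+": 1, "6dof": 2}   # hypothetical encoding

def dof_change(previous: str, new: str) -> Optional[str]:
    """Return 'reduced' or 'increased' when the available degrees of
    freedom change between content files, otherwise None."""
    if DOF_RANK[new] < DOF_RANK[previous]:
        return "reduced"     # e.g. switching from 3DoF+ content to 3DoF
    if DOF_RANK[new] > DOF_RANK[previous]:
        return "increased"
    return None

change = dof_change("3dof+", "3dof")
if change is not None:
    print(f"add audio notification: degrees of freedom {change}")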

Accordingly, examples of the present disclosure provide an efficient method of notifying the user 38 of the rendering device 40 that perspective-mediated content has become available. The notification may be provided audibly, and thus does not require any visual user interface. This means that, in examples where the user 38 is viewing visual content, the visual content will not be obscured by icons or other visual notifications that the user 38 might find annoying.

The notification added to the content may also provide an indication of the type of perspective-mediated content available and/or the size of the space in which the perspective-mediated content is available. This may provide additional information to the user and may help the user 38 of the rendering device 40 decide whether they wish to begin using the perspective-mediated content.

Adding the notification to the content provided to the rendering device also provides the advantage that no additional messages need to be provided between the apparatus 1 and the rendering device 40. This means that, as soon as the perspective-mediated content becomes available, a notification may be provided to the user 38 that the perspective-mediated content is available. This reduces any delay in the notification provided to the user 38.

This definition of "circuitry" applies to all uses of this term in this application, including in any claims. As yet another example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. For example, and if applicable to the particular claim element, the term "circuitry" would also cover a baseband integrated circuit or applications processor integrated circuit for a mobile phone or similar integrated circuit in a server, a cellular network device, or another network device.

The term "comprising" is used herein in an inclusive rather than exclusive sense. That is, any reference to X including Y means that X may include only one Y or may include more than one Y. If the exclusive meaning of "including" is to be used, it will be apparent from the context that "only one … is included or" consisting of "is used.

In this brief description, reference has been made to various examples. A description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the terms "example", "for example" or "may" in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some or all of the other examples. Thus "example", "for example" or "may" refers to a particular instance in a class of examples. A property of an instance can be a property of only that instance, or a property of the class, or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example can, where possible, be used in that other example, but does not necessarily have to be used in that other example.

Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.

The features described in the foregoing description may be used in combinations other than the combinations explicitly described.

Although functions have been described with reference to certain features, those functions may be performed by other features, whether described or not.

Although features have been described with reference to certain embodiments, those features may also be present in other embodiments, whether described or not.

Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.
