Generating targets for target implementers in a synthetic reality set

Document No.: 1220306  Publication date: 2020-09-04

Description: This technology, "Generating targets for target implementers in a synthetic reality set", was created by I·M·里克特, A·S·塞尼, and O·索雷斯 on 2019-01-18. Its main content is as follows. In some implementations, a method includes instantiating a goal implementer into a synthetic reality set. In some implementations, the goal implementer is characterized by a set of predefined goals and a set of visual rendering attributes. In some implementations, the method includes obtaining contextual information characterizing the synthetic reality set. In some implementations, the method includes generating a goal for the goal implementer based on a function of the set of predefined goals and a set of predefined actions for the goal implementer. In some implementations, the method includes setting an environmental condition for the synthetic reality set based on the goal for the goal implementer. In some implementations, the method includes establishing an initial condition and a set of current actions for the goal implementer based on the goal for the goal implementer. In some implementations, the method includes modifying the goal implementer based on the goal.

1. A method, comprising:

at a device comprising a non-transitory memory and one or more processors coupled with the non-transitory memory:

instantiating a goal implementer into a synthetic reality set, wherein the goal implementer is characterized by a set of predefined goals and a set of visual rendering attributes;

obtaining context information characterizing the synthetic reality set, the context information including data corresponding to a physical set;

generating a goal for the goal implementer based on a function of the set of predefined goals, the context information, and a set of predefined actions for the goal implementer;

setting an environmental condition for the synthetic reality set based on the goal for the goal implementer;

establishing an initial condition and a set of current actions for the goal implementer based on the goal for the goal implementer; and

modifying the goal implementer based on the goal.

2. The method of claim 1, wherein generating the goal comprises generating the goal using a neural network.

3. The method of claim 2, wherein the neural network generates the goal based on a set of neural network parameters.

4. The method of claim 3, further comprising:

adjusting the set of neural network parameters based on the goal.

5. The method of any of claims 3 to 4, further comprising:

determining the set of neural network parameters based on a reward function that assigns positive rewards to desired goals and negative rewards to undesired goals.

6. The method of any of claims 2 to 5, further comprising:

configuring the neural network based on reinforcement learning.

7. The method of any of claims 2 to 6, further comprising:

training the neural network based on one or more of a video, a novel, a book, a comic, and a video game associated with the goal implementer.

8. The method of any of claims 1-7, wherein modifying the goal implementer comprises:

providing the goal to a goal implementer engine that generates actions that satisfy the goal.

9. The method of any of claims 1 to 8, further comprising:

obtaining the set of predefined goals from source material, the source material including one or more of movies, video games, comics, and novels.

10. The method of claim 9, wherein obtaining the set of predefined goals comprises: scraping the source material to extract the set of predefined goals.

11. The method of any of claims 9-10, wherein obtaining the set of predefined goals comprises:

determining the set of predefined goals based on a type of the instantiated goal implementer.

12. The method of any of claims 9-11, wherein obtaining the set of predefined goals comprises:

determining the set of predefined goals based on a user-specified configuration of the goal implementer.

13. The method of any of claims 9-12, wherein obtaining the set of predefined goals comprises:

determining the set of predefined goals based on a limit specified by an entity that owns the goal implementer.

14. The method of any of claims 1 to 13, further comprising:

capturing an image; and

obtaining the set of visual rendering attributes from the image.

15. The method of any of claims 1-14, wherein generating the goal comprises: receiving user input indicative of the set of predefined actions.

16. The method of any of claims 1-15, wherein generating the goal comprises: receiving the set of predefined actions from a goal implementer engine that generates actions for the goal implementer.

17. The method of any of claims 1-16, wherein the context information indicates whether other goal implementers have been instantiated within the synthetic reality set.

18. The method of any of claims 1-17, wherein generating the goal comprises:

generating a first goal in response to the context information indicating that a second goal implementer has been instantiated within the synthetic reality set; and

generating a second goal, different from the first goal, in response to the context information indicating that a third goal implementer has been instantiated within the synthetic reality set.

19. The method of any of claims 1-18, wherein the goal implementer comprises a representation of a character, and the context information indicates whether one or more representations of other characters and representations of equipment have been instantiated within the synthetic reality set.

20. The method of any of claims 1-19, wherein the context information comprises user-specified context information.

21. The method of any of claims 1-20, wherein the context information indicates one or more of:

a topography of the synthetic reality set; and

environmental conditions within the synthetic reality set, the environmental conditions including one or more of temperature, humidity, pressure, visibility, ambient light level, and ambient sound level.

22. The method of any of claims 1-21, wherein the context information comprises a grid map of a physical set, the grid map indicating locations and dimensions of real objects in the physical set.

23. The method of any of claims 1-22, wherein generating the goal comprises: selecting a first goal from the set of predefined goals that can be achieved given the set of predefined actions; and

forgoing selection of a second goal that cannot be achieved given the set of predefined actions.

24. The method of any of claims 1-23, wherein the synthetic reality set comprises a virtual reality set.

25. The method of any of claims 1-24, wherein the synthetic reality set comprises an augmented reality set.

26. The method of any of claims 1-25, wherein the goal implementer comprises a representation of a character from one or more of a movie, a video game, a comic, and a novel.

27. The method of any of claims 1-26, wherein the goal implementer comprises a representation of equipment from one or more of a movie, a video game, a comic, and a novel.

28. The method of any of claims 1-27, wherein the initial condition indicates placement of the goal implementer within the synthetic reality set.

29. An apparatus, comprising:

one or more processors;

a non-transitory memory;

one or more displays; and

one or more programs stored in the non-transitory memory that, when executed by the one or more processors, cause the apparatus to perform any of the methods of claims 1-28.

30. A non-transitory memory storing one or more programs that, when executed by one or more processors of a device with a display, cause the device to perform any of the methods of claims 1-28.

31. An apparatus, comprising:

one or more processors;

a non-transitory memory;

a display; and

means for causing the apparatus to perform any one of the methods of claims 1-28.

Technical Field

The present disclosure generally relates to generating goals for goal implementers in a synthetic reality setting.

Background

Some devices are capable of generating and presenting a synthetic reality set. Some synthetic reality sets include virtual sets that are synthesized replacements of physical sets. Some synthetic reality sets include augmented sets that are modified versions of physical sets. Some devices that present a synthetic reality set include mobile communication devices such as smartphones, Head Mounted Displays (HMDs), eyeglasses, Heads Up Displays (HUDs), and optical projection systems. Most previously available devices that present a synthetic reality set are ineffective at presenting representations of certain objects. For example, some previously available devices that present a synthetic reality set are unsuitable for presenting representations of objects that are associated with actions.

Disclosure of Invention

Various implementations disclosed herein include apparatuses, systems, and methods for generating content for a synthetic reality set. In various implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, a method includes instantiating a goal implementer into a synthetic reality set. In some implementations, the goal implementer is characterized by a set of predefined goals and a set of visual rendering attributes. In some implementations, the method includes obtaining contextual information characterizing the synthetic reality set. In some implementations, the method includes generating a goal for the goal implementer based on a function of the set of predefined goals, the contextual information, and a set of predefined actions for the goal implementer. In some implementations, the method includes setting an environmental condition for the synthetic reality set based on the goal for the goal implementer. In some implementations, the method includes establishing an initial condition and a set of current actions for the goal implementer based on the goal for the goal implementer. In some implementations, the method includes modifying the goal implementer based on the goal.

According to some implementations, an apparatus includes one or more processors, non-transitory memory, and one or more programs. In some implementations, the one or more programs are stored in a non-transitory memory and executed by the one or more processors. In some implementations, the one or more programs include instructions for performing or causing performance of any of the methods described herein. According to some implementations, a non-transitory computer-readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. According to some implementations, an apparatus includes one or more processors, non-transitory memory, and means for performing or causing performance of any of the methods described herein.

Drawings

So that the present disclosure can be understood by those of ordinary skill in the art, a more particular description may be had by reference to certain illustrative implementations, some of which are illustrated in the accompanying drawings.

Fig. 1A and 1B are illustrations of an exemplary operating environment according to some implementations.

Fig. 2 is a block diagram of an exemplary system according to some implementations.

Fig. 3A is a block diagram of an exemplary emergent content engine according to some implementations.

Fig. 3B is a block diagram of an example neural network, according to some implementations.

Fig. 4A-4E are flow diagram representations of a method of generating content for a synthetic reality set, according to some implementations.

Fig. 5 is a block diagram of a server system enabled with various components of an emergent content engine according to some implementations.

Fig. 6 is an illustration of a captured character according to some implementations.

In accordance with common practice, the various features shown in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. Additionally, some of the figures may not depict all of the components of a given system, method, or apparatus. Finally, throughout the specification and drawings, like reference numerals may be used to refer to like features.

Detailed Description

Numerous details are described in order to provide a thorough understanding of example implementations shown in the drawings. The drawings, however, illustrate only some example aspects of the disclosure and therefore should not be considered limiting. It will be understood by those of ordinary skill in the art that other effective aspects and/or variations do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in detail so as not to obscure more pertinent aspects of the example implementations described herein.

A physical set refers to a world that can be perceived and/or interacted with by an individual without the aid of an electronic system. A physical set (e.g., a physical forest) includes physical elements (e.g., physical trees, physical structures, and physical animals). The individual may directly interact with and/or perceive the physical set, such as through touch, sight, smell, hearing, and taste.

In contrast, a Synthetic Reality (SR) set refers to a set that is fully or partially computer-created and that an individual is able to perceive and/or interact with via an electronic system. In SR, a subset of the individual's movements is monitored, and in response, one or more properties of one or more virtual objects in the SR set are altered in a manner that complies with one or more laws of physics. For example, an SR system may detect that an individual is walking a few steps forward and, in response, adjust the graphics and audio presented to the individual in a manner similar to how such scenery and sounds would change in a physical set. Modifications of one or more properties of one or more virtual objects in the SR set may also be made in response to representations of movement (e.g., audio instructions).

The individual may interact with and/or perceive the SR object using any of his senses, including touch, smell, sight, taste, and sound. For example, an individual may interact with and/or perceive an auditory object that creates a multi-dimensional (e.g., three-dimensional) or spatial auditory scene and/or implements auditory transparency. A multi-dimensional or spatial auditory scene provides an individual with the perception of discrete auditory sources in a multi-dimensional space. With or without computer-created audio, auditory transparency selectively combines sound from a physical set. In some SR scenarios, an individual may interact with and/or perceive only auditory objects.

One example of an SR is Virtual Reality (VR). VR scenery refers to a simulated scenery designed to include only computer-created sensory inputs for at least one sensation. A VR scene includes a plurality of virtual objects that an individual may interact with and/or perceive. The individual may interact with and/or perceive the virtual object in the VR scene by simulating a subset of the individual's actions within the computer-created scene and/or by simulating the individual or its presence within the computer-created scene.

Another example of an SR is Mixed Reality (MR). An MR set refers to a simulated set designed to integrate computer-created sensory inputs (e.g., virtual objects) with sensory inputs from a physical set or a representation thereof. On the reality spectrum, a mixed reality set lies between a VR set at one end and a fully physical set at the other end, but does not include either endpoint.

In some MR scenarios, the computer-created sensory inputs may adapt to changes in sensory inputs from the physical scenario. Additionally, some electronic systems for rendering MR scenery may monitor the orientation and/or position relative to the physical scenery to enable virtual objects to interact with real objects (i.e., physical elements from the physical scenery or representations thereof). For example, the system may monitor motion such that the virtual plant appears to be stationary relative to the physical building.

One example of mixed reality is Augmented Reality (AR). An AR set refers to a simulated set in which at least one virtual object is superimposed over a physical set or representation thereof. For example, the electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical set, which are representations of the physical set. The system combines the image or video with the virtual object and displays the combination on the opaque display. The individual using the system indirectly views the physical set via an image or video of the physical set and observes a virtual object superimposed over the physical set. When the system captures images of a physical set using one or more image sensors, and uses those images to render an AR set on an opaque display, the displayed images are referred to as video passthrough. Alternatively, the electronic system for displaying the AR setting may have a transparent or translucent display through which the individual may directly view the physical setting. The system may display the virtual object on a transparent or translucent display such that the individual uses the system to view the virtual object superimposed over the physical set. As another example, the system may include a projection system that projects the virtual object into the physical set. The virtual object may be projected, for example, on a physical surface or as a hologram, such that an individual uses the system to view the virtual object superimposed over a physical set.

An augmented reality set may also refer to a simulated set in which a representation of a physical set is altered by computer-created sensory information. For example, a portion of the representation of the physical set may be graphically altered (e.g., enlarged) such that the altered portion may still be representative of, but not a faithfully reproduced version of, the originally captured images. As another example, in providing video pass-through, the system may alter at least one of the sensor images to impose a particular viewpoint that is different from the viewpoint captured by the one or more image sensors. As another example, the representation of the physical set may be altered by graphically obscuring or excluding portions thereof.

Another example of mixed reality is Augmented Virtual (AV). An AV set refers to a simulated set in which a computer-created set or a virtual set incorporates at least one sensory input from a physical set. The one or more sensory inputs from the physical set may be representations of at least one feature of the physical set. For example, a virtual object may take on a color of a physical element captured by the one or more imaging sensors. As another example, a virtual object may exhibit characteristics consistent with actual weather conditions in the physical set, as identified via weather-related imaging sensors and/or online weather data. In another example, an augmented virtual forest may have virtual trees and structures, but the animals may have features that are accurately reproduced from images taken of physical animals.

Many electronic systems enable individuals to interact with and/or perceive various SR settings. One example includes a head-mounted system. The head-mounted system may have an opaque display and one or more speakers. Alternatively, the head mounted system may be designed to receive an external display (e.g., a smartphone). The head-mounted system may have one or more imaging sensors and/or microphones for capturing images/video of the physical set and/or audio of the physical set, respectively. The head mounted system may also have a transparent or translucent display. Transparent or translucent displays may incorporate a substrate through which light representing an image is directed to an individual's eye. The display may incorporate LEDs, OLEDs, digital light projectors, laser scanning light sources, liquid crystal on silicon, or any combination of these technologies. The light transmitting substrate may be an optical waveguide, an optical combiner, an optical reflector, a holographic substrate, or any combination of these substrates. In one embodiment, a transparent or translucent display may be selectively switched between an opaque state and a transparent or translucent state. As another example, the electronic system may be a projection-based system. Projection-based systems may use retinal projections to project an image onto the individual's retina. Alternatively, the projection system may also project the virtual object into the physical set (e.g., onto a physical surface or as a hologram). Other examples of SR systems include head-up displays, automotive windshields capable of displaying graphics, windows capable of displaying graphics, lenses capable of displaying graphics, headphones or earphones, speaker arrangements, input mechanisms (e.g., controllers with or without haptic feedback), tablets, smart phones, and desktop or laptop computers.

The present disclosure provides methods, systems, and/or devices for generating content for a synthetic reality set. An emergent content engine generates goals for goal implementers and provides the goals to corresponding goal implementer engines so that the goal implementer engines can generate actions that satisfy the goals. The goals generated by the emergent content engine indicate a plot or storyline for which the goal implementer engines generate actions. Generating goals enables the presentation of dynamic goal implementers that perform actions, as opposed to the presentation of static goal implementers, thereby enhancing the user experience and improving the functionality of the device presenting the synthetic reality set.

FIG. 1A is a block diagram of an exemplary operating environment 100 according to some implementations. While relevant features are shown, those of ordinary skill in the art will recognize from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, operating environment 100 includes controller 102 and electronic device 103. In the example of fig. 1A, the electronic device 103 is held by a user 10. In some implementations, the electronic device 103 includes a smartphone, tablet, laptop, or the like.

As shown in fig. 1A, the electronic device 103 presents a synthetic reality set 106. In some implementations, the synthetic reality set 106 is generated by the controller 102 and/or the electronic device 103. In some implementations, the synthetic reality set 106 includes a virtual set that is a synthesized replacement of a physical set. In other words, in some implementations, the synthetic reality set 106 is synthesized by the controller 102 and/or the electronic device 103. In such implementations, the synthetic reality set 106 is different from the physical set in which the electronic device 103 is located. In some implementations, the synthetic reality set 106 includes an augmented set that is a modified version of a physical set. For example, in some implementations, the controller 102 and/or the electronic device 103 modifies (e.g., augments) the physical set in which the electronic device 103 is located in order to generate the synthetic reality set 106. In some implementations, the controller 102 and/or the electronic device 103 generates the synthetic reality set 106 by simulating a replica of the physical set in which the electronic device 103 is located. In some implementations, the controller 102 and/or the electronic device 103 generates the synthetic reality set 106 by removing items from and/or adding items to a simulated replica of the physical set in which the electronic device 103 is located.

In some implementations, the synthetic reality set 106 includes various SR representations of goal implementers, such as a boy action figure representation 108a, a girl action figure representation 108b, a robot representation 108c, and a drone representation 108d. In some implementations, the goal implementers represent characters from fictional material such as movies, video games, comics, and novels. For example, the boy action figure representation 108a represents a "boy action figure" character from a fictional comic, while the girl action figure representation 108b represents a "girl action figure" character from a fictional video game. In some implementations, the synthetic reality set 106 includes goal implementers that represent characters from different fictional materials (e.g., from different movies/games/comics/novels). In various implementations, the goal implementers represent things (e.g., tangible objects). For example, in some implementations, a goal implementer represents equipment (e.g., machinery such as an airplane, a tank, a robot, or an automobile). In the example of fig. 1A, the robot representation 108c represents a robot, and the drone representation 108d represents a drone. In some implementations, the goal implementers represent things (e.g., equipment) from fictional material. In some implementations, the goal implementers represent things from a physical set, including things that are inside and/or outside of the synthetic reality set 106.

In various implementations, the goal implementer performs one or more actions. In some implementations, the goal implementer performs a series of actions. In some implementations, the controller 102 and/or the electronic device 103 determine the actions to be performed by the target implementer. In some implementations, the actions of the goal implementer are to some extent similar to the actions performed by the corresponding character/thing in the fictional material. In the example of fig. 1A, girl action figure representation 108b is performing a flight action (e.g., because the corresponding "girl action figure" character is able to fly). In the example of fig. 1A, drone representation 108d is performing a hover action (e.g., because a drone in the real world is able to hover). In some implementations, the controller 102 and/or the electronic device 103 obtain actions for the target implementer. For example, in some implementations, the controller 102 and/or electronic device 103 receives actions of the target implementer from a remote server that determines (e.g., selects) the actions.

In various implementations, the goal implementer performs actions to meet (e.g., accomplish or achieve) the goal. In some implementations, a goal implementer is associated with a particular goal, and the goal implementer performs an action that improves the likelihood of meeting the particular goal. In some implementations, for example, the SR representation of the target implementer is referred to as an object representation because it represents various objects (e.g., real objects or imaginary objects). In some implementations, the goal implementer that represents the role is referred to as a role goal implementer. In some implementations, the role goal implementer performs actions to implement the role goal. In some implementations, the goal implementer that represents the equipment is referred to as an equipment goal implementer. In some implementations, the equipment goal implementer performs actions to implement the equipment goal. In some implementations, the target implementer that represents the environment is referred to as an environment target implementer. In some implementations, the environmental goal implementer performs environmental actions to implement the environmental goal.

In some implementations, the synthetic reality set 106 is generated based on user input from the user 10. For example, in some implementations, the electronic device 103 receives user input indicating a terrain for the synthetic reality set 106. In such implementations, the controller 102 and/or the electronic device 103 configures the synthetic reality set 106 such that the synthetic reality set 106 includes the terrain indicated via the user input. In some implementations, the user input indicates environmental conditions. In such implementations, the controller 102 and/or the electronic device 103 configures the synthetic reality set 106 to have the environmental conditions indicated by the user input. In some implementations, the environmental conditions include one or more of temperature, humidity, pressure, visibility, ambient light level, ambient sound level, time of day (e.g., morning, afternoon, evening, or night), and precipitation (e.g., cloudy, rainy, or snowy).

In some implementations, the actions of the goal implementer are determined (e.g., generated) based on user input from the user 10. For example, in some implementations, the electronic device 103 receives a user input indicating placement of the SR representation of the target implementer. In such implementations, the controller 102 and/or the electronic device 103 locate the SR representation of the target implementer according to the placement indicated by the user input. In some implementations, the user input indicates a particular action that the target implementer is allowed to perform. In such implementations, the controller 102 and/or the electronic device 103 selects the action of the target implementer from the particular actions indicated by the user input. In some implementations, the controller 102 and/or the electronic device 103 forego actions that are not among the particular actions indicated by the user input.

FIG. 1B is a block diagram of an exemplary operating environment 100a, according to some implementations. While relevant features are shown, those of ordinary skill in the art will recognize from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein. To this end, as a non-limiting example, operating environment 100a includes a controller 102 and a Head Mounted Device (HMD) 104. In the example of fig. 1B, HMD 104 is worn by user 10. In various implementations, HMD 104 operates in substantially the same manner as electronic device 103 shown in fig. 1A. In some implementations, the HMD 104 performs substantially the same operations as the electronic device 103 shown in fig. 1A. In some implementations, the HMD 104 includes a head-mountable housing. In some implementations, the head-mountable housing is shaped to form a receiver for receiving an electronic device having a display (e.g., electronic device 103 shown in fig. 1A). In some implementations, the HMD 104 includes an integrated display for presenting a synthetic reality experience to the user 10.

FIG. 2 is a block diagram of an exemplary system 200 for generating actions for various goal implementers in a synthetic reality set. For example, the system 200 generates goals for the boy action figure representation 108a, the girl action figure representation 108b, the robot representation 108c, and/or the drone representation 108d shown in fig. 1A. In the example of fig. 2, the system 200 includes a boy action figure character engine 208a, a girl action figure character engine 208b, a robotic equipment engine 208c, and a drone equipment engine 208d that generate actions 210 for the boy action figure representation 108a, the girl action figure representation 108b, the robot representation 108c, and the drone representation 108d, respectively. In some implementations, the system 200 also includes an environment engine 208e, an emergent content engine 250, and a display engine 260.

In various implementations, the emergent content engine 250 generates respective goals 254 for the goal implementers in the synthetic reality set and/or for the environment of the synthetic reality set. In the example of fig. 2, the emergent content engine 250 generates a boy action figure goal 254a for the boy action figure representation 108a, a girl action figure goal 254b for the girl action figure representation 108b, a robot goal 254c for the robot representation 108c, a drone goal 254d for the drone representation 108d, and/or an environment goal 254e (e.g., an environmental condition) for the environment of the synthetic reality set 106. As shown in fig. 2, the emergent content engine 250 provides the goals 254 to the corresponding character/equipment/environment engines. In the example of fig. 2, the emergent content engine 250 provides the boy action figure goal 254a to the boy action figure character engine 208a, the girl action figure goal 254b to the girl action figure character engine 208b, the robot goal 254c to the robotic equipment engine 208c, the drone goal 254d to the drone equipment engine 208d, and the environment goal 254e to the environment engine 208e.

In various implementations, the emergent content engine 250 generates the goals 254 based on a function of possible goals 252 (e.g., a set of predefined goals), contextual information 258 characterizing the synthetic reality set, and the actions 210 provided by the character/equipment/environment engines. For example, in some implementations, the emergent content engine 250 generates the goals 254 by selecting the goals 254 from the possible goals 252 based on the contextual information 258 and/or the actions 210. In some implementations, the possible goals 252 are stored in a datastore. In some implementations, the possible goals 252 are obtained from corresponding fictional source material (e.g., by scraping video games, movies, novels, and/or comics). For example, in some implementations, the possible goals 252 for the girl action figure representation 108b include saving lives, rescuing pets, fighting crime, and the like.

In some implementations, the emergent content engine 250 generates the goals 254 based on the actions 210 provided by the character/equipment/environment engines. In some implementations, the emergent content engine 250 generates the goals 254 such that, given the actions 210, a probability of completing the goals 254 satisfies a threshold (e.g., the probability is greater than the threshold, for example, greater than 80%). In some implementations, the emergent content engine 250 generates goals 254 that are likely to be completed with the actions 210.

In some implementations, the emergent content engine 250 ranks the possible goals 252 based on the actions 210. In some implementations, the rank of a particular possible goal 252 indicates the likelihood of completing that possible goal 252 given the actions 210. In such implementations, the emergent content engine 250 generates the goals 254 by selecting the N highest-ranked possible goals 252, where N is a predefined integer (e.g., 1, 3, 5, 10, etc.).
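As an illustration of this rank-and-select step, the minimal Python sketch below is a hypothetical reading of the preceding paragraphs (the scoring heuristic and names such as PossibleGoal, score_goal, and generate_goals are assumptions, not part of the disclosure): each possible goal is scored by an estimated likelihood of completion given the actions currently offered by the goal implementer engines, goals below a probability threshold (e.g., 80%) are discarded, and the N highest-ranked goals are returned.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class PossibleGoal:
    name: str
    helpful_actions: Set[str]  # actions that contribute to completing this goal

def score_goal(goal: PossibleGoal, available_actions: Set[str]) -> float:
    """Estimate the likelihood (0.0-1.0) of completing `goal` given the
    actions currently offered by the goal implementer engines."""
    if not goal.helpful_actions:
        return 1.0
    overlap = len(goal.helpful_actions & available_actions)
    return overlap / len(goal.helpful_actions)

def generate_goals(possible_goals: List[PossibleGoal],
                   available_actions: Set[str],
                   threshold: float = 0.8,
                   top_n: int = 3) -> List[PossibleGoal]:
    """Rank the possible goals by estimated completion likelihood and keep
    the N highest-ranked goals whose likelihood satisfies the threshold."""
    ranked = sorted(possible_goals,
                    key=lambda g: score_goal(g, available_actions),
                    reverse=True)
    return [g for g in ranked
            if score_goal(g, available_actions) >= threshold][:top_n]

if __name__ == "__main__":
    goals = [
        PossibleGoal("save the pet", {"fly", "grab"}),
        PossibleGoal("fight crime", {"fly", "fight"}),
        PossibleGoal("stay grounded", {"walk"}),
    ]
    # With only "fly" and "grab" available, only "save the pet" clears the threshold.
    print(generate_goals(goals, available_actions={"fly", "grab"}))
```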

In some implementations, the emergent content engine 250 establishes initial/end states 256 for the synthetic reality set based on the goals 254. In some implementations, the initial/end states 256 indicate placements (e.g., locations) of various character/equipment representations within the synthetic reality set. In some implementations, the synthetic reality set is associated with a duration (e.g., a few seconds, minutes, hours, or days). For example, the synthetic reality set is scheduled to last for the duration. In such implementations, the initial/end states 256 indicate placements of the various character/equipment representations at/near the beginning of the duration and/or at/near the end of the duration. In some implementations, the initial/end states 256 indicate environmental conditions for the synthetic reality set at/near the beginning/end of the duration associated with the synthetic reality set.

In some implementations, the emergent content engine 250 provides the goals 254 to the display engine 260 in addition to providing the goals 254 to the character/equipment/environment engines. In some implementations, the display engine 260 determines whether the actions 210 provided by the character/equipment/environment engines are consistent with the goals 254 provided by the emergent content engine 250. For example, the display engine 260 determines whether an action 210 satisfies a goal 254. In other words, in some implementations, the display engine 260 determines whether the action 210 increases the likelihood of completing/achieving the goal 254. In some implementations, if the action 210 satisfies the goal 254, the display engine 260 modifies the synthetic reality set in accordance with the action 210. In some implementations, if the action 210 does not satisfy the goal 254, the display engine 260 forgoes modifying the synthetic reality set in accordance with the action 210.
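A compact sketch of that gating step follows. It is illustrative only; the satisfies predicate, the helpful_actions table, and the scene_state dictionary are assumed stand-ins for the display engine's internal checks, not the disclosed implementation.

```python
from typing import Dict, Set

def satisfies(action: str, goal: str, helpful_actions: Dict[str, Set[str]]) -> bool:
    """Hypothetical check: does `action` increase the likelihood of `goal`?"""
    return action in helpful_actions.get(goal, set())

def maybe_apply_action(scene_state: Dict[str, str], implementer: str, action: str,
                       goal: str, helpful_actions: Dict[str, Set[str]]) -> Dict[str, str]:
    """Apply the action to the synthetic reality set only when it is
    consistent with the goal; otherwise leave the set unchanged."""
    if satisfies(action, goal, helpful_actions):
        scene_state = dict(scene_state)
        scene_state[implementer] = action
    return scene_state

state = maybe_apply_action({}, "drone 108d", "hover", "protect the robot",
                           {"protect the robot": {"hover", "follow"}})
print(state)  # {'drone 108d': 'hover'}
```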

Fig. 3A is a block diagram of an exemplary emergent content engine 300 according to some implementations. In some implementations, the emergent content engine 300 implements the emergent content engine 250 shown in fig. 2. In various implementations, the emergent content engine 300 generates goals 254 for various goal implementers instantiated in a synthetic reality set (e.g., character/equipment representations such as the boy action figure representation 108a, the girl action figure representation 108b, the robot representation 108c, and/or the drone representation 108d shown in fig. 1A). In some implementations, at least some of the goals 254 are provided to an environment engine (e.g., the environment engine 208e shown in fig. 2) that affects the environment of the synthetic reality set.

In various implementations, the emergent content engine 300 includes a neural network system 310 (hereinafter "neural network 310" for brevity), a neural network training system 330 (hereinafter "training module 330" for brevity) that trains (e.g., configures) the neural network 310, and a scraper 350 that provides possible goals 360 to the neural network 310. In various implementations, the neural network 310 generates the goals 254 (e.g., the goal 254a for the boy action figure representation 108a, the goal 254b for the girl action figure representation 108b, the goal 254c for the robot representation 108c, the goal 254d for the drone representation 108d, and/or the environment goal 254e shown in fig. 2).

In some implementations, the neural network 310 includes a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN). In various implementations, the neural network 310 generates the goals 254 based on a function of the possible goals 360. For example, in some implementations, the neural network 310 generates the goals 254 by selecting a portion of the possible goals 360. In some implementations, the neural network 310 generates the goals 254 such that the goals 254 are within a degree of similarity to the possible goals 360.

In various implementations, the neural network 310 generates the goals 254 based on contextual information 258 characterizing the synthetic reality set. As shown in fig. 3A, in some implementations, the contextual information 258 indicates instantiated equipment representations 340, instantiated character representations 342, user-specified scene/environment information 344, and/or actions 210 from the goal implementer engines.

In some implementations, the neural network 310 generates the goals 254 based on the instantiated equipment representations 340. In some implementations, the instantiated equipment representations 340 refer to equipment representations that are located in the synthetic reality set. For example, referring to fig. 1A, the instantiated equipment representations 340 include the robot representation 108c and the drone representation 108d in the synthetic reality set 106. In some implementations, the goals 254 include interacting with one or more of the instantiated equipment representations 340. For example, referring to fig. 1A, in some implementations, one of the goals 254a for the boy action figure representation 108a includes destroying the robot representation 108c, and one of the goals 254b for the girl action figure representation 108b includes protecting the robot representation 108c.

In some implementations, the neural network 310 generates the goals 254 for each character representation based on the instantiated equipment representations 340. For example, referring to fig. 1A, if the synthetic reality set 106 includes the robot representation 108c, then one of the goals 254a for the boy action figure representation 108a includes destroying the robot representation 108c. However, if the synthetic reality set 106 does not include the robot representation 108c, then the goals 254a for the boy action figure representation 108a include remaining stationary within the synthetic reality set 106.

In some implementations, the neural network 310 generates the goals 254 for each equipment representation based on the other equipment representations instantiated in the synthetic reality set. For example, referring to fig. 1A, if the synthetic reality set 106 includes the robot representation 108c, then one of the goals 254d for the drone representation 108d includes protecting the robot representation 108c. However, if the synthetic reality set 106 does not include the robot representation 108c, then the goals 254d for the drone representation 108d include hovering at the center of the synthetic reality set 106.

In some implementations, the neural network 310 generates the goals 254 based on the instantiated character representations 342. In some implementations, the instantiated character representations 342 refer to character representations that are located in the synthetic reality set. For example, referring to fig. 1A, the instantiated character representations 342 include the boy action figure representation 108a and the girl action figure representation 108b in the synthetic reality set 106. In some implementations, the goals 254 include interacting with one or more of the instantiated character representations 342. For example, referring to fig. 1A, in some implementations, one of the goals 254d for the drone representation 108d includes following the girl action figure representation 108b. Similarly, in some implementations, one of the goals 254c for the robot representation 108c includes avoiding the boy action figure representation 108a.

In some implementations, the neural network 310 generates the goals 254 for each character representation based on the other character representations instantiated in the synthetic reality set. For example, referring to fig. 1A, if the synthetic reality set 106 includes the boy action figure representation 108a, then one of the goals 254b for the girl action figure representation 108b includes capturing the boy action figure representation 108a. However, if the synthetic reality set 106 does not include the boy action figure representation 108a, then the goals 254b for the girl action figure representation 108b include flying within the synthetic reality set 106.

In some implementations, the neural network 310 generates the goals 254 for each equipment representation based on the character representations instantiated in the synthetic reality set. For example, referring to fig. 1A, if the synthetic reality set 106 includes the girl action figure representation 108b, then one of the goals 254d for the drone representation 108d includes following the girl action figure representation 108b. However, if the synthetic reality set 106 does not include the girl action figure representation 108b, then the goals 254d for the drone representation 108d include hovering at the center of the synthetic reality set 106.

In some implementations, the neural network 310 generates the goals 254 based on the user-specified scene/environment information 344. In some implementations, the user-specified scene/environment information 344 indicates boundaries of the synthetic reality set. In such implementations, the neural network 310 generates the goals 254 such that the goals 254 can be satisfied (e.g., achieved) within the boundaries of the synthetic reality set. In some implementations, the neural network 310 generates the goals 254 by selecting a portion of the possible goals 360 that are better suited for the environment indicated by the user-specified scene/environment information 344. For example, when the user-specified scene/environment information 344 indicates that the sky within the synthetic reality set is clear, the neural network 310 sets one of the goals 254d for the drone representation 108d to hovering above the boy action figure representation 108a. In some implementations, the neural network 310 forgoes selecting the portion of the possible goals 360 that are not suitable for the environment indicated by the user-specified scene/environment information 344. For example, when the user-specified scene/environment information 344 indicates gusty winds within the synthetic reality set, the neural network 310 forgoes the hovering goal for the drone representation 108d.
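The environment-aware filtering described above can be pictured with the short sketch below; the is_suitable rule and the environment dictionary keys ("wind", "sky") are hypothetical examples chosen for illustration, not terms from the disclosure.

```python
from typing import Dict, List

def filter_goals_by_environment(possible_goals: List[str],
                                environment: Dict[str, str]) -> List[str]:
    """Keep only the goals that fit the user-specified environment,
    e.g. environment = {"wind": "gust", "sky": "clear"}."""
    def is_suitable(goal: str) -> bool:
        # Hypothetical rule: forgo hover-style goals when the wind is gusty.
        if environment.get("wind") == "gust" and "hover" in goal:
            return False
        return True
    return [g for g in possible_goals if is_suitable(g)]

print(filter_goals_by_environment(
    ["hover above the boy action figure", "follow the girl action figure"],
    {"wind": "gust"}))
# -> ['follow the girl action figure']
```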

In some implementations, the neural network 310 generates the goals 254 based on the actions 210 provided by the various goal implementer engines. In some implementations, the neural network 310 generates the goals 254 such that the goals 254 can be satisfied (e.g., achieved) given the actions 210 provided by the goal implementer engines. In some implementations, the neural network 310 evaluates the possible goals 360 relative to the actions 210. In such implementations, the neural network 310 generates the goals 254 by selecting the possible goals 360 that can be satisfied by the actions 210 and forgoing selection of the possible goals 360 that cannot be satisfied by the actions 210.

In various implementations, the training module 330 trains the neural network 310. In some implementations, the training module 330 provides Neural Network (NN) parameters 312 to the neural network 310. In some implementations, the neural network 310 includes a model of neurons, and the neural network parameters 312 represent weights of the model. In some implementations, the training module 330 generates (e.g., initializes or initiates) the neural network parameters 312 and refines (e.g., adjusts) the neural network parameters 312 based on the targets 254 generated by the neural network 310.

In some implementations, the training module 330 includes a reward function 332 that utilizes reinforcement learning to train the neural network 310. In some implementations, the reward function 332 assigns positive rewards to desired goals 254 and negative rewards to undesired goals 254. In some implementations, during a training phase, the training module 330 compares the goals 254 with verification data that includes verified goals. In such implementations, if the goals 254 are within a degree of similarity to the verified goals, the training module 330 stops training the neural network 310. However, if the goals 254 are not within a degree of similarity to the verified goals, the training module 330 continues to train the neural network 310. In various implementations, the training module 330 updates the neural network parameters 312 during/after the training.
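A toy sketch of this reward-driven adjustment is shown below, under the assumption that per-goal preference weights stand in for the neural network parameters 312; the reward values (+1/-1), the learning rate, and the similarity-based stopping rule are illustrative choices rather than the disclosed training procedure.

```python
import random

def reward(goal, desired, undesired):
    """Positive reward for desired goals, negative reward for undesired goals."""
    if goal in desired:
        return 1.0
    if goal in undesired:
        return -1.0
    return 0.0

def train(params, candidate_goals, desired, undesired, verified,
          lr=0.1, similarity_stop=0.9, max_steps=200):
    """Toy reinforcement-style loop: sample goals in proportion to their
    current weights, nudge the weights by the reward, and stop once the
    generated goals are sufficiently similar to the verified goals."""
    for _ in range(max_steps):
        # Sample a couple of goals, favoring higher-weighted candidates.
        goals = random.choices(candidate_goals,
                               weights=[max(params[g], 0.01) for g in candidate_goals],
                               k=2)
        for g in goals:
            params[g] += lr * reward(g, desired, undesired)
        similarity = len(set(goals) & verified) / len(verified)
        if similarity >= similarity_stop:
            break
    return params

candidates = ["save the pet", "fight crime", "break the window"]
weights = {g: 0.5 for g in candidates}
print(train(weights, candidates, desired={"save the pet", "fight crime"},
            undesired={"break the window"}, verified={"save the pet", "fight crime"}))
```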

In various implementations, the scraper 350 scrapes content 352 to identify the possible goals 360. In some implementations, the content 352 includes movies, video games, comics, novels, and fan-created content such as blogs and commentary. In some implementations, the scraper 350 utilizes various methods, systems, and/or devices associated with content scraping in order to scrape the content 352. For example, in some implementations, the scraper 350 utilizes one or more of text pattern matching, HTML (Hypertext Markup Language) parsing, DOM (Document Object Model) parsing, image processing, and audio analysis in order to scrape the content 352 and identify the possible goals 360.
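A minimal scraping sketch using only the Python standard library is shown below; the regular expression, the TextExtractor class, and the example page are hypothetical illustrations of text pattern matching combined with HTML parsing, not the scraper 350 itself.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

# Hypothetical pattern: phrases such as "wants to <do something>".
GOAL_PATTERN = re.compile(r"\b(?:wants to|tries to|must)\s+([a-z ]+?)[.,;]", re.I)

def scrape_possible_goals(html_page: str) -> list:
    """Extract candidate goals from fan-created content using simple
    text pattern matching over the parsed HTML text."""
    parser = TextExtractor()
    parser.feed(html_page)
    text = " ".join(parser.chunks)
    return [m.group(1).strip() for m in GOAL_PATTERN.finditer(text)]

page = "<p>The girl action figure wants to save the pet. She tries to fight crime, too.</p>"
print(scrape_possible_goals(page))  # ['save the pet', 'fight crime']
```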

In some implementations, the goal implementer is associated with a type of representation 362, and the neural network 310 generates the goals 254 based on the type of representation 362 associated with the goal implementer. In some implementations, the type of representation 362 indicates physical characteristics of the goal implementer (e.g., color, material type, texture, etc.). In such implementations, the neural network 310 generates the goals 254 based on the physical characteristics of the goal implementer. In some implementations, the type of representation 362 indicates behavioral characteristics of the goal implementer (e.g., aggressiveness, friendliness, etc.). In such implementations, the neural network 310 generates the goals 254 based on the behavioral characteristics of the goal implementer. For example, the neural network 310 generates destructive goals for the boy action figure representation 108a in response to the behavioral characteristics including aggressiveness. In some implementations, the type of representation 362 indicates functional and/or performance characteristics of the goal implementer (e.g., strength, speed, flexibility, etc.). In such implementations, the neural network 310 generates the goals 254 based on the functional characteristics of the goal implementer. For example, the neural network 310 generates goals that involve always moving for the girl action figure representation 108b in response to the functional characteristics including speed. In some implementations, the type of representation 362 is determined based on user input. In some implementations, the type of representation 362 is determined based on a combination of rules.
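The sketch below shows one way such characteristics might be represented and mapped to candidate goals; the RepresentationType fields and the mapping rules are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RepresentationType:
    color: str        # physical characteristic
    demeanor: str     # behavioral characteristic ("aggressive", "friendly", ...)
    speed: float      # functional/performance characteristic (0.0-1.0)

def goals_for_type(rep: RepresentationType) -> List[str]:
    """Hypothetical mapping from representation characteristics to candidate goals."""
    goals = []
    if rep.demeanor == "aggressive":
        goals.append("destroy the robot representation")
    if rep.speed > 0.8:
        goals.append("always keep moving")
    return goals

print(goals_for_type(RepresentationType(color="red", demeanor="aggressive", speed=0.9)))
# -> ['destroy the robot representation', 'always keep moving']
```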

In some implementations, the neural network 310 generates the goals 254 based on specified goals 364. In some implementations, the specified goals 364 are provided by an entity that controls (e.g., owns or creates) the fictional material from which the character/equipment originates. For example, in some implementations, the specified goals 364 are provided by a movie producer, a video game creator, a novelist, or the like. In some implementations, the possible goals 360 include the specified goals 364. As such, in some implementations, the neural network 310 generates the goals 254 by selecting a portion of the specified goals 364.

In some implementations, the possible goals 360 for the goal implementer are limited by a limiter 370. In some implementations, the limiter 370 restricts the neural network 310 from selecting a portion of the possible goals 360. In some implementations, the limiter 370 is controlled by the entity that owns (e.g., controls) the fictional material from which the character/equipment originates. For example, in some implementations, the limiter 370 is controlled by a movie producer, a video game creator, a novelist, or the like. In some implementations, the limiter 370 and the neural network 310 are controlled/operated by different entities. In some implementations, the limiter 370 restricts the neural network 310 from generating goals that breach criteria defined by the entity that controls the fictional material.

Fig. 3B is a block diagram of a neural network 310 according to some implementations. In the example of fig. 3B, the neural network 310 includes an input layer 320, a first hidden layer 322, a second hidden layer 324, a classification layer 326, and a target selection module 328. Although the neural network 310 includes two hidden layers as an example, one of ordinary skill in the art will appreciate from this disclosure that in various implementations, one or more additional hidden layers are also present. Adding additional hidden layers increases computational complexity and memory requirements, but may improve performance for some applications.

In various implementations, the input layer 320 receives various inputs. In some implementations, the input layer 320 receives the contextual information 258 as input. In the example of fig. 3B, the input layer 320 receives inputs indicating the instantiated equipment representations 340, the instantiated character representations 342, the user-specified scene/environment information 344, and the actions 210 from the goal implementer engines. In some implementations, the neural network 310 includes a feature extraction module (not shown) that generates a feature stream (e.g., a feature vector) based on the instantiated equipment representations 340, the instantiated character representations 342, the user-specified scene/environment information 344, and/or the actions 210. In such implementations, the feature extraction module provides the feature stream to the input layer 320. As such, in some implementations, the input layer 320 receives a feature stream that is a function of the instantiated equipment representations 340, the instantiated character representations 342, the user-specified scene/environment information 344, and the actions 210. In various implementations, the input layer 320 includes a plurality of LSTM logic units 320a, which are also referred to as neurons or models of neurons by those of ordinary skill in the art. In some such implementations, an input matrix from the features to the LSTM logic units 320a includes a rectangular matrix. The size of this matrix is a function of the number of features included in the feature stream.

In some implementations, the first hidden layer 322 includes a plurality of LSTM logic units 322a. In some implementations, the number of LSTM logic units 322a ranges between approximately 10 and 500. Those of ordinary skill in the art will appreciate that, in such implementations, the number of LSTM logic units per layer (approximately O(10^1)-O(10^2)) is several orders of magnitude smaller than in previously known approaches, which allows such implementations to be embedded in highly resource-constrained devices. As shown in the example of fig. 3B, the first hidden layer 322 receives its input from the input layer 320.

In some implementations, the second hidden layer 324 includes a plurality of LSTM logic units 324a. In some implementations, the number of LSTM logic units 324a is the same as or similar to the number of LSTM logic units 320a in the input layer 320 or the number of LSTM logic units 322a in the first hidden layer 322. As shown in the example of fig. 3B, the second hidden layer 324 receives its input from the first hidden layer 322. Additionally or alternatively, in some implementations, the second hidden layer 324 receives its input from the input layer 320.

In some implementations, the classification layer 326 includes a plurality of LSTM logic units 326a. In some implementations, the number of LSTM logic units 326a is the same as or similar to the number of LSTM logic units 320a in the input layer 320, the number of LSTM logic units 322a in the first hidden layer 322, or the number of LSTM logic units 324a in the second hidden layer 324. In some implementations, the classification layer 326 includes an implementation of a multinomial logistic function (e.g., a soft-max function) that produces a number of outputs approximately equal to the number of possible goals 360. In some implementations, each output includes a probability or a confidence measure that the actions 210 satisfy the corresponding goal. In some implementations, the outputs do not include goals that have been excluded by operation of the limiter 370.

In some implementations, the goal selection module 328 generates the goals 254 by selecting the top N goal candidates provided by the classification layer 326. In some implementations, the top N goal candidates are capable of being satisfied by the actions 210. In some implementations, the goal selection module 328 provides the goals 254 to a rendering and display pipeline (e.g., the display engine 260 shown in FIG. 2). In some implementations, the goal selection module 328 provides the goals 254 to one or more goal implementer engines (e.g., the boy action figure character engine 208a, the girl action figure character engine 208b, the robotic equipment engine 208c, the drone equipment engine 208d, and/or the environment engine 208e shown in fig. 2).
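To make the layered structure concrete, the PyTorch sketch below is a hypothetical rendering of fig. 3B: the framework choice, the tensor shapes, and names such as GoalNetwork and select_goals are assumptions, and a single two-layer nn.LSTM stands in for the first and second hidden layers, followed by a soft-max classification head and a limiter-aware top-N selection step.

```python
import torch
import torch.nn as nn

class GoalNetwork(nn.Module):
    """Sketch of the layered structure described above: an input feature stream,
    two hidden LSTM layers, and a soft-max classification layer whose outputs
    correspond to the possible goals."""
    def __init__(self, feature_dim: int, hidden_dim: int, num_possible_goals: int):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_possible_goals)

    def forward(self, feature_stream: torch.Tensor) -> torch.Tensor:
        # feature_stream: (batch, time, feature_dim), built from the instantiated
        # equipment/character representations, the user-specified scene info,
        # and the actions provided by the goal implementer engines.
        out, _ = self.lstm(feature_stream)
        logits = self.classifier(out[:, -1])   # last time step
        return torch.softmax(logits, dim=-1)   # confidence per possible goal

def select_goals(probabilities: torch.Tensor, allowed_mask: torch.Tensor, top_n: int = 3):
    """Zero out goals excluded by the limiter, then keep the top-N candidates."""
    masked = probabilities * allowed_mask
    return torch.topk(masked, k=top_n, dim=-1).indices

net = GoalNetwork(feature_dim=16, hidden_dim=64, num_possible_goals=10)
probs = net(torch.randn(1, 5, 16))
mask = torch.ones(10)
mask[3] = 0.0   # the limiter excludes the goal at index 3
print(select_goals(probs, mask))
```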

Fig. 4A is a flowchart representation of a method 400 of generating content for a synthetic reality set. In various implementations, the method 400 is performed by a device (e.g., the controller 102 and/or the electronic device 103 shown in fig. 1A) having a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, the method 400 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 400 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory). Briefly, in some implementations, the method 400 includes: instantiating a goal implementer into a synthetic reality set; obtaining contextual information characterizing the synthetic reality set; generating a goal for the goal implementer; setting an environmental condition for the synthetic reality set; establishing initial conditions for the goal implementer based on the goal; and modifying the goal implementer based on the goal.

As represented by block 410, in various implementations, the method 400 includes instantiating the goal implementer into a synthetic reality set (e.g., instantiating the boy action figure representation 108a, the girl action figure representation 108b, the robot representation 108c, and/or the drone representation 108d into the synthetic reality set 106 shown in fig. 1A). In some implementations, the goal implementer is characterized by a set of predefined goals (e.g., the possible goals 360 shown in fig. 3A) and a set of visual rendering attributes.

As represented by block 420, in various implementations, the method 400 includes obtaining contextual information characterizing a synthetic reality set (e.g., the contextual information 258 shown in fig. 2-3B). In some implementations, the method 400 includes receiving (e.g., from a user) contextual information.

As represented by block 430, in various implementations, method 400 includes generating a goal for a goal implementer based on a function of a set of predefined goals, context information, and a set of predefined actions for the goal implementer. For example, referring to FIG. 2, method 400 includes generating goal 254 based on possible goal 252, contextual information 258, and action 210.

As represented by block 440, in various implementations, the method 400 includes setting environmental conditions for the synthetic reality set based on the goal for the goal implementer. For example, referring to fig. 2, the method 400 includes generating an environmental goal 254e (e.g., an environmental condition).

As represented by block 450, in various implementations, the method 400 includes establishing an initial condition and a set of current actions for the goal implementer based on the goal for the goal implementer. For example, referring to fig. 2, the method 400 includes establishing initial/end states 256 for various goal implementers (e.g., character representations, equipment representations, and/or the environment).

As represented by block 460, in various implementations, the method 400 includes modifying the goal implementer based on the goal. For example, referring to FIG. 2, in some implementations, method 400 includes providing goal 254 to display engine 260 and/or one or more goal implementer engines.

Referring to FIG. 4B, as represented by block 410a, in various implementations, the method 400 includes obtaining a set of predefined targets (e.g., the possible targets 360 shown in FIG. 3A) from source material (e.g., the content 352 shown in FIG. 3A, such as a movie, book, video game, caricature, and/or novel). As represented by block 410b, in various implementations, the method 400 includes scraping source material for the set of predefined targets.

As represented by block 410c, in some implementations, the method 400 includes determining the set of predefined targets based on a type of representation (e.g., the type 362 of the representation shown in FIG. 3A). As represented by block 410d, in some implementations, the method 400 includes determining the set of predefined goals based on a user-specified configuration (e.g., determining the type 362 of the representation shown in FIG. 3A based on user input).

As represented by block 410e, in some implementations, the method 400 includes determining the set of predefined goals based on limits specified by the owner of the object. For example, referring to fig. 3A, in some implementations, the method 400 includes limiting, via operation of the limiter 370, the possible targets 360 that the neural network 310 can select.

As represented by block 410f, in some implementations, the synthetic reality set (e.g., synthetic reality set 106 shown in fig. 1A) includes a virtual reality set.

As represented by block 410g, in some implementations, the synthetic reality set (e.g., the synthetic reality set 106 shown in fig. 1A) includes an augmented reality set.

As represented by block 410h, in some implementations, the goal implementer is a representation of a character from one or more of a movie, a video game, a caricature, and a novel (e.g., boy action figure representation 108a and/or girl action figure representation 108b shown in fig. 1A).

As represented by block 410i, in some implementations, the goal implementer is a representation of equipment from one or more of a movie, a video game, a caricature, and a novel (e.g., the robotic representation 108c and/or the drone representation 108d shown in fig. 1A).

As represented by block 410j, in some implementations, the method 400 includes obtaining the set of visual rendering attributes from an image. For example, in some implementations, the method 400 includes capturing an image and extracting the visual rendering attributes from the image (e.g., by utilizing devices, methods, and/or systems associated with image processing).
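As a non-limiting sketch of one way such image processing might extract a visual rendering attribute, the snippet below derives a dominant color from a captured image using Pillow; the attribute choice and the helper name are assumptions.

```python
# Illustrative only: derive a coarse visual rendering attribute (dominant color)
# from a captured image.
from collections import Counter
from PIL import Image

def dominant_color(image_path: str) -> tuple:
    img = Image.open(image_path).convert("RGB").resize((64, 64))
    return Counter(img.getdata()).most_common(1)[0][0]  # (r, g, b)
```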

Referring to fig. 4C, as represented by block 420a, in various implementations, the contextual information indicates whether a goal implementer has been instantiated in the synthetic reality set. As represented by block 420b, in some implementations, the contextual information indicates which character representations have been instantiated in the synthetic reality set (e.g., the contextual information includes the instantiated character representations 342 shown in figs. 3A-3B). As represented by block 420c, in some implementations, the contextual information indicates which equipment representations have been instantiated in the synthetic reality set (e.g., the contextual information includes the instantiated equipment representations 340 shown in figs. 3A-3B).

As represented by block 420d, in various implementations, the contextual information includes user-specified contextual information (e.g., the user-specified scene/environment information 344 shown in figs. 3A-3B). As represented by block 420e, in various implementations, the contextual information indicates a terrain of the synthetic reality set (e.g., a landscape including natural features such as mountains, rivers, etc.). As represented by block 420f, in various implementations, the contextual information indicates environmental conditions within the synthetic reality set (e.g., the user-specified scene/environment information 344 shown in figs. 3A-3B).

As represented by block 420g, in some implementations, the contextual information includes a mesh map of a physical set (e.g., a detailed representation of the physical set in which the device is located). In some implementations, the mesh map indicates locations and/or dimensions of real objects in the physical set. More generally, in various implementations, the contextual information includes data corresponding to a physical set. For example, in some implementations, the contextual information includes data corresponding to the physical set in which the device is located. In some implementations, the contextual information indicates boundary surfaces (e.g., a floor, walls, and/or a ceiling) of the physical set. In some implementations, the data corresponding to the physical set is used to synthesize/modify the SR set. For example, in some implementations, the SR set includes SR representations of walls that exist in the physical set.
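The contextual information described above might be organized as a structure such as the following sketch; the field names and types are assumptions chosen to mirror the description (instantiated representations, mesh map, boundary surfaces).

```python
# Hypothetical container for contextual information 258.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RealObject:
    label: str
    position: Tuple[float, float, float]    # location in the physical set
    dimensions: Tuple[float, float, float]  # size of the object

@dataclass
class ContextInformation:
    instantiated_equipment: List[str] = field(default_factory=list)
    instantiated_characters: List[str] = field(default_factory=list)
    scene_environment: Dict[str, str] = field(default_factory=dict)  # user-specified
    mesh_map: List[RealObject] = field(default_factory=list)
    boundary_surfaces: List[str] = field(default_factory=list)  # e.g., "floor", "wall"
```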

Referring to fig. 4D, as represented by block 430a, in some implementations, the method 400 includes generating a target using a neural network (e.g., the neural network 310 shown in fig. 3A-3B). As represented by block 430b, in some implementations, the neural network generates the target based on a set of neural network parameters (e.g., neural network parameters 312 shown in fig. 3A). As represented by block 430c, in some implementations, the method 400 includes adjusting a neural network parameter based on a target generated by the neural network (e.g., adjusting the neural network parameter 312 based on the target 254 shown in fig. 3A).

As represented by block 430d, in some implementations, the method 400 includes determining neural network parameters based on a reward function (e.g., the reward function 332 shown in fig. 3A) that assigns positive rewards to desired targets and negative rewards to undesired targets. As represented by block 430e, in some implementations, the method 400 includes configuring (e.g., training) a neural network based on reinforcement learning. As represented by block 430f, in some implementations, the method 400 includes training a neural network based on content scraped from videos such as movies, books such as novels and caricatures, and video games (e.g., by the scraper 350 shown in fig. 3A).
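One way to combine the reward function with reinforcement learning is a REINFORCE-style update, sketched below. The specific update rule, the reward values of +1/-1, and all identifiers are assumptions; they merely illustrate how positive rewards for desired goals and negative rewards for undesired goals could be used to adjust the network parameters.

```python
# Hedged sketch: sample a goal, score it with the reward function, and nudge
# the network parameters toward goals that earn positive rewards.
import torch

def reward_function(goal: str, desired: set) -> float:
    return 1.0 if goal in desired else -1.0   # positive for desired, negative otherwise

def reinforce_step(net, optimizer, feature_stream, goal_names, desired):
    # feature_stream: (1, time, feature_dim); net returns one probability per goal
    probs = net(feature_stream).squeeze(0)
    dist = torch.distributions.Categorical(probs)
    idx = dist.sample()                                # sampled goal index
    reward = reward_function(goal_names[idx.item()], desired)
    loss = -dist.log_prob(idx) * reward                # policy-gradient style loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # adjusts the parameters
```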

As represented by block 430g, in some implementations, the method 400 includes generating a first goal if a second goal implementer is instantiated in the synthetic reality set. As represented by block 430h, in some implementations, the method 400 includes generating a second goal if a third goal implementer is instantiated in the synthetic reality set. More generally, in various implementations, the method 400 includes generating different goals for the goal implementer based on which other goal implementers are present in the synthetic reality set.

As represented by block 430i, in some implementations, the method 400 includes selecting a target if the likelihood of meeting the target with a given set of actions is greater than a threshold. As represented by block 430j, in some implementations, the method 400 includes forgoing selecting a target if the likelihood of meeting the target given the set of actions is less than a threshold.
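A minimal sketch of this thresholding, assuming the likelihood estimate is already available, is:

```python
# Hypothetical check: select the goal only if the likelihood of satisfying it
# with the given set of actions exceeds the threshold; otherwise forgo it.
from typing import Optional

def maybe_select_goal(goal: str, likelihood: float, threshold: float = 0.5) -> Optional[str]:
    return goal if likelihood > threshold else None
```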

Referring to fig. 4E, as represented by block 440a, in some implementations, the method 400 includes setting one or more of a temperature value, a humidity value, a pressure value, and a precipitation value within the synthetic reality set. In some implementations, the method 400 includes causing rain or snow to fall in the synthetic reality set. As represented by block 440b, in some implementations, the method 400 includes setting one or more of an ambient sound level value (e.g., in decibels) and an ambient lighting level value (e.g., in lumens) for the synthetic reality set. As represented by block 440c, in some implementations, the method 400 includes setting a state of a celestial body within the synthetic reality set (e.g., setting a sunrise or sunset, setting a full or partial moon, etc.).
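The environmental conditions listed above might be grouped into a single structure such as the following sketch; the field names, default values, and units are assumptions.

```python
# Hypothetical environmental-condition settings for a synthetic reality set.
from dataclasses import dataclass

@dataclass
class EnvironmentalConditions:
    temperature_c: float = 20.0
    humidity_pct: float = 50.0
    pressure_hpa: float = 1013.0
    precipitation: str = "none"        # e.g., "rain", "snow"
    ambient_sound_db: float = 30.0
    ambient_light_lm: float = 800.0
    celestial_state: str = "sunset"    # e.g., "sunrise", "full_moon"
```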

As represented by block 450a, in some implementations, the method 400 includes establishing an initial/end position of the target implementer. In some implementations, the synthetic reality set is associated with a duration. In such implementations, method 400 includes setting an initial position occupied by the target implementer at or near the beginning of the duration and/or setting an end position occupied by the target implementer at or near the end of the duration.

As represented by block 450b, in some implementations, the method 400 includes establishing an initial/end action of the target implementer. In some implementations, the synthetic reality set is associated with a duration. In such implementations, method 400 includes establishing an initial action that the target implementer performs at or near the beginning of the duration and/or establishing an ending action that the target implementer performs at or near the end of the duration.
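The initial/end conditions described in blocks 450a-450b might be captured in a structure like the sketch below; the field names are assumptions.

```python
# Hypothetical initial/end conditions for a goal implementer over the duration
# associated with the synthetic reality set.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InitialEndConditions:
    duration_s: float
    initial_position: Tuple[float, float, float]  # occupied at/near the start
    end_position: Tuple[float, float, float]      # occupied at/near the end
    initial_action: str                           # performed at/near the start
    end_action: str                               # performed at/near the end
```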

As represented by block 460a, in some implementations, the method 400 includes providing the target to a rendering and display pipeline (e.g., the display engine 260 shown in FIG. 2). As represented by block 460b, in some implementations, the method 400 includes modifying the SR representation of the target implementer such that the SR representation of the target implementer can be viewed as performing an action that satisfies the target.

Fig. 5 is a block diagram of a server system 500 enabled with one or more components of a device (e.g., the controller 102 and/or the electronic device 103 shown in fig. 1A) according to some implementations. While some specific features are shown, those of ordinary skill in the art will recognize from this disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the particular implementations disclosed herein. To this end, as a non-limiting example, in some implementations, the server system 500 includes one or more processing units (CPUs) 501, a network interface 502, a programming interface 503, a memory 504, and one or more communication buses 505 for interconnecting these and various other components.

In some implementations, the network interface 502 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud-hosted network management system and at least one private network that includes one or more compatible devices. In some implementations, the communication bus 505 includes circuitry to interconnect and control communications between system components. The memory 504 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices or other non-volatile solid state storage devices. Memory 504 optionally includes one or more storage devices located remotely from CPU 501. The memory 504 includes a non-transitory computer-readable storage medium.

In some implementations, the memory 504 or a non-transitory computer-readable storage medium of the memory 504 stores programs, modules, and data structures, or a subset thereof, including the optional operating system 506, the neural network 310, the training module 330, the scraper 350, and the possible targets 360. As described herein, the neural network 310 is associated with neural network parameters 312. As described herein, the training module 330 includes a reward function 332, which reward function 332 trains (e.g., configures) the neural network 310 (e.g., by determining the neural network parameters 312). As described herein, the neural network 310 determines targets for a target implementer located in the synthetic reality set and/or the environment of the synthetic reality set (e.g., targets 254 shown in fig. 2-3B).

FIG. 6 is a diagram illustrating an environment 600 in which a character is captured. To that end, the environment 600 includes a hand 602 holding a device 604 and fictional material 610. In the example of fig. 6, the fictional material 610 includes a book, novel, or caricature related to the boy action figure. The fictional material 610 includes a picture 612 of the boy action figure. In operation, a user holds the device 604 such that the picture 612 is within a field of view 606 of the device 604. In some implementations, the device 604 captures an image that includes the picture 612 of the boy action figure.

In some implementations, the picture 612 includes encoded data (e.g., a barcode) identifying the boy action figure. For example, in some implementations, the encoded data specifies that the picture 612 is of the boy action figure from the fictional material 610. In some implementations, the encoded data includes a Uniform Resource Locator (URL) that directs the device 604 to a resource that includes information regarding the boy action figure. For example, in some implementations, the resource includes various physical and/or behavioral attributes of the boy action figure. In some implementations, the resource indicates goals of the boy action figure.
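A purely illustrative flow for this encoded-data path is sketched below; the payload format, the resource URL, and the attribute schema are hypothetical assumptions, not the described implementation.

```python
# Hypothetical: decode a scanned payload and fetch the character's attributes/goals.
import json
from urllib.request import urlopen

def resolve_character(encoded_payload: str) -> dict:
    payload = json.loads(encoded_payload)        # e.g., {"character": "...", "url": "..."}
    with urlopen(payload["url"]) as response:    # resource describing the character
        return json.loads(response.read())       # physical/behavioral attributes, goals
```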

In various implementations, the device 604 presents an SR representation of a goal implementer corresponding to the boy action figure in a synthetic reality set (e.g., in the synthetic reality set 106 shown in fig. 1A). FIG. 6 illustrates a non-limiting example of capturing a character. In some implementations, the device 604 captures characters and/or equipment based on an audio input. For example, in some implementations, the device 604 receives an audio input identifying the boy action figure. In such implementations, the device 604 queries a data store of characters and equipment to identify the character/equipment specified by the audio input.

While various aspects of the implementations described above are described within the scope of the appended claims, it should be apparent that various features of the implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.

It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the present embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term "if" may be interpreted to mean "when the prerequisite is true" or "in response to a determination" or "in accordance with a determination" or "in response to detecting" that the prerequisite is true, depending on the context. Similarly, the phrase "if it is determined [that the prerequisite is true]" or "if [the prerequisite is true]" or "when [the prerequisite is true]" may be interpreted to mean "upon determining" or "in response to determining" or "in accordance with a determination" or "upon detecting" or "in response to detecting" that the prerequisite is true, depending on the context.
