Element-based ray casting rule switching

Document No.: 1939514    Publication date: 2021-12-07

Note: This technology, element-based switching of ray casting rules, was designed and created by 欧文·佩德罗蒂, 加扬·埃德里维拉, and 布兰登·富特旺勒 on 2021-05-28. Abstract: The application relates to switching of element-based ray casting rules. Elements (e.g., objects or volumes) in an artificial reality environment may be assigned different ray casting rules. In response to detecting a corresponding trigger, such as a user entering a volume or interacting with an object, the ray casting rules associated with that element may be applied. Applying ray casting rules may control various aspects of a ray, such as the ray's shape, size, effect, origin, whether the ray is oriented along a particular plane, or how the ray is controlled. In some cases, the artificial reality system may cast multiple rays simultaneously, all controlled by the same feature of the user. Using priority rules (e.g., weighting factors, hierarchies, filters, etc.), the artificial reality system can determine which ray is the primary ray, allowing the user to interact with elements using the primary ray.

1. A method for customizing ray casting rules in an artificial reality three-dimensional (3D) environment, the method comprising:

providing the artificial reality 3D environment, wherein at least one user-controlled ray is provided and configured with a current set of ray casting rules defining attributes of the at least one user-controlled ray;

detecting a trigger event corresponding to an element of the artificial reality 3D environment, wherein the element of the artificial reality 3D environment is linked to one or more alternative sets of ray casting rules;

retrieving an alternative set of ray casting rules from the one or more alternative sets of ray casting rules linked to the element of the artificial reality 3D environment; and

applying the alternative set of ray casting rules such that a displayed ray has at least one attribute different from the attributes of rays under the current set of ray casting rules.

2. The method of claim 1, wherein the at least one different attribute specifies a difference in size or shape of the displayed ray.

3. The method of claim 1, wherein the at least one different attribute specifies a difference in how the displayed ray interacts with an element in the artificial reality 3D environment.

4. The method of claim 1, wherein the at least one different attribute specifies a difference in origin of the displayed ray or a difference in how the displayed ray responds to user input.

5. The method of claim 1, wherein the alternative set of ray casting rules is linked to the element of the artificial reality 3D environment by an identifier assigned to a predefined set of ray casting rules.

6. The method of claim 1, wherein the element of the artificial reality 3D environment is automatically linked to the alternative set of ray casting rules based on a type specified for the element of the artificial reality 3D environment and a mapping of element types to sets of ray casting rules.

7. The method of claim 1, wherein the element of the artificial reality 3D environment is automatically linked to the alternative set of ray casting rules by:

determining a use case by analyzing:

use of the element of the artificial reality 3D environment, or

use of elements identified as similar to the element of the artificial reality 3D environment; and

identifying a match between the determined use case and the alternative set of ray casting rules.

8. The method of claim 1,

wherein the one or more alternative sets of ray casting rules comprise at least two sets of ray casting rules; and

wherein the retrieved alternative set of ray casting rules is selected from the at least two sets of ray casting rules by determining a match between a context of the trigger event and the alternative set of ray casting rules.

9. The method of claim 1, further comprising causing display of one or more transition visual indicators associated with the alternative set of ray casting rules in response to the trigger event, the one or more transition visual indicators showing a transition from the current set of ray casting rules to the alternative set of ray casting rules.

10. The method of claim 1, further comprising causing display of one or more affordances relating to the element of the artificial reality 3D environment, the affordances indicating that the element of the artificial reality 3D environment is associated with one or more alternative sets of ray casting rules.

11. The method of claim 1,

wherein the element of the artificial reality 3D environment is a specified volume; and

wherein the trigger event comprises a determination that at least a portion of a user has entered the specified volume.

12. The method of claim 1,

wherein the element of the artificial reality 3D environment is a designated virtual object; and

wherein the trigger event includes a determination that the designated virtual object was selected by a user.

13. A computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform operations comprising:

providing an artificial reality environment using at least one user-controlled ray having a first attribute defining an aspect of the at least one user-controlled ray;

detecting a trigger event, wherein the trigger event corresponds to:

an element of the artificial reality environment linked to an alternative set of ray casting rules; or

a gesture linked to the alternative set of ray casting rules;

retrieving the alternative set of ray casting rules; and

applying the alternative set of ray casting rules such that the displayed ray has at least one attribute different from the first attribute.

14. The computer-readable storage medium of claim 13,

wherein the trigger event corresponds to the gesture; and

wherein retrieving the alternative set of ray casting rules comprises retrieving a set of ray casting rules previously mapped to the gesture.

15. The computer-readable storage medium of claim 13, wherein the at least one different attribute specifies one or more of:

a difference in shape of the displayed ray;

a difference in size of the displayed ray;

a difference in how the displayed ray interacts with elements in the artificial reality environment;

a difference in origin of the displayed ray;

a difference in how the displayed ray responds to user input; or

any combination of the above.

16. The computer-readable storage medium of claim 13,

wherein the trigger event corresponds to an element of the artificial reality environment, the element being a specified volume; and

wherein the trigger event comprises a determination that at least a portion of a user has entered the specified volume.

17. A computing system for receiving user input using a plurality of alternative rays, the computing system comprising:

one or more processors; and

one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform operations comprising:

providing an artificial reality environment having a plurality of rays, the plurality of rays being simultaneously controlled by a particular feature of a user;

applying a ray weighting or ranking system to determine a primary ray from the plurality of rays;

causing display of each ray of the plurality of rays to be consistent with a result of determining the primary ray; and

in response to a ray action, causing a result of the ray action that depends on the determined primary ray.

18. The computing system of claim 17, wherein the particular feature of the user comprises at least a portion of a hand of the user, an eye of the user, or a user-operated controller.

19. The computing system of claim 17,

wherein the result is a first result;

wherein, when a different one of the plurality of rays is determined to be the primary ray, performing the same ray action at a different time produces a second result; and

wherein the second result of the ray action is different from the first result due to the difference in the primary rays.

20. The computing system of claim 17, wherein the ray weighting or ranking system comprises one or more of:

a hierarchy of the plurality of rays having respective weights;

a weight value assigned to the ray when the ray is determined to correspond to the current focus of the user;

a weight value assigned to the ray when the ray is determined to be actionable; or

any combination of the above.

Technical Field

The present disclosure relates to dynamic selection of ray casting rules in a three-dimensional (3D) environment.

Background

In an artificial reality environment, some or all of the objects that a user sees and interacts with are "virtual objects," i.e., representations of objects that appear in the environment that are generated by the computing system. Virtual objects in an artificial reality environment may be presented to a user through a head-mounted display, a mobile device, a projection system, or another computing system. Typically, a user may interact with a virtual object using controls and/or gestures. In some cases, the artificial reality system may track user interactions with "real objects" that exist independently of the artificial reality system controlling the artificial reality environment. For example, a user may select a real object and add a virtual overlay to change the color of the object or some other way the object is presented to the user, to affect other virtual objects, and so on. As used herein, unless otherwise specified, an "object" may be a real or virtual object.

Some 3D systems allow a user to interact with objects using a projection or "ray," which in many cases is a line projected from the user's hand. Various systems define different types of rays, such as straight rays, curved rays, or rays emitted from different body parts or other user-controlled elements. While each ray casting system has its own advantages, each also has disadvantages. Thus, in some cases, each of the various existing ray casting systems can be cumbersome, inaccurate, and/or inoperable.

Brief Description of Drawings

FIG. 1 is a block diagram illustrating an overview of devices on which some implementations of the present technology may operate.

Fig. 2A is a line drawing illustrating a virtual reality headset that may be used in some implementations of the present technology.

Fig. 2B is a line drawing illustrating a mixed reality headset that may be used in some implementations of the present technology.

FIG. 3 is a block diagram illustrating an overview of an environment in which some implementations of the present technology may operate.

Fig. 4 is a block diagram illustrating components that may be used in a system employing the disclosed technology in some implementations.

FIGS. 5A and 5B are flow diagrams illustrating a process used in some implementations of the present technology for assigning a ray casting rule to an element and applying a ray casting rule corresponding to a triggering event.

Fig. 6 is a conceptual diagram illustrating an example of a plurality of elements having different ray casting rule sets displayed in a 3D space.

FIG. 7 is a conceptual diagram illustrating an example of an activated downward ray casting rule set corresponding to a volume that a user has entered.

FIG. 8 is a conceptual diagram illustrating an example of an activated limited ray casting rule set corresponding to an object selected by a user.

FIG. 9 is a conceptual diagram illustrating an example of a set of activated sphere ray casting rules corresponding to a volume that a user has entered.

FIG. 10 is a conceptual diagram illustrating an example of an activated set of anchor ray casting rules.

FIG. 11 is a flow diagram illustrating a process for managing multiple simultaneous rays in some implementations of the present technology.

Fig. 12A and 12B are conceptual diagrams illustrating an example of casting a plurality of rays, where one ray is selected as the primary ray for user interaction.

The techniques described herein may be better understood by referring to the following detailed description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements.

Detailed Description

Aspects of the present disclosure relate to managing ray casting in a 3D environment by allowing a controller of an artificial environment element to assign different ray casting rules to the element, setting ray casting rules for the element in response to a trigger, and managing multiple simultaneous rays. Ray casting rules may control properties of a ray, such as the ray's shape (e.g., a line, curve, cone, pyramid, cylinder, sphere, or combination of shapes); size (e.g., diameter and/or length); effect (e.g., the effect of the ray striking an object, whether the ray passes through objects, an automatic action triggered by the ray, or a delay time before a ray dwelling on an object selects that object); where and at what angle the ray emanates (e.g., whether the ray is anchored at one point and oriented by another point; whether the ray originates from the user's hand, eye, or controller; or whether the ray is angled with respect to a particular plane or its angle is user controlled); or how the ray is controlled (e.g., whether the ray is controlled by one hand or by a controller, or whether any two-handed control modifications are applied).
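As a non-limiting illustration, such a collection of ray attributes might be modeled as a simple data object, as in the following Python sketch; the field names and defaults are assumptions made for illustration and are not the data model of this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: field names and defaults are assumptions, not the
# disclosure's actual data model.
@dataclass
class RayCastingRuleSet:
    name: str
    shape: str = "line"              # e.g., "line", "curve", "cone", "sphere"
    length_m: Optional[float] = None # None means unbounded
    origin: str = "fingertip"        # e.g., "fingertip", "palm", "eye", "controller"
    anchor: Optional[str] = None     # second body point that orients the ray
    orientation: str = "free"        # e.g., "free", "down", "normal_to_surface"
    passes_through_objects: bool = False
    dwell_select_s: Optional[float] = None  # delay before a dwell-select fires
    two_handed_control: bool = False

# Example: a "limited" modality cast from the fingertip, about 18 inches long.
LIMITED = RayCastingRuleSet(name="limited", origin="fingertip", length_m=0.45)
```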

An element in a 3D environment may be assigned different rules by the controller of the element. The controller may include, for example, an application that creates an object or an application that controls a volume allocated to it by the artificial reality system. In various implementations, the artificial reality system may provide an interface for the ray aspects that the controller may set. In some implementations, the rules may be organized into ray casting rule sets that define interaction modalities, and the controller may assign one of these predefined rule sets to an element. For example, one rule set may specify a "down" ray casting modality, in which a ray is cast directly down from the user's hand. As another specific example, another rule set may specify that, within the associated volume, the cast ray is 18 inches long, extends from the user's hand, and is anchored at the user's eye.

Any combination of non-conflicting rules may be added to a predefined set of rules, which may then be assigned to different elements or shared for use by other elements. For example, in some cases, a defined rule set may be declaratively applied to an element, such as by setting a rule set name when initializing the element. In some implementations, a rule set may be automatically assigned to an element, for example, based on a mapping of element types to the rule set or based on a determination of how elements (or elements identified as similar) are typically used by a user and a match between use cases and ray casting rules. Further, the controller may assign different sets of rules to the same element to apply in different contexts or use different triggers to cause their activation, e.g., depending on the characteristics of the user, the identified relationship to other elements, the distance between the user and the element, etc.
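The following minimal sketch, under assumed names and thresholds, illustrates how one element might carry several rule sets gated by context or trigger checks, with the first matching assignment winning and a default otherwise.

```python
# Hypothetical: the same element linked to several rule sets, each gated by a
# trigger or context check; the first match wins, otherwise a default applies.
map_volume_rulesets = [
    (lambda ctx: ctx["hand_in_volume"], "magnify_nearest"),
    (lambda ctx: ctx["distance_to_element_m"] > 2.0, "remote"),
]

def pick_ruleset(assignments, ctx, default="normal"):
    for condition, ruleset_name in assignments:
        if condition(ctx):
            return ruleset_name
    return default

print(pick_ruleset(map_volume_rulesets,
                   {"hand_in_volume": True, "distance_to_element_m": 0.4}))
# -> "magnify_nearest"
```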

In some implementations, the controller may also specify affordances that may be associated with an element to indicate how rays will interact with the element. For example, a visual affordance of a down arrow may be added to a volume, indicating that a downward ray casting rule will be applied within that volume. In some implementations, affordances may be associated with a predefined set of rules, which provides consistency in how elements signal which ray casting rules they are mapped to.

Once an application has created an object or been assigned a volume to write into, and has assigned ray casting rules for the element, the element can be rendered with any visual affordances assigned to it. The artificial reality system may then monitor for any triggers that may cause the ray casting rules to change. In some implementations, the trigger may be a portion of the user entering a volume element (e.g., the portion from which a ray is cast). For example, a user placing their hand in a designated volume containing a map object may trigger application of a ray casting rule defined to create a ray whose end captures the nearest selectable portion of the map and magnifies it. In other implementations, the trigger may be selecting an element that has an assigned ray casting rule. For example, a default ray casting modality may cast a ray directly from the user's hand, and the user may use that modality to select a globe object having an assigned rule that casts a ray from the user's eye while mapping rotation of the globe about the X-axis to movement of one of the user's hands and rotation about the Y-axis to movement of the other hand. In still other implementations, ray casting rules may be mapped to gestures or other controls, such that performing the gesture or activating the control will cause the current ray casting rules to change. For example, a ray may be cast straight out from the user's hand when the hand is rotated to one side, and a downward ray may be cast from the user's palm when the user makes a gesture opening the hand with the palm facing down.
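One concrete trigger of the kind described above is volume entry. The sketch below, using illustrative geometry and names, checks whether a tracked hand point lies inside an axis-aligned volume and, if so, switches to that volume's rule set.

```python
# Minimal sketch of one trigger type: detecting that a tracked hand point has
# entered an axis-aligned volume element, which would activate that volume's
# ray casting rule set. Geometry and names are illustrative.
def point_in_volume(point, vol_min, vol_max):
    return all(lo <= p <= hi for p, lo, hi in zip(point, vol_min, vol_max))

hand = (0.1, 0.95, -0.45)                       # tracked hand position (meters)
map_volume = {"min": (-0.5, 0.8, -0.8),
              "max": (0.5, 1.2, -0.2),
              "ruleset": "magnify_nearest"}

active_ruleset = "normal"
if point_in_volume(hand, map_volume["min"], map_volume["max"]):
    active_ruleset = map_volume["ruleset"]      # trigger fired: switch rules
print(active_ruleset)                           # -> "magnify_nearest"
```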

When a trigger is identified, the artificial reality system may retrieve the ray casting rules associated with the trigger (e.g., the ray casting rules associated with the element in which the trigger occurred or associated with a gesture or other control). In some implementations, the artificial reality system may display a transition that signals the change in ray casting rules to the user, such as an animation of a ray changing orientation or morphing its shape. The artificial reality system may then apply these rules, interpreting further actions of the user to control ray casting under the applied rules.

In some implementations, the artificial reality system may cast multiple rays simultaneously, all controlled by the same feature of the user (e.g., a hand, an eye, a controller, etc.). The artificial reality system may determine which ray is active at any given time by evaluating conditions such as where the artificial reality system determines the user's focus to be, which rays intersect actionable objects, and/or priority levels established among the rays. In various implementations, rays that are not identified as the primary ray may be hidden or displayed as diminished compared to the primary ray. When the user indicates an action (e.g., bringing a finger and thumb together to indicate a "click"), the action is performed with respect to the ray that is determined to be primary at that time.

Embodiments of the disclosed technology may include or be implemented in conjunction with an artificial reality system. Artificial reality or extra reality (XR) is a form of reality that has been adjusted in some way before presentation to the user, which may include, for example, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include fully generated content or generated content combined with captured content (e.g., real-world photos). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereoscopic video that produces a three-dimensional effect to a viewer). Additionally, in some embodiments, the artificial reality may be associated with an application, product, accessory, service, or some combination thereof, for example, for creating content in the artificial reality and/or for use in the artificial reality (e.g., performing an activity in the artificial reality). An artificial reality system that provides artificial reality content may be implemented on a variety of platforms, including a Head Mounted Display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, a "cave" environment or other projection system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

As used herein, "virtual reality" or "VR" refers to an immersive experience in which the user's visual input is controlled by the computing system. "augmented reality" or "AR" refers to a system in which users view images of the real world after they pass through a computing system. For example, a tablet computer with a camera on the back may capture an image of the real world and then display the image on a screen on the opposite side of the tablet computer from the camera. The tablet may process, adjust, or "enhance" the image as it passes through the system, such as by adding virtual objects. "mixed reality" or "MR" refers to a system in which light entering the user's eye is generated in part by a computing system and is composed in part of light reflected by objects in the real world. For example, the MR headset may be shaped as a pair of glasses with a through-display (pass-through display), which allows light from the real world to pass through a waveguide that simultaneously emits light from a projector in the MR headset, allowing the MR headset to present virtual objects that blend with real objects that the user can see. As used herein, "artificial reality," "additional reality," or "XR" refers to any of VR, AR, MR, or any combination or mixture thereof.

There are existing XR systems for casting rays that select and interact with objects in an artificial reality environment. However, such ray casting systems may be inaccurate, imprecise, and/or provide limited functionality in some situations. For example, a system that only casts rays straight out from the hand may be difficult to use when selecting occluded objects. As another example, a system that only casts infinitely long rays may be difficult to use to select nearby targets. Conversely, rays that are only cast over a short distance cannot be used to interact with distant objects. Thus, existing XR systems that provide a single ray casting modality force the user to use that modality regardless of whether another modality would be more effective in some cases. In addition, systems that provide ray adjustment (for example, allowing a user to adjust ray curvature) place the responsibility of selecting ray casting rules on the user. Existing ray casting systems can be inaccurate, frustrating, and time-consuming for an end user to operate.

The ray casting rule switching systems and processes described herein are expected to overcome these problems associated with conventional XR interaction techniques, providing users with greater control over object interactions, offering more functionality, being more natural and intuitive than interactions in existing XR systems, and not requiring the end user to manually select ray casting rules. Although natural and intuitive, the systems and processes described herein are rooted in computerized artificial reality systems rather than simulations of traditional object interactions. For example, existing object interaction techniques do not describe multiple different ray casting rules for the same 3D space, let alone provide customization of the rules for different elements. Furthermore, existing XR systems do not provide multiple rays cast based on the same user control, where the ray used to implement a user action is selected based on context and intent.

Several implementations are discussed in more detail below with reference to the figures. Fig. 1 is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology may operate. These devices may include hardware components of a computing system 100, the computing system 100 managing ray casting in a 3D environment by allowing different ray casting rules to be assigned to elements, setting ray casting rules for elements in response to triggers, and managing multiple simultaneous rays. In various implementations, computing system 100 may include a single computing device 103 or multiple computing devices (e.g., computing device 101, computing device 102, and computing device 103) that communicate over wired or wireless channels to distribute processing and sharing of input data. In some implementations, the computing system 100 may include a stand-alone head-mounted device capable of providing a computer-created or enhanced experience for a user without the need for external processing or sensors. In other implementations, the computing system 100 may include multiple computing devices, such as a headset and a core processing component (e.g., a console, mobile device, or server system), with some processing operations being performed on the headset and other processing operations being offloaded to the core processing component. An example headset is described below in conjunction with fig. 2A and 2B. In some implementations, the location and environment data may be collected only by sensors incorporated in the headset device, while in other implementations, one or more non-headset computing devices may include sensor components capable of tracking environmental or location data.

The computing system 100 may include one or more processors 110 (e.g., Central Processing Units (CPUs), Graphics Processing Units (GPUs), Holographic Processing Units (HPUs), etc.). Processor 110 may be a single processing unit or a plurality of processing units in a device, or distributed across multiple devices (e.g., distributed across two or more computing devices 101 and 103).

Computing system 100 may include one or more input devices 120 that provide input to processor 110, notifying them of actions. These actions may be facilitated by a hardware controller that interprets signals received from an input device and communicates the information to the processor 110 using a communication protocol. Each input device 120 may include, for example, a mouse, keyboard, touch screen, touch pad, wearable input device (e.g., haptic glove, bracelet, ring, earring, necklace, watch, etc.), camera (or other light-based input device, such as an infrared sensor), microphone, or other user input device.

The processor 110 may be coupled to other hardware devices, for example, using an internal or external bus, such as a PCI bus, SCSI bus, or wireless connection. The processor 110 may communicate with a hardware controller of a device, such as the display 130. Display 130 may be used to display text and graphics. In some implementations, the display 130 includes an input device as part of the display, such as when the input device is a touch screen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: LCD display screens, LED display screens, projection displays, holographic or augmented reality displays (such as head-up display devices or head-mounted devices), and the like. Other I/O devices 140 may also be coupled to the processor, such as a network chip or card, a video chip or card, an audio chip or card, a USB, firewire or other external device, a camera, a printer, a speaker, a CD-ROM drive, a DVD drive, a disk drive, etc.

Computing system 100 may include communication devices capable of wireless or wired communication with other local computing devices or network nodes. The communication device may communicate with another device or server over a network using, for example, the TCP/IP protocol. Computing system 100 may distribute operations across multiple network devices using communication devices.

The processor 110 may access the memory 150, and the memory 150 may be contained on one of the computing devices of the computing system 100, or may be distributed across multiple computing devices of the computing system 100 or other external devices. The memory includes one or more hardware devices for volatile or non-volatile storage, and may include read-only memory and writeable memory. For example, the memory may include one or more of Random Access Memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable non-volatile memory, such as flash memory, hard disks, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, and the like. The memory is not a propagating signal divorced from the underlying hardware; thus, the memory is non-transitory. The memory 150 may include a program memory 160 that stores programs and software such as an operating system 162, a ray manager 164, and other application programs 166. Memory 150 may also include data storage 170, which may store data that may be provided to the program memory 160 or any element of computing system 100.

Some implementations may operate with many other computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, XR headsets, personal computers, server computers, hand-held or laptop devices, cellular telephones, wearable electronic devices, gaming consoles, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.

Fig. 2A is a line drawing of a virtual reality Head Mounted Display (HMD) 200, in accordance with some embodiments. The HMD 200 includes a front rigid body 205 and a band 210. The front rigid body 205 includes one or more electronic display elements of an electronic display 245, an Inertial Motion Unit (IMU) 215, one or more position sensors 220, locators 225, and one or more computing units 230. The position sensor 220, IMU 215, and computing unit 230 may be internal to the HMD 200 and may not be visible to the user. In various implementations, the IMU 215, position sensor 220, and locators 225 may track the motion and location of the HMD 200 in three degrees of freedom (3DoF) or six degrees of freedom (6DoF) in real-world and virtual environments. For example, the locators 225 may emit infrared beams that produce light spots on real objects around the HMD 200. One or more cameras (not shown) integrated with the HMD 200 may detect the light spots. The computing unit 230 in the HMD 200 may use the detected light spots to extrapolate the position and movement of the HMD 200, as well as identify the shape and position of real objects around the HMD 200.

An electronic display 245 may be integrated with the front rigid body 205 and may provide image light to the user as directed by the computing unit 230. In various embodiments, the electronic display 245 may be a single electronic display or multiple electronic displays (e.g., one display for each eye of the user). Examples of electronic display 245 include: a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, an active matrix organic light emitting diode display (AMOLED), a display including one or more quantum dot light emitting diode (QOLED) sub-pixels, a projector unit (e.g., a microLED, a laser, etc.), some other display, or some combination thereof.

In some implementations, HMD 200 may be coupled to a core processing component, such as a Personal Computer (PC) (not shown) and/or one or more external sensors (not shown). External sensors may monitor the HMD 200 (e.g., through light emitted from the HMD 200), which the PC may use in conjunction with the output from the IMU 215 and the position sensor 220 to determine the positioning and movement of the HMD 200.

In some implementations, HMD 200 may communicate with one or more other external devices, such as a controller (not shown) that a user may hold with one or both hands. The controller may have its own IMU unit, position sensor and/or may emit more spots. The HMD 200 or external sensor can track these controller spots. The computing unit 230 or core processing components in the HMD 200 may use this tracking in conjunction with the IMU and position output to monitor the user's hand position and motion. The controller may also include various buttons that the user may actuate to provide input and interact with the virtual object. In various implementations, HMD 200 may also include additional subsystems, such as an eye tracking unit, an audio system, various network components, and so forth. In some implementations, one or more cameras included in or external to HMD 200 may monitor the position and pose of the user's hand, instead of or in addition to the controller, to determine gestures and other hand and body movements.

Fig. 2B is a line drawing of a mixed reality HMD system 250 that includes a mixed reality HMD 252 and a core processing component 254. As shown by link 256, the mixed reality HMD 252 and the core processing component 254 may communicate over a wireless connection (e.g., a 60GHz link). In other implementations, the mixed reality system 250 includes only a head mounted device, no external computing device, or other wired or wireless connections between the mixed reality HMD 252 and the core processing component 254. The mixed reality HMD 252 includes a pass-through display 258 and a frame 260. The frame 260 may house various electronic components (not shown), such as light projectors (e.g., lasers, LEDs, etc.), cameras, eye tracking sensors, MEMS components, network components, and so forth.

The projector may be coupled to a pass-through display 258, for example, through optical elements, to display media to a user. The optical elements may include one or more waveguide assemblies, reflectors, lenses, mirrors, collimators, gratings, or the like, for directing light from the projector to the eye of the user. Image data may be transmitted from the core processing component 254 to the HMD 252 via link 256. A controller in HMD 252 may convert the image data into light pulses from a projector that may be transmitted as output light to the user's eye through optical elements. The output light may be mixed with light passing through the display 258, allowing the output light to present virtual objects that appear as if they existed in the real world.

Similar to HMD 200, HMD system 250 may also include motion and position tracking units, cameras, light sources, etc., which allow HMD system 250 to track itself, e.g., in 3DoF or 6DoF, track portions of the user (e.g., hands, feet, head, or other body parts), map virtual objects to appear stationary as HMD 252 moves, and react to gestures and other real world objects.

Fig. 3 is a block diagram illustrating an overview of an environment 300 in which some implementations of the disclosed technology may operate. The environment 300 may include one or more client computing devices 305A-305D, examples of which may include the computing system 100. In some implementations, some client computing devices (e.g., client computing device 305B) may be HMD 200 or HMD system 250. The client computing device 305 may operate in a networked environment using logical connections to one or more remote computers (e.g., server computing devices) over a network 330.

In some implementations, server 310 may be an edge server that receives client requests and coordinates the implementation of those requests by other servers (e.g., servers 320A-320C). Server computing devices 310 and 320 may comprise a computing system, such as computing system 100. Although each server computing device 310 and 320 is logically shown as a single server, the server computing devices may each be a distributed computing environment, including multiple computing devices located at the same or geographically different physical locations.

Client computing device 305 and server computing devices 310 and 320 may each act as a server or a client to other servers/client devices. The server 310 may be connected to a database 315. Servers 320A-320C may each be connected to a respective database 325A-325C. As described above, each server 310 or 320 may correspond to a group of servers, and each of these servers may share a database or may have their own database. Although databases 315 and 325 are logically shown as a single unit, databases 315 and 325 may each be a distributed computing environment containing multiple computing devices, may be located within their respective servers, or may be located at the same or geographically different physical locations.

Network 330 may be a Local Area Network (LAN), Wide Area Network (WAN), mesh network, hybrid network, or other wired or wireless network. The network 330 may be the internet or some other public or private network. The client computing device 305 may connect to the network 330 through a network interface, such as through wired or wireless communication. Although the connections between server 310 and server 320 are shown as separate connections, these connections may be any kind of local area network, wide area network, wired or wireless network, including network 330 or a separate public or private network.

Fig. 4 is a block diagram illustrating components 400 that, in some implementations, may be used in a system employing the disclosed technology. The components 400 may be included in one device of computing system 100 or may be distributed across multiple devices of computing system 100. The components 400 include hardware 410, a mediator 420, and specialized components 430. As described above, a system implementing the disclosed technology may use a variety of hardware including a processing unit 412, working memory 414, input and output devices 416 (e.g., cameras, displays, IMU units, network connections, etc.), and storage memory 418. In various implementations, the storage memory 418 may be one or more of: a local device, an interface to a remote storage device, or a combination thereof. For example, storage memory 418 may be one or more hard disk drives or flash drives accessible over a system bus, or may be a cloud storage provider (e.g., in storage 315 or 325) or other network storage accessible over one or more communication networks. In various implementations, the components 400 may be implemented in a client computing device, such as the client computing device 305, or on a server computing device, such as the server computing device 310 or 320.

Mediator 420 may include components that mediate resources between hardware 410 and the specialized components 430. For example, mediator 420 may include an operating system, services, drivers, a Basic Input Output System (BIOS), controller circuitry, or other hardware or software systems.

The specialized components 430 may include software or hardware configured to perform operations for managing different ray casting rules in a 3D environment. The specialized components 430 may include, for example, ray casting rule sets and element mappings 434, a trigger detection module 436, an element controller 438, a ray controller 440, a multi-ray result selector 442, and components and APIs (e.g., interface 432) that may be used to provide a user interface, transfer data, and control the specialized components. In some implementations, components 400 may be in a computing system distributed across multiple computing devices, or may be an interface to a server-based application executing one or more of the specialized components 430. Although depicted as separate components, the specialized components 430 may be logical or other nonphysical groupings of functions and/or may be sub-modules or code blocks of one or more applications.

The ray casting rule sets and element mappings 434 may include sets of ray casting rules that have been mapped to particular elements. Although illustrated as a particular module, in some implementations, the ray casting rule sets and element mappings 434 may be distributed among different data sources, such as in a file or other data object associated with a particular application, defined in the code of an application, specified in a database, and so forth. In some implementations, the ray casting rule sets and element mappings 434 may include mappings to predefined ray casting rule sets, e.g., by referencing identifiers of defined ray casting rule sets.

Examples of ray casting rule sets include a "normal" ray casting rule set, a "limited" ray casting rule set, a "down" ray casting rule set, a "remote" ray casting rule set, an "anchor" ray casting rule set, a "forward" ray casting rule set, a "curved" ray casting rule set, and a "sphere" ray casting rule set.

The set of "normal" ray casting rules may specify rays originating from the user's fingertip or from the controller that extend along a line defined by A) the fingertip to the user's wrist or B) the centerline of the controller. The light ray may be an infinitely long straight line. The "limited" ray casting rule set may specify rays in the same manner as the normal ray casting rule set, except that rays are only of a fixed length (e.g., 18 inches, 3 feet, etc.).

The "downward" ray casting rule set may specify a ray originating from a point on the controller or user's hand (e.g., the center of her palm) that is directed downward to be perpendicular to the floor plane or a plane defined as the bottom of the volume. In some implementations, instead of using a floor plane or the bottom of a volume, the set of ray casting rules may use another defined surface that rays remain perpendicular to.

A "remote" ray casting rule set may operate similar to a downward ray casting rule set, except that the origin of the ray is on a designated surface and moves relative to the user's hand or controller, which may be away from the defined origin surface. In some implementations, the magnitude of the user movement can be modified on the origin surface such that large hand movements produce small changes at the light origin, or such that small hand movements produce large changes at the light origin. In some implementations, the user's hand movement may map a 2D left/right/front/back movement to a left/right/up/down movement of the ray origin.

An "anchor" ray casting rule set may specify the location of a ray based on an origin and a control point. A line extending from the origin through the control point may be set as the center of the ray. The origin may be a tracked portion of the user's body, such as the dominant eye, hip, or shoulder, and the control point may be a controller or a portion of the user's hand, such as a fingertip, palm, wrist, or fist.

The "forward" ray casting rule set may specify that the rays are all perpendicular to the input surface. For example, if the input surface is a flat panel, the light originates from the user's hand, but is oriented perpendicular to the surface regardless of the angle of the user's hand.

A "curved" ray casting rule set may operate like a normal ray casting rule set except that lines emanating from an origin are curved (e.g., bent downward) with a particular curvature.

A "sphere" ray casting rule set may specify that the lines of traditional light be replaced with a sphere that is fixed on the user's hand or on the controller.

The above examples of ray casting rule sets are not exhaustive; in fact, there are countless ray casting rules that can be set using the disclosed techniques. Some additional examples of projection (e.g., ray) interaction systems that may be selected or customized using the disclosed techniques are described in U.S. patent application No. 16/578,221, entitled "PROJECTION CASTING IN VIRTUAL ENVIRONMENTS," U.S. patent application No. 16/578,236, entitled "GLOBAL AND LOCAL MODE VIRTUAL OBJECT INTERACTIONS," U.S. patent application No. 16/578,240, entitled "THREE-STATE GESTURE VIRTUAL CONTROLS," U.S. patent application No. 16/578,260, entitled "VIRTUAL INTERACTIONS AT A DISTANCE," and U.S. patent application No. 16/661,945, entitled "3D INTERACTIONS WITH WEB CONTENT," each of which is incorporated herein by reference in its entirety. Additional details regarding the assignment and storage of ray casting rule to element mappings are discussed below in conjunction with FIG. 5A.

The trigger detection module 436 may detect when a trigger event occurs, signaling that the system should switch ray casting rules. In various implementations, trigger detection may occur by registering which events will trigger application of a ray casting rule change, or the system may determine the trigger events corresponding to each mapping from the ray casting rule sets and element mappings 434. For example, a trigger event for a ray casting rule change may be specified for various circumstances, such as when a portion of the user enters a volume mapped to a new ray casting rule, when the user interacts with an object element mapped to a new ray casting rule, or when the user performs a gesture mapped to a new ray casting rule. Additional details regarding detecting a trigger for changing the ray casting rules are discussed below in connection with block 554 of FIG. 5B.

The element controller 438 may apply an indicator to the drawn element to indicate to the user that entering or interacting with the element will result in a change in the ray casting rules. For example, icons, colors, highlights, or text may be displayed that will provide signals to the user that the transition between ray casting rules is expected, eliminating potential confusion. Additional details of displaying elements according to associated ray casting rules are discussed below in connection with block 552 of FIG. 5B.

When a mapping is triggered by the trigger detection module 436, the ray controller 440 may receive the ray casting rules from the ray casting rule sets and element mappings 434. The ray controller 440 may then change the attributes of the ray, such as its shape, size, angle, effect, origin, control dynamics, etc., according to the received ray casting rules. In some implementations, the ray controller 440 may also cast multiple rays simultaneously, where only the one of the multiple rays selected as primary by the multi-ray result selector 442 will be used to interact with environment elements. In various implementations, only the primary ray is displayed to the user, or non-primary rays are displayed as diminished compared to the primary ray. Additional details regarding retrieving and applying ray casting rules are discussed below in connection with blocks 556 and 560 of FIG. 5B. Additional details regarding casting multiple rays are discussed below in conjunction with FIGS. 11, 12A, and 12B.

The multi-ray result selector 442 may determine which of the multiple rays cast by the ray controller 440 is the primary ray by applying priority rules. The primary ray is the ray the user can use for interaction, while the other rays are inactive until determined to be the primary ray. In various implementations, the rules that determine which ray is primary may be based on various factors, such as: a predefined hierarchy of the rays, a determination of where the user's focus is located, and/or whether a ray is identified as actionable. In some implementations, many of these factors may be applied by using certain factors as binary selectors (i.e., filters) to obtain the rays that qualify to be selected as primary, using factors as weights to determine the relative importance of rays, and/or sorting the rays. Additional details regarding selecting a primary ray from multiple rays are discussed below in connection with block 1104 of FIG. 11 and FIGS. 12A and 12B.
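A minimal sketch of such a selection, combining a binary filter (actionable rays only), simple weights, and a final pick, is shown below; the specific weights are illustrative assumptions rather than values from this disclosure.

```python
# Sketch of primary-ray selection combining a binary filter (actionable rays
# only), illustrative weights, and a final ranking/pick.
def select_primary_ray(rays):
    """rays: dicts with 'id', 'actionable', 'hits_focus', 'priority' keys."""
    candidates = [r for r in rays if r["actionable"]] or rays      # filter
    def score(r):
        return (2.0 if r["hits_focus"] else 0.0) + r["priority"]   # weights
    return max(candidates, key=score)                              # rank/pick

rays = [
    {"id": "straight", "actionable": True, "hits_focus": False, "priority": 1.0},
    {"id": "downward", "actionable": True, "hits_focus": True,  "priority": 0.5},
]
print(select_primary_ray(rays)["id"])  # -> "downward"
```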

Those skilled in the art will appreciate that the components shown in fig. 1-4 above, as well as the components in each of the flow diagrams discussed below, may be varied in a number of ways. For example, the order of logic may be rearranged, sub-steps may be performed in parallel, illustrated logic may be omitted, other logic may be included, and so on. In some implementations, one or more of the components described above may perform one or more of the processes described below.

FIG. 5A is a flow diagram illustrating a process 500 for assigning ray casting rules to elements in some implementations of the present technology. In some implementations, the process 500 may be performed on a computing system at the direction of a content creator or developer. For example, a developer of an application may use process 500 to assign ray casting rules to elements or element types that the application may create.

At block 502, the process 500 receives an assignment of ray casting rules for an element or a type of element. In various implementations, the ray casting rule assignment may be made in various ways, such as in program code, in a mapping of A) elements or element types to B) defined ray casting rules or ray casting rule sets, as declarations or in other structured formats, and so on.

For example, the code of an application may include a function call from the application to a control program of the XR system to create a virtual object or to request allocation of a volume (i.e., an allocated element) for the application to draw into. The function call may include parameters identifying one or more ray casting rules corresponding to the element (e.g., specifying the name of an established set of ray casting rules, providing a reference to a ray casting rule data object defining one or more rules, etc.). As a more specific example, the shell of the XR system may provide an interface to request volumes such as getVolume(volumeID, volumeName, defaultLocation, volumeSize, rayCastingRuleSet), where the final parameter, rayCastingRuleSet, takes a list of ray casting rules (or an identifier of such a list) specifying the rules that will be implemented when the user enters the volume returned by the shell program. A developer may include such lists in a program, provide instructions that cause the program to dynamically build such lists, or specify an external source from which to pull the lists (e.g., see the declarative mapping examples below). In another case, a mapping may be used to define pairs between an element or element type and a ray casting rule or set of ray casting rules. For example, an application developer may use a tool that allows the developer to use a GUI or other interface to select rules that apply to particular elements created by the application, e.g., "standard_box_volume" → "traditional ray"; "table_top_volume" → "down"; "starry_sky_volume" → "two-handed variable diameter"; etc., so that when an application requests a volume with that name, the corresponding ray casting rules are set. Similarly, a ray casting rule set for an element or element type may be specified as structured data (e.g., as an XML or JSON blob). For example, a "down" rule set may be defined in a JSON blob as { ray_orientation: parallel_to_Y_axis; ray_direction: away_from_user; ray_origin: palm_center; ray_object_passthrough: no; ray_length: infinite; ray_effect: standard; [etc.] }.
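The following minimal sketch assumes a hypothetical shell interface named get_volume(); the parameter names mirror the interface described above but are not this disclosure's actual API.

```python
# Hypothetical shell call: returns a volume record carrying the rule set that
# the XR system would apply when the user enters the volume.
DOWN_RULESET = {
    "ray_orientation": "parallel_to_y_axis",
    "ray_direction": "away_from_user",
    "ray_origin": "palm_center",
    "ray_object_passthrough": False,
    "ray_length": "infinite",
    "ray_effect": "standard",
}

def get_volume(volume_id, volume_name, default_location, volume_size,
               ray_casting_rule_set):
    return {"id": volume_id, "name": volume_name, "location": default_location,
            "size": volume_size, "ruleset": ray_casting_rule_set}

table_top = get_volume("vol-1", "table_top_volume",
                       (0.0, 0.9, -0.5), (1.0, 0.1, 0.6), DOWN_RULESET)
```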

In various implementations, the ray casting rules may be assigned to individual elements, e.g., when an application creates an element, the application may be configured to assign a set of ray casting rules to the element, or the element may be assigned a type that has been set (e.g., by an application developer) to be associated with a predefined set of ray casting rules. For example, in the mapping example above, "table_top_volume" → "down" specifies that a volume assigned the "table_top_volume" type should be assigned the defined ray casting rule set labeled "down". Further, ray casting rules may be assigned to volumes (which may have different shapes or other attributes) or to objects (which may be real or virtual). For example, when an application is granted a volume to write into, the volume may have ray casting rules that will be activated when a user enters the volume. As another example, when an application causes a virtual object to be created, the object may be assigned ray casting rules that are activated when the object is picked up. In yet another example, when the application is active and provided with an identification of a real-world object, the application may provide back a set of ray casting rules to apply when the object is selected or the user is within a threshold distance of the object.

In various implementations, assigning a ray casting rule to an element may also specify a trigger that will result in the ray casting rule being implemented. For example, a trigger may specify that a ray casting rule applies based on entering a volume, selecting an object, touching an object, being within a threshold distance of an element, a specified relationship between elements (e.g., implementing a rule when an object of a particular type enters a volume), performing a particular gesture, or activating a control, etc.

In some implementations, multiple ray casting rule sets may be applied to the same element or to a set of nested elements (e.g., objects or volumes within another volume). The ray casting rule sets may be associated with different contexts or different triggers that cause their activation. For example, contexts for activating a ray casting rule set include identifying certain features of the user, identified relationships with other elements, distances between the user and the element, and the like. In some implementations, the ray casting rule sets may have an inheritance system, whereby nested elements use the rule set of the parent element unless a rule is overridden by a ray casting rule assigned to the nested element. In other implementations, nested elements having assigned ray casting rules that do not specify a particular type of ray casting rule may use default values for those unspecified ray casting rules.
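A short sketch of such inheritance, under an assumed dictionary-based structure, resolves a nested element's rules by overriding parent entries while falling back to parent values and defaults for anything unspecified.

```python
# Sketch of rule-set inheritance for nested elements: a child element falls
# back to its parent's rules for any property it does not override.
def resolve_rules(rule_chain, defaults):
    """rule_chain: rule dicts ordered from outermost parent to innermost child."""
    resolved = dict(defaults)
    for rules in rule_chain:
        resolved.update(rules)        # child entries override parent entries
    return resolved

defaults = {"shape": "line", "length_m": None, "origin": "fingertip"}
parent_volume = {"origin": "palm", "orientation": "down"}
nested_volume = {"length_m": 0.45}    # inherits palm origin, downward orientation
print(resolve_rules([parent_volume, nested_volume], defaults))
```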

In some implementations, the XR system can automatically identify ray casting rules to apply to an element. For example, element features may be mapped to various ray casting rule sets, or user interactions with an element (or with elements identified as similar, e.g., by a machine learning system) may be observed, and the XR system may recognize that these types of interactions are typically facilitated by switching to a particular ray casting rule set. In some implementations, such mapping or interaction-type pairing can be performed using a machine learning model, where the model can be trained, e.g., using training items that pair: A) developer-selected ray casting rules with element features, B) user-selected ray casting rules with particular elements, or C) ray casting rules with metrics determined from observed user interactions. For example, the amount of time it takes a user to perform an action and/or the number of times the action must be attempted may be tracked for an element type to determine which ray casting rule sets produce better interaction metrics, which the system may then use as positive and negative training examples to select ray casting rules for a new element in view of its features.

At block 504, the process 500 may store one or more indications of associations between elements or element types and ray casting rules from block 502. In various implementations, these indications may be stored as part of the source code or in a local or remote data resource accessible to the program (e.g., in a database, data file, local or global variable, etc.). In some implementations, the set of ray casting rules may be established as a shareable ray casting modality that may be transmitted and used by other developers. Thus, the ray casting rules framework described herein may serve as a basis for creating a standard library of ray casting rule sets that may be selected for various use cases. This may allow a developer or automated system to identify the interaction type of an element and select or have the system automatically select the ray casting rule set identified as best suited for that interaction type.

FIG. 5B is a flow diagram illustrating a process 550, used in some implementations of the present technology, for applying ray casting rules corresponding to triggering events. In some implementations, process 550 may be performed by the XR system while running an application that creates and controls an element for which process 500 (FIG. 5A) has assigned ray casting rules. Blocks 552 and 558 of process 550 are shown in dashed lines to explicitly indicate that in some implementations these steps are not performed. However, as noted above, in various implementations other steps are also not required and may be rearranged or removed. When the application causes virtual objects to be created and/or requests that a virtual space be written to, the application may provide the ray casting rules defined during process 500 and/or the XR system may generate them "just in time."

At block 552, the process 550 may cause the display of a visual indicator or "affordance" that signals an element as having an alternative ray casting rule set. In various implementations, these affordances may be displayed at all times or only upon certain events, such as when the corresponding element is selected or identified as a target of the user's gaze. For example, when a volume having alternative ray casting rules is selected, a "wireframe" may be placed around the volume, and icons defined for the set of ray casting rules associated with the volume may be placed on the box defined by the wireframe (see, e.g., icons 606 and 610 on the wireframe of volume 604 in FIG. 6).

In some implementations, when an application causes an object to be created or requests that a volume be written, the application may provide to a system controller (e.g., a shell program or operating system) of the XR system the ray casting rules to be applied to the element (as defined by process 500 of FIG. 5A) and one or more affordances (e.g., icons) to be set for the element. In other implementations, the application controlling the elements may manage the affordances displayed for those elements. In still other implementations, the system controller may automatically provide an affordance, for example, an indicator signaling that a ray casting rule has changed for the element, or a pre-mapped affordance used for a particular type of ray casting rule or set of ray casting rules.

At block 554, the process 550 may determine whether any trigger event for changing the ray casting rules has occurred. In various implementations, the XR system, the active application, or both may be responsible for determining whether a triggering event has occurred. A trigger event may include any identifiable action that can be mapped to a set of ray casting rules. Some triggering events may include entering a volume element, selecting or picking up an object element, performing a particular gesture, or activating an explicit control (e.g., a control in a menu) for changing ray casting rules. In some implementations, when an application causes a virtual object to be created or requests that a virtual space be written to, the application may provide to the XR system the trigger events that will cause the ray casting rules to change, allowing the XR system to set up registrations to monitor those events and implement the indicated ray casting rules when the trigger events occur. In the first case, the trigger event may be associated with a particular element (e.g., activated when a user's hand or controller enters the volume), while in the second case, the trigger event may be generic (e.g., activated when a user performs a particular gesture). In the first case, the trigger event may be linked to the ray casting rules through the ray casting rules associated with the element. In the second case, the trigger event may be linked directly to the ray casting rules in a mapping of the XR system.

In some implementations, multiple sets of ray casting rules may be mapped to the same element, corresponding to different triggers. The process 550 may be configured to detect triggers in different contexts, e.g., depending on characteristics of the user, relationships between identified elements, distances between the user and the elements, etc. As a more specific example, a first set of ray casting rules may be applied when a volume element is selected from the outside (first trigger), and a second set of ray casting rules may be applied when a user's hand enters the volume (second trigger). Each of these triggers may be registered with the XR system for monitoring and linked to a corresponding rule set.
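
A minimal sketch of how triggers might be registered and polled each frame is shown below; the registry, the frame-state dictionary, and the rule-set names are assumptions made for the example, not an actual XR system interface.

```python
class TriggerRegistry:
    def __init__(self):
        self._entries = []  # (element_id, trigger_fn, ruleset_name)

    def register(self, element_id, trigger_fn, ruleset_name):
        self._entries.append((element_id, trigger_fn, ruleset_name))

    def poll(self, frame_state):
        """Return the rule-set names whose triggers fired this frame."""
        return [name for _, fn, name in self._entries if fn(frame_state)]

registry = TriggerRegistry()
# First trigger: the volume is selected from the outside -> one rule set.
registry.register("volume_612", lambda s: s.get("selected") == "volume_612", "limited_outside")
# Second trigger: the user's hand enters the same volume -> a different rule set.
registry.register("volume_612", lambda s: s.get("hand_in_volume") == "volume_612", "limited_inside")

fired = registry.poll({"hand_in_volume": "volume_612"})
print(fired)  # ['limited_inside']
```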

If no triggering event is detected, the process 550 may return to block 552. If a triggering event is detected, process 550 may continue to block 556.

At block 556, the process 550 may retrieve the ray casting rules that match the trigger event detected at block 554. When the trigger event is associated with a particular element (e.g., a volume is entered or an object is selected), the retrieved ray casting rules may be the ray casting rules associated with that element (and, in some cases, with the particular trigger for that element). When the trigger event is not associated with a particular element (e.g., a ray-casting-rule-switching gesture is performed), the trigger event may be mapped to particular ray casting rules in the XR system, which the process 550 may retrieve.
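
The branching described here might look like the following sketch, where the lookup tables and trigger names are placeholders rather than a defined schema.

```python
def retrieve_rules(trigger_event, element_rule_table, system_rule_table):
    """Resolve a trigger to a rule set: element-bound triggers go through the
    element's own table; generic triggers use the system-wide mapping."""
    element_id = trigger_event.get("element_id")
    if element_id is not None:
        return element_rule_table[element_id][trigger_event["kind"]]
    return system_rule_table[trigger_event["kind"]]

element_rule_table = {"volume_612": {"hand_entered": "limited", "selected_outside": "limited_outside"}}
system_rule_table = {"switch_gesture": "anchored"}

print(retrieve_rules({"element_id": "volume_612", "kind": "hand_entered"},
                     element_rule_table, system_rule_table))  # limited
print(retrieve_rules({"element_id": None, "kind": "switch_gesture"},
                     element_rule_table, system_rule_table))  # anchored
```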

At block 558, process 550 may cause a visual or other (e.g., audio or haptic) transition indicator to be provided to signal a transition in the ray casting rules. Although not required, such an indicator may signal to the user that a ray casting rule transition has occurred, significantly reducing user confusion when the interaction pattern changes. For example, when a ray casting rule change causes a change in the ray's origin, size, or angle, an animation may slide or morph the ray to its new origin, new size, or new angle. As another example, where a ray casting rule change results in a change in ray functionality, an example of the new functionality may be displayed and/or a notification explaining the functionality may be provided.

At block 560, the process 550 may apply the retrieved ray casting rules. As described above, this may include numerous possible adjustments, such as changing the ray's direction, origin, control point, length, width, or shape, how the actions of the user's second hand affect the ray, what effect the ray has on the artificial environment, and so forth. Also as described above, in some cases the new ray casting rules may be in a predefined set, providing standard interaction modalities that users can become accustomed to and understand how to interact with. The process 550 may be repeated continuously as the application continues to create objects or request volumes with assigned ray casting rules, or while gestures or controls that map to ray casting rule changes are available.
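
A simplified sketch of applying a retrieved rule set to the current ray state is shown below; the dictionary fields mirror the illustrative structures above and are assumptions rather than an actual schema.

```python
def apply_rules(ray_state, rules):
    """Overwrite only the ray attributes that the retrieved rule set specifies."""
    ray_state["origin"] = rules.get("origin", ray_state["origin"])
    ray_state["direction"] = rules.get("direction", ray_state["direction"])
    ray_state["shape"] = rules.get("shape", ray_state["shape"])
    ray_state["max_length"] = rules.get("max_length", ray_state["max_length"])
    return ray_state

ray = {"origin": "hand", "direction": "forward", "shape": "line", "max_length": None}
print(apply_rules(ray, {"origin": "palm", "direction": "down"}))
# {'origin': 'palm', 'direction': 'down', 'shape': 'line', 'max_length': None}
```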

Fig. 6 is a conceptual diagram illustrating an example 600 of multiple elements displayed in 3D space with different ray casting rule sets. The example 600 includes three virtual elements: a cuboid volume 604 (surrounding object 602), a cuboid object 608, and a spherical volume 612.

The volume 604 has a set of "downward" ray casting rules applied to it. The corresponding visual affordances 606A and 606B are icons displayed when the wireframe of the volume 604 is displayed, indicating that when the user's hand enters the volume 604, the ray will be cast from the user's hand straight down toward the bottom of the volume (see FIG. 7).

The object 608 has a set of "sphere" ray casting rules applied to it. The corresponding visual affordances 610A and 610B are icons displayed on the object 608, indicating that when the object 608 is selected, the ray will be replaced by a sphere originating from the user's hand (see FIG. 9).

The spherical volume 612 has a set of "limited" ray casting rules applied to it. The corresponding visual affordance 614 is an icon that is displayed persistently with the volume 612, indicating that when the user's hand enters the volume 612, the ray will be shortened to a particular length (see FIG. 8).

In the example 600, the user's hand 616 has not entered the volume 604 or 612 and has not selected the object 608. Thus, using the default ray casting rules, a ray 618 is cast directly from the user's hand 616.

FIG. 7 is a conceptual diagram illustrating an example 700 of the set of downward ray casting rules being activated for the volume 604. In the example 700, the user has placed her hand 616 into the volume 604 (not shown; see FIG. 6), which the XR system identifies as a trigger for changing the ray casting rules. In response, the XR system has retrieved and applied the "downward" ray casting rules associated with the volume 604. In this case, applying the ray casting rules includes no longer casting ray 618 (FIG. 6), but instead casting a ray 702 originating from the user's palm and pointing toward the base of the volume 604. This allows the user to easily control objects on the top surface of object 602.

FIG. 8 is a conceptual diagram illustrating an example 800 of the set of limited ray casting rules being activated for the volume 612. In the example 800, the user has placed her hand 616 into the volume 612 (not shown; see FIG. 6), which the XR system identifies as a trigger for changing the ray casting rules. In response, the XR system has retrieved and applied the "limited" ray casting rules associated with the volume 612. In this case, applying the ray casting rules includes replacing ray 618 (which extends to infinity; see FIG. 6) with a shortened ray 802, which ends at a point 804 three feet from the user's hand 616. Objects 806-812 are within the volume 612. The shortened ray 802 intersects objects 806 and 808, causing them to be selected (indicated by bold lines), but ends before object 810, so object 810 is not selected.

FIG. 9 is a conceptual diagram illustrating an example 900 of the set of sphere ray casting rules being activated for the object 608, which the user has selected. In the example 900, the user has moved her hand 616 so that ray 618 (not shown; see FIG. 6) intersects object 608, selecting it, which the XR system recognizes as a trigger to change the ray casting rules. In response, the XR system has retrieved and applied the "sphere" ray casting rules associated with object 608. In this case, applying the ray casting rules includes replacing ray 618 (FIG. 6) with a selection sphere 902 anchored to the user's hand 616. In this example, selection of object 608 causes objects 904-910 to appear. The selection sphere 902 intersects objects 904 and 906, causing them to be selected (indicated by bold lines), but does not intersect objects 908 and 910, which are therefore not selected.

FIG. 10 is a conceptual diagram illustrating an example 1000 in which an "anchored" set of ray casting rules has been implemented. Under these ray casting rules, the position of the ray is based on an origin point (e.g., point 1006 or 1008) and a control point (e.g., point 1002 or 1004). A line extending from the origin point through the control point is set as the center of the ray. In various implementations, the origin point may be a tracked portion of the user's body, such as the dominant eye (point 1006), hip, or shoulder (point 1008), and the control point may be a portion of the user's hand, such as a fingertip (point 1004), palm (point 1002), wrist, or fist. Anchoring the ray using a combination of an origin point and a control point can be more stable and accurate than conventional ray casting. Additional details regarding anchored ray projection are provided in U.S. patent application No. 16/578,221, entitled PROJECTION CASTING IN VIRTUAL ENVIRONMENTS, which is incorporated herein by reference.
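
A minimal sketch of this geometry, assuming tracked world-space positions for the origin and control point; the function name and the coordinates are illustrative only.

```python
import numpy as np

def anchored_ray(origin, control_point):
    """Return (start, unit_direction) for a ray centered on the line from the
    origin (e.g. dominant eye or shoulder) through the control point (e.g. fingertip)."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(control_point, dtype=float) - origin
    norm = np.linalg.norm(direction)
    if norm == 0:
        raise ValueError("origin and control point coincide")
    return origin, direction / norm

# Dominant eye as origin, fingertip as control point (meters, world space).
start, direction = anchored_ray([0.0, 1.6, 0.0], [0.3, 1.4, 0.5])
print(direction)
```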

FIG. 11 is a flow diagram illustrating a process 1100 for managing multiple simultaneous rays in some implementations of the present technology. Process 1100 may be performed by an XR system when a user interacts with elements in an environment.

At block 1102, the process 1100 may cast a plurality of rays having different attributes. The attributes of the rays may include different angles, different geometries (e.g., shapes or curvatures), different origins, different reactions to user motion, or different effects caused when the rays interact with elements in the environment. For example, the process 1100 may cast a first ray straight out from the user's hand, a second ray from the user's hand that curves toward the ground, a third ray from the user's hand directed at the floor, and a fourth ray that originates at the user's dominant eye and is anchored by a control point at the user's fingertip.
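
The block below sketches how such a set of differently configured rays might be represented and resolved against tracked poses each frame; all names and fields are illustrative assumptions.

```python
simultaneous_rays = [
    {"id": "hand_straight", "origin": "hand",         "path": "straight"},
    {"id": "hand_curved",   "origin": "hand",         "path": "curves_toward_ground"},
    {"id": "hand_to_floor", "origin": "hand",         "path": "aimed_at_floor"},
    {"id": "eye_anchored",  "origin": "dominant_eye", "path": "straight",
     "control_point": "fingertip"},
]

def cast_rays(ray_defs, tracking):
    """Resolve each ray definition against tracked body poses (stubbed here)."""
    return [{**ray, "world_origin": tracking[ray["origin"]]} for ray in ray_defs]

tracking = {"hand": (0.3, 1.2, 0.4), "dominant_eye": (0.0, 1.6, 0.0)}
print(cast_rays(simultaneous_rays, tracking)[0]["world_origin"])  # (0.3, 1.2, 0.4)
```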

At block 1104, the process 1100 may use rules to determine which of the plurality of rays cast at block 1102 is the primary ray. The primary ray (or "chief ray") is the ray the user can use to interact with the world, while the other rays are inactive unless and until they are determined to be the primary ray. For example, if the XR system casts three rays, and each ray intersects a corresponding object, the XR system selects only the object that intersects the primary ray when the user performs a "tap" gesture to select an object. In various implementations, the rules that determine which ray is the primary ray may be based on various factors, such as: a predefined hierarchy of the rays, a determination of where the user's focus is located, and/or which rays are identified as actionable. In some implementations, many of these factors may be applied together: some factors may be used as binary selectors (i.e., filters) to determine which rays qualify to be selected as primary, while other factors may be used as weights to determine the relative importance of the rays and/or to rank them.

For example, a predefined hierarchy of rays may provide initial weights for ranking the rays, such as 0.6 for a ray extending straight out from the user's hand, 0.5 for a ray extending out from the user's hand and curving downward, 0.4 for a ray extending from the user's hand to the floor, and 0.2 for a straight ray sloping outward and upward from the user's hand. As another example, if the user's focus corresponds to a ray, an additional weight of 0.4 may be applied to that ray. In various implementations, the user's focus may be determined to correspond to a ray when the user's eye or head direction is tracked and a line extending from the user's head or eye in that direction is within a threshold distance of the ray, or within a threshold distance of the object with which the ray intersects. In yet another example, a weight (e.g., 0.4) may be applied to a ray based on whether the ray is identified as actionable. In various implementations, a ray may be identified as actionable when the result of an available user command (e.g., a gesture, verbal command, or other input) would be affected by the ray's state. For example, the XR system may detect when the user makes a "tap" gesture that results in the selection of an object intersected by a ray, so only a ray currently intersecting a selectable object is actionable. If more than one type of weight is applied, the various weights on each ray may be summed to determine an overall weight for each ray, and the ray with the highest weight may be selected as the primary ray. The above weight values are merely examples, and other weights or methods of combining weights may be used. In other examples, the factors may be applied as filters rather than weights. As a specific example, some rays may be eligible to be primary rays only when they are actionable or only when they correspond to the user's focus. In some implementations, a filter may be applied to only some of the rays. For example, user focus may be a filter for the ray that extends outward and curves downward from the user's hand, but only a weighting factor for the ray that extends straight from the user's hand. Additional examples of determining which of a plurality of rays is the primary ray are provided below in conjunction with FIGS. 12A and 12B.
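
A hedged sketch of this selection logic, combining a predefined hierarchy with focus and actionability weights and an optional actionability filter; the specific weights match the examples above, and the ray names are placeholders.

```python
HIERARCHY = {"hand_straight": 0.6, "hand_curved": 0.5, "hand_to_floor": 0.4, "hand_upward": 0.2}
FOCUS_BONUS = 0.4
ACTIONABLE_BONUS = 0.4

def select_primary(rays, focused_ray_id=None, actionable_ids=(), require_actionable=False):
    """Return (primary_ray_id, scores). Filters can disqualify rays entirely;
    the remaining rays are ranked by their summed weights."""
    scores = {}
    for ray_id in rays:
        if require_actionable and ray_id not in actionable_ids:
            continue  # filter: only actionable rays may become primary
        score = HIERARCHY.get(ray_id, 0.0)
        if ray_id == focused_ray_id:
            score += FOCUS_BONUS
        if ray_id in actionable_ids:
            score += ACTIONABLE_BONUS
        scores[ray_id] = score
    primary = max(scores, key=scores.get) if scores else None
    return primary, scores

primary, scores = select_primary(
    ["hand_straight", "hand_curved", "hand_to_floor", "hand_upward"],
    focused_ray_id="hand_to_floor",
    actionable_ids={"hand_to_floor"},
)
print(primary, scores)  # hand_to_floor wins with 0.4 + 0.4 + 0.4 = 1.2
```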

At block 1106, the process 1100 may modify the display of one or more of the plurality of rays based on the selection result. In some implementations, this may include hiding all rays except the primary ray. In other implementations, this may include displaying the primary ray more prominently than the non-primary rays (e.g., as a thicker line, in a different color, more solid, etc.). In some implementations, the rays may have different appearances according to the ranking of the rays from block 1104. For example, the primary ray may be highlighted, rays with weights above a threshold may appear dimmed compared to the primary ray, and rays that were filtered out or have weights below the threshold may be hidden.

At block 1108, when a ray-based action occurs (e.g., a user selection gesture is performed, an application requests the current direction of the ray, etc.), the process 1100 may provide an indication of the primary ray (e.g., access to the primary ray, the origin and direction of the primary ray, an indication of an object the primary ray intersects, etc.) to perform the action. The process 1100 may be repeated continuously while rays are in use in the XR system.

FIGS. 12A and 12B are conceptual diagrams illustrating an example 1200 of casting a plurality of rays, where a ray is selected based on a hierarchy and the user's focus. Example 1200 shows a user's hand 1202 from which rays 1206-1212 are cast by the XR system, originating at a point where the user has brought the tips of her thumb and middle finger together. Example 1200 also shows an indication 1204 of the user's gaze as tracked by the XR system.

In example 1200, the hierarchy weights applied to rays 1206-1212 are as follows: ray 1208: 0.5; ray 1212: 0.25; ray 1210: 0.2; and ray 1206: 0.1. In example 1200, the ray weights are further based on whether a ray intersects a selectable object (+0.4) and whether the user's gaze corresponds to a ray (+0.5).

In FIG. 12A, no ray intersects a selectable object, and the user's gaze does not correspond to any ray, so the rays are weighted according to the hierarchy alone: ray 1208: 0.5; ray 1212: 0.25; ray 1210: 0.2; and ray 1206: 0.1. Thus, in FIG. 12A, the ray 1208 with the highest weight is the primary ray and is therefore shown by the XR system as a bolder line than the other rays.

In FIG. 12B, in addition to the hierarchy weights, ray 1210 intersects selectable object 1216 and ray 1212 intersects selectable object 1214. Thus, the weight of each of these rays is increased by 0.4. In addition, the user's gaze 1204 also intersects the selectable object 1216 at a point within a threshold distance of the location where the ray 1210 intersects the selectable object 1216. Thus, the weight of ray 1210 is further increased by 0.5. The weights of the rays in FIG. 12B are therefore as follows: ray 1210: 1.1 (0.2 + 0.4 + 0.5); ray 1212: 0.65 (0.25 + 0.4); ray 1208: 0.5; and ray 1206: 0.1. Thus, in FIG. 12B, the ray 1210 with the highest weight is the primary ray and is therefore shown by the XR system as a bolder line than the other rays.

Reference in the specification to "an implementation" (e.g., "some implementations," "various implementations," "one implementation," "an implementation," etc.) means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the present disclosure. The appearances of such phrases in various places in the specification are not necessarily all referring to the same implementation, nor are separate or alternative implementations necessarily mutually exclusive of other implementations. In addition, various features are described that may be presented by some implementations but not by others. Similarly, various requirements are described which may be requirements for some implementations but not other implementations.

As used herein, above a threshold means that the value of the item being compared is above the other values specified, the item being compared is the one with the greatest value of a certain number of items, or the item being compared has a value within a specified top percentage value. As used herein, below a threshold means that the value of the item being compared is below the other values specified, the item being compared is the one with the smallest value of a certain number of items, or the item being compared has a value within a specified bottom percentage value. As used herein, within a threshold means that the value of the compared item is between two specified other values, the compared item is between a middle specified number of items, or the compared item has a value within a middle specified percentage range. Relative terms such as "high" or "unimportant," if not otherwise defined, may be understood as assigning a value and determining how the value compares to an established threshold. For example, the phrase "selecting a quick connection" may be understood to mean selecting a connection having an assigned value corresponding to its connection speed above a threshold value.

As used herein, the word "or" refers to any possible permutation of a set of items. For example, the phrase "A, B, or C" refers to at least one of A, B, C, or any combination thereof, such as any of the following: A; B; C; A and B; A and C; B and C; A, B, and C; or multiples of any item, such as A and A; B, B, and C; A, A, B, C, and C; and so on.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Specific embodiments and implementations have been described herein for purposes of illustration, but various modifications may be made without deviating from the scope of the embodiments and implementations. The specific features and acts described above are disclosed as example forms of implementing the claims that follow. Accordingly, the embodiments and implementations are not limited except as by the appended claims.

Any patents, patent applications, and other references mentioned above are incorporated herein by reference. Aspects can be modified, if necessary, to employ the systems, functions and concepts of the various references described above to provide yet further implementations. To the extent statements or subject matter in the documents incorporated by reference conflict with statements or subject matter of the present application, the present application shall control.
