Interactive video game system

Document No.: 1342680    Publication date: 2020-07-17

Reading note: This technology, "Interactive video game system", was designed and created by T.J. Cossairt and W.C. Yeh on 2018-11-06. Its main content includes:

An interactive video-game system includes a volumetric sensor array disposed about an activity area and configured to collect respective volumetric data for each of a plurality of participants. The system includes a controller communicatively coupled to the volumetric sensor array. The controller is configured to: receive the respective volumetric data for each of the plurality of participants from the volumetric sensor array; combine the respective volumetric data for each of the plurality of participants to generate at least one respective model for each of the plurality of participants; generate a respective virtual representation for each of the plurality of participants based at least in part on the generated at least one respective model of each of the plurality of participants; and present the generated respective virtual representation of each of the plurality of participants in a virtual environment.

1. An interactive video-game system comprising:

a volumetric sensor array disposed about an activity area and configured to collect respective volumetric data for each of a plurality of participants; and

a controller communicatively coupled to the volumetric sensor array and configured to:

receive the respective volumetric data of each of the plurality of participants from the volumetric sensor array;

combine the respective volumetric data for each of the plurality of participants to generate at least one respective model for each of the plurality of participants;

generate a respective virtual representation for each of the plurality of participants based at least in part on the generated at least one respective model of each of the plurality of participants; and

present the generated respective virtual representation of each of the plurality of participants in a virtual environment.

2. The interactive video-game system of claim 1, wherein the volumetric sensor array comprises an array of depth cameras, LiDAR devices, or a combination thereof.

3. The interactive video-game system of claim 1, wherein the volumetric sensors of the array are symmetrically distributed around a perimeter of the activity area.

4. The interactive video-game system of claim 1, wherein the generated at least one respective model comprises a respective shadow model for each of the plurality of participants, and wherein each respective virtual representation of the plurality of participants is generated based on the respective shadow model of each of the plurality of participants.

5. The interactive video-game system of claim 1, comprising a display device disposed proximate to the activity area, wherein the display device is configured to display the virtual environment to the plurality of participants in the activity area.

6. The interactive video-game system of claim 1, wherein the controller is configured to:

determine an in-game action and a corresponding in-game effect for each of the plurality of participants in the activity area based at least in part on the generated at least one respective model; and

update the generated respective virtual representation and the virtual environment for each of the plurality of participants based on the determined in-game actions and in-game effects.

7. The interactive video-game system of claim 1, wherein each of the generated at least one respective model comprises a respective volumetric model for each of the plurality of participants, and the controller is configured to generate a simulated image when game activity ends, the simulated image comprising the volumetric model of a particular participant of the plurality of participants within the virtual environment.

8. The interactive video-game system of claim 1, wherein the activity area is a three-dimensional (3D) activity area, and the volumetric sensor array is configured to collect the respective volumetric data for each participant in the plurality of participants relative to a width, height, and depth of the 3D activity area.

9. The interactive video-game system of claim 1, wherein the activity area is a two-dimensional (2D) activity area, and the volumetric sensor array is configured to collect the respective volumetric data for each of the plurality of participants relative to a width and a height of the 2D activity area.

10. An interactive video-game system comprising:

a display device disposed proximate to an active area and configured to display a virtual environment to a plurality of participants in the active area;

an array of sensing units disposed around the active area, wherein each sensing unit of the array is configured to determine a partial model of at least one participant of the plurality of participants; and

a controller communicatively coupled to the array of sensing units, wherein the controller is configured to:

receive the partial model of each of the plurality of participants from the array of sensing units;

generate a respective model of each of the plurality of participants by fusing the partial models of each of the plurality of participants;

determine an in-game action of each of the plurality of participants based at least in part on the generated respective model of each of the plurality of participants; and

display, on the display device, a respective virtual representation of each of the plurality of participants in the virtual environment based at least in part on the in-game action of each of the plurality of participants and the generated respective model.

11. The interactive video-game system of claim 10, wherein each sensing unit of the array of sensing units comprises a respective sensor communicatively coupled to a respective processing circuit, wherein the respective processing circuit of each sensing unit of the array of sensing units is configured to determine a respective portion of the generated respective model of each of the plurality of participants based on data collected by the respective sensor of the sensing unit.

12. The interactive video game system of claim 11, wherein each sensing unit of the array of sensing units is communicatively coupled to the controller via an Internet Protocol (IP) network.

13. The interactive video-game system of claim 10, wherein the array of sensing units comprises an array of depth cameras, LiDAR devices, or a combination thereof, distributed symmetrically around a perimeter of the active area.

14. The interactive video-game system of claim 13, wherein the array of sensing units is disposed above the plurality of participants and directed at a downward angle toward the active area.

15. The interactive video game system of claim 14, wherein the array of sensing units comprises up to eight sensing units.

16. The interactive video-game system of claim 10, comprising a Radio Frequency (RF) sensor communicatively coupled to the controller, wherein the controller is configured to receive data from the RF sensor indicating an identity, a location, or a combination thereof for each of the plurality of participants.

17. The interactive video-game system of claim 10, comprising an interface panel communicatively coupled to the controller, wherein the interface panel comprises a plurality of input devices configured to receive input from the plurality of participants during a gaming activity.

18. The interactive video game system of claim 17, wherein the interface panel comprises at least one physical effects device configured to provide at least one physical effect to the plurality of participants based at least in part on the determined in-game actions of each of the plurality of participants.

19. The interactive video game system of claim 10, comprising a database system communicatively coupled to the controller, wherein the controller is configured to query and receive information from the database system relating to the plurality of participants.

20. The interactive video-game system of claim 19, wherein to present the respective virtual representation of each of the plurality of participants, the controller is configured to query and receive the information about the plurality of participants and to determine how to modify the virtual representation based on the received information.

21. The interactive video-game system of claim 10, wherein the controller is configured to generate the respective virtual representation of each of the plurality of participants based on the respective model of each of the plurality of participants.

22. A method of operating an interactive video game system, comprising:

receiving, via processing circuitry of a controller of the interactive video-game system, partial models of a plurality of participants positioned within an activity area from an array of sensing units disposed about the activity area;

fusing, via the processing circuitry, the received partial models of each of the plurality of participants to generate a respective model of each of the plurality of participants;

determining, via the processing circuitry, an in-game action of each of the plurality of participants based at least in part on the generated respective model of each of the plurality of participants; and

presenting, via a display device, a virtual representation of each of the plurality of participants in a virtual environment based at least in part on the in-game action of each of the plurality of participants and the generated respective model.

23. The method of claim 22, comprising:

scanning each of the plurality of participants positioned within the activity area using a volumetric sensor of each sensing unit of the array of sensing units;

generating, via the processing circuitry of each sensing unit of the array of sensing units, the partial model of each participant of the plurality of participants; and

providing the partial model of each of the plurality of participants to the processing circuitry of the controller via a network.

24. The method of claim 23, wherein each of the generated partial models comprises a partial volumetric model, a partial skeletal model, a partial shadow model, or a combination thereof.

25. The method of claim 22, wherein the plurality of participants includes at least twelve participants simultaneously playing the interactive video game system.

Background

The present disclosure relates generally to video game systems and, more particularly, to interactive video game systems that enable simultaneous multi-player gaming activities.

Video game systems typically enable participants to control characters in a virtual environment to achieve predetermined goals or tasks. Conventional video game systems typically rely on manual input devices, such as joysticks, game controllers, keyboards, and so forth, to enable participants to control characters within the virtual environment of the game. Additionally, some modern video game systems may include cameras that are capable of tracking the movements of the participants, thereby enabling the participants to control the video game characters based on their movements. However, these systems often suffer from the problem of occlusion, in which a portion of a participant is at least temporarily obscured from the camera and, therefore, the system is no longer able to accurately track the position or movement of the participant. For example, occlusion may cause stutter in the movement of characters in the virtual environment, as well as other inaccuracies or errors in converting the participant's actions into character actions within the game. Additionally, for a multi-participant video game system, the potential for occlusion increases significantly with the number of participants.

Disclosure of Invention

The present embodiments relate to an interactive video-game system that includes a volumetric sensor array disposed about an activity area, the volumetric sensor array configured to collect respective volumetric data for each of a plurality of participants. The system includes a controller communicatively coupled to the volumetric sensor array. The controller is configured to receive the respective volumetric data for each of the plurality of participants from the volumetric sensor array. The controller is configured to combine the respective volumetric data for each of the plurality of participants to generate at least one respective model for each of the plurality of participants. The controller is further configured to generate a respective virtual representation for each of the plurality of participants based at least in part on the generated at least one respective model of each of the plurality of participants. The controller is further configured to present the generated respective virtual representation of each of the plurality of participants in a virtual environment.

The present embodiments are also directed to an interactive video-game system having a display device disposed proximate to an active area and configured to display a virtual environment to a plurality of participants in the active area. The system includes an array of sensing units disposed about the active area, wherein each sensing unit of the array is configured to determine a partial model of at least one of the plurality of participants. The system also includes a controller communicatively coupled to the array of sensing units. The controller is configured to: receive a partial model of each of the plurality of participants from the array of sensing units; generate a respective model of each of the plurality of participants by fusing the partial models of each of the plurality of participants; determine in-game actions of each of the plurality of participants based at least in part on the generated respective model of each of the plurality of participants; and display, on the display device, a respective virtual representation of each of the plurality of participants in the virtual environment based at least in part on the in-game actions of each of the plurality of participants and the generated respective model.

The present embodiments also relate to a method of operating an interactive video game system. The method includes receiving, via processing circuitry of a controller of the interactive video-game system, a partial model of a plurality of participants positioned within an activity area from an array of sensing units disposed about the activity area. The method includes fusing, via the processing circuitry, the received partial models of each of the plurality of participants to generate a respective model of each of the plurality of participants. The method includes determining, via the processing circuitry, an in-game action of each of the plurality of participants based at least in part on the generated respective model of each of the plurality of participants. The method further includes presenting, via the display device, a virtual representation of each of the plurality of participants in the virtual environment based at least in part on the in-game actions of each of the plurality of participants and the generated respective model.

Drawings

These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

FIG. 1 is a schematic diagram of an embodiment of an interactive video-game system that enables multiple participants to control respective virtual representations by performing actions in a three-dimensional (3D) activity area in accordance with the present technology;

FIG. 2 is a schematic diagram of another embodiment of an interactive video-game system with a two-dimensional (2D) active area in accordance with the present technology;

FIG. 3 is a diagram illustrating an example of a skeletal model and a shadow model representing a participant in a 3D active area and corresponding virtual representations of the participant presented in a virtual environment in accordance with the present techniques;

FIG. 4 is a diagram illustrating an example of a skeletal model and a shadow model representing a participant in a 2D active area and corresponding virtual representations of the participant presented in a virtual environment in accordance with the present techniques;

FIG. 5 is a flow chart illustrating an embodiment of a process of operating an interactive gaming system in accordance with the present technology; and

FIG. 6 is a flow diagram illustrating an example embodiment of a process by which an interactive video game system performs certain actions indicated in the flow diagram of FIG. 5 in accordance with the present technology.

Detailed Description

As used herein, "volumetric scan data" refers to three-dimensional (3D) data (such as point cloud data) collected by optically measuring (e.g., imaging, ranging) the visible exterior surface of a participant in an active area. As used herein, a "volumetric model" is a 3D model generated from volumetric scan data of a participant that generally describes the external surface of the participant and may include texture data. As used herein, a "shadow model" refers to a texture-free volumetric model of a participant generated from volumetric scan data. Thus, when rendered on a two-dimensional (2D) surface such as a display device, the shadow model of the participant has a shape that is substantially similar to the shadow or silhouette of the participant when illuminated from behind. As used herein, a "skeletal model" refers to a 3D model generated from volumetric scan data of a participant that defines the predicted locations and positions of certain bones of the participant (e.g., bones associated with the arms, legs, head, and spine) to describe the location and pose of the participant within an activity area.
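By way of a non-limiting illustration, the model types defined above may be represented with data structures along the following lines. This is a Python sketch with hypothetical names; the disclosure does not prescribe any particular representation:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class VolumetricModel:
    """3D model of a participant's exterior surface; may carry texture data."""
    points: List[Point3D]  # sampled exterior surface (point cloud)
    texture: Dict[int, Tuple[int, int, int]] = field(default_factory=dict)  # point index -> RGB

@dataclass
class ShadowModel:
    """Texture-free volumetric model: silhouette-like shape only."""
    points: List[Point3D]

@dataclass
class SkeletalModel:
    """Predicted bone/joint locations describing the participant's pose."""
    joints: Dict[str, Point3D]  # e.g. "head", "spine", "left_arm"

def shadow_from_volumetric(vol: VolumetricModel) -> ShadowModel:
    # Dropping the texture data converts a volumetric model into a shadow model.
    return ShadowModel(points=list(vol.points))
```

As the sketch suggests, a shadow model is simply a volumetric model with the texture information discarded.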

The disclosed interactive video game system includes an array having two or more volumetric sensors, such as depth cameras and light detection and ranging (LiDAR) devices, capable of volumetrically scanning each participant.

As mentioned, the array of the disclosed interactive video game system includes a plurality of volumetric sensors arranged around an activity area to monitor the actions of participants within the activity area. This generally ensures that the skeletal model of each participant can be accurately generated and updated throughout the gaming activity despite potential occlusions from the perspective of one or more of the volumetric sensors of the array. Additionally, the processing circuitry of the system may use the volumetric scan data to generate aspects (e.g., size, shape, contour) of the virtual representation of each participant within the virtual environment. In some embodiments, certain aspects of the virtual representation (e.g., color, texture, scale) of each participant may be further adjusted or modified based on information associated with the participant. As discussed below, this information may include information related to gaming activities (e.g., items acquired, achievements unlocked), as well as other information about the participant's activities outside of the game (e.g., the participant's performance in other games, items purchased by the participant, locations visited by the participant). In addition, the volumetric scan data collected by the volumetric sensor array may be used by the processing circuitry of the gaming system to generate additional content, such as a memento image in which the volumetric model of the participant is illustrated as being within the virtual world. Thus, the disclosed interactive video game system enables an immersive and engaging experience for multiple simultaneous participants.
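The occlusion-tolerant generation of a skeletal model from multiple sensor perspectives can be illustrated with a minimal fusion sketch. The joint names and the simple averaging scheme below are assumptions for illustration only; the disclosure does not prescribe a particular fusion method:

```python
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

def fuse_partial_skeletons(partials: List[Dict[str, Point3D]]) -> Dict[str, Point3D]:
    """Fuse per-sensor partial skeletal models into one complete model.

    Each partial maps joint name -> estimated 3D position; a joint occluded
    from one sensor is simply absent from that sensor's partial model.
    Averaging the estimates from the sensors that did observe each joint
    yields a complete skeleton as long as every joint is visible to at
    least one sensor in the array.
    """
    sums: Dict[str, List[float]] = {}
    counts: Dict[str, int] = {}
    for partial in partials:
        for joint, (x, y, z) in partial.items():
            acc = sums.setdefault(joint, [0.0, 0.0, 0.0])
            acc[0] += x
            acc[1] += y
            acc[2] += z
            counts[joint] = counts.get(joint, 0) + 1
    return {j: (s[0] / counts[j], s[1] / counts[j], s[2] / counts[j])
            for j, s in sums.items()}
```

A joint hidden from one sensor still appears in the fused result, since the other sensors' estimates fill the gap.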

In view of the foregoing, FIG. 1 is a schematic diagram of an embodiment of an interactive video-game system 10, the system 10 enabling a plurality of participants 12 (e.g., participants 12A and 12B) to individually control respective virtual representations 14 (e.g., virtual representations 14A and 14B) by performing actions in an active area 16. It may be noted that although the present description is directed to two participants 12 using the interactive video game system 10 for simplicity, in other embodiments, the interactive video game system 10 may support more than two (e.g., 6, 8, 10, 12, or more) participants 12. The active area 16 of the interactive video-game system 10 illustrated in FIG. 1 is described herein as a 3D active area 16A. The term "3D active area" as used herein refers to an active area 16 having a width (corresponding to an x-axis 18), a height (corresponding to a y-axis 20), and a depth (corresponding to a z-axis 22), wherein the system 10 generally monitors the respective movements of the participants 12 along the x-axis 18, the y-axis 20, and the z-axis 22. In response to the participants 12 moving throughout the active area 16, the interactive video game system 10 updates the positioning of the virtual representations 14, presented on the display device 24 in the virtual environment 32, along an x-axis 26, a y-axis 28, and a z-axis 30. Although the 3D active area 16A is illustrated as being substantially circular, in other embodiments, the 3D active area 16A may be square, rectangular, hexagonal, octagonal, or any other suitable shape.
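The positional update described above amounts to mapping activity-area coordinates (x-axis 18, y-axis 20, z-axis 22) onto virtual-environment coordinates (x-axis 26, y-axis 28, z-axis 30). A simple proportional per-axis mapping, assumed here purely for illustration, might look like:

```python
from typing import Tuple

Point3D = Tuple[float, float, float]

def to_virtual(p: Point3D, area_size: Point3D, virtual_size: Point3D) -> Point3D:
    """Map a participant position in the activity area onto the corresponding
    position of the participant's virtual representation.

    Assumes each axis of the activity area (width, height, depth) scales
    independently and linearly onto the matching axis of the virtual
    environment; a real system might apply offsets or nonlinear mappings.
    """
    return tuple(pc / ac * vc for pc, ac, vc in zip(p, area_size, virtual_size))
```

For example, a participant standing at the center of the activity area maps to the center of the virtual environment under this scheme.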

The embodiment of the interactive video-game system 10 illustrated in FIG. 1 includes a master controller 34 having memory circuitry 33 and processing circuitry 35 that generally provide control signals to control the operation of the system 10. Thus, the master controller 34 is communicatively coupled to an array 36 of sensing units 38 disposed around the 3D active area 16A. More specifically, the array 36 of sensing units 38 may be described as being symmetrically distributed about the perimeter of the active area 16. In certain embodiments, at least a portion of the array 36 of sensing units 38 may be positioned above the active area 16 (e.g., suspended from a ceiling or on a raised platform or stand) and directed at a downward angle to image the active area 16. In other embodiments, at least a portion of the array 36 of sensing units 38 may be positioned near the floor of the active area 16 and directed at an upward angle to image the active area 16. In some embodiments, the array 36 of the interactive video-game system 10 may include at least two sensing units 38 for each participant (e.g., participants 12A and 12B) in the active area 16. Thus, the array 36 of sensing units 38 is suitably positioned to image the active area 16 from a substantial portion of the potential vantage points around the active area 16 to reduce or eliminate potential participant occlusion.

In the illustrated embodiment, each sensing unit 38 includes a respective volume sensor 40, which may be, for example, an infrared (IR) depth camera, a LiDAR device, or another suitable ranging and/or imaging device. In some embodiments, all of the volume sensors 40 of the sensing units 38 in the array 36 are IR depth cameras or LiDAR devices, while in other embodiments, a mix of both IR depth cameras and LiDAR devices is present within the array 36.

In addition, each illustrated sensing unit 38 includes a sensor controller 42 having suitable memory circuitry 44 and processing circuitry 46. The processing circuitry 46 of each sensing unit 38 executes instructions stored in the memory circuitry 44 to enable the sensing unit 38 to volumetrically scan the participants 12 and generate volumetric scan data for each participant 12. In the illustrated embodiment, the sensing units 38 are communicatively coupled to the main controller 34 via a high-speed Internet Protocol (IP) network 48 that enables low-latency data exchange between the devices of the interactive video game system 10. Additionally, in certain embodiments, the sensing units 38 may each include a respective housing that encapsulates the sensor controller 42 together with the volume sensor 40.

It may be noted that in other embodiments, the sensing units 38 may not include the respective sensor controllers 42. For such embodiments, the processing circuitry 35 of the main controller 34, or other suitable processing circuitry of the system 10, is communicatively coupled to the respective volume sensors 40 of the array 36 to provide control signals directly to the sensors 40 and to receive data signals directly from the volume sensors 40. However, it is presently recognized that the processing (e.g., filtering, bone mapping) of the volumetric scan data collected by each of these volume sensors 40 may be processor-intensive. Thus, in certain embodiments, it may be advantageous to divide the workload by processing the volumetric data collected by each sensor 40 with dedicated processing circuitry (e.g., the processing circuitry 46 of each sensor controller 42) and then sending the processed data to the master controller 34. For example, in the illustrated embodiment, the processing circuitry 46 of each sensor controller 42 processes the volumetric scan data collected by its respective sensor 40 to generate a partial model (e.g., a partial volumetric model, a partial skeletal model, a partial shadow model) of each participant 12, and the processing circuitry 35 of the master controller 34 receives and fuses or combines the partial models to generate a complete model of each participant 12, as discussed below.

Additionally, in certain embodiments, the master controller 34 may also receive information from other sensing devices in and around the active area 16. For example, the illustrated master controller 34 is communicatively coupled to a Radio Frequency (RF) sensor 45 disposed proximate the 3D active area 16A (e.g., above the active area 16A, below the active area 16A, adjacent to the active area 16A). The illustrated RF sensor 45 receives a uniquely identifying RF signal from a wearable device 47, such as a bracelet or headband with a Radio Frequency Identification (RFID) tag, worn by each participant 12. In response, the RF sensor 45 provides signals to the master controller 34 regarding the identity and relative location of the participants 12 in the activity area 16. Thus, for the illustrated embodiment, the processing circuitry 35 of the master controller 34 receives and combines the data collected by the array 36 and, potentially, other sensors (e.g., the RF sensor 45) to determine the identity, location, and actions of the participants 12 in the activity area 16 during the gaming activity. Additionally, the illustrated master controller 34 is communicatively coupled to a database system 50, or any other suitable data store, that stores participant information. The database system 50 includes processing circuitry 52 that executes instructions stored in memory circuitry 54 to store and retrieve information associated with the participants 12, such as participant models (e.g., volumetric, shadow, skeletal), participant statistics (e.g., wins, losses, scores, total gaming activity time), participant attributes or inventory (e.g., abilities, textures, items), participant purchases at a gift store, participant scores in a loyalty rewards program, and so forth. The processing circuitry 35 of the master controller 34 may query, retrieve, and update information stored by the database system 50 relating to the participants 12 to enable the system 10 to operate as set forth herein.
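Combining the identity and location data from the RF sensor 45 with the positions derived from the array 36 can be sketched as a nearest-neighbour association. The greedy matching scheme and the 2D floor-plane coordinates below are assumptions for illustration; the disclosure does not specify a matching algorithm:

```python
from typing import Dict, List, Tuple

Point2D = Tuple[float, float]

def assign_identities(rf_reads: Dict[str, Point2D],
                      tracked: List[Point2D]) -> Dict[str, int]:
    """Associate each RFID-reported participant identity with the nearest
    tracked model position on the activity-area floor plane.

    rf_reads maps a participant id to its approximate (x, z) location from
    the RF sensor; tracked lists the (x, z) centroids of the models derived
    from the volumetric data. Returns id -> index into tracked. Greedy
    nearest-neighbour matching; each tracked model is assigned only once.
    """
    available = set(range(len(tracked)))
    assignment: Dict[str, int] = {}
    for pid, (rx, rz) in rf_reads.items():
        # Squared Euclidean distance suffices for choosing the minimum.
        best = min(available,
                   key=lambda i: (tracked[i][0] - rx) ** 2 + (tracked[i][1] - rz) ** 2)
        assignment[pid] = best
        available.remove(best)
    return assignment
```

This lets the controller label each generated model with a participant identity even though the volumetric sensors themselves do not distinguish who is who.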

Additionally, the embodiment of the interactive video-game system 10 illustrated in FIG. 1 includes an output controller 56 communicatively coupled to the master controller 34. The output controller 56 generally includes processing circuitry 58 that executes instructions stored in memory circuitry 60 to control the output of stimuli (e.g., audio signals, video signals, light, physical effects) observed and experienced by the participants 12 in the activity area 16. Thus, the illustrated output controller 56 is communicatively coupled to the audio device 62 and the display device 24 to provide appropriate control signals to operate these devices to provide a particular output. In other embodiments, the output controller 56 may be coupled to any suitable number of audio and/or display devices. The display device 24 may be any suitable display device, such as a projector and screen, a flat screen display device, or an array of flat screen display devices, arranged and designed to provide a suitable view of the virtual environment 32 to the participants 12 in the active area 16. In some embodiments, the audio devices 62 may be arranged in an array around the activity area 16 to increase participant immersion during the gaming activity. In other embodiments, the system 10 may not include the output controller 56, and the processing circuitry 35 of the main controller 34 may be communicatively coupled to the audio device 62, the display device 24, and so forth, to generate the various stimuli for the participants 12 to observe and experience in the activity area 16.

FIG. 2 is a schematic diagram of another embodiment of the interactive video-game system 10 that enables a plurality of participants 12 (e.g., participants 12A and 12B) to control virtual representations 14 (e.g., virtual representations 14A and 14B) by performing actions in an active area 16. The embodiment of the interactive video-game system 10 illustrated in FIG. 2 includes many of the features discussed herein with reference to FIG. 1, including the main controller 34, the array 36 of sensing units 38, the output controller 56, and the display device 24. However, the embodiment of the interactive video-game system 10 illustrated in FIG. 2 is described herein as having a 2D active area 16B. As used herein, the term "2D active area" refers to an active area 16 having a width (corresponding to the x-axis 18) and a height (corresponding to the y-axis 20), wherein the system 10 generally monitors the movement of each participant 12 along the x-axis 18 and the y-axis 20. For the embodiment illustrated in FIG. 2, the participants 12A and 12B are assigned sections 70A and 70B, respectively, of the 2D active area 16B, and the participants 12 do not roam outside of their respective assigned sections during the gaming activity. The interactive video game system 10 updates the positioning of the virtual representations 14 presented on the display device 24 along the x-axis 26 and the y-axis 28 in the virtual environment 32 in response to the participants 12 moving (e.g., running along the x-axis 18, jumping along the y-axis 20) within the 2D active area 16B.

Additionally, the embodiment of the interactive video-game system 10 illustrated in FIG. 2 includes an interface panel 74 that may enable enhanced participant interaction. As illustrated in FIG. 2, the interface panel 74 includes a plurality of input devices 76 (e.g., cranks, wheels, buttons, sliders, blocks) that are designed to receive input from the participants 12 during the gaming activity. Thus, the illustrated interface panel 74 is communicatively coupled to the main controller 34 to provide signals to the controller 34 indicating how the participant 12 manipulates the input devices 76 during gaming activities. The illustrated interface panel 74 also includes a plurality of output devices 78 (e.g., audio output devices, visual output devices, physical stimulus devices) designed to provide audio, visual, and/or physical stimuli to the participants 12 during the gaming activity. Thus, the illustrated interface panel 74 is communicatively coupled to the output controller 56 to receive the control signals and provide the appropriate stimuli to the participants 12 in the active area 16 in response to the appropriate signals from the master controller 34. For example, the output devices 78 may include audio devices such as speakers, horns, alarms, and the like. The output devices 78 may also include visual devices, such as lights or display devices of the interface panel 74. In certain embodiments, the output device 78 of the interface panel 74 comprises a physical effect device, such as an electrically controlled release valve coupled to a compressed air line, which provides bursts of hot or cold air or mist in response to appropriate control signals from the main controller 34 or the output controller 56.

As illustrated in FIG. 2, the array 36 of sensing units 38 disposed around the 2D active area 16B of the illustrated embodiment of the interactive video-game system 10 includes at least two sensing units 38. That is, while the embodiment of the interactive video-game system 10 illustrated in FIG. 1 includes an array 36 having at least two sensing units 38 per participant, the embodiment of the interactive video-game system 10 illustrated in FIG. 2 includes an array 36 that may include as few as two sensing units 38 regardless of the number of participants. In certain embodiments, the array 36 may include at least two sensing units 38 disposed at right angles (90°) with respect to the participant 12 in the 2D active area 16B. In certain embodiments, the array 36 may additionally or alternatively include at least two sensing units 38 disposed on opposite sides (180°) relative to the participant 12 in the active area 16B. By way of specific example, in certain embodiments, the array 36 may include only two sensing units 38 disposed on different (e.g., opposite) sides of the participant 12 in the 2D active area 16B.

As mentioned, the array 36 illustrated in FIGS. 1 and 2 is capable of collecting volumetric scan data for each participant 12 in the active area 16. In some embodiments, the collected volumetric scan data may be used to generate various models (e.g., volume, shadow, skeletal) for each participant, and these models may be subsequently updated based on the movement of the participant during the gaming activity, as discussed below. However, it is presently recognized that using a volumetric model that includes texture data is substantially more processor intensive (e.g., involving additional filtering, additional data processing) than using a shadow model that lacks the texture data. For example, in certain embodiments, the processing circuitry 35 of the master controller 34 may generate a shadow model for each participant 12 from the volumetric scan data collected by the array 36 by using edge detection techniques to distinguish between the edges of the participant 12 and the surroundings in the active area 16. It is presently recognized that such edge detection techniques are substantially less processor intensive and involve substantially less filtering than using a volumetric model that includes texture data. Thus, it is presently recognized that certain embodiments of the interactive video-game system 10 generate and update shadow models rather than volumetric models that include textures, thereby enabling a reduction in the size, complexity, and cost of the processing circuitry 35 of the master controller 34. Additionally, as discussed below, the processing circuitry 35 may generate the virtual representation 14 of the participant 12 based at least in part on the generated shadow model.
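The edge detection approach described above can be illustrated with a minimal sketch that is not taken from the disclosure: assuming each sensing unit 38 supplies a depth frame as a 2D grid of distances, and that a background depth map of the empty active area 16 is known, a texture-free shadow model reduces to a binary silhouette whose edges are the foreground cells bordering the background. The frame layout, threshold value, and function names here are illustrative assumptions.

```python
# Hypothetical sketch: deriving a "shadow model" (texture-free silhouette)
# from a depth frame by background subtraction, then marking its edges.

def shadow_model(depth_frame, background, threshold=0.1):
    """Return a binary silhouette: 1 where a participant occludes the
    known background by more than `threshold` meters, else 0."""
    return [
        [1 if (bg - d) > threshold else 0
         for d, bg in zip(row, bg_row)]
        for row, bg_row in zip(depth_frame, background)
    ]

def silhouette_edges(mask):
    """Mark silhouette boundary cells: foreground cells with at least one
    background (or out-of-frame) neighbor -- a minimal edge detector."""
    rows, cols = len(mask), len(mask[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= nr < rows and 0 <= nc < cols and mask[nr][nc])
                   for nr, nc in neighbors):
                edges[r][c] = 1
    return edges
```

A real system would operate on per-pixel depth images from each sensing unit and filter sensor noise before thresholding; this grid version only shows why the silhouette route avoids texture processing entirely.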

As mentioned, the volumetric scan data collected by the array 36 of the interactive video game system 10 may be used to generate various models (e.g., volume, shadow, skeletal) for each participant. For example, FIG. 3 is a diagram illustrating a skeletal model 80 (e.g., skeletal models 80A and 80B) and a shadow model 82 (e.g., shadow models 82A and 82B) representing participants in the 3D active area 16A. FIG. 3 also illustrates the corresponding virtual representations 14 (e.g., virtual representations 14A and 14B) of these participants presented in the virtual environment 32 on the display device 24 in accordance with the present technique. As illustrated, the represented participants are located at different locations within the 3D active area 16A of the interactive video game system 10 during the gaming activity, as indicated by the positioning of the skeletal models 80 and the shadow models 82. The illustrated virtual representations 14 of the participants in the virtual environment 32 are generated based at least in part on the shadow models 82 of the participants. As mentioned above, as the participants move within the 3D active area 16A, the master controller 34 tracks these movements and accordingly generates updated skeletal and shadow models 80 and 82 and an updated virtual representation 14 of each participant.

Additionally, as illustrated in FIGS. 1 and 3, an embodiment of the interactive video-game system 10 having a 3D active area 16A enables tracking of participant movement along the z-axis 22 and translation of that movement into movement of the virtual representation 14 along the z-axis 30. As illustrated in FIG. 3, this enables the participant represented by the skeletal model 80A and the shadow model 82A to move to the leading edge 84 of the 3D active area 16A, which causes the corresponding virtual representation 14A to be rendered at a relatively deep point or level 86 along the z-axis 30 in the virtual environment 32. This also enables the participant represented by the skeletal model 80B and the shadow model 82B to move to a rear edge 88 of the 3D active area 16A, which results in the corresponding virtual representation 14B being rendered at a substantially shallower point or level 90 along the z-axis 30 in the virtual environment 32. Further, for the illustrated embodiment, the size of the rendered virtual representation 14 is modified based on the location of the participant in the 3D active area 16A along the z-axis 22. That is, the virtual representation 14A positioned relatively deeper along the z-axis 30 in the virtual environment 32 is rendered significantly smaller than the virtual representation 14B positioned at a shallower depth or layer along the z-axis 30 in the virtual environment 32.
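The depth-dependent rendering described above can be sketched as a simple mapping, under the assumption (not stated in the disclosure) that the translation from the physical z-axis 22 to the virtual z-axis 30 is linear and that the render scale shrinks linearly with rendered depth; the parameter names and scale range are hypothetical.

```python
# Illustrative sketch of mapping a participant's position along the
# physical z-axis 22 to a render depth and scale along the virtual
# z-axis 30: the leading (front) edge of the 3D active area maps to the
# deepest point in the scene, where the representation is drawn smallest.

def virtual_depth_and_scale(z, z_front, z_back, scale_near=1.0, scale_far=0.5):
    """Return (depth, scale): depth is 1.0 at the leading edge (deepest in
    the virtual scene) and 0.0 at the rear edge; scale shrinks with depth."""
    t = (z - z_front) / (z_back - z_front)   # 0 at front edge, 1 at rear edge
    t = max(0.0, min(1.0, t))                # clamp to the active area
    depth = 1.0 - t                          # front of room -> deep in scene
    scale = scale_far + (scale_near - scale_far) * t
    return depth, scale
```

A participant standing at the leading edge 84 would thus be rendered deep and small (like representation 14A), while one at the rear edge 88 would be rendered shallow and large (like representation 14B).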

It may be noted that for embodiments of the interactive gaming system 10 having a 3D activity area 16A, as represented in FIGS. 1 and 3, the virtual representations 14 may only be able to interact with virtual objects positioned at similar depths along the z-axis 30 in the virtual environment 32. For example, for the embodiment illustrated in FIG. 3, the virtual representation 14A can interact with a virtual object 92 positioned deeper along the z-axis 30 in the virtual environment 32, while the virtual representation 14B can interact with another virtual object 94 positioned at a relatively shallower depth along the z-axis 30 in the virtual environment 32. That is, the virtual representation 14A cannot interact with the virtual object 94 unless the participant represented by the models 80A and 82A changes position along the z-axis 22 in the 3D active area 16A such that the virtual representation 14A moves to a depth similar to that of the virtual object 94 in the virtual environment 32.

For comparison, FIG. 4 is a diagram illustrating examples of a skeletal model 80 (e.g., skeletal models 80A and 80B) and a shadow model 82 (e.g., shadow models 82A and 82B) representing participants in the 2D active area 16B. FIG. 4 also illustrates the virtual representations 14 (e.g., virtual representations 14A and 14B) of the participants presented on the display device 24. As mentioned above, as the participants move within the 2D active area 16B, the master controller 34 tracks these movements and updates the skeletal model 80, the shadow model 82, and the virtual representation 14 of each participant accordingly. As mentioned, embodiments of the interactive video-game system 10 having the 2D activity area 16B illustrated in FIGS. 2 and 4 do not track movement of the participant along the z-axis (e.g., the z-axis 22 illustrated in FIGS. 1 and 3). Instead, for embodiments having a 2D activity area 16B, the size of the rendered virtual representation 14 may be modified based on the state or condition of the participant inside and/or outside of the gaming activity. For example, in FIG. 4, the virtual representation 14A is significantly larger than the virtual representation 14B. In some embodiments, the size of the virtual representation 14A or 14B may be enhanced or enlarged in response to the virtual representation 14A or 14B interacting with a particular item, such as in response to the virtual representation 14A gaining a power-up during a current or previous round of the gaming activity. In other embodiments, the enlarged size of the virtual representation 14A and other modifications of the virtual representation (e.g., texture, color, transparency, items worn or carried by the virtual representation) may be a result of the corresponding participant interacting with an object or item external to the interactive video-game system 10, as discussed below.

It is presently recognized that the embodiment of the interactive video-game system 10 utilizing a 2D activity area 16B, as represented in FIGS. 2 and 4, achieves particular advantages over the embodiment of the interactive video-game system 10 utilizing a 3D activity area 16A, as illustrated in FIG. 1. For example, as mentioned, the array 36 of sensing units 38 in the interactive video game system 10 with the 2D active area 16B as illustrated in FIG. 2 includes fewer sensing units 38 than the interactive video game system 10 with the 3D active area 16A as illustrated in FIG. 1. That is, for an interactive video game system 10 having a 2D active area 16B as represented in FIGS. 2 and 4, depth (e.g., positioning and movement along the z-axis 22 as illustrated in FIG. 1) is not tracked. Additionally, since the participants 12A and 12B remain in their respective assigned sections 70A and 70B of the 2D active area 16B, the potential for occlusion is significantly reduced. For example, by maintaining the participants within their assigned sections 70 of the 2D activity area 16B, occlusions between the participants occur only predictably along the x-axis 18. Thus, by using the 2D active area 16B, the embodiment of the interactive video-game system 10 illustrated in FIG. 2 enables the participants 12 to be tracked using a smaller array 36 with fewer sensing units 38 than the embodiment of the interactive video-game system 10 of FIG. 1.

Thus, it is recognized that the smaller array 36 of sensing units 38 used by embodiments of the interactive video game system 10 having a 2D active area 16B also generates significantly less data to process than embodiments having a 3D active area 16A. For example, because in the 2D active area 16B of FIGS. 2 and 4, occlusions between the participants 12 are significantly more restricted and predictable, fewer sensing units 38 may be used in the array 36 while still covering a substantial portion of the potential vantage points surrounding the active area 16. Thus, for embodiments of the interactive gaming system 10 having a 2D active area 16B, the processing circuitry 35 of the master controller 34 may be smaller, simpler, and/or more power efficient relative to the processing circuitry 35 of the master controller 34 for embodiments of the interactive gaming system 10 having a 3D active area 16A.

As mentioned, the interactive video-game system 10 is capable of generating various models of the participants 12. More specifically, in certain embodiments, the processing circuitry 35 of the master controller 34 is configured to receive partial model data (e.g., partial volume, shadow, and/or skeletal models) from the various sensing units 38 of the array 36 and fuse the partial models into a complete model (e.g., a complete volume, shadow, and/or skeletal model) for each participant 12. Set forth below is an example in which the processing circuitry 35 of the master controller 34 fuses partial skeletal models received from the various sensing units 38 of the array 36. It will be appreciated that, in some embodiments, the processing circuitry 35 of the master controller 34 may use a similar process to fuse partial shadow model data into a shadow model and/or partial volume model data into a volume model.

In an example, a partial skeletal model is generated by each sensing unit 38 of the interactive video game system 10 and then fused by the processing circuitry 35 of the master controller 34. In particular, the processing circuitry 35 may perform a one-to-one mapping of the corresponding bones of each participant 12 across the partial skeletal models generated by different sensing units 38 positioned at different angles (e.g., opposite sides, perpendicular) relative to the activity area 16. In certain embodiments, when the partial skeletal models generated by the different sensing units 38 are fused by the processing circuitry 35, relatively small differences between the partial skeletal models may be averaged to provide smoothing and prevent jerky movement of the virtual representation 14. Additionally, when the partial skeletal model generated by a particular sensing unit is significantly different from the partial skeletal models generated by at least two other sensing units, the processing circuitry 35 of the master controller 34 may determine that the data is erroneous and, therefore, not include the data in the skeletal model 80. For example, if a particular partial skeletal model lacks a bone that is present in the other partial skeletal models, the processing circuitry 35 may determine that the missing bone is likely the result of an occlusion, and in response may discard all or part of that partial skeletal model.
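The fusion behavior described in this paragraph, averaging small disagreements for smoothing and discarding a reading that conflicts sharply with the other sensing units, can be sketched as follows. This is a hedged illustration, not the disclosed implementation: the joint-dictionary data layout, the per-axis median consensus, and the `tolerance` value are all assumptions.

```python
from statistics import median

def fuse_skeletons(partials, tolerance=0.2):
    """partials: one dict per sensing unit mapping joint name -> (x, y, z);
    a unit omits a joint it could not see (e.g., due to occlusion).
    Returns a single fused {joint: (x, y, z)} skeletal model."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    joints = set().union(*(p.keys() for p in partials))
    fused = {}
    for joint in joints:
        readings = [p[joint] for p in partials if joint in p]
        # Per-axis median as a robust consensus across the sensing units.
        consensus = tuple(median(axis) for axis in zip(*readings))
        # Discard a unit's reading that disagrees sharply with the others.
        kept = [r for r in readings if dist(r, consensus) <= tolerance] or readings
        # Average the surviving readings for smoothing.
        fused[joint] = tuple(sum(axis) / len(kept) for axis in zip(*kept))
    return fused
```

The median-based consensus mirrors the at-least-two-other-units test in spirit: a single outlying unit cannot drag the consensus, so its reading is rejected, while small disagreements between the remaining units are averaged away.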

It may be noted that precise coordination of the components of the interactive gaming system 10 is desirable to provide smooth and responsive movement of the virtual representations 14 within the virtual environment 32. In particular, in order to properly fuse the partial models (e.g., partial skeletal, volume, and/or shadow models) generated by the sensing units 38, the processing circuitry 35 may take into account the time at which each partial model was generated by the sensing units 38. In some embodiments, the interactive video-game system 10 may include a system clock 100, as illustrated in FIGS. 1 and 2, for synchronizing operations within the system 10. For example, the system clock 100 may be a component of the master controller 34 or another suitable electronic device capable of generating a time signal broadcast over the network 48 of the interactive video game system 10. In some embodiments, various devices coupled to the network 48 may receive and use the time signal at particular times (e.g., at the beginning of the gaming activity) to adjust their respective clocks, and when providing gaming activity data to the master controller 34, the devices may then include timing data based on the signals from these respective clocks. In other embodiments, the various devices coupled to the network 48 receive time signals from the system clock 100 continuously (e.g., at regular microsecond intervals) throughout the gaming activity, and when data (e.g., volumetric scan data, partial model data) is provided to the master controller 34, the devices then include timing data from the time signals. Additionally, the processing circuitry 35 of the master controller 34 may determine whether a partial model (e.g., a partial volume, shadow, or skeletal model) generated by a sensing unit 38 is fresh enough (e.g., recent, contemporaneous with other data) for generating or updating a complete model, or whether the data should be discarded as stale.
Thus, in certain embodiments, the system clock 100 enables the processing circuitry 35 to appropriately fuse the partial models generated by the various sensing units 38 into an appropriate volume, shadow, and/or skeletal model of the participant 12.
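The freshness determination described above can be sketched as a simple timestamp filter, assuming (as an illustration only, not from the disclosure) that each partial model arrives as a `(timestamp, model)` pair stamped against the shared system clock 100 and that a fixed age cutoff separates fresh data from stale data.

```python
# Hypothetical sketch of the freshness check: only partial models generated
# within `max_age` seconds of the most recent one are accepted for fusion;
# stale ones are discarded rather than distorting the fused model.

def filter_fresh(partial_models, max_age=0.05):
    """partial_models: list of (timestamp_seconds, model) tuples.
    Returns the models recent enough to fuse together, in arrival order."""
    if not partial_models:
        return []
    newest = max(ts for ts, _ in partial_models)
    return [model for ts, model in partial_models if newest - ts <= max_age]
```

In practice the cutoff would be tuned to the sensing units' frame rate and the latency of the high speed IP network 48; 50 ms here is an arbitrary placeholder.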

FIG. 5 is a flow diagram illustrating an embodiment of a process 110 for operating the interactive gaming system 10 in accordance with the present technology. It is to be appreciated that in other embodiments, certain steps of the illustrated process 110 may be performed in a different order, repeated multiple times, or skipped altogether in accordance with the present disclosure. The process 110 illustrated in FIG. 5 may be performed by the processing circuitry 35 of the main controller 34 alone or in combination with other suitable processing circuitry (e.g., the processing circuitry 46, 52, and/or 58) of the system 10.

The illustrated embodiment of the process 110 begins with the interactive gaming system 10 collecting (block 112) volumetric scan data for each participant. In certain embodiments, as illustrated in FIGS. 1-4, the participants 12 may be scanned or imaged by the sensing units 38 positioned around the active area 16. For example, in some embodiments, the participants 12 may be prompted to strike a particular pose before beginning the gaming activity while the sensing units 38 of the array 36 collect volumetric scan data for each participant. In other embodiments, the participants 12 may be volumetrically scanned by a separate system prior to entering the active area 16. For example, a line of waiting participants may be directed through a pre-scanning system (e.g., similar to a security scanner at an airport) in which each participant is individually volumetrically scanned (e.g., while striking a particular pose) to collect volumetric scan data for each participant. In certain embodiments, the pre-scanning system may be a smaller version of the 3D active area 16A illustrated in FIG. 1 or the 2D active area 16B in FIG. 2, in which the array 36 of sensing units 38 is positioned around an individual participant to collect the volumetric scan data. In other embodiments, the pre-scanning system may include fewer sensing units 38 (e.g., 1, 2, 3) positioned around an individual participant, and the sensing units 38 are rotated around the participant to collect complete volumetric scan data. It is presently recognized that it may be desirable to collect the volumetric scan data indicated in block 112 while the participants 12 are in the active area 16 to enhance the efficiency of the interactive gaming system 10 and reduce participant wait times.

Next, the interactive video-game system 10 generates (block 114) a corresponding model for each participant based on the volumetric scan data collected for each participant. As set forth above, in certain embodiments, the processing circuitry 35 of the master controller 34 may receive a partial model of each participant from each sensing unit 38 in the array 36, and may appropriately fuse the partial models to generate a suitable model for each participant. For example, the processing circuitry 35 of the master controller 34 may generate a volumetric model for each participant that generally defines the 3D shape of each participant. Additionally or alternatively, the processing circuitry 35 of the master controller 34 may generate a shadow model for each participant that generally defines the texture-free 3D shape of each participant. In addition, the processing circuitry 35 may also generate a skeletal model that generally defines the predicted skeletal positioning and location of each participant within the activity area.

Next, continuing through the example process 110, the interactive video-game system 10 generates (block 116) a corresponding virtual representation for each participant based at least in part on the volumetric scan data collected for each participant and/or the one or more models generated for each participant. For example, in some embodiments, the processing circuitry 35 of the master controller 34 may generate a virtual representation of a participant using the shadow model generated in block 114 as a basis. It may be appreciated that in certain embodiments, the virtual representation 14 may have a shape or outline that is substantially similar to the shadow model of the corresponding participant, as illustrated in FIGS. 3 and 4. In addition to shape, the virtual representation 14 may have other attributes that may be modified to correspond to attributes of the represented participant. For example, the participants may be associated with various attributes (e.g., items, statuses, points, statistics) that reflect their performance in other gaming systems, their purchases in gift shops, their membership in loyalty programs, and so forth. Accordingly, the attributes of the virtual representation (e.g., size, color, texture, animation, presence of virtual items) may be set in response to various attributes associated with the corresponding participant, and further modified based on changes to the attributes of the participant during the gaming activity.
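The attribute-driven customization described above can be sketched as a simple mapping from participant attributes to representation attributes. All of the attribute names and rules below are hypothetical illustrations; the disclosure does not specify a particular scheme.

```python
# Illustrative sketch of deriving virtual-representation attributes (size,
# color, carried items) from attributes associated with the participant
# (power-ups, loyalty membership, gift-shop purchases).

def representation_attributes(participant):
    """participant: dict of attributes accumulated in and out of the game.
    Returns render attributes for the participant's virtual representation."""
    attrs = {"scale": 1.0, "color": "default", "items": []}
    if participant.get("power_up"):
        attrs["scale"] = 1.5                 # enlarged after gaining a power-up
    if participant.get("loyalty_member"):
        attrs["color"] = "gold"              # cosmetic perk for loyalty members
    attrs["items"] = list(participant.get("purchases", []))
    return attrs
```

A production system would presumably re-evaluate such a mapping whenever the participant's attributes change during the gaming activity, so the representation updates mid-game.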

It may be noted that, in some embodiments, the virtual representation 14 of the participant 12 may not have an appearance or shape that is substantially similar to the generated volume or shadow model. For example, in some embodiments, the interactive video game system 10 may include or be communicatively coupled to a pre-generated library of virtual representations based on fictional characters (e.g., avatars), and the system may select a particular virtual representation, or provide recommendations of particular selectable virtual representations to a participant, based generally on the generated volumetric or shadow model of the participant. For example, if the game involves a large hero and a small partner, the interactive video game system 10 may select or recommend from the pre-generated library a relatively large hero virtual representation for an adult participant and a relatively small partner virtual representation for a child participant.

The process 110 continues with the interactive video-game system 10 presenting (block 118) the corresponding virtual representation 14 of each participant in the virtual environment 32 on the display device 24. In addition to this presentation, in some embodiments, the actions in block 118 may also include presenting other introductory content, such as welcome messages or orientation/instructional information, to the participants 12 in the activity area 16 prior to commencing the gaming activity. Further, in certain embodiments, the processing circuitry 35 of the master controller 34 may also provide suitable signals to set or modify parameters of the environment within the active area 16. For example, these modifications may include adjusting the brightness and/or color of the house lighting, playing game music or game sound effects, adjusting the temperature of the activity area, activating physical effects in the activity area, and so forth.

Once the gaming activity is initiated, the virtual representations 14 generated in block 116 and presented in block 118 can interact with each other and/or with virtual objects (e.g., virtual objects 92 and 94) in the virtual environment 32, as discussed herein with respect to FIGS. 3 and 4. During a gaming activity, the interactive video game system 10 generally determines (block 120) in-game actions for each of the participants 12 in the activity area 16 and corresponding in-game effects of those in-game actions. Additionally, the interactive video game system 10 generally updates (block 122) the virtual environment 32 and/or the corresponding virtual representations 14 of the participants 12 based on the in-game actions of the participants 12 in the activity area 16 and the corresponding in-game effects determined in block 120. As indicated by arrow 124, the interactive video-game system 10 may repeat the steps indicated in blocks 120 and 122 until the gaming activity is completed, e.g., because one of the participants 12 won the round of gaming activity or because the allotted gaming activity time has expired.
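The loop formed by blocks 120 and 122 and arrow 124 can be sketched as follows, with placeholder callables standing in for the behavior described above; the function names and the tick-based structure are assumptions, not part of the disclosure.

```python
# Minimal sketch of the block 120 / block 122 loop: each tick, determine
# in-game actions and their effects, then update the virtual environment
# and representations, repeating until a win or the allotted time expires.

def run_gaming_activity(determine_actions, apply_updates, is_over, max_ticks=1000):
    """Run the gaming-activity loop; returns the number of ticks executed."""
    ticks = 0
    while not is_over() and ticks < max_ticks:
        actions_and_effects = determine_actions()   # block 120
        apply_updates(actions_and_effects)          # block 122
        ticks += 1
    return ticks
```

The `max_ticks` guard stands in for the allotted-time condition, while `is_over` stands in for a win condition ending the round early.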

FIG. 6 is a flowchart illustrating an example embodiment of a more detailed process 130 by which the interactive video-game system 10 performs the actions indicated in blocks 120 and 122 of FIG. 5. That is, the process 130 illustrated in FIG. 6 includes a plurality of steps for determining in-game actions of each participant in the active area and corresponding in-game effects of those in-game actions (as indicated by bracket 120), as well as a plurality of steps for updating the virtual environment and/or the corresponding virtual representation of each participant (as indicated by bracket 122). In some embodiments, the actions described in the process 130 may be encoded as instructions in a suitable memory, such as the memory circuitry 33 of the master controller 34, and executed by a suitable processor, such as the processing circuitry 35 of the master controller 34 of the interactive video game system 10. It should be noted that the illustrated process 130 is provided as an example only, and in other embodiments, certain actions described may be performed in a different order, may be repeated, or may be skipped altogether.

The process 130 of FIG. 6 begins with the processing circuitry 35 receiving (block 132) partial models from the plurality of sensing units in the active area. As discussed herein with respect to FIGS. 1 and 2, the interactive video-game system 10 includes an array 36 of sensing units 38 disposed in different locations around the active area 16, and each of these sensing units 38 is configured to generate one or more partial models (e.g., partial volume, shadow, and/or skeletal models) for at least a portion of the participants 12. Additionally, as mentioned, the processing circuitry 35 may also receive data from other devices (e.g., the RF scanner 45, the input devices 76) regarding the actions of the participants 12 disposed within the activity area 16. Further, as mentioned, these partial models may be time stamped based on signals from the system clock 100 and provided to the processing circuitry 35 of the master controller 34 via the high speed IP network 48.

For the illustrated embodiment of the process 130, after receiving the partial models from the sensing units 38, the processing circuitry 35 fuses the partial models to generate (block 134) updated models (e.g., volume, shadow, and/or skeletal models) for each participant based on the received partial models. For example, the processing circuitry 35 may update a previously generated model, such as the initial skeletal model generated in block 114 of the process 110 of FIG. 5. Additionally, as discussed, when combining the partial models, the processing circuitry 35 may filter or remove inconsistent or delayed data to improve accuracy in tracking the participants despite potential occlusions or network delays.

Next, the illustrated process 130 continues with the processing circuitry 35 identifying (block 136) one or more in-game actions for the corresponding virtual representation 14 of each participant 12 based at least in part on the updated model of the participant generated in block 134. For example, an in-game action may include jumping, running, sliding, or otherwise moving the virtual representation 14 within the virtual environment 32. In-game actions may also include interacting with (e.g., moving, obtaining, losing, consuming) items, such as virtual objects in the virtual environment 32. In-game actions may also include completing a goal, defeating another participant, winning a round, or other similar in-game actions.

Next, the processing circuitry 35 may determine (block 138) one or more in-game effects that are triggered in response to the identified in-game actions of each participant 12. For example, when the determined in-game action is a movement of a participant, the in-game effect may be a corresponding change in the position of the corresponding virtual representation within the virtual environment. When the determined in-game action is a jump, the in-game effect may include moving the virtual representation along the y-axis 28, as illustrated in FIGS. 1-4. When the determined in-game action is activating a particular power-up item, the in-game effect may include modifying a status (e.g., a health status, a power status) associated with the participant 12. Additionally, in some cases, the movement of the virtual representation 14 may be emphasized or augmented relative to the actual movement of the participant 12. For example, as discussed above with respect to modifying the appearance of the virtual representation, the movement of the virtual representation of a participant may be temporarily or permanently augmented (e.g., able to jump higher, able to jump farther) relative to the actual movement of the participant based on attributes associated with the participant (including items acquired during the gaming activity, items acquired during other gaming activity sessions, items purchased in gift shops, etc.).
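Blocks 136 and 138 can be illustrated together with a minimal sketch: a jump is inferred from the rise of a hip joint between successive skeletal models, and the resulting in-game effect is augmented when the participant holds a jump-related power-up. The joint name, threshold, and augmentation factor are hypothetical, not taken from the disclosure.

```python
# Illustrative sketch of block 136 (identify an in-game action from the
# updated skeletal model) and block 138 (map the action to an in-game
# effect, augmented by participant attributes).

def identify_action(prev_model, model, jump_threshold=0.15):
    """Detect a jump as a sufficient rise of the hip joint between frames."""
    rise = model["hip"][1] - prev_model["hip"][1]
    return "jump" if rise > jump_threshold else "idle"

def in_game_effect(action, participant_attrs):
    """Map an identified action to an effect; a jump power-up doubles the
    upward movement of the representation along the virtual y-axis."""
    if action == "jump":
        boost = 2.0 if participant_attrs.get("jump_power_up") else 1.0
        return {"move_y": 1.0 * boost}
    return {}
```

A fuller classifier would track velocities over several frames and cover running, sliding, and item interactions; this two-state version only shows the model-to-action-to-effect pipeline.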

The illustrated process 130 continues with the processing circuitry 35 generally updating the presentation to the participants in the active area 16 based on each participant's in-game actions and corresponding in-game effects, as indicated by bracket 122. In particular, the processing circuitry 35 updates (block 140) the corresponding virtual representation 14 of each participant 12 and the virtual environment 32 based on the updated models (e.g., shadow and skeletal models) of each participant generated in block 134, the in-game actions identified in block 136, and/or the in-game effects determined in block 138 to advance the gaming activity. For example, for the embodiment illustrated in FIGS. 1 and 2, the processing circuitry 35 may provide appropriate signals to the output controller 56 such that the processing circuitry 58 of the output controller 56 updates the virtual representations 14 and the virtual environment 32 presented on the display device 24.

Additionally, the processing circuitry 35 may provide suitable signals to generate (block 142) one or more sounds and/or (block 144) one or more physical effects in the active area 16 based at least in part on the determined in-game effects. For example, when the determined in-game effect is a participant's virtual representation splashing into a virtual pool, the main controller 34 may cause the output controller 56 to signal the speakers 62 to generate a suitable splashing sound and/or to cause the physical effect devices 78 to generate a burst of mist. Additionally, sounds and/or physical effects may be produced in response to any number of in-game effects, including, for example, obtaining a power-up, defeating a boss, scoring a point, or moving through a particular type of environment. As mentioned with respect to FIG. 5, the process 130 of FIG. 6 may repeat until the gaming activity is completed, as indicated by arrow 124.

Further, it may be noted that the interactive video-game system 10 may also implement other functionality using the volumetric scan data collected by the array 36 of volumetric sensors 38. For example, as mentioned, in certain embodiments, the processing circuitry 35 of the master controller 34 may generate a volumetric model that includes both the texture and the shape of each participant. At the conclusion of the gaming activity, the processing circuitry 35 of the master controller 34 may generate simulated images that use the volumetric model of a participant to render a 3D likeness of the participant within a portion of the virtual environment 32, and these may be provided (e.g., printed, electronically transferred) to the participants 12 as keepsakes of their gaming activity experience. For example, this may include printing a simulated image illustrating the volumetric model of the participant crossing a finish line within a scene from the virtual environment 32.

Technical effects of the present approach include an interactive video game system that enables a plurality of participants (e.g., two or more, four or more) to perform actions in a physical activity area (e.g., a 2D or 3D activity area) to control corresponding virtual representations in a virtual environment presented on a display device near the activity area. The disclosed system includes a plurality of sensors and suitable processing circuitry configured to collect volumetric scan data and generate various models, such as volumetric models, shadow models, and/or skeletal models, for each participant. The system generates a virtual representation of each participant based at least in part on the generated participant models. Additionally, the interactive video game system may set or modify attributes (such as size, texture, and/or color) of the virtual representations based on various attributes (such as points, purchases, or power-ups) associated with the participants.

While only certain features of the technology have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the technology. Additionally, the techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as "means for [perform]ing [function]..." or "step for [perform]ing [function]...", it is intended that such elements be construed in accordance with 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements not be construed in accordance with 35 U.S.C. 112(f).
