System and method for partitioning a video feed to segment live athlete activity


Note: This technology, "System and method for partitioning a video feed to segment live athlete activity," was designed and created by E. Schwartz, M. Naquin, C. Brown, S. Xing, P. Czarnecki, and C. D. Ebersol on 2020-01-21. Abstract: A process of partitioning a video feed to segment live athlete activity comprises: receiving, on a first recurring basis, a transmission of a central video feed from a first camera. The central video feed is calibrated for a spatial region, represented in at least two dimensions, that is encompassed by the central video feed. The process includes receiving, on a second recurring basis, respective time-stamped location information from each tracking device of a plurality of tracking devices. Each tracking device is worn by a corresponding subject on the spatial region and transmits location information describing the time-stamped location of the corresponding subject in the spatial region. The process uses the received information and the calibration to define a first sub-view of the central video feed associated with a first subject. The first sub-view includes a corresponding sub-frame associated with the first subject.

1. A system for partitioning a video feed to segment live athlete activity, the system comprising:

a communication interface configured to:

receive, on a first recurring basis, a transmission of a central video feed from a first camera, wherein the central video feed is calibrated for a spatial region, represented in at least two dimensions, that is encompassed by the central video feed; and

receive, on a second recurring basis, respective time-stamped location information from each tracking device of a plurality of tracking devices, wherein each tracking device of the plurality of tracking devices is (a) worn by a corresponding subject of a plurality of subjects participating in a competition on the spatial region, and (b) transmits location information describing a time-stamped location of the corresponding subject in the spatial region; and

a processor coupled to the communication interface and configured to:

define a first sub-view of the central video feed using the received time-stamped location information and the calibration of the central video feed, wherein the first sub-view is associated with a first subject included in the plurality of subjects and the first sub-view includes, for each frame of a plurality of frames of the central video feed, a corresponding sub-frame associated with the first subject; and

cause the first sub-view to be transmitted to a device configured to display the first sub-view.

2. The system of claim 1, wherein the central video feed is provided at a first resolution and the first sub-view is provided at a second resolution lower than the first resolution.

3. The system of claim 1, wherein calibration of the central video feed for a spatial region is based at least in part on known information about the spatial region.

4. The system of claim 3, wherein the known information about the spatial region comprises at least one of: a boundary line, a line of known and uniform size, a line of known length or thickness, a location, or a rule.

5. The system of claim 3, wherein the known information about the spatial region includes a position of the camera relative to the spatial region.

6. The system of claim 5, wherein the camera is fixed such that movement of the camera in at least one dimension is limited.

7. The system of claim 5, wherein the camera is located at a fixed location and is configured to perform at least one of: tilting, panning, or zooming.

8. The system of claim 1, wherein the first sub-view has a different form factor than the central video feed.

9. The system of claim 1, wherein the first sub-view is one of portrait or landscape and the central video feed is the other of portrait or landscape.

10. The system of claim 1, wherein the first sub-view tracks the first subject as the first subject moves through the spatial region, including by tracking the corresponding location information.

11. The system of claim 1, wherein the first subject is selected by a user at a remote device.

12. The system of claim 1, wherein the first subject is automatically selected based at least in part on being within a threshold distance to a ball or other subject.

13. The system of claim 1, wherein the first sub-view is transmitted independently of the central video feed.

14. The system of claim 1, wherein the first sub-view includes information relating to an identity of the first subject, the identity being displayable and selectable for display within the first sub-view.

15. The system of claim 1, wherein the processor is further configured to define a second sub-view of the central video feed using the received time-stamped location information and the calibration of the central video feed, wherein the second sub-view is associated with a second subject in at least one of: the same competition, a different simultaneous live competition, or a historical competition.

16. The system of claim 1, wherein:

each subject of the plurality of subjects has a plurality of sensors configured to collect sensor data;

a geometric center or other point or range of points is identified based at least in part on the collected sensor data and collective tracking information of the corresponding subject; and

defining the first sub-view is based at least in part on the identified geometric center or other point or range of points.

17. The system of claim 1, wherein the device configured to display the first sub-view is a remote device and is included in a plurality of remote devices.

18. The system of claim 1, wherein the device configured to display the first sub-view is a local device.

19. The system of claim 1, wherein the competition is a live sporting event involving (i) a first team comprising a first subset of the plurality of subjects and (ii) a second team comprising a second subset of the plurality of subjects.

20. The system of claim 1, wherein the plurality of subjects comprises at least one of: athletes or competition equipment.

21. A method of partitioning a video feed to segment live athlete activity, the method comprising:

receiving, on a first recurring basis, a transmission of a central video feed from a first camera, wherein the central video feed is calibrated for a spatial region, represented in at least two dimensions, that is encompassed by the central video feed;

receiving, on a second recurring basis, respective time-stamped location information from each tracking device of a plurality of tracking devices, wherein each tracking device of the plurality of tracking devices is (a) worn by a corresponding subject of a plurality of subjects participating in a competition on the spatial region, and (b) transmits location information describing a time-stamped location of the corresponding subject in the spatial region;

defining a first sub-view of the central video feed using the received time-stamped location information and the calibration of the central video feed, the first sub-view being associated with a first subject included in the plurality of subjects and the first sub-view including, for each frame of a plurality of frames of the central video feed, a corresponding sub-frame associated with the first subject; and

causing the first sub-view to be transmitted to a device configured to display the first sub-view.

22. A computer program product embodied in a non-transitory computer readable storage medium and comprising computer instructions for:

receiving, on a first recurring basis, a transmission of a central video feed from a first camera, wherein the central video feed is calibrated for a spatial region, represented in at least two dimensions, that is encompassed by the central video feed;

receiving, on a second recurring basis, respective time-stamped location information from each tracking device of a plurality of tracking devices, wherein each tracking device of the plurality of tracking devices is (a) worn by a corresponding subject of a plurality of subjects participating in a competition on the spatial region, and (b) transmits location information describing a time-stamped location of the corresponding subject in the spatial region;

defining a first sub-view of the central video feed using the received time-stamped location information and the calibration of the central video feed, the first sub-view being associated with a first subject included in the plurality of subjects, the first sub-view including, for each frame of a plurality of frames of the central video feed, a corresponding sub-frame associated with the first subject; and

causing the first sub-view to be transmitted to a device configured to display the first sub-view.

Background

Conventional camera tracking systems track objects by analyzing the images captured by each respective camera. For example, a camera captures a series of images and analyzes them to determine optical characteristics of the tracked object, such as a color associated with the object or a contour of the object. These optical characteristics are then identified in subsequent images, allowing the object to be tracked through the progression of the series of images. However, such conventional systems are prone to losing track of an object if the object moves quickly out of the camera's line of sight, or if multiple objects in the camera's line of sight are optically similar to the desired object. Accordingly, there is a need for improved object tracking and display systems.

Disclosure of Invention

Techniques, including systems, processes, and computer program products, are disclosed for partitioning a video feed to segment live athlete activity. In various embodiments, a process of partitioning a video feed to segment live athlete activity comprises: receiving, on a first recurring basis, a transmission of a central video feed from a first camera. The central video feed is calibrated for a spatial region, represented in at least two dimensions, that is encompassed by the central video feed. On a second recurring basis, the process receives respective time-stamped location information from each tracking device of a plurality of tracking devices, wherein each tracking device is (a) worn by a corresponding subject of a plurality of subjects participating in a competition on the spatial region, and (b) transmits location information describing a time-stamped location of the corresponding subject in the spatial region. The process uses the received time-stamped location information and the calibration of the central video feed to define a first sub-view of the central video feed. The first sub-view is associated with a first subject included in the plurality of subjects, and includes, for each frame of a plurality of frames of the central video feed, a corresponding sub-frame associated with the first subject. The process causes the first sub-view to be transmitted to a device configured to display the first sub-view.

Techniques for partitioning a video feed to segment live athlete activity are disclosed. These techniques enable better tracking of a subject of interest than conventional object tracking methods. In particular, the central video feed may be divided into sub-views, each centered on a subject of interest. For example, a video feed of a football game played on a field can be divided into isolated shots of a particular player or group of players, allowing spectators (e.g., fans, coaches, etc.) to track that player or group of players throughout the game. Conventionally, when a camera pans away from a particular athlete, the audience can no longer follow that athlete. By using the techniques disclosed herein for partitioning a video feed, a viewer can visually track a particular athlete throughout a game without interruption. A minimal sketch of this operation appears below.
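To make the mechanism concrete, the following Python fragment shows one way a per-frame sub-frame could be derived from calibration plus tracking data. It is illustrative only, not the patented implementation: it assumes the calibration is expressed as a 3x3 planar homography H mapping field coordinates to pixel coordinates, and the function names (world_to_pixel, sub_frame) are invented for the example.

```python
import numpy as np

def world_to_pixel(H, xy):
    """Map a field position to pixel coordinates via a 3x3 planar
    homography H (an assumed form of the camera calibration)."""
    p = H @ np.array([xy[0], xy[1], 1.0])
    return p[:2] / p[2]

def sub_frame(frame, H, subject_xy, out_w=1280, out_h=720):
    """Crop a fixed-size sub-frame centered on the tracked subject.
    Assumes the central frame is larger than the requested crop."""
    cx, cy = world_to_pixel(H, subject_xy)
    h, w = frame.shape[:2]
    # Clamp the crop window to the bounds of the central video frame.
    x0 = int(min(max(cx - out_w / 2, 0), w - out_w))
    y0 = int(min(max(cy - out_h / 2, 0), h - out_h))
    return frame[y0:y0 + out_h, x0:x0 + out_w]

# A sub-view is then the sequence of such sub-frames, one per frame of
# the central feed, indexed by the time-stamped tracking data.
```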

Drawings

Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.

Fig. 1 is a block diagram illustrating an embodiment of a system for partitioning a video feed to segment live athlete activity.

Fig. 2A shows a block diagram illustrating an embodiment of a system for partitioning a video feed to segment live athlete activity.

Fig. 2B shows a block diagram illustrating an embodiment of a system for partitioning a video feed to segment live athlete activity.

FIG. 3 is a block diagram illustrating an embodiment of a tracking device.

FIG. 4 is a block diagram illustrating an embodiment of a tracking device management system.

FIG. 5 is a block diagram illustrating an embodiment of a statistics system.

Fig. 6 is a block diagram illustrating an embodiment of an odds management system.

Fig. 7 is a block diagram illustrating an embodiment of a user device.

Fig. 8 is a flow diagram illustrating an embodiment of a process of partitioning a video feed to segment live athlete activity.

Fig. 9 illustrates an example environment including a playing field including a tracking component in accordance with an embodiment of the present disclosure.

Fig. 10A shows an example of a central video feed in accordance with an embodiment of the present disclosure.

Fig. 10B illustrates an example of a first sub-view and a second sub-view according to an embodiment of the present disclosure.

Fig. 10C illustrates an example of a first sub-view and a second sub-view according to an embodiment of the present disclosure.

Detailed Description

The invention can be implemented in numerous ways, including as a process; a device; a system; composition of matter; a computer program product embodied on a computer-readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that is manufactured to perform the task. As used herein, the term "processor" refers to one or more devices, circuits, and/or processing cores configured to process data (such as computer program instructions).

The following provides a detailed description of one or more embodiments of the invention, along with the accompanying drawings that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.

Fig. 1 is a block diagram illustrating an embodiment of a system for partitioning a video feed to segment live athlete activity. The exemplary system 48 partitions a video feed, such as a video feed of a competition between a first competitor and a second competitor. The first competitor includes a first group of one or more participants and the second competitor includes a second group of one or more participants. The system 48 includes a communication interface 107 and a processor 100. The communication interface 107 is configured to receive time-stamped location information of one or more participants in one or both of the first group of participants and the second group of participants in the competition. In various embodiments, the time-stamped location information is captured by a telemetry tracking system during the competition. In this example, the telemetry tracking system is comprised of tracking device(s) 300-1 through 300-P, anchor device(s) 120-1 through 120-Q, and optional camera(s) 140-1 through 140-S, which are managed by the tracker management system 400 as described further below.

The processor 100 is coupled to the communication interface 107 and is configured to calculate a first covariate parameter for each of the one or more participants in one or both of the first group of participants and the second group of participants, for example at and/or up to a point in time while the current competition is ongoing. Each respective first covariate parameter is derived from the time-stamped location information of the corresponding participant in the first or second group of one or more participants in the present competition at that point in time.

In various embodiments, processor 100 includes a tracking device management system 400 for tracking a plurality of subjects and a statistics system 500 for managing various statistics. The tracking device management system 400 facilitates management of one or more tracking devices 300 and one or more anchor devices 120 of the system. The statistics system 500 stores and/or generates various statistics for predicting outcomes of competitions such as live sporting events, providing odds for wagers on various conditions or outcomes in sporting events, and other similar activities. In various embodiments, the tracking device management system 400 and the statistics system 500 are software engines or modules running on processor 100 and/or on separate or possibly separate systems, each of which includes and/or runs on one or more processors.

In various embodiments, system 48 includes: an odds management system 600 for managing odds and a plurality of user devices 700-1 through 700-R. Although the odds management system 600 is shown external to the processor 100, in some embodiments the odds management system is included in the processor. The odds management system 600 facilitates determining odds for outcomes in sporting events and managing various models related to predicting outcomes of live events.

In some embodiments, the system includes one or more user devices 700, which facilitate end-user interaction with the various systems of the present disclosure (such as the odds management system 600). Further, in some embodiments, system 48 includes one or more cameras 140 that capture live images and/or video of the live event, which are then utilized by the systems of the present disclosure. In some embodiments, the cameras 140 include one or more high-resolution cameras. As non-limiting examples, the one or more high-resolution cameras include cameras having 1080p resolution, 1440p resolution, 2K resolution, 4K resolution, or 8K resolution. Utilizing a camera 140 with high resolution allows the video feed captured by the camera to be partitioned at higher resolution, while also allowing more partitions to be created without significant degradation in image quality.
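As a rough, non-authoritative illustration of why resolution matters: an 8K central feed contains enough pixels for sixteen non-overlapping 1080p sub-frames, so each sub-view can still be delivered at full HD without upscaling.

```python
# Sizes only; purely illustrative arithmetic.
central = (7680, 4320)   # 8K central feed (width, height)
sub = (1920, 1080)       # 1080p sub-view
per_axis = (central[0] // sub[0], central[1] // sub[1])
print(per_axis, per_axis[0] * per_axis[1])  # (4, 4) -> 16 non-overlapping crops
```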

The above components are optionally interconnected by a communication network. The elements in the dashed box may optionally be combined into a single system or device. Of course, other topologies for the computer system 48 are possible. For example, in some implementations, any of the illustrated devices and systems may actually constitute several computer systems linked together in a network, or may be virtual machines or containers in a cloud computing environment. Further, in some embodiments, the illustrated devices and systems do not rely on a physical communication network 106, but rather wirelessly transfer information between each other. Thus, the exemplary topology shown in fig. 1 is only used to describe features of embodiments of the present disclosure in a manner that will be readily understood by those skilled in the art.

In some implementations, the communication network 106 interconnects the following to each other: a tracking device management system 400 that manages one or more tracking devices 300 and one or more anchors 120, a statistics system 500, an odds management system 600, one or more user devices 700 and one or more cameras 140, and optionally external systems and devices. In some implementations, the communication network 106 optionally includes the internet, one or more Local Area Networks (LANs), one or more Wide Area Networks (WANs), other types of networks, or a combination of such networks.

Examples of network 106 include: the World Wide Web (WWW), intranets, and/or wireless networks such as cellular telephone networks, wireless local area networks (LANs), and/or metropolitan area networks (MANs), and other devices that communicate wirelessly. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution-Data Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.

In various embodiments, processor 100 includes a machine learning engine 210 (not shown in fig. 1) that facilitates predicting an outcome of the competition. The following figures describe in more detail an example of a processor 100 that includes a machine learning engine.

Fig. 2A and 2B show block diagrams illustrating an embodiment of a system for partitioning a video feed to segment live athlete activity. As depicted in fig. 2A, the anchor device array 120 receives telemetry data 230 from one or more tracking devices 300. To minimize errors when receiving telemetry from the one or more tracking devices 300, the anchor device array 120 preferably includes at least three anchor devices. Including at least three anchor devices 120 within the array allows each ping (e.g., telemetry data 230) received from a respective tracking device 300 to be triangulated using the combined data from the at least three anchors that receive the respective ping. Additional details and information regarding the systems and methods for receiving pings from tracking devices, and their optimization, are described in more detail below, particularly with reference to at least figs. 3 and 4.
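The patent does not spell out the triangulation math, but the standard least-squares multilateration that three or more anchors enable looks roughly like the following sketch (in a UWB system, distances would typically be derived from ping time-of-flight; all names here are illustrative):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a tag position from >= 3 anchor positions and measured
    distances, by linearizing the sphere equations and solving in a
    least-squares sense (illustrative; not the patent's exact solver)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first anchor's sphere equation from the others to
    # obtain a linear system A x = b in the unknown position x.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three anchors around a field; a tag equidistant from all three
# resolves to approximately (50, 25).
print(trilaterate([(0, 0), (100, 0), (0, 50)], [55.9, 55.9, 55.9]))
```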

In the example shown, the telemetry data 230 received by the anchor array 120 from the one or more tracking devices 300 includes location telemetry data 232. The location telemetry data 232 provides location data for the respective tracking device 300 that describes the location of the tracking device within the spatial region. In some embodiments, the location telemetry data 232 is provided as one or more Cartesian coordinates (e.g., X, Y, and/or Z coordinates) describing the location of each respective tracking device 300, although any coordinate system (e.g., polar, etc.) that describes the location of each respective tracking device 300 may be used in alternative embodiments.

The telemetry data 230 received by the anchor array 120 from the one or more tracking devices 300 also includes kinetic telemetry data 234. The kinetic telemetry data 234 provides data relating to various kinematics of the respective tracking device. In some embodiments, the kinetic telemetry data 234 is provided as a velocity of the respective tracking device 300, an acceleration of the respective tracking device, and/or a jerk of the respective tracking device. Additionally, in some embodiments, one or more of the above values are determined from an accelerometer (e.g., accelerometer 317 of fig. 3) of the respective tracking device 300 and/or are derived from the location telemetry data 232 of the respective tracking device. Additionally, in some embodiments, the telemetry data 230 received by the anchor array 120 from the one or more tracking devices 300 includes biometric telemetry data 236. The biometric telemetry data 236 provides biometric information about each subject associated with a respective tracking device 300. In some embodiments, the biometric information includes the subject's heart rate, temperature (e.g., skin temperature, etc.), and the like.
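Taken together, the three telemetry categories can be pictured as one time-stamped record per ping. The record below is a hypothetical illustration; the patent does not define a wire format, and every field name is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TelemetrySample:
    """One illustrative tracking-device report (field names invented)."""
    device_id: str                        # tracking device identifier 306
    timestamp_ms: int                     # time stamp for the location
    x: float                              # location telemetry 232
    y: float
    z: Optional[float] = None
    speed: Optional[float] = None         # kinetic telemetry 234
    acceleration: Optional[float] = None
    jerk: Optional[float] = None
    heart_rate_bpm: Optional[int] = None  # biometric telemetry 236
```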

In some embodiments, the anchor array 120 transmits the above-described telemetry data (e.g., location telemetry 232, kinetic telemetry 234, biometric telemetry 236) to a telemetry parsing system 240. In turn, in some embodiments, the telemetry parsing system 240 transmits the telemetry data (e.g., data stream 244) to the machine learning engine 210 and/or the real-time data packager 246 for further processing and analysis.

In some embodiments, the real-time data packager 246 synchronizes one or more data sources (e.g., streaming data 244 from the telemetry parsing system 240, the game statistics input system 250, the machine learning engine 210, etc.) by using one or more timestamps associated with the respective data. For example, in some embodiments, a data source provides data associated with a real-world clock timestamp (e.g., an event occurs and is associated with a real-world time of 1:17 PM). In some embodiments, a data source provides data associated with a game clock timestamp for a live sporting event (e.g., an event occurs with 2 minutes 15 seconds remaining in the second quarter). Further, in some embodiments, a data source provides data associated with both real-world clock timestamps and game clock timestamps. Synchronization of data sources via timestamps allows designers of the present disclosure to provide services with an additional level of accuracy, particularly in the case of wagers and bets on outcomes at live events. For example, in some embodiments, the data (e.g., the streaming data 280 and/or the direct data 282 of fig. 2B) provided to the user device 700 describes a wager (e.g., odds) on the next play in the football game. To determine whether the end user of the user device 700 placed a wager within a predetermined time window (e.g., prior to the snap of the next play), the game clock and real-world time data received from and/or transmitted to the user device are analyzed, and the wager is verified, rejected, or held for further consideration.
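As a concrete, purely hypothetical illustration of the timestamp-window check described above (the cutoff value and return labels are invented):

```python
def validate_wager(placed_at_ms: int, snap_at_ms: int, cutoff_ms: int = 1000) -> str:
    """Classify a wager by when it was placed relative to the next snap.
    Betting is assumed, for illustration, to close cutoff_ms before the snap."""
    if placed_at_ms >= snap_at_ms:
        return "rejected"   # placed after the play began
    if snap_at_ms - placed_at_ms < cutoff_ms:
        return "held"       # too close to the snap; hold for review
    return "verified"

print(validate_wager(placed_at_ms=1_000, snap_at_ms=4_000))  # verified
```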

In some embodiments, the machine learning engine 210 receives data from the various sources of the present disclosure in order to predict future outcomes of live sporting events and to generate statistics for analysis and use. For example, in some embodiments, the data sources of the machine learning engine 210 include a positional data formation classifier 212 (e.g., a neural network), hereinafter referred to as the "formation classifier," which provides information about the various configurations and formations of the athletes at any given point in time in the game. For example, in some embodiments, the formation classifier 212 parses the telemetry data 230 to analyze a pre-snap formation of the athletes. Analysis of the pre-snap telemetry data 230 allows the formation classifier 212 to determine various states and conditions of the game, such as which team is on offense and position rule violations within the game (e.g., offside, illegal motion, etc.). Further, in some embodiments, the formation classifier 212 analyzes telemetry data 230 received after the start of a play to further generate data and information regarding how each formation evolves (e.g., expected running route versus actual running route, expected blocking assignment versus actual blocking assignment, speed of athletes throughout the play, distance between two athletes throughout the play, etc.).

In some embodiments, the machine learning engine 210 includes a historical training data store 214. Historical data store 214 provides historical data and information related to each particular athletic activity (e.g., athletic activity historical data 508 of fig. 5), each particular team associated with the particular athletic activity (e.g., team historical data 510 of fig. 5), and/or each particular athlete associated with the particular athletic activity and/or team (e.g., athlete historical data 514 of fig. 5). In some embodiments, this data is initially used as a training data set for the machine learning engine 210. However, the present disclosure is not so limited, as this data may also be used to further augment the features and services provided by the machine learning engine 210 and other systems of the present disclosure.

Additionally, in some embodiments, the machine learning engine 210 includes various models 220 to predict future outcomes of sporting events and provide analysis of sporting events. In some embodiments, the models 220 of the machine learning engine 210 include an expected points model 222. The expected points model 222 provides, as a numerical value, the likelihood of scoring points on a particular play at the event. In some embodiments, the models 220 of the machine learning engine 210 include a win probability model 224 that provides the likelihood of each participating team at the event winning, or the likelihood of any given point spread between the winning and losing teams at the event. Additionally, in some embodiments, the models 220 of the machine learning engine 210 include a player-based Wins Above Replacement (WAR) model 226. The WAR model 226 provides the contribution of value that a respective player adds to the player's corresponding team (e.g., player 1 provides a first value to the respective team and player 2 provides a second, greater value to the respective team, so player 2 is more valuable to the respective team).

In some embodiments, the machine learning engine 210 includes a situation store 228. The situation store 228 is a cache of various situational details and/or statistics that can be accessed quickly during a live game scenario. Quick access to the situation store 228 prevents the lag that would otherwise arise from querying different databases and systems (e.g., the formation classifier 212, the historical training data store 214, etc.) for the same information. Additional details and information regarding the machine learning engine and the components therein, including the various data stores and models described above, are described in greater detail below, particularly with reference to at least figs. 5 and 6.

The machine learning engine 210 communicates the various odds and outputs of its databases and models to the odds management system 600. In communication with the machine learning engine 210, the odds management system 600 provides user devices 700 with various wagers and predicted odds for future events at a sporting event, while also updating these odds in real time to reflect the current situation and statistics of the game.

As depicted in fig. 2B, in some embodiments, system 48 includes a game statistics input system 250. The game statistics input system 250 is configured to provide at least: in-play data 254, which, in the example case of football, describes the state of the game during a given play (e.g., the weak-side receiver is running a post route); and end-of-play data 256, which describes the state of the game after a given play (e.g., the play results in a first down at the opponent's 42-yard line). In some embodiments, the data of the game statistics input system 250 is associated with the world and game clocks 242 and is accordingly transmitted to the telemetry parsing system 240 and/or the machine learning engine 210. In some embodiments, the game statistics input system 250 is included in the formation classifier 212.

In some embodiments, various data is transmitted to an Application Programming Interface (API) server 260. The data may include streaming data 244, end-of-play data 256, data from the odds management system 600, or a combination thereof. The API server 260 facilitates communication between the various components of the system, the one or more user devices 700, and the master statistics database 270 in order to provide the various features and services of the present disclosure (e.g., streaming of games, requests for statistics, wagering on games, etc.). Communication between the API server 260 and the one or more user devices 700 includes providing streaming data 280 and/or direct data 282 to each respective user device 700 over the communication network 106, and receiving various requests 284 from each respective user device. By way of non-limiting example, the streaming data 280 includes tracking ("telemetry") data, such as the xyz coordinates of an athlete or the accelerometer data of an athlete, while the direct data 282 includes the clock, the score, or the timeouts remaining.

In some embodiments, the master statistics database 270 includes some or all of the statistics known to the machine learning engine 210 that are available to users. The master statistics database is updated periodically, such as at the end of each game or at the end of every few games. For example, in some embodiments, only a portion of the statistics known to the machine learning engine 210 is made available to users and, accordingly, stored in the master statistics database 270. However, the present disclosure is not limited thereto. For example, in some embodiments, the master statistics database 270 is included in the machine learning engine 210. The elements in the dashed box may optionally be combined into a single system or device.

Now that the infrastructure of the system 48 has been generally described, an exemplary tracking device 300 will be described with reference to FIG. 3.

FIG. 3 is a block diagram illustrating an embodiment of a tracking device. In various implementations, a tracking device, also referred to hereinafter as a "tracker," includes: one or more processing units (CPUs) 374, a memory 302 (e.g., random access memory), one or more disk storage and/or persistent storage devices 390 optionally accessible by one or more controllers 388, a network or other communication interface (which may include RF circuitry) 384, an accelerometer 317, one or more optional intensity sensors 364, an optional input/output (I/O) subsystem 366, one or more communication buses 313 for interconnecting the above components, and a power supply 376 for powering the above components. In some implementations, the data in the memory 302 is seamlessly shared with the non-volatile memory 390 using known computing techniques such as caching. In some implementations, the memory 302 and/or the memory 390 may actually be hosted on a computer external to the tracking device 300, but electronically accessible by the tracking device 300 over the internet, an intranet, or another form of network or electronic cable (illustrated as element 106 in fig. 1) using the network interface 384.

In various embodiments, the tracking device 300 illustrated in fig. 3 includes, in addition to the accelerometer(s) 317, a magnetometer and/or a GPS (or GLONASS or other global navigation system) receiver for obtaining information about the position and/or orientation (e.g., portrait or landscape) of the tracking device 300.

It should be appreciated that the tracking device 300 illustrated in fig. 3 is merely one example of a device that may be used to obtain telemetry data (e.g., location telemetry 232, kinetic telemetry 234, and biometric telemetry 236) for a corresponding subject, and that the tracking device 300 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 3 are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application-specific integrated circuits.

The memory 302 of the tracking device 300 illustrated in fig. 3 optionally includes high-speed random access memory, and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 302 by other components of the tracking device 300, such as the CPU(s) 374, is optionally controlled by a memory controller 388.

In some embodiments, CPU(s) 374 and memory controller 388 are optionally implemented on a single chip. In some other embodiments, CPU(s) 374 and memory controller 388 are implemented on separate chips.

The Radio Frequency (RF) circuitry of network interface 384 receives and transmits RF signals, also referred to as electromagnetic signals. In some embodiments, the RF circuitry 384 converts electrical signals to/from electromagnetic signals and communicates via electromagnetic signals with a communication network and other communication devices, such as one or more anchor devices 120 and/or the tracking device management system 400. The RF circuitry 384 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. In some embodiments, the RF circuitry 384 optionally communicates with the communication network 106.

In some embodiments, the network interface (including RF circuitry) 384 operates via ultra-wideband (UWB) technology, which allows the tracking device 300 to communicate with the array of anchor devices 120 in a crowded spatial region, such as a live sporting event. In some embodiments, the tracking device 300 transmits a low-power (e.g., approximately 1 milliwatt (mW)) signal at a predetermined center frequency (e.g., 6.55 GHz ± 200 MHz, yielding a total transmission frequency range of about 6.35 GHz to about 6.75 GHz). As used herein, these communications and transmissions are referred to hereinafter as "pings." For a discussion of UWB, see Jiang et al., "Ultra-wide band technology applications in construction: a review," Organization, Technology and Management in Construction 2(2), 207-213.

In some embodiments, the power supply 358 optionally includes a power management system, one or more power sources (e.g., a battery), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in such tracking devices 300.

In some implementations, the memory 302 of the tracking device 300 for tracking the respective subject stores:

the operating system 304 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components;

a tracking device identifier module 305 that stores data for identifying the respective tracking device 300, including a tracking device identifier 306 and optionally a tracking device group identifier 307; and

a tracking device ping module 308 that stores data and information related to a ping rate of a respective tracking device, the tracking device ping module 308 comprising:

an instantaneous ping rate 310, which describes the ping rate at which the respective tracking device 300 is currently operating,

a minimum ping rate 312, which describes the minimum ping rate at which the respective tracking device 300 may operate,

a maximum ping rate 314, which describes the maximum ping rate at which the respective tracking device 300 may operate,

a threshold 316, which is compared against detected motion in order to adjust the instantaneous ping rate 310 (as described below), and

a variable ping rate flag 318.

The tracking device identifier module 305 stores information for identifying a respective tracking device 300 from among a plurality of tracking devices (e.g., tracking device 1 300-1, tracking device 2 300-2, ..., tracking device P 300-P). In some embodiments, the information stored by the tracking device identifier module 305 includes a tracking device identifier (ID) 306, which includes a unique ID (e.g., a serial number or code) representing the respective tracking device 300. In some embodiments, the tracking device ID module 305 includes a tracking device group ID 307 that assigns the respective tracking device 300 to one or more tracking device groups (e.g., tracking device group 418-2 of fig. 4). Additionally, in some embodiments, the pings transmitted by a respective tracking device 300 include the data of the tracking device ID module 305, thereby allowing the anchor device array 120 to identify pings received from more than one tracking device. Additional details and information regarding the grouping of tracking devices 300 will be described in more detail below, particularly with reference to at least fig. 4.

The tracking device ping module 308 stores data and information related to various ping parameters and conditions of the respective tracking device 300 and facilitates management of pings. For example, in some embodiments, the tracking device ping module 308 manages the instantaneous ping rate 310 of the respective tracking device 300 (e.g., manages the instantaneous ping rate 310 to be 10 Hertz (Hz)). In some embodiments, the tracking device 300 is configured with one or more ping rate limits, including one or both of a minimum ping rate 312 and a maximum ping rate 314, that define the minimum and maximum ping rates at which the tracking device 300 may transmit pings. For example, in some embodiments, the minimum ping rate 312 and/or the maximum ping rate 314 may be set by the tracking device management system 400 based on one or more of bandwidth limitations, the number of active tracking devices 300, and the type of activity expected (e.g., sport and/or event type, expected subject activity, etc.). When configured with one or both ping rate limits, the tracking device ping module 308 operates to adjust the instantaneous ping rate 310 between the minimum ping rate 312 and the maximum ping rate 314. Accordingly, the automatic optimization of the tracking device management system 400 may be used in conjunction with the automatic ping rate adjustment of the tracking device 300. In some embodiments, the tracking device ping module 308 is configured to compare the motion detected by the accelerometer 317 to the predefined threshold 316. The ping module 308 increases the instantaneous ping rate 310 (e.g., until the instantaneous ping rate 310 reaches the maximum ping rate 314) in accordance with a determination that the detected motion is greater than the predefined threshold 316. Similarly, the ping module 308 decreases the instantaneous ping rate 310 (e.g., until the instantaneous ping rate 310 reaches the minimum ping rate 312) in accordance with a determination that the detected motion is less than the predefined threshold 316.
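The adjustment behavior described in this paragraph can be sketched as follows; the rates, step size, and method names are assumptions for illustration, not values from the patent:

```python
class PingModule:
    """Minimal sketch of variable ping-rate adjustment (illustrative)."""

    def __init__(self, min_rate=1.0, max_rate=20.0, motion_threshold=0.5):
        self.instantaneous_rate = min_rate        # Hz (310)
        self.min_rate = min_rate                  # Hz (312)
        self.max_rate = max_rate                  # Hz (314)
        self.motion_threshold = motion_threshold  # threshold (316)
        self.variable_rate = True                 # flag (318)

    def update(self, detected_motion: float, step: float = 1.0) -> float:
        """Raise the ping rate when detected motion exceeds the threshold,
        lower it otherwise, clamped to [min_rate, max_rate]."""
        if not self.variable_rate:
            return self.instantaneous_rate
        if detected_motion > self.motion_threshold:
            self.instantaneous_rate = min(self.instantaneous_rate + step, self.max_rate)
        else:
            self.instantaneous_rate = max(self.instantaneous_rate - step, self.min_rate)
        return self.instantaneous_rate
```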

In some embodiments, the ping module 308 includes a variable ping rate flag 318 that is configured (e.g., set wirelessly) by the tracking device management system 400 and that determines whether the ping module 308 automatically changes the instantaneous ping rate 310 based on the determined activity. For example, the tracking device management system 400 may set the variable ping rate flag 318 to "false" for one or more tracking devices 300 associated with athletes not currently participating on the field of play, in which case the instantaneous ping rate 310 remains at a low rate even if, for example, an athlete is actively warming up. Conversely, the tracking device management system 400 sets the variable ping rate flag 318 to "true" for one or more athletes actively participating on the field of play. Additionally, in some embodiments, each tracking device 300 is configured dynamically based on the location of the respective tracking device, for example configured one way in accordance with a determination that the tracking device 300 is within the field of play (e.g., if the athlete is actively participating in the game) and another way in accordance with a determination that the tracking device is outside the field of play (e.g., if the athlete is not actively participating in the game).

Utilizing the tracking device ping module 308 and/or the sensors within the tracking device 300 (e.g., the accelerometer 317 and/or the optional sensors 364) increases the reliability of the system 48 (e.g., the anchor array 120, the telemetry parsing system 240, the tracking device management system 400, etc.) in tracking the subject on which the tracking device is disposed.

As previously described, in some embodiments, each tracking device 300 provides telemetry data 230, which is received by the various anchors 120 proximate the respective tracking device 300 and relayed onward. The telemetry data includes location telemetry data 232 (e.g., X, Y, and/or Z coordinates), kinetic telemetry data 234 (e.g., velocity, acceleration, and/or jerk), and/or biometric telemetry data 236 (e.g., heart rate, and physical attributes of the athlete, such as shoulder width).

In some embodiments, each subject in the competition is equipped with more than one tracking device 300 in order to increase the accuracy of the data received from the tracking devices about that subject. For example, in some embodiments, both the left and right shoulders of a respective subject are equipped with tracking devices 300, each of which functions normally and has a line of sight to at least a subset of the anchors 120. Accordingly, in some embodiments, the telemetry data 230 from the left and right tracking devices 300 is combined to form a single time-stamped object. The single object combines the position data from the two tracking devices 300 to create a centerline representation of the position of the respective athlete. The calculated centerline position provides a more accurate representation of the center of the player's position on the field of play. In addition, using the relative position data from the two tracking devices 300 positioned on the player's left and right shoulders allows the system 48 to determine the direction (e.g., rotation) the player is facing before the single player object is created as described above. In various embodiments, the inclusion of rotation data greatly simplifies the task of creating an avatar from the telemetry data 230 recorded during the game, and/or of establishing complex covariates that can be used to better predict future events in the game or the final outcome of the game itself.
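A minimal sketch of the left/right combination described above, assuming 2-D shoulder positions; note that choosing which perpendicular to the shoulder axis is "forward" would require motion history in a real system, which is glossed over here:

```python
import numpy as np

def centerline_and_heading(left_xy, right_xy):
    """Combine left- and right-shoulder tracker positions into a single
    centerline position plus a facing direction (illustrative only)."""
    left = np.asarray(left_xy, dtype=float)
    right = np.asarray(right_xy, dtype=float)
    center = (left + right) / 2.0
    shoulder = right - left
    # The player faces perpendicular to the shoulder-to-shoulder axis;
    # the sign of "forward" is assumed here, not derived.
    heading_deg = np.degrees(np.arctan2(shoulder[0], -shoulder[1]))
    return center, heading_deg

center, heading = centerline_and_heading((10.0, 5.0), (10.0, 6.0))
print(center, heading)  # approximately [10. 5.5] and 180.0 degrees
```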

In some embodiments, the tracking device 300 has any or all of the circuitry, hardware components, and software components found in the device depicted in FIG. 3. For the sake of brevity and clarity, only a few of the possible components of the tracking device 300 are shown to better emphasize the additional software modules installed on the tracking device 300.

FIG. 4 is a block diagram illustrating an embodiment of a tracking device management system. Tracking device management system 400 is associated with one or more tracking devices 300 and anchors 120. The tracking device management system 400 includes: one or more processing units (CPUs) 474, a peripheral interface 470, a memory controller 488, a network or other communication interface 484, memory 402 (e.g., random access memory), a user interface 478 (the user interface 478 includes a display 482 and an input 480 (e.g., keyboard, keypad, touch screen, etc.)), an input/output (I/O) subsystem 466, one or more communication buses 413 for interconnecting the above components, and a power supply system 476 for supplying power to the above components.

In some embodiments, input 480 is a touch-sensitive display, such as a touch-sensitive surface. In some embodiments, user interface 478 includes one or more soft keyboard embodiments. The soft keyboard embodiment may include a standard (QWERTY) and/or non-standard configuration of symbols on the displayed icons.

It should be appreciated that the tracking device management system 400 is merely one example of a system that may be used to interface with the various tracking devices 300, and that the tracking device management system 400 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 4 are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.

The memory 402 optionally includes high-speed random access memory, and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 402 by other components of the management system 400, such as the CPU(s) 474, is optionally controlled by a memory controller 488.

Peripheral interface 470 may be used to couple input and output peripherals of the management system to the CPU(s) 474 and memory 402. The one or more processors 474 run or execute various software programs and/or sets of instructions stored in the memory 402 to perform various functions for the management system 400 and to process data.

In some embodiments, peripheral interface 470, CPU(s) 474, and memory controller 488 are optionally implemented on a single chip. In some other embodiments, they are optionally implemented on separate chips.

In some embodiments, power supply system 476 optionally includes a power management system, one or more power sources (e.g., batteries, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED), etc.), and any other components associated with the generation, management, and distribution of power in a portable device.

As illustrated in fig. 4, the memory 402 of the tracking device management system preferably stores the following:

the operating system 404 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between the various hardware and software components; and

a tracking device manager module 406 for facilitating management of one or more tracking devices 300, the tracking device manager module comprising:

a tracking device identifier store 408 for storing relevant information about each respective tracking device 410-1, including the tracking device identifier 306 and the tracking device ping rate 414; and

a tracking device group store 416 to facilitate management of one or more tracking device groups 307.

The tracking device identifier store 408 includes information about each respective tracking device 410-1, including the tracking device identifier (ID) 306 of each respective tracking device 300 and the tracking device group 307 with which the respective tracking device is associated. For example, in some embodiments, a first tracking device group 307-1 is associated with the left shoulder of each respective subject and a second tracking device group 307-2 is associated with the right shoulder of each respective subject. Further, in some embodiments, a third tracking device group 307-3 is associated with a first position (e.g., receiver, defender, safety, etc.) of each respective subject, and a fourth tracking device group 307-4 is associated with a second position. Grouping 307 of the tracking devices 300 allows a particular group to be designated a particular ping rate (e.g., a faster ping rate for running backs). Grouping 307 of the tracking devices 300 also allows a particular group to be isolated from other tracking devices not associated with the respective group, which is useful when viewing a representation of the telemetry data 230 provided by the tracking devices of that group. Additional information about tracking devices and tracking device management systems may be found in U.S. Patent No. 9,950,238, entitled "Object Tracking System Optimization and Tools."
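The grouping mechanism amounts to pushing a per-group configuration to each tracker. A hypothetical sketch, with invented group names and rates:

```python
# Hypothetical per-group configuration; names and rates are invented.
TRACKER_GROUPS = {
    "left_shoulder": {"ping_rate_hz": 10},
    "right_shoulder": {"ping_rate_hz": 10},
    "running_backs": {"ping_rate_hz": 20},  # faster rate for faster players
    "sideline": {"ping_rate_hz": 1},        # not actively participating
}

def configure(tracker_id: str, group: str) -> dict:
    """Build the configuration pushed to one tracker based on its group."""
    return {"id": tracker_id, "group": group, **TRACKER_GROUPS[group]}

print(configure("300-1", "running_backs"))
```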

FIG. 5 is a block diagram illustrating an embodiment of a statistics system. In accordance with the present disclosure, the statistics system 500 stores and determines various statistics. The statistics system 500 includes: one or more processing units (CPUs) 574, a peripheral interface 570, a memory controller 588, a network or other communication interface 584, memory 502 (e.g., random access memory), a user interface 578 (the user interface 578 including a display 582 and an input 580 (e.g., keyboard, keypad, touch screen, etc.)), an input/output (I/O) subsystem 566, one or more communication buses 513 for interconnecting the above components, and a power supply system 576 for supplying power to the above components.

In some embodiments, input 580 is a touch-sensitive display, such as a touch-sensitive surface. In some embodiments, user interface 578 includes one or more soft keyboard embodiments. The soft keyboard embodiment may include a standard (QWERTY) and/or non-standard configuration of symbols on the displayed icons.

It should be appreciated that the statistics system 500 is merely one example of a system that may be used to store and determine various statistics, and that the statistics system 500 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 5 are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application-specific integrated circuits.

The memory 502 optionally includes high-speed random access memory, and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 502 by other components of statistics system 500, such as CPU(s) 574, is optionally controlled by memory controller 588.

Peripheral interface 570 may be used to couple input and output peripherals of the statistics system to the CPU(s) 574 and memory 502. The one or more processors 574 run or execute various software programs and/or sets of instructions stored in the memory 502 to perform various functions of the statistics system 500 and to process data.

In some embodiments, peripheral interface 570, CPU(s) 574, and memory controller 588 are optionally implemented on a single chip. In some other embodiments, they are optionally implemented on separate chips.

In some embodiments, the power supply system 576 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED), etc.), and any other components associated with the generation, management, and distribution of power in a portable device.

As illustrated in fig. 5, the memory 502 of the statistics system preferably stores the following:

an operating system 504 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components;

a positional formation classifier 212 for determining and analyzing formations of the athletes;

a historical training data store 214 for storing various statistics relating to each sporting event 508, wherein each sporting event 508 includes various team historical data 510 for one or more teams 512 and various athlete statistics 514 for one or more athletes 516; and

a situation store 228 for storing data relating to athlete formations and game situations.

The positional formation classifier 212 (sometimes referred to simply as the formation classifier) provides information about the various states and formations of the athletes at any given point in the game. For example, in some embodiments, the formation classifier 212 parses the telemetry data 230 to determine a pre-snap formation. Once the formation is determined and the telemetry data 230 is parsed, sub-categories of the formation may be determined (e.g., variants of the I-formation that are defined by different running back arrangements). Further, in some embodiments, the formation classifier 212 acts as a virtual referee and determines whether a violation has occurred during a play or game, such as a player being offside, a neutral zone infraction, an illegal motion, an illegal formation, and so forth. In some embodiments, the formation classifier 212 includes one or more tables of the various formations in a football game, such as a first table of offensive formations, a second table of defensive formations, and a third table of special teams formations. In some embodiments, these tables provide some or all of the formations described in Tables 1, 2, and 3.

TABLE 1. Exemplary offensive football formations

Exemplary formations
Double wing formation
Empty backfield formation
Goal line formation
I-formation
Pistol formation
Pro set formation
Short punt formation
Shotgun formation
Single set back formation
Single wing formation
T-formation
Tackle spread formation
V-formation
Victory formation
Wing T formation
Wishbone formation

TABLE 2. Exemplary defensive football formations

Exemplary formations
38 formation
46 formation
2-5 formation
3-4 formation
4-3 formation
4-4 formation
5-2 formation
5-3 formation
6-1 formation
6-2 formation
Seven-man line formation
Nickel formation
Dime formation
Quarter formation
Half dollar formation

TABLE 3. Exemplary special teams football formations

Exemplary formations
Field goal formation
Kick return formation
Kickoff formation
Punt formation

Additionally, in some embodiments, the formation classifier 212 determines the ball carrier by comparing the telemetry 230 provided by the ball with the telemetry of the player closest to the ball. Likewise, in some embodiments, determining which team has possession of the ball is done in a similar manner. Additionally, in some embodiments, the formation classifier 212 determines whether an athlete is in bounds by analyzing the telemetry 230 extracted from the athlete and comparing it to the known boundaries of the playing field. In this manner, the formation classifier 212 parses the telemetry data 230 to provide a scorecard and/or automatic color commentary for the game.
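The ball-carrier determination described above reduces to a nearest-neighbor comparison between the ball's telemetry and each player's telemetry at a common timestamp. The following Python sketch is purely illustrative; the identifiers and coordinates are hypothetical and not part of the disclosure.

```python
# Illustrative sketch: infer the ball carrier by comparing the ball's
# telemetry with each player's telemetry at the same timestamp.
import math

def nearest_subject(ball_xy, player_positions):
    """Return the ID of the player closest to the ball.

    ball_xy: (x, y) field coordinates of the ball at one timestamp.
    player_positions: dict mapping player ID -> (x, y) at that timestamp.
    """
    def dist_to_ball(pid):
        px, py = player_positions[pid]
        return math.hypot(px - ball_xy[0], py - ball_xy[1])
    return min(player_positions, key=dist_to_ball)

# Possession can then be attributed to the team of the nearest player.
positions = {"QB-12": (24.1, 26.5), "WR-88": (40.3, 8.2), "CB-24": (41.0, 7.9)}
print(nearest_subject((40.5, 8.0), positions))  # -> "WR-88"
```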

Although the formation classifier 212 is labeled as a "neural network," it should be appreciated that the formation classifier 212 module need not use a neural network classifier to perform the classification of team formations. In some embodiments, the formation classifier 212 module utilizes virtually any classification scheme that can distinguish formation patterns from telemetry data. For example, in some embodiments, the formation classifier 212 utilizes a nearest neighbor algorithm to perform the classification of team formations. In other embodiments, the formation classifier 212 utilizes clustering to perform the classification of team formations. In some embodiments, the formation classifications produced by the formation classifier 212 are used as covariates in a statistical model to predict the outcome (e.g., win/loss, point spread, etc.) of the current live game, as disclosed with respect to the methods and features described in fig. 8.

In more detail, in some embodiments, the formation classifier 212 is based on a logistic regression algorithm, a neural network algorithm, a support vector machine (SVM) algorithm, a naive Bayes algorithm, a nearest neighbor algorithm, a boosted trees algorithm, a random forest algorithm, or a decision tree algorithm. When used for classification, an SVM separates a given set of binary-labeled training data with the hyperplane that is maximally distant from the labeled data. For cases in which linear separation is not possible, the SVM can work in combination with the technique of "kernels," which automatically realizes a non-linear mapping to a feature space. The hyperplane found by the SVM in the feature space corresponds to a non-linear decision boundary in the input space. Tree-based methods partition the feature space into a set of rectangles and then fit a model (such as a constant) in each one. In some embodiments, the decision tree is a random forest regression. One specific algorithm that can serve as the formation classifier 212 for the present methods is a classification and regression tree (CART). Other specific decision tree algorithms that can serve as the formation classifier 212 for the present methods include, but are not limited to, ID3, C4.5, MART, and Random Forests.
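As one concrete instance of the nearest neighbor option named above, the sketch below encodes a pre-snap formation as a sorted vector of player coordinates and classifies it with scikit-learn's KNeighborsClassifier. The feature encoding, labels, and toy training samples are assumptions made for illustration; the disclosure does not specify them.

```python
# A minimal nearest-neighbor formation classifier sketch (assumes scikit-learn).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def formation_features(positions):
    """Encode a pre-snap formation as a fixed-length vector: the (x, y) of
    11 players, sorted by x then y so the ordering is stable."""
    return np.asarray(sorted(positions)).ravel()  # shape (22,)

# Training rows would come from labeled telemetry in the historical data
# store 214; two toy samples stand in here.
X_train = np.stack([
    formation_features([(float(x), 0.0) for x in range(11)]),
    formation_features([(float(x), 5.0) for x in range(11)]),
])
y_train = ["I-formation", "Shotgun"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
query = formation_features([(float(x), 4.5) for x in range(11)])
print(clf.predict([query])[0])  # -> "Shotgun"
```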

In some embodiments, the historical data store 214 stores statistics related to each sporting event 508, each team 510 within the sports league, and the corresponding athletes 512. As previously described, in some embodiments, the data stored in the historical data store 214 is used as a training data set for the machine learning engine 210 and/or the formation classifier 212. For example, in some embodiments, the data stored in the historical data store 214 is used as an initial data set at the start of a league, is inferred from other data sets similar to the league (e.g., using college football statistics if the athlete is a professional rookie), or is used to create data points if new statistics are being generated (e.g., a previously unknown statistic becomes relevant). Additionally, in some embodiments, data from previously played games is stored in the historical data store 214.

In some embodiments, the situation store 228 includes data stored in one or more databases of the machine learning engine 210 as a cache of information. This caching in the situation store 228 allows for fast querying and utilization of the data, rather than having to query each respective database. In some embodiments, the situation store 228 creates a new data cache for each respective game. However, the present disclosure is not limited thereto.

Fig. 6 is a block diagram illustrating an embodiment of an odds management system. In accordance with the present disclosure, the odds management system 600 stores and determines various odds. The odds management system 600 includes one or more processing units (CPUs) 674, a peripheral interface 670, a memory controller 688, a network or other communication interface 684, memory 602 (e.g., random access memory), a user interface 678 including a display 682 and an input 680 (e.g., keyboard, keypad, touch screen, etc.), an input/output (I/O) subsystem 666, one or more communication buses 613 for interconnecting the above components, and a power supply system 676 for supplying power to the above components.

In some embodiments, input 680 is a touch-sensitive display, such as a touch-sensitive surface. In some embodiments, user interface 678 includes one or more soft keyboard embodiments. The soft keyboard embodiment may include a standard (QWERTY) and/or non-standard configuration of symbols on the displayed icons.

It should be appreciated that the odds management system 600 is merely one example of a system that may be used to store and determine various odds, and that the odds management system 600 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 6 are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.

The memory 602 optionally includes high-speed random access memory, and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to the memory 602 by other components of the odds management system 600, such as the CPU(s) 674, is optionally controlled by a memory controller 688.

Peripheral interface 670 may be used to couple input and output peripherals of the management system to CPU(s) 674 and memory 602. The one or more processors 674 run or execute various software programs and/or sets of instructions stored in the memory 602 to perform various functions of the odds management system 600 and process data.

In some embodiments, peripherals interface 670, CPU(s) 674, and memory controller 688 are optionally implemented on a single chip. In some other embodiments, they are optionally implemented on separate chips.

In some embodiments, the power supply system 676 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED), etc.), and any other components associated with the generation, management, and distribution of power in the portable device.

As illustrated in fig. 6, the memory 602 of the odds management system 600 preferably stores the following:

the operating system 604 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components;

a modeling engine 200 for storing one or more predictive or outcome models, the modeling engine including:

an expected points model module 222 for determining expected point values for plays in a game,

a win probability model module 224 for determining the likelihood of winning a game, and

an athlete-based win above replacement model module 226 for making such determinations;

a real-time game situation module 614 for receiving and communicating information relating to a game currently in progress; and

an odds management module 616 for facilitating the management of various odds and wagering systems.

As previously described, the modeling engine 200 includes various algorithms and models for generating statistics and predicting outcomes at a sporting event. In some embodiments, these models include an expected points model 222 that provides a value for each play of the game. For example, if a drive that results in a touchdown includes plays consisting of a 5-yard rush, a 94-yard pass, and a 1-yard rush, the 94-yard pass played a much more important role in the drive even though the 1-yard rush resulted in the touchdown. Thus, in some embodiments, the 5-yard rush is assigned an expected point value of 0.5, the 94-yard pass is assigned an expected point value of 5.5, and the 1-yard rush is assigned an expected point value of 1, where a higher value indicates a more important or defining play of the drive. In some embodiments, the modeling engine 200 uses telemetry data collected in accordance with the present disclosure to predict the outcome of a game (e.g., win/loss, point spread, etc.), as disclosed with respect to the methods and features described in fig. 8.
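The per-play values in the example above can be framed as expected points added (EPA): the change in expected points between the game state before and after a play. The sketch below uses a deliberately crude stand-in for the trained expected points model 222; the formula and numbers are illustrative assumptions only.

```python
# Toy expected points model: EP rises as the offense nears the end zone.
def expected_points(yards_to_endzone, down, yards_to_go):
    return max(0.0, 6.5 - 0.065 * yards_to_endzone - 0.4 * (down - 1))

def epa(before, after):
    """Expected points added by one play, given
    (yards_to_endzone, down, yards_to_go) states before and after it."""
    return expected_points(*after) - expected_points(*before)

# The 94-yard pass swings expected points far more than the 1-yard
# touchdown rush that follows it.
print(round(epa((99, 1, 10), (5, 1, 5)), 2))  # large value (~6.1)
print(round(epa((5, 1, 5), (0, 1, 0)), 2))    # small value (~0.3)
```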

In some embodiments, the real-time game situation module 614 receives information about situations occurring in a game. This information is then used to adjust the various weights and values in the models described above. For example, if a quarterback sprains his ankle and every play must be run from the shotgun formation, this lack of mobility of the quarterback will be reflected in the game models 220 via the real-time game situation module 614.

Fig. 7 is a block diagram illustrating an embodiment of a user device. In accordance with the present disclosure, the user device is a remote user device 700 associated with an end user. The user device 700 includes one or more processing units (CPUs) 774, a peripheral interface 770, a memory controller 788, a network or other communication interface 784, memory 702 (e.g., random access memory), a user interface 778 including a display 782 and inputs 780 (e.g., keyboard, keypad, touch screen, etc.), an input/output (I/O) subsystem 766, an optional accelerometer 717, an optional GPS 719, optional audio circuitry 772, an optional speaker 760, an optional microphone 762, one or more optional sensors 764 such as sensors for detecting the intensity of contacts on the user device 700 (e.g., a touch-sensitive surface such as a touch-sensitive display system of the device 700) and/or optical sensors, one or more communication buses 713 for interconnecting the above components, and a power supply system 776 for powering the above components.

In some embodiments, input 780 is a touch-sensitive display, such as a touch-sensitive surface. In some embodiments, user interface 778 includes one or more soft keyboard embodiments. The soft keyboard embodiment may include a standard (QWERTY) and/or non-standard configuration of symbols on the displayed icons.

It should be appreciated that the user device 700 is merely one example of a multifunction device that may be used by an end user, and that the user device 700 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of components. The various components shown in fig. 7 are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.

The memory 702 may optionally include high-speed random access memory, and may optionally also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 702 by other components of user device 700, such as CPU(s) 774, is optionally controlled by a memory controller 788.

Peripheral interface 770 may be used to couple input and output peripherals of the management system to CPU(s) 774 and memory 702. One or more processors 774 run or execute various software programs and/or sets of instructions stored in memory 702 to perform various functions and process data for user device 700.

In some embodiments, peripheral interface 770, CPU(s) 774 and memory controller 788 are optionally implemented on a single chip. In some other embodiments, they are optionally implemented on separate chips.

In some embodiments, audio circuit 772, speaker 760, and microphone 762 provide an audio interface between a user and device 700. Audio circuit 772 receives audio data from peripherals interface 770, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 760. Speaker 760 converts electrical signals into human-audible sound waves. The audio circuit 772 also receives electrical signals converted from sound waves by the microphone 762. The audio circuit 772 converts the electrical signals into audio data and transmits the audio data to the peripheral interface 770 for processing. Optionally, audio data is retrieved from memory 702 and/or radio frequency circuitry 784 and/or transferred to memory 702 and/or radio frequency circuitry 784 by peripheral interface 770.

In some embodiments, power supply system 776 optionally comprises: a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED), etc.), and any other components associated with the generation, management, and distribution of power in a portable device.

As illustrated in fig. 7, the memory 702 of the remote user device preferably stores the following:

an operating system 704 (e.g., ANDROID, iOS, DARWIN, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components;

an electronic address 706 used to identify a particular user device during communication with the various systems and devices of the present disclosure;

a user information store 708 that stores adjunct information related to respective users associated with the corresponding user devices 700, such as user access information including usernames, user passwords, access tokens, and the like;

a game feed module 710 for viewing various representations of the game, including a whiteboard feed module 712, an avatar feed module 714, and a video feed module 716, and for viewing various statistics related to the game; and

a wagering module 718 that facilitates wagering on game situations.

In some embodiments, the wagering module 718 uses telemetry data collected in accordance with the present disclosure, together with extended covariates, to predict the outcome of the current game (e.g., win/loss, point spread, etc.), as disclosed with respect to the methods and features described in fig. 8. In some embodiments, the wagering module 718 uses telemetry data collected in accordance with the present disclosure to provide odds for future game events in the current live game.

Now that the general topology of the system 48 has been described, a method for partitioning a video feed to segment live athlete activity will be described with reference to at least fig. 1-7.

Fig. 8 is a flow diagram illustrating an embodiment of a process of dividing a video feed to segment live athlete activity. This process may be implemented by the processor 100 in cooperation with the user device 700 and other devices of the system 48 described above.

At 802, process 800 receives a transmission of a central video feed from a first camera on a first loop basis. Referring to fig. 10A, for example, a central video feed 1000-A is received from the first camera 140. In some embodiments, the camera 140 is a fixed camera (e.g., the camera is restricted from moving in at least one axis). For example, in some embodiments, the camera 140 is fixed such that the camera can have variable tilt, pan, and/or zoom but cannot be physically moved to another location. The camera may be disposed in a variety of positions and orientations, such as at a first side of the playing field in a lateral direction (e.g., at the half-field line or 50-yard line) or at a second end of the field in a longitudinal direction (e.g., at an end zone or goal), among others. The camera 140 communicates with a network (e.g., the communication network 106) to communicate with one or more devices and systems of the present disclosure.

In some embodiments, the central video feed includes and/or is included in a plurality of central or other video feeds, each feed being generated by one or more cameras positioned and oriented to generate video of at least a portion of the playing field. In some embodiments, the central video feed and/or the other video feed may be generated, at least in part, by combining video data generated by multiple cameras, such as a composite or otherwise merged or combined video.

The central video feed is calibrated for a spatial region in at least two dimensions encompassed by the central video feed. In some embodiments, the spatial region is a region captured by the array of anchor devices 120. The spatial region may be a playing field of a live sporting event (e.g., playing field 902 of fig. 9).

In some embodiments, the calibration of the central video feed includes determining the portions of the central video feed that are equivalent to the coordinate system used by the location information (e.g., telemetry data 230). Since a standard playing field includes regular boundary lines of uniform length and thickness/width (e.g., out-of-bounds lines, half-field lines, yard lines, etc.), these lengths and thicknesses can be used to determine coordinate positions in the video feed. For example, if a line on the playing field is known to have a uniform thickness (e.g., a thickness of 6 centimeters) and the apparent thickness of the line in the central video feed is determined to decrease linearly from a first thickness to a second thickness, the exact position of a subject relative to the line can be determined in the central video feed.
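One common way to realize such a calibration, assuming the playing field is planar, is to fit a homography between known field landmarks and their pixel locations in a frame of the central video feed. The sketch below uses OpenCV; the landmark coordinates are hypothetical, and this approach is one possibility rather than necessarily the claimed method.

```python
# Calibration sketch: fit a field-to-pixel homography from four landmarks.
import numpy as np
import cv2

# Known field coordinates in meters (planar, z = 0), e.g., yard-line corners...
field_pts = np.array([[0, 0], [0, 48.8], [45.7, 0], [45.7, 48.8]], dtype=np.float32)
# ...and the pixel locations where those landmarks appear in one frame.
pixel_pts = np.array([[412, 1980], [380, 310], [7200, 1930], [7260, 355]], dtype=np.float32)

H, _ = cv2.findHomography(field_pts, pixel_pts)

def field_to_pixel(x, y):
    """Map a field coordinate to its pixel location in the central video feed."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

print(field_to_pixel(22.85, 24.4))  # pixel location of a mid-field point
```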

At 804, the process receives, on a second loop basis, respective time-stamped location information from each of a plurality of tracking devices, wherein each of the plurality of tracking devices is (a) worn by a corresponding subject of a plurality of subjects participating in a race on a spatial region, and (b) transmits location information describing a time-stamped location of the corresponding subject in the spatial region.

A respective transmission of time-stamped location information (e.g., telemetry data 230) is received from each tracking device 300 of the plurality of tracking devices. The recurring basis on which the transmissions of the time-stamped location information are received may be the ping rate (e.g., the instantaneous ping rate 310 of fig. 3) of the individual tracking devices 300. In some embodiments, the transmission of the time-stamped location information from each of the plurality of tracking devices occurs at a bandwidth greater than 500 MHz or at a fractional bandwidth equal to or greater than 0.20. As non-limiting examples, the transmission of the time-stamped location information from each of the plurality of tracking devices is within 3.4 GHz to 10.6 GHz, and each of the plurality of tracking devices 300 has a signal refresh rate between 1 Hz and 60 Hz and/or a recurring basis between 1 Hz and 60 Hz. Each tracking device 300 of the plurality of tracking devices transmits a unique signal, received at the receiving step, that identifies the corresponding tracking device. If biometric data is collected, each tracking device may transmit biometric data (e.g., biometric telemetry 236) specific to the respective subject associated with the respective tracking device.

Each tracking device 300 is worn by a corresponding subject of a plurality of subjects participating in a race on a spatial area. In addition, each tracking device 300 transmits location information (e.g., telemetry data 230) describing a time-stamped location of the corresponding subject in the spatial region. In some embodiments, each of the plurality of subjects wears at least two tracking devices 300. Each additional tracking device 300 associated with a corresponding subject reduces the amount of error in predicting the actual position of the subject.

In some embodiments, the plurality of subjects includes a first team (e.g., a home team) and a second team (e.g., an away team). In some embodiments, the first team and/or the second team are included in a sports league (e.g., a football league, a basketball association, etc.). The first team includes a first plurality of athletes (e.g., a first roster of players), and the second team includes a second plurality of athletes (e.g., a second roster of players). Throughout the various embodiments of the present disclosure, the first team and the second team participate in a competitive game (e.g., a live sporting event), such as a football game or a basketball game. The spatial region is accordingly the playing field of the competitive game, such as a football field or a basketball court. In some embodiments, the subjects of the present disclosure are the athletes, coaches, or referees associated with the current game, or combinations thereof.

In some embodiments, each of the plurality of discrete time-stamped positions for a respective athlete in the first or second plurality of athletes includes x, y, z coordinates of the respective athlete relative to the spatial region. For example, in some embodiments, the spatial region is mapped such that a central portion of the spatial region (e.g., half field, the 50-yard line, etc.) is the origin of the axes and a boundary region of the spatial region (e.g., out of bounds) is a maximum or minimum coordinate of the axes. In some embodiments, the x, y, z coordinates have an accuracy of ±5 centimeters, ±7.5 centimeters, ±10 centimeters, ±12.5 centimeters, ±15 centimeters, or ±17.5 centimeters.

At 806, the process defines a first sub-view of the central video feed using the received time-stamped location information and the calibration of the central video feed. The first sub-view is associated with a first subject included in the plurality of subjects, and the first sub-view includes, for each of a plurality of frames comprising the central video feed, a corresponding sub-frame associated with the first subject.

For example, in some embodiments, at 806 the process applies to each of a plurality of consecutive frames of video data a mathematical transformation, based at least in part on the corresponding camera/video calibration data, to determine the subset or portion of each sequential frame that is associated with the corresponding location information of subject A, based on the timestamp data included in the received location information and the location (e.g., the XYZ coordinates of subject A) associated with each timestamp. The determined subsets/portions of the sequential frames are used to provide a sub-view of the central video feed associated with subject A.
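A minimal sketch of that per-frame transformation, under the assumption that the subject's location has already been projected to pixel coordinates: crop a fixed-fraction sub-frame centered on the subject and clamped to the frame boundary. The frame size and the 50% fractions are illustrative.

```python
# Sub-frame extraction sketch: crop a subject-centered window from a frame.
import numpy as np

def subframe(frame, center_px, frac=(0.5, 0.5)):
    """frame: H x W x 3 array; center_px: (u, v) pixel location of the subject."""
    h, w = frame.shape[:2]
    sub_w, sub_h = int(w * frac[0]), int(h * frac[1])
    u, v = center_px
    left = int(min(max(u - sub_w / 2, 0), w - sub_w))  # clamp inside the frame
    top = int(min(max(v - sub_h / 2, 0), h - sub_h))
    return frame[top:top + sub_h, left:left + sub_w]

frame = np.zeros((4320, 7680, 3), dtype=np.uint8)  # one 8K central-feed frame
crop = subframe(frame, (5100, 2000))               # subject-centered 3840x2160 view
print(crop.shape)                                  # (2160, 3840, 3)
```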

In various embodiments, the sub-views have a different resolution than the central video feed. Despite the different resolutions, the quality difference is not necessarily noticeable to the average viewer, so the viewing experience remains enjoyable. For example, the central video feed is provided at a first resolution (e.g., the native resolution of the camera 140), such as between 2K and 8K. To this end, in some embodiments, the central video feed includes a plurality of full two-dimensional frames (e.g., a first frame associated with a first point in time, a second frame associated with a second point in time, ..., an Nth frame associated with an Nth point in time). Each respective full two-dimensional frame of the plurality of full two-dimensional frames has a first dimension size and a second dimension size (e.g., a horizontal size and a vertical size, such as a number of horizontal pixels and a number of vertical pixels). For each respective full two-dimensional frame of the plurality of full two-dimensional frames, the first sub-view includes a corresponding sub-frame. Each corresponding sub-frame is a portion of the corresponding full frame (e.g., sub-view 1000-B and sub-view 1000-C of figs. 10B and 10C, which illustrate, respectively, instantaneous sub-frames of the full frame 1000-A of the central video feed of fig. 10A).

In some embodiments, each sub-frame has a third dimension size and a fourth dimension size. Further, the third dimension size may be a fixed fraction of the first dimension size, and the fourth dimension size may be a fixed fraction of the second dimension size. For example, the fixed fraction of the first dimension size and the fixed fraction of the second dimension size may be the same fraction (e.g., 10%, 20%, 30%, ..., 90%). Alternatively, the fixed fraction of the first dimension size may be a first fraction, and the fixed fraction of the second dimension size may be a second fraction different from the first fraction (e.g., where the central video feed is captured in a landscape orientation and each sub-view is cropped in a portrait orientation). As non-limiting examples, (i) the first dimension size is 7680 pixels and the third dimension size is 3840 pixels, and the second dimension size is 4320 pixels and the fourth dimension size is 2160 pixels; or (ii) the first dimension size is 8192 pixels and the third dimension size is 3840 pixels, and the second dimension size is 4320 pixels and the fourth dimension size is 2160 pixels. In some embodiments, each respective full two-dimensional frame of the plurality of full two-dimensional frames includes at least 10 megapixels to 40 megapixels (e.g., 10 megapixels, 15 megapixels, 20 megapixels, ..., 40 megapixels).

In some embodiments, a sub-view (e.g., the first sub-view) includes, for each respective full two-dimensional frame of the plurality of full two-dimensional frames, a corresponding sub-frame that includes 5 megapixels to 15 megapixels (e.g., 5 megapixels, 7.5 megapixels, 10 megapixels, ..., 15 megapixels).

The received time-stamped location information is overlaid onto the central video feed. This overlaying is done using the calibration of the central video feed for the spatial region. For example, if the central video feed is calibrated for the spatial region using the same coordinate system as the location information, the received location information can be mapped onto the central video feed using that coordinate system. The overlaying determines the position, in the central video feed, of each subject of at least a subset of the plurality of subjects. For example, in some embodiments, the location information 230 provides at least the X and Y coordinates of a subject on the spatial region (e.g., the playing field 902 of fig. 9) at the corresponding timestamp. Since the central video feed has been calibrated using the same coordinate system as the location information, the X and Y coordinates of the subject can be equated to a position in the video feed. This allows a subject to be tracked in the central video feed using the location information, rather than using the optical characteristics of the subject (e.g., the subject's color, the subject's contour, etc.).
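Under the same planar-homography assumption as the calibration sketch above, the overlaying step amounts to mapping each subject's field coordinates at a timestamp into pixel coordinates of the central video feed. The sketch below is illustrative; the identity matrix is a placeholder for a real calibrated homography.

```python
# Overlay sketch: project per-subject field coordinates into the frame.
import numpy as np

def overlay_positions(H, telemetry):
    """H: 3x3 field-to-pixel homography; telemetry: dict id -> (x, y) in meters.
    Returns dict id -> (u, v) pixel coordinates in the central video feed."""
    out = {}
    for subject_id, (x, y) in telemetry.items():
        u, v, w = H @ np.array([x, y, 1.0])
        out[subject_id] = (u / w, v / w)
    return out

H = np.eye(3)  # placeholder; in practice, the calibrated homography
print(overlay_positions(H, {"WR-88": (40.3, 8.2)}))
```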

In some embodiments, the time-stamped location information includes a world clock timestamp, a game clock timestamp, or a combination thereof (e.g., the world and game clocks 242 of fig. 2). The overlaying uses one or more of these timestamps to overlay the time-stamped location information onto the central video feed. For example, if the time-stamped location information incurs a first period of delay before it is received by the system 48 and the central video feed incurs a second period of delay, the timestamps associated with the central video feed and the location information are compared to ensure that the overlay is accurate and precise.
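Reconciling the two delays comes down to matching each video frame to the telemetry sample nearest in time. A minimal sketch, assuming both streams carry comparable world-clock timestamps:

```python
# Timestamp-alignment sketch: find the telemetry sample nearest a frame time.
from bisect import bisect_left

def nearest_sample(telemetry_ts, frame_ts):
    """telemetry_ts: sorted telemetry timestamps; returns the index of the
    sample closest in time to frame_ts."""
    i = bisect_left(telemetry_ts, frame_ts)
    if i == 0:
        return 0
    if i == len(telemetry_ts):
        return len(telemetry_ts) - 1
    # Choose whichever neighbor is closer to the frame timestamp.
    return i if telemetry_ts[i] - frame_ts < frame_ts - telemetry_ts[i - 1] else i - 1

ts = [100.00, 100.10, 100.20, 100.30]  # 10 Hz telemetry (epoch seconds)
print(nearest_sample(ts, 100.17))      # -> 2 (the 100.20 sample)
```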

At 808, the process causes the first sub-view to be transmitted to a device configured to display the first sub-view. As described above, the first sub-view of the central video feed (e.g., sub-view 1000-B of fig. 10B) may be defined at a second resolution that is less than the first resolution. For example, the first resolution is at least four, six, or eight times the pixel resolution of the second resolution of the video partitioned from the central video feed.

The center coordinates of the first sub-view within the central video feed change over time without human intervention, the change being in accordance with the change over time in the position of the first subject as determined from the recurring instances of the receiving and overlaying that occur on the second recurring basis. In some embodiments, the center of the first sub-view is associated with location coordinates (e.g., XYZ) generated by a tracking device worn by or otherwise associated with the subject. In some embodiments, the subject may wear multiple tracking devices, and the first sub-view is centered on a set of coordinates generated from the tracking data of those multiple devices. For example, device data from the multiple tracking devices worn by the subject may be correlated, e.g., based on the time-stamped data, and a set of geometric or other center coordinates may be calculated from the coordinates generated by the respective tracking devices.
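A sketch of the multi-device centering just described: samples from all devices worn by one subject are correlated by timestamp and averaged into a single center coordinate. Taking the mean point is an illustrative choice; other center definitions are possible.

```python
# Multi-device centering sketch: one center coordinate per timestamp.
def subject_center(samples):
    """samples: list of (timestamp, x, y) from all devices on one subject.
    Returns dict timestamp -> (mean x, mean y)."""
    grouped = {}
    for ts, x, y in samples:
        grouped.setdefault(ts, []).append((x, y))
    return {
        ts: (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        for ts, pts in grouped.items()
    }

left = (100.0, 24.0, 26.0)   # left-shoulder device sample
right = (100.0, 24.6, 26.2)  # right-shoulder device sample
print(subject_center([left, right]))  # {100.0: (24.3, 26.1)}
```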

In some embodiments, the first sub-view of the central video feed is transmitted to a remote device (e.g., user device 700 of fig. 7) independently of the central video feed. The communication thus causes the remote device to display the first sub-view of the central video feed. By way of non-limiting example, the remote device is a handheld device such as a smartphone or tablet, a game console, a stationary computer system such as a personal home computer, or the like. Further, the communication may occur wirelessly (e.g., over network 106).

In various embodiments, at least a first subject in the subset of subjects is selected (e.g., selecting sub-view 1000-C of FIG. 10C). The selection of at least the first subject may be made via the computer system 48, for example by an operator of the computer system 48 (e.g., a video production expert, producer, director, etc.), by an end user of each respective remote device (e.g., via the respective user device 700), or automatically. For example, a first subject is automatically selected based at least in part on proximity (within a predetermined distance) to the ball or to another subject (e.g., a previously selected subject with which the subject is associated, such as in a one-on-one matchup or by association with a complementary position, such as opposing offensive and defensive linemen). Further, a sub-view may be selected from a broader set of sub-views (e.g., a list of available sub-views, previews of available sub-views, etc.). The broader set of sub-views may include a sub-view for each player active in the competitive game (e.g., twenty-two sub-views for an American football game). Such end-user selection allows each user to select one or more subjects according to the user's preferences. For example, if an end user has a list of favorite subjects distributed across multiple teams, the end user can view a sub-view of each of these favorite subjects on a single remote device and/or display.

In some embodiments, an identity of the first subject is received at the remote device. For example, the first sub-view includes information related to the identity of the first subject (e.g., the name of the first subject). This identification of the respective subjects allows an end user to quickly distinguish different sub-views when viewing more than one sub-view. In some embodiments, a tracking device 300 is attached to (e.g., embedded in) the ball used in the game played on the spatial region. Thus, without human intervention, the identity of the first subject can be determined from a determination of which subject of the plurality of subjects is currently closest to the ball, using the respective transmissions of time-stamped location information from each tracking device 300.

In various embodiments, one or more steps of process 800 occur during a live game in which the plurality of subjects participate. However, the present disclosure is not limited thereto. For example, the communication may occur after the live game (e.g., when viewing highlights of the live game or a replay of the live game).

Fig. 9 illustrates an example environment including a playing field and tracking components in accordance with an embodiment of the present disclosure. An exemplary environment 900 (e.g., stadium 906) is shown. The environment 900 includes a playing field 902 on which a game (e.g., a football game) is played. The environment 900 includes a region 904 that encompasses the playing field 902 and the zone immediately surrounding the playing field (e.g., a zone that includes subjects not participating in the game, such as subject 930-1 and subject 940-1). The environment 900 includes an array of anchor devices 120 (e.g., anchor device 120-1, anchor device 120-2, ..., anchor device 120-Q) that receive telemetry data from one or more tracking devices 300 associated with respective participating subjects. As illustrated in fig. 9, in some embodiments, the array of anchor devices communicates (e.g., via the communication network 106) with the telemetry parsing system 240 (e.g., the tracker management system 400 of fig. 4). Further, in some embodiments, one or more cameras 140 (e.g., camera 140-1) capture images and/or video of the sporting event that are used to form a virtual representation. In fig. 9, reference numeral 930 indicates a subject of the first team in the game, and reference numeral 940 indicates a subject of the second team in the game.

Fig. 10A shows an example of a central video feed in accordance with an embodiment of the present disclosure. An exemplary rendition 1000-A is illustrated. This rendition 1000-A covers some or all of the environment described above (e.g., environment 900 of fig. 9), but is illustrated from a different perspective (e.g., a bird's eye view, a wide-angle view). For example, in some embodiments, an end user of the remote device 700 can switch between one or more virtual renditions of the game, where each virtual rendition has a unique viewing perspective and/or a unique level of detail (e.g., a high-quality rendition that includes one or more selectable elements (such as the end zone 908) and a lower-quality rendition that omits one or more selectable elements).

Fig. 10B and 10C illustrate examples of a first sub-view and a second sub-view according to embodiments of the present disclosure. The techniques disclosed herein may be applied to a virtual scene or to video captured by a camera; figs. 10A-10C represent actual video frames and are not necessarily sub-views of a composite/virtual scene. In some embodiments, the selecting further comprises selecting a second subject in the subset of subjects, other than the first subject. The defining further includes defining a second sub-view of the central video feed (e.g., sub-view 1000-C of fig. 10C) at the second resolution. In some embodiments, the second sub-view is at a third resolution that is less than the first resolution and different from the second resolution.

The central video feed may be captured in one type of orientation (e.g., landscape) while the sub-views are displayed in one or more other orientations (e.g., all portrait, or some portrait). In this example, the central video feed (fig. 10A) is captured in landscape orientation and each sub-view (figs. 10B and 10C) is cropped in portrait orientation.

The center coordinates of the second sub-view within the central video feed change over time without human intervention, the change being in accordance with the change over time in the position of the second subject as determined from the recurring instances of the receiving and overlaying that occur on the second recurring basis. Accordingly, the communicating further transmits to the remote device a second sub-view of the central video feed that is independent of the central video feed. An end user of the remote device is thereby enabled to view a first sub-view dedicated to the first subject and a second sub-view dedicated to the second subject. The first and second subjects may participate in the same game or in different games (e.g., a first game and a second game). The first game and the second game may be played simultaneously, may both be historical games that have already been played, or the first game may be a current game and the second game a historical game. In addition, the first and second subjects may be on the same team or on different teams. Further, in some embodiments, the second sub-view may be defined but not transmitted to the remote device.

In some embodiments, the first and second subjects are at the same location in the spatial region at a first point in time (e.g., both players are in a scrum for the ball, both players are in a pile-up on the ball, etc.). Because the first and second subjects are at the same position in the spatial region at that point in time, the first sub-view overlaps the second sub-view at the first point in time. In addition, the first and second subjects are at different locations in the spatial region at a second point in time. This difference results in the first sub-view not overlapping the second sub-view at the second point in time. Because the present disclosure utilizes the time-stamped location information (e.g., telemetry data 230) to determine the location of a subject in the video feed, rather than analyzing optical characteristics of the images captured by camera 140, different subjects can be tracked independently and without interruption when they occupy the same location.

The techniques discussed herein may be applied to display more than two sub-views. For example, the plurality of sub-views may include three or more sub-views, such as a view for each player on one side of a football game (e.g., a view for each player on a user's fantasy football team), or on the order of 100 views. Each of the plurality of sub-views is centered on a different subject of the subset of subjects. In some embodiments, centering each sub-view on the respective subject includes assigning a tolerance between the position of the subject and the center of the sub-view. For example, if the tolerance of the location information is approximately 15 centimeters, the center of the sub-view does not change unless the location information indicates a change in position greater than or equal to 15 centimeters. Accordingly, jumps or jitter in the location information are not translated to the sub-view, ensuring that the sub-view provides a smooth (e.g., jitter-free) video stream.
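The 15-centimeter tolerance described above behaves like a deadband filter: the sub-view center is updated only when the subject has moved at least the tolerance, so telemetry jitter never reaches the rendered crop. A minimal sketch of this interpretation:

```python
# Deadband sketch: hold the sub-view center until motion exceeds a tolerance.
import math

class DeadbandCenter:
    def __init__(self, tolerance_m=0.15):
        self.tolerance = tolerance_m
        self.center = None

    def update(self, x, y):
        """Feed a new subject position; return the (possibly held) center."""
        if self.center is None or math.hypot(
            x - self.center[0], y - self.center[1]
        ) >= self.tolerance:
            self.center = (x, y)
        return self.center

db = DeadbandCenter()
print(db.update(24.00, 26.00))  # (24.0, 26.0): first fix sets the center
print(db.update(24.05, 26.05))  # held: ~7 cm of jitter is absorbed
print(db.update(24.30, 26.00))  # (24.3, 26.0): a 30 cm move passes the tolerance
```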

In some embodiments, the central video feed is not transmitted to the remote device. For example, the central video feed is stored in a database of the present disclosure. Similarly, each sub-view of each respective subject captured by the central video feed is stored in the database of the present disclosure. Such storage allows a collection of dedicated videos to be curated for each respective subject. The collection may then be utilized, for example, to view each play in which the respective subject participated over a period of time (e.g., a game, a season, a career, etc.).

In some embodiments, the plurality of subjects includes a first team and a second team. In some embodiments, the first and second teams belong to a sports league (e.g., a football league, a soccer league, a basketball association, etc.). The first team includes a first plurality of athletes, and the second team includes a second plurality of athletes. The first team and the second team participate in the present game during the receiving, overlaying, selecting, defining, and communicating. Selecting at least a first subject in the subset of subjects includes selecting each player on the first team who is actively participating in the present game in the spatial region. The center coordinates of the first sub-view within the central video feed change over time without human intervention, in accordance with the change over time in the position of the first subject, and in accordance with the change over time in the position of each other player on the first team who is actively participating in the present game in the spatial region, as determined from the recurring instances of the receiving and overlaying that occur on the second recurring basis.

In some embodiments, the selecting further comprises selecting a second subject in the subset of subjects, other than the first subject. The defining further includes defining a second sub-view of the central video feed at the second resolution. The center coordinates of the second sub-view within the central video feed change over time without human intervention, the change being in accordance with the change over time in the position of the second subject as determined from the recurring instances of the receiving and overlaying that occur on the second recurring basis. The communicating transmits to the remote device a second sub-view of the central video feed that is independent of the central video feed and of the first sub-view.

In some embodiments, the defining further includes defining a plurality of sub-views of the central video feed at the second resolution. The plurality of sub-views includes the first sub-view (e.g., sub-view 1000-B of FIG. 10B). Without human intervention, the center coordinates of each sub-view of the plurality of sub-views within the central video feed change over time in accordance with the change over time in the position of the corresponding subject, among the subset of subjects actively participating in the present game in the spatial region, as determined from the recurring instances of the receiving and overlaying that occur on the second recurring basis. The communicating transmits each of the plurality of sub-views to the remote device independently of the central video feed.

Thus, by the systems and methods of the present disclosure, one or more dedicated sub-views derived from a central video feed are transmitted to a remote device. Each respective dedicated sub-view is centered on a corresponding subject, which allows an end user to view video feeds exclusively dedicated to that corresponding subject (e.g., video feeds dedicated to the end user's favorite athletes). This enables the end user to view a selection of one or more dedicated sub-views of subjects according to the user's choices, such as the subjects included in the user's fantasy football team. For example, if the end user is an aspiring professional athlete, the end user may choose to view sub-views dedicated to subjects who play the same position as the end user, for use as training video. In addition, since the central video feed is a high-resolution video feed, each sub-view is partitioned without significant loss of image quality. This allows a single camera to produce any number of sub-views, which greatly reduces the number of cameras and operators required to capture a live sporting event.

While the present disclosure describes various systems and methods related to football games, those skilled in the art will appreciate that the present disclosure is not so limited. The techniques disclosed herein may find application in games having discrete or finite states in which a player or team has possession of the ball, as well as in other types of events. For example, in some embodiments, the systems and methods of the present disclosure are applied to events including a baseball game, a basketball game, a cricket match, a football game, a handball game, a hockey game (e.g., ice hockey or field hockey), a kickball game, a lacrosse game, a rugby match, a soccer game, a softball game, or a volleyball game, as well as auto racing, boxing, cycling, running, swimming, tennis, and the like, or any such event in which the location of a subject is correlated with the outcome of the event.

The present disclosure addresses a need in the art for improved systems and methods for delivering video content to a remote device. In particular, the present disclosure facilitates increasing audience engagement and interest in a live sporting event by partitioning a video feed to segment live player activity.

An expected points evaluation, a multinomial logistic regression, or another type of analysis may be used to estimate the probability of each next event that is a possible outcome of a given game situation. The next event is either a scoring event or a non-scoring event. Scoring events include a touchdown by the team in possession (7 points), a field goal by the team in possession (3 points), a safety by the team in possession (2 points), a safety by the opponent (-2 points), a field goal by the opponent (-3 points), and a touchdown by the opponent (-7 points). Non-scoring events (0 points) include events describing what the team in possession may attempt next. In one case, the team in possession may attempt to run the ball to the left, up the middle, or to the right on the next play. In another case, the team in possession may attempt to pass or rush the ball on the next play.
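In this framing, the expected points of a game situation is the probability-weighted sum of the next-score event values listed above. The probabilities below are illustrative placeholders for the output of a multinomial model, not figures from the disclosure.

```python
# Expected points as a probability-weighted sum over next-score events.
NEXT_SCORE_VALUES = {
    "td_for": 7, "fg_for": 3, "safety_for": 2,
    "safety_against": -2, "fg_against": -3, "td_against": -7,
    "no_score": 0,
}

def expected_points(probabilities):
    """probabilities: dict event -> probability over the events above, summing to 1."""
    assert abs(sum(probabilities.values()) - 1.0) < 1e-9
    return sum(NEXT_SCORE_VALUES[e] * p for e, p in probabilities.items())

probs = {"td_for": 0.35, "fg_for": 0.20, "safety_for": 0.01,
         "safety_against": 0.01, "fg_against": 0.06, "td_against": 0.12,
         "no_score": 0.25}
print(round(expected_points(probs), 2))  # -> 2.03
```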

Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
