Method, apparatus and system on chip for radar-enabled sensor fusion

Document No.: 1844945    Publication date: 2021-11-16

Note: This invention, "Method, apparatus and system on chip for radar-enabled sensor fusion", was created on 2016-10-06 by Nicholas Edward Gillian, Carsten C. Schwesig, Jaime Lien, Patrick M. Amihood, and Ivan Poupyrev. Abstract: Methods, apparatus, and systems on a chip for radar-enabled sensor fusion are disclosed. This document describes apparatus and techniques for radar-enabled sensor fusion. In some aspects, a radar field is provided and a reflected signal corresponding to a target in the radar field is received. The reflected signal is transformed to provide radar data, from which radar features indicative of physical characteristics of the target are extracted. Based on the radar features, a sensor is activated to provide supplemental sensor data associated with the physical characteristics. The radar features are then augmented with the supplemental sensor data, such as by increasing their accuracy or resolution. By doing so, the performance of sensor-based applications that rely on the enhanced radar features can be improved.

1. An apparatus, comprising:

at least one computer processor;

at least one radar sensor comprising:

at least one radar-emitting element configured to provide a radar field; and

at least one radar-receiving element configured to receive a radar-reflected signal resulting from reflection of the radar field from an object within the radar field;

a plurality of sensors, each of the sensors comprising a different type of sensor configured to sense a respective environmental change of the apparatus; and

at least one computer-readable storage medium having instructions stored thereon that, in response to execution by the computer processor, cause the apparatus to:

provide a radar field via the radar-emitting element;

receive, via the radar-receiving element, a reflected signal resulting from reflection of the radar field from an object within the radar field;

determine, based on the received reflected signal, at least one of: a radar detection feature, a radar reflection feature, a radar motion feature, a radar position feature, or a radar shape feature of the object;

select a first one of the sensors to provide supplemental sensor data based on the determined radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object being of a first type; or

select a second one of the sensors to provide the supplemental sensor data based on the determined radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object being of a second type;

receive the supplemental sensor data from the first sensor or the second sensor;

augment the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object with the supplemental sensor data to enhance that feature of the object; and

provide the enhanced radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object to a sensor-based application, effective to improve performance of the sensor-based application.

2. The apparatus of claim 1, wherein:

the instructions further cause the apparatus to determine an indication of a number of objects, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion based on the reflected signal; and

selecting the first sensor or the second sensor is further based on the indication.

3. The apparatus of claim 1 or 2, wherein:

the instructions further cause the apparatus to perform a range-Doppler transform, a range-image transform, a micro-Doppler transform, an I/Q transform, or a spectrogram transform on the reflected signal; and

the determination is further based on the performed transform.

4. The apparatus of claim 1 or 2, wherein the sensor-based application comprises proximity detection, user tracking, activity detection, facial recognition, respiration detection, or motion signature recognition.

5. The apparatus of claim 1 or 2, wherein the sensors comprise two or more of: an accelerometer, a gyroscope, a Hall effect sensor, a magnetometer, a temperature sensor, a microphone, a capacitive sensor, a proximity sensor, an ambient light sensor, a red-green-blue (RGB) image sensor, an infrared sensor, or a depth sensor.

6. The apparatus of claim 1 or 2, wherein augmenting the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object with the supplemental sensor data is effective to enhance that feature by:

increasing a positional accuracy of the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object;

mitigating a false positive detection of the radar detection feature, radar motion feature, radar position feature, or radar shape feature attributed to the object;

increasing a spatial resolution of the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object;

increasing a surface resolution of the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object; or

improving a classification accuracy of the radar detection feature, radar motion feature, radar position feature, or radar shape feature of the object.

7. The apparatus of claim 1 or 2, wherein:

the radar sensor consumes less power than the first sensor and the second sensor;

the first sensor is in a low power state until the first sensor is selected; and

the second sensor is in a low power state until the second sensor is selected.

8. The apparatus of claim 1 or 2, wherein the apparatus is implemented as a smartphone, smart glasses, smart watch, tablet computer, laptop computer, set-top box, smart appliance, home automation controller, or television.

9. The apparatus of claim 1 or 2, wherein:

the radar receiving element comprises a plurality of antennas; and

the instructions further cause the apparatus to receive the reflected signal via the plurality of antennas using a beamforming technique.

10. A method, comprising:

providing a radar field via at least one radar-emitting element of a device;

receiving, via at least one radar-receiving element of the device, a reflected signal resulting from reflection of the radar field from an object within the radar field;

extracting, from the reflected signal, a radar feature indicative of a physical characteristic of the object;

selecting a first sensor of a plurality of sensors of the device based on the radar feature being a first type of radar feature; or

selecting a second sensor of the plurality of sensors of the device based on the radar feature being a second type of radar feature, the second sensor being a different type of sensor than the first sensor;

receiving supplemental sensor data from the selected sensor, the supplemental sensor data describing an environmental change sensed by the selected sensor, the environmental change further indicative of the physical characteristic of the object;

augmenting the radar feature with the supplemental sensor data; and

providing the augmented radar feature to a sensor-based application.

11. The method of claim 10, wherein the first sensor is configured to detect a first change in an environment surrounding the object and the second sensor is configured to detect a second change in the environment surrounding the object.

12. The method of claim 10 or 11, wherein the radar features of the first or second type comprise surface features, motion features, location features, or detection features.

13. The method of claim 10 or 11, wherein the plurality of sensors of the device comprise at least two of: an accelerometer, a gyroscope, a Hall effect sensor, a magnetometer, a temperature sensor, a microphone, a capacitive sensor, a proximity sensor, an ambient light sensor, a red-green-blue (RGB) image sensor, an infrared sensor, or a depth sensor.

14. The method of claim 10 or 11, wherein the radar feature is indicative of a number of objects in the radar field, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion.

15. A computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform operations comprising:

causing at least one radar-emitting element to provide a radar field;

receiving, via at least one radar-receiving element, a reflected signal resulting from reflection of the radar field from an object within the radar field;

resolving the reflected signal into at least one of: a radar detection feature, a radar reflection feature, a radar motion feature, a radar location feature, or a radar shape feature of the object that results in the reflected signal;

selecting at least one sensor to provide supplemental sensor data for a radar detection feature, a radar reflection feature, a radar motion feature, a radar location feature, or a radar shape feature of the object based on the radar detection feature, the radar reflection feature, the radar motion feature, the radar location feature, or the radar shape feature of the object, the supplemental sensor data being indicative of at least one environmental change around the object;

receiving the supplemental sensor data from the selected sensor indicative of the environmental change;

augmenting a radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature of the object with the supplemental sensor data indicative of the environmental change to provide an enhanced radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature of the object; and

exposing the enhanced radar detection features, radar reflection features, radar motion features, radar location features, or radar shape features of the object to a sensor-based application to effectively improve performance of the sensor-based application.

16. The computer-readable storage medium of claim 15, wherein the operations further comprise:

causing the radar-emitting element to provide another radar field;

receiving, via the radar-receiving element, another reflected signal resulting from reflection of the other radar field from the object, or from another object, within the other radar field;

resolving the other reflected signal into at least one of: another radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature of the object or the other object that caused the other reflected signal;

based on the other radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature, selecting at least one other sensor of a different type than the sensor to provide other supplemental sensor data, the other supplemental sensor data being indicative of at least one other environmental change around the object or the other object that is different from the environmental change;

receiving, from the selected other sensor, the other supplemental sensor data indicative of the other environmental change;

augmenting the other radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature with the other supplemental sensor data to provide another enhanced radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature of the object or the other object; and

exposing the other enhanced radar detection feature, radar reflection feature, radar motion feature, radar location feature, or radar shape feature to the sensor-based application or to another sensor-based application, effective to improve performance of that sensor-based application.

17. The computer-readable storage medium of claim 15 or 16, wherein the operations further comprise: determining an indication of a number of objects, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion based on the reflected signal; and

the selecting is further based on the indication.

18. The computer-readable storage medium of claim 15 or 16, wherein:

the operations further comprise: performing a range-Doppler transform, a range-image transform, a micro-Doppler transform, an I/Q transform, or a spectrogram transform on the reflected signal; and

the resolving is further based on the performed transform.

19. The computer-readable storage medium of claim 15 or 16, wherein the sensor-based application comprises proximity detection, user tracking, activity detection, facial recognition, respiration detection, or motion signature recognition.

20. The computer-readable storage medium of claim 15 or 16, wherein augmenting the radar detection feature, radar motion feature, radar location feature, or radar shape feature of the object with the supplemental sensor data is effective to enhance that feature by:

increasing a positional accuracy of the radar detection feature, radar motion feature, radar location feature, or radar shape feature of the object;

mitigating a false positive detection of the radar detection feature, radar motion feature, radar location feature, or radar shape feature attributed to the object;

increasing a spatial resolution of the radar detection feature, radar motion feature, radar location feature, or radar shape feature of the object;

increasing a surface resolution of the radar detection feature, radar motion feature, radar location feature, or radar shape feature of the object; or

improving a classification accuracy of the radar detection feature, radar motion feature, radar location feature, or radar shape feature of the object.

21. An apparatus, comprising:

at least one computer processor;

at least one radar sensor comprising:

at least one radar-emitting element configured to provide a radar field; and

at least one radar-receiving element configured to receive a radar-reflected signal resulting from reflection of the radar field from an object within the radar field;

a plurality of sensors, each of the sensors comprising a different type of sensor configured to sense a different type of environmental change of the apparatus; and

at least one computer-readable storage medium having instructions stored thereon that, in response to execution by the computer processor, cause the apparatus to:

detect sensor data via a sensor of the plurality of sensors;

determine, based on the sensor data, the type of environmental change detected;

in response to determining the type of environmental change detected, provide a radar field via the radar-emitting element, the radar field configured based on the type of sensor that detected the environmental change and on the type of environmental change detected;

receive, via the radar-receiving element, a reflected signal resulting from reflection of the radar field from an object within the radar field;

determine, based on the received reflected signal, radar features of the object, the radar features including at least one of: a radar detection feature, a radar reflection feature, a radar motion feature, a radar position feature, or a radar shape feature of the object;

augment the sensor data with the radar features to enhance the sensor data; and

provide the enhanced sensor data to a sensor-based application, effective to improve performance of the sensor-based application.

22. The apparatus of claim 21, wherein:

the instructions further cause the apparatus to determine an indication of a number of targets, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion based on the reflected signal; and

augmenting the sensor data is based on the indication.

23. The apparatus of claim 21, wherein:

the instructions further cause the apparatus to perform a range-Doppler transform, a range-image transform, a micro-Doppler transform, an I/Q transform, or a spectrogram transform on the reflected signal based on the type of sensor or the type of environmental change detected; and

the determination of the radar features is further based on the performed transform.

24. The apparatus of claim 21, wherein the sensor-based application comprises proximity detection, user tracking, activity detection, facial recognition, breath detection, or motion signature recognition.

25. The apparatus of claim 21, wherein the sensors comprise two or more of: an accelerometer, a gyroscope, a Hall effect sensor, a magnetometer, a temperature sensor, a microphone, a capacitive sensor, a proximity sensor, an ambient light sensor, a red-green-blue (RGB) image sensor, an infrared sensor, or a depth sensor.

26. The apparatus of claim 21, wherein augmenting the sensor data with the radar features is effective to enhance the sensor data by:

increasing a positional accuracy of the sensor data;

mitigating a false positive detection attributed to the sensor data; or

improving a classification accuracy of the sensor data.

27. The apparatus of claim 21, wherein the apparatus is in a low power state until the sensor data is detected.

28. The apparatus of claim 21, wherein the apparatus is implemented as a smartphone, smart glasses, smart watch, tablet computer, laptop computer, set-top box, smart appliance, home automation controller, or television.

29. The apparatus of claim 21, wherein:

the radar receiving element comprises a plurality of antennas; and

the instructions further cause the apparatus to receive the reflected signal via the plurality of antennas using a beamforming technique.

30. A method, comprising:

monitoring a plurality of sensors for environmental changes, each of the sensors including a different type of sensor configured to sense a different type of environmental change;

detecting sensor data via one of the sensors;

determining, based on the sensor data, the type of environmental change detected;

in response to determining the type of environmental change detected, providing a radar field via at least one radar-emitting element of a device, the radar field configured based on the type of sensor that detected the environmental change and on the type of environmental change detected;

receiving, via at least one radar-receiving element of the device, a reflected signal resulting from reflection of the radar field from a target within the radar field;

determining, from the reflected signal, a radar feature indicative of a physical characteristic of the target;

augmenting the sensor data with the radar feature; and

providing the augmented sensor data to a sensor-based application.

31. The method of claim 30, wherein the different types of environmental changes comprise at least one of: voice of a user of the device, ambient noise, motion of the device, proximity of the user, temperature changes, or ambient light changes.

32. The method of claim 30, wherein the radar feature comprises a surface feature, a motion feature, a location feature, a reflection feature, a shape feature, or a detection feature.

33. The method of claim 30, wherein the sensor comprises an accelerometer, a gyroscope, a hall effect sensor, a magnetometer, a temperature sensor, a microphone, a capacitive sensor, a proximity sensor, an ambient light sensor, a red-green-blue (RGB) image sensor, an infrared sensor, or a depth sensor.

34. The method of claim 30, wherein the radar feature is indicative of a number of targets in the radar field, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion.

35. At least one computer-readable storage medium storing instructions that, when executed by a processing system, cause the processing system to:

monitor a plurality of sensors for environmental changes, each of the sensors comprising a different type of sensor configured to sense a different type of environmental change;

detect sensor data via one of the sensors;

determine, based on the sensor data, the type of environmental change detected;

in response to determining the type of environmental change detected, cause at least one radar-emitting element to provide a radar field that is configured based on the type of sensor that detected the environmental change and on the type of environmental change detected;

receive, via at least one radar-receiving element, a reflected signal resulting from reflection of the radar field from a target within the radar field;

resolve the reflected signal into supplemental radar data;

extract radar features of the target from the supplemental radar data, the radar features including at least one of: a radar detection feature, a radar reflection feature, a radar motion feature, a radar position feature, or a radar shape feature of the target;

augment the sensor data with the radar features to provide enhanced sensor data; and

expose the enhanced sensor data to a sensor-based application, effective to improve performance of the sensor-based application.

36. The computer-readable storage medium of claim 35, wherein the sensor comprises an accelerometer, a gyroscope, a Hall effect sensor, a magnetometer, a temperature sensor, a microphone, a capacitive sensor, a proximity sensor, an ambient light sensor, a red-green-blue (RGB) image sensor, an infrared sensor, or a depth sensor.

37. The computer-readable storage medium of claim 35, wherein the instructions further cause the processing system to determine, based on the supplemental radar data, an indication of a number of targets, total reflected energy, moving energy, one-dimensional velocity dispersion, three-dimensional spatial coordinates, or one-dimensional spatial dispersion; and

the augmenting is based on the indication.

38. The computer-readable storage medium of claim 35, wherein:

the instructions further cause the processing system to perform a range-Doppler transform, a range-image transform, a micro-Doppler transform, an I/Q transform, or a spectrogram transform on the reflected signal based on the type of environmental change detected; and

the resolving is based on the performed transform.

39. The computer-readable storage medium of claim 35, wherein the sensor-based application comprises proximity detection, user tracking, activity detection, facial recognition, respiration detection, or motion signature recognition.

40. The computer-readable storage medium of claim 35, wherein augmenting the sensor data with the radar features is effective to enhance the sensor data by:

increasing a positional accuracy of the sensor data;

mitigating a false positive detection attributed to the sensor data; or

improving a classification accuracy of the sensor data.

Technical Field

The present application relates to a method, an apparatus, and a system on chip for radar-enabled sensor fusion.

Background

Many computing and electronic devices include sensors intended to provide a seamless and intuitive user experience based on the device's environment. For example, a device may exit a sleep state in response to an accelerometer indicating device movement, or the device's touch screen may be disabled in response to a proximity sensor indicating proximity to the user's face. However, most of these sensors have limited accuracy, range, or functionality, and can sense only coarse or drastic changes around the device. Thus, without accurate sensor input, a device often has to infer the type of user interaction, or even whether a user is present, which can result in incorrect user input, missed or false detection of the user, and user frustration.

Examples of sensor inaccuracies in the above context include a device incorrectly exiting a sleep state in response to an accelerometer sensing non-user-related movement (e.g., a moving vehicle), and a touch screen being disabled in response to a user holding the device in a way that partially obstructs the proximity sensor. In such cases, the device's battery may be drained by inadvertent power state transitions, and touch input is interrupted until the user moves his or her hand. These are just some of the sensor inaccuracies that can disrupt a user's experience of interacting with a device.

Disclosure of Invention

This disclosure describes apparatus and techniques for radar-enabled sensor fusion. In some embodiments, a radar field is provided and a reflected signal corresponding to a target in the radar field is received. The reflected signal is transformed to provide radar data, from which radar features indicative of physical characteristics of the target are extracted. Based on the radar features, a sensor is activated to provide supplemental sensor data associated with the physical characteristics. The radar features are then augmented with the supplemental sensor data, such as by increasing an accuracy or resolution of the radar features. By doing so, the performance of sensor-based applications that rely on the enhanced radar features may be improved.
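As an illustration of the flow just described (extract a radar feature, select a matching sensor, then augment the feature with that sensor's data), consider the following minimal sketch. Every name in it (RadarFeature, select_sensor, the sensor mapping, the blending weights) is a hypothetical assumption for illustration, not part of the disclosed apparatus:

```python
# Hedged sketch of the radar-enabled sensor fusion flow described above.
# All names and numeric weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RadarFeature:
    kind: str          # e.g. "motion", "position", "shape", "detection"
    value: float       # coarse estimate derived from radar data
    confidence: float  # 0..1, raised when supplemental data agrees

# Map each radar feature type to a sensor suited to refine it.
SENSOR_FOR_FEATURE = {
    "motion": "accelerometer",
    "position": "depth_sensor",
    "shape": "rgb_camera",
    "detection": "proximity_sensor",
}

def select_sensor(feature: RadarFeature) -> str:
    """Pick the supplemental sensor based on the feature's type."""
    return SENSOR_FOR_FEATURE[feature.kind]

def augment(feature: RadarFeature, supplemental: float) -> RadarFeature:
    """Blend supplemental sensor data into the radar feature,
    increasing its accuracy and confidence."""
    fused_value = 0.5 * (feature.value + supplemental)
    return RadarFeature(feature.kind, fused_value,
                        min(1.0, feature.confidence + 0.3))

# Example: a coarse radar motion feature refined with accelerometer data.
radar_feature = RadarFeature("motion", value=1.2, confidence=0.5)
sensor = select_sensor(radar_feature)            # -> "accelerometer"
augmented = augment(radar_feature, supplemental=1.0)
```

The enhanced feature (here with a blended value and higher confidence) is what would be handed to a sensor-based application such as motion tracking.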

In other aspects, radar sensors of the device are activated to obtain radar data of the space of interest. Three-dimensional (3D) radar features are extracted from the radar data and position data is received from the sensors. Based on the location data, spatial relationships of the 3D radar features are determined to generate a set of 3D landmarks of the space. The set of 3D landmarks is compared to known 3D context models to identify a 3D context model that matches the 3D landmark. Based on the matching 3D context model, a context of the space is retrieved and used to configure context settings of the device.
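The landmark-matching step described above can be sketched as follows; the model names, landmark sets, similarity metric (Jaccard), and context settings are invented for illustration and are not part of the disclosure:

```python
# Illustrative sketch of matching observed 3D radar landmarks against
# known context models; all models and settings here are hypothetical.

def match_context(landmarks: set, known_models: dict) -> str:
    """Return the known context model sharing the most landmarks with
    the observed set, scored by Jaccard similarity."""
    def score(model_landmarks):
        union = len(landmarks | model_landmarks)
        return len(landmarks & model_landmarks) / union if union else 0.0
    return max(known_models, key=lambda name: score(known_models[name]))

KNOWN_MODELS = {
    "living_room": {"sofa", "tv", "coffee_table"},
    "office": {"desk", "monitor", "chair"},
}

# Context settings applied once a model is matched.
CONTEXT_SETTINGS = {
    "living_room": {"display": "ambient", "volume": "high"},
    "office": {"display": "productivity", "volume": "muted"},
}

observed = {"desk", "chair", "lamp"}   # landmarks from radar + position data
context = match_context(observed, KNOWN_MODELS)   # -> "office"
settings = CONTEXT_SETTINGS[context]
```

Here the observed set shares two landmarks with the "office" model and none with "living_room", so the office context and its settings are selected.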

This summary is provided to introduce simplified concepts related to radar-enabled sensor fusion, which is further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.

Drawings

Embodiments of radar-enabled sensor fusion are described with reference to the following figures. The same reference numbers are used throughout the drawings to reference like features and components:

FIG. 1 illustrates an example environment including a computing device with a radar sensor and an additional sensor.

FIG. 2 illustrates an example type and configuration of sensors shown in FIG. 1.

FIG. 3 illustrates an example embodiment of the radar sensor and corresponding radar field shown in FIG. 1.

Fig. 4 illustrates another example embodiment of the radar sensor shown in fig. 1 and a penetrating radar field.

Fig. 5 illustrates a configuration example of components capable of implementing radar-enabled sensor fusion.

FIG. 6 illustrates an example method of augmenting radar data with supplemental sensor data.

Fig. 7 illustrates an example of an implementation of motion tracking with enhanced radar features.

FIG. 8 illustrates an example method for low power sensor fusion in accordance with one or more embodiments.

Fig. 9 illustrates an example of low power sensor fusion implemented by a smart TV including a sensor fusion engine.

FIG. 10 illustrates an example method for validating radar features using complementary sensor data.

FIG. 11 illustrates an example method for generating a context model for a space of interest.

Fig. 12 illustrates an example of a context mapped room in accordance with one or more embodiments.

Fig. 13 illustrates an example method for configuring context settings based on a context associated with a space.

FIG. 14 illustrates an example method of changing a contextual setting in response to a change in context of a space.

Fig. 15 illustrates an example of changing a contextual setting of a computing device in response to a change in context.

FIG. 16 illustrates an example computing system that can implement techniques to support radar-enabled sensor fusion.

Detailed Description

SUMMARY

Conventional sensor technologies are often limited and inaccurate due to inherent weaknesses associated with a given type of sensor. For example, motion may be sensed through data provided by an accelerometer, yet the accelerometer data may not be useful for determining the source of the motion. In other cases, a proximity sensor may provide data sufficient to detect the proximity of an object, but the identity of the object may not be determinable from the proximity data. As a result, conventional sensors have weaknesses or blind spots that can result in inaccurate or incomplete sensing of a device's surroundings, including the device's relationship to the user.

Apparatus and techniques that enable radar-enabled sensor fusion are described herein. In some embodiments, the respective strengths of various sensors are combined with those of radar to mitigate the respective weaknesses of each. For example, surface radar features of a user's face may be combined with imagery from a red-green-blue (RGB) camera to improve the accuracy of a facial recognition application. In other cases, radar motion features, which can track fast motion, are combined with images from an RGB sensor, which excels at capturing spatial information, to enable an application that can detect fast spatial movement.
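As a rough illustration of such complementary fusion, suppose each modality yields a confidence score and the scores are blended so that one modality covers the other's weakness. The weights and scores below are hypothetical, not values from the disclosure:

```python
# Hedged sketch: blending a radar-derived score with an RGB-derived score.
# The weighting scheme and all numbers are illustrative assumptions.

def fuse_scores(radar_score: float, rgb_score: float,
                radar_weight: float = 0.4) -> float:
    """Weighted combination of a radar-derived score (robust to lighting
    and fast motion) and an RGB-derived score (rich spatial detail)."""
    return radar_weight * radar_score + (1 - radar_weight) * rgb_score

# Facial recognition example: RGB alone can be fooled by a photograph,
# but the radar surface feature reports no 3D facial surface, which
# pulls the fused score down for the photo.
live_face = fuse_scores(radar_score=0.9, rgb_score=0.8)   # ~0.84
photo     = fuse_scores(radar_score=0.1, rgb_score=0.8)   # ~0.52
```

A threshold between the two fused scores would then accept the live face while rejecting the photograph, which neither modality could do reliably alone.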

In still other cases, radar surface features may be augmented with orientation or directional information from an accelerometer to enable mapping of the device's environment (e.g., a room or space). In such cases, the device may learn or detect the context in which it operates, enabling various contextual features and settings of the device. These are just a few of the ways in which radar can be leveraged for sensor fusion or context sensing, and others are described herein. The following discussion first describes an operating environment, then techniques that may be employed in that environment, and ends with an example system.

Operating environment

FIG. 1 illustrates a computing device that may implement radar-enabled sensor fusion. The computing device 102 is illustrated with various non-limiting example devices: smart glasses 102-1, smart watch 102-2, smart phone 102-3, tablet computer 102-4, laptop computer 102-5, and gaming system 102-6, although other devices may also be used, such as home automation and control systems, entertainment systems, audio systems, other household appliances, security systems, netbooks, automobiles, smart appliances, and e-readers. Note that the computing device 102 may be wearable, non-wearable but mobile, or relatively immobile (e.g., desktop computers and appliances).

Computing device 102 includes one or more computer processors 104 and computer-readable media 106, which include memory media and storage media. An application and/or operating system (not shown) embodied as computer-readable instructions on computer-readable medium 106 may be executed by computer processor 104 to provide some of the functionality described herein. The computer-readable media 106 also includes a sensor-based application 108, a sensor fusion engine 110, and a context manager 112, described below.

The computing device 102 may also include one or more network interfaces 114 for communicating data over wired, wireless, or optical networks, and a display 116. The network interface 114 may transmit data over a Local Area Network (LAN), a Wireless Local Area Network (WLAN), a Personal Area Network (PAN), a Wide Area Network (WAN), an intranet, the internet, a peer-to-peer network, a mesh network, and so forth. The display 116 may be integrated with, or associated with, the computing device 102, such as with the gaming system 102-6.

The computing device 102 includes one or more sensors 118 that enable the computing device 102 to sense various properties, changes, stimuli, or characteristics of the environment in which the computing device 102 operates. For example, the sensors 118 may include various motion sensors, light sensors, acoustic sensors, and magnetic sensors. Alternatively or additionally, the sensors 118 support interaction with a user of the computing device 102 or receive input from a user of the computing device 102. The use and implementation of the sensor 118 may vary and is described below.

The computing device 102 may also be associated with or include a radar sensor 120. The radar sensor 120 represents functionality for wirelessly detecting targets through the transmission and reception of radio frequency (RF) or radar signals. The radar sensor 120 may be implemented as a system and/or radar-enabled component embedded within the computing device 102, such as a system-on-chip (SoC). However, it should be appreciated that the radar sensor 120 may be implemented in any other suitable manner, such as one or more integrated circuits (ICs), as a processor having embedded processor instructions or configured to access memory storing processor instructions, as hardware with embedded firmware, as a printed circuit board assembly with various hardware components, or any combination thereof. Here, the radar sensor 120 includes a radar-emitting element 122, antennas 124, and a digital signal processor 126, which may be used in concert to wirelessly detect various types of targets in the environment of the computing device 102.

In general, radar-emitting element 122 is configured to provide a radar field. In some cases, the radar field is configured to at least partially reflect off one or more target objects. In some cases, the target objects include a device user or other people present in the environment of the computing device 102. In other cases, the target objects include physical features of the user, such as hand motion, breathing rate, or other physiological features. The radar field may also be configured to penetrate fabric or other obstructions and reflect from human tissue. These fabrics or obstructions may include wood, glass, plastic, cotton, wool, nylon and similar fibers, and so forth, while the radar field reflects from human tissue, such as a person's hand.

The radar field provided by radar-emitting element 122 may be of a small size, such as zero or one millimeter to 1.5 meters, or a medium size, such as one to 30 meters. It should be appreciated that these sizes are for discussion purposes only, and that any other suitable size or range of radar field may be used. For example, when the radar field is of a medium size, the radar sensor 120 may be configured to receive and process reflections of the radar field to recognize large-body gestures based on reflections from human tissue caused by body, arm, or leg movement.

In some aspects, the radar field may be configured to enable the radar sensor 120 to detect smaller and more precise gestures, such as micro-gestures. Example medium-size radar fields include those in which a user makes a gesture to control a television from a couch, changes a song or volume from a stereo across a room, turns off an oven or oven timer (a near field would also be useful here), turns lights on or off in a room, and so forth. The radar sensor 120, or an emitter thereof, may be configured to emit continuously modulated radiation, ultra-wideband radiation, or sub-millimeter-frequency radiation.

The antenna 124 transmits and receives the RF signal of the radar sensor 120. In some cases, radar-emitting element 122 is coupled with antenna 124 to emit a radar field. As will be appreciated by those skilled in the art, this is achieved by converting electrical signals into electromagnetic waves for transmission, and vice versa for reception. Radar sensor 120 may include one antenna or an array of any suitable number of antennas in any suitable configuration. For example, any of the antennas 124 may be configured as a dipole antenna, a parabolic antenna, a helical antenna, a planar antenna, an inverted-F antenna, a monopole antenna, and so forth. In some embodiments, antenna 124 is constructed or formed on a chip (e.g., as part of a SoC), while in other embodiments, antenna 124 is a separate component, metal, dielectric, hardware, etc., attached to radar sensor 120 or included within radar sensor 120.

A first one of the antennas 124 may be single-use (e.g., the first antenna may be used to transmit signals and a second one of the antennas 124 may be used to receive signals), or multi-use (e.g., the antennas are used to transmit and receive signals). Thus, some embodiments utilize different combinations of antennas, such as embodiments utilizing two single-use antennas configured for transmission in combination with four single-use antennas configured for reception. As further described herein, the arrangement, size, and/or shape of the antennas 124 may be selected to enhance a particular transmission mode or diversity scheme, such as a mode or scheme designed to capture information about the environment.

In some cases, antennas 124 may be physically separated from one another by a distance that allows the radar sensor 120 to collectively transmit and receive signals directed to a target object over different channels, different radio frequencies, and different distances. In some cases, antennas 124 are spatially distributed to support triangulation techniques, while in other cases the antennas are collocated to support beamforming techniques. Although not shown, each antenna may correspond to a respective transceiver path that physically routes and manages outgoing signals for transmission and incoming signals for capture and analysis.
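As a rough illustration of the triangulation idea, two spatially separated receive antennas with independent range estimates can localize a target in a plane by intersecting their range circles. The antenna baseline and range values below are made-up numbers for illustration, not parameters of the described radar sensor 120.

```python
import math

# Hypothetical 2D localization from two per-antenna range estimates.
# Antennas sit at (0, 0) and (baseline, 0); the target position is the
# intersection of the two range circles with y >= 0.
def locate_2d(r1: float, r2: float, baseline: float) -> tuple[float, float]:
    x = (r1 ** 2 - r2 ** 2 + baseline ** 2) / (2 * baseline)
    y = math.sqrt(max(r1 ** 2 - x ** 2, 0.0))
    return x, y

# A target 1 m in front of the midpoint of a 1 m antenna baseline.
x, y = locate_2d(math.sqrt(1.25), math.sqrt(1.25), 1.0)
```

Equal ranges place the target on the perpendicular bisector of the baseline, at (0.5, 1.0) here.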

The digital signal processor 126 (DSP, or digital signal processing component) generally represents operations related to digitally capturing and processing signals. For example, the digital signal processor 126 samples analog RF signals received by the antenna 124 to generate radar data (e.g., digital samples) representative of the RF signals, which is then processed to extract information about the target object. In some cases, the digital signal processor 126 performs a transformation on the radar data to provide radar features that describe characteristics, location, or dynamics of the target. Alternatively or additionally, digital signal processor 126 controls the configuration of signals generated and transmitted by radar-emitting element 122 and/or antennas 124, such as configuring multiple signals to form a particular diversity or beamforming scheme.

In some cases, the digital signal processor 126 receives input configuration parameters that control transmission parameters (e.g., frequency channels, power levels, etc.) of the RF signals, such as through the sensor-based application 108, the sensor fusion engine 110, or the context manager 112. The digital signal processor 126 in turn modifies the RF signal according to the input configuration parameters. Sometimes, the signal processing functions of the digital signal processor 126 are included in a signal processing function or algorithm library that is also accessible and/or configurable via the sensor-based application 108 or Application Programming Interface (API). The digital signal processor 126 may be implemented in hardware, software, firmware, or any combination thereof.

Fig. 2 illustrates example types and configurations of sensors 118 that may be used to implement embodiments of radar-enabled sensor fusion, generally at 200. These sensors 118 enable the computing device 102 to sense various properties, changes, stimuli, or characteristics of the environment in which the computing device 102 operates. The data provided by the sensors 118 may be accessed by other entities of the computing device, such as the sensor fusion engine 110 or the context manager 112. Although not shown, the sensors 118 may also include a global positioning module, a micro-electro-mechanical system (MEMS), a resistive touch sensor, and the like. Alternatively or additionally, the sensors 118 can enable interaction with a user of the computing device 102 or receive input from a user of the computing device 102. In such a case, the sensors 118 may include piezoelectric sensors, touch sensors, or input sensing logic associated with hardware switches (e.g., a keyboard, snap-dome, or dial pad), among others.

In this particular example, sensors 118 include an accelerometer 202 and a gyroscope 204. These and other motion and position sensors, such as motion sensitive MEMS or Global Positioning Systems (GPS) (not shown), are configured to sense movement or orientation of the computing device 102. Accelerometer 202 or gyroscope 204 may sense movement or orientation of the device in any suitable aspect, such as in one dimension, two dimensions, three dimensions, multiple axes, combined multiple axes, and so forth. Alternatively or additionally, a location sensor such as GPS may indicate a travel distance, a travel speed, or an absolute or relative location of the computing device 102. In some embodiments, accelerometer 202 or gyroscope 204 enables computing device 102 to sense gestural input (e.g., a series of position and/or orientation changes) made when a user moves computing device 102 in a particular manner.

The computing device 102 also includes a hall effect sensor 206 and a magnetometer 208. Although not shown, the computing device 102 may also include a magnetic diode, a magnetic transistor, a magneto-sensitive MEMS, or the like. These magnetic field-based sensors are configured to sense magnetic field characteristics around the computing device 102. For example, the magnetometer 208 may sense changes in magnetic field strength, magnetic field direction, or magnetic field orientation. In some embodiments, the computing device 102 determines a proximity to the user or another device based on input received from the magnetic field-based sensor.

The temperature sensor 210 of the computing device 102 may sense a temperature of a housing of the device or an ambient temperature of an environment of the device. Although not shown, the temperature sensor 210 may also be implemented in conjunction with a humidity sensor that supports determining a humidity level. In some cases, the temperature sensor may sense a body temperature of a user holding, wearing, or carrying the computing device 102. Alternatively or additionally, the computing device may include an infrared thermal sensor that may sense temperature remotely or without physical contact with the object of interest.

The computing device 102 also includes one or more acoustic sensors 212. The acoustic sensor may be implemented as a microphone or as an acoustic wave sensor configured to monitor the sound of the environment in which the computing device 102 operates. The acoustic sensor 212 is capable of receiving a user's voice input, which may then be processed by a DSP or processor of the computing device 102. The sound captured by the acoustic sensor 212 may be analyzed or measured for any suitable component, such as pitch, timbre, harmonics, loudness, cadence, envelope characteristics (e.g., attack, duration, decay), and so forth. In some embodiments, the computing device 102 identifies or distinguishes users based on data received from the acoustic sensors 212.

The capacitive sensor 214 enables the computing device 102 to sense changes in capacitance. In some cases, the capacitive sensor 214 is configured as a touch sensor that can receive touch input or determine proximity to a user. In other cases, the capacitive sensor 214 is configured to sense a property of a material proximate to a housing of the computing device 102. For example, the capacitive sensor 214 may provide data indicating the proximity of the device to a surface (e.g., a table or desk), the user's body, or the user's clothing (e.g., a clothing pocket or sleeve). Alternatively or additionally, the capacitive sensor may be configured as a touch screen or other input sensor of the computing device 102 through which touch input is received.

The computing device 102 may also include a proximity sensor 216 that senses proximity to an object. The proximity sensor may be implemented with any suitable type of sensor, such as a capacitive or Infrared (IR) sensor. In some cases, the proximity sensor is configured as a short-range IR emitter and receiver. In such cases, the proximity sensor may be located within a housing or screen of the computing device 102 to detect proximity to the user's face or hand. For example, the proximity sensor 216 of the smartphone may enable detection of the user's face, such as during a voice call, in order to disable the smartphone's touchscreen to prevent receipt of inadvertent user input.

The ambient light sensor 218 of the computing device 102 may include a photodiode or other optical sensor configured to sense intensity, quality, or change in light of the environment. The light sensor can sense ambient light or directional light, which can then be processed by the computing device 102 (e.g., via a DSP) to determine various aspects of the device environment. For example, a change in ambient light may indicate that the user has picked up the computing device 102 or removed the computing device 102 from his or her pocket.

In this example, the computing device also includes a red-green-blue sensor 220 (RGB sensor 220) and an infrared sensor 222. The RGB sensor 220 may be implemented as a camera sensor configured to capture imagery in the form of images or video. In some cases, the RGB sensor 220 is associated with a light-emitting diode (LED) flash used to increase the luminosity of imagery in low-light environments. In at least some embodiments, the RGB sensor 220 may be implemented to capture imagery associated with a user, such as the user's face or other physical features that enable identification of the user.

The infrared sensor 222 is configured to capture data in the infrared spectrum and may be configured to sense thermal changes or configured as an Infrared (IR) camera. For example, infrared sensor 222 may be configured to sense thermal data associated with a user or other persons in the device environment. Alternatively or additionally, an infrared sensor may be associated with the IR LED and configured to sense proximity or distance to an object.

In some embodiments, the computing device includes a depth sensor 224, which depth sensor 224 may be implemented in conjunction with the RGB sensor 220 to provide RGB-enhanced depth information. The depth sensor 224 may be implemented as a single module or as separate components, such as an IR emitter, an IR camera, and a depth processor. When implemented separately, the IR emitter emits IR light that is received by the IR camera, which provides IR imagery data to the depth processor. Based on known variables, such as the speed of light, the depth processor of the depth sensor 224 may resolve a distance to a target (e.g., a time-of-flight camera). Alternatively or additionally, the depth sensor 224 may resolve a three-dimensional depth map of a surface of an object or of the environment of the computing device.
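The time-of-flight relationship the depth processor relies on can be written down directly: distance is half the round-trip travel time multiplied by the speed of light. A minimal sketch (the nanosecond figure is just an example value):

```python
# Time-of-flight distance: half the round trip at the speed of light.
SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def time_of_flight_distance(round_trip_seconds: float) -> float:
    """Distance to the target implied by a pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds implies a target about 1 m away.
distance_m = time_of_flight_distance(6.67e-9)
```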

From a power-consumption perspective, each of the sensors 118 may consume a different amount of power while operating. For example, the magnetometer 208 or the acoustic sensor 212 may consume tens of milliamps to operate, while the RGB sensor 220, the infrared sensor 222, or the depth sensor 224 may consume hundreds of milliamps to operate. In some embodiments, the power consumption of one or more of the sensors 118 is known or predefined, such that lower-power sensors may be activated in lieu of other sensors to obtain particular types of data while conserving power. In many cases, the radar sensor 120 may operate, continuously or intermittently, to obtain various data while consuming less power than the sensors 118. In such cases, the radar sensor 120 may operate while all or most of the sensors 118 are powered down to conserve power of the computing device 102. Alternatively or additionally, based on data provided by the radar sensor 120, a determination can be made to activate one of the sensors 118 to obtain additional sensor data.
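One way to picture this power-aware gating is as a selection over per-sensor current draws, where the high-draw imaging sensors become candidates only after the low-power radar detects something of interest. The sensor names, milliamp figures, and selection heuristic below are illustrative assumptions, not values from this document.

```python
# Hypothetical per-sensor current draws (milliamps); the tens-vs-hundreds
# split mirrors the discussion above, but the exact numbers are invented.
SENSOR_POWER_MA = {
    "magnetometer": 30,
    "acoustic": 40,
    "infrared": 250,
    "rgb_camera": 300,
    "depth": 400,
}

def sensors_to_activate(radar_detected_user: bool, budget_ma: int) -> list[str]:
    """Pick the cheapest sensors that fit a power budget; consider the
    expensive imaging sensors only once radar has detected a user."""
    candidates = ["magnetometer", "acoustic"]
    if radar_detected_user:
        candidates += ["infrared", "rgb_camera", "depth"]
    chosen, used = [], 0
    for name in sorted(candidates, key=SENSOR_POWER_MA.get):
        if used + SENSOR_POWER_MA[name] <= budget_ma:
            chosen.append(name)
            used += SENSOR_POWER_MA[name]
    return chosen
```

With no radar detection, only the low-power sensors are eligible; after detection, imaging sensors are added greedily until the budget is exhausted.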

Fig. 3 illustrates example configurations of the radar sensor 120 and the radar fields provided thereby, generally at 300. In the context of fig. 3, two example configurations of the radar sensor 120 are illustrated: a first in which radar sensor 302-1 is embedded in a gaming system 304, and a second in which radar sensor 302-2 is embedded in a television 306. The radar sensors 302-1 and 302-2 may be implemented similarly to or differently from each other or the radar sensors described elsewhere herein. In the first example, radar sensor 302-1 provides a near radar field to enable interaction with the gaming system 304, and in the second example, radar sensor 302-2 provides a medium radar field (e.g., room size) to enable interaction with the television 306. These radar sensors 302-1 and 302-2 provide near radar field 308-1 and medium radar field 308-2, respectively, and are described below.

Gaming system 304 includes, or is associated with, radar sensor 302-1. These devices work together to improve user interaction with the gaming system 304. For example, assume that the gaming system 304 includes a touch screen 310 through which content display and user interaction may be performed. The touch screen 310 can present challenges to the user, such as requiring a person to sit in a particular orientation, such as upright and forward-facing, to be able to touch the screen. Further, the size of the controls for selection via the touch screen 310 can make some user interactions difficult and time-consuming. Consider, however, radar sensor 302-1, which provides near radar field 308-1, enabling a user's hands to interact with the gaming system 304 in three dimensions, such as with small or large, simple or complex gestures, including single-handed or two-handed gestures. As is readily apparent, a large volume through which the user may make selections can be substantially easier to use, and provide a better experience, than a flat surface such as that of touch screen 310.

Similarly, consider radar sensor 302-2, which provides medium radar field 308-2. Providing this radar field enables various interactions with a user positioned in front of the television 306. For example, the user may interact with the television 306 from a distance and through various gestures, ranging from hand gestures, to arm gestures, to full-body gestures. In so doing, user selections may be made simpler and easier than with a flat surface (e.g., touch screen 310), a remote control (e.g., a gaming or television remote), or other conventional control mechanisms. Alternatively or additionally, the television 306 may determine an identity of the user via the radar sensor 302-2, which identity may be provided to sensor-based applications to implement other functions (e.g., content control).

Fig. 4 illustrates another example configuration of the radar sensor and the penetrating radar field it provides, generally at 400. In this particular example, the surface to which the radar field is applied is human tissue. As shown, a hand 402 has a surface radar field 404 provided by the radar sensor 120 (of fig. 1) included in a laptop 406. The radar-emitting element 122 (not shown) provides the surface radar field 404, which penetrates a chair 408 and is applied to the hand 402. In this case, the antennas 124 are configured to receive reflections caused by interactions on the surface of the hand 402 that pass back through (e.g., are reflected back through) the chair 408. Alternatively, the radar sensor 120 may be configured to provide and receive reflections through fabric, such as when a smartphone is placed in a user's pocket. Thus, the radar sensor 120 may map or scan spaces through optical occlusions, such as fabric, clothing, and other opaque materials.

In some embodiments, the digital signal processor 126 is configured to process the reflection signals received from the surface sufficient to provide radar data usable to identify the hand 402 and/or determine a gesture made thereby. Note that with the surface radar field 404, another hand may interact with the surface radar field 404 by performing a gesture, such as tapping on the surface of the hand 402, thereby being recognized. Example gestures include single- and multi-finger swipes, spreads, pinches, non-linear movements, and so forth. Or the hand 402 may simply move or change shape to cause reflections, thereby also performing an occluded gesture.

With respect to human-tissue reflection, the reflecting radar fields may be processed to determine identifying indicia based on the human-tissue reflection, and to confirm that the identifying indicia match recorded identifying indicia for a person, such as to authenticate the person as authorized to control the corresponding computing device. These identifying indicia may include various biometric identifiers, such as a size, shape, ratio of sizes, cartilage structure, and bone structure of the person or a portion of the person, such as the person's hand. These identifying indicia may also be associated with a device worn by the person permitted to control the mobile computing device, such as a device having a unique or difficult-to-copy reflection (e.g., a wedding ring of 14-karat gold with three diamonds, which reflects radar in a particular manner).

Additionally, the radar sensor system may be configured such that personally identifiable information is removed. For example, the identity of the user may be processed such that personally identifiable information cannot be determined for the user, or the geographic location of the user may be generalized (e.g., to a city, zip code, or state level) where location information is obtained such that a particular location of the user cannot be determined. Thus, the user may have control over what information is collected about the user, how the information is used, and which information is provided to the user.

Fig. 5 illustrates, generally at 500, an example configuration of components capable of implementing radar-enabled sensor fusion, including the sensor fusion engine 110 and the context manager 112. Although shown as separate entities, the radar sensor 120, the sensor fusion engine 110, the context manager 112, and other entities may be combined with one another, organized differently, or in direct or indirect communication via an interconnection or data bus not shown. Accordingly, the implementation of the sensor fusion engine 110 and context manager 112 shown in fig. 5 is intended to provide a non-limiting example of the manner in which these entities and other entities described herein can interact to implement radar-enabled sensor fusion.

In this example, the sensor fusion engine 110 includes a radar signal transformer 502 (hereinafter simply referred to as "signal transformer 502") and a radar feature extractor 504 (hereinafter simply referred to as "feature extractor 504"). Although shown as separate entities implemented on the sensor fusion engine 110, the signal transformer 502 and the feature extractor 504 may also be implemented by, or within, the digital signal processor 126 of the radar sensor 120. The sensor fusion engine 110 is communicatively coupled with the sensors 118 to receive sensor data 506 therefrom. The sensor data 506 may include any suitable type of raw or pre-processed sensor data, such as data corresponding to any type of sensor described herein. The sensor fusion engine 110 is also operably coupled with the radar sensor 120, which provides radar data 508 to the sensor fusion engine 110. Alternatively or additionally, the radar data 508 provided by the radar sensor 120 may include real-time radar data, such as raw data representing reflection signals of a radar field as received by the radar sensor 120.

In some embodiments, the signal transformer 502 transforms the raw radar data representing the reflection signals into radar data representations. In some cases, this includes performing signal pre-processing on the raw radar data. For example, as an antenna receives reflection signals, some embodiments sample the signals to generate a digital representation of the raw incoming signals. Once the raw data is generated, the signal transformer 502 pre-processes the raw data to clean up the signals or generate versions of the signals in a desired frequency band or a desired data format. Alternatively or additionally, pre-processing the raw data may include filtering the raw data to reduce a noise floor or remove aliasing, resampling the data to obtain a different sample rate, generating a complex representation of the signals, and so forth. The signal transformer 502 may pre-process the raw data based on default parameters, while in other cases the type and parameters of the pre-processing are configurable, such as by the sensor fusion engine 110 or the context manager 112.
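A compact sketch of the pre-processing steps named above (noise-floor reduction, resampling, and generation of a complex signal representation), assuming the raw radar data arrives as a real-valued sample stream; the filter kernel length and decimation factor are arbitrary choices for illustration.

```python
import numpy as np

def preprocess(raw: np.ndarray, decimate: int = 2, kernel: int = 5) -> np.ndarray:
    """Smooth raw samples to reduce the noise floor, resample to a lower
    rate, and return an analytic (complex) representation of the signal."""
    # Moving-average filter to suppress high-frequency noise.
    smoothed = np.convolve(raw, np.ones(kernel) / kernel, mode="same")
    # Resample (decimate) to a lower sample rate.
    resampled = smoothed[::decimate]
    # Analytic signal via the FFT: zero out the negative frequencies.
    spectrum = np.fft.fft(resampled)
    n = len(spectrum)
    mask = np.zeros(n)
    mask[0] = 1
    mask[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        mask[n // 2] = 1
    return np.fft.ifft(spectrum * mask)
```

The real part of the returned analytic signal matches the smoothed, resampled input; the imaginary part carries the phase-quadrature component useful for complex (I/Q-style) processing.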

The signal transformer 502 transforms the received signal data into one or more different data representations or data transforms. In some cases, the signal transformer 502 combines data from multiple paths and corresponding antennas. The combined data may include data from various combinations of transmit paths, receive paths, or combined transceiver paths of the radar sensor 120. Any suitable type of data fusion technique may be used, such as weighted integration to optimize a heuristic (e.g., signal-to-noise ratio (SNR) or minimum mean square error (MMSE)), beamforming, triangulation, and so forth.
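The weighted-integration idea can be sketched as an SNR-weighted sum over per-path signals, so that cleaner channels dominate the combined data. The weighting heuristic below is one plausible choice, not the only one.

```python
import numpy as np

def fuse_paths(paths: np.ndarray, snrs: np.ndarray) -> np.ndarray:
    """Weighted integration of per-antenna signal paths.

    paths: array of shape (num_paths, num_samples), one row per path.
    snrs:  per-path SNR estimates used as (normalized) fusion weights.
    """
    weights = snrs / snrs.sum()  # normalize so weights sum to 1
    return weights @ paths       # weighted sum across paths
```

With identical path signals, any weighting reproduces the signal; with noisy paths, the higher-SNR rows contribute more to the fused result.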

The signal transformer 502 may also generate multiple combinations of signal data for different types of feature extraction, and/or transform the signal data into another representation as a precursor to feature extraction. For example, the signal transformer 502 may process the combined signal data to generate a three-dimensional (3D) spatial profile of the target object. However, any suitable type of algorithm or transform may be used to generate a view, abstraction, or version of the raw data, such as an I/Q transform that yields a complex vector containing phase and amplitude information related to the target object, a beamforming transform that yields a spatial representation of target objects within range of the gesture sensor device, or a range-Doppler algorithm that yields target velocity and direction. Other types of algorithms and transforms may include a range profile algorithm that yields target recognition information, a micro-Doppler algorithm that yields high-resolution target recognition information, and a spectrogram algorithm that yields a visual representation of the corresponding frequencies, among others.
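Of these transforms, the range-Doppler algorithm is particularly easy to sketch: a 2D FFT over a frame of chirps (slow time by fast time) produces a map whose peak locates the target's range bin and Doppler (velocity) bin. The frame dimensions and the synthetic target below are illustrative, not parameters of the described sensor.

```python
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """frame: (num_chirps, samples_per_chirp) complex baseband data."""
    range_fft = np.fft.fft(frame, axis=1)        # fast time -> range bins
    doppler_fft = np.fft.fft(range_fft, axis=0)  # slow time -> Doppler bins
    # Shift so zero Doppler sits at the center row of the map.
    return np.abs(np.fft.fftshift(doppler_fft, axes=0))

# Synthetic target at range bin 8 and Doppler bin 3,
# observed over 32 chirps of 64 samples each.
chirps, samples = 32, 64
n = np.arange(samples)
m = np.arange(chirps)[:, None]
frame = np.exp(2j * np.pi * (8 * n / samples + 3 * m / chirps))
rd = range_doppler_map(frame)
doppler_bin, range_bin = np.unravel_index(rd.argmax(), rd.shape)
```

After the Doppler-axis shift, Doppler bin 3 appears at row `chirps // 2 + 3` of the map, while the range peak stays at column 8.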

As described herein, the raw radar data may be processed in several ways to generate corresponding transformed or combined signal data. In some cases, the same raw data may be analyzed or transformed in multiple ways. For instance, a same capture of raw data may be processed to generate a 3D profile, target velocity information, and target directional movement information. In addition to generating transformations of the raw data, the radar signal transformer may also perform a basic classification of the target object, such as identifying information about its presence, shape, size, orientation, velocity over time, and so forth. For example, some embodiments use the signal transformer 502 to identify a basic orientation of a hand by measuring an amount of energy reflected off the hand over time.

These transformations and basic classification may be performed in hardware, software, firmware, or any suitable combination. Sometimes, the digital signal processor 126 and/or the sensor fusion engine 110 perform the transformation and the basic classification. In some cases, the signal transformer 502 transforms the raw radar data or performs basic classification based on default parameters, while in other cases the transformation or classification is configurable, such as by the sensor fusion engine 110 or the context manager 112.

Feature extractor 504 receives a representation of the transformed radar data from signal transformer 502. From these data transformations, feature extractor 504 parses, extracts, or identifies one or more radar features 510. These radar features 510 may be indicative of various properties, dynamics, or characteristics of the target, and in this example include detection features 512, reflection features 514, motion features 516, location features 518, and shape features 520. These features are described by way of example only and are not intended to limit the manner in which the sensor fusion engine extracts feature or gesture information from raw radar data or transformed radar data. For example, radar feature extractor 504 may extract alternative radar features, such as range features or image features, from the radar data representation provided by signal transformer 502.

The detection features 512 may enable the sensor fusion engine 110 to detect the presence of users, other people, or objects in the environment of the computing device 102. In some cases, the detection feature indicates a number of targets in the radar field or a number of targets in a room or space swept by the radar field. The reflection feature 514 may indicate a profile of energy reflected by the target, such as reflected energy that varies over time. This may effectively enable the speed of the object motion to be tracked over time. Alternatively or additionally, the reflection signature may indicate the energy of the strongest component or the total energy of the moving target.

The motion features 516 may enable the sensor fusion engine 110 to track movement or motion of a target in or through the radar field. In some cases, the motion features 516 include a velocity centroid in one or three dimensions, or phase-based fine target displacement in one dimension. Alternatively or additionally, the motion features may include a target velocity or a 1D velocity dispersion. In some embodiments, the location features 518 include spatial 2D or 3D coordinates of the target object. The location features 518 may also be useful to range or determine a distance to a target object.
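The phase-based fine-displacement feature mentioned above follows from the round-trip geometry: a phase change of delta-phi in the reflected signal corresponds to a displacement of delta-phi times wavelength over 4*pi. A sketch, assuming a 60 GHz carrier (5 mm wavelength) purely for illustration:

```python
import numpy as np

WAVELENGTH_M = 0.005  # assumed ~60 GHz radar, 5 mm wavelength

def fine_displacement(phase_a: float, phase_b: float) -> float:
    """Sub-wavelength displacement (meters) implied by a phase change
    between two successive measurements of the reflected signal."""
    # Wrap the phase difference into (-pi, pi] before converting.
    delta = np.angle(np.exp(1j * (phase_b - phase_a)))
    # Round-trip path: delta = 4*pi*d / wavelength, so d = delta*wavelength/(4*pi).
    return delta * WAVELENGTH_M / (4 * np.pi)
```

A half-cycle phase change (pi radians) at 5 mm wavelength corresponds to a displacement of 1.25 mm, which is why phase tracking resolves far finer motion than the range-bin spacing.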

The shape features 520 indicate the shape of the target or surface and may include spatial dispersion. In some cases, the sensor fusion engine 110 may scan or beamform different radar fields to construct a 3D representation of the target or of the environment of the computing device 102. For example, the shape features 520 and other radar features 510 may be combined by the sensor fusion engine 110 to construct a unique identifier (e.g., a fingerprint) for a particular room or space.
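
The idea of combining radar features into a unique identifier for a space might be sketched as quantizing shape features and hashing them, so that repeated scans of the same room yield a stable fingerprint. The quantization step, the feature values, and the use of a hash are illustrative assumptions, not the described technique.

```python
import hashlib

def room_fingerprint(shape_features, step=0.5):
    """Hash quantized shape features into a stable room identifier.

    shape_features: sequence of scalar shape measurements (e.g., wall
    distances in meters). Quantizing before hashing makes the fingerprint
    tolerant to small measurement noise between scans.
    """
    quantized = tuple(round(value / step) for value in shape_features)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()[:16]
```

Two noisy scans of the same room then map to the same identifier, while a differently shaped room maps to a different one.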

In some embodiments, the feature extractor 504 builds on basic classifications identified by the signal transformer 502 for feature extraction or abstraction. Consider the example above in which the signal transformer 502 classifies the target object as a hand. Building on this classification, the feature extractor 504 may extract lower-resolution features of the hand. In other words, if the feature extractor 504 is provided with information identifying the target object as a hand, the feature extractor 504 uses this information to look for features related to the hand (e.g., finger taps, shape gestures, or swipe movements) rather than features related to the head (e.g., blinks, lip movements, or head-shaking movements).

As another example, consider a scenario in which the signal transformer 502 transforms raw radar data into a measure of the velocity of the target object over time. In turn, the feature extractor 504 uses this information to identify finger-tap motion features, such as by comparing the acceleration of the target object to a threshold that distinguishes fast-tap from slow-tap features. The features may be extracted using any suitable type of algorithm, such as a machine learning algorithm implemented by a machine learning component (not shown) of the digital signal processor 126.
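
As a concrete illustration of this kind of threshold-based extraction, the following sketch classifies a tap motion from a velocity-over-time measure. The function name, threshold value, and units are illustrative assumptions, not part of the described system.

```python
def classify_tap(velocity, dt, fast_threshold=2.0):
    """Classify a motion as a fast or slow tap from peak acceleration.

    velocity: sequence of target velocities (m/s) sampled every dt seconds.
    fast_threshold: peak acceleration (m/s^2) separating fast from slow taps.
    """
    # Differentiate the velocity measure to obtain acceleration samples.
    accel = [(velocity[i + 1] - velocity[i]) / dt
             for i in range(len(velocity) - 1)]
    peak = max(abs(a) for a in accel)
    return "fast-tap" if peak > fast_threshold else "slow-tap"
```

A real extractor would likely learn such thresholds rather than hard-code them, as the machine learning note above suggests.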

In various embodiments, the sensor fusion engine 110 combines the radar features 510 with the sensor data 506 from the sensors 118 or augments the radar features 510 with that sensor data. For example, the sensor fusion engine 110 may apply a single algorithm to extract, identify, or classify a feature, or apply multiple algorithms to extract a single feature or multiple features. Thus, different algorithms may be applied to extract different types of features from the same set of data or from different sets of data. Based on the radar features, the sensor fusion engine 110 may activate particular sensors to provide data that is supplemental or complementary to the radar features. By doing so, the sensor data can be leveraged to improve the accuracy or effectiveness of the radar features.

The sensor fusion engine 110 provides or exposes the sensor data 506, the radar data 508, or various combinations thereof, to the sensor-based application 108 and the context manager 112. For example, the sensor fusion engine 110 may provide radar data augmented with sensor-based data or validated based on sensor data to the sensor-based application 108. The sensor-based application 108 may include any suitable application, function, utility, or algorithm that leverages information or knowledge about the environment of, or relationship to, the computing device 102 to provide device functionality or alter device operation.

In this particular example, sensor-based application 108 includes proximity detection 522, user detection 524, and activity detection 526. The proximity detection application 522 may detect proximity to a user or other object based on sensor data or radar data. For example, proximity detection application 522 may use detection radar feature 512 to detect an approaching object and then switch to proximity sensor data to confirm proximity to the user. Alternatively or additionally, the application may leverage the shape radar feature 520 to verify that the approaching object is the user's face, and not another large object of similar size.

User detection application 524 may detect the presence of a user based on sensor data or radar data. In some cases, user detection application 524 also tracks the user as they are detected in the environment. For example, user detection application 524 may detect presence based on shape radar features 520 and detection radar features 512 that match a known 3D profile of the user. The user detection application 524 may also verify the detection of the user through image data provided by the RGB sensor 220 or voice data provided by the acoustic sensor 212.

In some embodiments, the activity detection application 526 uses the sensor data and the radar data to detect activity in the environment of the computing device 102. The activity detection application 526 may monitor the radar data for detection features 512 and motion features 516. Alternatively or additionally, the activity detection application 526 may use the acoustic sensor 212 to detect noise and the RGB sensor 220 or the depth sensor 224 to monitor movement.

The sensor-based applications also include biometric recognition 528, physiological monitoring 530, and motion recognition 532. The biometric recognition application 528 may use the sensor data and radar data to capture or obtain biometric characteristics useful for identifying the user, such as to enable facial recognition. For example, the biometric recognition application 528 may confirm the identity of the user by using the shape radar features 520 to obtain a 3D mapping of the skeletal structure of the user's face, along with color images from the RGB sensor 220. Thus, even if an imposter is able to imitate the user's appearance, the imposter will not be able to replicate the exact facial structure of the user and therefore cannot be identified by the biometric recognition application 528.

The physiological monitoring application 530 may detect or monitor medical aspects of the user, such as respiration, heart rate, reflexes, fine motor skills, and the like. To this end, the physiological monitoring application 530 may use the radar data, such as to track motion of the user's chest, monitor arterial blood flow, detect subcutaneous muscle contractions, and so forth. The physiological monitoring application 530 may also use supplemental data from other sensors of the device, such as sound, heat (e.g., temperature), image (e.g., skin or eye color), and motion (e.g., tremor) data. For example, the physiological monitoring application 530 may monitor the user's breathing pattern with the motion radar features 516, the breathing noise recorded by the acoustic sensor 212, and the thermal signature of exhaled air captured by the infrared sensor 222.

The motion recognition application 532 may use the radar data and the sensor data to identify various motion signatures. In some cases, the motion radar features 516 or other radar features may be used to track motion. In some such cases, the motion may be too fast to be accurately captured by the RGB sensor 220 or the depth sensor 224. By using radar features, which can track very fast motion, the motion recognition application 532 can track the motion and leverage the image data from the RGB sensor 220 to provide additional spatial information. Thus, the sensor fusion engine 110 and the motion recognition application 532 are able to track fast-moving objects along with the corresponding spatial information.

The gesture detection application 534 of the sensor-based application 108 performs gesture recognition and mapping. For example, consider a case where the finger tap action feature has been extracted. The gesture detection application 534 may use this information, sound data from the acoustic sensor 212, or image data from the RGB sensor 220 to recognize the feature as a double tap gesture. The gesture detection application 534 may use probabilistic determinations of which gesture is most likely to occur based on the radar data and sensor data provided by the sensor fusion engine 110 and how this information relates to one or more previously learned characteristics or features of various gestures. For example, machine learning algorithms may be used to determine how to weight various received characteristics to determine the likelihood that those characteristics correspond to a particular gesture (or component of a gesture).
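
The probabilistic determination described above might be sketched as a weighted scoring of received characteristics followed by a softmax over candidate gestures, with fixed weights standing in for those a machine learning algorithm would learn. All gesture names, feature names, and weight values below are invented for illustration.

```python
import math

def gesture_likelihoods(features, weights):
    """Return a probability per candidate gesture via a softmax.

    features: dict of characteristic name -> observed value.
    weights: dict of gesture name -> dict of characteristic name -> learned
    weight, determining how heavily each characteristic counts toward that
    gesture.
    """
    scores = {
        gesture: sum(w.get(name, 0.0) * value
                     for name, value in features.items())
        for gesture, w in weights.items()
    }
    norm = sum(math.exp(s) for s in scores.values())
    return {gesture: math.exp(s) / norm for gesture, s in scores.items()}
```

The gesture detection application would then report the gesture with the highest probability, or defer when no candidate is sufficiently likely.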

The context manager 112 may access the sensor-based applications 108 or the sensor data 506 and radar features 510 of the sensor fusion engine 110 to enable radar-based context sensing. In some embodiments, the radar data 508 may be combined with the sensor data 506 to provide a map of the space or room in which the computing device 102 operates. For example, synthetic aperture techniques for capturing and meshing 3D radar imagery may be implemented with position and inertial sensor data. Thus, as the device moves through the environment, the context manager 112 may construct detailed or high-resolution 3D maps of various spaces and rooms. Alternatively or additionally, the 3D imagery may be captured through optical occlusions, or may be used in conjunction with other sensor fusion techniques to improve activity recognition.

In this particular example, the context manager 112 includes a context model 536, a device context 538, and context settings 540. The context model 536 includes a physical model of a respective space, such as the dimensions, geometry, or characteristics of a particular room. In other words, a context model can be considered to describe the unique characteristics of a particular space, like a 3D fingerprint. In some cases, building the context model 536 is implemented via machine learning techniques and may be performed passively as the device enters or passes through a particular space. The device context 538 includes and may describe a number of contexts in which the computing device 102 may operate. These contexts may include a standard set of work contexts, such as "meeting," "do not disturb," "available," "secure," "private," and so forth. For example, a "meeting" context may be associated with the device being in a conference room along with a number of colleagues and clients. Alternatively or additionally, the device contexts 538 may be user-programmable or customized, such as contexts for different rooms of a house, where each context indicates a respective level of privacy or security associated with that context.

The context settings 540 include various device or system settings that are configurable based on context or other environmental properties. The context settings 540 may include any suitable type of device settings, such as ring volume, ring mode, display mode, connection to a network or other device, and so forth. Alternatively or additionally, the contextual settings 540 may include any suitable type of system settings, such as security settings, privacy settings, network or device connection settings, remote control features, and so forth. For example, if the user walks into her home theater, the context manager 112 may recognize the context (e.g., "home theater") and configure the context settings by muting the reminders of the device and configuring the wireless interface of the device to control the audio/video equipment of the home theater. This is just one example of how the context manager 112 may determine and configure a device based on context.
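
Applying context settings once a context is recognized might be sketched as overlaying a per-context settings table onto the device's current settings, following the "home theater" example above. The contexts and settings shown are invented for illustration and are not defined by the described system.

```python
# Hypothetical table of context -> settings overrides.
CONTEXT_SETTINGS = {
    "home theater": {"ring_mode": "silent", "wireless": "av-remote"},
    "meeting": {"ring_mode": "vibrate", "display": "dim"},
    "living room": {"ring_mode": "normal", "media": "cast-to-tv"},
}

def apply_context(context, device_settings):
    """Overlay the settings for a recognized context onto device settings.

    Unknown contexts leave the settings unchanged; the input dict is not
    mutated.
    """
    updated = dict(device_settings)
    updated.update(CONTEXT_SETTINGS.get(context, {}))
    return updated
```

A user-programmable implementation would populate such a table from the context settings 540 rather than hard-coding it.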

Having described respective examples of the computing device 102, the sensor fusion engine 110, and the context manager 112 in accordance with one or more embodiments, consider now a discussion of techniques that may be performed by those and other entities described herein to implement radar-enabled sensor fusion.

Example method

Fig. 6, 8, 10, 11, 13, and 14 depict methods for implementing radar-enabled sensor fusion and/or radar-based context sensing. The methodologies are shown as a set of blocks that specify operations performed, but are not necessarily limited to the orders or combinations of operations shown for performing the respective blocks. For example, the operations of the different methods may be combined in any order to implement alternative methods without departing from the concepts described herein. In portions of the following discussion, these techniques may be described with reference to fig. 1-5, the references to fig. 1-5 being for example only. The techniques are not limited to being performed by one or more entities operating on one device or the entities depicted in the figures.

Fig. 6 depicts an example method 600 for augmenting radar data with supplemental sensor data, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 602, a radar field, such as one of the radar fields shown in fig. 2 and 3, is provided. The radar field may be provided by a radar system or radar sensor, which may be implemented similarly to or differently from the radar sensor 120 and radar-emitting element 122 of fig. 1. The radar field provided may comprise a wide-beam, fully continuous radar field or a narrow-beam, scanned, directed radar field. In some cases, the radar field is provided at a frequency in the approximately 60 GHz band, such as 57-64 GHz or 59-61 GHz, although other frequency bands may be used.

As an example, consider FIG. 7, where laptop 102-5 includes radar sensor 120 at 700 and is capable of radar-enabled sensor fusion. Here, assume that user 702 is playing a First Person Shooter (FPS) video game using the gesture-driven control menu of laptop 102-5. Radar sensor 120 provides a radar field 704 to capture the movement of user 702 for game control.

At 604, one or more reflected signals corresponding to a target in the radar field are received. The radar reflection signal may be received as a superposition of multiple points of a target object in the radar field, such as a person or object within or passing through the radar field. In the context of the present example, a reflected signal from the user's hand is received by the radar sensor 120.

At 606, the one or more reflected signals are converted into a radar data representation. The reflected signals may be transformed using any suitable signal processing, such as by performing a range-doppler transform, a range image transform, a micro-doppler transform, an I/Q transform, or a spectrogram transform. Continuing with the example being described, the radar sensor performs a range-doppler transform to provide target velocity and direction information for the user's hand.
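
The transforms named in this operation can be pictured with a toy range-Doppler transform: a DFT across fast time (within a chirp) resolves range bins, and a DFT across slow time (across chirps) resolves Doppler, i.e., velocity, bins. This naive sketch is for illustration only and does not reflect the actual processing performed by the radar sensor 120.

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform of a 1D complex sequence."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def range_doppler(frame):
    """frame: list of chirps (slow time), each a list of samples (fast time).

    Returns a magnitude map indexed as [doppler_bin][range_bin].
    """
    # DFT across fast time within each chirp resolves range bins.
    range_rows = [dft(chirp) for chirp in frame]
    # DFT across slow time per range bin resolves Doppler bins.
    doppler_cols = [dft(list(col)) for col in zip(*range_rows)]
    return [[abs(doppler_cols[r][d]) for r in range(len(doppler_cols))]
            for d in range(len(frame))]
```

A stationary target at one range then concentrates its energy in a single range bin at the zero-Doppler bin.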

At 608, radar features representing characteristics of the target are extracted from the radar data. The radar signature may provide real-time measurements of the characteristics of the target, the position of the target, or the dynamics of the target. The radar features may include any suitable type of feature, such as detection features, reflection features, motion features, location features, or shape features, examples of which are described herein. In the context of the present example, the reflected radar features and the motion radar features of the user's hand are extracted from the radar data.

At 610, the sensor is activated based on the radar signature. The sensor may be activated to provide supplemental data. In some cases, the sensor is selected for activation based on the radar feature or the type of radar feature. For example, an RGB or infrared sensor may be activated to provide supplemental sensor data of surface features or motion features. In other cases, an accelerometer or gyroscope may be activated to obtain supplemental data of a motion characteristic or a position characteristic. In still other cases, data may be received from a microphone or depth sensor to improve detection characteristics. Continuing with the example being described, the sensor fusion engine 110 activates the RGB sensor 220 of the laptop computer 102-5 to capture spatial information.
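
The feature-to-sensor pairings described in this operation might be sketched as a simple dispatch table that selects which supplemental sensors to wake for a given set of extracted radar feature types. The mapping and sensor names below are illustrative assumptions drawn from the examples above.

```python
# Hypothetical pairing of radar feature types to supplemental sensors.
FEATURE_TO_SENSORS = {
    "surface": ["rgb", "infrared"],
    "motion": ["rgb", "accelerometer", "gyroscope"],
    "position": ["accelerometer", "gyroscope"],
    "detection": ["microphone", "depth"],
}

def sensors_to_activate(feature_types):
    """Return an ordered, de-duplicated list of sensors to wake."""
    selected = []
    for feature in feature_types:
        for sensor in FEATURE_TO_SENSORS.get(feature, []):
            if sensor not in selected:
                selected.append(sensor)
    return selected
```

A sensor fusion engine could consult such a table before waking higher-power sensors, keeping unselected sensors in their low-power states.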

At 612, the radar feature is augmented with the supplemental sensor data. This may include combining or fusing the radar feature and the sensor data to provide a more accurate or more precise radar feature. Augmenting the radar feature may include improving the accuracy or resolution of the radar feature based on the supplemental or complementary sensor data. Examples of such sensor fusion may include using sensor data to increase the positional accuracy of a radar feature, mitigate false detections of a radar feature, increase the spatial resolution of a radar feature, increase the surface resolution of a radar feature, or improve the classification accuracy of a radar feature. In the context of the present example, the sensor fusion engine 110 combines the motion radar features 516 and the RGB information to provide sensor information that captures very fast movement in space. In some cases, the RGB sensor 220 would not be able to detect or capture such motion due to the inherent limitations of the sensor.
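
One hedged way to picture such augmentation is inverse-variance fusion of a radar-derived estimate with a supplemental sensor estimate: the fused estimate has lower variance, i.e., higher accuracy, than either input alone. The scalar formulation and the variance values below are illustrative assumptions, not the described fusion algorithm.

```python
def fuse_estimates(radar_value, radar_var, sensor_value, sensor_var):
    """Inverse-variance weighted fusion of two scalar estimates.

    Returns the fused value and its (smaller) variance.
    """
    w_radar = 1.0 / radar_var
    w_sensor = 1.0 / sensor_var
    fused = (w_radar * radar_value + w_sensor * sensor_value) / (w_radar + w_sensor)
    fused_var = 1.0 / (w_radar + w_sensor)
    return fused, fused_var
```

For equally trusted inputs the fused value lands midway between them, and the fused variance is half of either input's, illustrating how supplemental data can sharpen a radar feature.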

At 614, the augmented radar features are provided to the sensor-based application. This may be effective to improve the performance of the sensor-based application. In some cases, the augmented radar features improve the accuracy of detection applications, such as proximity detection, user detection, activity detection, gesture detection, and the like. In such cases, the sensor data may be used to eliminate false detections, such as by confirming or denying detection of the target. In other cases, the augmented radar features may improve the consistency of the application. To conclude the present example, the fused radar features are passed to the gesture detection application 534, which passes a gesture to the FPS video game as game control input.

Fig. 8 illustrates an example method for low-power sensor fusion, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 802, a radar sensor of a device is monitored for changes in reflected signals of a radar field. The radar sensor may provide a continuous or intermittent radar field from which reflected signals are received. In some cases, the radar sensor is a lower power sensor of the device that consumes less power when operating than other sensors of the device. Changes in the reflected signal may be caused by movement of the device or movement of objects within the device environment. For example, consider environment 900 of fig. 9, where a first user 904 is watching a radar-enabled television 902 in a living room. Here, assume that user 904 starts reading a magazine and that second user 906 enters the living room. The radar sensor 120 of the television detects changes in the reflected radar signal caused by these actions of the first user and the second user.

At 804, the reflected signal is transformed to detect a target in the radar field. In some cases, detection features are extracted from the transformed radar data to confirm the detection of the target in the radar field. In other cases, shape features or motion features are extracted from the transformed radar data to identify physical characteristics of the target or movement of the target in the radar field. In this example, detected radar features of the first user and the second user are extracted from the reflected radar signals.

At 806, in response to detection of the target in the reflected signal, a higher powered sensor is activated from the low power state to obtain sensor data related to the target. For example, if the radar detection feature indicates movement or presence of a user in the radar field, the RGB sensor may be activated to capture an image of the user. In other cases, the GPS module of the device may be activated in response to a location radar feature or a reflected radar feature indicating that the device is moving. Continuing the example being described, the sensor fusion engine 110 activates the RGB sensor 220 of the television 902. The RGB sensor 220 obtains facial image data 908 of the first user 904 and facial image data 910 of the second user 906. The image data may be a still image or video of the user's face so that eye tracking or other dynamic facial recognition features are supported.

At 808, sensor data relating to the target is passed to the sensor-based application. The sensor-based application may include any suitable application, such as the applications described herein. In some cases, execution of the sensor-based application is initiated or resumed in response to detecting a particular activity or target in the radar field. For example, the monitoring application may be resumed in response to a sensed activity characteristic indicating that an unauthorized person entered the controlled area. The RGB sensor may then pass the image data to a monitoring application. In the context of the present example in fig. 9, the RGB sensor passes the facial image data 908 and the facial image data 910 to the biometric recognition application 528.

Alternatively, at 810, radar features extracted from the transformed radar data are passed to a sensor-based application. In some cases, radar features provide additional context for sensor data passed to sensor-based applications. For example, the location radar features may be passed to an application that receives RGB imagery to enable the application to mark objects in the imagery with corresponding location information.

Continuing with the example being described, the sensor fusion engine 110 passes the respective radar surface features of the users' faces to the biometric recognition application 528. Here, the application may determine that the first user 904 is not watching television (e.g., via eye tracking) and that the second user 906 is interested in watching the television 902. Using the facial image data 910, the sensor fusion engine 110 can identify the second user 906 and, based on his identity, retrieve a viewing history associated with that identity. The context manager 112 leverages the viewing history to change the channel of the television to the last channel viewed by the second user 906.

At 812, the higher-power sensor is returned to the low-power state. Once the sensor data is passed to the sensor-based application, the higher-power sensor may return to a low-power state to conserve power of the device. Because the radar sensor consumes relatively little power while providing a range of capabilities, other sensors can remain in a low-power state until more sensor-specific data needs to be obtained. By doing so, the power consumption of the device may be reduced, which is effective to increase run times of battery-powered devices. From operation 812, the method 800 may return to operation 802 to monitor the radar sensor for changes in reflected signals of a subsequent radar field. To conclude the present example, after changing the channel of the television, the RGB sensor 220 returns to the low-power state and may reside in that state until further activity is detected by the radar sensor 120.
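
The wake-capture-sleep loop of method 800 can be sketched as a small state machine in which a higher-power sensor is active only while servicing a radar-triggered capture. The class, state, and data names below are illustrative assumptions.

```python
class HighPowerSensor:
    """Toy stand-in for a sensor that is costly to keep active."""

    def __init__(self):
        self.state = "low-power"
        self.wake_count = 0

    def capture(self):
        # 806: wake from the low-power state and obtain sensor data.
        self.state = "active"
        self.wake_count += 1
        data = "sensor-data"
        # 812: return to the low-power state to conserve device power.
        self.state = "low-power"
        return data

def monitor(radar_detections, sensor):
    """One pass of the 802 -> 812 loop over a stream of radar results.

    radar_detections: booleans, True when a target is detected (804).
    Returns the sensor data passed on to the sensor-based application (808).
    """
    delivered = []
    for detected in radar_detections:
        if detected:
            delivered.append(sensor.capture())
    return delivered
```

The always-on radar sensor drives the loop, so the expensive sensor wakes only as often as targets actually appear.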

Fig. 10 illustrates an example method for enhancing sensor data with radar features, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 1002, sensors of a device are monitored for environmental changes. The sensors may include any suitable type of sensor, such as those described with reference to fig. 2 and elsewhere herein. The sensors may be configured to monitor changes in the physical state of the device, such as device motion, or changes in the environment apart from the device, such as ambient noise or light. In some cases, the sensors are monitored while the device is in a low-power state, or by a low-power processor of the device, to conserve device power.

At 1004, a change in environment is detected via a sensor. The environmental changes may include any suitable type of change, such as a user's voice, ambient noise, device motion, user proximity, temperature changes, ambient light changes, and so forth. The detected environmental changes may be associated with a particular context, activity, user, and the like. For example, the environmental change may include a voice command from a user to wake up the device from a hibernation state and unlock the device for use.

At 1006, in response to detecting the environmental change, a radar sensor is activated to provide a radar field. The radar sensor of the device may be activated to provide radar data that supplements or complements the data provided by the sensor. In some cases, the radar field is configured based on the type of sensor that detected the environmental change or on the sensor data that characterizes the environmental change. For example, if user proximity is detected, the radar sensor is configured to provide a short-range radar field suitable for identifying the user. In other cases, the radar sensor may be configured to provide a sweeping long-range radar field in response to detecting ambient noise or vibration. In such cases, the long-range radar field may be used to detect activity or a location associated with the source of the noise.

At 1008, reflected signals from the radar field are transformed to provide radar data. The reflected signals may be transformed using any suitable signal processing, such as by performing a range-doppler transform, a range image transform, a micro-doppler transform, an I/Q transform, or a spectrogram transform. In some cases, one type of transformation used to provide radar data is selected based on the type of sensor that detects the environmental change or the data provided by the sensor.

At 1010, radar features are extracted from the radar data. The radar features may be extracted based on environmental changes. In some cases, one type of radar feature is selected based on the environmental change or the type of environmental change. For example, the detection feature or the motion feature may be selected in response to an acoustic sensor that detects the ambient noise. In other cases, the radar sensor may extract a location feature or a shape feature in response to the accelerometer sensing movement of the device.

At 1012, the sensor data is augmented with the radar features to provide enhanced sensor data. This may be effective to increase the accuracy or confidence associated with the sensor data. In other words, if the sensor has limitations in accuracy, range, or other capabilities, the radar data can compensate for those shortcomings and improve the quality of the sensor data. For example, surface features of the user's face may confirm the identity of the user and the validity of a received voice command to unlock the device.

At 1014, the enhanced sensor data is exposed to a sensor-based application. This may effectively improve the performance of sensor-based applications by improving the accuracy of the applications, reducing the amount of sensor data used by the applications, extending the functionality of the applications, and so forth. For example, a motion-based power state application that wakes the device in response to movement may also authenticate the user and unlock the device based on enhanced sensor data including motion data and surface features of the user's facial structures.

Fig. 11 illustrates an example method for creating a 3D context model for a space of interest, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 1102, radar sensors of a device are activated to obtain radar data of a space or area of interest. The radar sensor may be activated in response to movement of the device, such as inertial data or GPS data indicating that the device is moving into a space or region. In some cases, the radar sensor is activated in response to detecting an unknown device, such as a wireless access point, a wireless appliance, or other wireless device in the space that transmits data detectable by a wireless interface of the device.

For example, consider environment 1200 of fig. 12, in which user 1202 walks into a living room with his smartphone 102-3 in his pocket. Here, assume that user 1202 has not previously visited this space, and thus smartphone 102-3 has no prior contextual information associated with the space. In response to sensing an open area or wireless data transmission by the television 1204, the radar sensor 120 of smartphone 102-3 begins scanning the room through the radar-transparent pocket material and obtains radar data. As the user changes orientation in the room, the radar sensor 120 continues to scan the room to obtain additional radar data.

At 1104, 3D radar features are extracted from the radar data. The 3D radar features may be extracted directly or constructed from a combination of 1D and 2D radar features. The 3D radar features may include radar reflection features, motion features, or shape features that capture physical aspects of the space or area of interest. For example, the 3D radar features may include ranging, position, or shape information for targets in the space, such as furniture, walls, appliances, floor coverings, architectural features, and the like. In the context of this example, the context manager 112 extracts position and surface radar features for targets in the room, such as the television 1204, plant 1206, door 1208, lamp 1210, picture 1212, and sofa 1214. The shape radar features may indicate an approximate shape, surface texture, or position (absolute or relative to other targets) of each target in the room.

At 1106, position data is received from sensors of the device. The position data may include orientation data, inertial data, motion data, directional data, and the like. In some cases, the position data may be used to implement synthetic aperture techniques for radar scanning or radar imaging. Alternatively or additionally, other sensors of the device may provide data indicative of the environment of the space. For example, an acoustic sensor may provide data for identifying ambient noise (e.g., fan noise or machine hum) present in the space. Continuing with the example being illustrated, the accelerometer 202 of smartphone 102-3 provides inertial and orientation data to the sensor fusion engine 110 as the user moves throughout the room.

At 1108, a spatial relationship of the 3D radar features is determined based on the location data. As described above, the position data can be exploited to provide a synthetic aperture through which the radar sensor can scan the region of interest. In other words, as the device moves in space, the radar sensor may capture physical features of the room as a plurality of 3D radar features. The position data received from the sensors may then be used to determine spatial relationships between multiple 3D features or how the features are pieced together in 3D space. In the context of the present example, the context manager 112 determines the spatial relationship between objects in the room by using the inertial and orientation data of the accelerometer 202.
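
Determining spatial relationships from position data can be pictured as transforming each device-relative feature point into a common world frame using the device's position and heading, so that features captured at different moments piece together consistently. The 2D-heading simplification and all names below are assumptions for illustration, not the described implementation.

```python
import math

def to_world_frame(feature_xyz, device_xyz, heading_rad):
    """Transform a device-relative feature point into world coordinates.

    feature_xyz: (x, y, z) of the radar feature relative to the device.
    device_xyz: (x, y, z) position of the device in the world frame.
    heading_rad: device heading as a rotation about the vertical axis.
    """
    fx, fy, fz = feature_xyz
    dx, dy, dz = device_xyz
    # Rotate about the vertical axis by the device heading, then translate.
    wx = fx * math.cos(heading_rad) - fy * math.sin(heading_rad) + dx
    wy = fx * math.sin(heading_rad) + fy * math.cos(heading_rad) + dy
    return (wx, wy, fz + dz)
```

Accumulating such transformed points as the device moves is one way the position data can serve as a synthetic aperture for assembling 3D features.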

At 1110, a portion of a 3D map is generated based on the 3D radar features and their spatial relationships. A 3D map may be generated for a portion of a space or room based on landmarks captured by radar features. These landmarks may include identifiable physical characteristics of the space, such as furniture, the basic shape and geometry of the space, the reflectivity of surfaces, and the like. From operation 1110, the method 1100 may return to operation 1102 to generate another portion of the 3D map of the space or continue to operation 1112. Continuing with the example being described and assuming that the sensor fusion engine has scanned most of the room, the context manager 112 generates portions of a 3D map of the room based on radar features of the targets 1204-1214 in the room and/or the overall dimensions of the room.

At 1112, the portions of the 3D map are combined to create a 3D model of the space. In some cases, the portions of the 3D map may be assembled or meshed by overlapping their respective edges. In other cases, the portions of the 3D map are combined based on the previously obtained position data. The 3D map of the space may be complete or partial, depending on how many 3D radar features can feasibly be extracted from the radar data. In the context of the present example, the context manager 112 meshes the previously generated portions to provide a 3D model of the living room.

At 1114, a 3D model of the space is associated with a context of the space. This can effectively create a 3D context model of the space. The context may be any suitable type of context, such as a room type, a security level, a privacy level, a device operational mode, and so forth. In some cases, the context is user-defined, which may include prompting the user to select from a predefined list of contexts. In other cases, the machine learning tool may implement mapping operations and assign contexts based on physical characteristics of the space. Continuing the example being illustrated, the context manager 112 associates a "living room" context with the space based on the presence of the television 1204 and the sofa 1214. This indicates that the area is private, that the security risk is low, and that television 1204 is media-enabled and can be controlled by wireless or gesture-driven control functions. For example, media playback on smartphone 102-3 may be transmitted to television 1204 upon entering the living room.

At 1116, the device stores a 3D context model of the space. The 3D context model may be stored to local memory or uploaded to the cloud to allow access by the device or other devices. In some cases, storing the 3D context model enables subsequent identification of the space via the radar sensor. For example, a device may maintain a library of 3D context models that enable the device to learn and remember the space and context associated with it. To summarize the present example, the context manager 112 stores a 3D context model of the living room to enable subsequent access and device configuration, examples of which are described herein.
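
A library of stored 3D context models, as described above, might look like the following minimal sketch; the class and field names are illustrative assumptions rather than an API defined by this disclosure.

```python
class ContextModelLibrary:
    """Minimal local store for 3D context models, keyed by a
    user-visible label such as 'living room', enabling the device
    to learn and remember spaces and their associated contexts."""
    def __init__(self):
        self._models = {}

    def store(self, label, landmarks, context):
        self._models[label] = {"landmarks": landmarks, "context": context}

    def lookup(self, label):
        return self._models.get(label)

lib = ContextModelLibrary()
lib.store("living room", [(0, 0, 0), (3, 2, 0)],
          {"privacy": "private", "security": "low"})
print(lib.lookup("living room")["context"]["privacy"])  # private
```

In a deployed system the library could live in local memory or the cloud, as the text notes, so other devices of the same user can recognize the space too.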

Fig. 13 illustrates an example method for configuring context settings of a device based on a 3D context model, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 1302, radar sensors of the device are activated to obtain radar data of the space or area of interest. The radar sensor may be activated in response to movement of the device, such as inertial data or GPS data indicating that the device is moving into a space or region. In some cases, the radar sensor is activated in response to detecting a known device, such as a wireless access point or another wireless device in the space with which the device has previously been associated.

At 1304, 3D radar features are extracted from the radar data. The 3D radar features may be extracted directly from the radar data or constructed from a combination of 1D and 2D radar features. The 3D radar features may include radar reflection features, motion features, or shape features that capture physical aspects of the space or region of interest. For example, the 3D radar features may include ranging, location, or shape information of objects in the space, such as furniture, walls, appliances, floor coverings, architectural features, and the like.

At 1306, location data is received from sensors of the device. The position data may include orientation data, inertial data, motion data, direction data, and the like. In some cases, the position data may be used to implement a synthetic aperture technique for radar scanning or radar imaging. Alternatively or additionally, other sensors of the device may provide data indicative of the environment of the space. For example, acoustic sensors may provide data for identifying ambient noise (e.g., fan noise or machine roar) present in a space.

At 1308, spatial relationships of the 3D radar features are determined based on the position data provided by the sensors. As described above, the position data can be exploited to provide a synthetic aperture through which the radar sensor can scan the region of interest. In other words, as the device moves in space, the radar sensor may capture physical features of the room as a plurality of 3D radar features. The position data received from the sensors may then be used to determine spatial relationships between multiple 3D features or how the features are pieced together in 3D space.

At 1310, a set of 3D landmarks for a space is generated based on the 3D radar features and their spatial orientations. These landmarks may include identifiable physical characteristics of the space, such as furniture, the basic shape and geometry of the space, the reflectivity of surfaces, and the like. For example, a 3D landmark for a conference room may include a table with specially shaped legs and an overhead projector mounted to a stand protruding from the ceiling.

At 1312, the set of 3D landmarks is compared to a known 3D context model. This may effectively identify the space in which the device operates based on a known 3D context model. In some cases, a match to a known 3D context model is determined when the set of 3D landmarks corresponds to those of that model. To account for changes over time, such as moving or replacing furniture, a match may be determined when enough 3D landmarks match to meet a predefined confidence threshold. In such cases, static 3D landmarks, which may include room geometry and fixed architecture (e.g., stairs), may be weighted more heavily to minimize the impact of dynamic landmarks on the model match rate.
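
The weighted matching described above can be sketched as follows. The landmark names, the static-landmark weight of 3.0, and the 0.7 confidence threshold are all illustrative assumptions, not values given by this disclosure.

```python
def model_match(scanned, model_landmarks, threshold=0.7):
    """Score a scanned landmark set against a stored 3D context model.
    Static landmarks (room geometry, fixed architecture) carry more
    weight than dynamic ones (movable furniture), so rearranged chairs
    do not defeat the match. Landmarks are (name, is_static) tuples."""
    total = matched = 0.0
    scanned_names = {name for name, _ in scanned}
    for name, is_static in model_landmarks:
        w = 3.0 if is_static else 1.0   # assumed static-landmark weighting
        total += w
        if name in scanned_names:
            matched += w
    return total > 0 and matched / total >= threshold

model = [("stairs", True), ("room-geometry", True),
         ("sofa", False), ("tv", False)]
# The sofa was moved out, but the static landmarks still match:
print(model_match([("stairs", True), ("room-geometry", True),
                   ("tv", False)], model))  # True (score 7/8)
```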

At 1314, a context associated with the space is retrieved based on the matching 3D context model. Once a match for the space is determined, the device may retrieve or access a context to be applied to the device. The context may be any suitable type of context, such as a privacy, meeting, appointment, or security context. Alternatively or additionally, if the context of the 3D context model is incompatible with, or outdated relative to, the current device settings, the user may be prompted to select, create, or update a context for the space.

At 1316, context settings are configured based on the context associated with the space. The configured context settings may include any suitable type of setting, such as ring volume, ring mode, display mode, connection to a network or other device, and so forth. Further, the security or privacy settings of the device may be configured to enable or restrict the display of secure or private content.
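
A simple way to picture operation 1316 is a mapping from a matched context to a settings profile. The context labels and setting names below are hypothetical examples, not an actual device API.

```python
# Hypothetical mapping from a retrieved context to device settings.
CONTEXT_SETTINGS = {
    "living room": {"ring_mode": "audible", "display": "full",    "secure_content": True},
    "meeting":     {"ring_mode": "silent",  "display": "full",    "secure_content": True},
    "public":      {"ring_mode": "vibrate", "display": "dimmed",  "secure_content": False},
}

def configure(context, default="public"):
    """Apply the settings profile for the matched context, falling back
    to the most restrictive profile when the context is unknown."""
    return CONTEXT_SETTINGS.get(context, CONTEXT_SETTINGS[default])

print(configure("meeting")["ring_mode"])   # silent
print(configure("elevator")["display"])    # dimmed (unknown -> public fallback)
```

Falling back to the most restrictive profile reflects the security orientation of the text: an unrecognized space should not expose secure or private content.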

Fig. 14 illustrates an example method for changing context settings in response to a change in context, including operations performed by the radar sensor 120, the sensor fusion engine 110, or the context manager 112.

At 1402, radar sensors of a device are activated to obtain radar data for a region of interest. The radar sensor may emit a continuous or directed radar field from which signals are reflected by targets in the area. The targets in the area may include any suitable type of object, such as walls, furniture, windows, floor coverings, appliances, room geometry, and so forth. As an example, consider environment 1500 of FIG. 15, where user 1502 is reading digital content displayed by tablet computer 102-4. Here, the context manager 112 activates the radar sensor 120 of the tablet computer 102-4 to obtain radar data for the room in which the tablet computer operates.

At 1404, radar features are extracted from the radar data. The 3D radar features may be extracted directly from the radar data or constructed from a combination of 1D and 2D radar features. The 3D radar features may include radar reflection features, motion features, or shape features that capture physical aspects of the space or region of interest. For example, the 3D radar features may include ranging, position, or shape information of a target in the space. In the context of this example, sensor fusion engine 110 of tablet computer 102-4 extracts radar features that can be used to identify targets and geometries within the living room of environment 1500.

Optionally, at 1406, data is received from sensors of the device. In some cases, the sensor data may be used to determine a context of the space. For example, the acoustic sensor may provide data associated with identifiable ambient noise in the space, such as a particular frequency of running water from a fountain or fan noise. In other cases, the electrical or electronic device may emit certain squeaks, rumbles, or ultrasonic noises that may be detected by an acoustic sensor to provide identification data for a particular space.

At 1408, a context of the space is determined based at least on the radar feature. In some cases, the context of the device is determined based on geometry and occupancy derived from radar features. For example, the context manager may determine the size of the space, the number of other occupants, and the distance to those occupants in order to set a privacy bubble around the device. In other cases, a set of landmarks in the radar features is compared to a known 3D context model. This may effectively identify the space in which the device operates based on a known 3D context model.
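
The privacy-bubble determination from geometry and occupancy might be sketched as below. The bubble radius, area threshold, and context labels are illustrative assumptions, not values from this disclosure.

```python
def privacy_bubble(room_area_m2, occupant_distances_m, bubble_radius_m=2.0):
    """Derive a simple privacy context from radar-derived geometry and
    occupancy: an empty room is private; anyone inside the bubble
    radius around the device makes the context shared; otherwise the
    room size decides between semi-private and open."""
    if not occupant_distances_m:
        return "private"
    if min(occupant_distances_m) < bubble_radius_m:
        return "shared"
    return "semi-private" if room_area_m2 < 30 else "open"

print(privacy_bubble(20, []))          # private
print(privacy_bubble(20, [1.2, 4.0]))  # shared
print(privacy_bubble(45, [3.5]))       # open
```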

Alternatively, the 3D context model may be accessed or downloaded to the device, e.g., based on device location (e.g., GPS). Alternatively or additionally, other types of sensor data may be compared to known 3D context models. For example, sounds and wireless networks detected by the device may be compared to the sound and network data of known 3D context models. Continuing with the example being illustrated, the context manager 112 of the tablet computer 102-4 determines the context of the environment 1500 as a "living room," i.e., a private, semi-secure context.

At 1410, contextual settings of the device are configured based on the determined context. The configured context settings may include any suitable type of setting, such as ring volume, ring mode, display mode, connection to a network or other device, and so forth. Further, the security or privacy settings of the device may be configured to enable or restrict the display of secure or private content. In the context of this example, assume that before unknown person 1504 enters the room, context manager 112 configures the display, security, and reminder settings of tablet computer 102-4 for a privacy context in which these settings are fully enabled or open.

At 1412, the space is monitored via radar sensors for activity. The radar sensor may provide a continuous or intermittent radar field from which reflected signals are received. In some cases, the radar sensor detects an activity or object in the radar field in response to a change in the reflected signal. Continuing with the example being illustrated, radar sensor 120 monitors environment 1500 for any activity or detection event that may indicate a change in context.

At 1414, radar features are extracted from the radar data to identify the source of the activity in the space. This may include extracting detection, motion or shape radar features to identify objects in space. In some cases, the source of the activity is a target that leaves the space, such as someone who leaves the room. In other cases, the source of the activity may include a person or object entering the space. In the context of this example, assume that an unknown person 1504 enters the room and approaches the user 1502. In response, the radar sensor 120 provides detection and shape radar features 1506 to facilitate identification of the unknown person 1504.

At 1416, it is determined that the source of the activity changed the context of the space. In some cases, the departure of others from the space increases the privacy of the user or reduces the noise restrictions on the device, resulting in a more open context. In other cases, the entry of a person into the space or closer to the device may reduce the privacy of the user or increase the security concerns of the device and the user. As privacy decreases or the need for security increases, the context of the device may become more privacy and security oriented. Continuing with the example being described, the shape radar feature 1506 is used to attempt to identify the unknown person 1504 via facial recognition. Here, assume that facial recognition fails and the context manager 112 determines that the presence of an unknown entity would change the spatial context with respect to privacy and security.

At 1418, contextual settings of the device are changed based on the change in spatial context. In response to a change in context, the context settings of the device can be changed to compensate for the change in context. When the context of the device increases in privacy or security, changing the context settings may include limiting the content exposed by the device, such as by dimming the display, disabling a particular application, engaging display polarization, limiting the wireless connection of the device, or reducing audio playback volume. To summarize the present example, in response to detecting a change in context, the context manager 112 increases the privacy and security settings of the tablet computer 102-4, such as by closing secure applications, reducing the volume of prompts and device audio, or reducing the font size of the displayed content so that the content of the tablet computer is not discernible by an unknown person.
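
The escalation step can be pictured as a small handler that reacts to a detection event whose occupant could not be identified. The setting names and values are hypothetical, chosen only to mirror the example above.

```python
def on_activity(current, identified):
    """React to a radar detection event: if recognition of the new
    occupant fails, escalate the device to a guarded context by
    restricting what the display and speakers expose."""
    if identified:
        return current  # known person, keep current settings
    guarded = dict(current)
    guarded.update({"display": "dimmed", "alert_volume": 0.0,
                    "secure_apps": "closed", "font_scale": 0.8})
    return guarded

settings = {"display": "full", "alert_volume": 0.7,
            "secure_apps": "open", "font_scale": 1.0}
print(on_activity(settings, identified=False)["display"])  # dimmed
```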

Example computing System

Fig. 16 illustrates various components of an example computing system 1600, which example computing system 1600 may be implemented as any type of client, server, and/or computing device as described with reference to fig. 1-15 above to implement radar-enabled sensor fusion.

The computing system 1600 includes a communication device 1602 that supports wired and/or wireless communication of device data 1604 (e.g., received data, data that is being received, data scheduled for broadcast, data packets of the data, etc.). The device data 1604 or other device content can include configuration settings of the device, media content stored on the device, and/or information associated with a user of the device (e.g., an identity of an actor performing the gesture). Media content stored on computing system 1600 may include any type of audio, video, and/or image data. Computing system 1600 includes one or more data inputs 1606 via which any type of data, media content, and/or inputs can be received, such as human utterances, interactions with radar fields, user-selectable inputs (explicit or implicit), messages, music, television media content, recorded video content, and any other type of audio, video, and/or image data received from any content and/or data source.

Computing system 1600 also includes communication interfaces 1608, which can be implemented as any one or more of a serial and/or parallel interface, a wireless interface, any type of network interface, a modem, and as any other type of communication interface. Communication interface(s) 1608 provide a connection and/or communication link between computing system 1600 and a communication network by which other electronic, computing, and communication devices communicate data with computing system 1600.

The computing system 1600 includes one or more processors 1610 (e.g., any of microprocessors, controllers, and the like) that process various computer-executable instructions to control the operation of the computing system 1600 and implement techniques for, or can implement, radar-enabled sensor fusion. Alternatively or in addition, computing system 1600 can be implemented with any one or combination of hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits which are generally identified at 1612. Although not shown, the computing system 1600 may include a system bus or data transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures.

Computing system 1600 also includes computer-readable media 1614, such as one or more memory devices that support persistent and/or non-transitory data storage (i.e., as opposed to mere signal transmission), examples of which include Random Access Memory (RAM), non-volatile memory (e.g., any one or more of a read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. A disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable and/or rewriteable Compact Disc (CD), any type of a Digital Versatile Disc (DVD), and the like. The computing system 1600 may also include a mass storage media device 1616.

Computer-readable media 1614 provides data storage mechanisms to store the device data 1604, as well as various device applications 1618 and any other types of information and/or data related to operational aspects of the computing system 1600. For example, an operating system 1620 can be maintained as a computer application with the computer-readable media 1614 and executed on processors 1610. The device applications 1618 may include a device manager, such as any form of a control application, software application, signal processing and control module, code that is native to a particular device, extraction module or gesture module, and so forth. The device applications 1618 also include system components, engines, or managers that implement radar-enabled sensor fusion, such as the sensor-based application 108, the sensor fusion engine 110, and the context manager 112.

Computing system 1600 may also include or have access to one or more radar systems or sensors, such as a radar sensor chip 1622 having radar-emitting element 122, radar-receiving element, and antenna 124. Although not shown, one or more elements of the sensor fusion engine 110 or the context manager 112 may be implemented in whole or in part in hardware or firmware.

Conclusion

Although techniques using radar-enabled sensor fusion and apparatuses including radar-enabled sensor fusion have been described in language specific to features and/or methods, it is to be understood that the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example ways in which radar-enabled sensor fusion may be implemented.
