Sleep monitoring based on implicit acquisition for computer interaction

Document No.: 1493134    Publication date: 2020-02-04

Note: This technology, Sleep monitoring based on implicit acquisition for computer interaction, was designed and created by T. Althoff, E. J. Horvitz, and R. W. White on 2018-05-23. Its main content includes: A method may include: receiving, from a computing device, implicitly collected computer interaction data of a user; accessing a data store of previously collected computer interaction data, the previously collected computer interaction data relating to a sleep pattern of the user; comparing the user's implicitly collected computer interaction data with the previously collected computer interaction data; and inferring a sleep pattern of the user based on the comparison. The method may provide an indication of real-world cognitive performance that varies throughout the day and is affected by circadian rhythm, sleep type (morning/evening preference), and the duration and timing of previous sleep.

1. A system, comprising:

at least one processor;

a storage device comprising instructions that, when executed by the at least one processor, configure the processor to perform operations comprising:

receiving, from a computing device, implicitly collected computer interaction data of a user;

accessing a data store of previously collected computer interaction data, the previously collected computer interaction data relating to a sleep pattern of a user;

comparing the user's implicitly collected computer interaction data with the previously collected computer interaction data; and

inferring a sleep pattern of the user based on the comparison.

2. The system of claim 1, the operations further comprising:

presenting an indication of the sleep pattern.

3. The system of claim 1, wherein inferring the sleep pattern of the user based on the comparison comprises: inferring based on an average time between successive keyboard entries among at least three keyboard entries.

4. The system of claim 1, wherein the user's implicitly collected computer interaction data comprises a cursor operation on a display of the computing device, the user's cursor operation comprising an amount of time taken to receive input selecting a newly presented object on the display, wherein the newly presented object is part of a plurality of newly presented objects, and wherein inferring the sleep pattern of the user based on the comparison comprises: inferring based on a location of the newly presented object within the plurality of newly presented objects.

5. The system of claim 4, wherein the user's implicitly collected computer interaction data comprises: an audible input and a visual input obtained by the computer from the user, wherein the audible input comprises a voice input generated by the user and the visual input comprises a gaze input from the user or image data of the user.

6. A method, comprising:

receiving, from a computing device, implicitly collected computer interaction data of a user;

accessing a data store of previously collected computer interaction data, the previously collected computer interaction data relating to a sleep pattern of a user;

comparing the user's implicitly collected computer interaction data with the previously collected computer interaction data; and

inferring a sleep pattern of the user based on the comparison.

7. The method of claim 6, further comprising:

presenting an indication of the sleep pattern.

8. The method of claim 6, wherein the implicitly collected computer interaction data comprises keyboard input.

9. The method of claim 8, wherein inferring the sleep pattern of the user based on the comparison comprises: inferring based on the time between successive keyboard entries.

10. The method of claim 6, wherein inferring the sleep pattern of the user based on the comparison comprises: inferring based on an average time between successive keyboard entries among at least three keyboard entries.

11. The method of claim 6, wherein the computing device comprises a touchscreen and the implicitly collected computer interaction data comprises contact of the user with the touchscreen.

12. The method of claim 6, wherein the user's implicitly collected computer interaction data includes cursor operations on a display of the computing device.

13. The method of claim 12, wherein the user's cursor operation comprises: an amount of time taken to receive input selecting a newly presented object on the display, wherein the newly presented object is part of a plurality of newly presented objects, and wherein inferring the sleep pattern of the user based on the comparison comprises: inferring based on a location of the newly presented object within the plurality of newly presented objects.

14. The method of claim 13, wherein the user's implicitly collected computer interaction data comprises: an audible input and a visual input obtained by the computer from the user.

15. The method of claim 13, wherein the audible input comprises a voice input generated by the user and the visual input comprises a gaze input from the user or image data of the user.

Technical Field

Embodiments described herein relate generally to inferring sleep patterns of a user and, without limitation, to inferring physiological patterns of a user based on collecting and comparing computer interaction data that is implicitly collected from the user.

Background

Maintaining optimal cognitive performance is important with respect to learning and productivity, as well as avoiding industrial and motor vehicle accidents. Cognitive performance varies throughout the day, affecting performance quality, which includes how we use and interact with vehicles, devices, resources, and applications.

Cognitive performance declines significantly after insufficient sleep. Understanding the real-world impact of insufficient sleep is crucial. In addition to increased healthcare costs and disease risk, fatigue is estimated to cost American businesses over $150 billion annually through absenteeism, workplace accidents, impaired and delayed decision-making, and other productivity losses. Although sleep-related performance is very important, the time evolution of real-world performance as a function of sleep is still not well understood and has never been characterized on a large scale.

Cognitive performance varies daily and is driven in part by an intrinsic circadian rhythm of approximately 24 hours. Existing studies of the effects of sleep and circadian rhythm on cognitive performance are typically limited to small laboratory-based studies that fail to capture the variability of real-world conditions, such as environmental factors, motivation, and sleep patterns in real-world settings.

The daily pattern of human cognitive performance is typically modeled based on representations of three biological processes: (i) circadian rhythm (a time-dependent, behavior-independent, near-24-hour oscillation); (ii) homeostatic sleep pressure (the longer one is awake, the more tired one becomes); and (iii) sleep inertia (a performance decrement occurring immediately after waking).

Existing sleep-related correlations are typically based on experimental studies in which participants are deprived of sleep and perform artificial tasks designed to gauge performance, rather than on performance captured non-invasively through daily tasks in a real-world environment. In addition, participants in an artificial laboratory setting may be affected by their knowledge of the study and may subconsciously change their behavior.

Laboratory studies often fail to account for the myriad of influences in the real world, including motivation, mood, illness, environmental conditions, behavioral compensation such as caffeine intake, and real-world sleep patterns, which are far more complex than those enforced in a study. In contrast, real-world cognitive performance shows day-to-day variation and is affected by circadian rhythm, sleep type (morning/evening preference), and the duration and timing of previous sleep.

Drawings

In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:

FIG. 1 illustrates an example system according to an example embodiment.

Fig. 2 illustrates a system diagram of a network-based communication system for generating inferences in accordance with some examples of the present disclosure.

FIG. 3 illustrates a data flow to generate inferences according to some examples of the present disclosure.

FIG. 4 illustrates a data flow to generate inferences according to some examples of the present disclosure.

FIG. 5 is a flow diagram illustrating an example method of comparing computer interaction data acquired implicitly by a user with previously acquired computer interaction data to infer a sleep pattern of the user, in accordance with some embodiments.

Figure 6 is a block diagram illustrating components of a machine capable of reading instructions from a machine-readable medium and performing any of the methods discussed herein, in accordance with some embodiments.

Detailed Description

In some forms, cognitive performance may be measured based on the rate of individual keystrokes. In other forms, cognitive performance may be measured based on click interactions (e.g., on search results displayed by a web search engine). This implicitly collected computer interaction data (i.e., data collected during a user's ordinary computing sessions without prompting the user to explicitly perform functions for testing purposes) can be correlated with sleep measurements over time (e.g., obtained through the use of a wearable sleep measurement device).

The methods, systems, machine-readable media, and devices described herein can obtain measures of cognitive performance through daily interactions with existing computing applications (e.g., keystroke speed and click interactions on web search engines) or other applications, such as e-mail, programming environments, error reporting systems, and office suites. Based solely on the recorded activity data, these measurements can be used to estimate factors that influence cognitive performance, such as sleep quality during the previous day and night or the current level of fatigue.
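As a concrete illustration (not part of the original disclosure), the following minimal Python sketch derives simple keystroke-timing features from implicitly logged key-event timestamps; the event format, field names, and feature choices are assumptions for illustration only.

```python
# Minimal sketch: deriving keystroke-timing features from implicitly logged
# key events. The timestamp format (seconds) is a hypothetical assumption.
from statistics import mean, median

def keystroke_features(key_events):
    """Return simple timing features from a chronologically ordered
    list of key-event timestamps (seconds)."""
    if len(key_events) < 2:
        return {}
    intervals = [b - a for a, b in zip(key_events, key_events[1:])]
    span = key_events[-1] - key_events[0]
    return {
        "mean_interval_s": mean(intervals),
        "median_interval_s": median(intervals),
        "max_interval_s": max(intervals),
        "keystrokes_per_min": 60.0 * (len(key_events) - 1) / span,
    }

# Example: timestamps captured during ordinary typing (no explicit test).
print(keystroke_features([0.00, 0.21, 0.45, 0.62, 0.91, 1.30]))
```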

Monitoring application usage by measuring implicitly collected computer interaction data may allow users to obtain insight into performance and productivity that may be used to improve people's awareness of such patterns and to adjust the user experience accordingly (e.g., intelligently scheduling tasks to prevent or minimize human error; scheduling meetings based on participants' performance and sleep-type profiles). As an example, the generated inferences may be customized for an individual user or group of users over time.

These inferences can be generated by comparing the implicit interaction data to previously tracked sleep data in a sleep pattern correlation database. In some forms, the methods, systems, machine-readable media, and devices described herein may make use of thousands of actual, observed correlations between currently implicitly acquired computer interaction data and sleep patterns (or other physiological traits) to make inferences about a user.

The system may then provide a suggestion based on the inference. One advantage of this inference over the prior art is that the inference is based on actual data rather than on some form of modeling alone.

In some forms, the methods, systems, machine-readable media, and devices described herein may establish sleep pattern correlations to continuously and non-invasively monitor human performance on a population scale. The inferences and sleep pattern correlations that may be determined are relevant to: (i) sleep scientists seeking larger-scale, real-world performance measures; and (ii) computer scientists who build tools and applications that may be affected by changes in human performance, in order to address problems and challenges in the field of public health.

Examples of other implicitly collected computer interaction data that may be relevant to sleep measurement over time include mouse cursor activity, e.g.: (i) mouse speed; (ii) the number of times a user crosses a link before clicking; (iii) response time to system alerts and notifications; (iv) time to select items in standard UI elements such as lists and drop-down menus; and/or (v) scrolling features such as speed.

A number of different types of cursor movement may be part of the implicitly collected computer interaction data. Some examples include: (i) cursor speed; (ii) cursor directionality (i.e., the amount of deviation from the shortest path); (iii) cursor distance; (iv) cursor acceleration; and/or (v) objects crossed by the cursor (as well as other types of cursor movement).
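A minimal sketch of how such cursor features might be computed from raw (x, y, t) samples is shown below; the sampling format and the specific feature definitions are illustrative assumptions, not requirements of this description.

```python
# Minimal sketch: cursor-movement features from (x, y, t) samples
# (pixels, pixels, seconds). The format is assumed for illustration.
import math

def cursor_features(samples):
    """samples: time-ordered list of (x, y, t) tuples."""
    if len(samples) < 2:
        return {}
    path, speeds = 0.0, []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        path += d
        if t1 > t0:
            speeds.append(d / (t1 - t0))
    straight = math.hypot(samples[-1][0] - samples[0][0],
                          samples[-1][1] - samples[0][1])
    return {
        "mean_speed_px_s": sum(speeds) / len(speeds) if speeds else 0.0,
        "directness": straight / path if path else 1.0,  # 1.0 = shortest path
    }

print(cursor_features([(0, 0, 0.0), (40, 10, 0.1), (90, 15, 0.2), (120, 20, 0.3)]))
```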

There may also be metadata associated with various actions. Example metadata may include the time the action occurred and where the action occurred. In some forms, where the action occurs may be used to determine relevance based on the user's location and sleep pattern.

In some forms, computer interaction data implicitly acquired from a user includes both audible and visual input obtained by the computer from the user. As an example, the audible input may include a voice input generated by the user and detected by the computer. As another example, the visual input may include a gaze input generated by the user and detected by the computer.

In other forms, the computer may obtain image data of the user. The image data may be compared to previously acquired image data to determine whether there is a change (e.g., bags under the user's eyes) indicative of a change in sleep pattern.

Another example of implicitly collected computer interaction data that may be relevant to sleep measurements over time includes analyzing the content of search engine queries in which a user may be expressing fatigue. In other forms, implicitly acquired computer interaction data that may be relevant to sleep measurements over time includes analyzing the content of social media postings in which a user may be expressing fatigue. Other forms of online computer interaction may also be envisaged for determining sleep pattern correlations.

In some forms, the methods, systems, machine-readable media, and devices described herein may measure (without any additional hardware or explicit testing) billions of existing search engine interactions per day in a real-world setting (e.g., at the user's option). Human performance measured through implicitly collected computer interaction data varies throughout the day based on sleep type and prior sleep.

Inferences based on sleep pattern correlations (or other physiological correlations) can provide insight into sleep and performance, owing to the ability to use online activity to study human cognition, motor skills, and public health. Large-scale biometric sensing of online data based on implicitly acquired computer interaction data makes it possible to: (i) study sleep and performance outside of a small laboratory setting without actively inducing sleep deprivation; (ii) non-invasively measure cognitive performance without forcing individuals to interrupt their work or perform a separate manual task; and/or (iii) identify real-world measures of cognitive performance based on frequent tasks and interactions or continuous monitoring.

As an example, inference and sleep pattern correlations may be obtained from computing applications such as email, programming environments, error reporting systems, office suites, and so forth. Inferences and sleep pattern correlations may provide insight regarding performance and productivity gained by monitoring these applications to potentially improve a user's awareness of patterns and/or to appropriately adjust a user experience. For example, tasks may be intelligently scheduled based on the participant's performance and sleep-type profile to prevent or minimize human error.

FIG. 1 illustrates a schematic diagram of an example implicit computer interaction data acquisition system 100, in accordance with various example embodiments. As shown, system 100 includes device 102A. Device 102A may be a laptop computer, desktop computer, terminal, mobile phone, tablet computer, smart watch, Personal Digital Assistant (PDA), wearable device, digital music player, server, and the like. The user 130 may be a human user that is able to interact with the device 102A, such as by providing various inputs (e.g., via an input device/interface such as a keyboard, mouse, touch screen, etc.).

In some implementations, the device 102A can include or be connected to various components, such as a display device 104 and one or more tracking components 108. The display device 104 may be, for example, a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a touch screen display, and/or any other such device capable of displaying, depicting, or otherwise presenting a user interface 106 (e.g., a Graphical User Interface (GUI)).

Tracking component(s) 108 can be, for example, a sensor (e.g., an optical sensor), a camera (e.g., a two-dimensional or three-dimensional camera), and/or any other such device capable of tracking computer interaction data that is implicitly acquired as described herein. It should be appreciated that while fig. 1 depicts the display device 104 and tracking component(s) 108 as being integrated within a single device 102A (such as in the case of a laptop computer with an integrated webcam or a tablet/smartphone device with an integrated front-facing camera), in other implementations, the display device 104 and tracking component(s) 108 can be separate elements (e.g., when a peripheral webcam device is used).

For example, as shown in fig. 1, device 102A may present user interface 106 to user 130 via display device 104. The user interface 106 can be graphical depictions of various applications executing on the device 102A (and/or any other such content displayed or depicted via the display device 104), such as an application 110A (which can be, for example, a web browser) and an application 110B (which can be, for example, a media/video player).

Such applications may also include or otherwise reflect various content elements (e.g., content element search results 120). Such content elements may be, for example, alphanumeric characters or character strings, words, text, images, media (e.g., video), and/or any other such electronic or digital content that may be displayed, depicted, or otherwise presented via device 102A.

Various applications may also depict, reflect, or otherwise be associated with content location 112. Content location 112 may include or otherwise reflect a local and/or network/remote location (e.g., a Uniform Resource Locator (URL), a local or remote/network file location/path, etc.) where various content elements may be stored or located.

It should be noted that although fig. 1 (and the various other examples and illustrations provided herein) depicts the device 102A as a laptop computing device or a desktop computing device, this is merely for purposes of clarity and brevity. Thus, in other implementations, device 102A may be various other types of devices including, but not limited to, various wearable devices.

For example, in some implementations, the device 102A may be a Virtual Reality (VR) and/or Augmented Reality (AR) headset. Such headsets may be configured to be worn on or positioned near the head, face, or eyes of a user. Content such as immersive visual content (which spans most or all of the user's field of view) may be presented to the user via the headset. Thus, such VR/AR headsets may include or incorporate components corresponding to those depicted in fig. 1 and/or described herein.

By way of illustration, the VR headset may include a display device, e.g., one or more screens, displays, etc., included/incorporated within the headset. Such a screen, display, etc. may be configured to present/project the VR user interface to a user wearing the headset. Additionally, the displayed VR user interface may further include visual/graphical depictions of various applications (e.g., VR applications) executing on the headset (or on another computing device connected to or in communication with the headset).

Additionally, in certain implementations, such headsets may include or incorporate a tracking component such as described/referenced herein. For example, the VR headset may include sensor(s), camera(s), and/or any other such component capable of detecting motion or otherwise tracking the user's eyes (e.g., while wearing or utilizing the headset). Thus, the various examples and illustrations provided herein (e.g., with respect to device 102A) should be understood to be non-limiting, as the described techniques may also be implemented in other settings, contexts, etc. (e.g., with respect to VR/AR headsets).

Turning now to fig. 2, a system diagram of a network-based communication system illustrates further aspects of the implicit computer interaction data collection system 100. The components in fig. 2 may be configured to communicate with each other, e.g., via a network coupling (such as network 215), a shared memory, a bus, a switch, etc.

In various examples, the servers and components shown in fig. 2 may communicate via one or more networks (not shown). The network may include a Local Area Network (LAN), a Wide Area Network (WAN), a wireless network (e.g., 802.11 or cellular), a Public Switched Telephone Network (PSTN), an ad hoc network, a cellular network, a personal area network, or a point-to-point network (e.g., Bluetooth, Wi-Fi Direct), or other combinations or permutations of network protocols and network types. These networks may include a single LAN or WAN, or a combination of LANs or WANs, such as the Internet.

It is to be appreciated that each component can be implemented as a single component, combined into other components, or further subdivided into multiple components. Any one or more of the components described herein may be implemented using hardware (e.g., a processor of a machine) or a combination of hardware and software (e.g., the machine 100 of fig. 1). For example, although the figure shows network-based communication service 250, information service 255, relevance service 260, action service 292, and inference model 297 as separate computing devices communicating with each other over network 215, the functionality of those components may be implemented by the same computing device, or by different computing devices connected over a different network (e.g., a local area network, LAN) than network 215.

Computing devices 205 and 210 (which are illustrated as mobile devices such as smartphones) may include an operating system 220 and instances of a communication application 225. The operating system 220 may provide one or more services to applications, such as the communication application 225. Services may include memory management, scheduling, multitasking, notifications, interrupts, event notifications, hardware interfaces, and the like. The communication application 225 may be a network-based communication application and may include a GUI component 230, a service component 235, and a cache component 240. The GUI component 230 can present one or more GUIs.

Service component 235 may be connected to network-based communication service 250 and may send messages entered by a user of computing device 205 via a GUI provided by GUI component 230 to one or more other users of network-based communication service 250. The service component 235 may receive one or more other messages from other users through the network-based communication service 250 and may cause those communications to be displayed in a GUI provided through the GUI component 230.

The service component 235 may also receive implicitly collected computer interaction data of the user from a computing device (e.g., from one of the devices 102A, 205, 210). Service component 235 may also access a data store of implicitly collected computer interaction data that is related to the user's sleep pattern.

It should be noted that implicitly collected computer interaction data can be collected from any number of sources and can be normalized and/or reconciled. Thus, once a data store (e.g., file system, relational database, NoSQL database, flat file database) is populated with sufficient semantically complete data, the data can be mined, stored, queried, audited, and validated. Implicitly collected computer interaction data can originate in several forms, such as unstructured data, spreadsheets, relational databases, Extensible Markup Language (XML), JavaScript Object Notation (JSON), and the like. In some instances, a service (e.g., a web service) may map or convert various formats into a common format to facilitate data mining.
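The following sketch illustrates one possible way to reconcile records from heterogeneous sources into a common schema before storage; the field names and target schema are assumptions for illustration, not the format of any particular service described here.

```python
# Minimal sketch: reconciling implicitly collected interaction records from
# heterogeneous sources (JSON, CSV) into one common, hypothetical schema.
import csv, io, json

COMMON_FIELDS = ("user_id", "event_type", "timestamp", "value")

def from_json(blob):
    rec = json.loads(blob)
    return {"user_id": rec["user"], "event_type": rec["type"],
            "timestamp": rec["ts"], "value": rec.get("value")}

def from_csv(blob):
    rows = csv.DictReader(io.StringIO(blob))
    return [{"user_id": r["uid"], "event_type": r["event"],
             "timestamp": float(r["time"]), "value": float(r["val"])}
            for r in rows]

records = [from_json('{"user": "u1", "type": "keystroke", "ts": 12.5, "value": 0.21}')]
records += from_csv("uid,event,time,val\nu1,cursor_speed,13.0,310.0\n")
assert all(set(r) == set(COMMON_FIELDS) for r in records)  # common schema
print(records)
```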

Relevance service 260 can compare the user's implicitly collected computer interaction data with previously collected computer interaction data and infer the user's sleep pattern based on the comparison. As an example, the correlation service 260 may compare data (e.g., wake time) of a particular user to previously acquired and correlated sleep data for a population of other users in order to determine inferences about the data of the particular user.

In some forms, the implicitly collected computer interaction data includes keyboard input. As an example, the client device 100 may include a touchscreen, and the implicitly collected computer interaction data includes the user's contact with the touchscreen.

Relevance service 260 can also cause sleep inferences to be displayed via a GUI provided through GUI component 230. Additionally, user input (i.e., feedback) regarding the inference can be sent by the service component 235 to the relevance service 260.

Based on feedback from one or more users, the relevance service 260 can take actions on behalf of the user, send additional content or information (which may have been requested by the user), and customize inferences for the user. As a result, additional information, content, or implicitly collected computer interaction data can be sent or received by the correlation service 260.

The service component 235 can cause the additional information to be displayed in a GUI provided through the GUI component 230. In other examples, information needed for the interaction is sent by the relevance service 260 along with the inference. The inferences may include the initially displayed inferences (e.g., "primary sleep benefit is obtained when sleeping from 9 pm to 5 am") and additional information sent by the relevance service 260 when selected (e.g., "relaxation techniques for sleep").

The caching component 240 can cache communications and suggestions and allow a user to view past communications and past inferences. The user may select or activate past inferences at any time while their communication session is active. In some examples, the cache component may allow for reviewing past communications and inferences and activating those inferences at any time.

The network-based communication service 250 can receive communications (which can include implicitly collected computer interaction data) from computing devices and route those communications to other computing devices participating in the network-based communication session. For example, if computing device 210 and computing device 205 are in a communication session together, communications from computing device 205 may be routed to computing device 210. Additionally, the network-based communication service 250 may route copies of these communications to the correlation service 260. The suggestions from the correlation service 260 may be routed directly to the participants in the communication session or through the network-based communication service 250.

Relevance service 260 can receive communications from service component 235 or network-based communication service 250, and can generate one or more inferences and return those inferences to the computing device in the communication session. These inferences may be personalized for each user (or group of users), and thus, the inference given to computing device 205 may be different from the inference given to computing device 210 for the same communication message. The inference can be personalized based on a user profile stored at the computing device or a network-based profile service (e.g., a network-based computing device that stores multiple user profiles to provide internal computing device knowledge of the profiles). Some example methods of inferring sleep patterns are summarized in the following paragraphs. Additional implementations are further discussed herein.

In some example forms, inferring a sleep pattern of the user based on comparing the implicitly collected computer interaction data comprises: inferring based on the time between successive keyboard entries. As an example, inferring the sleep pattern of the user based on the comparison may include: inferring based on an average time between successive keyboard entries among at least three keyboard entries. In other examples, the inference may be based on the maximum, median, or mode (or any combination of time-based factors) of the time between successive keyboard inputs.

In some implementations, the user's implicitly collected computer interaction data includes cursor operations on a display of the computing device. As an example, a user's cursor operation may provide input related to the amount of time it takes to select a newly presented object on the display using a cursor (or, on a touch screen, using other forms of input).

In some forms, the newly presented object may be part of a plurality of newly presented objects on the display. Thus, the amount of time it takes to select a new object from the plurality of objects may provide a strong correlation with sleep patterns.

Additionally, inferring the sleep pattern of the user based on the comparison may include: inferring based on a location of the newly presented object within the plurality of newly presented objects. As an example, the distance from the previous selection to the current selection when selecting a new object from the plurality of objects may provide an additional factor that demonstrates a strong correlation with sleep patterns.
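One hypothetical way to fold the object's position into a selection-time feature is sketched below; the normalization (dividing by one plus the distance in list positions) is purely illustrative and is not a formula prescribed by this description.

```python
# Minimal sketch: a selection-latency feature that accounts for where the
# newly presented object sits among its peers (e.g., search results).
def selection_latency_feature(selection_time_s, selected_index, previous_index):
    """selection_time_s: time from presentation to selection (seconds).
    selected_index / previous_index: positions within the list of newly
    presented objects."""
    distance = abs(selected_index - previous_index)
    return selection_time_s / (1 + distance)  # illustrative normalization

# Example: 1.8 s to select result #5 after previously selecting result #1.
print(selection_latency_feature(1.8, 5, 1))
```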

The correlation service 260 can include a distribution component 280. Distribution component 280 can distribute communications received from network-based communication service 250 to one or more inference models, such as inference model 297. Inference model 297 can determine inferences based on rules (e.g., heuristics), such as determining inferences based on the presence of one or more particular keywords. In other examples, inference model 297 can be an unsupervised or supervised machine learning model. Examples include natural language processing, decision trees, random forests, support vector machines, and the like.

Inferences can be developed that leverage a large amount of sleep and cognitive performance data. These inferences may demonstrate that the relative effects of circadian rhythm, homeostatic sleep drive, and sleep inertia are consistent with expectations from laboratory-based sleep studies. As an example, the effects of insufficient sleep (e.g., less than six hours of sleep on two consecutive nights) may be associated with a decline in cognitive performance that can last for several days.

As an example, a computing system may suggest that a person sleep more (or less) on a given night and then measure their attention the next day (e.g., through events related to using a computer). An existing causal example (which forms the basis of a sleep pattern correlation) is that shortening sleep reduces attention. Once a correlation is found, a probability is calculated, which can be used to determine a numerical representation of the correlation, that is, a correlation value representing the known correlation and its strength.
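A minimal sketch of deriving such a numerical correlation value is shown below; it assumes Python 3.10+ and uses fabricated placeholder values, since the actual data and correlation technique are not specified here.

```python
# Minimal sketch: a correlation value between prior-night sleep duration and
# a next-day performance metric. Sample values are placeholders only.
from statistics import correlation  # Pearson's r; requires Python 3.10+

sleep_hours  = [5.0, 6.5, 7.0, 8.0, 4.5, 7.5]
typing_speed = [42.0, 48.0, 51.0, 55.0, 39.0, 53.0]  # illustrative metric

r = correlation(sleep_hours, typing_speed)
print(f"correlation value: {r:.2f}")  # closer to +/-1 => stronger correlation
```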

In some examples, implicitly collected computer interaction data may be sent to multiple inference models, each of which is designed to determine different inferences or different types of inferences. In some examples, the implicitly collected computer interaction data may be sent to a plurality of inference models, each of which is a different type of model; that is, one model may be rule-based and another model may be a machine learning model.

Inference model 297 can respond with one or more computed inferences. Each inference model 297 may be trained and/or designed to detect certain types of inferences. For example, one model may be trained for a first type of inference and a second model may be trained for a second type of inference. For example, one model may be designed to detect inferences pertaining to sleep patterns and implicitly collected computer interaction data.

Another model may be designed to detect inferences about sleep and physical activity. Other examples include information about diet, stress, physical discomfort, and/or travel. By providing communications to several specially trained models looking for specific inferences, the system can more accurately determine the user's inferences than providing communications to a general model.

In some examples, inference model 297 does not respond if the communication does not yield an inference above a certainty threshold. However, multiple models may respond with the calculated inference. The suggestion generation component 285 may then generate a suggestion for each inference based on one or more sleep pattern correlations returned by the model.

Inference can generate suggestions that can be classified as actions (e.g., recommended sleep time and/or duration), content suggestions (providing information about relaxation and exercise), and the like. Example action suggestions include using a particular application, visiting a website, setting a reminder (e.g., take a break), and so forth. Example content suggestions may include providing information about diet, sleep, relaxation, and/or stress (e.g., by showing documents, showing videos, showing audio clips, showing pictures, etc.).

The inference may be based on one or more if-then-else rule sets that generate suggestions intended to improve the effectiveness of sleep based on the inference returned. For example, if the returned inference is that the user is sleeping too much, the suggestion may be to provide more information about the effects of oversleeping. As another example, if the returned inference is that the user wishes to get up at a particular time each day, the suggestion may be a calendar reminder for bedtime. Other methods of transforming inferences into suggestions that can improve sleep may be utilized, such as decision trees, random forests, and the like.
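A minimal sketch of such an if-then-else rule set follows; the inference labels and suggestion wording are illustrative assumptions rather than values defined by this description.

```python
# Minimal sketch: an if/elif rule set mapping a returned inference to a
# suggestion, in the spirit of the rules described above.
def suggest(inference):
    if inference == "sleeping_too_much":
        return {"type": "content", "text": "Information about the effects of oversleeping"}
    elif inference == "wants_fixed_wake_time":
        return {"type": "action", "text": "Create a recurring bedtime calendar reminder"}
    elif inference == "insufficient_sleep":
        return {"type": "action", "text": "Suggest an earlier bedtime tonight"}
    else:
        return {"type": "content", "text": "General sleep hygiene information"}

print(suggest("wants_fixed_wake_time"))
```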

The content population component 290 can populate suggestions with content from one or more sources, such as the information service 255 (e.g., a calendar service, search engine, website, or other database). Content population component 290 can also utilize one or more Application Programming Interfaces (APIs) to communicate with these services.

The content population component can utilize a user profile of the user. The content may be populated with known preferences of the user in the user profile. For example, a sleep schedule may be selected using a user's bedtime preference. This allows personalized suggestions according to the user's preferences. For example, the system may know that the user is a vegetarian and may suggest a vegetarian diet that promotes sleep.

The suggestion ranking component 295 can rank inferences determined by the suggestion generation component 285. The suggestion ranking component 295 may have rules that specify a ranking. In some forms, suggestions within the suggestion generation component may be ranked using a number of different heuristics. As an example, heuristics may consider actual topics inferred. For example, calendar action inferences may precede sleep schedule inferences, and so on.

In some examples, the heuristics may be adjusted based on feedback from all users of the correlation service 260. For example, an action suggestion may take precedence over a content suggestion if the user generally interacts (e.g., selects) more with an action suggestion generated by the inference than with a content suggestion generated by the inference. Thus, by using feedback, the global user model can adjust the heuristic to better meet the user's needs.

In some examples, each user may have learned preferences for certain types of suggestions generated by inference, in addition to the global user model. Thus, even though the global model may determine that a larger user population interacts more with action suggestions than content suggestions, content suggestions may be prioritized over other suggestion types if a particular user prefers inference-based content suggestions over inference-based action suggestions. Example models may include neural networks, decision trees, random forests, regression algorithms, and so forth. An example model may be (i) an individual user model; (ii) a group model; and/or (iii) a global model.

In some examples, the three models (heuristic model, global model, individual model) may be combined such that the models have a hierarchical structure. Thus, the individual model controls unless individual interaction data is insufficient; if individual interaction data is insufficient, the global model controls unless global interaction data is insufficient; and if global interaction data is insufficient, the heuristic model is used.
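A minimal sketch of this fallback hierarchy follows; the sufficiency threshold and the scoring functions are arbitrary placeholders, not values specified by this description.

```python
# Minimal sketch: prefer the individual model, fall back to the global model,
# then to heuristics when interaction data is insufficient.
def rank_with_fallback(suggestions, individual, global_model, heuristic,
                       individual_n, global_n, min_n=50):
    if individual_n >= min_n:
        scorer = individual          # enough personal interaction data
    elif global_n >= min_n:
        scorer = global_model        # enough global interaction data
    else:
        scorer = heuristic           # fall back to heuristic rules
    return sorted(suggestions, key=scorer, reverse=True)

heuristic = lambda s: {"action": 2, "content": 1}.get(s["type"], 0)
suggestions = [{"type": "content", "text": "Relaxation techniques"},
               {"type": "action", "text": "Set a bedtime reminder"}]
print(rank_with_fallback(suggestions, None, None, heuristic,
                         individual_n=3, global_n=10))
```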

In other examples, each model may contribute to the final ranking. For example, each model may assign a fixed number of points to each suggestion according to that model's rules. Each suggestion based on the inference(s) may then be scored using a weighted sum that combines the point values assigned to that suggestion and/or inference by each model. Each model may be weighted according to its perceived accuracy in capturing the user's preferences. For example, individual models may be weighted more than global models, and both global and individual models may be weighted more than heuristic models. The weights may change dynamically over time based on explicit user feedback (e.g., GUI elements indicating satisfaction with a suggestion) or implicit feedback in the form of interactions with suggestions based on an inference (where interacting with a suggestion represents satisfaction).
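A minimal sketch of the weighted-sum combination described above is shown below; the weights and point values are placeholders chosen only for illustration.

```python
# Minimal sketch: combining point values from heuristic, global, and
# individual models with per-model weights.
def weighted_score(points_by_model, weights):
    return sum(weights[m] * p for m, p in points_by_model.items())

weights = {"heuristic": 0.2, "global": 0.3, "individual": 0.5}

suggestions = {
    "earlier_bedtime":  {"heuristic": 2, "global": 3, "individual": 1},
    "relaxation_video": {"heuristic": 1, "global": 2, "individual": 3},
}
ranked = sorted(suggestions,
                key=lambda s: weighted_score(suggestions[s], weights),
                reverse=True)
print(ranked)  # the individual model's preferences dominate via the largest weight
```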

The ranking may be calculated based on the type of inference and/or suggestion (action or content), and may also be based on the actual content of the suggestion. For example, if it is inferred that more sleep would be helpful for cognition, the recommendations may be to: (i) go to sleep earlier; (ii) sleep longer; and/or (iii) sleep later. All three are action suggestions, but they can be ranked based on the user's preferences regarding these particular actions.

The interaction component 275 processes user interactions with suggestions by exchanging information with the user, such as content determined by the content population component 290, to perform action suggestions and the like. The action service 292 may be a calendar service, an exercise service, a diet service, a relaxation service, and other action services.

Turning now to fig. 3, a data flow of inference generation 300 is illustrated, in accordance with some examples of the present disclosure. Relevance service 360 may be an example of relevance service 260. In some examples, the distribution component 380 can be an example of the distribution component 280. In some examples, the suggestion generation component 385 can be an example of the suggestion generation component 285. In some examples, the content population component 390 can be an example of the content population component 290. In some examples, suggestion ranking component 395 may be an example of suggestion ranking component 295.

As noted above with respect to fig. 2, all or some of the components are configured to communicate with each other, e.g., via network couplings, shared memories, buses, switches, and so forth. It is to be appreciated that each component can be implemented as a single component, combined into other components, or further subdivided into multiple components. Any one or more of the components described herein may be implemented using hardware (e.g., a processor of a machine) or a combination of hardware and software.

Communication text 305 from a communication session (which can include all types of implicitly collected computer interaction data generated when a user interacts with a computing device) is processed by distribution component 380. The distribution component 380 inputs the communication text (or other type of input) 305 to one or more inference models 307-310.

As an example, keyboard input model 307 (which may identify implicitly collected computer interaction data referenced within communication text 305) may determine inferences based on sleep pattern relevance referenced in communication text 305 (e.g., by receiving data related to keystroke speed). The touch screen input model 308 can determine from the communicated text 305 whether (and how) the user interacted with the touch screen or the like (e.g., by receiving data related to the speed of contact of the touch screen). The object interaction model 309 can determine from the communication text 305 whether (and how) the user interacted with the mouse to select an object on a display or the like (e.g., by receiving data related to the speed at which the user selected the object, which was found in the search results of the web browser's search engine).

As shown in fig. 3, any number of other models, such as object model 310, may be utilized to determine inferences. The models 307-310 may be implemented as part of the correlation service 360 or may be separate services in communication with the correlation service 360. As previously indicated, the models may be machine learning models, including supervised or unsupervised learning models. Examples include neural networks, regression, natural language processing, random forests, decision trees, decision jungles, or other models.

To better determine inferences, the models may interact with an action service, such as action service 292 of FIG. 2, to further determine one or more characteristics of the user. For example, action service 292 may be a calendar service (e.g., a MICROSOFT calendar service) that may track appointments and meetings of the user. These models may consult the calendar service to determine whether a particular suggestion based on the inference is compatible with a user's characteristics (e.g., the user's schedule). For example, if the user is in a meeting at 7 am, a recommendation that the user sleep until no later than 6 am may be optimal.

The suggestion generation component 385 receives the determined inferences and maps the inferences to suggestions. The returned inferences may include content as well as or instead of semantic meaning (such as text entry via a keypad or the like). For example, suggestion generation component 385 uses a database (such as inference to suggestion mapping 387) that may contain indications of possible inferences and corresponding sleep-related suggestions. For example, a sleep time suggestion may have a corresponding calendar entry suggestion.

In other examples, the inference-to-suggestion mapping 387 may be rule-based rather than table-based, where if-then statements are evaluated against inferred values to determine suggestions. For example, if the inference is to create a calendar entry, the corresponding suggestion is to create a calendar entry. More than one suggestion may be made from a single inference.

The suggestion generation component 385 may interact with an action service, such as the action service 292 of fig. 2, to determine one or more characteristics of a user. For example, the action service may be a calendar service (e.g., a MICROSOFT calendar service) that may track appointments and meetings of the user. The suggestion generation component 385 can consult the calendar service to determine whether a suggestion is compatible with characteristics of the user (e.g., the user's preferred sleep schedule). For example, if the user is already busy, a suggestion that the user go to sleep at another time may be better.

The content population component 390 can take the suggestions and populate them with content, for example, by contacting the information service 355. Content population may also utilize the user's profile data 392. The profile data 392 may be obtained based on prior use of the suggestion service 360 and may include action preferences, content preferences, location information, and the like. For example, profile data 392 may include the user's sleep preferences, diet preferences, exercise preferences, and the like.

The profile data 392 may be context specific, such that it stores preferences for different contexts of a user (or a similar group of users). For example, it may store that the user prefers to sleep later on weekends but does not like to sleep past 7 am on weekdays. The system may then populate one recommendation for weekdays and a different recommendation for weekends.

The suggestion ranking component 395 can rank the suggestions and select one or more of them to send to a computing device of the user. As noted, suggestion ranking component 395 can consult both user profile data 392 and interaction history 394 to rank inferences and/or suggestions relative to one another, based on a global model built from all users' interactions with inferences and/or suggestions for all implicitly collected computer interaction data analyzed by suggestion service 360, as well as an individual model built from the current user's interactions with inferences and/or suggestions for that user's implicitly collected computer interaction data.

The suggestion ranking component 395 may send all suggestions for the inference, or may select a subset of suggestions for the inference to send to the user's computing device based on the ranking. For example, the suggestion ranking component 395 may select a predetermined number or percentage of the highest-ranked suggestions (e.g., the top three or top 10%) and send them to the user's computing device. In other examples, the suggestion ranking component 395 may determine a screen size of the user's computing device.

The suggestion ranking component 395 may select a set of suggestions to send to the user's computing device that maximizes the overall utility of the suggestions given the constraints on the length of the suggestions and the size of the suggested regions of the GUI of the user's computing device. For example, the ranking may be reflected in several points, where higher ranking means higher point value.
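One simple interpretation of this constrained selection is a greedy pick under a character budget, sketched below with placeholder utilities and texts; the budget and values are assumptions for illustration.

```python
# Minimal sketch: greedily selecting suggestions to maximize total utility
# under a limit on the space available in the suggestion area of the GUI.
def select_suggestions(ranked, char_budget):
    chosen, used = [], 0
    for utility, text in ranked:              # ranked best-first
        if used + len(text) <= char_budget:
            chosen.append(text)
            used += len(text)
    return chosen

ranked = [(5, "Go to bed 30 minutes earlier tonight"),
          (4, "Set a recurring bedtime reminder"),
          (2, "Watch a short relaxation video")]
print(select_suggestions(ranked, char_budget=70))
```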

In some examples, the suggestions selected by suggestion ranking component 395 may include different types of suggestions, such as content suggestions and action suggestions. As previously noted, the system may record interactions with a given suggestion, and the system may use this feedback to learn how to better rank the suggestions. In some examples, this feedback may be shared with the inference model 297 to allow the inference model to better learn the appropriate inferences. Thus, user interaction with a particular suggestion provides an indication of how inferences and corresponding suggestions should be ranked, and of whether the inference model correctly identified an inference from the implicitly collected computer interaction data. A lack of interaction with a suggestion likewise indicates that those suggestions should not be prioritized and that the inference model may have incorrectly interpreted the implicitly collected computer interaction data.

Turning now to fig. 4, a data flow of suggestion generation 400 according to some examples of the present disclosure is illustrated. According to some examples of the disclosure, suggestion service 460 may be an example of suggestion services 360 and 260. Interactions with the suggestions 410 are received by the interaction component 420 at the suggestion service 460.

The interaction component 420 can respond with additional information regarding the suggestion or additional context, e.g., the user's preferred wake time or other context. In some examples, the additional information may be obtained from information service 455 (which may be an example of information services 355, 255). The user may interact with the additional content, such as by selecting a wake time for weekdays and receiving more content about the user's calendar. This ensures a good user experience without the user having to leave the user interface of the user's preferred application.

Interactions and additional content may be created by means of the profile data 492 and the interaction history 494. The profile data 492 may be an example of the profile data 392 of fig. 3. Interaction history 494 may be an example of interaction history 394 of FIG. 3. Upon interacting with the suggestion, the interaction component 420 can update the interaction history 494 such that the suggestion ranking component can update its user-based model.

The interaction component 420 may interact with the action service 492. The action service 492 may be an example of the action service 292 of fig. 2. The interaction component 420 can consult the action service 492 to provide content and interactions associated with inferences and/or suggestions (e.g., sleep schedules, exercise schedules, dietary recommendations, etc.). The interaction component 420 may also interact with the action service 492 to implement any action suggestions selected by the user, such as selecting medications, food to eat, vitamins to ingest, and the like.

FIG. 5 illustrates a flow chart of a method 500 of forming inferences by comparing implicitly collected computer interaction data. The method 500 may be implemented at a computing system, such as any of the devices or components described herein.

At operation 510, the methods, systems, machine-readable media, and devices receive one or more types of implicitly collected computer interaction data for a user from a computing device (e.g., computing device 110 of FIG. 1). In some forms, the computer interaction data collected implicitly includes keyboard input. As another example, the client device 100 may include a touchscreen, and the computer interaction data that is implicitly collected by the user includes user contact with the touchscreen.

There are many different types of user contact with a touch screen that can be collected as part of implicitly collected computer interaction data. Some examples include sliding, panning, and zooming movements (among others) on a touch screen.

At operation 520, the methods, systems, machine-readable media, and devices described herein access a data store of previously collected computer interaction data. The previously acquired, implicitly collected computer interaction data is correlated with the user's sleep pattern (e.g., by using the correlation service 260, 360, 460). The previously acquired, implicitly collected computer interaction data and corresponding sleep pattern correlations may be accessed from a database (e.g., information service 255 or relevance service 260 in fig. 2) located in any of the components described herein.

Accessing the correlations may identify the type of correlation (e.g., biometric, typing speed). Any correlation technique may be used to generate sleep pattern correlations, for example, regression (e.g., linear regression, power regression, logarithmic regression, exponential regression, etc.) may be used. These correlations may be stored in the same database as the implicitly collected computer interaction data, or may be stored in a different database.
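As one example of the regression options listed above, the following sketch fits a simple linear regression between sleep duration and a performance metric; it assumes Python 3.10+ and uses fabricated placeholder values, since no particular data set or metric is prescribed here.

```python
# Minimal sketch: linear regression between prior sleep duration and a
# performance metric. Sample values are illustrative placeholders.
import statistics

sleep_hours  = [5.0, 6.0, 6.5, 7.0, 8.0]
typing_speed = [40.0, 45.0, 47.0, 50.0, 54.0]

# statistics.linear_regression requires Python 3.10+.
slope, intercept = statistics.linear_regression(sleep_hours, typing_speed)
print(f"typing_speed ~ {slope:.2f} * sleep_hours + {intercept:.2f}")
```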

In some aspects of the subject technology, methods, systems, machine-readable media, and devices include an input/output component (e.g., computing device 100 in fig. 1 or a mobile phone, tablet computer, laptop computer, desktop computer, server, etc.) that receives implicitly collected computer interaction data. Computer interaction data acquired implicitly in the form of one or more events (e.g., sleep, run, typing, meal) and/or metadata about the events (e.g., metadata indicating keystroke measurements, cursor movement, heart rate, distance run, time spent running, calories consumed, etc.) may be received. The events and their metadata may be manually entered by a user or automatically received from a computing device 102A, 205, 210, a sensor, or some other data collection device (e.g., a fitness tracker).

In some forms, computing system 100 retrieves (and possibly stores) the correlations from another computing device (i.e., a separate database on another computing device). In some implementations, the event may be generated (or triggered) by the computing system, a user of the computing system, or otherwise. Example computer system event types include, but are not limited to, clicking a button, mouse movement, text entry (e.g., typing speed and accuracy), closing a program, adjusting a scroll bar (e.g., speed and accuracy of the adjustment), scroll wheel movement (e.g., speed and accuracy of the movement), and so forth.

A computing system may access events indicating that implicitly collected computer interaction data is available from a plurality of different sources. The computing system may access information related to events (and corresponding implicitly collected computer interaction data) of the user and possibly other similar users.

According to some examples, the event is an object in the Java programming language or another programming language. Such events may come from a hierarchy of classes (or from other classes or programming language constructs) defined in Java.

In some forms, the implicitly acquired computer interaction data may have stored indications of the strength of its corresponding sleep pattern correlations (e.g., in the form of correlation coefficients, also referred to herein as correlation values). Sleep pattern correlation values may be generated for an individual, for a group of individuals similar to the user, or for the entire user population from a database containing various types of implicitly collected computer interaction data.

As an example, the sleep pattern correlation value may take the form of a severity score that is based on a difference between the inferred amount of sleep and a recommended amount of sleep (e.g., for a given user demographic or other factors). The severity score may determine (i) the selection of a suggested action; (ii) the content of a suggested content item; and/or (iii) the wording of the suggestion (as well as other types of information that may be included in the suggestion).
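A minimal sketch of such a severity score and its mapping to suggestion wording follows; the recommended sleep amount and thresholds are illustrative assumptions, not clinical guidance or values defined here.

```python
# Minimal sketch: a severity score based on the gap between inferred and
# recommended sleep, used to pick suggestion wording.
def severity(inferred_sleep_h, recommended_sleep_h=8.0):
    return max(0.0, recommended_sleep_h - inferred_sleep_h)

def suggestion_for(score):
    if score >= 3.0:
        return "Strongly consider a significantly earlier bedtime tonight."
    elif score >= 1.0:
        return "Try going to bed a little earlier tonight."
    return "Your inferred sleep is close to the recommended amount."

print(suggestion_for(severity(inferred_sleep_h=5.5)))
```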

At operation 530, the methods, systems, machine-readable media, and devices described herein compare computer interaction data collected implicitly from the user with previously collected computer interaction data (e.g., by using the correlation service 260 in fig. 2). The comparison may be generated as a result of communications between users of the network-based communication service. The user may have received a communication from another user, or the user may have sent a communication to another user.

In some implementations, the user's implicitly collected computer interaction data includes one, some, or all cursor operations on a display of the computing device. As an example, a user's cursor operation may provide input related to the amount of time it takes to select a newly presented object on the display using the cursor.

In some forms, the newly presented object may be part of a plurality of newly presented objects on the display. Thus, the amount of time it takes to select a new object from the plurality of objects may provide a strong correlation with sleep patterns.

Additionally, inferring the sleep pattern of the user based on the comparison may include inferring based on a location of the newly presented object within the plurality of newly presented objects. As an example, the distance from the previously selected object to the newly selected object may provide an additional factor that exhibits a strong correlation with sleep patterns.
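
As an illustrative sketch (not part of the disclosure; the class and field names are assumptions), the selection time and the cursor travel distance for a single target selection might be captured together as a feature:

```java
import java.awt.Point;

// Hypothetical sketch: features of a single target-selection interaction, combining
// the time taken to click a newly presented object with the cursor distance travelled
// from the previous selection to the new object.
public final class SelectionFeature {
    public final long selectionTimeMillis;     // time from object presentation to click
    public final double cursorDistancePixels;  // distance from prior selection to new object

    public SelectionFeature(long presentedAtMillis, long clickedAtMillis,
                            Point previousSelection, Point newSelection) {
        this.selectionTimeMillis = clickedAtMillis - presentedAtMillis;
        this.cursorDistancePixels = previousSelection.distance(newSelection);
    }
}
```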

The communication may be submitted to different models that analyze sleep pattern correlations. Each of the models for analyzing correlations (as described with reference to fig. 2-4) may be executed on one or more computing devices.

The comparison may be based on events associated with the user, a group of users similar to the user, or the entire user population. Thus, each user may have their own, different, personalized sleep pattern correlations. Correlation values for a particular user may be learned over time from interactions with the network-based communication service and from interactions with other applications on the user's one or more computing systems.

Sleep pattern correlations may be ranked by the perceived strength of the correlation (e.g., a correlation coefficient closer to 1 or -1 is stronger than a correlation coefficient near 0). Determining sleep pattern correlations using many different models may be more accurate than a single general model that attempts to discern generic correlations.
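
For illustration, correlations keyed by feature name could be ranked by the absolute value of their coefficients, as in the following hypothetical Java sketch (names are assumptions, not part of the disclosure):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: ranking sleep-pattern correlations by perceived strength,
// i.e., by the absolute value of the correlation coefficient.
public final class CorrelationRanking {
    public static List<Map.Entry<String, Double>> rankByStrength(Map<String, Double> correlations) {
        return correlations.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, Double> e) -> Math.abs(e.getValue())).reversed())
                .collect(Collectors.toList());
    }
}
```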

At operation 540, the methods, systems, machine-readable media, and devices described herein infer a sleep pattern of the user based on the comparison. As an example, the user's sleep pattern may be inferred by using inference model 297 (shown in fig. 2). In some forms, inferring the sleep pattern of the user includes inferring based on the time between successive keyboard entries. As an example, inferring the sleep pattern of the user may include inferring based on an average time between successive keyboard entries of at least three keyboard entries.
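
A minimal sketch of the average inter-keystroke interval over at least three keyboard entries (the class and method names are illustrative assumptions) might look as follows:

```java
import java.util.List;

// Hypothetical sketch: averaging the time between successive keyboard entries,
// requiring at least three entries (i.e., at least two intervals).
public final class KeystrokeFeatures {
    public static double averageInterKeyIntervalMillis(List<Long> keyPressTimesMillis) {
        if (keyPressTimesMillis.size() < 3) {
            throw new IllegalArgumentException("At least three keyboard entries are required");
        }
        long total = 0;
        for (int i = 1; i < keyPressTimesMillis.size(); i++) {
            total += keyPressTimesMillis.get(i) - keyPressTimesMillis.get(i - 1);
        }
        return total / (double) (keyPressTimesMillis.size() - 1);
    }
}
```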

In some examples, a machine learning classification algorithm (e.g., a multi-class logistic regression algorithm, a multi-class neural network, a multi-class decision forest, etc.) may learn appropriate inferences based on sleep pattern correlations. Machine learning algorithms can utilize training data to identify appropriate inferences based on learned relationships between implicitly collected computer interaction data and sleep pattern correlations.
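
As an illustrative stand-in for the multi-class models named above (and not the disclosure's own implementation), a minimal binary logistic-regression classifier trained by gradient descent could map an interaction feature vector to a sleep-related label; all names and the training scheme here are assumptions:

```java
// Hypothetical sketch: a tiny binary logistic-regression classifier that maps
// implicitly collected interaction features (e.g., mean inter-keystroke interval,
// selection time) to a sleep-related label (e.g., rested = 0, sleep-deprived = 1).
public final class TinyLogisticRegression {
    private final double[] weights;
    private double bias;

    public TinyLogisticRegression(int numFeatures) {
        this.weights = new double[numFeatures];
    }

    private static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public double predictProbability(double[] features) {
        double z = bias;
        for (int j = 0; j < weights.length; j++) {
            z += weights[j] * features[j];
        }
        return sigmoid(z);
    }

    /** One full pass of stochastic gradient descent over the labeled training data. */
    public void trainEpoch(double[][] x, int[] labels, double learningRate) {
        for (int i = 0; i < x.length; i++) {
            double error = predictProbability(x[i]) - labels[i]; // labels are 0 or 1
            for (int j = 0; j < weights.length; j++) {
                weights[j] -= learningRate * error * x[i][j];
            }
            bias -= learningRate * error;
        }
    }
}
```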

In some forms, the heuristic model may set the prioritization based on the type of inference. The inference model can modify the rules based on user interaction and the type of computer interaction data collected implicitly. In other examples, other machine learning models may be utilized, such as logistic regression, linear regression, neural networks, decision trees, decision forests, and so forth. These models may be initially trained using heuristic models, and then the models may be refined first using global interaction data, and then using user-specific interaction data.

Starting from a set of heuristics, an initial user of the system may experience a baseline ranking performance, which is then refined by training on the global user base and on the user's own preferences and choices. The use of both individual and system-wide implicitly collected computer interaction data can provide a large amount of training data to improve the accuracy of the model. Depending on the amount of personalized implicitly collected computer interaction data obtained, some implicitly collected computer interaction data may be weighted more heavily when training the model, in order to customize the inferred ranking for the user.
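
One hypothetical way to weight personalized data more heavily as it accumulates is sketched below; the weighting rule, thresholds, and names are illustrative assumptions not specified by the disclosure:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: combining global and user-specific interaction examples,
// weighting the personalized examples more heavily as more of them accumulate.
public final class TrainingSetBuilder {
    public static final class WeightedExample {
        public final double[] features;
        public final int label;
        public final double weight;
        WeightedExample(double[] features, int label, double weight) {
            this.features = features;
            this.label = label;
            this.weight = weight;
        }
    }

    public static List<WeightedExample> build(List<double[]> globalX, List<Integer> globalY,
                                              List<double[]> userX, List<Integer> userY) {
        // Give user-specific data up to 5x the weight of global data once enough
        // personalized examples exist (here: 50), ramping up linearly before that.
        double userWeight = 1.0 + 4.0 * Math.min(1.0, userX.size() / 50.0);
        List<WeightedExample> out = new ArrayList<>();
        for (int i = 0; i < globalX.size(); i++) {
            out.add(new WeightedExample(globalX.get(i), globalY.get(i), 1.0));
        }
        for (int i = 0; i < userX.size(); i++) {
            out.add(new WeightedExample(userX.get(i), userY.get(i), userWeight));
        }
        return out;
    }
}
```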

The method 500 may further include an operation 550 in which the methods, systems, machine-readable media, and devices described herein present an indication of the sleep mode. As shown in fig. 1, according to some examples of the present disclosure, a GUI (e.g., GUI 106 in fig. 1) generated by a network-based communication application (e.g., by GUI component 230 in fig. 2) of a network-based communication service may be used for this presentation.

In some forms, the computing system 100 presents suggestions based on inferences generated from sleep pattern correlations using GUI elements in the personalized suggestion region 110A on the display 104. The computing system 100 may also provide reports to the user of how various implicitly collected computer interaction data affects sleep pattern relevance values. In some examples, the GUI element containing the suggestion may be a button that fits into the personalized suggestion region 110A on the GUI 106. In addition, some suggestions may be potentially distinguished from other suggestions in some manner (e.g., by highlighting).

In some forms, the personalized suggestion(s) may be delivered by a personal digital assistant. For example, as shown in fig. 1, GUI 106 may present a user interface to user 130 via the display device 104. The user interface may be a graphical depiction of various applications executing on one or more of the client devices 110, such as application 110A, which may be, for example, a web browser, or application 110B, which may be, for example, a media/video player.

The application or applications may also include or reflect various content elements. Such content elements may be, for example, alphanumeric characters or character strings, words, text, images, media (e.g., video), and/or any other electronic or digital content that may be displayed, depicted, or otherwise presented via the display 104.

By way of example, generally speaking, GUI 106 may present an indication of the sleep mode to user 130 on one or more of the client devices 110 by passively making suggestions about getting more sleep. As another example, GUI 106 may present an indication of the sleep mode to user 130 on one or more of the client devices 110 by actively alerting the user in situations where intervention may be more desirable (e.g., by warning the user of fatigue prior to driving).

Other forms of the methods, systems, machine-readable media, and devices described herein are contemplated in which other types of correlations besides sleep pattern correlations are determined based on implicitly collected computer interaction data. By way of example, a correlation service described herein (see, e.g., correlation services 260, 360 in figs. 2 and 3) can analyze implicitly collected computer interaction data to form correlations and corresponding inferences about other types of physiological patterns (individual or collective). Other example physiological patterns include patterns related to stress, neurodegenerative diseases, cognitive disorders, and sleep disorders (not just sleep deficits). Other types of correlations, in addition to those described herein, are also contemplated.

To protect user privacy, a Graphical User Interface (GUI) may be provided for user approval and opt-in or opt-out to allow the user to approve or restrict the collection of personal information. In some examples, these GUIs may allow a user to delete previously collected information or set constraints on the type and content of the collected information.

Certain embodiments are described herein as comprising logic or several components or mechanisms. The components may constitute software components (e.g., code embodied on a machine-readable medium) or hardware components. A "hardware component" is a tangible unit that is capable of performing certain operations and may be configured or arranged in some physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations described herein.

In some embodiments, the hardware components may be implemented mechanically, electronically, or in any suitable combination thereof. For example, a hardware component may comprise dedicated circuitry or logic that is permanently configured to perform certain operations. For example, the hardware component may be a special-purpose processor, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). The hardware components may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, the hardware components may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, the hardware components become a specific machine (or specific components of a machine) specifically tailored to perform the configured functions and are no longer general-purpose processors. It should be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured (e.g., software-configured) circuitry may be driven by cost and time considerations.

Thus, the phrase "hardware component" should be understood to encompass a tangible record, be it physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform a certain operation described herein. As used herein, "hardware-implemented component" refers to a hardware component. Considering embodiments in which the hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one time. For example, where the hardware components include a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured at different times to be different special-purpose processors (e.g., including different hardware components), respectively. The software accordingly configures one or more particular processors to constitute, for example, particular hardware components at one time and different hardware components at different times.

A hardware component may provide information to and receive information from other hardware components. Thus, the described hardware components may be viewed as being communicatively coupled. In the case where a plurality of hardware components coexist, communication may be achieved by signal transmission between two or more of the hardware components (for example, by appropriate circuits and buses). In embodiments in which multiple hardware components are configured or instantiated at different times, communication between such hardware components may be achieved, for example, by storing and retrieving information in memory structures accessible to the multiple hardware components. For example, one hardware component may perform an operation and store the output of the operation in a memory device to which it is communicatively coupled. Yet another hardware component may then later access the memory device to retrieve and process the stored output. The hardware components may also initiate communication with input or output devices and may operate on resources (e.g., collections of information).

Various operations of the example methods described herein may be performed, at least in part, by one or more processors that are temporarily configured (e.g., via software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such a processor may constitute a processor-implemented component that operates to perform one or more operations or functions described herein. As used herein, "processor-implemented component" refers to a hardware component that is implemented using one or more processors.

Also, the methods described herein may be implemented at least in part by a processor, where one or more particular processors are examples of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate in a "cloud computing" environment or as a "software as a service" (SaaS) to support the performance of related operations. For example, at least some of the operations may be performed by a group of computers (as an example of a machine including a processor), where the operations are accessible via a network (e.g., the internet) and one or more appropriate interfaces (e.g., APIs).

The execution of some of the operations may be distributed among the processors, not only residing within a single machine, but also being deployed across several machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processor or processor-implemented component may be distributed across several geographic locations.

Some aspects of the present technology relate to collecting personal information about a user. It should be noted that personal information about the user is collected only after receiving affirmative consent from the user for the collection and storage of such information. A persistent notice (e.g., an email message or information displayed within an application) is provided to the user to inform the user that information is being collected and stored. The persistent reminder may be provided each time the user accesses an application or at each threshold time period (e.g., a weekly email message). For example, an arrow symbol may be displayed on the user's mobile device to inform the user that his/her Global Positioning System (GPS) location is being tracked. Personal information is stored in a secure manner to ensure that unauthorized access to the information does not occur. For example, medical and health-related information may be stored in a manner consistent with the Health Insurance Portability and Accountability Act (HIPAA).

Example machine and software architecture

In some embodiments, the components, methods, applications, etc. described in connection with fig. 1-5 are implemented in the context of a machine and associated software architecture. The following sections describe representative software architecture(s) and machine (e.g., hardware) architecture(s) suitable for use with the disclosed embodiments.

Software architectures are used in conjunction with hardware architectures to create devices and machines tailored to specific purposes. For example, a particular hardware architecture coupled with a particular software architecture will create a mobile device, such as a mobile phone or tablet device. A slightly different combination of hardware and software architectures may yield a smart device for the "internet of things," while yet another combination may produce a server computer for use within a cloud computing architecture. Not every combination of such software and hardware architectures is presented here, as those skilled in the art can readily understand how to implement the inventive subject matter in contexts other than the disclosure contained herein.

Fig. 6 is a block diagram illustrating components of a machine 600 capable of reading instructions from a machine-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methodologies discussed herein, according to some example embodiments. In particular, fig. 6 illustrates a schematic representation of a machine 600 in the example form of a computer system within which instructions 616 (e.g., software, a program, an application, an applet, an app, or other executable code) may be executed for causing the machine 600 to perform any one or more of the methodologies discussed herein. The instructions 616 transform the general-purpose, unprogrammed machine into a specific machine that is programmed to perform the functions described and illustrated in the described manner. In alternative embodiments, the machine 600 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 600 may operate in the capacity of a server machine or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 600 may include, but is not limited to, a server computer, a client computer, a PC, a tablet computer, a laptop computer, a netbook, a Personal Digital Assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a network appliance, a network router, network switch, network bridge, or any machine capable of executing the instructions 616 (which specify actions to be taken by the machine 600) in sequence or otherwise. Further, while only a single machine 600 is illustrated, the term "machine" shall also be taken to include a collection of machines 600 that individually or jointly execute the instructions 616 to perform any one or more of the methodologies discussed herein.

The machine 600 may include a processor 610, memory/storage 630, and I/O components 650, which may be configured to communicate with one another, such as via a bus 602. In an example embodiment, the processor 610 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 612 and a processor 614 that may execute instructions 616. The term "processor" is intended to include multi-core processors, which may include two or more independent processors (sometimes referred to as "cores") that may execute instructions concurrently. Although fig. 6 illustrates multiple processors 610, the machine 600 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.

Memory/storage 630 may include a memory 632, such as a main memory or other memory storage, and a storage unit 636, both memory 632 and storage unit 636 being accessible to processor 610, such as via bus 602. The storage unit 636 and the memory 632 store the instructions 616 embodying any one or more of the methodologies or functions described herein. The instructions 616 may also reside, completely or partially, within the memory 632, within the storage unit 636, within at least one of the processors 610 (e.g., within a cache memory of the processor), or any suitable combination thereof during execution of the instructions 616 by the machine 600. Thus, the memory 632, the storage unit 636, and the memory of the processor 610 are examples of machine-readable media.

As used herein, a "machine-readable medium" means a device capable of storing instructions (e.g., instructions 616) and data either temporarily or permanently, and may include, but is not limited to, Random Access Memory (RAM), Read Only Memory (ROM), cache memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., erasable programmable read-only memory (EEPROM)), and/or any suitable combination thereof. The term "machine-readable medium" shall be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that are capable of storing instructions 616. The term "machine-readable medium" shall also be taken to include any medium, or combination of media, that is capable of storing instructions (e.g., instructions 616) for execution by a machine (e.g., machine 600), such that the instructions, when executed by one or more processors of the machine (e.g., processor 610), cause the machine to perform any one or more of the methodologies described herein. Thus, "machine-readable medium" refers to a single storage apparatus or device, as well as a "cloud-based" storage system or storage network that includes multiple storage apparatuses or devices. The term "machine-readable medium" does not include a signal per se.

The I/O components 650 can include a wide variety of components to receive input, provide output, generate output, transmit information, exchange information, capture measurements, and so forth. The particular I/O components 650 included in a particular machine will depend on the type of machine. For example, a portable machine such as a mobile phone would likely include a touch input device or other such input mechanism, whereas a headless server machine would not likely include such a touch input device. It is to be appreciated that the I/O components 650 can include many other components not shown in FIG. 6. The I/O components 650 are grouped by function only to simplify the following discussion, and the grouping is not limiting. In various example embodiments, the I/O components 650 may include output components 652 and input components 654.

The output components 652 may include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), acoustic components (e.g., speakers), tactile components (e.g., a vibrating motor, a resistance mechanism), other signal generators, and so forth.

The input components 654 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an electro-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen or other tactile input components that provide the location and/or force of a touch or touch gesture), audio input components (e.g., a microphone), and so forth.

In other example embodiments, the I/O component 650 may include a biometric component 656, a motion component 658, an environmental component 660, or a location component 662 among numerous other components. For example, biometric component 656 may include components for detecting expressions (e.g., hand expressions, facial expressions, voice expressions, body gestures, or eye tracking), measuring biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), measuring exercise-related metrics (e.g., distance traveled, speed traveled, or time spent exercising), identifying a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and so forth. The motion components 658 may include acceleration sensor components (e.g., accelerometers), gravity sensor components, rotation sensor components (e.g., gyroscopes), and so forth.

Environmental components 660 may include, for example, lighting sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect harmful gas concentrations to ensure safety or to measure pollutants in the atmosphere), or other components that may provide an indication, measurement, or signal corresponding to the surrounding physical environment.

The position components 662 may include location sensor components (e.g., Global Positioning System (GPS) receiver components), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.

Communication may be accomplished using a wide variety of techniques. The I/O components 650 may include a communication component 664 operable to couple the machine 600 to a network 680 or a device 660 via a coupling 682 and a coupling 662, respectively. For example, the communication component 664 may include a network interface component or other suitable device that interfaces with the network 680. In further examples, the communication component 664 can include a wired communication component, a wireless communication component, a cellular communication component, a Near Field Communication (NFC) component, a Bluetooth component (e.g., Bluetooth Low Energy), a Wi-Fi component, and other communication components that provide communication via other forms. Device 660 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB).

Moreover, the communication component 664 can detect identifiers or include components operable to detect identifiers. For example, the communication component 664 may include a Radio Frequency Identification (RFID) tag reader component, an NFC smart tag detection component, an optical reader component, or an acoustic detection component (e.g., a microphone for identifying tagged audio signals). In addition, a variety of information can be derived via the communication component 664, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi signal triangulation, location via detection of an NFC beacon signal that may indicate a particular location, and so forth.

In various example embodiments, one or more portions of the network 680 may be an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a WAN, a wireless WAN (WWAN), a Metropolitan Area Network (MAN), the Internet, a portion of the Public Switched Telephone Network (PSTN), a Plain Old Telephone Service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi network, another type of network, or a combination of two or more such networks. For example, the network 680 or a portion of the network 680 may include a wireless or cellular network, and the coupling 682 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 682 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, other standards defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.

The instructions 616 may be transmitted or received over a network 680 using a transmission medium via a network interface device (e.g., a network interface component included in the communications component 664) and utilizing any of a number of well-known transmission protocols (e.g., HTTP). Likewise, the instructions 616 may be transmitted or received to the device 660 using a transmission medium via a coupling 662 (e.g., a peer-to-peer coupling). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions 616 for execution by the machine 600, and the term "transmission medium" shall be taken to include digital or analog communications signals or other intangible medium to facilitate communication of such software.
