Personalized digital treatment method and device

Document No.: 473714  Publication date: 2021-12-31

Note: This technology, "Personalized digital treatment method and device," was designed and created by Brent Vaughan, Sherif Khalil Taraman, and Abdul Halim Abbas on 2020-03-20. Its main content includes: The platforms, systems, devices, methods, and media disclosed herein can assess one or more developmental disorders of a subject and provide enhanced digital treatment. The platforms, systems, devices, methods, and media disclosed herein may be configured to improve the one or more developmental disorders based on digital feedback.

1. A device for evaluating and providing treatment for an individual for a behavioral disorder, developmental delay, or neural injury, the device comprising:

a processor;

a non-transitory computer readable medium storing a computer program configured to cause the processor to:

(a) receive an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury;

(b) determine, using a trained classifier module of the computer program that is trained using data from a plurality of individuals having the behavioral disorder, the developmental delay, or the neural injury, that the individual exhibits an indication of the behavioral disorder, the developmental delay, or the neural injury;

(c) determine, using a machine learning model generated by the computer program, that the behavioral disorder, the developmental delay, or the neural injury of which the individual exhibits the indication will be ameliorated by digital therapy that promotes social reciprocity; and

(d) provide a digital treatment that promotes social interaction.

2. The device of claim 1, wherein the machine learning model determines a degree of improvement to be achieved by the digital treatment.

3. The device of claim 1, wherein the processor is configured with further instructions to provide the digital treatment to the individual upon determining that the behavioral disorder, the developmental delay, or the neural injury will be improved by the digital treatment.

4. The device of claim 3, wherein the digital treatment comprises an augmented reality experience.

5. The device of claim 3, wherein the digital treatment comprises a virtual reality experience.

6. The device of claim 4 or 5, wherein the digital treatment is provided by a mobile computing device.

7. The device of claim 6, wherein the mobile computing device comprises a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

8. The device of claim 6, wherein the processor is configured with further instructions to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

9. The device of claim 8, wherein the processor is configured with further instructions to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

10. The device of claim 5, wherein the virtual reality experience includes a displayed avatar or character, and wherein the processor is configured with further instructions to determine an emotion expressed by the avatar or character within the virtual reality experience.

11. The device of claim 9 or 10, wherein a description of the emotion is provided to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

12. The device of claim 9, wherein the image analysis module comprises a facial recognition module to detect the person's face within the video or image.

13. The device of claim 12, wherein the image analysis module comprises a classifier that uses machine learning training to classify the face as exhibiting the emotion.

14. The device of claim 7, wherein the computing device comprises a microphone configured to capture audio from the augmented reality experience.

15. The device of claim 14, wherein the processor is configured with further instructions to classify sound from the microphone as being associated with emotion.

16. The device of claim 15, wherein the processor is configured with further instructions to provide the individual with instructions to engage in an activity pattern with the digital treatment.

17. The device of claim 16, wherein the activity pattern comprises emotion-inspired activity, emotion recognition activity, or unstructured play.

18. The device of claim 17, wherein a therapeutic agent is provided to the individual with the digital treatment.

19. The device of claim 18, wherein the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

20. The device of claim 1, wherein the device is a wearable device.

21. A computer-implemented method of treating an individual for a behavioral disorder, developmental delay, or neural injury using digital therapy, the method comprising:

(a) receiving an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury;

(b) determining that the individual has evidence of having the behavioral disorder, the developmental delay, or the neural injury using a trained classifier;

(c) determining, using a machine learning model, that the behavioral disorder, the developmental delay, or the neural injury that the individual has evidence of will be improved by a digital therapy configured to promote social reciprocity.

22. The method of claim 21, wherein the machine learning model determines a degree of improvement to be achieved by the digital treatment.

23. The method of claim 21, comprising providing the digital treatment to the individual when the behavioral disorder, the developmental delay, or the neural injury is determined to be autism or autism spectrum disorder.

24. The method of claim 23, wherein the digital treatment comprises an augmented reality experience.

25. The method of claim 23, wherein the digital treatment comprises a virtual reality experience.

26. The method of claim 24 or 25, wherein the digital treatment is provided by a mobile computing device.

27. The method of claim 26, wherein the mobile computing device comprises a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

28. The method of claim 26, comprising obtaining, using a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

29. The method of claim 28, comprising analyzing the video or the image using an image analysis module to determine an emotion associated with the person.

30. The method of claim 25, wherein the virtual reality experience includes a displayed avatar or character, and the method further comprises determining an emotion expressed by the avatar or character within the virtual reality experience.

31. The method of claim 29 or 30, wherein a description of the emotion is provided to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

32. The method of claim 29, wherein the image analysis module comprises a facial recognition module for detecting the person's face within the video or image.

33. The method of claim 32, wherein the image analysis module comprises a classifier that uses machine learning training to classify the face as exhibiting the emotion.

34. The method of claim 27, wherein the computing device comprises a microphone configured to capture audio from the augmented reality experience.

35. The method of claim 34, comprising classifying sound from the microphone as being associated with emotion.

36. The method of claim 35, further comprising providing the individual with instructions to engage in an activity pattern with the digital treatment.

37. The method of claim 36, wherein the activity pattern comprises emotion-inspired activity, emotion recognition activity, or unstructured play.

38. The method of claim 23, comprising providing a therapeutic agent with the digital treatment.

39. The method of claim 38, wherein the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

40. The method of claim 21, wherein the digital treatment is configured to promote social reciprocity of the individual.

Background

Many people suffer from cognitive disorders, developmental delays, and neurological impairment. These conditions are difficult to diagnose and treat using conventional diagnostic and therapeutic methods.

Disclosure of Invention

Described herein are platforms, systems, devices, methods, and media for diagnosing and treating individuals with one or more diagnoses from a group of related conditions comprising cognitive impairment, developmental delay, and neural injury.

Non-limiting examples of cognitive disorders and developmental delays include autism, autism spectrum disorders, attention deficit hyperactivity disorder, and speech and learning disorders. Non-limiting examples of neural injury include cerebral palsy and neurodegenerative diseases. These groups of conditions, spanning cognitive disorders, developmental delays, and neural injury, are related in the sense that an individual may exhibit symptoms or behaviors classified under more than one of these groups, and an individual often has more than one of these disorders. As a result, it is difficult to accurately distinguish between diagnoses that have multiple states along a disease spectrum (e.g., autism spectrum disorder), and it is likewise difficult to distinguish between diagnoses with overlapping symptoms (e.g., autism and ADHD).

Current methods for diagnosing and treating cognitive disorders, developmental delays, and neurological conditions are limited by the information used during diagnosis and the information available for determining therapy. For example, an individual may be given a categorical diagnosis of an autism spectrum disorder and then provided with a universal treatment regimen based on that diagnosis. When the appropriate treatment is being determined, information relevant to a particular impairment, such as the degree to which the individual can identify emotional cues from facial expressions, may be unavailable.

Accordingly, disclosed herein are platforms, systems, devices, methods, and media that provide a technical solution to this long-standing problem by incorporating diagnostic data into the design of a therapy. The diagnosis or assessment process may be combined with a treatment process that draws on the multidimensional space generated during assessment or diagnosis, with the goal of customizing the treatment on a case-by-case basis rather than assigning the patient to one of several classification buckets and then delivering a generic treatment, often through a different healthcare provider.

In some cases, a single user account is provided that contains both diagnostic and therapeutic information, linking the user information to both processes. This integrated approach helps ensure that no potentially relevant information is missed when making a diagnosis or determining an appropriate treatment regimen. By connecting the diagnosis and the therapy through the same platform or process, a network and/or synergistic effect may be achieved. For example, the therapy may be tailored using the specific dimensions associated with emotion recognition that were used to determine that a subject would benefit from emotion recognition therapy delivered through an augmented reality tool, or even from specific activities with that tool that are predicted to be more effective than other, similar activities.

Internal diagnostic dimensions calculated from the input data during the diagnostic procedure may be preserved and then transferred into the therapeutic procedure to identify the optimal therapy. Thus, the patient's position within the multidimensional space generated by the diagnostic process (the dimensions being non-linear combinations of input features) may be analyzed by the therapy model to determine or identify one or more specific therapies predicted to provide improved therapeutic effects.

Thus, the digital treatment can be customized on a case-by-case basis based on the multidimensional feature set calculated during application of a digital diagnosis or assessment of the same subject. This approach provides a unique ability to apply precise digital treatments and is more effective than conventional approaches, in which therapy planning is based on a categorical diagnosis rather than a refined understanding of how a particular disorder manifests in a particular individual's situation.
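To make the preceding two paragraphs concrete, the following is a minimal Python sketch, not the patented implementation, of handing a multidimensional diagnostic representation to a separate therapy model that predicts the degree of improvement expected from a digital therapy. The class names, the use of scikit-learn gradient boosting, and the choice of per-stage tree outputs as the diagnostic "dimensions" are illustrative assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor


class DiagnosticModel:
    """Maps assessment inputs to a diagnostic label and a multidimensional embedding."""

    def __init__(self):
        self.classifier = GradientBoostingClassifier()

    def fit(self, X, y):
        self.classifier.fit(X, y)
        return self

    def embed(self, X):
        # One nonlinear combination of the input features per boosting stage; any
        # internal representation of the diagnostic model could serve as the dimensions.
        return np.column_stack(
            [stage[0].predict(X) for stage in self.classifier.estimators_])


class TherapyBenefitModel:
    """Predicts the degree of improvement expected from a candidate digital therapy."""

    def __init__(self):
        self.regressor = GradientBoostingRegressor()

    def fit(self, diagnostic_embeddings, observed_improvement):
        self.regressor.fit(diagnostic_embeddings, observed_improvement)
        return self

    def predicted_improvement(self, diagnostic_embedding):
        return self.regressor.predict(diagnostic_embedding.reshape(1, -1))[0]

In this sketch, the diagnostic classifier is trained on labeled assessment data, embed preserves the internal diagnostic dimensions, and the therapy model is trained on historical pairs of those embeddings and observed responses to the digital therapy.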

The methods and apparatus disclosed herein are configured to determine a cognitive function attribute, such as, for example, the developmental progress of a subject, in a clinical or non-clinical setting. For example, the described methods and devices may identify a subject as having advanced development in one or more developmental areas, or as having delayed development or being at risk of having one or more developmental disorders.

The disclosed methods and apparatus may determine the developmental progress of a subject by evaluating a plurality of characteristics or features of the subject based on a rating model, wherein the rating model may be generated from a large dataset of a population of related subjects using a machine learning method. The methods and apparatus disclosed herein include improved logical structures and processes for diagnosing a subject with one of a plurality of disorders using one or more machine learning models.

Identifying treatments for cognitive function attributes of a subject (including, for example, developmental disorders) can present daunting technical problems in terms of accuracy and efficiency. Many known methods for identifying such attributes or disorders are time- and resource-intensive, requiring the subject to answer a large number of questions or undergo lengthy observation under the supervision of qualified clinicians, whose number and availability may be limited depending on the subject's geographic location.

In addition, the accuracy and consistency of many known methods for identifying and treating behavioral, neurological, or mental health conditions or disorders are less than ideal because the conditions within the related categories of behavioral disorders, developmental delays, and neural injuries are interrelated. Furthermore, many subjects may have two or more related disorders or conditions. If each test is designed to diagnose or identify only a single disorder or condition, a subject presenting with multiple disorders may need to undergo multiple tests. Evaluating a subject using multiple diagnostic tests can be lengthy, expensive, inconvenient, and difficult to schedule. It would therefore be desirable to provide a method of testing a subject using a single diagnostic test that is capable of identifying or diagnosing a plurality of related disorders or conditions with sufficient sensitivity and specificity.

Described herein is a technical solution to such technical problem, wherein the described technical solution improves both the accuracy and the efficiency of existing methods. Such technical solutions reduce the time and resources required to manage methods for identifying and treating cognitive function attributes (such as behavioral, neurological or mental health conditions or disorders), and improve the accuracy and consistency of identification results across subjects.

Further, disclosed herein are methods and treatments that can be applied to a subject to improve cognitive function in subjects with advanced, normal, or reduced cognitive function. In view of the above, there is a need for improved methods and systems for diagnosing and identifying subjects at risk for specific cognitive function attributes, such as developmental disorders, and for providing improved digital treatment. Ideally, such methods and devices would require fewer questions and less time, would determine various cognitive function attributes such as behavioral, neurological, or mental health conditions or disorders, and would provide clinically acceptable sensitivity and specificity in a clinical or non-clinical setting that can be used to monitor and adjust treatment efficacy. In addition, the improved digital treatment may provide a customized treatment plan to the patient, receive updated diagnostic data in response to the customized treatment plan to determine progress, and update the treatment plan accordingly. Ideally, these methods and devices could also be used to determine developmental progress in a subject and to provide treatment that promotes developmental progress.

The methods and devices disclosed herein are capable of diagnosing or identifying a subject as being at risk of having one or more cognitive function attributes among a plurality of developmental disorders, for example a subject at risk of having one or more developmental disorders, in a clinical or non-clinical setting, with fewer questions, in a reduced amount of time, and with clinically acceptable sensitivity and specificity in a clinical setting. The processor may be configured with instructions to identify the most predictive next question, so that a person can be diagnosed or identified as at risk using fewer questions. Identifying the most predictive next question in response to multiple answers has the advantage of improving sensitivity and specificity while asking fewer questions. The methods and devices disclosed herein may be configured to evaluate a subject for a plurality of related developmental disorders using a single test, and to diagnose or determine that the subject is at risk for one or more of the plurality of developmental disorders using that single test. Reducing the number of questions presented is particularly helpful when the subject presents with several possible developmental disorders. Evaluating a subject for multiple possible disorders using only a single test can greatly reduce the length and cost of the evaluation procedure. The methods and devices disclosed herein can diagnose or identify a subject as being at risk of having a single developmental disorder among a plurality of possible developmental disorders that may have overlapping symptoms.

Although the most predictive next question can be determined in a variety of ways, in many cases it is determined in response to a plurality of answers to previous questions, which may include a previous most predictive next question. The most predictive next question may be determined statistically, by evaluating a set of candidate questions. In many cases, each candidate question is associated with a relevance, and the relevance of a question may be determined from the combined feature importance of each of its possible answers.
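As one illustrative reading of the preceding paragraph, the Python sketch below ranks candidate next questions by summing a trained classifier's feature importances over the one-hot features that encode each question's possible answers. The 'Q12=sometimes' feature-naming convention and the presence of a feature_importances_ attribute on the classifier are assumptions for illustration.

from collections import defaultdict

def most_predictive_next_question(classifier, feature_names, answered_questions):
    """feature_names are one-hot labels such as 'Q12=sometimes'; the classifier is
    assumed to expose one importance value per encoded feature."""
    combined = defaultdict(float)
    for name, importance in zip(feature_names, classifier.feature_importances_):
        question_id, _, _answer = name.partition("=")
        if question_id not in answered_questions:
            combined[question_id] += importance  # combine over all possible answers
    # The unanswered question whose possible answers carry the most importance is asked next.
    return max(combined, key=combined.get) if combined else None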

The methods and devices disclosed herein can classify subjects into one of three categories: having one or more developmental conditions, developing normally or typically, or indeterminate (i.e., additional assessment is required to determine whether the subject has any developmental condition). The developmental condition may be a developmental disorder or a developmental progression. Note that the methods and apparatus disclosed herein are not limited to developmental conditions and may be applied to other cognitive function attributes, such as behavioral, neurological, or mental health conditions. The method and apparatus may initially classify a subject into one of the three categories and then continue to evaluate subjects initially classified as "indeterminate" by collecting additional information from them. Such follow-up assessment of subjects initially classified as "indeterminate" can be performed continuously through a single screening procedure (e.g., one comprising various assessment modules). Alternatively or additionally, subjects identified as belonging to the indeterminate group may be evaluated using a separate additional screening procedure and/or referred to a clinician for further evaluation.
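A minimal sketch of the three-way triage just described is shown below; the score scale and the thresholds are illustrative assumptions, and in practice the thresholds would be tuned to the desired sensitivity, specificity, and inclusion rate.

def triage(score, lower=0.3, upper=0.7):
    """Map a model score in [0, 1] to one of the three categories described above."""
    if score >= upper:
        return "one or more developmental conditions indicated"
    if score <= lower:
        return "developing normally or typically"
    return "indeterminate"  # collect additional information or refer to a clinician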

The methods and devices disclosed herein can evaluate a subject using a combination of questionnaires and video inputs, where both inputs can be mathematically integrated to optimize the sensitivity and/or specificity of classification or diagnosis of the subject. Alternatively, the methods and apparatus may be optimized for different environments (e.g., primary care versus secondary care) to account for differences in expected incidence depending on the application environment.
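The sketch below illustrates one way two input modalities could be mathematically integrated and adjusted for the expected incidence of the condition in a given application environment; the weights, prevalence values, and log-odds fusion are assumptions for illustration rather than the specific combination rule used by the platform.

import math

def combined_probability(p_questionnaire, p_video, w_q=0.6, w_v=0.4,
                         train_prevalence=0.5, setting_prevalence=0.05):
    """Fuse questionnaire and video scores in log-odds space, then shift the prior."""
    logit = lambda p: math.log(p / (1.0 - p))
    combined_logit = w_q * logit(p_questionnaire) + w_v * logit(p_video)
    # Correct for the difference between the training prevalence and the prevalence
    # expected in the deployment setting (e.g., primary care vs. secondary care).
    combined_logit += logit(setting_prevalence) - logit(train_prevalence)
    return 1.0 / (1.0 + math.exp(-combined_logit))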

The methods and apparatus disclosed herein may take into account different subject-specific dimensions, such as, for example, the age of the subject, a geographic location associated with the subject, the gender of the subject, or any other subject-specific or demographic data associated with the subject. In particular, the methods and devices disclosed herein may take these subject-specific dimensions into account when identifying a subject as being at risk for one or more cognitive function attributes (such as a developmental condition), in order to increase the sensitivity and specificity of the subject's assessment, diagnosis, and classification. For example, subjects belonging to different age groups may be evaluated using different machine-learned assessment models, each of which may be specifically tuned to identify one or more developmental conditions in subjects of a particular age group. Each age-group-specific assessment model may contain a unique set of assessment items (e.g., questions, video observations), some of which may overlap with the items of the models specific to other age groups.
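A simple illustration of routing a subject to an age-group-specific assessment model follows; the age bands and model identifiers are hypothetical.

# Assumed registry of pre-trained, age-group-specific assessment models.
ASSESSMENT_MODELS_BY_AGE = {
    (0, 3): "toddler_model",
    (4, 11): "child_model",
    (12, 17): "adolescent_model",
    (18, 120): "adult_model",
}

def select_assessment_model(age_years):
    """Return the identifier of the model tuned for the subject's age group."""
    for (low, high), model_name in ASSESSMENT_MODELS_BY_AGE.items():
        if low <= age_years <= high:
            return model_name
    raise ValueError("No assessment model registered for this age")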

In addition, the digitally personalized medical systems and methods described herein may provide digital diagnosis and digital treatment to a patient. The digitally personalized medical system may use the digitized data to assess or diagnose the patient's symptoms in a manner that informs personalized or more appropriate therapeutic interventions and improves diagnosis.

In one aspect, the digitally personalized medical system may include a digitizing device having a processor and associated software, and the digitizing device may be configured to: use data to assess and diagnose the patient; capture interaction and feedback data identifying relative levels of efficacy, compliance, and response resulting from a therapeutic intervention; and perform data analysis. Such data analysis may include artificial intelligence, including, for example, machine learning and/or statistical models, to assess user data and user profiles in order to further personalize, improve, or assess the efficacy of the therapeutic intervention.

In some cases, the system may be configured to use digital diagnostics and digital therapy. In some embodiments, the digital diagnosis and the digital treatment together comprise an apparatus or method for digitally collecting information and for processing and evaluating the provided data to improve the medical, psychological, or physiological state of the individual. The digital treatment system may apply software-based learning to evaluate user data, monitor and improve diagnoses, and provide therapeutic interventions. In some embodiments, the digital treatment is configured to improve the social reciprocity of individuals with autism or autism spectrum disorder by helping them recognize expressions of emotion in real time as they interact with people or virtual images that are expressing emotion.
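The following is a hedged sketch of such a real-time emotion-cue loop. It uses OpenCV only for camera capture and face detection; the classify_emotion and announce callables stand in for the trained emotion classifier and the on-screen or audio output channel, and none of this represents a specific product API.

import cv2  # OpenCV: camera capture and a stock Haar-cascade face detector

def emotion_feedback_loop(classify_emotion, announce):
    """classify_emotion(face_pixels) -> label; announce(text) -> None (screen or audio)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                label = classify_emotion(gray[y:y + h, x:x + w])
                announce(f"The person looks {label}")  # real-time cue to the individual
    finally:
        camera.release()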

The digitized diagnostic data in the system may include data and metadata collected from the patient, from a caregiver, or from a party unrelated to the individual being assessed. In some cases, the collected data may include monitored behaviors, observations, or judgments, and the assessment may be made by a party other than the individual. In other cases, the assessment may involve an adult performing the assessment or providing data for assessing a child or adolescent. The data and metadata may be collected actively or passively in digitized form via one or more digitizing devices, such as a mobile phone, a video capture device, an audio capture device, an activity monitor, or a wearable digital monitor.

The digital diagnosis uses data about the patient collected by the system, which may include supplemental diagnostic data captured outside the digital diagnosis, and analyzes those data with tools such as machine learning, artificial intelligence, and statistical modeling to assess or diagnose the patient's condition. The digital diagnosis may also assess changes in the patient's state or performance, directly or indirectly, via data and metadata that can be analyzed and evaluated with tools such as machine learning, artificial intelligence, and statistical modeling to improve or refine the diagnosis and the potential therapeutic interventions, and to provide feedback to the system.

Data assessment and machine learning applied to the digital diagnosis and its corresponding responses, or, in the absence of the digital diagnosis and corresponding responses, to the therapeutic intervention, can result in identifying a new diagnosis for the subject and a new treatment regimen for the patient and caregiver.

For example, the types of data collected and employed by the system may include video, audio, responses to questions or activities from patients and caregivers, and active or passive data streams from user interaction with the activities, games, or software features of the system. Such data may also include metadata from the patient's or caregiver's interaction with the system, for example, while performing recommended activities. Specific metadata examples include data about user interactions with the system's devices or mobile applications that capture aspects of the user's behavior, profile, activity, interactions with the software system, interactions with games, frequency of use, session time, selected options or features, and content or activity preferences. The data may also include data and metadata from various third-party devices, such as activity monitors, games, or interactive content.

In some embodiments, disclosed herein are personalized treatment regimens that include digital therapy, non-digital therapy, a drug, or any combination thereof. Digital treatment may include instructions, feedback, activities, or interactions provided by the system to the patient or caregiver. Examples include suggested behaviors, activities, games, or interactive sessions with the system software and/or third-party devices. Digital treatment can be delivered using various methods, including augmented reality, real-time cognitive assistance, virtual reality, or other technology-augmented behavioral therapies. In some cases, digital treatment is implemented using artificial intelligence. For example, an artificial-intelligence-driven wearable device may be used to provide behavioral intervention to improve social outcomes for children with behavioral, neurological, or mental health conditions or disorders. In some embodiments, the personalized treatment regimen is adaptive; that is, the therapy is dynamically updated or reconfigured based on feedback captured from the subject during ongoing treatment and/or additional relevant information (e.g., results from an autism assessment).
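As a sketch of how an adaptive regimen might be re-planned when new feedback or assessment results arrive, the snippet below re-ranks candidate activities with a benefit model; the TreatmentPlan fields and the benefit_model.predicted_improvement interface are hypothetical.

from dataclasses import dataclass, field

@dataclass
class TreatmentPlan:
    subject_features: dict
    candidate_activities: list
    max_parallel_activities: int = 3
    active_activities: list = field(default_factory=list)

def update_treatment_plan(plan, feedback, benefit_model):
    """Re-rank candidate activities whenever new feedback or assessment data arrive."""
    features = {**plan.subject_features, **feedback}  # fold new observations into the profile
    ranked = sorted(
        plan.candidate_activities,
        key=lambda activity: benefit_model.predicted_improvement(activity, features),
        reverse=True)
    plan.active_activities = ranked[:plan.max_parallel_activities]
    return plan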

In a further aspect, the digital treatment methods and systems disclosed herein can diagnose and treat subjects at risk for having one or more of a plurality of behavioral conditions or disorders, neurological conditions or disorders, or mental health conditions or disorders, in a clinical or non-clinical setting. The diagnosis and treatment can be accomplished with fewer questions, in a reduced amount of time, and with clinically acceptable sensitivity and specificity in a clinical setting using the methods and systems disclosed herein, and treatment recommendations can be provided. This may be helpful, for example, when a subject initiates treatment based on an incorrect diagnosis. The processor may be configured with instructions to identify the most predictive next question, or the most illustrative next symptom or observation, such that an individual may be reliably diagnosed or identified as at risk using only an optimal number of questions or observations. An advantage of identifying the most predictive next question, or the most illustrative next symptom or observation, in response to multiple answers is that treatment can be provided with fewer questions without reducing the sensitivity and specificity of the diagnostic process. In some cases, additional processors may be provided to predict or collect information about the next most relevant symptom. The methods and devices disclosed herein may be configured to evaluate and treat a subject for a plurality of related disorders using a single adaptive test, and to diagnose or determine that the subject is at risk for one or more of the plurality of disorders using that single test. Reducing the number of questions or symptoms presented, or measures used, is particularly helpful when the subject has multiple possible disorders that can be treated. Using only a single adaptive test to assess the subject's multiple possible disorders can greatly reduce the length and cost of the assessment procedure and improve treatment. The methods and devices disclosed herein can diagnose and treat a subject at risk for a single disorder among a plurality of possible disorders that may have overlapping symptoms.

The most predictive next question, the most illustrative next symptom, or the most illustrative next observation for the digital therapeutic treatment can be determined in a variety of ways. In many cases, the most predictive next question, symptom, or observation may be determined in response to a plurality of answers to previous questions or observations, which may include a previous most predictive next question, symptom, or observation, in order to evaluate the treatment and provide a closed-loop assessment of the subject. The most predictive next question, symptom, or observation may be determined statistically, by evaluating a set of candidates. In many cases, each candidate question or observation is associated with a relevance, and that relevance may be determined from the combined feature importance of each possible answer to the question or of each possible observation. Once treatment has begun, the questions, symptoms, or observations may be repeated, or different questions, symptoms, or observations may be used, to monitor progress more accurately and to suggest changes to the digital treatment. The relevance of the next question, symptom, or observation may also depend on how much the different possible answer choices or observations could change the final assessment. For example, a question whose answer choices can substantially affect the final assessment may be considered more relevant than a question whose answer choices only help to discern differences in severity of a particular condition or are otherwise less consequential.
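One simple way to formalize the relevance weighting described above is to measure, for each candidate question, how much its possible answers could shift the current assessment; the sketch below assumes a model.predict_score call and dictionaries of answers, both of which are hypothetical interfaces.

def expected_assessment_shift(model, current_answers, question, possible_answers):
    """Average absolute change in the assessment score over a question's possible answers."""
    baseline = model.predict_score(current_answers)  # assumed scoring interface
    shifts = []
    for answer in possible_answers:
        hypothetical = dict(current_answers, **{question: answer})
        shifts.append(abs(model.predict_score(hypothetical) - baseline))
    return sum(shifts) / len(shifts)  # a larger mean shift suggests a more relevant question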

Exemplary devices

Described herein is a platform for evaluating and providing treatment for an individual for a behavioral disorder, developmental delay, or neural injury, the platform comprising a computing device comprising: a processor; and a non-transitory computer readable medium storing a computer program configured to cause the processor to: (a) receive an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury; (b) determine, using a trained classifier module of the computer program that is trained using data from a plurality of individuals having the behavioral disorder, the developmental delay, or the neural injury, that the individual has evidence of the presence of the behavioral disorder, the developmental delay, or the neural injury; (c) determine, using a machine learning model generated by the computer program, that the behavioral disorder, the developmental delay, or the neural injury of which the individual has evidence will be ameliorated by digital therapy that promotes social reciprocity; and (d) provide a digital treatment that promotes social interaction.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the behavioral disorder, the developmental delay, or the neural injury is autism or an autism spectrum disorder.

In some embodiments, the processor is configured with further instructions to provide the digital treatment to the individual upon determining that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the processor is configured with further instructions to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

In some embodiments, the processor is configured with further instructions to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the processor is configured with further instructions to determine an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output of the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the processor is configured with further instructions to classify sound from the microphone as being associated with emotion.

In some embodiments, the processor is configured with further instructions to provide the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, a therapeutic agent is provided to the individual along with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the device is a wearable device.

In some implementations, the platform includes a video analyst portal that allows a video analyst to review one or more videos captured and uploaded using the computing device and upload a portion of the input.

In some implementations, the platform includes a healthcare provider portal that allows a healthcare provider to upload a portion of the input.

Another exemplary apparatus

In some aspects, disclosed herein is a device for evaluating and providing treatment for an individual for a behavioral disorder, developmental delay, or neural injury, the device comprising: a processor; and a non-transitory computer readable medium storing a computer program configured to cause the processor to: (a) receive an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury; (b) determine, using a trained classifier module of the computer program that is trained using data from a plurality of individuals having the behavioral disorder, the developmental delay, or the neural injury, that the individual has evidence of the presence of the behavioral disorder, the developmental delay, or the neural injury; (c) determine, using a machine learning model generated by the computer program, that the behavioral disorder, the developmental delay, or the neural injury of which the individual has evidence will be ameliorated by digital therapy that promotes social reciprocity; and (d) provide a digital treatment that promotes social interaction.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the behavioral disorder, the developmental delay, or the neural injury is autism or an autism spectrum disorder.

In some embodiments, the processor is configured with further instructions to provide the digital treatment to the individual upon determining that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device. In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the processor is configured with further instructions to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

In some embodiments, the processor is configured with further instructions to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the processor is configured with further instructions to determine an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the processor is configured with further instructions to classify sound from the microphone as being associated with emotion.

In some embodiments, the processor is configured with further instructions to provide the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, a therapeutic agent is provided to the individual along with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the device is a wearable device.

Exemplary method

In some aspects, disclosed herein is a computer-implemented method of treating an individual for a behavioral disorder, developmental delay, or neural injury using digital therapy, the method comprising: (a) receiving an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury; (b) determining, using a trained classifier, that the individual has evidence of having the behavioral disorder, the developmental delay, or the neural injury; and (c) determining, using a machine learning model, that the behavioral disorder, the developmental delay, or the neural injury that the individual has evidence of will be improved by a digital therapy configured to promote social reciprocity.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the method comprises providing the digital treatment to the individual when the behavioral disorder, the developmental delay, or the neural injury is determined to be autism or autism spectrum disorder.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the method includes obtaining, with a camera of the mobile computing device, a video or image of a person interacting with the individual in the augmented reality experience.

In some embodiments, the method includes analyzing the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the method further includes determining an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the method includes classifying sound from the microphone as being associated with emotion.

In some embodiments, the method further comprises providing the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, the method comprises providing a therapeutic agent with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the digital treatment is configured to promote social reciprocity of the individual.

Exemplary Medium

In some aspects, disclosed herein is a non-transitory computer readable medium storing a computer program configured to cause a processor to: (a) receive an input from an individual related to a behavioral disorder, developmental delay, or neural injury; (b) determine, using a trained classifier module of the computer program that is trained using data from a plurality of individuals having the behavioral disorder, the developmental delay, or the neural injury, that the individual has evidence of the presence of the behavioral disorder, the developmental delay, or the neural injury; (c) determine, using a machine learning model generated by the computer program, that the behavioral disorder, the developmental delay, or the neural injury of which the individual has evidence will be ameliorated by digital therapy that promotes social reciprocity; and (d) provide a digital treatment that promotes social interaction.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the behavioral disorder, the developmental delay, or the neural injury is autism or an autism spectrum disorder.

In some embodiments, the processor is configured with further instructions to provide the digital treatment to the individual when it is determined that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the computer-readable medium is configured with further instructions to cause the processor to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

In some embodiments, the computer readable medium is configured with further instructions to cause the processor to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the computer program further causes the processor to determine an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the computer readable medium is configured with further instructions to cause the processor to classify sound from the microphone as being associated with emotion.

In some embodiments, the computer-readable medium is configured with further instructions to cause the processor to provide the individual with instructions with the digital treatment to engage in an activity pattern.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, a therapeutic agent is provided to the individual along with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the device is a wearable device.

Another exemplary method

In some aspects, disclosed herein is a computer-implemented method of treating an individual for a behavioral disorder, developmental delay, or neural injury using digital therapy, the method comprising: (a) receiving an input from the individual related to the behavioral disorder, the developmental delay, or the neural injury; (b) determining, using a trained classifier, that the individual has evidence of having the behavioral disorder, the developmental delay, or the neural injury; and (c) determining, using a machine learning model, that the behavioral disorder, the developmental delay, or the neural injury that the individual has evidence of will be improved by a digital therapy configured to promote social reciprocity.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the method comprises providing the digital treatment to the individual when the behavioral disorder, the developmental delay, or the neural injury is determined to be autism or autism spectrum disorder.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the method includes obtaining, with a camera of the mobile computing device, a video or image of a person interacting with the individual in the augmented reality experience.

In some embodiments, the method includes analyzing the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the method further includes determining an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the method includes classifying sound from the microphone as being associated with emotion.

In some embodiments, the method further comprises providing the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, the method comprises providing a therapeutic agent with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the digital treatment is configured to promote social reciprocity of the individual.

Another exemplary apparatus

In some aspects, disclosed herein is a device for providing digital treatment to an individual for a behavioral disorder, developmental delay, or neural injury, the device comprising: (a) a display; and (b) a processor configured with instructions to: (i) receive input from the individual related to a plurality of related behavioral disorders, developmental delays, and neural injuries; (ii) determine, using a rating classifier, that the individual has a diagnosis of autism or autism spectrum disorder based on the input; and (iii) determine, using a machine learning model, that the autism or autism spectrum disorder in the individual will be improved by the digital treatment.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the behavioral disorder, the developmental delay, or the neural injury is autism or an autism spectrum disorder.

In some embodiments, the processor is configured with further instructions to provide the digital treatment to the individual upon determining that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the processor is configured with further instructions to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

In some embodiments, the processor is configured with further instructions to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the processor is configured with further instructions to determine an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the analysis module comprises a face recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier that uses machine learning training to classify the face as exhibiting the emotion.

In some implementations, the computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the processor is configured with further instructions to classify sound from the microphone as being associated with emotion.

In some embodiments, the processor is configured with further instructions to provide the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes emotion-inspired activity, emotion recognition activity, or unstructured play.

In some embodiments, a therapeutic agent is provided to the individual along with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the digital treatment is configured to promote social reciprocity of the individual.

Another exemplary method

In one aspect, a method of providing an assessment of at least one cognitive function attribute of a subject may comprise: storing, on a computer system having a processor and a memory, a computer program for execution by the processor, the computer program comprising instructions for: receiving data of the subject related to the cognitive function attribute; evaluating the data of the subject using a machine learning model; and providing an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to the data. The machine learning model may include a selected subset of a plurality of machine learning assessment models.

The classification determination may comprise the presence of the cognitive function attribute or the absence of the cognitive function attribute. Receiving data from the subject may include receiving an initial data set. Evaluating the data from the subject may include evaluating the initial data set using a preliminary subset of adjustable machine learning assessment models selected from a plurality of adjustable machine learning assessment models to output a numerical score for each model of the preliminary subset.

The method may further comprise providing a categorical determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject based on the analysis of the initial dataset, wherein a ratio of uncertainty determination to categorical determination may be adjusted. The method may further comprise: determining whether to apply an additional assessment model selected from the plurality of adjustable machine learning assessment models if the analysis of the initial data set yields an uncertainty determination; receiving an additional data set from the subject based on a result of the decision; evaluating the additional data set from the subject using the additional assessment model based on the outcome of the decision to output a numerical score for each of the additional assessment models; and providing a classification determination or an uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject based on the analysis of the additional dataset from the subject using the additional assessment model, wherein a ratio of uncertainty determination to classification determination may be adjusted.

The method may further comprise: combining the numerical scores of each of the preliminary subsets of the assessment models to generate a combined preliminary output score; and mapping the combined preliminary output score to a classification determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject, wherein a ratio of uncertainty determination to classification determination may be adjusted.

The method may further comprise employing rule-based logic or a combination technique for combining the numerical scores of each of the preliminary subset of the assessment models and for combining the numerical scores of each of the additional assessment models. The ratio of the uncertainty determination to the classification determination may be adjusted by specifying an inclusion rate. Classification determinations regarding the presence or absence of a developmental condition in the subject can be assessed by providing sensitivity and specificity metrics. The inclusion rate may be no less than 70%, and the classification determination may yield a sensitivity of at least 70% with a corresponding specificity of at least 70%. The inclusion rate may be no less than 70%, and the classification determination may yield a sensitivity of at least 80% with a corresponding specificity of at least 80%. The inclusion rate may be no less than 70%, and the classification determination may yield a sensitivity of at least 90% with a corresponding specificity of at least 90%.
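
The score-combination and mapping step can be sketched as follows. This is a minimal Python illustration: the preliminary model scores are combined by a simple mean, and the uncertainty band is chosen from held-out combined scores so that roughly the target fraction of subjects receives a categorical determination. The mean combination and quantile-based thresholds are illustrative assumptions, not the specific rule-based logic of this disclosure.

    import numpy as np

    def combine_scores(scores):
        # Combine numerical outputs of the preliminary assessment models;
        # rule-based logic or another combination technique could be substituted here.
        return float(np.mean(scores))

    def fit_uncertainty_band(held_out_combined_scores, target_inclusion=0.70):
        # Choose lower/upper cutoffs so that about `target_inclusion` of subjects
        # fall outside the uncertainty band and receive a categorical determination.
        margin = (1.0 - target_inclusion) / 2.0
        lower = float(np.quantile(held_out_combined_scores, 0.5 - margin))
        upper = float(np.quantile(held_out_combined_scores, 0.5 + margin))
        return lower, upper

    def determine(combined_score, lower, upper):
        if combined_score >= upper:
            return "present"      # classification determination: attribute present
        if combined_score <= lower:
            return "absent"       # classification determination: attribute absent
        return "uncertain"        # triggers the additional assessment models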

The data from the subject may include at least one of a sample from a diagnostic tool and demographic data, wherein the diagnostic tool includes a set of diagnostic questions and corresponding selectable answers.

The method may further comprise: training a plurality of adjustable machine learning assessment models using data from a plurality of subjects previously evaluated for the developmental condition, wherein training comprises: pre-processing the data from the plurality of subjects using machine learning techniques; extracting and encoding machine learning features from the pre-processed data; processing the data from the plurality of subjects to reflect an expected incidence of a cognitive function attribute of the subject in an intended application environment; selecting a subset of the processed machine learning features; evaluating performance of each of the plurality of adjustable machine learning assessment models, wherein sensitivity and specificity of each model for a predetermined inclusion rate is evaluated; and determining an optimal set of parameters for each model based on determining a benefit of using all models in the selected subset of the plurality of adjustable machine learning assessment models. Determining the optimal set of parameters for each model may include tuning parameters for each model in a different tuning parameter environment.

Processing the encoded machine learning features may include: calculating and assigning a sample weight to each sample of data, wherein each sample of data corresponds to a subject in the plurality of subjects, wherein the samples are grouped according to subject-specific dimensions, and wherein the sample weights are calculated and assigned to balance each sample group against the other sample groups to reflect an expected distribution of each dimension of the subject in an intended environment. The subject-specific dimensions may include the gender of the subject, the geographic region in which the subject lives, and the age of the subject. Extracting and encoding machine learning features from the pre-processed data may include using feature encoding techniques such as, but not limited to, one-hot encoding, severity encoding, and behavioral presence encoding. Selecting the subset of processed machine learning features may include identifying a subset of discriminative features from the processed machine learning features using a bootstrapping technique.
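
The sample-weighting and feature-encoding operations can be sketched as below, assuming a pandas data frame with hypothetical column names (sex, eye_contact, speech_delay_severity, repetitive_behavior) and an assumed expected distribution; none of these names are taken from the disclosure.

    import pandas as pd

    EXPECTED_SHARE = {"male": 0.5, "female": 0.5}  # assumed deployment distribution

    def assign_sample_weights(df, group_col="sex", expected=EXPECTED_SHARE):
        # Weight each sample so that each group's total weight matches its
        # expected share in the intended application environment.
        observed = df[group_col].value_counts(normalize=True)
        return df[group_col].map(lambda g: expected[g] / observed[g])

    def encode_features(df):
        # Illustrative encodings: one-hot for a categorical answer, ordinal
        # severity for a graded answer, and a binary behavioral-presence flag.
        one_hot = pd.get_dummies(df["eye_contact"], prefix="eye_contact")
        severity = df["speech_delay_severity"].map(
            {"none": 0, "mild": 1, "moderate": 2, "severe": 3})
        presence = (df["repetitive_behavior"] != "none").astype(int)
        return pd.concat(
            [one_hot,
             severity.rename("speech_delay_severity_encoded"),
             presence.rename("repetitive_behavior_present")],
            axis=1)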

The cognitive function attributes may include behavioral disorders and developmental progression. The classification determination provided to the subject may be selected from the group consisting of an uncertainty determination in response to the data, the presence of a plurality of cognitive function attributes, and the absence of a plurality of cognitive function attributes.

In another aspect, an apparatus for evaluating a cognitive function attribute of a subject may include a processor configured with instructions that, when executed, cause the processor to perform the above-described method.

Another exemplary apparatus

In another aspect, a mobile device for providing an assessment of at least one cognitive function attribute of a subject may comprise: a display; and a processor configured with instructions to: receive and display data of the subject related to the cognitive function attribute; and receive and display an assessment of the subject, the assessment selected from the group consisting of an uncertainty determination and a classification determination; wherein the assessment of the subject has been determined in response to data of the subject.

The classification determination may be selected from the presence of the cognitive function attribute and the absence of the cognitive function attribute. The cognitive function attribute may be determined with a sensitivity of at least 80% and a specificity of at least 80% for the presence or the absence of the cognitive function attribute, respectively. The cognitive function attribute may be determined with a sensitivity of at least 90% and a specificity of at least 90% for the presence or the absence of the cognitive function attribute, respectively. The cognitive function attribute may include a behavioral disorder, developmental delay, or neurological impairment.

Another exemplary apparatus

In another aspect, a digital therapy delivery device may comprise: one or more processors configured with software instructions providing a diagnostic module and a therapy module. The diagnostic module receives data from the subject and outputs diagnostic data of the subject, and comprises one or more classifiers constructed using machine learning or statistical modeling based on a population of subjects to determine the diagnostic data of the subject, wherein the diagnostic data comprises an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to data received from the subject. In some embodiments, the diagnostic software employs the Triton model.

The therapy module receives the diagnostic data and outputs the personal therapeutic treatment plan for the subject, and comprises one or more models constructed using machine learning or statistical modeling based on at least a portion of the population of subjects to determine and output the personal therapeutic treatment plan for the subject. The diagnostic module is configured to receive updated subject data from the subject in response to therapy of the subject and to generate updated diagnostic data from the subject, and the therapy module is configured to receive the updated diagnostic data and output an updated personal therapeutic treatment plan for the subject in response to the diagnostic data and the updated diagnostic data.

The diagnostic module may comprise a diagnostic machine learning classifier trained on the population of subjects, and the therapy module may comprise a therapeutic machine learning classifier trained on at least a portion of the population of subjects, and the diagnostic module and the therapy module may be arranged for the diagnostic module to provide feedback to the therapy module based on performance of the therapy plan. The therapeutic classifier can include instructions trained on a data set that includes a population of which the subject is not a member, and the subject can include individuals that are not members of the population. The diagnostic module can include a diagnostic classifier trained on a plurality of profiles of a population of subjects of at least 10,000 people and a therapy profile trained on the plurality of profiles of the population of subjects.

Another exemplary System

In another aspect, a system for assessing at least one cognitive function attribute of a subject may comprise: a processor configured with instructions that, when executed, cause the processor to: present a plurality of questions from a plurality of classifier chains, the plurality of classifier chains including a first chain including a social/behavioral delay classifier and a second chain including a speech and language delay classifier. The social/behavioral delay classifier may be operably coupled to an autism and attention deficit hyperactivity disorder (ADHD) classifier. The social/behavioral delay classifier may be configured to output a positive result if the subject has social/behavioral delay and a negative result if the subject does not have the social/behavioral delay. The social/behavioral delay classifier can be configured to output an uncertainty result if it cannot be determined with a specified sensitivity and specificity whether the subject has the social/behavioral delay. The social/behavioral delay classifier output may be coupled to an input of the autism and ADHD classifier, and the autism and ADHD classifier may be configured to output a positive result if the subject has autism or ADHD. The output of the autism and ADHD classifier may be coupled to an input of an autism versus ADHD classifier, and the autism versus ADHD classifier may be configured to generate a first output if the subject has autism and a second output if the subject has ADHD. The autism versus ADHD classifier may be configured to provide an uncertainty output if it cannot be determined with a specified sensitivity and specificity whether the subject has autism or ADHD. The speech and language delay classifier may be operably coupled to a mental disorder classifier. The speech and language delay classifier may be configured to output a positive result if the subject has speech and language delay and a negative output if the subject does not have the speech and language delay. The speech and language delay classifier may be configured to output an uncertainty result if it cannot be determined with a specified sensitivity and specificity whether the subject has the speech and language delay. The speech and language delay classifier output may be coupled to an input of the mental disorder classifier, and the mental disorder classifier may be configured to generate a first output if the subject has a mental disorder and a second output if the subject has the speech and language delay but does not have a mental disorder. The mental disorder classifier can be configured to provide an uncertainty output if it cannot be determined with a specified sensitivity and specificity whether the subject has the mental disorder.

The processor may be configured with instructions to present the questions of each chain in order and to skip overlapping questions. The first chain may include the social/behavioral delay classifier coupled to the autism and ADHD classifier. The second chain may include the speech and language delay classifier coupled to the mental disorder classifier. The user may pass sequentially through the first chain and the second chain.
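
The two classifier chains can be sketched as follows, assuming each classifier exposes a hypothetical predict(answers) method returning "positive", "negative", or "uncertain"; the gating logic shown (only positive results advance to the next classifier in the chain) is an illustrative reading of the arrangement described above.

    def run_social_behavioral_chain(answers, delay_clf, autism_adhd_clf, autism_vs_adhd_clf):
        # First chain: social/behavioral delay -> autism and ADHD -> autism versus ADHD.
        result = {"social_behavioral_delay": delay_clf.predict(answers)}
        if result["social_behavioral_delay"] != "positive":
            return result
        result["autism_or_adhd"] = autism_adhd_clf.predict(answers)
        if result["autism_or_adhd"] != "positive":
            return result
        result["autism_vs_adhd"] = autism_vs_adhd_clf.predict(answers)
        return result

    def run_speech_language_chain(answers, delay_clf, mental_disorder_clf):
        # Second chain: speech and language delay -> mental disorder.
        result = {"speech_language_delay": delay_clf.predict(answers)}
        if result["speech_language_delay"] == "positive":
            result["mental_disorder"] = mental_disorder_clf.predict(answers)
        return result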

Another exemplary method

In another aspect, a method for administering a drug to a subject can comprise: detecting a neurological disorder of a subject with a machine learning classifier; and administering a drug to the subject in response to the detected neurological disorder.

Amphetamine may be administered in a dose of 5 mg to 50 mg. Dextroamphetamine may be administered in a dose of 5 mg to 60 mg. Methylphenidate may be administered in a dose of 5 mg to 60 mg. Methamphetamine may be administered in a dose of 5 mg to 25 mg. Dexmethylphenidate may be administered in a dose of 2.5 mg to 40 mg. Guanfacine may be administered in a dose of 1 mg to 10 mg. Atomoxetine may be administered in a dose of 10 mg to 100 mg. Dexamphetamine may be administered in a dose of 30 mg to 70 mg. Clonidine may be administered in a dose of 0.1 mg to 0.5 mg. Modafinil may be administered in a dose of 100 mg to 500 mg. Risperidone may be administered in a dose of 0.5 mg to 20 mg. Quetiapine may be administered in a dose of 25 mg to 1000 mg. Buspirone may be administered in a dose of 5 mg to 60 mg. Sertraline may be administered in a dose of up to 200 mg. Escitalopram may be administered in a dose of up to 40 mg. Citalopram may be administered in a dose of up to 40 mg. Fluoxetine may be administered in a dose of 40 mg to 80 mg. Paroxetine may be administered in a dose of 40 mg to 60 mg. Venlafaxine may be administered in a dose of up to 375 mg. Clomipramine may be administered in a dose of up to 250 mg. Fluvoxamine may be administered in a dose of up to 300 mg.

The machine learning classifier may have an inclusion rate of not less than 70%. The machine learning classifier may be capable of outputting an uncertainty result.

Another exemplary method

Described herein is a computer-implemented method of evaluating an individual for a plurality of related behavioral disorders, developmental delays, and neural injury, the method comprising: receiving input from the individual related to the plurality of related behavioral disorders, developmental delay, and neural injury; determining, using a rating classifier, that the individual has a diagnosis of autism or autism spectrum disorder based on the input; and determining that the autism or autism spectrum disorder in the individual will be improved by the digital treatment using a machine learning model.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the method comprises providing the digital treatment to the individual when it is determined that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the method includes obtaining, with a camera of the mobile computing device, a video or image of a person interacting with the individual in the augmented reality experience.

In some embodiments, the method includes analyzing the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the method further includes determining an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the image analysis module comprises a facial recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier trained using machine learning to classify the face as exhibiting the emotion.

In some implementations, the mobile computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the method includes classifying sound from the microphone as being associated with an emotion.

In some embodiments, the method comprises providing the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes an emotion elicitation activity, an emotion recognition activity, or unstructured play.

In some embodiments, the method comprises providing a therapeutic agent with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the digital treatment is configured to promote social reciprocity of the individual.

Another exemplary apparatus

Described herein is a device for providing digital treatment to an individual for a behavioral disorder, developmental delay, or neurological injury, the device comprising: a display; and a processor configured with instructions to: receive input from the individual related to the behavioral disorder, developmental delay, or neurological injury; determine, using a rating classifier, that the individual has a diagnosis of autism or autism spectrum disorder based on the input; and determine, using a machine learning model, that the autism or autism spectrum disorder in the individual will be improved by the digital treatment.

In some embodiments, the machine learning model determines a degree of improvement to be achieved by the digital treatment.

In some embodiments, the processor is configured with further instructions to provide the digital treatment to the individual upon determining that the autism or the autism spectrum disorder will be ameliorated by the digital treatment.

In some embodiments, the digital treatment comprises an augmented reality experience.

In some embodiments, the digital treatment comprises a virtual reality experience.

In some embodiments, the digital treatment is provided by a mobile computing device.

In some implementations, the mobile computing device includes a smartphone, tablet computer, laptop computer, smart watch, or other wearable computing device.

In some implementations, the processor is configured with further instructions to obtain, with a camera of the mobile computing device, video or images of a person interacting with the individual in the augmented reality experience.

In some embodiments, the processor is configured with further instructions to analyze the video or the image using an image analysis module to determine an emotion associated with the person.

In some embodiments, the virtual reality experience includes a displayed avatar or character, and the processor is configured with further instructions to determine an emotion expressed by the avatar or character within the virtual reality experience.

In some implementations, a description of the emotion is displayed to the individual in real-time within the augmented reality or virtual reality experience by printing the description on a screen of the mobile computing device or by emitting the description through an audio output coupled with the mobile computing device.

In some embodiments, the image analysis module comprises a facial recognition module for detecting the face of the person within the video or image.

In some implementations, the image analysis module includes a classifier trained using machine learning to classify the face as exhibiting the emotion.

In some implementations, the mobile computing device includes a microphone configured to capture audio from the augmented reality experience.

In some embodiments, the processor is configured with further instructions to classify sound from the microphone as being associated with an emotion.

In some embodiments, the processor is configured with further instructions to provide the individual with instructions to engage in an activity pattern with the digital treatment.

In some embodiments, the activity pattern includes an emotion elicitation activity, an emotion recognition activity, or unstructured play.

In some embodiments, a therapeutic agent is provided to the individual along with the digital treatment.

In some embodiments, the therapeutic agent improves cognition in the individual when the individual is receiving the digital treatment.

In some embodiments, the digital treatment is configured to promote social reciprocity of the individual.

Incorporation by reference

All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.

Drawings

The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description, which sets forth non-limiting illustrative embodiments in which the principles of the invention are utilized, and to the accompanying drawings, of which:

fig. 1A and 1B illustrate some exemplary developmental disorders that may be assessed using an assessment program as described herein.

FIG. 2 is a schematic diagram of an exemplary data processing module for providing an assessment program as described herein.

FIG. 3 is a schematic diagram illustrating a portion of an exemplary assessment model based on a random forest classifier.

FIG. 4 is an exemplary operational flow of a prediction module as described herein.

FIG. 5 is an exemplary operational flow of a feature recommendation module as described herein.

FIG. 6 is an exemplary operational flow of a desired feature importance determination algorithm executed by the feature recommendation module described herein.

FIG. 7 illustrates a method of administering an assessment program as described herein.

FIG. 8 illustrates a computer system suitable for incorporating the methods and apparatus described herein.

Fig. 9 shows a Receiver Operating Characteristic (ROC) curve plotting sensitivity versus false positive rate for an exemplary assessment model as described herein.

FIG. 10 is a scatter diagram illustrating performance metrics of a feature recommendation module as described herein.

FIG. 11 is an exemplary operational flow of an evaluation module as described herein.

FIG. 12 is an exemplary operational flow of a model adjustment module as described herein.

FIG. 13 is another exemplary operational flow of an evaluation module as described herein.

FIG. 14 is an exemplary operational flow of the model output combining step depicted in FIG. 13.

Fig. 15 illustrates an exemplary questionnaire screening algorithm configured to provide only categorical determinations as described herein.

Fig. 16 illustrates an exemplary questionnaire screening algorithm configured to provide classification determinations and uncertainty determinations as described herein.

Fig. 17 shows a comparison of the performance of various algorithms for all samples as described herein.

Fig. 18 shows a comparison of the performance of various algorithms for samples taken from children under 4 years of age as described herein.

Fig. 19 shows a comparison of the performance of various algorithms for samples taken from children 4 years of age and over as described herein.

Fig. 20 shows the specificity of the algorithm in the 75%-85% sensitivity range for all samples as described herein.

Fig. 21 shows the specificity of the algorithm in the 75%-85% sensitivity range for children under 4 years of age as described herein.

Fig. 22 shows the specificity of the algorithm in the 75%-85% sensitivity range for children 4 years of age and older as described herein.

Fig. 23A illustrates an exemplary system diagram of a digitally personalized medicine platform.

FIG. 23B illustrates a detailed view of an exemplary diagnostic module.

Fig. 23C illustrates an exemplary therapy module diagram.

Fig. 24 illustrates an exemplary diagnosis and treatment method provided in the digital personalized medicine platform.

Fig. 25 illustrates a schematic flow chart showing the treatment of autism-related developmental delay.

Fig. 26 illustrates an overview of the data processing flow of a digitally personalized medical system comprising a diagnosis module and a therapy module configured to integrate information from multiple sources.

Fig. 27 shows a system for assessing a plurality of clinical indications of a subject.

Fig. 28 illustrates a drug that may be administered in response to a diagnosis of the platforms, systems, devices, methods, and media described herein.

FIG. 29 shows a diagram of a platform for assessing an individual as described herein.

FIG. 30 shows a non-limiting flow diagram for evaluating an individual.

FIG. 31A illustrates a login screen for assessing an individual's mobile device according to the platforms, systems, devices, methods, and media described herein.

Fig. 31B shows a display screen of the mobile device indicating completion of the user portion of the ASD evaluation.

Fig. 31C shows a display screen of a mobile device that provides instructions for capturing video of a subject suspected of having an ASD.

Fig. 31D, 31E, and 31F illustrate display screens of mobile devices prompting a user to answer questions for assessing a subject in accordance with the platforms, systems, devices, methods, and media described herein.

FIG. 32 illustrates a display screen of a video analyst portal displaying questions as part of a video analyst questionnaire in accordance with the platforms, systems, apparatuses, methods, and media described herein.

FIG. 33 illustrates a display screen of a healthcare provider portal displaying questions as part of a healthcare provider questionnaire in accordance with the platforms, systems, apparatus, methods, and media described herein.

FIG. 34 illustrates a display screen of a healthcare provider portal displaying uploaded information for individuals containing videos and completed caregiver questionnaires according to the platforms, systems, apparatus, methods, and media described herein.

Fig. 35 shows a diagram of a platform including mobile device software and server software for providing digital therapy to a subject as described herein.

Fig. 36 illustrates a diagram of an apparatus configured to provide digital therapy according to the platforms, systems, apparatuses, methods, and media described herein.

Fig. 37 illustrates an operational flow for combined digital diagnosis and digital treatment according to the platforms, systems, devices, methods, and media described herein.

FIG. 38 shows a diagram of the facial recognition module and emotion detection module performing image or video analysis to detect emotions or social cues.

Detailed Description

The terms "based on" and "responsive to" are used interchangeably with respect to this disclosure.

The term "processor" encompasses one or more of a local processor, a remote processor, or a system of processors, and combinations thereof.

The term "characteristic" is used herein to describe a property or attribute that is relevant to determining the developmental progress of a subject. For example, "characteristics" may refer to clinical characteristics (e.g., age, ability of a subject to participate in an impersonation game, etc.) associated with clinical assessment or diagnosis of a subject for one or more developmental disorders. The term "feature value" is used herein to describe the value of a corresponding feature of a particular subject. For example, a "feature value" may refer to a clinical characteristic of a subject associated with one or more developmental disorders (e.g., a feature value may be 3 if the feature is "age," or a feature value may be "various impersonation games" or "no impersonation games" if the feature is "ability of the subject to participate in the impersonation game").

As used herein, the phrases "autism" and "autism spectrum disorder" are used interchangeably.

As used herein, the phrases "Attention Deficit Disorder (ADD)" and "attention deficit/hyperactivity disorder (ADHD)" are used interchangeably.

As used herein, the term "facial expression recognition activity" refers to a therapeutic activity (e.g., in a digital therapeutic application or device) in which a child is prompted to find a person in their environment who exhibits a particular emotion and receives real-time confirmation of that emotion. A facial expression recognition activity may also be described as unstructured play. This activity provides reinforcement for recognizing changes in facial emotion, as well as training in how to distinguish between emotions.

As used herein, the phrase "social interaction" refers to social interactions and/or communications between individuals that are reciprocal back and forth. Social interactions may include verbal and non-verbal social interactions such as, for example, conversations or communication of facial expressions and/or body language. One or more elements or indicators of social reciprocity may be measured in accordance with the platforms, systems, apparatus, methods, and media disclosed herein. For example, social reciprocity may be measured using eye contact or gaze, verbal responses to social or emotional cues (e.g., in response to a parent's greeting, "hi"), non-verbal responses to social or emotional cues (e.g., smiling in response to a parent's smiling).

Described herein are methods and devices for determining developmental progression of a subject. For example, the described methods and devices may identify a subject as having advanced development in one or more developmental areas or cognitive decline in one or more cognitive functions, or as having retarded development or being at risk of having one or more developmental disorders. The disclosed methods and apparatus may determine the developmental progress of a subject by evaluating a plurality of characteristics or features of the subject based on an assessment model, which may be generated from a large dataset of a population of related subjects using machine learning methods.

Although the methods and apparatus are described herein in the context of identifying one or more developmental disorders in a subject, the methods and apparatus are well suited for determining any developmental progression of a subject. For example, the methods and apparatus may be used to identify a subject as developmentally advanced by identifying one or more developmental areas in which the subject is advanced. To identify one or more areas of advanced development, the methods and apparatus may be configured to, for example, assess one or more characteristics or features of the subject associated with advanced or innate behavior. The described methods and devices may also be used to identify a subject as having cognitive decline in one or more cognitive functions by evaluating the one or more cognitive functions of the subject.

Described herein are methods and devices for diagnosing or assessing the risk of one or more developmental disorders in a subject. The method may include providing a data processing module that may be used to construct and administer an assessment program for screening a subject for one or more of a plurality of developmental disorders or conditions. The assessment program may assess a plurality of characteristics or traits of the subject, wherein each characteristic may be associated with a likelihood that the subject has at least one of a plurality of developmental disorders that can be screened by the program. Each characteristic may be associated with a likelihood that the subject has two or more related developmental disorders, wherein the two or more related disorders may have one or more related symptoms. The characteristics may be assessed in a number of ways. For example, the characteristics may be assessed by the subject's answers to questions, observations of the subject, or results of structured interactions with the subject, as described in further detail herein.

To differentiate among multiple developmental disorders of a subject within a single screening program, the program may dynamically select features to be evaluated on the subject during administration of the program based on values of features previously presented by the subject (e.g., answers to previous questions). The assessment program can be administered to the subject or a caregiver of the subject using a user interface provided by the computing device. The computing device includes a processor having instructions stored thereon for allowing a user to interact with the data processing module through a user interface. The assessment procedure may take less than 10 minutes, e.g., 5 minutes or less, to administer to the subject. Thus, the devices and methods described herein can provide a prediction of a subject's risk of developing one or more of a variety of developmental disorders using a single, relatively short screening procedure.

The methods and apparatus disclosed herein may be used to determine the next most relevant question related to a feature of a subject based on previously identified features of the subject. For example, the methods and apparatus may be configured to determine the next most relevant question in response to previously answered questions relating to the subject. After answering each prior question, a most predictive next question may be identified and a series of most predictive next questions and corresponding series of answers generated. The series of answers may include an answer profile of the subject, and the most predictive next question may be generated in response to the answer profile of the subject.

The methods and apparatus disclosed herein are well suited for use in combination with prior questions. For example, the methods and apparatus may be used to diagnose or identify a subject as at risk in response to fewer questions by identifying the most predictive next question based on the prior answers.
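
One way to realize such "most predictive next question" selection is to score each remaining question by its expected information gain, as sketched below. The model interface (predict_proba over an answer profile and answer_distribution over candidate answers) and the binary-entropy criterion are assumptions used for illustration, not the specific procedure of this disclosure.

    import math

    def binary_entropy(p):
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def next_question(answer_profile, candidate_questions, model):
        # answer_profile: dict of question -> answer collected so far.
        # model.predict_proba(profile) -> probability the condition is present (assumed interface).
        # model.answer_distribution(profile, q) -> {answer: probability} for question q (assumed interface).
        current_uncertainty = binary_entropy(model.predict_proba(answer_profile))
        best_question, best_gain = None, -1.0
        for question in candidate_questions:
            expected_uncertainty = 0.0
            for answer, p_answer in model.answer_distribution(answer_profile, question).items():
                hypothetical = {**answer_profile, question: answer}
                expected_uncertainty += p_answer * binary_entropy(model.predict_proba(hypothetical))
            gain = current_uncertainty - expected_uncertainty
            if gain > best_gain:
                best_question, best_gain = question, gain
        return best_question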

In one aspect, a method of providing an assessment of at least one cognitive function attribute of a subject comprises the acts of: a computer program for execution by a processor is stored on a computer system having a processor and a memory. The computer program may include instructions for: 1) receiving data of a subject related to the cognitive function attribute; 2) evaluating data of the subject using a machine learning model; and 3) providing an assessment of the subject. The evaluation may be selected from uncertainty determination and classification determination in response to the data. The machine learning model may include a selected subset of a plurality of machine learning assessment models. The classification determination may include the presence of a cognitive function attribute and the absence of a cognitive function attribute.

Receiving data from the subject may include receiving an initial data set. Evaluating the data from the subject may include evaluating an initial data set using a preliminary subset of adjustable machine learning assessment models selected from the plurality of adjustable machine learning assessment models to output a numerical score for the preliminary subset of each adjustable machine learning assessment model. The method may further comprise providing a classification determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject based on the analysis of the initial dataset, wherein a ratio of uncertainty determination to classification determination may be adjusted.

The method may further comprise the operations of: 1) determining whether to apply an additional assessment model selected from a plurality of adjustable machine learning assessment models if the analysis of the initial data set yields an uncertainty determination; 2) receiving an additional data set from the subject based on the outcome of the decision; 3) evaluating additional data sets from the subject using additional assessment models based on the results of the decision to output a numerical score for each additional assessment model; and 4) providing a categorical determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject based on analysis of the additional dataset from the subject using the additional assessment model. The ratio of uncertainty determination to classification determination may be adjusted.

The method may further comprise the operations of: 1) combining the numerical scores of the preliminary subset of each assessment model to generate a combined preliminary output score; and 2) mapping the combined preliminary output score to a categorical determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject. The ratio of uncertainty determination to classification determination may be adjusted. The method may further comprise the operations of: 1) combining the numerical scores of each additional assessment model to generate a combined additional output score; and 2) mapping the combined additional output scores to a categorical determination or uncertainty determination regarding the presence or absence of the cognitive function attribute in the subject. The ratio of uncertainty determination to classification determination may be adjusted. The method may further include employing rule-based logic or a combination technique for combining the numerical scores of the preliminary subset of each of the assessment models and for combining the numerical scores of each of the additional assessment models.

The ratio of uncertainty determinations to classification determinations can be adjusted by assigning an inclusion rate, and classification determinations regarding the presence or absence of the developmental condition in the subject can be assessed by providing sensitivity and specificity metrics. The inclusion rate may be no less than 70% and the classification determination yields a sensitivity of at least 70% with a corresponding specificity of at least 70%. The inclusion rate may be no less than 70% and the classification determination yields a sensitivity of at least 80% with a corresponding specificity of at least 80%. The inclusion rate may be no less than 70% and the classification determination yields a sensitivity of at least 90% with a corresponding specificity of at least 90%. The data from the subject may include at least one of a sample from a diagnostic tool and demographic data, wherein the diagnostic tool includes a set of diagnostic questions and corresponding selectable answers.

The method may further include training a plurality of adjustable machine learning assessment models using data from a plurality of subjects previously evaluated for developmental status. The training may include the following operations: 1) pre-processing data from a plurality of subjects using machine learning techniques; 2) extracting and encoding machine learning features from the pre-processed data; 3) processing data from a plurality of subjects to reflect an expected incidence of a cognitive function attribute of the subject in an intended application environment; 4) selecting a subset of the processed machine learning features; 5) evaluating performance of each of a plurality of adjustable machine learning assessment models; and 6) determining an optimal set of parameters for each model based on determining a benefit of using all models in the selected subset of the plurality of adjustable machine learning assessment models. The sensitivity and specificity of each model for a predetermined inclusion rate can be evaluated. Determining the optimal set of parameters for each model may include tuning parameters for each model in a different tuning parameter environment. Processing the encoded machine learned features may include calculating and assigning a sample weight to each sample of the data. Each sample of data may correspond to a subject of the plurality of subjects. Samples may be grouped according to subject-specific dimensions. The sample weights may be calculated and assigned to balance one sample set against each other sample set to reflect the expected distribution of each dimension of the subject in the intended environment. The subject-specific dimensions may include the gender of the subject, the geographic region in which the subject is located, and the age of the subject. Extracting and encoding machine learning features from the pre-processed data may include using feature encoding techniques such as, but not limited to, one-hot encoding, severity encoding, and behavioral presence encoding. Selecting the subset of processed machine-learned features may include identifying a subset of discriminative features from the processed machine-learned features using a bootstrapping technique.
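
The bootstrapping step for identifying discriminative features can be sketched as below, assuming an L1-penalized logistic regression refit on bootstrap resamples and a stability cutoff of 80%; the model choice and cutoff are assumptions, not the specific technique of this disclosure.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    def bootstrap_feature_selection(X, y, feature_names, n_rounds=100, keep_fraction=0.8):
        # Count how often each feature receives a non-zero coefficient across
        # bootstrap resamples, and keep the features selected most consistently.
        selection_counts = np.zeros(X.shape[1])
        for seed in range(n_rounds):
            X_boot, y_boot = resample(X, y, random_state=seed)
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            clf.fit(X_boot, y_boot)
            selection_counts += (np.abs(clf.coef_[0]) > 1e-6)
        keep = selection_counts / n_rounds >= keep_fraction
        return [name for name, selected in zip(feature_names, keep) if selected]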

The cognitive function attributes may include behavioral disorders and developmental progression. The classification determination provided to the subject may be selected from the group consisting of an uncertainty determination in response to the data, the presence of a plurality of cognitive function attributes, and the absence of a plurality of cognitive function attributes.

In another aspect, an apparatus for assessing a cognitive function attribute of a subject may include a processor. The processor may be configured with instructions that, when executed, cause the processor to receive data of the subject related to the cognitive function attribute and apply rules to generate a classification determination of the subject. The classification determination may be selected from the group consisting of an uncertainty determination in response to the data, a presence of the cognitive function attribute, and an absence of the cognitive function attribute. The cognitive functional attribute may be determined with a sensitivity of at least 70% and a specificity of at least 70% for the presence or absence of the cognitive functional attribute, respectively. The cognitive functional attribute may be selected from the group consisting of autism, autism spectrum, attention deficit disorder, attention deficit and hyperactivity disorder, and speech and learning disorders. The cognitive function attribute may be determined with a sensitivity of at least 80% and a specificity of at least 80% for the presence or absence of the cognitive function attribute, respectively. The cognitive function attribute may be determined with a sensitivity of at least 90% and a specificity of at least 90% for the presence or absence of the cognitive function attribute, respectively. The cognitive function attributes may include behavioral disorders and developmental progression.

In another aspect, a non-transitory computer readable storage medium encoded with a computer program containing instructions executable by a processor to evaluate a cognitive function attribute of a subject includes a database recorded on the medium. The database may include data for a plurality of subjects associated with at least one cognitive function attribute, and a plurality of adjustable machine learning assessment models; an evaluation software module; and a model adjustment software module. The evaluation software module may include instructions for: 1) receiving data of a subject related to a cognitive function attribute; 2) evaluating data of the subject using a selected subset of the plurality of machine-learned assessment models; and 3) providing a classification determination to the subject, the classification determination selected from the group consisting of an uncertainty determination in response to the data, a presence of the cognitive function attribute, and an absence of the cognitive function attribute. The model adjustment software module may include instructions for: 1) pre-processing data from a plurality of subjects using machine learning techniques; 2) extracting and encoding machine learning features from the pre-processed data; 3) processing the encoded machine learning features to reflect an expected distribution of the subject in the intended application environment; 4) selecting a subset of the processed machine learning features; 5) evaluating performance of each of a plurality of adjustable machine learning assessment models; 6) adjusting parameters of each model under different adjusting parameter environments; and 7) determining an optimal set of parameters for each model based on determining a benefit of using all models in the selected subset of the plurality of adjustable machine learning assessment models. The sensitivity and specificity of each model for a predetermined inclusion rate can be evaluated. The cognitive function attributes may include behavioral disorders and developmental progression.

In another aspect, a computer-implemented system may include a digital processing device. The digital processing device may include at least one processor, an operating system configured to execute executable instructions, memory, and a computer program. The memory may include storage for holding data for a plurality of subjects associated with at least one cognitive function attribute and storage for holding a plurality of machine learning assessment models. The computer program may include instructions executable by the digital processing device to: 1) receive data of a subject related to a cognitive function attribute; 2) evaluate the data of the subject using a selected subset of the plurality of machine learning assessment models; and 3) provide a classification determination to the subject, the classification determination selected from the group consisting of an uncertainty determination in response to the data, a presence of the cognitive function attribute, and an absence of the cognitive function attribute. The cognitive function attributes may include behavioral disorders and developmental progression.

In another aspect, a mobile device for providing an assessment of at least one cognitive function attribute of a subject may include a display and a processor. The processor may be configured with instructions to receive and display data of the subject related to the cognitive function attribute, and to receive and display an assessment of the subject. The assessment may be selected from an uncertainty determination and a classification determination. The assessment of the subject can be determined in response to the data of the subject. The classification determination may be selected from the group consisting of the presence of the cognitive function attribute and the absence of the cognitive function attribute. The cognitive function attribute may be determined with a sensitivity of at least 80% and a specificity of at least 80% for the presence or absence of the cognitive function attribute, respectively. The cognitive function attribute may be determined with a sensitivity of at least 90% and a specificity of at least 90% for the presence or absence of the cognitive function attribute, respectively. The cognitive function attributes may include behavioral disorders and developmental progression.

In another aspect, a digital treatment system for treating a subject with a personal therapeutic treatment plan can include one or more processors, a diagnostic module for receiving data from the subject and outputting diagnostic data of the subject, and a treatment module for receiving the diagnostic data and outputting the personal therapeutic treatment plan for the subject. The diagnostic module may include one or more classifiers constructed using machine learning or statistical modeling based on a population of subjects to determine diagnostic data for the subjects. The diagnostic data may include an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to data received from the subject. The therapy module may include one or more models constructed using machine learning or statistical modeling based on at least a portion of a population of subjects to determine and output a personal therapeutic treatment plan for the subject. The diagnostic module may be configured to receive updated subject data from the subject in response to the therapy of the subject and generate updated diagnostic data from the subject. The therapy module may be configured to receive updated diagnostic data and output an updated personal treatment plan for the subject in response to the diagnostic data and the updated diagnostic data. The diagnostic module can include a diagnostic machine learning classifier trained on a population of subjects. The therapy module may include a therapeutic machine learning classifier trained on at least a portion of a population of subjects. The diagnostic module and the therapy module may be arranged such that the diagnostic module provides feedback to the therapy module based on the performance of the therapy plan. The therapeutic classifier can include instructions trained on a data set that includes a population of which the subject is not a member. The subject may include individuals who are not members of the population. The diagnostic module can include a diagnostic classifier trained on a plurality of profiles of a population of subjects of at least 10,000 people and a therapy profile trained on a plurality of profiles of a population of subjects.
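
The feedback arrangement between the diagnostic module and the therapy module can be sketched as a simple loop, assuming hypothetical evaluate() and recommend() interfaces on models trained as described above; this is an illustrative reading rather than the specific implementation of this disclosure.

    class DigitalTherapySystem:
        def __init__(self, diagnostic_model, therapy_model):
            self.diagnostic_model = diagnostic_model  # classifier trained on the subject population
            self.therapy_model = therapy_model        # model trained on the treated sub-population

        def initial_plan(self, subject_data):
            # Diagnose, then derive a personal therapeutic treatment plan.
            diagnosis = self.diagnostic_model.evaluate(subject_data)
            plan = self.therapy_model.recommend(diagnosis)
            return diagnosis, plan

        def update_plan(self, prior_diagnosis, updated_subject_data):
            # Feedback step: re-diagnose with data gathered during therapy, then
            # revise the plan in light of both the prior and updated diagnoses.
            updated_diagnosis = self.diagnostic_model.evaluate(updated_subject_data)
            plan = self.therapy_model.recommend(updated_diagnosis, prior=prior_diagnosis)
            return updated_diagnosis, plan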

In another aspect, a digital treatment system for treating a subject with a personal therapeutic treatment plan can include a processor, a diagnostic module for receiving data from the subject and outputting diagnostic data of the subject, and a treatment module for receiving the diagnostic data and outputting the personal therapeutic treatment plan for the subject. The diagnostic data may include an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to data received from the subject. The personal therapeutic treatment plan may include a digital treatment. The digital treatment may include instructions, feedback, activities, or interactions provided to the subject or caregiver. The digital treatment may be provided using a mobile device. The diagnostic data and the personal therapeutic treatment plan may be provided to a third party system. The third-party system may include a computer system of a healthcare professional, or a therapy delivery system. The diagnostic module may be configured to receive updated subject data from the subject in response to the feedback data of the subject and generate updated diagnostic data. The therapy module may be configured to receive updated diagnostic data and output an updated personal treatment plan for the subject in response to the diagnostic data and the updated diagnostic data. Updated subject data may be received in response to feedback data identifying relative levels of efficacy, compliance, and response generated by the personal therapeutic treatment plan. The diagnostic module may use machine learning or statistical modeling to determine diagnostic data based on a population of subjects. The therapy module may determine a personal therapeutic treatment plan for the subject based on at least a portion of the population of subjects. The diagnostic module can include a diagnostic machine learning classifier trained on a population of subjects. The therapy module may include a therapeutic machine learning classifier trained on at least a portion of a population of subjects. The diagnostic module may be configured to provide feedback to the therapy module based on performance of the personal therapeutic treatment plan. The data from the subject may include at least one of video, audio, responses to questions or activities of the subject and caretaker, and active or passive data streams from user interaction with activities, games, or software features of the system. The subject may be at risk selected from a behavioral disorder, a neurological disorder, and a mental health disorder. The behavioral, neurological or psychological health disorder may be selected from: autism, autism spectrum, attention deficit disorder, depression, obsessive compulsive disorder, schizophrenia, alzheimer's disease, dementia, attention deficit and hyperactivity disorder, and speech and learning disorders. The diagnostic module may be configured for adults to perform assessments or to provide data for assessing a child or adolescent. The diagnostic module may be configured for a caregiver or family member to perform an assessment or to provide data for assessing a subject.

In another aspect, a non-transitory computer-readable storage medium may be encoded with a program. The computer program may include executable instructions to: 1) receiving input data from a subject and outputting diagnostic data for the subject; 2) receiving diagnostic data and outputting a personal therapeutic treatment plan for the subject; and 3) evaluating the diagnostic data based on at least a portion of the population of subjects to determine and output a personal therapeutic treatment plan for the subject. The diagnostic data may include an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to input data received from the subject. Updated subject input data may be received from the subject in response to the treatment of the subject, and updated diagnostic data may be generated from the subject. Updated diagnostic data may be received and an updated personal treatment plan for the subject may be output in response to the diagnostic data and the updated diagnostic data.

In another aspect, a non-transitory computer-readable storage medium may be encoded with a computer program. The computer program may include executable instructions for receiving input data from a subject and outputting diagnostic data for the subject, and receiving the diagnostic data and outputting a personal therapeutic treatment plan for the subject. The diagnostic data may include an assessment of the subject selected from the group consisting of an uncertainty determination and a classification determination in response to the data received from the subject. The personal therapeutic treatment plan may include a digital treatment.

In another aspect, a method of treating a subject with a personal therapeutic treatment plan can include a diagnostic process receiving data from the subject and outputting diagnostic data for the subject, wherein the diagnostic data includes an assessment of the subject, and a treatment process receiving the diagnostic data and outputting the personal therapeutic treatment plan for the subject. The evaluation may be selected from the group consisting of an uncertainty determination and a classification determination in response to data received from the subject. The diagnostic process may include receiving updated subject data from the subject in response to the therapy of the subject and generating updated diagnostic data from the subject. The therapy process may include receiving updated diagnostic data and outputting an updated personal treatment plan for the subject in response to the diagnostic data and the updated diagnostic data. Updated subject data may be received in response to feedback data identifying relative levels of efficacy, compliance, and response generated by the personal therapeutic treatment plan. The personal therapeutic treatment plan may include a digital treatment. The digital treatment may include instructions, feedback, activities, or interactions provided to the subject or caregiver. The digital treatment may be provided using a mobile device. The method may further include providing the diagnostic data and the personal therapeutic treatment plan to a third party system. The third-party system may include a computer system of a healthcare professional, or a therapy delivery system. The diagnostic process may be performed by a process selected from machine learning, classifiers, artificial intelligence, and statistical modeling based on a population of subjects to determine diagnostic data. A treatment process may be performed by a process selected from machine learning, a classifier, artificial intelligence, or statistical modeling based on at least a portion of the population of subjects to determine the personal therapeutic treatment plan for a subject. The diagnostic process may be performed by a diagnostic machine learning classifier trained on a population of subjects. The therapeutic process may be performed by a therapeutic machine learning classifier trained on at least a portion of a population of subjects. The diagnostic process may provide feedback to the therapy module based on the performance of the individual therapeutic treatment plan. The data from the subject may include at least one of video, audio, responses to questions or activities of the subject and caretaker, and active or passive data streams from user interaction with activities, games, or software features. The diagnostic process may be performed by an adult to perform an assessment or to provide data for assessing a child or adolescent. The diagnostic process may enable a caregiver or family member to perform an assessment or provide data for assessing a subject. The subject may be at risk selected from a behavioral disorder, a neurological disorder, and a mental health disorder. The risk may be selected from: autism, autism spectrum, attention deficit disorder, depression, obsessive compulsive disorder, schizophrenia, alzheimer's disease, dementia, attention deficit and hyperactivity disorder, and speech and learning disorders.

Systems and methods for providing diagnostics and digital therapy using readily available computing devices (e.g., smart phones) and utilizing machine learning are disclosed herein.

Described herein are methods and devices for assessing and treating individuals with one or more diagnoses from behavioral disorders, developmental delays, and related categories of neurological injury. In some embodiments, the assessment comprises identification or confirmation of a diagnosis of the individual, wherein the diagnosis belongs to one or more of the relevant diagnostic categories comprising behavioral disorders, developmental delays, and neural injury. In some embodiments, the assessment by a method or device described herein includes an assessment of whether an individual will respond to treatment. In some embodiments, the assessment by the methods or devices described herein includes an assessment of the extent to which an individual will respond to a particular treatment. For example, in some embodiments, an individual is assessed as being highly responsive to digital therapy using the methods or devices described herein. In some embodiments, the digital treatment is administered when it is determined that the individual will be highly responsive to the digital treatment.

Also described herein are personalized treatment regimens that include digital therapy, non-digital therapy, drugs, or any combination thereof. In some embodiments, a therapeutic agent is administered together with the digital therapy. In some embodiments, the therapeutic agent administered with the digital treatment is configured to improve the efficacy of the digital treatment for the individual receiving it. In some embodiments, the therapeutic agent improves cognition in the individual receiving the digital treatment. In some embodiments, the therapeutic agent relaxes the individual receiving the digital treatment. In some embodiments, the therapeutic agent increases the focus or concentration level of the individual receiving the digital treatment.

Digital treatment may include instructions, feedback, activities, or interactions provided to an individual or caregiver by the methods or devices described herein. In some embodiments, the digital treatment is configured to suggest behaviors, activities, games, or interactive sessions with system software and/or third party devices.

The digital treatment used by the methods and apparatus described herein may be implemented using various digital applications, including augmented reality, virtual reality, real-time cognitive assistance, or other behavioral therapies augmented using technology. Digital treatment may be implemented using any device configured to produce a virtual or augmented reality environment. Such devices may be configured to contain one or more sensor inputs, such as video and/or audio captured using a camera and/or microphone. Non-limiting examples of devices suitable for providing digital therapy as described herein include wearable devices, smart phones, tablet computing devices, laptop computers, projectors, and any other device suitable for producing a virtual or augmented reality experience.

The systems and methods described herein may provide social learning tools or assistance to a user through a technology-enhanced experience (e.g., augmented reality and/or virtual reality). In some embodiments, the digital treatment is configured to promote or improve social interaction of the individual. In some embodiments, the digital treatment is configured to promote or improve social reciprocity in individuals with autism or autism spectrum disorders.

In some embodiments of the methods and devices described herein, the method or device for delivering virtual or augmented reality based digital therapy receives input, and in some of these embodiments, the input affects how the virtual or augmented reality is presented to the individual receiving the therapy. In some embodiments, input is received from a camera and/or microphone of a computing device used to deliver the digital therapy. In some cases, the input is received from a sensor such as, for example, a motion sensor or a vital signs sensor. In some embodiments, input in the form of video, images, and/or sounds is captured and analyzed using algorithm(s), such as artificial intelligence or machine learning models, to provide feedback and/or behavioral modifications to the subject through the virtual or augmented reality experience provided.

In some embodiments, the input to the method or apparatus includes an assessment of the facial expressions or other social cues of one or more other individuals with whom the digital treatment recipient interacts in a virtual reality or augmented reality interaction.

In a non-limiting example of an augmented reality digital treatment experience, an individual may interact with a real person, and in this example, a video, image, and/or sound recording of the person is taken by the computing device that delivers the digital treatment. The video, image, and/or sound recordings are then analyzed using an analytical classifier that determines the emotions associated with the facial expressions (or other social cues) of the person interacting with the individual in the augmented reality environment. The analysis of the facial expressions (or other social cues) may include an assessment of the emotions associated with the facial expressions and/or other social cues. The results of the analysis are then provided to the individual receiving the digital treatment. In some embodiments, the results of the analysis are displayed within the augmented reality environment. In some embodiments, the results of the analysis are displayed on a screen of the computing device. In some embodiments, the analysis results are provided via an audible sound or message.

In a non-limiting example of a virtual reality digital treatment experience, an individual receiving digital treatment may interact with an image or representation of a real person, or an image or representation of a virtual object or character (such as a cartoon person or other artistic rendering of an interactive object). In this example, the software determines the emotion conveyed by the avatar, character, or object within the virtual reality environment. The results of the analysis are then provided to the individual receiving the digital treatment. In some embodiments, the results of the analysis are displayed within the virtual reality environment. In some embodiments, the results of the analysis are displayed on a screen of the computing device. In some embodiments, the analysis results are provided via an audible sound or message.

As another illustrative example, a smiling individual interacting with the digital treatment recipient is assessed as happy. In this example, the input includes an assessment of facial expressions or social cues, and it is displayed or otherwise provided to the recipient of the digital treatment to assist in learning to identify these facial expressions or social cues. That is, in this example, the emotion that the individual is assessed to express (in this example, happiness) is displayed or otherwise provided to the digital treatment recipient by, for example, displaying the word "happiness" on the screen of the mobile computing device while or shortly after the individual smiles in the virtual or augmented reality experience. Examples of emotions that may be detected and/or used in the various games or activities described herein include happiness, sadness, anger, surprise, frustration, fear/startle, calm, disgust, and contempt.

In some cases, the device communicates to the subject, using audio or visual signals, the emotional or social cues detected for the other individuals captured as input(s). The visual signal may be displayed as text, designs or pictures, emoticons, colors, or other visual cues corresponding to the detected emotional or social cue. The audio signal may be conveyed as spoken words, sounds such as tones or beats, music, or other audio cues corresponding to the detected emotional or social cue. In some cases, a combination of visual and audio signals is used. These cues may be customized or selected from a set of available cues to provide a personalized collection of audio/visual signals. The signals may also be turned on or off as part of this customization of the experience.

In some cases, the digital treatment experience includes activity modes. The activity modes may include emotion elicitation activities, emotion recognition activities, or unstructured play. Unstructured play may be an unscripted, free-roaming, or otherwise unstructured mode in which the user may freely participate in one or more digital treatment activities. One example of an unstructured mode is a game or activity in which the user is free to collect one or more images or representations of a real person, or of a virtual object or character (such as a cartoon person or other artistic rendering of an interactive object). This unstructured mode may be characterized as having a "sandbox" style of play, which places few restrictions on user decision making or game play, as opposed to a progression style that guides the user through a series of tasks. A user may use the camera of a device such as a smartphone to collect such images (e.g., to take a picture of other individuals such as family members or caregivers). Alternatively or in combination, the user may collect images or representations digitally, such as by browsing a library or database. As an illustrative example, a user wanders around his house and takes a picture of his parents using a smartphone camera. In addition, the user collects self-portraits posted by family members on social media by selecting and/or downloading the photos to the smartphone. In some cases, the device displays a real-time image, or a captured or downloaded image, along with the identified or classified emotion of the person in the image. This allows the user to engage in unstructured learning when encountering real-world examples of emotions expressed by various other individuals.

The emotion recognition activity can be configured to test and/or train the user to recognize emotions or emotional cues through a structured learning experience. For example, emotion recognition activities may be used to assist the user in engaging in reinforcement learning by providing images to which the user has previously been exposed (e.g., photographs of a caregiver captured by the user during unstructured play). Reinforcement learning allows users to reinforce their recognition of emotions that have previously been displayed to them. Reinforcement learning may involve one or more interactive activities or games. One example is a game in which the user is presented with multiple images corresponding to different emotions (e.g., a smartphone screen showing an image of a person smiling and an image of another person frowning) and a prompt to identify the image corresponding to a particular emotion (e.g., a screen showing, or a speaker outputting, a question or command for the user to identify the correct image). The user may respond by selecting one of the multiple images on the screen or by providing an audio response (e.g., stating "left/center/right image" or "answer A/B/C"). Another example is a game in which a single image corresponding to an emotion is presented to the user and the user is asked to identify the emotion. In some cases, the user is given multiple emotion options. Alternatively, the user must provide a response without being given answer options (e.g., a brief typed or spoken response rather than a selection among multiple options). In some cases, a selection of multiple options is provided. The presentation of the options may be visual or non-visual (e.g., an audio selection not shown on the graphical user interface). As an illustrative example, the user is shown an image of a caregiver smiling and is prompted with the audio question "Is the person happy or sad?" Alternatively, the question is displayed on the screen. The user may then provide a spoken answer or type an answer. Another example is a game in which the user is presented with multiple images and multiple emotions, and the images can be matched to the corresponding emotions.

In some cases, the captured and/or downloaded images are tagged, sorted, and/or filtered for one or more activities or games as part of the digital treatment experience. For example, because reinforcement learning may require querying the user with images that the user has previously encountered, the available image library may be filtered to remove images that do not satisfy one or more of the following rules: (1) at least one face was successfully detected; (2) at least one emotion was successfully detected; (3) the image has previously been presented or displayed to the user. In some cases, the images are further filtered according to the particular activity. For example, due to poor performance in previous activities, a user may be assigned an emotion recognition reinforcement learning activity specifically directed to recognizing anger; accordingly, the images used for this reinforcement learning activity may also be filtered to include at least one image in which anger, or emotional cues corresponding to anger, was detected.
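
As a non-limiting illustration, the filtering rules above can be expressed as a simple predicate over per-image records. The record fields (faces, emotions, previously_shown) and the helper function below are assumptions of this sketch rather than an API of the described system.

```python
# Minimal sketch of image-library filtering for a reinforcement learning activity.
# Record fields and helper names are illustrative assumptions, not a real API.

def eligible_for_reinforcement(image_record, target_emotion=None):
    """Return True if an image satisfies the filtering rules described above."""
    # Rule 1: at least one face was successfully detected.
    if not image_record.get("faces"):
        return False
    # Rule 2: at least one emotion was successfully detected.
    emotions = image_record.get("emotions", [])
    if not emotions:
        return False
    # Rule 3: the image has previously been presented or displayed to the user.
    if not image_record.get("previously_shown", False):
        return False
    # Optional activity-specific filter, e.g. restrict to images showing anger.
    if target_emotion is not None and target_emotion not in emotions:
        return False
    return True

library = [
    {"faces": 1, "emotions": ["happy"], "previously_shown": True},
    {"faces": 1, "emotions": ["angry"], "previously_shown": True},
    {"faces": 0, "emotions": [], "previously_shown": False},
]

anger_images = [img for img in library if eligible_for_reinforcement(img, "angry")]
```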

In some cases, the captured and/or downloaded images are imported into a library of collected images that are accessible by the digital treatment software. Alternatively or in combination, the images may be marked such that they are recognized by the digital treatment software as images collected for the purpose of an interactive digital treatment experience. The tagging may be automatic when the user takes a picture within the context of the interactive digital treatment experience. As an illustrative example, a user opens a digital treatment application on a smartphone and selects a scripted or free-roaming mode. The smartphone then presents a capture interface on its touchscreen, along with written and/or audio instructions for capturing a picture of the other person's face. Any photos captured by the user using the device camera will then be automatically tagged and/or added to the gallery or database. Alternatively, a user browsing social media outside of the digital treatment application selects published images and selects the option to download, import, or tag the images for access by the digital treatment application.

In some cases, the images are tagged to identify relevant information. This information may include the identity of the person in the image (e.g., name, title, interpersonal relationship to the user) and/or the facial expression or emotion expressed by the person in the image. Facial recognition and emotion classification as described herein may be used to evaluate an image to generate or determine one or more labels for the image. As an illustrative example, a user takes a picture of his caregiver, which is processed by facial recognition followed by emotion classification based on the recognized face. The classified emotion is "happy", and the image is tagged with this identified emotion. In some cases, the tagging is performed by another user (e.g., a parent or caregiver). As an illustrative example, a parent logs into the digital treatment application and accesses the library or database of images collected by the user. The parent sorts through the unlabeled images and then selects the appropriate labels for the emotions expressed by the persons within the images.

As an illustrative example, a computing device including a camera and microphone uses an outward facing camera and microphone to track the face and classify the emotions of a social partner of the digital treatment recipient, and provides two forms of cues to the digital treatment recipient in real time. The device also has an inward facing digital display with a peripheral monitor and speakers. A machine learning classifier is used to assess the expression of the individual interacting with the recipient of the digital treatment, and when the face is classified as expressing an emotion, that emotion is an input to the device and is displayed or otherwise presented to the recipient of the digital treatment.

In some cases, the device also includes an inward facing camera (e.g., a "self-portrait" camera) and tracks and classifies the emotions of the digital treatment recipient. The tracking and classification of the social partner's emotions and of the digital treatment recipient's emotions may be performed in real time, simultaneously or in close succession (e.g., within 1, 2, 3, 4, or 5 seconds of each other, or some other suitable time range). Alternatively, images of the social partner and/or the digital treatment recipient may be captured and then evaluated later (i.e., not in real time) to track and classify their respective emotions. This allows social interaction between the patient and the target individual to be captured, for example, as the combined facial expressions and/or emotions of the two persons. In some cases, the detected emotions and/or expressions of the parties to the social interaction are time-stamped or otherwise ordered to determine a sequence of emotions, or other interactions, that constitute one or more social interactions. These social interactions may be used to assess the ability of the patient to participate in social interaction.

As an illustrative example, the patient points the phone at a parent who smiles at him. The phone display shows a smiley face cue in real time to help the patient identify the emotion corresponding to the parent's facial expression. In addition, the display screen optionally provides instructions for the patient to respond to the parent. The patient does not smile back at his parent, and the inward facing camera captures this response in one or more images or videos. The images and/or videos and the social interaction timeline or timestamp sequence are then saved on the device (and optionally uploaded to or saved on a remote network or cloud). In this case, the parent's smile is labeled "smile" and the patient's lack of response is labeled "no response" or "no smile". Accordingly, this particular social interaction is determined to lack smile reciprocity. Social interactions may be further segmented based on whether the target individual (the parent) and the patient expressed "genuine" smiles rather than "polite" smiles. For example, the algorithms and classifiers described herein for detecting "smiles" or "emotions" may be trained to distinguish genuine and polite smiles, which may be differentiated based on visual cues corresponding to the engagement of the eye muscles in genuine smiles and the lack of engagement of the eye muscles in polite smiles. This differentiation of types or subtypes of emotions or facial expressions may be based on training the algorithm or classifier on a suitable data set of labeled images (e.g., images labeled as "polite" versus "genuine" smiles).
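
As a non-limiting illustration, the timestamped labeling described above could be checked for smile reciprocity roughly as in the following sketch; the event format and the 5-second response window are assumptions made for illustration only.

```python
# Illustrative check for smile reciprocity from timestamped expression labels.
# The event format and the 5-second response window are assumptions of this sketch.

def smile_reciprocity(events, window_seconds=5.0):
    """events: list of (timestamp_seconds, who, label) tuples, e.g. (0.0, "partner", "smile")."""
    outcomes = []
    for t, who, label in events:
        if who == "partner" and label == "smile":
            # Did the patient smile back within the response window?
            reciprocated = any(
                who2 == "patient" and label2 == "smile" and t <= t2 <= t + window_seconds
                for t2, who2, label2 in events
            )
            outcomes.append((t, "reciprocated" if reciprocated else "no response"))
    return outcomes

timeline = [
    (0.0, "partner", "smile"),
    (3.0, "patient", "neutral"),   # patient does not smile back
    (10.0, "partner", "smile"),
    (12.0, "patient", "smile"),    # reciprocated within the window
]
print(smile_reciprocity(timeline))  # [(0.0, 'no response'), (10.0, 'reciprocated')]
```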

In some aspects, the platforms, systems, devices, methods, and media disclosed herein include software applications configured to enable management and/or monitoring of the digital therapy. The software application may be a mobile application, a web application, or another computer application. In some cases, the application provides a control center that allows the subject, or a caregiver of the subject, to manage the device. The application may enable a user to review, upload, or delete captured data, such as video, audio, photographs, or detected or categorized emotional cues. The user may also use the application to enter or configure settings such as, for example, data capture settings (e.g., what type of data to capture, how long to store it, etc.). In some cases, the application obtains images (e.g., still images from captured video), runs the emotion cue classifiers, and/or saves video and usage data.

In some cases, the platforms, systems, devices, methods, and media disclosed herein provide digital treatment with interactive features. In some embodiments, the interactive features are configured to prompt the digital treatment recipient to guess the emotion of another person based on the facial expressions or social cues of the people interacting with the individual. In some cases, the platforms, systems, devices, methods, and media disclosed herein provide the user with the option to delete captured data, such as video or audio. This option protects the privacy of the family by enabling them to delete data. Metrics on the captured data, such as usage, the age of a video, whether a video was saved or deleted, usage during an intervention, and other relevant parameters, may be obtained or calculated.

In some cases, the device operates at a frame rate of about 15-20FPS, which enables facial expression recognition to be completed within 100 ms. The apparatus may operate at a frame rate of 10FPS to 100FPS. The device may operate at a frame rate of 10FPS to 15FPS, 10FPS to 20FPS, 10FPS to 25FPS, 10FPS to 30FPS, 10FPS to 35FPS, 10FPS to 40FPS, 10FPS to 45FPS, 10FPS to 50FPS, 10FPS to 60FPS, 10FPS to 80FPS, 10FPS to 100FPS, 15FPS to 20FPS, 15FPS to 25FPS, 15FPS to 30FPS, 15FPS to 35FPS, 15FPS to 40FPS, 15FPS to 45FPS, 15FPS to 50FPS, 15FPS to 60FPS, 15FPS to 80FPS, 15FPS to 100FPS, 20FPS to 25FPS, 20FPS to 30FPS, 20FPS to 35FPS, 20FPS to 40FPS, 20FPS to 45FPS, 20FPS to 50FPS, 20FPS to 60FPS, 20FPS to 80FPS, 20FPS to 100FPS, 25FPS to 30FPS, 25FPS to 35FPS, 25FPS to 40FPS, 25FPS to 45FPS, 25FPS to 50FPS, 25FPS to 60FPS, 25FPS to 80FPS, 25FPS to 100FPS, 30FPS to 35FPS, 30FPS to 40FPS, 30FPS to 45FPS, 30FPS to 50FPS, 30FPS to 60FPS, 30FPS to 80FPS, 30FPS to 100FPS, 35FPS to 40FPS, 35FPS to 45FPS, 35FPS to 50FPS, 35FPS to 60FPS, 35FPS to 80FPS, 35FPS to 100FPS, 40FPS to 45FPS, 40FPS to 50FPS, 40FPS to 60FPS, 40FPS to 80FPS, 40FPS to 100FPS, 45FPS to 50FPS, 45FPS to 60FPS, 45FPS to 80FPS, 45FPS to 100FPS, 50FPS to 60FPS, 50FPS to 80FPS, 50FPS to 100FPS, 60FPS to 80FPS, 60FPS to 100FPS, or 80FPS to 100FPS. The apparatus may operate at a frame rate of 10FPS, 15FPS, 20FPS, 25FPS, 30FPS, 35FPS, 40FPS, 45FPS, 50FPS, 60FPS, 80FPS, or 100FPS. The apparatus may operate at a frame rate of at least 10FPS, 15FPS, 20FPS, 25FPS, 30FPS, 35FPS, 40FPS, 45FPS, 50FPS, 60FPS, or 80FPS. The apparatus may operate at a frame rate of at most 15FPS, 20FPS, 25FPS, 30FPS, 35FPS, 40FPS, 45FPS, 50FPS, 60FPS, 80FPS, or 100FPS.

In some cases, the device may detect facial expressions or actions within 10ms to 200ms. The device may detect the facial expression or action within 10ms to 20ms, 10ms to 30ms, 10ms to 40ms, 10ms to 50ms, 10ms to 60ms, 10ms to 70ms, 10ms to 80ms, 10ms to 90ms, 10ms to 100ms, 10ms to 150ms, 10ms to 200ms, 20ms to 30ms, 20ms to 40ms, 20ms to 50ms, 20ms to 60ms, 20ms to 70ms, 20ms to 80ms, 20ms to 90ms, 20ms to 100ms, 20ms to 150ms, 20ms to 200ms, 30ms to 40ms, 30ms to 50ms, 30ms to 60ms, 30ms to 70ms, 30ms to 80ms, 30ms to 90ms, 30ms to 100ms, 30ms to 150ms, 30ms to 200ms, 40ms to 50ms, 40ms to 60ms, 40ms to 70ms, 40ms to 80ms, 40ms to 90ms, 40ms to 100ms, 40ms to 150ms, 40ms to 200ms, 50ms to 60ms, 50ms to 70ms, 50ms to 80ms, 50ms to 90ms, 50ms to 100ms, 50ms to 150ms, 50ms to 200ms, 60ms to 70ms, 60ms to 80ms, 60ms to 90ms, 60ms to 100ms, 60ms to 150ms, 60ms to 200ms, 70ms to 80ms, 70ms to 90ms, 70ms to 100ms, 70ms to 150ms, 70ms to 200ms, 80ms to 90ms, 80ms to 100ms, 80ms to 150ms, 80ms to 200ms, 90ms to 100ms, 90ms to 150ms, 90ms to 200ms, 100ms to 150ms, 100ms to 200ms, or 150ms to 200ms. The device may detect facial expressions or actions within 10ms, 20ms, 30ms, 40ms, 50ms, 60ms, 70ms, 80ms, 90ms, 100ms, 150ms, or 200ms. The device may detect facial expressions or actions within at least 10ms, 20ms, 30ms, 40ms, 50ms, 60ms, 70ms, 80ms, 90ms, 100ms, or 150ms. The device may detect facial expressions or actions within at most 20ms, 30ms, 40ms, 50ms, 60ms, 70ms, 80ms, 90ms, 100ms, 150ms, or 200ms.

Disclosed herein are platforms, systems, apparatuses, methods, and media that provide a machine learning framework for detecting emotional or social cues. The input data may include image and/or video data and optionally additional sensor data (e.g., accelerometer data, audio data, etc.). The input data is provided to an emotion detection system that detects or identifies emotion or social cues that can be output to the user in real time, such as via a user interface on a computing device.

The emotion detection system includes artificial intelligence or machine learning model(s) trained to recognize emotion or social cues. In some cases, the system provides pre-processing of the data, a machine learning model or classifier, and optionally additional steps for processing or formatting the output. The output may be evaluated against one or more thresholds to place the input into one or more of a plurality of social or emotional cue categories.

In some implementations, the machine learning model is implemented as a regression model (e.g., providing a continuous output that can be associated with the degree of a social cue, such as a degree of anger). Alternatively, the model is implemented as a classification model (e.g., providing a categorical output indicating that a smile or frown was detected). In some cases, both types of models are implemented, depending on the type of cue to be detected.

In some cases, emotion detection systems include one or more modules for performing specific tasks necessary for the overall process operation. The emotion detection system may include a facial recognition module to detect and track a person's face as it appears in one or more images or video data and an expression or emotion detection module that evaluates the detected face to identify the presence of one or more emotional or social cues. Additional modules may be present, such as an audio module for processing any audio input (e.g., a spoken utterance or spoken command of a user), or other modules corresponding to additional sensor inputs. Various combinations of these modules are contemplated depending on the particular implementation of the emotion detection system.

The face recognition module 3810 and emotion detection module 3820 may together perform a series of steps, such as the non-limiting illustrated steps shown in FIG. 38. First, input data including images and/or videos 3801 is provided. Face detection is performed on the input data 3802 (e.g., for each image or frame of a video feed). This may include fiducial point face tracking or other processes for providing accurate face detection. The face may be normalized and/or registered against a standard size and/or position or angle. Other image processing techniques that may be applied include illumination normalization. Next, histogram of oriented gradients (HOG) features are extracted 3803 for regions of interest on the face. The facial expression is then classified to detect social or emotional cues (e.g., smile, frown, anger, etc.) 3804. The classification may be performed using a logistic regression machine learning model that is trained on a training data set of labeled images. Finally, the output of the machine learning model may be filtered 3805, for example, using a filtering algorithm such as a moving average or a low-pass time-domain filter. This may help provide real-time social or emotional cue detection that remains stable over time by avoiding spurious detections from the image or video data. Various methods may be employed to provide real-time emotional or social cue detection. Examples include neutral subtraction for facial expression recognition, which estimates neutral facial features and subtracts them from the extracted features in real time, and classifying multiple images, such as frames of a video feed, and then averaging or smoothing the classifications over time to mitigate noise. Various machine learning models may be used, for example, a feed-forward convolutional neural network used in conjunction with a recurrent neural network. This framework for social or emotional cue detection may be implemented on input from an outward facing camera (e.g., capturing the target individual) and from an inward facing camera (e.g., capturing the user). In addition, other sources of input data (such as sensor data) can be incorporated into the analysis framework to improve emotion and social cue detection.
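
As a non-limiting illustration of the steps of FIG. 38, the following sketch chains off-the-shelf components: an OpenCV face detector, scikit-image HOG features, any scikit-learn classifier with predict_proba, and a moving-average filter. The specific libraries, the 64x64 face size, and the smoothing window are assumptions of this sketch, not the disclosed implementation.

```python
# Sketch of the FIG. 38 pipeline: detect a face, extract HOG features, classify
# the expression, and smooth predictions with a moving average.
import cv2
import numpy as np
from skimage.feature import hog

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_cue(frame_bgr, clf, history, window=5):
    """clf: a trained classifier exposing predict_proba (e.g., logistic regression)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None                                             # no face detected
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))         # normalize face size
    features = hog(face, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))                      # HOG feature extraction
    probs = clf.predict_proba([features])[0]
    history.append(probs)
    smoothed = np.mean(history[-window:], axis=0)               # moving-average filter
    return clf.classes_[int(np.argmax(smoothed))]
```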

In some embodiments, the various modules of the emotion detection system are implemented using a multi-dimensional machine learning system. For example, a convolutional neural network may generate an output based directly on input data (such as pixel image data and optionally additional forms of input data). Various known methods may perform object recognition, segmentation, and localization tasks without registration or image pre-processing. Furthermore, transfer learning can be used to improve emotional and social cue detection when only a small amount of labeled data is available: a neural network pre-trained on a publicly available image database can be fine-tuned using a small, labeled data set and then applied to the field of affective computing.
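
As a non-limiting illustration of such transfer learning, the following sketch fine-tunes a network pre-trained on a public image database by replacing its final layer; the choice of ResNet-18, torchvision, and eight cue classes are assumptions made for illustration.

```python
# Illustrative transfer-learning setup: a network pre-trained on ImageNet is
# fine-tuned on a small labeled set of emotional/social cues (sketch only).
import torch
import torch.nn as nn
from torchvision import models

NUM_CUE_CLASSES = 8                       # assumed number of cue categories

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pre-trained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CUE_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the small labeled cue data set would follow here.
```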

In some embodiments, the emotion recognition system is configured to customize social or emotional cue detection to a particular target individual in order to improve emotion detection. For example, the system may label images identified as belonging to the same person, and these images can be used to provide a target-specific data set to help calibrate the machine learning model. The labels may be supplied by the user, or by a parent or caregiver who, for example, reviews the images captured by the patient in order to apply the correct labels or to correct labeling errors. Thus, a machine learning model such as a convolutional neural network can be fine-tuned by adjusting the weights between layers in order to improve accuracy for a particular individual. Accordingly, accuracy may increase over time as more data is collected.

Digital treatment may include social learning assistance for the subject to improve cognitive performance, such as, for example, facial engagement and/or recognition, or providing feedback during social interaction. In some cases, the platforms, systems, devices, methods, and media disclosed herein provide an assessment tool that includes a survey or questionnaire to be completed by the subject or the caregiver of the subject. A survey may contain at least 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 80, 90, or 100 or more items and/or no more than 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 80, 90, or 100 items. These items can be categorized across multiple domains. In some cases, the items are categorized across two, three, four, or five social domains. The inputs or responses to these items may correspond to features used in the machine learning algorithms described herein, such as trained evaluation or diagnostic models or classifiers. In some cases, the input or response includes a number or score. An overall score may be generated by summing the scores for each of the items. A score below a threshold may be interpreted as indicating or suggesting a disorder, delay, or injury, such as, for example, an autism spectrum disorder.
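
A minimal sketch of the scoring logic described above follows; the item values and the threshold are illustrative placeholders, not a validated instrument.

```python
# Minimal sketch of questionnaire scoring: item responses are summed and the
# total compared against a threshold (placeholder values only).

def score_survey(responses, threshold):
    """responses: mapping of item id -> numeric response."""
    total = sum(responses.values())
    # Per the text, a score below the threshold may suggest a disorder, delay, or injury.
    flagged = total < threshold
    return total, flagged

responses = {f"item_{i}": value for i, value in enumerate([2, 3, 1, 4, 2, 3], start=1)}
total, flagged = score_survey(responses, threshold=20)
print(total, flagged)
```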

In some cases, the platforms, systems, devices, methods, and media disclosed herein provide assessment tools that measure various areas, such as, for example, communication, daily life, social, motor function, and adaptive behavioral skills. The assessment tool can be used to monitor a subject. For example, a higher score may indicate a stronger adaptive function.

In some embodiments, a method or device as described herein includes an assessment aspect and a digital treatment aspect, wherein the assessment and digital treatment together improve social reciprocity of the individual receiving the digital treatment. More specifically, in some embodiments, machine learning models are used to evaluate individuals and select those who: (1) have a need for improvement in social reciprocity, and (2) are expected to significantly improve their social reciprocity with the use of digital therapy. It is important to note that while some individuals are able to interact therapeutically with digital therapy, other individuals are unable to benefit from digital therapy (e.g., due to cognitive deficits), because they cannot interact with the digital therapy sufficiently to reach a therapeutic level. Embodiments of the methods and devices described herein select the individuals who will benefit more from digital treatment, so that only those individuals are provided with digital treatment, while other treatment modalities are provided to individuals determined not to benefit from digital treatment. In some embodiments, an individual receiving digital treatment is provided with a therapeutic agent or additional therapy that enhances their digital treatment experience by, for example, improving the cognition and/or attention of the individual during a digital treatment session.

The digital treatment may include a social interaction session during which the subject participates in a social interaction with the assistance of a social learning aid. In some cases, the personal treatment plan includes one or more social interaction sessions. Social interaction sessions may be scheduled, such as, for example, at least one, two, three, four, five, six, seven sessions per week. The digital treatment implemented as part of the personal treatment plan may be programmed to last for at least one, two, three, four, five, six, seven, eight, nine, or ten or more weeks.

In some cases, digital treatment is implemented using artificial intelligence. For example, artificial intelligence driven computing devices (such as wearable devices) may be used to provide behavioral interventions to improve social outcomes for children with behavioral, neurological or mental health conditions or disorders. In some embodiments, the personalized treatment regimen is adaptive, e.g., dynamically updating or reconfiguring its therapy based on feedback captured from the subject during ongoing treatment and/or additional relevant information (e.g., results from an autism assessment).

Fig. 1A and 1B illustrate some exemplary developmental disorders that may be assessed using an assessment program as described herein. The assessment program can be configured to assess a subject's risk of having one or more developmental disorders, such as two or more related developmental disorders. Developmental disorders may have at least some overlap in the symptoms or characteristics of the subject. Such developmental disorders may include pervasive developmental disorder (PDD), autism spectrum disorder (ASD), social communication disorder, restricted and repetitive behaviors, interests, and activities (RRBs), autism ("classical autism"), Asperger's syndrome ("high-functioning autism"), pervasive developmental disorder not otherwise specified (PDD-NOS, "atypical autism"), attention deficit and hyperactivity disorder (ADHD), speech and language delay, obsessive-compulsive disorder (OCD), intellectual disability, learning disability, or any other relevant developmental disorder, such as those defined in any version of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The assessment program can be configured to determine the risk of the subject suffering from each of a plurality of disorders. The assessment program may be configured to determine that the subject is at greater risk of a first disorder or a second disorder of the plurality of disorders. The assessment program may be configured to determine that the subject is at risk of having a first disorder and a second disorder that are co-morbid. The assessment program may be configured to predict that the subject has normal development, or has a low risk of having any of the disorders the program is configured to screen for. The assessment program may further be configured to have high sensitivity and specificity to distinguish between different severity levels of a disorder; for example, the program may be configured to predict the risk of a subject suffering from level 1 ASD, level 2 ASD, or level 3 ASD as defined in the fifth edition of the DSM (DSM-V).

Many developmental disorders may have similar or overlapping symptoms, which can complicate the assessment of a developmental disorder in a subject. The assessment programs described herein may be configured to assess a plurality of characteristics of a subject that may be associated with one or more developmental disorders. The program may include an assessment model that has been trained using a large number of clinically validated data sets to learn the statistical relationships between features of a subject and clinical diagnoses of one or more developmental disorders. Thus, when a subject participates in the assessment program, the subject's feature value for each assessed feature (e.g., the subject's answer to a question) may be queried against the assessment model to identify the statistical correlation, if any, of the subject's feature value with one or more of the screened developmental disorders. Based on the feature values provided by the subject and the relationship between these values and the predicted risk of one or more developmental disorders determined by the assessment model, the assessment program may dynamically adjust the selection of the next feature to be assessed for the subject. Based on a determination that the subject is at risk for a particular disorder among the plurality of disorders screened, the selection of the next feature to be evaluated may include identifying the next most predictive feature. For example, if the assessment model predicts, after the subject has answered the first five questions of the assessment program, that the subject's risk of autism is low and the risk of ADHD is relatively high, the assessment program may choose to next evaluate features more relevant to ADHD (e.g., questions whose answers are highly correlated with a clinical diagnosis of ADHD may then be presented to the subject). Thus, the assessment procedures described herein can be dynamically tailored to the risk profile of a particular subject and enable a high level of granularity in assessing the subject's disorder or disorders.

FIG. 2 is a schematic diagram of an exemplary data processing module 100 for providing an assessment program as described herein. The data processing module 100 generally includes a pre-processing module 105, a training module 110, and a prediction module 120. The data processing module may extract training data 150 from a database or take in new data 155 using the user interface 130. The pre-processing module may apply one or more transformations to normalize the training data or new data for the training module or prediction module. The pre-processed training data may be passed to the training module, which may build an assessment model 160 based on the training data. The training module may further include a validation module 115, the validation module 115 configured to validate the trained assessment model using any suitable validation algorithm (e.g., stratified K-fold cross-validation). The pre-processed new data may be passed to the prediction module, which may output a prediction 170 of the developmental disorder of the subject by fitting the new data to the assessment model constructed in the training module. The prediction module may further include a feature recommendation module 125, the feature recommendation module 125 configured to select or recommend the next feature to be evaluated on the subject based on previously provided feature values of the subject.

The training data 150 used by the training module to construct the assessment model may include a plurality of data sets from a plurality of subjects, each subject's data set including an array of features and corresponding feature values, and a classification of the subject's developmental disorder or condition. As described herein, a feature of a subject can be assessed by one or more questions asked of the subject, observations of the subject, or structured interactions with the subject. For example, the feature values may include one or more of an answer to a question, an observation of the subject (such as a characteristic based on a video image), or a response of the subject to a structured interaction. Each feature may be associated with identifying one or more developmental disorders or conditions, and each corresponding feature value may be indicative of the degree of presence of the feature in a particular subject. For example, a feature may be the subject's ability to engage in imaginative or pretend play, and the feature value for a particular subject may be a score of 0, 1, 2, 3, or 8, where each score corresponds to the degree of presence of the feature in the subject (e.g., 0 for varied pretend play, 1 for some pretend play, 2 for occasional pretending or highly repetitive pretend play, 3 for no pretend play, 8 for not applicable). The feature may be evaluated on the subject by questions presented to the subject or to a caregiver, such as a parent, where the answers to the questions comprise the feature values. Alternatively or in combination, the feature may be observed on the subject, for example, using a video of the subject engaging in a certain behavior, and the feature value may be identified through the observation. In addition to the array of features and corresponding feature values, the data set for each subject in the training data includes a classification of the subject. For example, the classification may be autism, autism spectrum disorder (ASD), or non-spectrum. Preferably, the classification includes a clinical diagnosis assigned by qualified personnel, such as a licensed clinical psychologist, in order to improve the predictive accuracy of the generated assessment model. The training data may include data sets available from large data repositories, such as Autism Diagnostic Interview-Revised (ADI-R) data and/or Autism Diagnostic Observation Schedule (ADOS) data available from the Autism Genetic Resource Exchange (AGRE), or any data set available from any other suitable data repository (e.g., the Boston Autism Consortium (AC), the Simons Foundation, the National Database for Autism Research, etc.). Alternatively or in combination, the training data may include a large self-reported data set that may be crowd-sourced from users (e.g., via a website, mobile application, etc.).

For example, the pre-processing module 105 may be configured to apply one or more transformations to the extracted training data to clean and normalize the data. The pre-processing module may be configured to discard features that contain stray metadata or that contain very few observations. The pre-processing module may be further configured to normalize the encoding of feature values. Depending on the source of the data set, different data sets may have the same feature value encoded in different ways. For example, "900", "900.0", "904", "904.0", "-1", "-1.0", "None", and "NaN" may all be encodings of a "missing" feature value. The pre-processing module may be configured to recognize encoding variants of the same feature value and normalize the data sets to have a uniform encoding for a given feature value. The pre-processing module may thereby reduce irregularities in the input data for the training module and the prediction module, improving the robustness of the training module and the prediction module.

In addition to normalizing the data, the pre-processing module may be configured to re-encode certain feature values into different data representations. In some cases, the raw data representation of the feature values in the data set may be less than ideal for building the assessment model. For example, for a categorical feature in which corresponding feature values are encoded as integers from 1 to 9, each integer value may have different semantic content independent of the other values. For example, a value of "1" and a value of "9" may both be highly associated with a particular classification, while a value of "5" is not. A raw data representation of the feature values (where the feature values are encoded as the integers themselves) may not be able to capture the unique semantic content of each value, because the values are represented linearly (e.g., an answer of "5" would place the subject exactly between "1" and "9" when considering the feature alone; however, in the above case where "1" and "9" are highly correlated with a given classification and "5" is not, such an interpretation is incorrect). To ensure that the semantic content of each feature value is captured in the construction of the assessment model, the pre-processing module may include instructions to re-encode certain feature values, such as feature values corresponding to categorical features, in a "one-hot" manner, for example. In the "one-hot" representation, a feature value may be represented as an array of bits having values of 0 or 1, with the number of bits corresponding to the number of possible values of the feature. Only the feature value of the subject may be represented as "1", while all other values are represented as "0". For example, if the subject answers "4" to a question whose possible answers comprise integers from 1 to 9, the raw data representation may be [4] and the one-hot representation may be [0 0 0 1 0 0 0 0 0]. Such a one-hot representation allows each value to be considered independently of the other possible values, where such independent consideration is necessary. Thus, by re-encoding the training data using the most appropriate data representation for each feature, the pre-processing module may improve the accuracy of the assessment model constructed using the training data.
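
A minimal sketch of this one-hot re-encoding, assuming answers range from 1 to 9 (missing answers are discussed next):

```python
# Sketch of the one-hot re-encoding described above: a categorical answer
# becomes a bit array with a single 1; a missing answer becomes all zeros.
import numpy as np

def one_hot(answer, n_values=9):
    bits = np.zeros(n_values, dtype=int)
    if answer is not None:                 # missing values remain all-zero
        bits[int(answer) - 1] = 1          # answers are assumed to be 1..n_values
    return bits

print(one_hot(4))      # [0 0 0 1 0 0 0 0 0]
print(one_hot(None))   # [0 0 0 0 0 0 0 0 0]
```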

The pre-processing module may be further configured to impute any missing data values so that downstream modules can process the data correctly. For example, if a training data set provided to the training module includes data missing the answer to one of the questions, the pre-processing module may provide the missing value so that the data set can be processed correctly by the training module. Similarly, if a new data set provided to the prediction module lacks one or more feature values (e.g., the data set being queried includes only the answer to the first question in a series of questions to be asked), the pre-processing module may provide the missing values, thereby enabling proper processing of the data set by the prediction module. For features having categorical feature values (e.g., the degree to which a certain behavior is displayed by the subject), the missing values may be provided as a designated data representation. For example, if categorical features are encoded in a one-hot representation as described herein, the pre-processing module may encode a missing categorical feature value as an array of "0" bits. For a feature with a continuous feature value (e.g., the age of the subject), the mean of all possible values may be provided in place of the missing value (e.g., an age of 4 years).
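
A minimal sketch of imputing a missing continuous feature such as age; here the mean of the observed values is used as an approximation of the described strategy:

```python
# Sketch of mean imputation for a missing continuous feature (e.g., subject age).
# Column layout and values are illustrative placeholders.
import numpy as np
from sklearn.impute import SimpleImputer

ages = np.array([[3.0], [4.0], [np.nan], [5.0]])   # one continuous feature: age
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(ages))                  # NaN replaced by the mean age, 4.0
```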

For example, the training module 110 may utilize machine learning algorithms or other algorithms to build and train the assessment model to be used in the assessment program. An assessment model may be constructed to capture, based on the training data, the statistical relationships, if any, between a given feature value and a particular developmental disorder to be screened by the assessment program. For example, the assessment model may include the statistical correlations between a plurality of clinical characteristics and clinical diagnoses of one or more developmental disorders. A given feature value may have different predictive utility for classifying each of the plurality of developmental disorders to be assessed in the assessment program. For example, in the foregoing example of a feature describing a subject's ability to engage in imaginative or pretend play, a feature value of "3" or "no pretend play" may have high predictive utility for classifying autism, while the same feature value may have low predictive utility for classifying ADHD. Accordingly, for each feature value, a probability distribution may be extracted that describes the probability that the particular feature value predicts each of the plurality of developmental disorders to be screened by the assessment program. Machine learning algorithms may be used to extract these statistical relationships from the training data and build an assessment model that can produce an accurate prediction of a developmental disorder when a data set including one or more feature values is fitted to the assessment model.

The assessment model may be built using one or more machine learning algorithms, such as support vector machines deploying stepwise backwards feature selection and/or graphical models, both of which may have the advantage of inferring interactions between features. For example, machine learning algorithms or other statistical algorithms may be used, such as alternating decision trees (ADTree), decision stumps, functional trees (FT), logistic model trees (LMT), logistic regression, random forests, linear classifiers, or any machine learning algorithm known in the art. One or more algorithms may be used together to generate an ensemble method, and the ensemble method may be optimized using a machine learning meta-algorithm such as boosting (e.g., AdaBoost, LPBoost, TotalBoost, BrownBoost, MadaBoost, LogitBoost, etc.) to reduce bias and/or variance. Once an assessment model is derived from the training data, the model can be used as a predictive tool to assess the risk of a subject having one or more developmental disorders. For example, the machine learning analysis may be performed using one or more of many programming languages and platforms known in the art, such as R, Weka, Python, and/or Matlab.
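
As a non-limiting illustration of an ensemble optimized with a boosting meta-algorithm, the following sketch uses AdaBoost over decision stumps with placeholder data; it is not the disclosed model.

```python
# Illustrative boosting ensemble: AdaBoost with its default depth-1 decision
# trees ("decision stumps"). Data shapes and labels are placeholders only.
from sklearn.ensemble import AdaBoostClassifier
import numpy as np

X = np.random.rand(100, 20)           # 100 subjects, 20 feature values (placeholders)
y = np.random.randint(0, 2, 100)      # placeholder classifications

model = AdaBoostClassifier(n_estimators=200)   # default base learner is a decision stump
model.fit(X, y)
```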

A random forest classifier (which typically includes multiple decision trees, where the output prediction is the mode of the predicted classifications of the individual trees) may help reduce overfitting to the training data. A random subset of features may be used at each split or decision node to construct the set of decision trees. The best split may be selected using the Gini criterion, where the decision node with the lowest calculated Gini impurity is selected. At prediction time, all of the decision trees can "vote", and the majority vote (or the mode of the predicted classifications) can be output as the predicted classification.
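
A minimal sketch of such a random forest, using placeholder data and the Gini criterion with random feature subsets at each split:

```python
# Sketch of the random forest construction described above: many decision trees,
# Gini impurity at each split, random feature subsets, and a majority vote.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

X = np.random.rand(200, 30)                    # 200 subjects, 30 features (placeholders)
y = np.random.randint(0, 2, 200)               # placeholder classifications

forest = RandomForestClassifier(
    n_estimators=100,        # e.g., at least about 100 individual decision trees
    criterion="gini",        # split on lowest Gini impurity
    max_features="sqrt",     # random subset of features at each split
)
forest.fit(X, y)
prediction = forest.predict(X[:1])             # majority vote across the trees
```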

FIG. 3 is a schematic diagram illustrating a portion of an exemplary assessment model 160 based on a random forest classifier. The assessment model may include a plurality of individual decision trees 165, such as decision trees 165a and 165b, each of which may be generated independently using a random subset of the features in the training data. Each decision tree may include one or more decision nodes, such as decision nodes 166 and 167 shown in FIG. 3, where each decision node specifies a predicate condition. For example, decision node 166 asserts the condition that, for a given individual's data set, the answer to ADI-R question #86 (age at which the abnormality was first evident) is 4 or less. Decision node 167 asserts the condition that the answer to ADI-R question #52 (showing and directing attention) is 8 or less for the given data set. At each decision node, the decision tree can split based on whether the predicate condition attached to the decision node holds, resulting in prediction nodes (e.g., 166a, 166b, 167a, 167b). Each prediction node may include output values ("value" in FIG. 3) representing "votes" for one or more of the classifications or conditions evaluated by the assessment model. For example, in the prediction nodes shown in FIG. 3, the output values include votes for the individual being classified as having autism or being non-spectrum. A prediction node may lead to one or more additional decision nodes downstream (not shown in FIG. 3), each of which leads to an additional split in the decision tree associated with corresponding prediction nodes having corresponding output values. The Gini impurity may be used as a criterion to find informative features based on which the splits in each decision tree may be constructed. The assessment model may be configured to detect or assess whether a subject has a disorder or condition. In some cases, a separate assessment model is configured to determine whether a subject with a disorder or condition will be improved by digital therapy (e.g., digital therapy configured to promote social reciprocity).

When the dataset being queried in the assessment model reaches a "leaf" or the final prediction node without further downstream splitting, the output value of that leaf can be output as a vote for a particular decision tree. Since the random forest model comprises a plurality of decision trees, the final votes for all trees in the forest can be summed to arrive at a final vote and a corresponding classification for the subject. Although only two decision trees are shown in FIG. 3, the model may include any number of decision trees. The large number of decision trees may help reduce overfitting of the assessment model to the training data by reducing the variance of each individual decision tree. For example, the assessment model can include at least about 10 decision trees, such as at least about 100 individual decision trees or more.

A set of linear classifiers may also be suitable for deriving an assessment model as described herein. Each linear classifier can be trained individually using stochastic gradient descent without an "intercept term". The lack of an intercept term may prevent the classifier from deriving any significance from missing feature values. For example, if a subject did not answer a question, such that the feature value corresponding to the question is represented as an array of "0" bits in the subject's data set, a linear classifier trained without an intercept term would not attribute any significance to this array of "0" bits. The resulting assessment model may thereby avoid establishing a correlation between the selection of features or questions that the subject has answered and the final classification of the subject determined by the model. Such an algorithm may help ensure that only the feature values or answers provided by the subject, rather than which features or questions were asked, are taken into account in the final classification of the subject.
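
A minimal sketch of a linear classifier trained by stochastic gradient descent without an intercept term, using placeholder one-hot data:

```python
# Sketch of an SGD-trained linear classifier with no intercept term, so an
# all-zero ("missing") one-hot block contributes nothing to the decision function.
from sklearn.linear_model import SGDClassifier
import numpy as np

X = np.random.randint(0, 2, size=(200, 45)).astype(float)  # one-hot encoded feature blocks
y = np.random.randint(0, 2, 200)                            # placeholder classifications

clf = SGDClassifier(fit_intercept=False)   # stochastic gradient descent, no intercept term
clf.fit(X, y)
# With fit_intercept=False, an unanswered question (all-zero block) adds 0 to the
# decision function, so it carries no significance in the classification.
```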

The training module may include feature selection. One or more feature selection algorithms (such as support vector machines or convolutional neural networks) may be used to select features capable of distinguishing individuals with and without certain developmental disorders. Different sets of features may be selected as the features relevant to identifying different disorders. A stepwise backwards algorithm may be used in combination with other algorithms. The feature selection procedure may include determining an optimal number of features.

The training module may be configured to evaluate the performance of the derived assessment model. For example, the model can be evaluated for its accuracy, sensitivity, and specificity in classifying the data. The evaluation may be used as a guide in selecting an appropriate machine learning algorithm or its parameters. The training module may thus update and/or refine the derived assessment model to optimize the trade-off between sensitivity (true positive rate) and specificity (true negative rate). Such optimization may be particularly helpful when there is a class imbalance or sample bias in the training data.

In at least some instances, the available training data may be biased towards individuals diagnosed with a particular developmental disorder. In such cases, the training data may produce an assessment model reflecting that sample bias, such that the model assumes subjects are at risk for the particular developmental disorder unless there is strong evidence to the contrary. An assessment model incorporating such a sample bias may perform less than ideally in generating predictions for new or unclassified data, because the new data may be drawn from a subject population that does not include a sample bias similar to that present in the training data. To reduce sample bias when constructing an assessment model using skewed training data, sample weighting may be applied in training the assessment model. Sample weighting may include assigning a relatively higher degree of significance to a specific set of samples during the model training process. For example, during model training, if the training data is biased towards individuals diagnosed with autism, a higher significance may be attributed to the data from individuals not diagnosed with autism (e.g., up to 50 times the significance of data from individuals diagnosed with autism). Such sample weighting techniques may substantially balance the sample bias present in the training data, thereby producing an assessment model with reduced bias and improved accuracy when classifying data in the real world. To further reduce the contribution of training data sample bias to the generation of the assessment model, a boosting technique may be implemented during the training process. Boosting comprises an iterative process in which the weight of each sample data point is updated after one training iteration. For example, samples that are misclassified after an iteration can be updated with higher significances. The training process may then be repeated with the updated weights for the training data.
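
A minimal sketch of applying sample weights during training to counter a skewed training set; the 50-fold up-weighting follows the example above, and the data are placeholders:

```python
# Sketch of sample weighting against class imbalance: samples from the
# under-represented (non-diagnosed) group receive larger weights when fitting.
from sklearn.linear_model import LogisticRegression
import numpy as np

X = np.random.rand(500, 20)                         # placeholder features
y = np.concatenate([np.ones(490), np.zeros(10)])    # heavily skewed toward diagnosed (1)

weights = np.where(y == 0, 50.0, 1.0)               # up-weight the minority class 50x
model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```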

The training module may further include a validation module 115, the validation module 115 configured to validate the assessment model constructed using the training data. For example, the validation module may be configured to implement stratified k-fold cross-validation, where k represents the number of partitions into which the training data is split for cross-validation. For example, k may be any integer greater than 1, such as 3, 4, 5, 6, 7, 8, 9, or 10, or possibly higher, depending on the risk of overfitting the assessment model to the training data.
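A minimal sketch of stratified k-fold cross-validation, assuming scikit-learn and k=5 (the value used in the experimental section below):

from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Synthetic, class-imbalanced stand-in for the training data.
X, y = make_classification(n_samples=600, n_features=30, weights=[0.1, 0.9],
                           random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
print("fold accuracies:", scores.round(3))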

The training module may be configured to save the trained assessment model to local memory and/or a remote server, so that the model can later be retrieved, either for modification by the training module or for use by the prediction module 120 to generate predictions.

FIG. 4 is an exemplary operational flow 400 of a method of the prediction module 120 as described herein. The prediction module 120 may be configured to generate a predicted classification (e.g., a developmental disorder) for a given subject by fitting new data to the assessment model constructed in the training module. At step 405, the prediction module may receive new data that may have been processed by the pre-processing module to normalize it, for example by dropping stray metadata, applying uniform encoding of feature values, re-encoding selected features using different data representations, and/or imputing missing data points, as described herein. The new data may include an array of features and corresponding feature values for the particular subject. As described herein, the features may include a plurality of questions presented to the subject, observations of the subject, or tasks assigned to the subject. The feature values may include input data from the subject corresponding to features of the subject, such as the subject's answers to the questions asked or the subject's responses. The new data provided to the prediction module may or may not have a known classification or diagnosis associated with it; in any event, the prediction module may not use any pre-assigned classification information in generating the predicted classification of the subject. The new data may comprise a complete, previously collected dataset for diagnosing or assessing the subject's risk of having one or more of a plurality of developmental disorders. Alternatively or in combination, the new data may comprise data collected in real time from the subject or a caregiver of the subject, for example using a user interface as described in further detail herein, such that the complete dataset is populated in real time as the subject is sequentially queried and each new feature value provided by the subject is fitted against the assessment model.

At step 410, the prediction module may load a previously saved assessment model, constructed by the training module, from local memory and/or a remote server configured to store the model. At step 415, the new data is fitted to the assessment model to generate a predicted classification of the subject. At step 420, the module may check whether the fit of the data generates a prediction of one or more specific disorders (e.g., autism, ADHD, etc.) within a confidence interval that exceeds a threshold, such as a 90% or higher confidence interval, for example a 95% or higher confidence interval. If so, the prediction module can output the one or more developmental disorders as a diagnosis of the subject or as disorders for which the subject is at risk, as shown in step 425. The prediction module may output a plurality of developmental disorders for which the subject is determined to be at risk beyond the set threshold, optionally presenting the plurality of disorders in order of risk. The prediction module may output the one developmental disorder for which the subject is determined to be at the greatest risk. The prediction module can output two or more developmental disorders for which the subject is determined to be at comorbid risk. The prediction module may output the determined risk for each of the one or more developmental disorders evaluated in the assessment model. If the prediction module cannot fit the data to any specific developmental disorder within a confidence interval at or above the specified threshold, the prediction module may determine, at step 430, whether there are any additional features that can be queried. If the new data comprises a previously collected, complete dataset and the subject cannot be queried for any additional feature values, "no diagnosis" may be output as the predicted classification, as shown in step 440. If the new data includes data collected in real time from the subject or caregiver during the prediction process, such that the dataset is updated with each new input data value provided to the prediction module and each updated dataset is fitted to the assessment model, the prediction module may be able to query the subject for additional feature values. If the prediction module has already obtained data for all of the features included in the assessment model, the prediction module may output "no diagnosis" as the predicted classification of the subject, as shown in step 440. If there are features that have not yet been presented to the subject, the prediction module may obtain additional input data values from the subject, for example by presenting additional questions to the subject, as shown in step 435. The updated dataset, including the additional input data, may then be fitted to the assessment model again (step 415), and the loop may continue until the prediction module can generate an output.
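As a rough outline of this loop, the adaptive querying can be sketched as follows; the feature list, the encode() helper, and the ask_next_question() callback are hypothetical placeholders rather than components of the disclosed system, and model can be any classifier exposing a predict_proba method.

import numpy as np

FEATURES = ["A", "B", "C", "D", "E"]

def encode(answers):
    # Unanswered questions are encoded as 0, mirroring the missing-data handling above.
    return np.array([[answers.get(f, 0) for f in FEATURES]], dtype=float)

def predict_with_adaptive_querying(model, answers, ask_next_question,
                                   confidence_threshold=0.90):
    while True:
        proba = model.predict_proba(encode(answers))[0]
        best = int(proba.argmax())
        if proba[best] >= confidence_threshold:
            return model.classes_[best]          # step 425: confident classification
        nxt = ask_next_question(answers)
        if nxt is None:
            return "no diagnosis"                # step 440: no more features to query
        question, value = nxt
        answers[question] = value                # step 435: extend the dataset and refit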

FIG. 5 is an exemplary operational flow 500 of the feature recommendation module 125 described herein, as a non-limiting example. The prediction module may include a feature recommendation module 125, the feature recommendation module 125 configured to identify, select, or recommend a next most predictive or relevant feature to be evaluated on the subject based on previously provided feature values of the subject. For example, the feature recommendation module may be a question recommendation module, wherein the module may select the most predictive next question to present to the subject or caregiver based on answers to previously presented questions. The feature recommendation module may be configured to recommend one or more next questions or features having the highest predictive utility when classifying the developmental disorder of the particular subject. Thus, the feature recommendation module may help dynamically adapt the assessment procedure for the subject to enable the prediction module to generate predictions with reduced assessment length and improved sensitivity and accuracy. Furthermore, the feature recommendation module may help improve the specificity of the final prediction generated by the prediction module by selecting features to be presented to the subject based on feature values previously provided by the subject that are most relevant in predicting the one or more specific developmental disorders that the particular subject is most likely to suffer from.

In step 505, the feature recommendation module may receive as input data that has been obtained from the subject in the assessment procedure. The input subject data may include an array of features and corresponding feature values provided by the subject. At step 510, the feature recommendation module may select one or more features to consider as "candidate features" for recommendation for use as the next feature(s) to be presented to one or more of the subject, caregiver, or clinician. Features that have already been presented may be excluded from the set of candidate features to be considered. Optionally, additional features that meet certain criteria may also be excluded from the set of candidate features, as described in further detail herein.

In step 515, the feature recommendation module may evaluate the "expected feature importance" of each candidate feature. Candidate features may be evaluated for their "expected feature importance", i.e., the estimated utility of each candidate feature in predicting a particular developmental disorder for the particular subject. The feature recommendation module may utilize an algorithm based on: (1) the importance or relevance of a particular feature value in predicting a particular developmental disorder; and (2) the probability that the subject is likely to provide that particular feature value. For example, if an answer of "3" to the ADOS question B5 is highly correlated with a classification of autism, that answer may be considered a feature value with high utility for predicting autism. If the probability that the current subject will answer "3" to question B5 is also high, the feature recommendation module may determine that this question has a high expected feature importance. Algorithms that may be used to determine the expected feature importance of a feature are described in further detail with reference to FIG. 6, for example.

In step 520, the feature recommendation module may select one or more candidate features to be presented next to the subject, based on the expected feature importance determined in step 515. For example, the expected feature importance of each candidate feature may be expressed as a score or a real number, which may then be ranked against the other candidate features. Candidate features having a desired rank, such as the top 10, top 5, top 3, top 2, or the single highest-ranked feature, may be selected as the features to be presented next to the subject.

FIG. 6 is an exemplary operational flow 600 of the expected feature importance determination algorithm 127 performed by the feature recommendation module 125 as described herein.

At step 605, the algorithm may determine the importance or relevance of a particular feature value in predicting a particular developmental disorder. The importance or relevance of a particular feature value in predicting a particular developmental disorder can be derived from an assessment model constructed using training data. Such "feature value importance" can be conceptualized as measuring the degree of relevance of a given feature value's role (whether it should be present or absent) in determining the final classification of a subject. For example, if the assessment model contains a random forest classifier, the importance of a particular feature value may be a function of where the feature is located in the branches of the random forest classifier. In general, a feature may have a relatively high feature importance if the average position of the feature in the decision tree is relatively high. The importance of feature values given a particular assessment model may be efficiently calculated by a feature recommendation module or by a training module that may pass the calculated statistics to the feature recommendation module. Alternatively, the importance of a particular feature value may be a function of the actual prediction confidence that would be obtained if the feature value was provided by the subject. For each possible feature value of a given candidate feature, the feature recommendation module may be configured to calculate an actual prediction confidence for predicting one or more developmental disorders based on the feature values previously provided by the subject and the currently assumed feature values.

Each feature value may have a different importance for each developmental disorder for which the assessment program is designed to screen. Thus, the importance of each feature value can be expressed as a probability distribution that describes the probability that the feature value produces an accurate prediction of each of the plurality of developmental disorders being evaluated.

At step 610, the feature recommendation module may determine the probability that the subject will provide each possible feature value. Any suitable statistical model may be used to calculate the probability that the subject is likely to provide a particular feature value. For example, a probabilistic graphical model may be used to find the value of an expression such as

prob(E=1|A=1,B=2,C=1)

where A, B, and C represent different features or questions in the prediction module, and the integers 1 and 2 represent different possible feature values for those features (or possible answers to the questions). Bayes' rule can then be used to calculate the probability that the subject will provide a particular feature value, using an expression such as:

prob(E=1|A=1,B=2,C=1)=prob(E=1,A=1,B=2,C=1)/prob(A=1,B=2,C=1)

Such expressions can be computationally expensive, both in terms of computation time and required processing resources. Alternatively, or in combination with explicitly applying Bayes' rule, logistic regression or other statistical estimators may be used, where the probabilities are estimated using parameters derived from a machine learning algorithm. For example, the probability that the subject is likely to provide a particular feature value may be estimated using the following expression:

prob(E=1|A=1,B=2,C=1) ≈ sigmoid(a1*A + a2*B + a3*C + a4), where a1, a2, a3, and a4 are constant coefficients learned from the trained assessment model using an optimization algorithm that seeks to maximize the accuracy of the expression, and where sigmoid is a nonlinear function that maps the expression onto a probability between 0 and 1. Such an estimator can be trained quickly, and the resulting expression can be evaluated quickly in an application, for example during administration of an assessment procedure. Although four coefficients are referenced, as one of ordinary skill in the art will recognize, any helpful number of coefficients may be used.
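A small numerical sketch of this logistic estimate follows; the coefficient values are invented for illustration and would in practice be fitted to the training data.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a1, a2, a3, a4 = 0.8, -0.4, 0.3, -0.5   # hypothetical learned coefficients
A, B, C = 1, 2, 1                        # previously provided answers

prob_E_equals_1 = sigmoid(a1 * A + a2 * B + a3 * C + a4)
print(round(prob_E_equals_1, 3))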

At step 615, the expected importance of each feature value may be determined based on a combination of the metrics computed in steps 605 and 610. Based on these two factors, the feature recommendation module may determine the expected utility of a particular feature value in predicting a particular developmental disorder. Although reference is made herein to an expected importance determined via multiplication, the expected importance may be determined by combining the coefficients and parameters in many ways, such as by using look-up tables, logic, or division, for example.

At step 620, steps 605-615 may be repeated for each possible feature value of each candidate feature. For example, if there are 4 possible answers to a particular question, the expected importance of each of the 4 possible answers is determined.

At step 625, the total expected importance, or expected feature importance, of each candidate feature may be determined. The expected feature importance of a feature may be determined by summing the expected importances of all of its possible feature values, as determined in step 620. Thus, by summing the expected utilities of all possible feature values for a given feature, the feature recommendation module can determine the overall expected feature importance of that feature for predicting a particular developmental disorder in response to previous answers.

At step 630, steps 605-625 may be repeated for each candidate feature being considered by the feature recommendation module. The candidate features may comprise a subset of the possible features, such as questions. Thus, an expected feature importance score may be generated for each candidate feature, and the candidate features may be ranked in order from highest to lowest expected feature importance.

Optionally, a third factor may be considered in determining the importance of each feature value, in addition to the two factors determined in steps 605 and 610. Based on the feature values previously provided by the subject, the probability that the subject has one or more of a plurality of developmental disorders can be determined. Such probabilities may be determined based on probability distributions stored in the assessment model, which indicate the probability that the subject has each of the plurality of screened developmental disorders given the feature values the subject has provided. In selecting the next feature to present to the subject, the algorithm may be configured to give greater weight to feature values most important or most relevant to predicting the one or more developmental disorders that the current subject is most likely to have. For example, if a subject has previously provided feature values indicating that the subject has a higher probability of having mental retardation or speech and language delay than any other developmental disorder being evaluated, the feature recommendation module may prefer features of high importance for predicting mental retardation or speech and language delay over features of high importance for predicting autism, ADHD, or any other developmental disorder that the assessment is designed to screen for. Thus, the feature recommendation module may enable the prediction module to tailor the prediction process to the current subject, presenting more features relevant to the subject's potential developmental disorder so as to produce a final classification with greater granularity and confidence.

While the above steps illustrate an exemplary operational flow 600 of the desired feature importance determination algorithm 127, those of ordinary skill in the art will recognize many variations based on the teachings described herein. The steps may be performed in a different order. Steps may be added or deleted. Some steps may include sub-steps of other steps. Many of the steps may be repeated as often as desired by the user.

An exemplary embodiment of a feature recommendation module is now described. Subject X has provided answers (feature values) to questions (features) A, B and C in the assessment program:

Subject X = {'A': 1, 'B': 2, 'C': 1}

To maximize the confidence with which a final classification or diagnosis can be reached, the feature recommendation module may determine whether question D or question E should be presented next. Given the previous answers of subject X, the feature recommendation module determines the probability that subject X will provide each possible answer to each of questions D and E, as follows:

prob(E=1|A=1,B=2,C=1)=0.1

prob(E=2|A=1,B=2,C=1)=0.9

prob(D=1|A=1,B=2,C=1)=0.7

prob(D=2|A=1,B=2,C=1)=0.3

The feature importance of each possible answer to each of questions D and E may be calculated based on the assessment model as described herein. Alternatively, the feature importance of each possible answer to each of questions D and E may be calculated as the actual prediction confidence that would be obtained if the subject gave that particular answer. The importance of each answer may be represented on any suitable numerical scale, for example:

importance(E=1) = 1
importance(E=2) = 3
importance(D=1) = 2
importance(D=2) = 4

Based on the calculated probabilities and feature value importances, the feature recommendation module may calculate the expected feature importance of each question as:

expected importance(E) = prob(E=1|A=1,B=2,C=1) * importance(E=1) + prob(E=2|A=1,B=2,C=1) * importance(E=2) = 0.1*1 + 0.9*3 = 2.8

expected importance(D) = prob(D=1|A=1,B=2,C=1) * importance(D=1) + prob(D=2|A=1,B=2,C=1) * importance(D=2) = 0.7*2 + 0.3*4 = 2.6

Therefore, even though the answers to question D generally have higher feature importances, the expected feature importance (also referred to as relevance) of question E is determined to be higher than that of question D. The feature recommendation module may therefore select question E as the next question to present to subject X.
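The worked example above can be expressed as a short helper; the probability and importance values are the ones from the example and are purely illustrative.

def expected_feature_importance(prob_by_value, importance_by_value):
    # Sum of (probability of each answer) x (importance of that answer).
    return sum(prob_by_value[v] * importance_by_value[v] for v in prob_by_value)

question_E = expected_feature_importance({1: 0.1, 2: 0.9}, {1: 1, 2: 3})
question_D = expected_feature_importance({1: 0.7, 2: 0.3}, {1: 2, 2: 4})
print(round(question_E, 1), round(question_D, 1))   # 2.8 2.6 -> present question E next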

When selecting the next best feature to present to the subject, the feature recommendation module 125 may be further configured to exclude one or more candidate features from consideration if the candidate features have a high covariance with features already presented to the subject. The covariance of the different features may be determined based on the training data and may be stored in an assessment model constructed by the training module. If a candidate feature has a high covariance with previously presented features, the candidate feature may add relatively little additional predictive utility and may therefore be omitted for future presentation to the subject in order to optimize the efficiency of the assessment procedure.

Prediction module 120 can interact with a person participating in the assessment procedure (e.g., the subject or a caregiver of the subject) using user interface 130. The user interface may be provided on any computing device having a display, such as a personal computer, tablet computer, or smartphone, that enables the user to access the prediction module. The computing device may include a processor comprising instructions for providing the user interface, for example in the form of a mobile application. The user interface may be configured to display instructions from the prediction module to the user and/or to receive input from the user via an input method provided by the computing device. Thus, a user may participate in an assessment procedure as described herein by interacting with the prediction module through the user interface, for example by providing answers (feature values) in response to questions (features) presented by the prediction module. The user interface may be configured to administer the assessment procedure in real time, such that the user answers one question at a time and the prediction module selects the next best question to ask based on recommendations made by the feature recommendation module. Alternatively or in combination, the user interface may be configured to receive a complete set of new data from the user, for example by allowing the user to upload a complete set of feature values corresponding to a set of features.

As described herein, features of interest associated with identifying one or more developmental disorders can be assessed in a subject in a number of ways. For example, the subject or a caregiver or clinician may be asked a series of questions designed to assess the extent to which a feature of interest is present in the subject. The provided answers may then represent the corresponding feature values of the subject. The user interface may be configured to present to the subject (or anyone participating in the assessment procedure on behalf of the subject) a series of questions that may be dynamically selected from a set of candidate questions as described herein. Such question-and-answer-based assessment procedures may be administered entirely by a machine and may therefore provide a rapid prediction of the subject's developmental disorder(s).

Alternatively or in combination, features of interest in the subject may be assessed by observing the subject's behavior (e.g., using videos of the subject). The user interface may be configured to allow the subject or a caregiver of the subject to record or upload one or more videos of the subject. The video footage can then be analyzed by qualified personnel to determine the subject's feature values for the features of interest. Alternatively or in combination, the video analysis for determining the feature values may be performed by a machine. For example, video analysis may include detecting objects (e.g., the subject, the subject's spatial location, a face, eyes, mouth, hands, limbs, fingers, toes, feet, etc.) and then tracking the movement of those objects. The video analysis may infer the gender of the subject and/or the proficiency of the subject's spoken language(s). Video analysis can identify specific landmarks on the face or head, such as the nose, eyes, lips, and mouth, to infer facial expressions and track those expressions over time. Video analysis may detect eyes, limbs, fingers, toes, hands, and feet, and track their movement over time to infer behaviors. In some cases, the analysis may further infer the intent of a behavior, for example that the child is uncomfortable with noise or loud music, is engaging in self-injurious behavior, is imitating another person's actions, and so on. The sound and/or speech recorded in the video file may also be analyzed. The analysis may infer a context for the subject's behavior. Sound and speech analysis may infer the subject's perception. Analysis of the subject's videos performed by humans and/or machines can produce feature values for the features of interest, which in turn can be suitably encoded for input into the prediction module. A prediction of the subject's developmental disorder may then be generated based on fitting the subject's feature values to an assessment model constructed using training data.
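As one hedged sketch of machine-based video analysis (OpenCV is assumed purely for illustration and is not the specific analysis pipeline of this disclosure), a face can be detected in each frame and its position recorded over time as a crude movement track.

import cv2

def track_face_positions(video_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    positions = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            positions.append((x + w // 2, y + h // 2))  # face center per frame
    capture.release()
    return positions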

Alternatively or in combination, the feature of interest on the subject may be assessed by structured interaction with the subject. For example, a subject may be asked to play a game, such as a computer game, and the performance of the subject on the game may be used to assess one or more characteristics of the subject. One or more stimuli can be presented to the subject (e.g., visual stimuli presented to the subject via a display), and the subject's response to the stimuli can be used to assess a characteristic of the subject. The subject may be asked to perform a certain task (e.g., the subject may be asked to pop a bubble with his or her finger), and the subject's response to the request or the subject's ability to perform the requested task may be used to assess a characteristic of the subject.

The methods and apparatus described herein can be configured in many ways to determine the next most predictive or relevant question. At least a portion of the software instructions described herein may be configured to run locally on a local device to provide the user interface, present questions, and receive answers to the questions. The local device may be configured with software instructions of an Application Program Interface (API) to query a remote server for the most predictive next question. For example, the API may return an identified question based on feature importance as described herein. Alternatively or in combination, the local processor may be configured with instructions for determining the most predictive next question in response to previous answers. For example, the prediction module 120 may comprise software instructions of a remote server, software instructions of a local processor, or combinations thereof. Alternatively or in combination, the feature recommendation module 125 may comprise software instructions of a remote server, software instructions of a local processor configured to determine the most predictive next question, or combinations thereof, for example. The exemplary operational flow 600 of the expected feature importance determination algorithm 127 performed by the feature recommendation module 125 as described herein may be performed using one or more processors as described herein, for example.

FIG. 7 illustrates a method 700 of administering an assessment procedure as described herein. Method 700 may be performed with a user interface provided on a computing device, the computing device having a display and means for receiving user input in response to instructions shown on the display. The user participating in the assessment procedure may be the subject himself or herself, or another person participating in the procedure on behalf of the subject, such as the subject's caregiver. At step 705, an Nth question related to an Nth feature may be presented to the user on the display. At step 710, the subject's answer containing the corresponding Nth feature value may be received. At step 715, the dataset for the current subject may be updated to include the Nth feature value provided for the subject. At step 720, the updated dataset may be fitted to the assessment model to generate a predicted classification. Step 720 may be performed by the prediction module, as described herein. At step 725, a check may be performed to determine whether the fit of the data can generate a prediction of a particular developmental disorder (e.g., autism, ADHD, etc.) with sufficient confidence (e.g., within at least a 90% confidence interval). If so, the predicted developmental disorder may be displayed to the user, as shown at step 730. If not, then at step 735 a check may be performed to determine whether there are any additional features that can be queried. If so, as shown at step 740, the feature recommendation module may select the next feature to present to the user, and steps 705-725 may be repeated until a final prediction (e.g., a particular developmental disorder or "no diagnosis") can be displayed to the subject. If no additional features can be presented to the subject, "no diagnosis" may be displayed to the subject, as shown at step 745.

While the above steps illustrate an exemplary method 700 of administering an assessment program, one of ordinary skill in the art will recognize many variations based on the teachings described herein. The steps may be performed in a different order. Steps may be added or deleted. Some steps may include sub-steps of other steps. Many of the steps may be repeated as often as desired by the user.

The present disclosure provides a computer-controlled system programmed to implement the methods of the present disclosure. FIG. 8 illustrates a computer system 801 suitable for incorporating the methods and apparatus described herein. Computer system 801 may process various aspects of information of the present disclosure, such as questions and answers, responses, and statistical analyses, to name a few examples. Computer system 801 may be the user's electronic device or a computer system located remotely with respect to the electronic device. The electronic device may be a mobile electronic device.

The computer system 801 includes a central processing unit (CPU, also referred to herein as "processor" and "computer processor") 805, which may be a single-core or multi-core processor, or multiple processors for parallel processing. Computer system 801 also includes memory or a memory location 810 (e.g., random access memory, read-only memory, flash memory), an electronic storage unit 815 (e.g., a hard disk), a communication interface 820 (e.g., a network adapter) for communicating with one or more other systems, and peripheral devices 825 (such as cache, other memory, data storage, and/or an electronic display adapter). The memory 810, storage unit 815, interface 820, and peripheral devices 825 communicate with the CPU 805 through a communication bus (solid lines), such as a motherboard. The storage unit 815 may be a data storage unit (or data repository) for storing data. Computer system 801 may be operatively coupled to a computer network ("network") 830 by way of the communication interface 820. The network 830 may be the internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the internet. The network 830 is in some cases a telecommunication and/or data network. The network 830 may include one or more computer servers, which may support distributed computing, such as cloud computing. The network 830 may, in some cases with the aid of computer system 801, implement a peer-to-peer network, which may enable devices coupled to computer system 801 to act as clients or servers.

CPU 805 may execute a sequence of machine-readable instructions, which may be embodied in a program or software. The instructions may be stored in a memory location, such as memory 810. The instructions may be directed to the CPU 805, which may subsequently program or otherwise configure the CPU 805 to implement the methods of the present disclosure. Examples of operations performed by the CPU 805 may include fetch, decode, execute, and write-back.

CPU 805 may be part of a circuit such as an integrated circuit. One or more other components of system 801 may be included in the circuitry. In some cases, the circuit is an Application Specific Integrated Circuit (ASIC).

The storage unit 815 may store files such as drivers, libraries, and saved programs. The storage unit 815 may store user data, such as user preferences and user programs. Computer system 801 may, in some cases, include one or more additional data storage units that are external to computer system 801, such as on a remote server that communicates with computer system 801 over an intranet or the internet.

Computer system 801 may communicate with one or more remote computer systems via network 830. For example, computer system 801 may communicate with a remote computer system of a user (e.g., a parent). Examples of remote computer systems and mobile communication devices include personal computers (e.g., laptop PCs), slate or tablet PCs (e.g., iPad, Galaxy Tab), telephones, smartphones (e.g., iPhone, Android-enabled devices), or personal digital assistants. A user may access computer system 801 using network 830.

The methods as described herein may be implemented by machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 801 (such as, for example, on the memory 810 or electronic storage unit 815). The machine executable code or machine readable code may be provided in the form of software. During use, code may be executed by the processor 805. In some cases, code may be retrieved from the storage unit 815 and stored on the memory 810 for ready access by the processor 805. In some cases, the electronic storage unit 815 may not be included and the machine executable instructions are stored on the memory 810.

The code may be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or it may be compiled at runtime. The code can be supplied in a programming language selected to enable the code to execute in a pre-compiled or as-compiled fashion.

Aspects of the systems and methods provided herein, such as computer system 801, may be embodied in programming. Various aspects of the technology may be thought of as "products" or "articles of manufacture" in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. The machine-executable code may be stored on an electronic storage unit such as a memory (e.g., read-only memory, random access memory, flash memory) or a hard disk. "Storage"-type media may include any or all of the tangible memory of computers, processors, or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives, and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the internet or various other telecommunication networks. Such communications may, for example, enable loading of the software from one computer or processor into another, for example from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical, and electromagnetic waves, such as those used across physical interfaces between local devices, through wired and optical landline networks, and over various air links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, may also be considered media bearing the software. As used herein, unless restricted to non-transitory, tangible "storage" media, terms such as computer or machine "readable medium" refer to any medium that participates in providing instructions to a processor for execution.

Thus, a machine-readable medium, such as computer executable code, may take many forms, including but not limited to tangible storage media, carrier wave media, or physical transmission media. Non-volatile storage media include, for example, optical or magnetic disks, such as any storage device in any computer(s), etc., such as those shown in the figures that may be used to implement a database, etc. Volatile storage media includes dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media can take the form of electrical or electromagnetic signals, or acoustic or light waves, such as those generated during Radio Frequency (RF) and Infrared (IR) data communications. Thus, common forms of computer-readable media include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM, and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.

The computer system 801 may include or be in communication with an electronic display 835 that comprises a user interface (UI) 840 for providing, for example, questions and answers, analysis results, and recommendations. Examples of UIs include, without limitation, graphical user interfaces (GUIs) and web-based user interfaces.

The methods and systems of the present disclosure may be implemented by one or more algorithms and instructions provided with one or more processors as disclosed herein. The algorithms may be implemented in software when executed by the central processing unit 805. The algorithm may be, for example, a random forest, a graphical model, a support vector machine, or others.

While the above steps illustrate a method according to an example system, one of ordinary skill in the art will recognize many variations based on the teachings described herein. The steps may be performed in a different order. Steps may be added or deleted. Some steps may include sub-steps. Many of the steps may be repeated as often as beneficial to the platform.

Each example as described herein may be combined with one or more other examples. Furthermore, one or more components of one or more examples may be combined with other examples.

Experimental data

A data processing module as described herein was built on the Anaconda Distribution of Python 2.7. The training data used to construct and train the assessment model included data generated by the Autism Genetic Resource Exchange (AGRE), which performed in-home assessments to collect ADI-R and ADOS data from parents and children in their homes. ADI-R comprises a parental interview presenting a total of 93 questions and yields a diagnosis of autism or no autism. ADOS comprises a semi-structured interview of the child that yields a diagnosis of autism, ASD, or no diagnosis, wherein the child is administered one of four possible modules, each comprising about 30 questions, based on language level. The data included clinical diagnoses of the children derived from the assessments; if a child's ADI-R and ADOS diagnoses differed, a licensed clinical psychologist assigned a consensus diagnosis for the dataset of the child in question. The training data contained a total of 3,449 data points, with 3,315 cases (autism or ASD) and 134 controls (non-spectrum). The features evaluated in the training data targeted three key domains: language, social communication, and repetitive behaviors.

As described herein, the assessment model was built using a boosted random forest classifier. Before training the assessment model on the training data, the training data was pre-processed to normalize the data and re-encode the categorical features in a one-hot representation as described herein. Since the training data was skewed towards individuals with autism or ASD, sample weighting was applied to attribute up to 50 times higher significance to data from non-spectrum individuals compared to data from autistic/ASD individuals. The assessment model was trained iteratively with boosting, updating the weights of the data points after each iteration to increase the significance attributed to misclassified data points, and retraining with the updated significances.

The trained model was validated using stratified k-fold cross-validation with k=5. Cross-validation yielded an accuracy of about 93-96%, where accuracy is defined as the percentage of subjects correctly classified by the model in the binary classification task (autism/non-spectrum). Since the training data contained sample bias, a confusion matrix was calculated to determine how often the model confused one class (autism or non-spectrum) with the other. The percentage of correctly classified autistic individuals was about 95%, while the percentage of correctly classified non-spectrum individuals was about 76%. It should be noted, however, that the model may be adjusted to fit one class more closely than the other, in which case the percentage of correct classifications for each class can change. FIG. 9 shows a receiver operating characteristic (ROC) curve mapping sensitivity versus false positive rate for an exemplary assessment model as described herein. The true positive rate (sensitivity) of the autism diagnosis is mapped on the y-axis as a function of the false positive rate of the diagnosis mapped on the x-axis. Each of the three curves labeled "Fold #0", "Fold #1", and "Fold #2" corresponds to a different fold of the cross-validation procedure, wherein for each fold a portion of the training data was fitted to the assessment model while the prediction confidence threshold required to classify a dataset as "autism" was varied. Depending on the ROC curve of the model, the model can be adjusted to increase sensitivity in exchange for some increase in false positives, or to decrease sensitivity in exchange for fewer false positives, as needed or appropriate.

The feature recommendation module was configured as described herein, wherein the expected feature importance of each question was calculated and the candidate questions were ranked in order of the calculated importance by calling the server via an Application Program Interface (API). The ability of the feature recommendation module to recommend informative questions was evaluated by determining the correlation between a question's recommendation score and the improvement in prediction accuracy obtained from answering the recommended question. The following steps were performed to calculate the correlation metric: (1) the data was split into folds for cross-validation; (2) already-answered questions were randomly removed from the validation set; (3) the expected feature importance (question recommendation/score) was generated for each question; (4) one of the questions removed in step 2 was revealed, and the relative improvement in the accuracy of the subsequent prediction was measured; and (5) the correlation between the relative improvement and the expected feature importance was calculated. The calculated Pearson correlation coefficient ranged between 0.2 and 0.3, indicating a moderate correlation between the expected feature importance score and the relative improvement. FIG. 10 is a scatter plot showing the correlation between the expected feature importance ("expected informativeness score") and the relative improvement ("relative classification improvement") for each question. The plot shows a moderate linear relationship between the two variables, demonstrating that the feature recommendation module was indeed able to recommend questions that would improve the prediction accuracy.

The length of time required to generate an output using the developed prediction module and feature recommendation module was measured. The prediction module took about 46 ms to predict an individual's risk of autism. The feature recommendation module took about 41 ms to generate a question recommendation for an individual. Although these measurements were made by calling the server via the API, the calculations may alternatively be performed locally, for example.

As described herein, while the assessment models of the data processing modules described with respect to fig. 9-10 are constructed and trained to classify a subject as having or not having autism, similar methods can be used to establish an assessment model that can classify a subject as having one or more of a variety of developmental disorders.

In another aspect, the methods and devices disclosed herein may identify a subject as belonging to one of three categories: having a developmental condition; developing normally or typically; or uncertain, i.e., requiring additional assessment to determine whether the subject has a developmental condition. The developmental condition may be a developmental disorder or a developmental progression. The addition of the third category, the uncertainty determination, results in improved performance and better accuracy of the classification determinations corresponding to the presence or absence of a developmental condition.

FIG. 11 is an exemplary operational flow of an assessment module identifying a subject as belonging to one of three categories. As shown in fig. 11, a method 1100 for assessing at least one behavioral developmental condition of a subject is provided. The evaluation module receives diagnostic data for a subject related to behavioral development at 1110, evaluates the diagnostic data using a selected subset of the plurality of machine-learned assessment models at 1120, and provides a classification determination for the subject at 1130. The classification determination may be inconclusive, or may indicate the presence or absence of a behavioral developmental condition.

FIG. 12 is an exemplary operational flow of a model training module as described herein. As illustrated in FIG. 12, a method 1200 is provided for training an assessment model and optimally adjusting its configuration parameters using machine learning. The method 1200 may be used to train and tune multiple machine learning predictive models, each using a dataset prepared offline and including representative samples of standardized clinical instruments such as ADI-R, ADOS, or SRS. The models may also be trained using datasets that include data other than clinical instruments, such as demographic data. At 1210, the model training module preprocesses the diagnostic data from the plurality of subjects using machine learning techniques. The dataset may be preprocessed using well-established machine learning techniques such as data cleaning, filtering, aggregation, imputation, normalization, and other machine learning techniques known in the art.

At 1220, the model training module extracts and encodes machine learning features from the pre-processed diagnostic data. The columns comprising the data sets may be mapped into machine-learned features using feature-coding techniques (such as, for example, one-hot coding, severity coding, behavioral presence coding, or any other feature-coding technique known in the art). Some of these techniques are inherently novel and not commonly used in machine learning applications, but are advantageous in the present application due to the nature of the problem at hand, and in particular due to the discrepancy between the environment in which the clinical data is collected and the intended environment in which the model will be applied.

In particular, behavioral presence encoding is particularly advantageous for the problem at hand, because the machine learning training data consists of clinical questionnaires filled out by psychologists who observe the subject for multiple hours. The answer codes they record may correspond to subtle degrees of severity or differences in behavioral patterns that may only become apparent over long-term observation. That data is then used to train a model intended to be applied in an environment where only a few minutes of observation of the subject are available, and in which the subtleties of the expected behavioral patterns will be far less apparent. Behavioral presence encoding as described herein mitigates this problem by abstracting away subtle differences between answer choices and extracting data from the questionnaire only at a level of granularity that can be expected to be reliably obtainable in the application environment.
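A hypothetical illustration of behavioral presence encoding (the answer scale and the mapping below are assumptions, not the disclosed encoding): graded clinician severity codes are collapsed to a coarse present/absent value that a brief observation can still support.

def behavioral_presence(answer_code):
    # Map a 0-3 severity code to 1 (behavior present) or 0 (absent/not observed).
    return 1 if answer_code >= 1 else 0

clinician_answers = {"B5": 3, "A2": 0, "C1": 1}
encoded = {question: behavioral_presence(code)
           for question, code in clinician_answers.items()}
print(encoded)   # {'B5': 1, 'A2': 0, 'C1': 1}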

At 1230, the model training module processes the encoded machine learning features. In an exemplary embodiment, the questionnaire answers can be encoded into machine-learned features, after which a sample weight can be calculated and assigned to each sample of diagnostic data in the dataset, each sample corresponding to each subject having diagnostic data. The samples may be grouped according to subject-specific dimensions, and sample weights may be calculated and assigned to balance one sample group with each other sample group to reflect the expected distribution of subjects in the intended environment. For example, samples with positive classification tags may be balanced against samples with negative classification tags. Alternatively or additionally, the samples in each of the plurality of age group bins may be brought to an equal total weight. Additional sample balance dimensions may be used, such as gender, geographic area, sub-classifications within positive or negative categories, or any other suitable dimension.
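One possible sketch of group-balanced sample weighting, with pandas assumed purely for illustration: every (age group, label) cell is brought to the same total weight, balancing positives against negatives within each age bin.

import pandas as pd

samples = pd.DataFrame({
    "age_group": ["<=3", "<=3", "4+", "4+", "4+", "4+"],
    "label":     [1, 0, 1, 1, 1, 0],
})

weights = pd.Series(1.0, index=samples.index)
for (age, label), idx in samples.groupby(["age_group", "label"]).groups.items():
    # Give every (age group, label) cell the same total weight.
    weights[idx] = 1.0 / len(idx)

samples["weight"] = weights
print(samples)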

The process of sample weight adjustment can be further refined to reflect the expected distribution of subjects in the intended application environment. This may allow the trained model to adapt to various specific application environments. For example, the model may be trained to be used exclusively as a secondary screening tool by adjusting sample weights in the training dataset to reflect the expected incidence of diagnostic conditions in a secondary diagnostic clinic. Another variant of the same screener can be trained to serve as a general public screening tool, again by adjusting the weights of the training samples to reflect an expected population that is predominantly neurologically typical and rarely positive and whose incidence matches that in the general population, to reflect the expected distribution of subjects in the intended application environment.

At 1240, the model training module selects a subset of the processed machine learning features. In an exemplary embodiment, with the training samples weighted accordingly and all potential machine learning features appropriately encoded, feature selection may be performed using a machine learning process commonly referred to as bootstrapping, in which multiple iterations of model training are run, each using a random subsample of the available training data. After each run, a count is updated for the features that the training process deemed necessary to include in the model. This list can be expected to vary from run to run, because the random subset of data used in a given training run may display apparent patterns that are accidental artifacts of sample selection and do not reflect the problem at hand. Repeating the process many times allows such accidental patterns to cancel out, revealing the features that reflect patterns which can be expected to generalize well beyond the training dataset and into the real world. The top features across the bootstrap runs may then be selected and used exclusively to train the final model, which is trained on the entire training dataset and saved for subsequent application.
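A hedged sketch of such bootstrapped feature selection; the library, subsample size, iteration count, and top-k cutoff below are illustrative choices, not values prescribed by the disclosure.

from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=50, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)
counts = Counter()

for _ in range(25):                                       # 25 bootstrap iterations
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X[idx], y[idx])
    kept = np.argsort(model.feature_importances_)[-10:]   # top 10 features this run
    counts.update(kept.tolist())

top_features = [feature for feature, _ in counts.most_common(10)]
print("consistently selected features:", sorted(top_features))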

Several models may be trained instead of one, so as to specialize the models along a demographic dimension when that dimension is expected to affect the selection of useful features. For example, multiple questionnaire-based models can be established, each specific to a particular age group, since the best questions to ask the subject are expected to differ between age groups. In that case, only the correct model for each subject is loaded at the time of application.

At 1250, the model training module evaluates each model. In particular, the performance of each model may be evaluated, for example as measured by sensitivity and specificity at a predetermined inclusion rate. In an exemplary embodiment, the performance of the models in terms of inclusion rate, sensitivity, and specificity may be evaluated using holdout datasets that were not used during the model training phase.

At 1260, the model training module tunes each model. More specifically, to assess the performance of the models under different tuning settings, the tuning parameters of each model may be varied in iterative increments, and the same metrics may be computed on the same holdout set in each iteration. The optimal settings can then be locked in and the corresponding model saved. The tuning parameters may include, for example, the number of trees in a boosted decision tree model, the maximum depth of each tree, the learning rate, the threshold for a positive determination score, the range of outputs deemed uncertain, and any other tuning parameter known in the art.

In a preferred embodiment, the parameter tuning process of 1260 may comprise a brute-force grid search, gradient descent optimization, simulated annealing, or any other search-space exploration algorithm known in the art. The models can be tuned individually and independently, or they can be tuned jointly, with each combination of each model's parameters explored to arrive at the optimal overall parameter set at 1270, maximizing the benefit of using all of the models together.

Furthermore, in yet another aspect, the tuning of each predictive model's uncertainty range can be constrained by external conditions dictated by business needs rather than by performance metrics. For example, it may be considered necessary for a particular classifier to have an inclusion rate of not less than 70%. In other words, the classifier is expected to provide an assessment indicating the presence or absence of a developmental condition for at least 70% of the classified subjects, yielding an uncertainty determination for fewer than 30% of the subjects. The corresponding tuning of the uncertain output range must therefore be limited to ranges that satisfy this condition.
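The sketch below illustrates tuning the uncertainty band under an inclusion-rate constraint; the threshold grid, the synthetic scores, and the 70% floor are illustrative assumptions rather than the disclosed tuning procedure.

import numpy as np

def tune_uncertainty_band(scores, labels, min_inclusion=0.70):
    # Grid-search the lower/upper score thresholds; keep the band with the best
    # accuracy among determinate outputs while including at least 70% of subjects.
    best = None
    grid = np.linspace(0.0, 1.0, 21)
    for low in grid:
        for high in grid[grid >= low]:
            determinate = (scores < low) | (scores > high)
            if determinate.sum() == 0 or determinate.mean() < min_inclusion:
                continue
            predictions = (scores[determinate] > high).astype(int)
            accuracy = (predictions == labels[determinate]).mean()
            if best is None or accuracy > best[0]:
                best = (accuracy, low, high, determinate.mean())
    return best   # (accuracy, lower threshold, upper threshold, inclusion rate)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = np.clip(labels * 0.4 + rng.normal(0.3, 0.2, 500), 0, 1)  # synthetic model scores
print(tune_uncertainty_band(scores, labels))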

The model is adjustable based on the context of the application. The predictive model may be configured to output a diagnosis with a certain degree of certainty, which may be adjusted based on the adjustment of the uncertainty region.

Further, adjustment of the uncertainty range may be exposed outside of the offline machine learning phase. More specifically, the adjustment of the uncertainty range may be a configurable parameter accessible to an agent operating the model after deployment. In this way, the operator can dial the entire system up or down along a trade-off between being more inclusive and being more accurate. To support this, a plurality of optimal uncertainty ranges, each with its corresponding inclusion rate, may be explored and stored during the model training phase. The agent may then effect the change by selecting an operating point from a menu of previously determined optimal settings.

FIG. 13 is another exemplary operational flow of an evaluation module as described herein. As shown in FIG. 13, a method 1300 is provided for outputting either a deterministic prediction indicating the presence or absence of a developmental condition at 1355, or an uncertainty determination of "no diagnosis" at 1365.

At 1310, the evaluation module as depicted in FIG. 13 receives new data, such as diagnostic data from a subject to be evaluated for the presence or absence of a developmental condition, or diagnostic data related to the subject. At 1320, a plurality of stored assessment models that have been trained, tuned, and optimized as shown in FIG. 12 and described herein can be loaded. At 1330, the diagnostic data can be fitted to these initial assessment models and the outputs collected. The evaluation module can combine the initial assessment model outputs at 1340 to generate a predicted initial classification for the subject. If, at 1350, the evaluation module determines that the initial prediction is definitive, it can output a definitive determination indicating the presence or absence of a developmental condition in the subject. If, at 1350, the evaluation module determines that the initial prediction is uncertain, it may proceed to determine whether additional or more complex assessment models are available and applicable at 1360. If no additional assessment model is available or applicable, the evaluation module outputs an uncertainty determination of "no diagnosis". However, if the evaluation module determines that additional or more complex assessment models are available and applicable, it may proceed to obtain additional diagnostic data from, or related to, the subject at 1370. Thereafter, the evaluation module can load the additional or more complex assessment models at 1380 and can repeat the process of fitting data to the models, this time fitting the additional data obtained at 1370 to the additional assessment models loaded at 1380 to produce new model outputs, which are then evaluated at 1350 for a deterministic prediction. This process, depicted by the loop including steps 1350, 1355, 1360, 1365, 1370, 1380, and returning to 1330 and 1340, can be repeated until either a deterministic prediction is output at 1355, or, if no more applicable classification models are available, an uncertainty determination of "no diagnosis" is output at 1365.

In particular, when data from a new subject is received as input at 1310 in fig. 13, each available model for preliminary determination is loaded and run at 1320, outputting a numerical score at 1330. The scores may then be combined using a combination model.

FIG. 14 is an exemplary operational flow of the model output combining step depicted in FIG. 13. As shown in FIG. 14, the combiner module 1400 may collect outputs from a plurality of assessment models 1410, 1420, 1430, and 1440, which are received by a combination model 1450. The combination model may employ simple rule-based logic to combine the outputs, which may be numerical scores. Alternatively, the combination model may use more complex combining techniques, such as logistic regression, probabilistic modeling, discriminant modeling, or any other combining technique known in the art. The combination model may also rely on context to determine the best way to combine the model outputs. For example, it may be configured to trust questionnaire-based model outputs only when they fall within a particular range, and otherwise defer to a video-based model. In another case, it may weight questionnaire-based model outputs more heavily for younger subjects than for older subjects. In another case, it may exclude the output of the video-based model for female subjects but include it for male subjects.
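
As one hedged illustration, a rule-based combiner along the lines described above might look as follows. The thresholds, the age cutoff, and the rule excluding the video-based output for one sub-population are arbitrary placeholders chosen to mirror the examples in the text; a deployed combination model could equally be a trained logistic regression or probabilistic model.

# Minimal rule-based sketch of the context-dependent combination model 1450.
# All numeric thresholds and field names are illustrative assumptions.

def combine_outputs(questionnaire_score, video_score, age_years, sex):
    """Combine per-source model scores into a single numerical score."""
    # Example context rule: exclude the video-based output for female subjects.
    if sex == "female":
        return questionnaire_score
    # Trust the questionnaire-based output only when it is decisive;
    # otherwise defer to the video-based model.
    if questionnaire_score <= 0.2 or questionnaire_score >= 0.8:
        return questionnaire_score
    # Otherwise blend the scores, weighting the questionnaire more for younger subjects.
    q_weight = 0.6 if age_years <= 3 else 0.4
    return q_weight * questionnaire_score + (1.0 - q_weight) * video_score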

The combined model output score may then be compared against threshold values determined during the model training phase as described herein. In particular, as shown in fig. 14, these thresholds are indicated by the dashed regions that divide the range of numerical scores 1460 into three segments corresponding to the negative determination output 1470, the uncertainty determination output 1480, and the positive determination output 1490. This effectively maps the combined numerical score to a classification determination, or to an uncertainty determination if the output falls within the predetermined uncertainty range.
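
A minimal sketch of this score-to-determination mapping follows; the threshold values shown here are placeholders, since the actual thresholds are determined during the model training phase.

# Sketch of mapping the combined numerical score 1460 onto the three output
# segments of FIG. 14. Threshold values are assumptions for illustration only.

NEGATIVE_THRESHOLD = 0.3   # scores below this -> negative determination 1470
POSITIVE_THRESHOLD = 0.7   # scores above this -> positive determination 1490

def map_score(combined_score):
    if combined_score < NEGATIVE_THRESHOLD:
        return "negative"
    if combined_score > POSITIVE_THRESHOLD:
        return "positive"
    return "uncertain"      # falls within the predetermined uncertainty range 1480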

In the case of an uncertainty output, the evaluation module can determine that additional data should be obtained from the subject in order to load and run additional models beyond the preliminary or initial model set. In cases where the preliminary models are not well suited, the additional models may be well suited to discern a deterministic output. This result can be achieved by training additional models that are more complex in nature, require more detailed input data, or focus more on difficult-to-classify cases that the simpler models leave out.

Fig. 15 illustrates an exemplary questionnaire screening algorithm configured to provide only a categorical determination of developmental status as described herein. In particular, the questionnaire screening algorithm depicted in fig. 15 shows an alternating decision tree classifier whose output only indicates a determination of the presence or absence of autism. The different shades depict autism and non-autism as well as the total population of children evaluated via the questionnaire. The results of the classifier are also depicted, showing the correct and misdiagnosed child population for each of the two classification determinations.

In contrast, fig. 16 shows an exemplary Triton questionnaire screening algorithm configured to provide classification determinations and uncertainty determinations as described herein. In particular, the Triton algorithm depicted in fig. 16 implements an age-tailored questionnaire and age-specific models to generate a specialized classifier for each of two subgroups (i.e., "3 years and below" and "4 years and above") within the relevant age group (i.e., "children"). From this example, it is apparent that the classification determinations indicating the presence and absence of autism in the two subgroups in fig. 16 each have a higher accuracy than the classification determinations in fig. 15, as indicated by the different shaded areas of the child population displaying correct and incorrect diagnoses for each of the two classification determinations. By providing a separate category for uncertainty determinations, the Triton algorithm of fig. 16 is better able to isolate the difficult-to-screen cases that lead to the inaccurate classification determinations shown in fig. 15.

The comparison of the performance of the various algorithms highlights the advantages of the Triton algorithm, particularly the Triton algorithm with a context-dependent combination of questionnaires and video input. Figure 17 shows a comparison of the performance of various algorithms in terms of sensitivity-specificity trade-off for all of the clinical samples, as described herein. As shown in fig. 17, when combined with a video combiner (i.e., a context-dependent combination of questionnaires and video input), the best performance in terms of both sensitivity and specificity was obtained by the Triton algorithm configured for 70% coverage.

Fig. 18 shows a comparison of the performance of various algorithms in terms of sensitivity-specificity trade-off for samples taken from children under 4 years of age, as described herein. The Triton algorithm, configured for 70% coverage, has the best performance when combined with a video combiner (i.e., a context-dependent combination of questionnaires and video input).

Fig. 19 shows a comparison of the performance of various algorithms in terms of sensitivity-specificity trade-off for samples taken from children 4 years and older, as described herein. For most cases, the Triton algorithm configured to 70% coverage appears to have the best performance when combined with a video combiner.

Fig. 20-22 show the specificity of the different algorithms in the 75%-85% sensitivity range for all samples, for children under 4 years of age, and for children 4 years of age and above. In all three cases, the Triton algorithm configured for 70% coverage had the best performance when combined with the video combiner, with 75% specificity for all samples, 90% specificity for children under 4 years of age, and 55% specificity for children 4 years of age and above. Note that the Triton algorithm has the further advantage of flexibility. For example, an adjustable model is provided as described herein, where the uncertainty or inclusion rate can be controlled or adjusted to manage the trade-off between coverage and reliability. In addition, the models described herein may be tuned to the expected incidence in the application environment, or to the expected population distribution for a given application environment. Finally, support for adaptive retraining can improve performance over time in view of the feedback training loop of the methods and systems described herein.

To obtain improved results, one of ordinary skill in the art can generate and obtain additional data sets and improve the sensitivity, specificity, and confidence intervals of the methods and devices disclosed herein without undue experimentation. Although these measurements were made using an example data set, the methods and devices may be configured with additional data sets as described herein to identify a subject as at risk in a clinical setting at an 80% confidence interval, without undue experimentation. Sensitivity and specificity of 80% or greater in a clinical setting can likewise be obtained by one of ordinary skill in the art without undue experimentation, e.g., with additional data sets, using the teachings provided herein. In some cases, the additional data sets are obtained based on a clinician questionnaire and are used to generate an assessment model that can be used alone or in combination with other models. For example, a parent or caregiver questionnaire, a clinician questionnaire, video analysis results, or any combination thereof may provide input to one or more preliminary assessment models corresponding to each data source. These preliminary assessment models may generate outputs, such as preliminary output scores, which may be combined to generate a combined preliminary output score as described herein.

In some cases, an assessment module comprising a series of assessment models may be used to perform an assessment and/or diagnosis of a patient. The assessment module may interface or communicate with an input module configured to collect or obtain input data from a user. The series of assessment models may be used to inform the data collection process in order to obtain enough data to generate a deterministic determination. In some cases, the systems and methods disclosed herein use a first assessment module to collect an initial data set (e.g., containing a parent or caregiver questionnaire) corresponding to a parent or caregiver assessment model. The data set contains data corresponding to features of the assessment model that can be evaluated to generate a determination, e.g., a positive or negative determination (e.g., a categorical determination) or an uncertainty determination, of a relevant behavioral disorder or condition such as autism. If the determination is uncertain, a second assessment module may be used to obtain an additional data set, such as the results of video analysis (e.g., an algorithm-based or video-analyst-based assessment of captured video of the individual). Alternatively, in some cases, the results of the video analysis are used together with the initial parent or caregiver data set to generate the assessment. This information may be incorporated into an assessment model configured to incorporate the additional data set from the video analysis to generate an updated determination. If the updated determination is still uncertain, another data set may be obtained using a third assessment module, such as a supplemental questionnaire completed by a healthcare provider such as a doctor or clinician (e.g., based on an in-person assessment). Such scenarios may occur in particularly difficult cases. Alternatively, the new data set may be optional and requested at the discretion of the healthcare provider. The next data set may be obtained and then evaluated using an assessment model configured to incorporate this data in generating the next determination. Each of the series of assessment models may be configured to take into account the existing data sets and the new or additional data sets when generating a determination. Alternatively, each of the series of assessment models may be configured to consider only the new or additional data sets, and the results or scores of the assessment models are simply combined, as disclosed herein, in order to generate a new or updated assessment result. The data sets may be obtained via one or more computing devices. For example, a smartphone of a parent or caregiver may be used to obtain input for a parent or caregiver questionnaire and to capture the video for analysis. In some cases, the computing device is used to analyze the video; alternatively, a remote computing device or remote video analyst analyzes the video and answers an analyst-based questionnaire to provide an input data set. In some cases, a computing device of a doctor or clinician is used to provide the input data set. Video analysis and assessment or diagnosis based on input data using one or more assessment models may be performed locally by one or more computing devices (e.g., a parent's smartphone), or remotely, such as via cloud computing (e.g., the computation occurs in the cloud and the results are transmitted to a user device for display).
For example, a system for performing the methods disclosed herein may include a parent or caregiver mobile application and/or device, a video analyst portal and/or device, and a healthcare provider device and/or dashboard. The benefit of this method of dynamically obtaining a new data set based on the current assessment results or determined status is that the evaluation or diagnostic process performs more efficiently without requiring more data than is necessary to generate a deterministic determination.

As described herein, additional data sets may be obtained from large archival data repositories, such as the Autism Genetic Resource Exchange (AGRE), the Boston Autism Consortium (AC), the Simons Foundation, the National Database for Autism Research, and so forth. Alternatively or in combination, the additional data sets may include mathematical simulation data generated based on the archival data using various simulation algorithms. Alternatively or in combination, additional data sets may be obtained via crowdsourcing, where subjects self-administer an assessment procedure as described herein and contribute data from their assessments. In addition to data from self-administered assessments, subjects can also provide clinical diagnoses obtained from qualified clinicians to provide a standard of comparison for the assessment procedure.

In another aspect, a digitally personalized medical system as described herein includes a digitizing device having a processor and associated software, the digitizing device configured for: receiving data to assess and diagnose a patient; capturing interaction and feedback data identifying relative levels of efficacy, compliance, and response resulting from the therapeutic intervention; and performing data analysis, including at least one of machine learning, artificial intelligence, and statistical models, to assess the user data and user profile to further personalize, improve, or assess the efficacy of the therapeutic intervention.

The assessment and diagnosis of patients in a digitally personalized medical system can classify subjects into one of three categories: having one or more developmental conditions, developing normally or typically, or indeterminate (i.e., requiring additional assessment to determine whether the subject has any developmental condition). In particular, a separate category for uncertainty determinations may be provided, which results in higher accuracy of the classification determinations indicating the presence or absence of a developmental condition. The developmental condition may be a developmental disorder or a developmental progression. Furthermore, the methods and apparatus disclosed herein are not limited to developmental conditions and may be applied to other cognitive functions, such as behavioral, neurological, or mental health conditions.

In some cases, the system may be configured to use digital diagnostics and digital therapy. Digital diagnosis and digital treatment may include systems or methods comprising collecting digital information and processing and evaluating the provided data to improve the medical, psychological, or physiological state of an individual. The systems and methods disclosed herein can classify subjects into one of three categories: having one or more developmental conditions, developing normally or typically, or indeterminate (i.e., requiring additional assessment to determine whether the subject has any developmental condition). In particular, a separate category for uncertainty determinations may be provided, which results in higher accuracy of the classification determinations indicating the presence or absence of a developmental condition. The developmental condition may be a developmental disorder or a developmental progression. Furthermore, the methods and apparatus disclosed herein are not limited to developmental conditions and may be applied to other cognitive functions, such as behavioral, neurological, or mental health conditions. Additionally, the digital treatment system may apply software-based learning to evaluate user data and to monitor and improve the diagnostic and therapeutic interventions provided by the system.

The digital diagnosis in the system may include data and metadata collected from the patient, from a caregiver, or from a party unrelated to the individual being assessed. In some cases, the collected data may include behavioral monitoring, observations, or judgments, or may be assessed by a party other than the individual. In other cases, the assessment may include an adult performing the assessment or providing data for the assessment of a child or adolescent.

The data sources may comprise active or passive sources in a digitized format provided via one or more digitizing devices such as a mobile phone, a video capture device, an audio capture device, an activity monitor, or a wearable digital monitor. Examples of active data collection include devices, systems, or methods for tracking eye movements, recording body or appendage movements, monitoring sleep patterns, and recording speech patterns. In some cases, active sources may include audio feed data sources such as speech patterns; lexical/syntactic patterns (e.g., size of vocabulary, correct/incorrect use of pronouns, correct/incorrect conjugation and inflection of verbs, usage of grammatical structures such as active/passive voice, and sentence fluency); and higher-order linguistic patterns (e.g., coherence, comprehension, conversational engagement, and curiosity). Active sources may also include touch screen data sources (e.g., fine motor function, dexterity, precision and frequency of taps, precision and frequency of swipes, and focus/attention span). Video recordings of the subject's face during an activity (e.g., quality/quantity of eye fixations and saccades, heat maps of eye focus on the screen, focus/attention span, variability of facial expressions, and quality of response to emotional stimuli) may also be considered an active source of data.

Passive data collection may include devices, systems, or methods for collecting data from a user using recordings or measurements derived from a mobile application, a toy with embedded sensors, or a recording unit. In some cases, passive sources may include sensors embedded in smart toys (e.g., fine motor function, gross motor function, focus/attention span, and problem-solving skills) and wearable devices (e.g., activity level and quantity/quality of rest).

Data used in diagnosis and therapy may come from multiple sources, and may include a combination of passive and active data collection acquired from one device (such as a mobile device with which a user interacts) or other sources (such as microbiome sampling and genetic sampling of a subject).

The methods and devices disclosed herein are well suited for the diagnostic and digital therapeutic treatment of cognitive and developmental disorders, mood and mental disorders, and neurodegenerative disorders. Examples of cognitive and developmental disorders include speech and learning disorders and other disorders as described herein. Examples of mood and psychiatric disorders that may affect children and adults include behavioral disorders, mood disorders, depression, attention deficit hyperactivity disorder ("ADHD"), obsessive compulsive disorder ("OCD"), schizophrenia, substance-related disorders such as drug abuse, and eating disorders. Examples of neurodegenerative diseases include age-related cognitive decline, cognitive impairment progressing to Alzheimer's disease and aging, Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis ("ALS"). The methods and devices disclosed herein enable digital diagnosis and treatment of children, allow treatment to continue into adulthood, and may provide lifelong treatment based on a personalized profile.

Digital diagnosis and treatment as described herein are well suited for behavioral interventions used in conjunction with biological or chemotherapeutic treatments. By collecting user interaction data as described herein, combination therapies of behavioral interventions, drugs, and biological treatments may be provided.

A mobile device as described herein may include sensors for collecting data of a subject, which may be used as part of a feedback loop in order to improve outcomes and reduce reliance on user input. The mobile device may include passive or active sensors as described herein to collect data of the subject after treatment. The same mobile device or a second mobile device (such as an iPad or iPhone or similar device) may include a software application that interacts with the user to periodically (e.g., daily, hourly, etc.) inform the user what to do to improve therapy. The user mobile device may be configured to send notifications to the user in response to treatment progress. The mobile device may include a drug delivery device configured to monitor the amount of a therapeutic agent delivered to a subject.

For example, the methods and devices disclosed herein are well suited for treating both parents and children. Both parents and children may receive separate treatments as described herein. For example, the neurological condition of a parent can be monitored and treated while the developmental progress of the child is monitored and treated.

For example, a mobile device for acquiring data of a subject may be configured in a variety of ways, and may incorporate multiple devices. For example, since abnormal sleep patterns may be associated with autism, sleep data acquired using the treatment apparatus described herein may be used as an additional input to the machine learning training process of the autism classifier used by the diagnostic apparatus described above. The mobile device may comprise a mobile wearable device for sleep monitoring of children, which may be provided as input for diagnosis and therapy, and may comprise components of a feedback loop as described herein.

Many types of sensors, biosensors, and data sources are available for collecting data from a subject and providing input for the diagnosis and treatment of the subject. For example, work in connection with the embodiments shows that microbiome data can be used to diagnose and treat autism. Microbiome data can be collected in a number of ways known to those of ordinary skill in the art and can include data selected from a stool sample, an intestinal lavage, or another sample of the subject's intestinal flora. Genetic data may also be obtained as input to the diagnostic and therapeutic modules. Genetic data may include whole genome sequencing of a subject, or sequencing and identification of specific markers of a subject.

The diagnostic and therapy modules as disclosed herein may receive data from a plurality of sources, such as genetic data, flora (microbiome) data, sleep sensors, wearable anklet sleep monitors, articles for monitoring sleep, and eye tracking of a subject. Eye tracking may be performed in a number of ways to determine the direction and duration of gaze. Tracking may be accomplished with glasses, helmets, or other sensors measuring the direction and duration of gaze. For example, data may be collected during a visual session, such as video playback or a video game. This data can be acquired before, during, and after treatment and provided to the therapy module and diagnostic module as previously described herein in order to initially diagnose the subject, determine the treatment of the subject, alter the treatment of the subject, and monitor the subject after treatment.

Visual gaze, gaze duration, and facial expression information may be obtained using methods and apparatus known to those of ordinary skill in the art and provided as inputs to the diagnostic and therapy modules. The data may be acquired using a downloadable application that includes software instructions. For example, Golarai et al., "Autism and the development of face processing," Clinical Neuroscience Research 6 (2006) 145-160, describes face processing. The autism research team at Duke University has been conducting autism and other studies using a software application downloaded to mobile devices, as described on the "Autism & Beyond" webpage. Data from such devices is particularly suitable for combination in accordance with the present disclosure. The facial recognition data and gaze data may be input into the diagnostic and therapy modules as described herein.

The platforms, systems, devices, methods, and media disclosed herein may provide activity patterns that include various activities, such as facial expression recognition activities. Facial expression recognition may be performed on one or more images. A computing device, such as a smartphone, may be configured to perform automatic facial expression recognition and deliver real-time social cues, as described herein.

The system may track facial expression events using an outward-facing camera on the smartphone and read facial expressions or emotions by passing video and/or image or photo data to the smartphone application for real-time machine learning-based classification of commonly expressed emotions (e.g., the standardized Ekman "basic" emotions). Examples of such emotions include anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral. The system may then provide real-time social cues about facial expressions (e.g., "happy," "angry," etc.) to the subject or user via the smartphone. The cues may be shown visually in the application, played audibly through a speaker on the smartphone, or any combination thereof. The system may also record social responses, such as the number and type of facial engagements observed and the level of social interaction. In some embodiments, the emotion recognition system contains a computer vision pipeline that starts with a robust 23-point face tracker, followed by several illumination optimization steps, such as gamma correction, difference-of-Gaussians filtering, and contrast equalization, or any combination thereof. In some embodiments, histogram of oriented gradients (HOG) features are extracted for the entire face and a logistic regression classifier is applied for the final emotion prediction. The classifier model may be trained on a large existing database of facial expression images and on additional data collected from other participants or subjects. In some embodiments, a technique known as "neutral subtraction" allows the system to be calibrated in real time to the specific face it sees during an interaction, improving the personalization of predictions for a particular user while a conversation is ongoing.
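
A simplified sketch of such a pipeline, using commonly available open-source libraries (OpenCV, scikit-image, scikit-learn), is shown below. The 23-point face tracker and the neutral subtraction step are not implemented here, and all parameter values (gamma, filter sigmas, HOG settings) are assumptions for illustration rather than the configuration described in this disclosure; the sketch assumes an already detected and aligned face crop as input.

# Illustrative emotion recognition pipeline: illumination normalization,
# HOG feature extraction, and a logistic regression classifier.

import cv2
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "contempt", "neutral"]

def preprocess_face(face_bgr, gamma=0.5, sigma_small=1.0, sigma_large=2.0):
    """Gamma correction, difference-of-Gaussians filtering, and contrast
    equalization applied to an aligned face crop (illumination optimization)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    gray = np.power(gray, gamma)                                   # gamma correction
    dog = (cv2.GaussianBlur(gray, (0, 0), sigma_small)
           - cv2.GaussianBlur(gray, (0, 0), sigma_large))          # DoG filtering
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(dog)                                        # contrast equalization

def extract_features(face_bgr):
    face = cv2.resize(preprocess_face(face_bgr), (128, 128))
    return hog(face, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))                             # HOG descriptor

def train_classifier(face_crops, labels):
    """Train on a labeled facial expression dataset (face crops and emotion labels)."""
    X = np.stack([extract_features(f) for f in face_crops])
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, labels)

def predict_emotion(clf, face_bgr):
    return clf.predict([extract_features(face_bgr)])[0]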

In some cases, various modes of feedback are provided to a subject (e.g., a child), a parent or caregiver, an interventionist, a clinician, or any combination thereof. The system may be configured to provide progress feedback to a clinician, for example, through a healthcare provider portal as described herein. The feedback may include performance scores for various activities or games, whether the emotional responses were correct, explanations of incorrect answers, improvement or progress (e.g., progress in emotion recognition activities over the past month), or other observations or comments. The feedback may contain performance indicators (such as facial attention fixation time, correct emotional responses, scores, and other indicators) that may optionally be provided in a simple interface for review by clinicians and interventionists so that they may monitor the progress of the subject. The progress feedback may correspond to various domains or sub-domains of behavior. For example, progress feedback and/or subject improvement may relate to the social domain and/or specific sub-domains, including interpersonal relationships, play and leisure, and coping skills. For example, specific improvements can be tracked by monitoring and assessing the performance and other indicators of a subject during various digital treatment activities. As an example, an inward-facing camera and/or microphone may be used to monitor facial engagement, emotional expression, gaze, verbal interaction (e.g., whether a child responded verbally to a caregiver's question), and other behaviors of the subject.

The digital treatment platforms, systems, devices, methods, and media disclosed herein may be configured to evaluate a subject with respect to sub-domains and associated deficits, and to determine whether the subject would benefit from, or obtain improvement through, digital treatment. For example, deficits in the interpersonal relationships sub-domain may include deficits in social-emotional reciprocity, deficits in nonverbal communicative behaviors used for social interaction, and deficits in developing, maintaining, and understanding relationships. The improvements provided herein may include an increase in facial engagement, an increase in understanding of emotional expression, and an increase in the opportunities and motivation for social engagement. The play and leisure sub-domain may include deficits in developing, maintaining, and understanding interpersonal relationships, which may be improved by digital treatment games and/or activities that encourage social play. With increased facial engagement and increased understanding of emotional expression, subjects may become more adept at maintaining interpersonal relationships. Coping skills may be affected by insistence on sameness, inflexible adherence to routines, or ritualized patterns of verbal or nonverbal behavior; as facial engagement and understanding of emotional expression increase, subjects may become better able to respond to environmental stresses, including through a better understanding of social interactions. The therapeutic effects or outcomes of a subject participating in the therapeutic activities and/or games disclosed herein may be collected as additional data used to train a machine learning model or classifier for determining responsiveness to therapy. In some embodiments, a subject who has been assessed by the diagnostic or assessment module and positively identified as having (or predicted to have) an autism spectrum disorder can then be assessed by a machine learning model or classifier that predicts or determines whether the subject will respond to or benefit from one or more of the digital treatments disclosed herein. In some cases, it is predicted that a single activity or game, or multiple activities or games, will provide significant therapeutic benefit with respect to one or more forms of social interaction. In some cases, the benefit relates generally to social reciprocity. Alternatively or in combination, the benefit is determined with respect to specific domains or sub-domains related to social behavior or reciprocity or other behavioral deficits.

The digital treatment may be customized or personalized based on some or all of the diagnostic dimensions used to assess whether a subject has a disorder, condition, or impairment. For example, a subject may be assessed using a machine learning model that predicts that the subject will benefit from emotion recognition activities in the social domain and/or in specific sub-domains, such as interpersonal relationships, play and leisure, and/or coping skills. This can be based on the various diagnostic dimensions generated during diagnosis, which are then incorporated into the therapy customization process. The machine learning model may incorporate these dimensions into a prediction or likelihood of the improvement or benefit that a subject may obtain from a particular therapy (e.g., emotion recognition activities or social reciprocity training). In some cases, the subject is predicted to benefit with respect to a particular behavior (such as increased facial engagement or increased understanding of the emotions expressed by others). Significant benefit or improvement can be established statistically using conventional statistical tools or indicators, or can be set as a threshold (e.g., an average 10% increase in emotion recognition score after 3 weeks of treatment). In some embodiments, subject performance is monitored and collected on a remote database server, where the subject performance may be anonymized and combined with data from other subjects to form a data set for training such machine learning models.
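
One hedged sketch of training such a responsiveness model from historical data follows. The feature layout (one row of diagnostic dimension scores per subject), the 10%-improvement labeling rule, and the gradient boosting estimator are illustrative assumptions; the disclosure does not prescribe this particular formulation.

# Sketch: predict whether a subject will benefit from a digital therapy
# (e.g., emotion recognition activities) from diagnostic dimensions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

IMPROVEMENT_THRESHOLD = 0.10   # e.g., >= 10% gain in emotion recognition score

def make_labels(baseline_scores, followup_scores):
    """Label each historical subject as a responder (1) or non-responder (0).
    Assumes nonzero baseline scores."""
    baseline = np.asarray(baseline_scores, dtype=float)
    followup = np.asarray(followup_scores, dtype=float)
    return ((followup - baseline) / baseline >= IMPROVEMENT_THRESHOLD).astype(int)

def train_responder_model(diagnostic_dimensions, baseline_scores, followup_scores):
    """diagnostic_dimensions: array of shape (n_subjects, n_dimensions), e.g.
    severity scores for interpersonal relationships, play and leisure,
    and coping skills produced during diagnosis."""
    y = make_labels(baseline_scores, followup_scores)
    model = GradientBoostingClassifier()
    return model.fit(np.asarray(diagnostic_dimensions, dtype=float), y)

def predicted_benefit(model, subject_dimensions):
    """Probability that the subject will benefit from the digital therapy."""
    return float(model.predict_proba([subject_dimensions])[0, 1])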

Classifiers as disclosed herein are particularly suitable for combination with this data to provide improved therapies and treatments. The data may be layered and used with a feedback loop as described herein. For example, the feedback data may be used in combination with drug therapy to determine differential responses and identify responders and non-responders. Alternatively or in combination, the feedback data may be combined with non-drug therapies, such as behavioral therapies (e.g., the digital treatments described herein).

With regard to genetics, recent studies have shown that some people may have genes that make them more susceptible to autism. The genetic composition of a subject may make the subject more susceptible to environmental influences, which may lead to symptoms and may affect the severity of those symptoms. For example, environmental influences may include damage from toxins, viruses, or other substances. Without being bound by any particular theory, this may result in mechanisms that alter the regulation of gene expression. Changes in gene expression may be associated with changes in the gastrointestinal ("GI") flora, and these changes in the flora may affect symptoms associated with autism. Alternatively or in combination, damage to the gut microbiome may result in a change in the microbiome of the subject, resulting in less than ideal homeostasis, which may affect associated symptoms related to autism. The inventors note that preliminary studies by Sarkis K. Mazmanian et al. using Bacteroides fragilis (B. fragilis) suggest that such changes in microorganisms may be associated with autism and the development of autism-related symptoms. (See also Melinda Wenner Moyer, "Gut Bacteria May Play a Role in Autism," Scientific American, September 1, 2014.)

Digital diagnosis uses data collected by the system about the patient, which may include supplemental diagnostic data captured outside of the digital diagnosis, together with analysis from tools such as machine learning, artificial intelligence, and statistical modeling to assess or diagnose the condition of the patient. Digital diagnosis may also capture changes in the patient's state or performance, directly or indirectly, via data and metadata that can be analyzed and evaluated by tools such as machine learning, artificial intelligence, and statistical modeling to improve or refine the diagnosis and the potential therapeutic interventions, and to provide feedback to the system.

Analysis of data, including digital diagnoses, digital treatments, and the corresponding responses, or analysis of the therapeutic interventions and corresponding responses on their own, can result in the identification of new diagnoses for subjects and new treatment regimens for patients and caregivers.

For example, the types of data collected and utilized by the system may include video, audio, responses to questions or activities of the patient and caregiver, and active or passive data streams from user interaction with activities, games, or software features of the system. Such data may also represent patient or caregiver interaction with the system, for example, in performing recommended activities. Specific examples include data of user interactions with devices or mobile applications of the system that capture aspects of the user's behavior, profile, activity, interactions with software systems, interactions with games, frequency of use, session time, selected options or features, and content or activity preferences. The data may also include streaming, gaming, or interactive content from various third party devices, such as activity monitors.

Digital treatment as described herein may include instructions, feedback, activities, or interactions provided by the system to the patient or caregiver. Examples include suggested behaviors, activities, games, or interactive sessions with system software and/or third party devices (e.g., internet of things ("IoT") enabled therapy devices as understood by one of ordinary skill in the art).

Fig. 23A illustrates a system diagram of a digitally personalized medical platform 2300 for providing diagnosis and therapy related to behavioral, neurological, or mental health disorders. For example, platform 2300 may provide diagnosis and treatment of pediatric cognitive and behavioral conditions associated with developmental delay. A user digitizing device 2310 (e.g., a mobile device such as a smartphone, activity monitor, or wearable digital monitor) records data and metadata related to the patient. Data can be collected based on the patient's interactions with the device and based on interactions with caregivers and healthcare professionals. Data may be collected actively, such as by administering tests, recording speech and/or video, and recording responses to diagnostic questions. Data may also be collected passively, such as by monitoring the online behavior of the patient and caregiver, for example by recording the questions asked and the topics researched related to the diagnosed developmental disorder.

The digitizing device 2310 connects to the computer network 2320 allowing it to share data with and receive data from the connected computers. In particular, the device may communicate with a personalized medical system 2330, the personalized medical system 2330 including a server configured to communicate with a digital device 2310 over a computer network 2320. The personalized medical system 2330 includes a diagnosis module 2332 that provides initial and step-by-step diagnosis of a patient's developmental status, and a therapy module 2334 that provides personalized therapy recommendations in response to diagnosis by the diagnosis module 2332.

Each of the diagnostic module 2332 and the therapy module 2334 communicates with the user digitizing device 2310 during treatment. The diagnostic module provides diagnostic tests to the digitizing device 2310, receives diagnostic feedback from the digitizing device 2310, and uses the feedback to determine a diagnosis of the patient. For example, an initial diagnosis may be based on a comprehensive set of tests and questions, and the diagnosis may then be updated incrementally using smaller data samples. For example, the diagnostic module may diagnose autism-related speech delay based on questions asked of the caregiver and tests administered to the patient (such as vocabulary or verbal communication tests). The diagnosis may indicate the number of months or years by which speech is delayed. Subsequent tests may be administered and questions asked to update the diagnosis, e.g., to show a smaller or greater degree of delay.

The diagnostic module communicates its diagnosis to the digitizing device 2310 and to the therapy module 2334, which uses the diagnosis to suggest therapies to be performed to treat any diagnosed symptoms. Therapy module 2334 sends its recommended therapies to the digitizing device 2310, including instructions for the patient and caregiver to perform the recommended therapies within a given time frame. After the therapies are performed within the given time frame, the caregiver or patient may indicate completion of the recommended therapies, and a report may be sent from the digitizing device 2310 to the therapy module 2334. Therapy module 2334 may then indicate to the diagnostic module 2332 that the latest round of therapy has ended and that a new diagnosis is needed. The diagnostic module 2332 can then provide new diagnostic tests and questions to the digitizing device 2310, and can receive input from the therapy module to retrieve any data provided as part of the therapy, such as records of learning sessions or the browsing history of the caregiver or patient related to the therapy or the diagnosed condition. The diagnostic module 2332 then provides an updated diagnosis to repeat the process and provide the next therapy.

Information related to diagnosis and therapy may also be provided from the personalized medical system 2330 to a third party system 2340, such as a computer system of a healthcare professional. A healthcare professional or other third party may be alerted to significant deviations from the treatment schedule, including whether the patient is behind an expected schedule or improving more quickly than predicted. The third party may then take appropriate further action based on this provided information.

Fig. 23B illustrates a detailed view of the diagnostic module 2332. The diagnostic module 2332 includes a test management module 2342 that generates tests and corresponding instructions for administration to a subject. The diagnostic module 2332 also includes a subject data receiving module 2344 that receives subject data such as test results; caregiver feedback; metadata of patient and caregiver interactions with the system; and video, audio, and game interactions with the system. The subject assessment module 2346 generates a diagnosis of the subject based on the data from the subject data receiving module 2344 and on past diagnoses of the subject and of similar subjects. The machine learning module 2348 assesses the relative sensitivity of each input to the diagnosis to determine which types of measurement provide the most information about a patient's diagnosis. The test management module 2342 may use these results to provide the tests that most effectively inform the diagnosis, and the subject assessment module 2346 may use these results to apply weights to the diagnostic data in order to improve diagnostic accuracy and consistency. The diagnostic data associated with each treated patient is stored, for example, in a database to form a library of diagnostic data for pattern matching and machine learning. A large number (e.g., 10,000 or more) of subject profiles may be stored simultaneously in such a database.
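
One way the relative informativeness of measurement types could be estimated is sketched below using permutation importance. The estimator choice, the feature layout, and the idea of feeding the ranking back to the test management and subject assessment modules are assumptions made for illustration.

# Sketch: rank diagnostic measurement types by how much they inform the diagnosis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def rank_measurements(X, y, feature_names):
    """X: (n_subjects, n_measurements) diagnostic inputs; y: diagnostic labels."""
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    ranking = sorted(zip(feature_names, result.importances_mean),
                     key=lambda pair: pair[1], reverse=True)
    return ranking  # most informative measurement types first

# A test management module could prioritize the highest-ranked measurement types,
# and a subject assessment module could use the importances as weights on the
# corresponding diagnostic data.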

Fig. 23C illustrates a detailed view of the therapy module 2334. The therapy module 2334 includes a therapy assessment module 2352 that scores therapies based on their effectiveness. Previously suggested therapies are evaluated based on the diagnoses provided by the diagnostic module before and after the therapy, and the degree of improvement is determined. The degree of improvement is used to score the effectiveness of the therapy. The effectiveness of a therapy may be associated with a particular diagnostic category; for example, a therapy may be considered effective for subjects with one diagnostic type but not effective for subjects with a second diagnostic type. A therapy matching module 2354 is also provided, which compares the subject's diagnosis from the diagnostic module 2332 to a list of therapies in order to determine a set of therapies that have been determined by the therapy assessment module 2352 to be most effective for treating diagnoses similar to the subject's diagnosis. Therapy recommendation module 2356 then generates a recommended therapy (comprising the one or more therapies identified as promising by the therapy matching module 2354) and sends the recommendation to the subject along with instructions for administering the recommended therapy. Therapy tracking module 2358 then tracks the progress of the recommended therapy and determines when a new diagnosis should be performed by the diagnostic module 2332, or when a given therapy should be continued and its progress further monitored. The therapy data associated with each patient being treated is stored, for example, in a database to form a library of therapy data for pattern matching and machine learning. A large number (e.g., 10,000 or more) of subject profiles may be stored simultaneously in such a database. The therapy data may be correlated with the diagnostic data of the diagnostic module 2332 to allow matching of effective therapies to diagnoses.
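
The scoring and matching logic of modules 2352 and 2354 could, in its simplest form, look like the following sketch: therapies are scored by the average pre-to-post improvement observed within each diagnostic category, and the highest-scoring therapies for the subject's category are recommended. The record format, scoring rule, and function names are assumptions.

# Sketch of therapy effectiveness scoring (2352) and therapy matching (2354).

from collections import defaultdict

def score_therapies(records):
    """records: iterable of (diagnostic_category, therapy, pre_score, post_score)."""
    totals = defaultdict(lambda: [0.0, 0])
    for category, therapy, pre, post in records:
        key = (category, therapy)
        totals[key][0] += (post - pre)      # improvement attributed to the therapy
        totals[key][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

def recommend_therapies(effectiveness, category, top_n=3):
    """Return the top_n therapies ranked by average effectiveness for the category."""
    candidates = [(therapy, score) for (cat, therapy), score in effectiveness.items()
                  if cat == category]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_n]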

The therapy may comprise digital therapy. Digital therapy may include single or multiple therapeutic activities or interventions that may be performed by a patient or caregiver. The digital treatment may include prescribed interactions with third party devices such as sensors, computers, medical devices, and treatment delivery systems. Digital therapy may support FDA-approved medical subsidies, a set of diagnostic codes, or a single diagnostic code.

Fig. 24 illustrates a method 2400 for providing diagnosis and therapy in a digitally personalized medical platform. The digitally personalized medical platform communicates with a subject, which may include a patient having one or more caregivers, to provide a diagnosis and recommend therapy.

In step 2410, the diagnostic module assesses the subject to determine a diagnosis, for example, by applying a diagnostic test to the subject. The diagnostic test may be directed to determining a plurality of characteristics and corresponding characteristic values of the subject. For example, the test may include a plurality of questions presented to the subject, observations of the subject, or tasks assigned to the subject. Testing may also include indirect testing of the subject, such as from a caregiver's feedback on patient performance and specific behaviors and/or significant events; metadata of patient and caregiver interaction with the system; and video, audio and game interactions with the system or with third party tools that provide behavioral and performance data about the patient and caregiver. For initial testing, a more comprehensive testing protocol may be performed, aimed at generating an accurate initial diagnosis. Subsequent tests used to update previous diagnoses to track progress may involve less comprehensive tests and may, for example, rely more on indirect tests such as behavioral tracking and treatment-related records and metadata.

In step 2412, the diagnostic module receives new data from the subject. The new data may include an array of features and corresponding feature values for the particular subject. As described herein, a feature may include a plurality of questions presented to a subject, an observation of a subject, or a task assigned to a subject. The characteristic values may include input data from the subject corresponding to a characteristic of the subject, such as the subject's answer to a question asked, or the subject's response. The characteristic values may also include recorded feedback, metadata, and system interaction data as described above.

In step 2414, the diagnostic module may load the previously saved assessment model from local memory and/or a remote server configured to store the model. Alternatively, if there is no assessment model for the patient, a default model may be loaded, e.g., based on one or more initial diagnostic indicators.

In step 2416, the new data is fitted to the assessment model to generate an updated assessment model. The assessment model may include an initial diagnosis of a previously untreated subject, or an updated diagnosis of a previously treated subject. The updated diagnosis may include measurements of progress in one or more aspects of the condition, such as memory, attention and joint attention, cognition, behavioral responses, emotional responses, language use, language skills, frequency of specific behaviors, sleep, sociability, nonverbal communication, and developmental milestones. Analysis of the data used to determine progress and the current diagnosis may include automated analysis, such as question scoring and voice recognition for lexical and speech analysis. The analysis may also include human scoring based on review of video, audio, and text data.

In step 2418, the updated assessment model is provided to the therapy module, which determines the progress made as a result of any previously recommended therapies. The therapy module scores treatments based on the amount of progression in the assessment model, wherein greater progression corresponds to a higher score, such that successful treatments and similar treatments are more likely to be recommended to subjects with similar assessments in the future. The available series of treatments is thereby updated to reflect a new assessment of effectiveness in connection with the subject's diagnosis.

In step 2420, new therapies are recommended based on the assessment model, the degree of success (if successful) of previous therapies, and the scores assigned to the set of candidate therapies based on previous uses of these therapies for the subject and subjects with similar assessments. The recommended therapy is sent to the subject for administration along with instructions for the specific time span for its application. For example, a regimen may include language exercises being performed on a patient daily for a week, each exercise being recorded in an audio file of a mobile device used by a caregiver or patient.

In step 2422, the progress of the new therapy is monitored to determine whether to extend the treatment period. The monitoring may include periodic re-diagnostics, which may be performed by returning to step 2410. Alternatively, the underlying milestones may be recorded without complete re-diagnosis, and the progress may be compared to a predicted progress schedule generated by the therapy module. For example, if the therapy was initially unsuccessful, the therapy module may suggest repeating it one or more times, followed by a re-diagnosis and suggestion of a new therapy or suggestion of medical professional intervention.
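
A small sketch of the monitoring decision in step 2422 appears below. The milestone counts, the 80% on-track ratio, the maximum number of repeats, and the returned action labels are placeholder assumptions used only to illustrate the continue/repeat/re-diagnose logic.

# Sketch: compare recorded milestones against the predicted progress schedule
# and decide whether to continue the therapy, repeat it, or re-diagnose.

def monitor_progress(achieved_milestones, predicted_milestones,
                     repeats_so_far, max_repeats=2):
    """Both milestone arguments are counts of milestones expected/achieved to date."""
    if predicted_milestones == 0:
        return "continue"                      # nothing expected yet
    ratio = achieved_milestones / predicted_milestones
    if ratio >= 0.8:
        return "continue"                      # on or near the predicted schedule
    if repeats_so_far < max_repeats:
        return "repeat_therapy"                # retry before re-diagnosing
    return "re_diagnose_or_refer"              # return to step 2410 or suggest professional intervention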

FIG. 25 illustrates a flow chart 2500 showing the processing of suspected or confirmed speech and language delays.

In step 2502, an initial assessment is determined by diagnostic module 2532. The initial assessment may assess the patient's performance in one or more areas, such as speech and language usage, and assess the degree and type of developmental delay in multiple directions, as disclosed herein. The assessment may also place the subject in one of a plurality of overall progress tracks; for example, a subject may be assessed as verbal or non-verbal.

If the subject is non-verbal as determined in step 2510, the therapy module 2534 can recommend one or more non-verbal therapies 2512, such as tasks related to making selections, focused tasks, or responses to names or other statements. Further recommendations of useful devices and products that may contribute to progression may also be provided, and all recommendations may be tailored to the needs of the subject as indicated by the subject's diagnosis and progress reports.

While the recommended therapy is being applied, progress is monitored in step 2514 to determine if the diagnosis improves at a predicted rate.

If an improvement has been measured in step 2514, the system determines in step 2516 whether the subject is still non-verbal; if so, the system returns to step 2510 and generates a new recommended therapy 2512 to induce further improvement.

If no improvement is measured in step 2514, the system may recommend that the therapy be repeated a predetermined number of times. The system may also recommend attempting to change therapy to try and obtain better results. If such repetition and changes fail, the system may recommend a therapist visit in step 2518 to more directly address the problem of stunting.

Once the subject is determined to be verbal, as shown at step 2520, a speech therapy 2522 can be generated by therapy module 2534. For example, speech therapy 2522 can include one or more of language exercises, pronunciation training, and expression requests or communications. Further recommendations of useful devices and products that may contribute to progression may also be provided, and all recommendations may be tailored to the needs of the subject as indicated by the subject's diagnosis and progress reports.

As in nonverbal follow-up, the progress in response to speech therapy is continuously monitored in step 2524 to determine whether the diagnosis improves at a predicted rate.

If an improvement has been measured in step 2524, the system reports progress in step 2526 and generates a new recommended therapy 2522 to induce further improvement.

If no improvement is detected in step 2524, the system may recommend that the therapy be repeated a predetermined number of times. The system may also suggest attempts to alter the therapy to try and obtain better results. If such repetition and changes fail, the system may recommend a therapist visit in step 2528 to more directly address the problem of stunting development.

The steps of nonverbal and verbal therapy can be repeated indefinitely, stimulating the subject to continue learning and progressing to the extent necessary, and preventing or delaying regression due to loss of verbal skills and abilities. While the particular treatment plan illustrated in fig. 25 is for speech and language delay in pediatric patients, similar plans may be generated for other subjects with developmental or cognitive problems, including plans for adult patients. For example, similar diagnostic and treatment schedules may be used to treat neurodegenerative conditions and/or age-related cognitive decline using treatments selected to be appropriate for those conditions. Other conditions that may be treated by the methods and systems disclosed herein in adult or pediatric patients include mood disorders such as depression, OCD, and schizophrenia; cognitive impairment and decline; sleep disorders; addictive behaviors; eating disorders; and behavior-related weight management issues.

Fig. 26 illustrates an overview of the data processing flow of a digitally personalized medical system comprising a diagnostic module and a therapy module configured to integrate information from multiple sources. The data may include passive data sources (2601), which may be configured to provide finer-grained information and may include data sets acquired over longer periods of time under more natural conditions. Passive data sources may include, for example, data collected from wearable devices; data from video feeds (e.g., video-enabled toys, mobile devices, eye tracking data from video playback); information about subject mobility gathered from three-axis sensors or gyroscopes (e.g., sensors embedded in toys or other devices with which a patient may interact, for example at home or under normal conditions outside of a medical environment); and speech patterns, motions, touch response times, prosody, vocabulary, facial expressions, and other characteristics exhibited by the subject. Passive data may include data about one or more actions of a user and may include subtle information that may or may not be readily detected by an untrained individual. In some cases, passive data may provide more inclusive information.

Passively collected data may include data collected continuously from various environments. Passively collected data may provide a more complete view of the subject, which may improve the quality of the assessment. In some cases, for example, passively collected data may include data collected both inside and outside of a medical environment. Passively collected data acquired in a medical environment may be different from passively collected data acquired from outside the medical environment. Thus, the continuously collected passive data may comprise a more complete picture of the general behaviour and habits of the subject and may therefore comprise data or information that would otherwise not be available to the healthcare practitioner. For example, a subject undergoing evaluation in a medical environment may exhibit symptoms, postures, or characteristics that are representative of the subject's response to the medical environment, and thus may not provide a complete and accurate view of the subject's behavior in more familiar conditions outside of the medical environment. The relative importance of one or more features derived from an assessment in a medical environment (e.g., features assessed by a diagnostic module) may be different than the relative importance of one or more features derived or assessed outside of a clinical environment.

The data may include information collected through diagnostic tests, diagnostic questions, or questionnaires (2605). In some cases, the data from the diagnostic test (2605) can include data collected from a second observer (e.g., a parent, guardian, or individual that is not the subject being analyzed). The data may include active data sources (2610), such as data collected from devices configured to track eye movement or measure or analyze language patterns.

As shown in fig. 26, the data input may be fed to a diagnostic module that may include data analysis (2615) using, for example, a classifier, an algorithm (e.g., a machine learning algorithm), or a statistical model to make a diagnosis as to whether the subject is likely to have the tested disorder (e.g., autism spectrum disorder) (2620) or is unlikely to have the tested disorder (2625). The methods and apparatus disclosed herein may alternatively be employed to include a third uncertainty category (not shown in this figure) corresponding to a subject who requires additional assessment to determine whether he/she is likely to have the tested disorder. The methods and apparatus disclosed herein are not limited to disorders and may be applied to other cognitive functions, such as behavioral, neurological, mental health, or developmental conditions. The method and apparatus may initially classify a subject into one of three categories and then proceed to evaluate subjects that were initially classified as "uncertain" by collecting additional information from the subject. Such continuous assessment of subjects initially classified as "uncertain" can be performed continuously through a single screening program (e.g., comprising various assessment modules). Alternatively or additionally, subjects identified as belonging to an indeterminate group may be evaluated using a separate additional screening program and/or referral to a clinician for further evaluation.

In the event that the subject is determined by the diagnostic model to be likely to have the disorder (2620), an information display may be presented to a second party (e.g., a healthcare practitioner, parent, guardian, or other individual). The informational display may provide symptoms of the disorder, which may be displayed as a graph depicting the covariance of the symptoms displayed by the subject and the symptoms displayed by the average population. The list of characteristics associated with a particular diagnosis may be displayed with confidence values, correlation coefficients, or other means for displaying the relationship between the performance of the subject and the average population or a population consisting of individuals with similar disorders.

If the digitally personalized medical system predicts that the user may have a diagnosable condition (e.g., autism spectrum disorder), the therapy module may provide behavioral therapy (2630), which may include behavioral interventions; prescribed activities or training; or interventions with medical devices or other therapies for a particular duration, at a particular time, or under particular circumstances. As the subject undergoes therapy, data (e.g., passive data and diagnostic question data) may continue to be collected to perform subsequent assessments, for example to determine whether the therapy is effective. The collected data may undergo data analysis (2640) (e.g., analysis using machine learning, statistical modeling, classification tasks, predictive algorithms) to make a determination regarding the suitability of the therapy for the given subject. A growth curve display can be used to show the progression of the subject relative to a baseline (e.g., relative to an age-matched cohort). The performance or progression of the individual can be measured to track compliance of the subject, and the suggested behavioral therapy predicted by the therapy module can be expressed as historical and predicted performance on the growth curve. The procedure for assessing the performance of an individual subject may be repeated or iterated (2635) until an appropriate behavioral therapy is identified.

The digital therapy treatment methods and apparatus described with reference to figs. 23A-23C and figs. 24-26 are particularly suitable for combination with the methods and apparatus for evaluating subjects with reduced question sets described herein with reference to figs. 1A-10. For example, components of diagnostic module 2332 as described herein can be configured to assess a subject with a reduced set of questions comprising the most relevant questions as described herein, and therapy module 2334 can then assess the subject with a subsequent set of questions comprising the most relevant questions for monitoring therapy as described herein.

Fig. 27 shows a system 2700 for evaluating a subject for multiple clinical indications. System 2700 may include a plurality of cascaded diagnostic modules (such as diagnostic modules 2720, 2730, 2740, 2750, and 2760). The cascaded diagnostic modules may be operably coupled (such as in a chain of modules) such that an output from one diagnostic module may form an input into another diagnostic module. As shown in fig. 27, the system may include a social or behavioral delay module 2720, an autism or ADHD module 2730, an autism and ADHD discrimination module 2740, a speech or language delay module 2750, and an intellectual disability module 2760. A module as described anywhere herein (e.g., such as the diagnostic modules described with respect to fig. 27) may refer to a module that includes a classifier. Thus, the social or behavioral delay module may include a social or behavioral delay classifier, the autism or ADHD module may include an autism or ADHD classifier, the autism and ADHD discrimination module may include an autism and ADHD discrimination classifier, the speech or language delay module may include a speech or language delay classifier, the intellectual disability module may include an intellectual disability classifier, and so on.
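As a rough sketch of how such a chain of modules might be wired together in code (the evaluate() interface, attribute names, and return values below are hypothetical simplifications for illustration, not the system's actual API), mirroring the flow described in the following paragraphs:

```python
def run_cascade(info, modules):
    """Run cascaded diagnostic modules in order; stop when a result is not positive."""
    results = []
    for module in modules:
        outcome = module.evaluate(info)   # assumed to return "positive", "negative", or "uncertain"
        results.append((module.name, outcome))
        if outcome != "positive":
            break  # downstream modules are only reached after a positive determination
    return results
```

Under this arrangement, a positive determination by one module passes the subject and the input information 2710 to the next module in the chain.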

The social or behavioral delay module 2720 may receive information 2710, such as information from an interactive questionnaire described herein. The social or behavioral delay module may determine the social or behavioral delay diagnostic status of the subject using any of the diagnostic operations described herein. For example, the social or behavioral delay module may utilize any of the operations of procedure 1300 described with respect to fig. 13 to determine the social or behavioral delay diagnostic status (i.e., whether the subject displays behavior consistent with social or behavioral delay). Upon determining the social or behavioral delay diagnostic status, the social or behavioral delay module may output a determination as to whether the subject exhibits social or behavioral delay. The social or behavioral delay module may output a positive identification 2722 indicating that the subject does show social or behavioral delay. The social or behavioral delay module may output a negative indication 2724 indicating that the subject does not show social or behavioral delay. The social or behavioral delay module may output an uncertainty indication 2726 indicating that the social or behavioral delay module cannot determine whether the subject shows social or behavioral delay.

When the social or behavioral delay module determines that the subject does not show social or behavioral delay or that the results of the social or behavioral delay query are uncertain, the system may output such results and stop its query for the subject's social or behavioral well-being.

However, when the social or behavioral delay module determines that the subject does show social or behavioral delay, the social or behavioral delay module may pass the results and information 2710 to the autism or ADHD module 2730.

The autism or ADHD module may determine the autism or ADHD status of the subject using any of the diagnostic procedures described herein. For example, the autism or ADHD module may utilize any of the operations of procedure 1300 described with respect to fig. 13 to determine an autism or ADHD diagnostic status (i.e., whether the subject exhibits behavior consistent with autism or ADHD). Upon determining the autism or ADHD diagnostic status, the autism or ADHD module may output a determination as to whether the subject exhibits autism or ADHD. The autism or ADHD module may output a positive identification 2732 indicating that the subject does display autism or ADHD. The autism or ADHD module may output a negative indication 2734 indicating that the subject does not display autism or ADHD. The autism or ADHD module may output an uncertainty indication 2736 indicating that the autism or ADHD module is unable to determine whether the subject is showing autism or ADHD.

When the autism or ADHD module determines that the subject does not show autism or ADHD or that the result of the autism or ADHD query is uncertain, the system may output such a result and stop its query into the subject's social or behavioral health. In such a scenario, the system may revert to the earlier determination that the subject exhibits social or behavioral delay.

However, when the autism or ADHD module determines that the subject does exhibit autism or ADHD, the autism or ADHD module may pass the results and information 2710 to the autism and ADHD discrimination module 2740.

The autism and ADHD discrimination module may use any of the diagnostic operations described herein to distinguish between autism and ADHD. For example, the autism and ADHD discrimination module may utilize any of the operations of procedure 1300 described with respect to fig. 13 to discriminate between autism and ADHD in a subject (i.e., determine whether the subject exhibits behavior more consistent with autism or with ADHD). Upon making this determination, the autism and ADHD discrimination module may output a determination as to whether the subject displays autism or displays ADHD. The autism and ADHD discrimination module may output an indication 2742 indicating that the subject displays autism. The autism and ADHD discrimination module may output an indication 2744 indicating that the subject displays ADHD. The autism and ADHD discrimination module may output an uncertainty indication 2746 indicating that the autism and ADHD discrimination module cannot distinguish whether the subject's behavior is more consistent with autism or with ADHD.

When the autism and ADHD discrimination module determines that the result of the autism and ADHD discrimination query is uncertain, the system may output such a result and stop its query into the social or behavioral health of the subject. In such a scenario, the system may revert to the earlier determination that the subject exhibits behavior consistent with autism or ADHD.

Alternatively or in combination, the autism and ADHD discrimination module may also be configured to communicate the information 2710 to one or more additional modules. For example, the autism and ADHD discrimination module may be configured to communicate the information to an obsessive-compulsive disorder module (not shown in fig. 27). The obsessive-compulsive disorder module can use any of the systems and methods described herein (such as any of the operations of procedure 1300) to determine whether the subject exhibits behavior consistent with obsessive-compulsive disorder.

Alternatively or in combination, the speech or language delay module 2750 may receive the information 2710. The speech or language delay module can determine the subject's speech or language delay diagnostic status using any of the diagnostic operations described herein. For example, the speech or language delay module may utilize any of the operations of procedure 1300 described with respect to fig. 13 to determine a speech or language delay diagnostic status (i.e., whether the subject exhibits behavior consistent with speech or language delay). Upon determining the speech or language delay diagnostic status, the speech or language delay module may output a determination as to whether the subject exhibits speech or language delay. The speech or language delay module may output a positive identification 2752 indicating that the subject does display speech or language delay. The speech or language delay module may output a negative indication 2754 indicating that the subject does not display speech or language delay. The speech or language delay module may output an uncertainty indication 2756 indicating that the speech or language delay module cannot determine whether the subject shows speech or language delay.

When the speech or language delay module determines that the subject does not show speech or language delay or that the result of the speech or language delay query is inconclusive, the system can output such a result and stop its query into the subject's speech or language health.

However, when the speech or language delay module determines that the subject does exhibit speech or language delay, the speech or language delay module can pass the result and the information 2710 to the intellectual disability module 2760.

The intellectual disability module can determine the intellectual disability status of the subject using any of the diagnostic operations described herein. For example, the intellectual disability module may utilize any of the operations of procedure 1300 described with respect to fig. 13 to determine an intellectual disability diagnostic status (i.e., whether the subject exhibits behavior consistent with an intellectual disability). Upon determining the intellectual disability diagnostic status, the intellectual disability module may output a determination as to whether the subject exhibits an intellectual disability. The intellectual disability module may output a positive identification 2762 indicating that the subject does display an intellectual disability. The intellectual disability module may output a negative indication 2764 indicating that the subject does not display an intellectual disability. The intellectual disability module may output an uncertainty indication 2766 indicating that the intellectual disability module is unable to determine whether the subject exhibits an intellectual disability.

When the intellectual disability module determines that the subject does not show an intellectual disability or that the result of the intellectual disability query is uncertain, the system can output such a result and stop its query into the subject's speech or language health. In such a scenario, the system may revert to the earlier determination that the subject shows speech or language delay.

Alternatively or in combination, the intellectual disability module can also be configured to pass the information 2710 to one or more additional modules. For example, the intellectual disability module can be configured to pass the information to a dyslexia module (not shown in fig. 27). The dyslexia module can use any of the systems and methods described herein (such as any of the operations of procedure 1300) to determine whether the subject exhibits behavior consistent with dyslexia.

Although described with reference to social or behavioral delays, autism, ADHD, obsessive-compulsive disorder, speech or language delays, intellectual disabilities, and reading disabilities, system 2700 may include any number of modules (such as 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or more than 10 modules) that may provide a diagnostic status for any behavioral disorder. The modules may be operably coupled (such as cascaded or linked) in any possible order.

In various embodiments, disclosed herein are machine learning methods for analyzing input data including, for example, images in the context of emotion detection classifiers, parental/video analyst/clinician questionnaires in the context of detecting the presence of behavioral, developmental, or cognitive disorders or conditions, user input or performance (passive or active), or interaction with a digital treatment device (e.g., games or activities configured to facilitate emotion recognition), and other data sources described herein.

Disclosed herein, in various aspects, are platforms, systems, apparatuses, methods, and media that incorporate machine learning techniques (e.g., deep learning with convolutional neural networks). In some cases, provided herein are AI transfer learning frameworks for analyzing image data for emotion detection.

In certain aspects, disclosed herein are machine learning frameworks for generating models or classifiers that detect one or more disorders or conditions, and/or for determining responsiveness or efficacy or likelihood of improvement using, for example, digital therapy configured to facilitate social reciprocity. These models or classifiers may be implemented in any system or device disclosed herein, such as a smartphone, mobile computing device, or wearable device.

In some embodiments, the machine learning model or classifier exhibits performance metrics such as accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and/or AUC when evaluated on independent sample sets. In some embodiments, the performance of the model is assessed using one or more of these metrics on independent sample sets. In some embodiments, the model provides an accuracy of at least 70%, at least 75%, at least 80%, at least 85%, at least 90%, at least 91%, at least 92%, at least 93%, at least 94%, at least 95%, at least 96%, at least 97%, at least 98%, or at least 99% when tested against at least 100, 200, 300, 400, or 500 independent samples. In some embodiments, the model provides a sensitivity (true positive rate) of at least 70%, at least 75%, at least 80%, at least 85%, at least 90%, at least 91%, at least 92%, at least 93%, at least 94%, at least 95%, at least 96%, at least 97%, at least 98%, or at least 99%, and/or a specificity (true negative rate) of at least 70%, at least 75%, at least 80%, at least 85%, at least 90%, at least 91%, at least 92%, at least 93%, at least 94%, at least 95%, at least 96%, at least 97%, at least 98%, or at least 99% when tested against at least 100, 200, 300, 400, or 500 independent samples. In some embodiments, the model provides a Positive Predictive Value (PPV) of at least 70%, at least 75%, at least 80%, at least 85%, at least 90%, at least 91%, at least 92%, at least 93%, at least 94%, at least 95%, at least 96%, at least 97%, at least 98%, or at least 99% when tested against at least 100, 200, 300, 400, or 500 independent samples. In some embodiments, the model provides a Negative Predictive Value (NPV) of at least 70%, at least 75%, at least 80%, at least 85%, at least 90%, at least 91%, at least 92%, at least 93%, at least 94%, at least 95%, at least 96%, at least 97%, at least 98%, or at least 99% when tested against at least 100, 200, 300, 400, or 500 independent samples. In some embodiments, the model has an AUC of at least 0.7, 0.75, 0.8, 0.85, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, or 0.99 when tested against at least 100, 200, 300, 400, or 500 independent samples.
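As a brief, hedged illustration of how such metrics might be computed on an independent sample set (using scikit-learn; the label and score arrays below are placeholder values, not data from the device):

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # ground-truth labels of held-out samples
y_prob = [0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3]    # model scores for the same samples
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]      # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
auc = roc_auc_score(y_true, y_prob)
```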

In some implementations, the machine learning algorithm or model configured to detect emotion in one or more images includes a neural network.

In some embodiments, transfer learning is used to generate a more robust model by first generating a pre-trained model trained on a large image dataset (e.g., from ImageNet), freezing a portion of the model (e.g., several layers of a convolutional neural network), and transferring the frozen portion to a new model trained on a more targeted dataset (e.g., an image accurately labeled with correct facial expressions or emotions).
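A minimal sketch of this kind of transfer learning, assuming a Keras/TensorFlow environment, an ImageNet-pretrained backbone, and a placeholder emotion-labeled dataset (all illustrative assumptions, not the specific architecture used by the device):

```python
import tensorflow as tf

# Pre-trained convolutional base (ImageNet weights), with its layers frozen.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classification head trained on the targeted, emotion-labeled image set.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),   # e.g., seven emotion classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(emotion_images, emotion_labels, epochs=5)  # placeholder training data
```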

In some embodiments, a classifier or a trained machine learning model of the present disclosure includes a feature space. In some implementations, the feature space includes information such as pixel data from an image. When training a model, training data, such as image data, is input into a machine learning algorithm that processes the input features to generate the model. In some embodiments, the machine learning model is provided with training data containing classifications (e.g., diagnostic or test results), enabling the model to be trained by comparing its output to the actual output in order to modify and improve the model. This is commonly referred to as supervised learning. Alternatively, in some implementations, the machine learning algorithm may be provided with unlabeled or unclassified data, which enables the algorithm to identify hidden structure in the data (referred to as unsupervised learning). Unsupervised learning can sometimes help identify the features that are most useful for classifying raw data into different groups.

In some embodiments, the machine learning model is trained using one or more sets of training data. In some embodiments, the machine learning algorithm utilizes a predictive model, such as a neural network, a decision tree, a support vector machine, or other suitable model. In some embodiments, the machine learning algorithm is selected from supervised, semi-supervised, and unsupervised learning approaches, such as, for example, support vector machines (SVMs), naive Bayes classification, random forests, artificial neural networks, decision trees, K-means, learning vector quantization (LVQ), self-organizing maps (SOM), graphical models, regression algorithms (e.g., linear, logistic, multivariate), association rule learning, deep learning, dimensionality reduction, and ensemble selection algorithms. In some embodiments, the machine learning model is selected from the group consisting of: support vector machines (SVM), naive Bayes classification, random forests, and artificial neural networks. Machine learning techniques include bagging procedures, boosting procedures, random forests, and combinations thereof. Illustrative algorithms for analyzing data include, but are not limited to, methods that directly process a large number of variables, such as statistical methods and methods based on machine learning techniques. Statistical methods include penalized logistic regression, prediction analysis of microarrays (PAM), methods based on shrunken centroids, support vector machine analysis, and regularized linear discriminant analysis.
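For illustration only, the following scikit-learn snippet contrasts the supervised and unsupervised settings described above on a synthetic placeholder feature matrix; it is not the device's training procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X = np.random.rand(100, 8)               # 100 samples, 8 input features (synthetic)
y = np.random.randint(0, 2, size=100)    # known labels, e.g., diagnostic outcomes (synthetic)

supervised = RandomForestClassifier(n_estimators=50).fit(X, y)   # learns from labeled data
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)            # finds structure without labels
```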

The platforms, systems, devices, methods, and media described anywhere herein may be used as a basis for treatment planning or administration of a disorder diagnosed by any device or method for diagnosis described herein.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as propranolol, citalopram, escitalopram, sertraline, paroxetine, fluoxetine, venlafaxine, mirtazapine, nefazodone, carbamazepine, divalproex, lamotrigine, topiramate, prazosin, phenelzine, imipramine, diazepam, clonazepam, lorazepam or alprazolam for the treatment of acute stress disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as buspirone, escitalopram, sertraline, paroxetine, fluoxetine, diazepam, clonazepam, lorazepam or alprazolam for the treatment of adjustment disorders.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as diazepam, clonazepam, lorazepam, alprazolam, citalopram, escitalopram, sertraline, paroxetine, fluoxetine or buspirone for the treatment of agoraphobia.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as donepezil, galantamine, memantine or rivastigmine for the treatment of alzheimer's disease.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as olanzapine, citalopram, escitalopram, sertraline, paroxetine or fluoxetine for the treatment of anorexia nervosa.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline, escitalopram, citalopram, fluoxetine, diazepam, buspirone, venlafaxine, duloxetine, imipramine, desipramine, clomipramine, lorazepam, clonazepam or pregabalin that treat anxiety disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs to treat attachment disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer a drug to treat attention deficit/hyperactivity disorder (ADHD/ADD), such as amphetamine (e.g., at a dose of 5mg to 50mg), dexamphetamine (e.g., at a dose of 5mg to 60mg), methylphenidate (e.g., at a dose of 5mg to 60mg), methamphetamine (e.g., at a dose of 5mg to 25mg), dexmethylphenidate (e.g., at a dose of 2.5mg to 40mg), guanfacine (e.g., at a dose of 1mg to 10mg), atomoxetine (e.g., at a dose of 10mg to 100mg), lisdexamfetamine (e.g., at a dose of 30mg to 70mg), clonidine (e.g., at a dose of 0.1mg to 0.5mg), or modafinil (e.g., at a dose of 100mg to 500mg).

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs for the treatment of autism or autism spectrum disorders, such as risperidone (e.g., at a dose of 0.5mg to 20mg), quetiapine (e.g., at a dose of 25mg to 1000mg), amphetamine (e.g., at a dose of 5mg to 50mg), dexamphetamine (e.g., at a dose of 5mg to 60mg), methylphenidate (e.g., at a dose of 5mg to 25mg), dexmethylphenidate (e.g., at a dose of 2.5mg to 40mg), guanfacine (e.g., at a dose of 1mg to 10mg), atomoxetine (e.g., at a dose of 10mg to 100mg), lisdexamfetamine (e.g., at a dose of 30mg to 70mg), clonidine (e.g., at a dose of 0.1mg to 0.5mg), or aripiprazole (e.g., at a dose of 1mg to 10mg).

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs that treat bereavement-related distress, such as citalopram, duloxetine or doxepin.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer a drug for the treatment of binge eating disorder, such as, for example, dexamphetamine.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as topiramate, lamotrigine, oxcarbazepine, haloperidol, risperidone, quetiapine, olanzapine, aripiprazole or fluoxetine for the treatment of bipolar disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer a medicament for the treatment of body dysmorphic disorder, such as sertraline, escitalopram or citalopram.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as clozapine, asenapine, olanzapine, or quetiapine that treat brief psychotic disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline or fluoxetine for the treatment of bulimia nervosa.

The platforms, systems, devices, methods, and media described anywhere herein can be used to administer drugs that treat catatonia, such as lorazepam, diazepam, or clobazam.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat cyclothymic disorder.

The platforms, systems, devices, methods and mediums described anywhere herein may be used to administer a drug for the treatment of delusional disorder, such as clozapine, asenapine, risperidone, venlafaxine, bupropion or buspirone.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline, fluoxetine, alprazolam, diazepam or citalopram that treat depersonalization disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline, fluoxetine, citalopram, bupropion, escitalopram, venlafaxine, aripiprazole, buspirone, vortioxetine or vilazodone for the treatment of depression.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat disinhibited social engagement disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as quetiapine, clozapine, asenapine or pimavanserin to treat disruptive mood dysregulation disorder.

The platforms, systems, devices, methods and media described anywhere herein can be used to administer drugs such as alprazolam, diazepam, lorazepam or chlordiazepoxide to treat dissociative amnesia.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as bupropion, vortioxetine, or vilazodone to treat a dissociative disorder.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat dissociative fugue, such as amobarbital, aprobarbital, butabarbital, or methohexital.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat dissociative identity disorder.

The platforms, systems, devices, methods, and media described anywhere herein can be used to administer a drug for treating a reading disorder, such as amphetamine (e.g., at a dose of 5mg to 50mg), dextroamphetamine (e.g., at a dose of 5mg to 60mg), methylphenidate (e.g., at a dose of 5mg to 60mg), methamphetamine (e.g., at a dose of 5mg to 25mg), dexmethylphenidate (e.g., at a dose of 2.5mg to 40mg), guanfacine (e.g., at a dose of 1mg to 10mg), atomoxetine (e.g., at a dose of 10mg to 100mg), lisdexamfetamine (e.g., at a dose of 30mg to 70mg), clonidine (e.g., at a dose of 0.1mg to 0.5mg), or modafinil (e.g., at a dose of 100mg to 500mg).

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as bupropion, venlafaxine, sertraline or citalopram for the treatment of dysthymic disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as olanzapine, citalopram, escitalopram, sertraline, paroxetine or fluoxetine for the treatment of eating disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat expressive language disorders.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs that treat gender dysphoria, such as estrogen, progestin or testosterone.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as venlafaxine, duloxetine, buspirone, sertraline or fluoxetine that treat generalized anxiety disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as buspirone, sertraline, escitalopram, citalopram, fluoxetine, paroxetine, venlafaxine or clomipramine for the treatment of hoarding disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs for the treatment of intellectual disabilities.

The platforms, systems, devices, methods and mediums described anywhere herein may be used to administer a drug for the treatment of intermittent explosive disorders, such as asenapine, clozapine, olanzapine or pimavanserin.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as escitalopram, fluvoxamine, fluoxetine or paroxetine to treat kleptomania.

The platforms, systems, devices, methods, and mediums described anywhere herein may be used to administer a medicament for treating a mathematical disorder.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as buspirone (e.g., at a dose of 5mg to 60mg), sertraline (e.g., at a dose of up to 200mg), escitalopram (e.g., at a dose of up to 40mg), citalopram (e.g., at a dose of up to 40mg), fluoxetine (e.g., at a dose of 40mg to 80mg), paroxetine (e.g., at a dose of 40mg to 60mg), venlafaxine (e.g., at a dose of up to 375mg), clomipramine (e.g., at a dose of up to 250mg), or fluvoxamine (e.g., at a dose of up to 300mg) for the treatment of obsessive-compulsive disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat oppositional defiant disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as bupropion, vilazodone or vortioxetine to treat panic disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as rivastigmine, selegiline, rasagiline, bromocriptine, amantadine, cabergoline, or benztropine to treat parkinson's disease.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as bupropion, vilazodone, or vortioxetine for the treatment of pathological gambling.

The platforms, systems, devices, methods, and mediums described anywhere herein may be used to administer drugs that treat pica.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline, fluoxetine, citalopram, bupropion, escitalopram, venlafaxine, aripiprazole, buspirone, vortioxetine or vilazodone for the treatment of postpartum depression.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer a drug that treats post-traumatic stress disorder, such as sertraline, fluoxetine or paroxetine.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as estradiol, drospirenone, sertraline, citalopram, fluoxetine or buspirone that treat premenstrual dysphoric disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as dextromethorphan hydrobromide or quinidine sulfate to treat pseudobulbar affect.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer a drug for the treatment of a psychotic disorder, such as clozapine, asenapine, olanzapine, paliperidone or quetiapine.

The platforms, systems, devices, methods and mediums described anywhere herein may be used to administer a medicament for treating reactive attachment disorder.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat reading disorders.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs to treat Rett syndrome.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer a medicament for treating rumination disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as sertraline, carbamazepine, oxcarbazepine, valproic acid, haloperidol, olanzapine or loxapine for the treatment of schizoaffective disorders.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as chlorpromazine, haloperidol, fluphenazine, risperidone, quetiapine, ziprasidone, olanzapine, perphenazine, aripiprazole or prochlorperazine for the treatment of schizophrenia.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as paliperidone, clozapine, or risperidone for the treatment of schizophreniform disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer a medicament for treating seasonal affective disorder, such as sertraline or fluoxetine.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs for the treatment of separation anxiety disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as clozapine, pimavanserin, risperidone or lurasidone to treat shared psychotic disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs that treat social (pragmatic) communication disorders.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as amitriptyline, bupropion, citalopram, fluoxetine, sertraline or venlafaxine that treat social anxiety disorder (social phobia).

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs that treat somatic symptom disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as diazepam, estazolam, quazepam or alprazolam that treat specific phobias.

The platforms, systems, devices, methods, and media described anywhere herein may be used to administer drugs such as risperidone or clozapine that treat stereotypic movement disorder.

The platforms, systems, devices, methods, and mediums described anywhere herein may be used to administer drugs that treat stuttering.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as haloperidol, fluphenazine, risperidone, ziprasidone, pimozide, perphenazine or aripiprazole for the treatment of Tourette's disorder.

The platforms, systems, devices, methods and media described anywhere herein may be used to administer drugs such as guanfacine, clonidine, pimozide, risperidone, citalopram, escitalopram, sertraline, paroxetine or fluoxetine that treat transient tic disorders.

Fig. 28 illustrates a container of a drug that may be administered in response to a diagnosis made by the platforms, systems, devices, methods, and media described herein. The container may have a label 2810 carrying the instruction "administer drug y if diagnosed with disorder x". The disorder x may be any disorder described herein. Drug y may be any drug described herein.

FIG. 29 shows a diagram of a platform for assessing an individual as described herein. The platform architecture as shown in fig. 29 includes various input sources, in particular a caregiver or user mobile application or device 2901, a video analyst portal 2902, and a healthcare provider dashboard 2903. These input data sources communicate with the rest of the platform via the internet 2914, which itself interfaces with the video storage service 2912 and the load balancer gateway 2916. The load balancer gateway 2916 is in operable communication with an application server 2918, which utilizes an indexing service 2924 and an algorithm and questionnaire service 2926 to assist in data analysis. The application server 2918 may obtain data from the video storage service 2912 and the master database 2910 for analysis. Logging or auditing services may also be used to log events, such as which user data was accessed and how the data was used, to help ensure privacy and HIPAA compliance.

FIG. 30 shows a non-limiting flow diagram for evaluating an individual. A caregiver or healthcare provider raises a concern about the child 3001, and the healthcare provider then prescribes the device for ASD evaluation of the child 3002, determining that use of the device is appropriate and explaining its use to the caregiver. The caregiver then completes the first module, containing the caregiver questionnaire, and uploads the responses and 2 videos 3003. Next, a video analyst evaluates the uploaded videos 3004 and provides responses to complete the second module. The healthcare provider may also complete a third module at his or her discretion, which contains a clinician/healthcare provider questionnaire 3005. The third module may be completed during or outside of an appointment with the child. The device then returns an assessment result 3006. In the case of a positive 3007 or negative 3008 ASD assessment, the healthcare provider reviews the results in conjunction with the clinical presentation to make a diagnosis. The final assessment result is either a positive ASD diagnosis 3010 or a negative ASD diagnosis 3011.

FIG. 31A illustrates a login screen for assessing an individual's mobile device according to the platforms, systems, devices, methods, and media described herein. The login may contain a username and password for accessing a personal account associated with the caregiver and/or the subject to be assessed.

Fig. 31B shows a screen of the mobile device indicating completion of the user portion of the ASD evaluation, e.g., of the first rating module.

Fig. 31C shows a screen of a mobile device providing instructions for capturing video of a subject suspected of having ASD. The screen shows user-selectable interactive elements to initiate video recordings of the first video and the second video corresponding to different play times of the subject.

Fig. 31D, 31E, and 31F illustrate screens of a mobile device prompting a user to answer questions for assessing a subject in accordance with the platforms, systems, devices, methods, and media described herein.

FIG. 32 shows a display screen of a video analyst portal displaying questions as part of a video analyst questionnaire. The response to the questionnaire may form part of the input to the rating model(s) or classifier(s), e.g. in a second rating module as described herein.

FIG. 33 illustrates a display screen of a healthcare provider portal displaying questions as part of a healthcare provider questionnaire. The responses to the questionnaire may form part of the input to the rating model(s) or classifier(s), e.g., in a third rating module as described herein.

FIG. 34 illustrates a display screen of a healthcare provider portal displaying uploaded information for individuals containing videos and completed caregiver questionnaires according to the platforms, systems, apparatus, methods, and media described herein.

Fig. 35 shows a diagram of a platform including mobile device software and server software for providing digital therapy to a subject as described herein. The mobile device software includes an augmented reality gaming module 3501, an emotion recognition engine 3502, a video recording/playback module 3503, and a video commentary game 3504 (e.g., an emotion guessing or recognition game). The server software includes API services 3510, application database 3511, video store 3512, healthcare provider portal 3513, and healthcare provider or therapist review portal 3514 on the local computing device.

Fig. 36 illustrates a diagram of an apparatus configured to provide digital therapy according to the platforms, systems, apparatuses, methods, and media described herein. In this illustrative example, the device is a smartphone 3601 having an outward facing camera that allows a user to capture one or more images (e.g., photos or videos) of another person 3603. Face tracking is performed to identify one or more faces 3604 within the one or more images. The identified faces are analyzed in real time for emotion classification 3605. The classification is performed using a classifier configured to classify the face as exhibiting a selected emotion from a variety of emotions 3606. In this example, smartphone 3601 is in an unstructured play or otherwise free-roaming mode, where classified emotions are depicted on the display screen with corresponding emoticons 3602 to provide dynamic or real-time feedback to the user.

Fig. 37 shows an operational flow for combining digital diagnosis and digital treatment. In this non-limiting embodiment, the digital diagnostic operation includes application 3701 of diagnostic input modalities (e.g., input corresponding to a parent/caregiver questionnaire, a clinician questionnaire, video-based input, sensor data, etc.). The input data is then used to calculate internal diagnostic dimensions 3702; for example, the subject may be projected onto a multi-dimensional diagnostic space based on the input data. The diagnostic dimensions are projected into a scalar output 3703. This scalar output is then evaluated against a threshold 3704. For example, the threshold may be a scalar value that determines the boundary between a positive determination, a negative determination, and optionally an uncertain determination of the presence of the disorder, condition, or injury, or category or group thereof. A result or prediction 3705 is thereby generated. The result or prediction may be a predicted medical diagnosis and/or may be considered by a clinician when making a medical diagnosis. Next, a treatment prescription 3706 may be issued based on the diagnosis or the results of the diagnostic procedure. The digital treatment operation includes obtaining or receiving the internal diagnostic dimensions 3707 from the digital diagnostic operation. A customized and/or optimized treatment plan 3708 is then generated based on the internal diagnostic dimensions 3707 and the prescription 3706. The digital treatment protocol 3709 is then administered, for example, by the same computing device used to diagnose or evaluate the subject. The digital treatment protocol can include one or more activities or games determined to increase or maximize improvement in one or more functions of the subject associated with the diagnosed disorder, condition, or injury. For example, an activity or game may include emotion cue recognition activities using facial recognition and automatic real-time emotion detection implemented via a smartphone or tablet computer. User progress 3710 may be tracked and stored in association with a particular user or subject. Progress tracking allows performance to be monitored and the game or activity to be adjusted or changed based on progress over a period of time. For example, the customized treatment regimen may be shifted away from activities or games on which the subject already performs well, or alternatively, the difficulty level may be increased.
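A rough, hypothetical sketch of this flow in code is shown below; the projection matrix, weights, and threshold are illustrative stand-ins rather than the actual diagnostic model:

```python
import numpy as np

def diagnostic_dimensions(inputs: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project raw input features onto internal diagnostic dimensions (cf. 3702)."""
    return projection @ inputs

def scalar_output(dimensions: np.ndarray, weights: np.ndarray) -> float:
    """Collapse the diagnostic dimensions into a single scalar (cf. 3703)."""
    return float(weights @ dimensions)

def evaluate(inputs, projection, weights, threshold=0.0):
    dims = diagnostic_dimensions(inputs, projection)
    score = scalar_output(dims, weights)
    result = "positive" if score > threshold else "negative"   # threshold comparison (cf. 3704)
    return result, dims   # the dimensions are reused to seed the treatment plan (cf. 3707-3708)
```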

Examples of the invention

Example 1-rating Module

The smartphone device is configured with a series of assessment modules configured to obtain data and evaluate the data to generate an assessment of the individual.

Module 1 — caregiver assessment

The caregiver assessment is designed to explore behavioral patterns similar to those explored by standardized diagnostic tools such as the Autism Diagnostic Interview-Revised (ADI-R), but presented in a simplified manner for ease of understanding by the caregiver.

The device presents a minimum set of the most predictive questions to the caregiver to identify key behavioral patterns. A series of multiple choice questions is provided to the caregiver based on the child's age, typically taking 10-15 minutes to complete.

For children of 18 to 47 months, the caregiver will be asked to answer 18 multiple choice questions, which are classified into the following categories:

Non-verbal communication

Social interactions

Abnormal sensory interest/response.

For a child of 48 to 71 months, the caregiver will be asked to answer 21 multiple choice questions, which are classified into the following categories:

Non-verbal communication

Verbal communication

Social interactions

Abnormal sensory interest/response

Repetitive/restrictive behavior or interest.

Module 2-video analysis

Module 2 requires the caregiver to upload 2 videos, each at least 1 minute in length, taken during the child's natural play with toys and other people at home. Detailed instructions are provided to the caregiver in the application. The videos are securely uploaded to a HIPAA-compliant server. Each submission is scored independently by video analysts, each of whom evaluates the observed behavior by answering a series of multiple choice questions assessing ASD phenotypic characteristics across the combined videos. The video analysts have no access to the caregiver responses in module 1 or the HCP responses in module 3.

For children 18-47 months old, the video analyst evaluated the behavior of the child with 33 questions, while children 48-71 months old were evaluated with 28 questions, which were classified into the following categories:

non-verbal and verbal communication

Social interactions

Abnormal sensory interest/response

Hand flapping or repetitive motions, object use, or speech.

For each question, the analyst may choose: "video does not provide sufficient opportunity for reliable assessment". Furthermore, the analyst may consider the submission to be non-scorable if one or more videos are not usable for any reason, such as poor lighting, poor video or audio quality, a poor vantage point, the child not being in frame or not being identifiable, or insufficient interaction with the child. In that case, the caregiver will be notified and asked to upload additional video.

The underlying algorithm of the medical device will use the questionnaire answers from each of the video analysts separately, as follows: for each of the analysts, the fully answered questionnaire will be input as a set of input features to the module 2 algorithm, which will output a numerical response internally. This will be repeated for each of the analysts individually, producing a set of numerical responses. The numerical responses are then averaged and the average of the responses will be taken as the overall output of module 2. The output of module 2 is then combined with the outputs of the other modules to arrive at a single classification result.
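A small sketch of this aggregation step is shown below, where score_questionnaire stands in for the module 2 model and is a hypothetical placeholder:

```python
def module2_output(analyst_questionnaires, score_questionnaire):
    """Average the per-analyst numerical responses to form the module 2 output."""
    scores = [score_questionnaire(answers) for answers in analyst_questionnaires]
    return sum(scores) / len(scores)
```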

Module 3-healthcare provider assessment

The HCP is provided with a series of questions based on the age of the child. For children between 18 and 47 months, the HCP is asked to answer 13 multiple choice questions. For children between 48 and 71 months, the HCP is asked to answer 15 multiple choice questions. Before completing module 3, the HCP does not have access to the caregiver responses in module 1, nor to the video analyst responses in module 2. The questions are classified into the following categories:

Development

Language and communication

Sensory, repetitive and stereotyped behaviors

Social interaction

Algorithm output

After all three (3) modules are completed, the inputs are evaluated to determine whether there is sufficient information to make a determination.

Dynamic algorithm for generating the determination:

Exploiting non-observable co-dependencies and non-linearities in the information

Identifying a minimum set of maximally predictive features

"next most relevant" information can be dynamically replaced to generate a diagnostic output

The basis of each of modules 1, 2 and 3, which make up the medical device, is an independently trained machine learning predictive model. Each of the three models was trained offline using a specialized training set consisting of thousands of historical medical instrument scorecard samples at the question-and-answer item level and corresponding diagnostic labels, such that the training process is a supervised machine learning run. The machine learning algorithm framework is a GBDT (gradient boosted decision tree) which, when trained on the data in the training set, generates a set of automatically created decision trees, each using some of the input features in the training set and each generating a scalar output when run on new feature data associated with a new patient submission. The scalar outputs of each of the trees are summed to obtain an overall scalar output for the classification model. Thus, when used for prediction, each of the three modules outputs a single scalar value, which is considered an intermediate output of the overall algorithm.
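As a hedged illustration of the per-module GBDT behavior described above (shown with scikit-learn's gradient boosting as a stand-in and random placeholder training arrays, not the device's actual training data or hyperparameters):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

X_train = np.random.rand(1000, 20)   # placeholder questionnaire item responses
y_train = np.random.rand(1000)       # placeholder targets derived from diagnostic labels

module_model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
module_model.fit(X_train, y_train)   # the ensemble's trees each contribute to a summed scalar

new_submission = np.random.rand(1, 20)
scalar_output = float(module_model.predict(new_submission)[0])   # intermediate module output
```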

The scalar outputs from each of the three classification algorithms are passed as inputs to a second stage combined classification model that is independently trained on 350 historical data submissions collected in a clinical study. This combination model is probabilistic in nature and is trained to consider covariance matrices between all three individual module classifiers. It outputs a single scalar value representing the combined output of all three modules, and then compares its output to a preset threshold to produce a categorization result that can be considered as determining whether the child is ASD positive or ASD negative.

The device is also designed not to output a result when the prediction is weak. If a classification determination is not provided, the healthcare provider will be informed that the device is unable to provide a result for Autism Spectrum Disorder (ASD) at that point in time ("no result"). In particular, the patient may exhibit features of sufficient number and/or severity that the patient cannot confidently be placed by the algorithm classifier in the ASD negative category, yet of insufficient number and/or severity for the patient to be confidently placed by the algorithm classifier in the ASD positive category. In these cases, the algorithm does not provide a result (the "no result" case). For most patients, the algorithm will provide one of the two diagnostic outputs: ASD positive or ASD negative.
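A minimal sketch of the second-stage combination and "no result" band, under the assumption of an already-trained combiner function and illustrative cutoff values (not the device's trained probabilistic model or preset threshold):

```python
def combine_and_classify(m1: float, m2: float, m3: float,
                         combiner, lower: float = 0.4, upper: float = 0.6) -> str:
    """Combine the three module scalars and map the result to a categorical output."""
    combined = combiner([m1, m2, m3])   # e.g., probability-like score from the 2nd-stage model
    if combined >= upper:
        return "ASD positive"
    if combined <= lower:
        return "ASD negative"
    return "no result"                  # prediction too weak for a confident determination
```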

Example 2-patient assessment overview

During a patient visit, a healthcare provider (HCP) develops a concern about a child's development based on observations and/or the caregiver's concerns. The HCP then prescribes the device configured with the digital application and provides the caregiver with an overview of the device. Once the pharmacy dispenses the device, the caregiver accesses the application. The caregiver leaves the HCP's office, downloads the application and creates an account. The caregiver is then prompted to answer questions in the application regarding the child's behavior/development (module 1). Upon completion, the caregiver needs to record and upload two videos of the child in the child's natural home environment. Detailed instructions are provided in the application. If the videos are too short, too long, or not in compliance with the technical instructions, the caregiver will not be able to upload them and will be given additional instructions as to what needs to be corrected in order to continue. Once the videos are uploaded, the caregiver is notified that they will be contacted regarding subsequent steps.

Once the videos are uploaded, a trained video analyst is prompted to review the uploaded videos via the video analyst portal. The video analyst cannot see the caregiver responses in module 1 or the HCP responses in module 3. The video analyst answers questions regarding the child's behavior shown in the videos according to the prescribed requirements and quality controls (module 2). If a video analyst deems a video not "assessable," the caregiver may be notified that additional videos need to be uploaded.

Once the device is assigned, the HCP is prompted by Cognoa to answer a set of questions regarding the child's behavior/development (module 3). The HCP follows standard practice guidelines to complete the documentation of module 3. Before responding to the questions in module 3, the HCP cannot see the caregiver responses in module 1 or the video analyst responses in module 2.

Once all 3 modules are completed, the dynamic machine learning algorithm evaluates and combines the inputs of the modules through a complex multi-level decision tree to provide an output. The HCP is notified to log into the HCP dashboard and review the overall assessment of the device, along with the instructions for use of the device, which indicate that the results should be used in conjunction with the patient's clinical presentation.

The HCP, in conjunction with a medical assessment of the child's clinical presentation, reviews the results of the device to make a definitive diagnosis within his or her scope of practice. The results of the device will help the HCP diagnose ASD, or determine that the child does not have ASD.

In some cases, the HCP will be notified that the device cannot provide a result. In these cases, the HCP must make the best decision for the patient based on his or her own judgment; in this situation the device does not suggest anything, nor does it provide additional clinical guidance or next-step guidance to the HCP.

Finally, after the device presents the output, the HCP will have access to the caregiver's response to module 1, the raw patient video, and clinical performance test data about the device.

Example 3-ASD Positive assessment scenario

ASD Positive Scenario A

During a patient examination in a primary care environment, a licensed healthcare provider is concerned about the development of a 2-year-old child based on observations and the caregiver's concerns. The patient has a speech delay, and his mother indicates that he does not react to his name when called, although his hearing assessment is normal, and that he is easily bothered by soft sounds. The primary care provider assesses whether use of the Cognoa device is appropriate based on the device's label and instructs the caregiver to use the device according to the prescription.

The caregiver leaves the clinic, downloads the software, completes module 1 and uploads the patient's video. The video analyst scores the submitted videos via the analyst portal to complete module 2. The healthcare provider accesses module 3 via the provider portal and completes the healthcare provider questionnaire. The device analyzes information provided in view of key developmental behaviors most indicative of autism, and notifies the healthcare provider once the device results are obtained. The healthcare provider will receive a report indicating that the patient is "positive for ASD" and that the support data used to determine the result is available for review by the healthcare provider.

The healthcare provider reviews the results, determines that the results match the clinical presentation, and provides a diagnosis of ASD in a face-to-face visit with the caregiver, explaining the diagnosis and prescribing treatment according to the recommendations of the American Academy of Pediatrics.

ASD Positive Scenario B

During a patient examination in a primary care environment, a licensed healthcare provider assesses the development of a three-and-a-half-year-old child. The patient has odd language usage but no delay in speech. Her parents report that she also makes strange, repetitive noises. She seems to lack awareness of danger and often intrudes on strangers' personal space. The healthcare provider assesses the suitability of using the device according to its label and instructs the caregiver to use the device according to the prescription.

The caregiver leaves the clinic, downloads the software, completes module 1 and uploads the patient's video. The video analyst scores the submitted videos via the analyst portal to complete module 2. The healthcare provider accesses module 3 via the provider portal and completes the healthcare provider questionnaire. The device analyzes information provided in view of key developmental behaviors most indicative of autism, and notifies the healthcare provider once the device results are obtained. The healthcare provider will receive a report indicating that the patient is "positive for ASD" and that the support data used to determine the result is available for review by the healthcare provider.

The healthcare provider reviews the device results and determines that they are most consistent with ASD. The healthcare provider provides a diagnosis of ASD in a face-to-face visit with the caregiver, explains the diagnosis, and prescribes treatment according to the recommendations of the American Academy of Pediatrics.

Example 4 - ASD negative assessment scenarios

ASD Negative Scenario A

During a patient examination in a primary care setting, a licensed healthcare provider assesses the development of a 5-year-old child. The patient is hyperactive and easily distracted. His mother says that he does not respond to his name when called and that she needs to call him several times before he responds. The patient also struggles with peer relationships and has difficulty making friends. The healthcare provider is concerned about possible autism but suspects ADHD is more likely. The healthcare provider assesses the suitability of using the device according to its labeling and instructs the caregiver to use the device as prescribed. The healthcare provider also asks the parents and the kindergarten teacher to complete Vanderbilt ADHD rating scales.

The caregiver leaves the clinic, downloads the software, completes Module 1, and uploads the patient's videos. The video analyst scores the submitted videos via the analyst portal to complete Module 2. The healthcare provider accesses Module 3 via the provider portal and completes the healthcare provider questionnaire. The device analyzes the information provided in view of the key developmental behaviors most indicative of autism and notifies the healthcare provider once the device results are available. The healthcare provider receives a report indicating that the patient is "negative for ASD", and the supporting data used to determine the result is available for the healthcare provider's review.

The healthcare provider reviews the device results and the Vanderbilt assessments and determines that the presentation is most consistent with ADHD. The healthcare provider provides a diagnosis of ADHD, predominantly hyperactive type, in a face-to-face visit with the caregiver, explains the diagnosis, and prescribes therapy according to the recommendations of the American Academy of Pediatrics.

The healthcare provider monitors the patient's response to behavioral therapy and prescribes a non-stimulant ADHD medication, keeping the possibility of ASD in the differential diagnosis. The patient responds well to therapy and medication and no longer exhibits signs associated with ASD, reinforcing the diagnosis of ADHD.

ASD Negative Scenario B

During a patient examination in a primary care setting, a parent reports that the 18-month-old patient's sibling was diagnosed with autism, and his father has noted some aggressive episodes and possible stereotypy. The patient has reached all of his developmental milestones, and his examination and interactions at the clinic are age-appropriate. The father shows the healthcare provider a video of the patient displaying stereotypy similar to his sibling's. The healthcare provider assesses whether use of the Cognoa device is appropriate based on the device's labeling and instructs the caregiver to use the device as prescribed. The caregiver leaves the clinic, downloads the software, completes Module 1, and uploads the patient's videos. The Cognoa video analyst scores the submitted videos via the Cognoa analyst portal to complete Module 2. The healthcare provider accesses Module 3 via the Cognoa provider portal and completes the healthcare provider questionnaire.

The device analyzes the information provided in view of the key developmental behaviors most indicative of autism and notifies the healthcare provider once the device results are available. The healthcare provider receives a report indicating that the patient is "negative for ASD", and the supporting data used to determine the result is available for the healthcare provider's review. The healthcare provider reviews the results of the Cognoa device and determines that the patient is likely imitating his sibling. The healthcare provider monitors the patient's development and provides parenting guidance on redirection when the patient exhibits aggressive or stereotyped behavior.

Example 5 - ASD indeterminate assessment scenarios

ASD Indeterminate Scenario A

During a patient examination in a primary care setting, the parents of a five-and-a-half-year-old child report learning difficulties, and the school has recommended an individualized education program evaluation for possible placement in special education. The patient rarely makes eye contact with the clinic's healthcare provider and is expressive when answering questions. There is no evidence of neglect or abuse, and no hallucinations are reported. Laboratory evaluations show a normal CBC, CMP, and TSH. The healthcare provider assesses whether use of the Cognoa device is appropriate based on the device's labeling and instructs the caregiver to use the device as prescribed. The caregiver leaves the clinic, downloads the software, completes Module 1, and uploads the patient's videos.

The video analyst scores the submitted videos via the analyst portal to complete Module 2. The healthcare provider accesses Module 3 via the provider portal and completes the healthcare provider questionnaire. The device analyzes the information provided in view of the key developmental behaviors most indicative of autism and informs the healthcare provider that, based on the information provided, the device cannot provide a result regarding ASD at this time. The device's involvement ends at this point.

At this point, the HCP uses professional judgment to determine the next steps for the patient.

ASD Indeterminate Scenario B

A 5-year-old child has a history of speech delay but has made progress in speech therapy. Since starting kindergarten, his teacher has noticed that he often argues with adults, is prone to temper tantrums, refuses to follow rules, blames others for his own mistakes, deliberately annoys others, and acts in an angry, resentful, and vindictive manner. The parents report these problems to the child's primary healthcare provider. The healthcare provider assesses the suitability of using the device according to its labeling and instructs the caregiver to use the device as prescribed. The caregiver leaves the clinic, downloads the software, completes Module 1, and uploads the patient's videos.

The video analyst scores the submitted videos via the analyst portal to complete Module 2. The healthcare provider accesses Module 3 via the provider portal and completes the healthcare provider questionnaire. The device analyzes the information provided in view of the key developmental behaviors most indicative of autism and informs the healthcare provider that, based on the information provided, the device cannot provide a result regarding ASD at this time. The device's involvement ends at this point.

At this point, the HCP uses professional judgment to determine the next steps for the patient.

Example 6 - Emotion recognition digital therapy

The patient is assessed using the device described in any of the preceding embodiments and determined to be ASD positive. The device used for assessment, and/or a different device, is configured with a digital treatment application for treating the patient through emotion recognition training (the "treatment device"). In this case, the device is a smartphone configured with a mobile application for providing digital treatment. The HCP prescribes the device and/or mobile application for treating the patient. The patient, or a parent or caregiver, is given access to the treatment device and registers for and logs into a personal account in the mobile application. The mobile application provides selectable modes for the patient, including activity modes that comprise emotion elicitation activities, emotion recognition activities, and unstructured play.

The patient, parent, or caregiver selects unstructured play, which causes the device to activate the camera and display a graphical user interface that dynamically performs facial recognition and emotion detection/classification in real time as the patient points the outward-facing camera at other people. When the patient points the camera at a particular individual, the image of that individual is analyzed to identify at least one emotion, and the graphical user interface displays the emotion or a representation of it (e.g., an emoticon or a word describing or corresponding to the emotion). This allows the patient to observe and learn the emotion(s) displayed by the person being observed with the camera. In some cases, the emotion displayed on the interface may be delayed to give the patient time to attempt to identify the emotion before the "answer" is given. Each positively identified emotion and its corresponding image(s) are then stored in an image library.
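
The following is a minimal sketch, in Python, of how one frame of this unstructured-play loop could be handled: detect a face, classify its emotion, and add positively identified emotions to the image library for reuse by later activities. The `detector` and `classifier` objects, their `detect`/`predict` calls, and the 0.6 confidence threshold are hypothetical placeholders rather than the device's actual implementation.

```python
# Sketch of one unstructured-play frame update: detect a face, classify its
# emotion, and store positively identified emotions in an image library so
# later activities can reuse them. FaceDetector / EmotionClassifier calls are
# hypothetical stand-ins for whatever models the device actually uses.

from dataclasses import dataclass, field
from time import time
from typing import Any, List, Optional

@dataclass
class LibraryEntry:
    emotion: str        # e.g. "happy", "sad", "angry"
    frame: Any          # cropped face image
    timestamp: float

@dataclass
class ImageLibrary:
    entries: List[LibraryEntry] = field(default_factory=list)

    def add(self, emotion: str, frame: Any) -> None:
        self.entries.append(LibraryEntry(emotion, frame, time()))

def process_frame(frame: Any, detector: Any, classifier: Any,
                  library: ImageLibrary,
                  min_confidence: float = 0.6) -> Optional[str]:
    """Return the emotion label the UI should display for this frame, if any."""
    faces = detector.detect(frame)                        # hypothetical call
    if not faces:
        return None
    face_crop = faces[0].crop(frame)                      # most prominent face
    emotion, confidence = classifier.predict(face_crop)   # hypothetical call
    if confidence < min_confidence:                       # assumed threshold
        return None
    library.add(emotion, face_crop)
    # The UI layer may delay showing this label to give the child time to guess.
    return emotion
```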

The caregiver coordinates a digital treatment session in which the child uses the smartphone to walk around the home, office, or another familiar environment and "find", or attempt to elicit, the emotion requested by an audio prompt in the application. Typically, in a home environment, the emotions are produced by the caregiver; the instructions to the caregiver are to reproduce the requested emotion or to deliberately make an incorrect face. When the device is used in an area with multiple people, the caregiver instructions direct the caregiver to help the child find an individual showing the prompted facial expression; if none can be found, the caregiver may choose to reproduce the emotion or prompt another person nearby to do so without alerting the child. The child points the phone camera at the individual they believe is expressing the prompted emotion; the mobile application has an augmented reality (AR) component in which the child is alerted when a face is detected. The screen then provides real-time audio and visual feedback when the child correctly labels the emotional expression displayed on the face (e.g., an emoticon with the corresponding emotion is displayed on the screen in real time). As the child continues to use the product, the emoticons remain on the screen in the augmented reality environment.
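
A sketch of one prompted-emotion round in such a session is given below, assuming simple callable hooks (`speak`, `detect_emotion_in_frame`, `show_emoji`) for the audio prompt, the emotion classifier, and the AR overlay; these names and the round structure are illustrative assumptions only.

```python
# Sketch of the prompted-emotion flow during a caregiver-coordinated session:
# the app announces a target emotion, and when the child frames a face, the
# detected emotion is compared with the prompt to drive real-time AR feedback.

import random

EMOTIONS = ["happy", "sad", "angry", "surprised"]

def run_prompted_round(camera_frames, speak, detect_emotion_in_frame, show_emoji):
    target = random.choice(EMOTIONS)
    speak(f"Can you find someone who looks {target}?")    # audio prompt
    for frame in camera_frames:                           # stream of AR frames
        detected = detect_emotion_in_frame(frame)         # None if no face found
        if detected is None:
            continue
        show_emoji(detected)                              # real-time AR overlay
        if detected == target:
            speak("Great job! You found it!")             # positive feedback
            return True
    return False
```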

After the patient has collected a number of images in the image library, the patient switches out of the unstructured play activity and selects an emotion recognition activity. The patient then selects either an emotion recognition game or the emotion guessing game to reinforce learning.

The emotion guessing game draws on images that the child has previously evaluated, blended with stock facial images (from pre-reviewed sources). The goals of this activity are (a) to review images that the child did not evaluate correctly and have the caregiver correct them, and (b) to reinforce and remind the child of their correct choices to improve retention. The child then attempts to correctly match or label the emotional expressions shown in the images. The goal of this emotion guessing game is to reinforce, in a different 2D environment, what was learned during the augmented reality unstructured play sessions. It also provides additional opportunities for social interaction between caregiver and child as they review and discuss the emotions together.

Various games for reinforcing learning are provided for the patient to select from. Examples of these games are described below, followed by a brief code sketch of one such activity:

(A) The game shows three images collected by the patient (possibly mixed with stock images), classified as showing three different emotions: happiness, sadness, and anger. The game provides visual and auditory cues asking the patient to select the image showing the "happy" emotion. The patient selects an image and receives feedback based on whether the selection is correct. The patient continues through several of these activities using the various images that have been collected.

(B) The game shows a single image collected by the patient (or a stock image) and prompts the patient to determine the emotion displayed in the image. Multiple emotion choices may be displayed to the patient. The emotions may be selectable, or the patient may be able to drag an emotion onto the image or vice versa.

(C) A mix-and-match emotion recognition activity. In this case, a column of three collected (or stock) images is displayed on the left side of the graphical user interface and a column of three emotions is displayed on the right side. The interface allows the user to select an image and then select the corresponding emotion to "match" them together. Once all images and emotions have been matched, feedback is provided to the patient based on performance. Alternatively, the two columns of images and emotions are displayed and the patient can drag and drop to align each image with the corresponding emotion in the same row in order to "match" them together.

(D) A dynamic emotion sorting game. Two or more buckets are provided at the bottom of the screen, each labeled with an emotion, and the various collected images float on the screen. The patient is directed to drag each image into the appropriate bucket. Once all images are sorted into buckets, feedback is provided to the patient based on performance.
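
As noted before the list, a minimal sketch of one such activity follows; it scores a round of the sorting game in (D), assuming each image carries a known emotion label and the child's bucket choices are recorded as simple pairs. The data shapes are assumptions for illustration, not the device's actual schema.

```python
# Minimal sketch of scoring one round of the sorting game: each collected image
# has a known emotion label, the child drags it into a labelled bucket, and
# feedback is computed once every image has been placed.

from typing import Dict, List, Tuple

def score_sorting_round(placements: List[Tuple[str, str]]) -> Dict[str, float]:
    """placements: list of (true_emotion_of_image, bucket_child_chose)."""
    correct = sum(1 for truth, chosen in placements if truth == chosen)
    total = len(placements)
    return {
        "correct": correct,
        "total": total,
        "accuracy": correct / total if total else 0.0,
    }

# Example: three images sorted, one mistake.
round_result = score_sorting_round([("happy", "happy"),
                                    ("sad", "sad"),
                                    ("angry", "happy")])
# -> {'correct': 2, 'total': 3, 'accuracy': 0.666...}
```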

The emotion recognition games and activities described herein may be provided for various emotion recognition and learning purposes, not only for reinforcing learning with collected images to which the user has already been exposed. The patient's performance during an activity may be tracked or monitored where possible. When the patient completes one activity in a series, the next activity provided may be biased toward selecting images showing emotions that the patient has recognized relatively poorly.
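
One way such biasing could be implemented is sketched below: per-emotion accuracy is tracked across completed activities, and the next round samples emotions in proportion to their error rates. The weighting scheme (error rate plus a small floor) is an illustrative assumption.

```python
# Sketch of adaptive activity selection: track per-emotion accuracy and weight
# the next round's image selection toward emotions the child recognizes least
# reliably.

import random
from collections import defaultdict
from typing import Dict, List

class PerformanceTracker:
    def __init__(self):
        self.attempts: Dict[str, int] = defaultdict(int)
        self.correct: Dict[str, int] = defaultdict(int)

    def record(self, emotion: str, was_correct: bool) -> None:
        self.attempts[emotion] += 1
        self.correct[emotion] += int(was_correct)

    def error_rate(self, emotion: str) -> float:
        n = self.attempts[emotion]
        return 1.0 if n == 0 else 1.0 - self.correct[emotion] / n

def choose_next_emotions(tracker: PerformanceTracker,
                         emotions: List[str], k: int = 3) -> List[str]:
    # Higher error rate -> higher chance of appearing in the next round.
    weights = [tracker.error_rate(e) + 0.1 for e in emotions]  # floor keeps all eligible
    return random.choices(emotions, weights=weights, k=k)
```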

The patient then switches to the emotion elicitation activity. These activities are designed to provide stimuli that are expected to evoke emotion. The emotional stimuli are selected from images, image sequences, videos, sounds, or any combination thereof. Examples of emotional stimuli include audiovisual content designed to elicit fear (spiders, monsters) and pleasure or joy (children's songs or performances). The emotional response elicited in the patient can be determined using the device's inward-facing camera. For example, the camera may capture one or more images of the patient's face while an emotional stimulus is being presented, and the images are then evaluated to detect any emotional response. Over time, these responses can be monitored to track any change in the patient's reaction to emotional stimuli.
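
A sketch of a single emotion-elicitation trial is shown below: a stimulus is played, inward-facing camera frames are captured during playback, the child's expression is classified, and the trial is logged so responses can be compared over time. The `play_stimulus`, `capture_frames`, and `classify_emotion` hooks, and the 5-second capture window, are hypothetical.

```python
# Sketch of one emotion-elicitation trial: present a stimulus expected to evoke
# an emotion, classify the child's expression from the inward-facing camera,
# and log the trial for longitudinal tracking.

from dataclasses import dataclass
from time import time
from typing import List, Optional

@dataclass
class ElicitationTrial:
    stimulus_id: str
    expected_emotion: str        # e.g. "fear" for a spider video
    observed_emotion: Optional[str]
    timestamp: float

def run_elicitation_trial(stimulus_id: str, expected_emotion: str,
                          play_stimulus, capture_frames, classify_emotion,
                          log: List[ElicitationTrial]) -> ElicitationTrial:
    play_stimulus(stimulus_id)                       # show video / play sound
    frames = capture_frames(duration_s=5.0)          # inward-facing camera
    observed = None
    for frame in frames:
        label = classify_emotion(frame)              # None if no face found
        if label is not None:
            observed = label                         # keep the latest detection
    trial = ElicitationTrial(stimulus_id, expected_emotion, observed, time())
    log.append(trial)
    return trial
```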

Example 7 - Digital diagnosis and digital treatment

As in any of the preceding examples, the patient is assessed using a smartphone device and determined to be ASD positive. The HCP considers this positive assessment, diagnoses the patient as having ASD, and prescribes a digital treatment application for treating the patient via the same smartphone device. The patient, or a parent or caregiver, is given access to the treatment device and registers for and logs into a personal account in the mobile application. The personal account contains the diagnostic information from the patient's assessment. This diagnostic information is used to compute the patient's location in a multidimensional space whose dimensions correspond to various aspects of ASD, such as a particular impairment, e.g., reduced social interaction. These internal diagnostic dimensions are then used to identify activities predicted to improve the patient's impaired ability to participate in social interaction.
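
The mapping from diagnostic inputs to a point in such a multidimensional space, and from that point to recommended activities, could look roughly like the sketch below. The dimension names, the linear weight matrix, and the activity table are illustrative assumptions, not the model actually used by the device.

```python
# Sketch of projecting diagnostic features into an "ASD aspect" space and
# selecting activity modes that target the weakest dimension.

import numpy as np

DIMENSIONS = ["social_reciprocity", "expressive_language", "restricted_interests"]

# Hypothetical per-dimension weights over questionnaire features (rows =
# dimensions, columns = features); in practice these would come from a trained
# model rather than hand-set values.
W = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.1, 0.6, 0.1, 0.2],
              [0.0, 0.1, 0.3, 0.6]])

ACTIVITIES_BY_DIMENSION = {
    "social_reciprocity": ["modified_unstructured_play", "emotion_guessing_game"],
    "expressive_language": ["narration_prompts"],
    "restricted_interests": ["flexible_play_prompts"],
}

def recommend_activities(feature_vector):
    """Project diagnostic features into the aspect space; target the lowest score."""
    scores = W @ np.asarray(feature_vector, dtype=float)   # higher = less impaired
    weakest = DIMENSIONS[int(np.argmin(scores))]
    return weakest, ACTIVITIES_BY_DIMENSION[weakest]

# Example: recommend_activities([0.2, 0.8, 0.5, 0.7])
```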

The identified activities are activity modes that include activities for monitoring and improving social interaction. One example of such an activity mode is a modified unstructured play mode, in which the user is prompted to respond to facial expressions or emotional cues detected in the parent or caregiver.

The patient, parent, or caregiver selects the modified unstructured play, which causes the device to activate both the inward-facing and outward-facing cameras and display a graphical user interface that dynamically performs facial recognition and emotion detection/classification of the target individual (e.g., a parent) in real time as the patient points the outward-facing camera toward the other person, while the inward-facing camera (e.g., the selfie camera) captures the patient. When the patient points the camera at a particular individual, one or more images or videos of that individual are analyzed to identify at least one emotion, and the graphical user interface displays the emotion or a representation of it (e.g., an emoticon or a word describing or corresponding to the emotion). This allows the patient to observe and learn the emotion(s) displayed by the person being observed with the camera. In some cases, the emotion displayed on the interface may be delayed to give the patient time to attempt to identify the emotion before the "answer" is given. Each positively identified emotion and its corresponding image(s) are then stored in an image library.

In addition to detecting the target individual's emotions, the device captures images or video of the patient's facial expressions and/or emotions at the same time as, or in close temporal proximity to, the analysis of the target individual. Social interactions between the patient and the target individual can be captured in this way as the combined facial expressions and/or emotions of the two people. The timestamps of the individual detected expressions or emotions are used to reconstruct a sequence of social interactions, which is then evaluated for the patient's ability to participate in social interaction. The patient's performance is monitored and associated with the personal account to maintain a continuous record. This allows continuous evaluation of the patient to generate updated diagnostic dimensions, which can be used to update the customized treatment plan.
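
A simple way to evaluate such a timestamped interaction sequence is sketched below: detections from both cameras are merged into one timeline, and a reciprocity count asks whether the patient mirrored the target individual's expression within a short window. The event format and the 3-second window are assumptions for illustration.

```python
# Sketch of pairing the two camera streams into a social-interaction timeline
# and counting how often the patient reciprocated the target's expression.

from dataclasses import dataclass
from typing import List

@dataclass
class EmotionEvent:
    source: str      # "target" (e.g. parent) or "patient"
    emotion: str     # e.g. "smile"
    t: float         # seconds since session start

def count_reciprocated(events: List[EmotionEvent], window_s: float = 3.0):
    """Count target expressions that the patient mirrored within the window."""
    events = sorted(events, key=lambda e: e.t)
    reciprocated, opportunities = 0, 0
    for i, ev in enumerate(events):
        if ev.source != "target":
            continue
        opportunities += 1
        if any(later.source == "patient" and later.emotion == ev.emotion
               and 0 <= later.t - ev.t <= window_s
               for later in events[i + 1:]):
            reciprocated += 1
    return reciprocated, opportunities

# Example: parent smiles at t=1.0 s; patient smiles back at t=2.2 s -> reciprocated.
```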

In one example, the patient points the phone at a parent, who smiles at him. The phone display shows a smiley-face emoticon in real time to help the patient identify the emotion corresponding to the parent's facial expression. In addition, the display optionally provides the patient with instructions for responding to the parent. The patient does not smile back at his parent, and the inward-facing camera captures this response in one or more images or videos. The images and/or videos, together with the social interaction timeline or sequence of timestamps, are then saved on the device (and optionally uploaded to or saved on a remote network or cloud). In this case, the parent's smile is labeled "smile" and the patient's lack of response is labeled "no response" or "no smile". This particular social interaction is therefore determined to be a failure of smile reciprocity. Social interactions may be further segmented based on whether the target individual (the parent) and the patient expressed "genuine" smiles rather than "polite" smiles. For example, the algorithms and classifiers described herein for detecting "smiles" or "emotions" may be trained to distinguish genuine and polite smiles, which can be differentiated based on visual cues corresponding to the engagement of the eye muscles in a genuine smile and the lack of such engagement in a polite smile.
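
A minimal sketch of the genuine-versus-polite smile distinction follows, using the common heuristic that genuine (Duchenne) smiles engage the muscles around the eyes while polite smiles mainly move the mouth. The feature names and the 0.5 thresholds are illustrative assumptions rather than the trained classifier the device would actually use.

```python
# Sketch of a rule-based genuine-vs-polite smile check over normalized facial
# activation features (e.g. facial action-unit style estimates).

from typing import Dict

def classify_smile(features: Dict[str, float]) -> str:
    """Assumed keys: "mouth_corner_raise" (AU12-like) and "eye_constriction"
    (AU6-like, cheek raiser / crow's-feet engagement)."""
    if features.get("mouth_corner_raise", 0.0) < 0.5:
        return "no_smile"
    if features.get("eye_constriction", 0.0) >= 0.5:
        return "genuine_smile"       # eye muscles engaged
    return "polite_smile"            # mouth only

# Example: classify_smile({"mouth_corner_raise": 0.8, "eye_constriction": 0.2})
# -> "polite_smile"
```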

While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will occur to those skilled in the art without departing from the invention herein. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
