Human action process acquisition system

Document No.: 90950 | Publication date: 2021-10-08

Note: This invention, "Human action process acquisition system", was designed and created on 2020-01-07 by H-M. Gross, A. Scheidig, T. Q. Trinh, B. Schütz, A. Vorndran, A. Bley, A. Mayfarth, R. Its main content: The invention relates to a human action process acquisition system and method. The method comprises acquiring a plurality of images of a person during a movement by means of a non-contact sensor, the plurality of images showing the movement of a body part of the person, creating at least one skeletal model including positions of limbs from at least some of the plurality of images, and calculating the course of movement from the movement of the body part by comparing changes in the positions of the limbs in the created at least one skeletal model.

1. A computer-implemented method for acquiring a course of action of a person, the course of action including an action of a body part of the person, the method comprising:

- acquiring a plurality of images of the person during an action by means of a contactless sensor, the plurality of images representing the action of the body part of the person;

- creating at least one skeletal model including positions of limbs from at least some of the plurality of images; and

- calculating the course of action from the action of the body part of the person by comparing changes in the positions of the limbs in the created at least one skeletal model.

2. The method according to claim 1, further comprising comparing the calculated course of action with previously determined courses of action stored in a memory.

3. The method according to claim 1, wherein the course of action comprises a gait process.

4. The method according to claim 1, wherein assessing the course of action comprises assessing the action of the body part over at least one complete gait cycle.

5. The method according to the preceding claim, further comprising identifying at least one walking aid in the plurality of images by comparison with models of walking aids.

6. The method according to claim 4, further comprising jointly evaluating at least one walking aid and at least one foot skeleton point derived from the skeletal model.

7. The method according to claim 5, wherein the evaluating comprises determining a difference between at least one foot skeleton point and at least one ground-side end point of the walking aid.

8. The method according to claim 6, wherein the difference is determined in the sagittal plane.

9. The method according to any one of claims 5 to 7, wherein the evaluation is performed at a touchdown time point.

10. The method according to any one of claims 2 to 8, further comprising issuing a message when the acquired action of the course of action deviates from a predetermined action of the course of action.

11. The method according to claim 9, wherein the number of messages issued is related to the number and type of the acquired action deviations.

12. Apparatus for performing the method of any one of claims 1 to 11.

13. A system for acquiring a course of action of a person, where the course of action includes movement of a body part of the person, the system comprising:

-at least one sensor for non-contact acquisition of a plurality of images of a person during an action, the plurality of images representing the action of a body part of the person;

- an evaluation unit for creating a skeletal model including positions of limbs from at least some of the plurality of images and for evaluating the actions of the body part by comparing the created skeletal models.

14. The system according to claim 13, wherein the course of action comprises a gait process.

15. The system according to claim 13, wherein the evaluation unit is provided with a memory containing values for the position of the body part during a predetermined action and, in operation, compares the predetermined values with the action of the body part.

16. The system according to any one of claims 13 to 15, wherein the evaluation unit, in operation, evaluates the body part positions reached with the aid of a walking aid over at least one gait cycle.

17. The system according to any one of claims 13 to 16, wherein the evaluation unit, in operation, evaluates the symmetry of the body part actions.

18. The system according to any one of claims 13 to 17, further comprising an output unit for outputting a message when there is a deviation between the determined action of the body part and the predetermined course of action.

19. The system according to any one of claims 13 to 18, further comprising a segmentation unit for identifying objects in the plurality of images.

20. The system according to any one of claims 13 to 19, wherein the sensor is at least one of a 2D camera, a depth camera, an ultrasonic sensor, a radar sensor or a LIDAR sensor.

Technical Field

The invention relates to a system for acquiring the course of action of a person.

Background

Healthcare systems suffer from a serious shortage of professionals. The result is often inadequate treatment and care, with serious consequences for healthcare costs and for economic productivity. For the patient this can mean that pain persists for a long time or that complications arise, for example through poor posture during rehabilitation measures, unless the patient is adequately instructed in this respect. These factors increase the need to document the patient's condition, also because clinical claims may be asserted for damage attributable to inadequate treatment. In some cases these effects are even more pronounced.

The system described in this document, for example in the form of a service robot, can address this problem by taking over primary monitoring of rehabilitation measures, in particular posture and gait correction during movement exercises. Such a service robot can additionally record completed exercises precisely, so that the relevant medical institution can fulfill its documentation obligations without having to assign separate staff for this purpose. A further effect is that the use of the system allows a standardized assessment of the success of treatment: today, the judgment of whether an exercise has been performed correctly depends on the treating physician, whose personal experience differs from that of other treating physicians. Different treating physicians may therefore arrive at different assessments of the same or similar exercises, whereas a uniform assessment can be obtained using the system or the service robot.

In addition, a service robot used for gait and stair-climbing training can considerably reduce the burden on treating staff: in neurological, geriatric and internal-medicine facilities, the service robot can accompany patients with a weak sense of direction and train their course of action; frequent walking training noticeably improves the patients' everyday mobility. In orthopedic facilities, gait training with or without lower arm walking supports (UAGS, forearm crutches), shoulder supports or other aids, and bench training of the lower extremities or the spine after surgery, are a significant factor in the therapists' time budget. Using a mobile service robot for patients suited to gait and bench training can reduce the time spent in this regard.

The service robot also helps to secure the outcome of an operation and to avoid incorrect movement patterns in the course of action. After surgery, therapeutic training attempts to unlearn the patient's faulty preoperative movement patterns (e.g., protective postures and/or avoidance movements) caused by pain or movement disorders, such as an incorrect gait process. The more frequently the correct course of action is trained (repetitive training), the greater the success. If the treating physician has little time for therapeutic training during rehabilitation, the service robot is a good alternative for identifying and correcting errors in the course of action in a timely manner.

Prior art

The person skilled in the art is familiar with various systems, including service robots in the medical or geriatric field. CN108422427 describes a rehabilitation robot which can deliver tablets to a patient for administration. Similarly, the service robot described in CN206833244 can distribute material in a hospital. In the hospital field, Chinese patent applications CN107518989 and CN101862245 relate to service robots that can transport patients, similar to a wheelchair. CN205950753 describes a service robot which identifies a patient by means of a sensing mechanism and provides navigation for the patient in a hospital. CN203338133 describes a service robot supporting nursing staff, which can accompany a patient in a hospital to handle daily matters. In contrast, CN203527474 relates to a service robot whose robot arm provides support for a person.

CN108073104 relates to a care robot which can care for infectious patients, for example by administering drugs to them, massaging them, serving meals and communicating with them. The care robot reduces the risk of infection for medical staff because it reduces the number of times staff come into contact with the patient. CN107598943 describes a service robot for accompanying the elderly. It has several monitoring functions, but its main function is floor cleaning.

CN106671105 relates to a mobile service robot for the care of the elderly. It monitors body parameters such as body temperature through a sensing mechanism and can also monitor facial expressions. It can further detect whether a person has fallen and trigger a corresponding alarm via the network.

A similar state of the art is found in CN104889994 and CN204772554, in which a service robot in the medical field can measure the heart rate, provide oxygen to the patient and has a speech recognition and entertainment/network multimedia module. The product described in CN105082149 may also be used for measuring blood oxygenation. CN105078445 relates to a service robot for recording an electrocardiogram of elderly people and measuring the oxygen content of the blood. Similarly, CN105078450 has an electroencephalogram measurement function.

Some health robots are dedicated to exercising or evaluating patients. CN108053889 briefly introduces a system for exercising patients based on stored information. CN108039193 describes a system for automatically generating health reports in a service robot. CN107544266 describes the acquisition of motion/fitness exercises by means of a service robot, recording and saving the data and transmitting them to an external system for evaluation. In addition, the service robot can monitor medication intake through different sensors.

CN106709254 describes a medical diagnosis service robot which also creates treatment plans on the basis of the diagnosis. To this end, the service robot evaluates voice and image information and compares it with information stored in a memory. A neural network is also used.

CN106407715 describes a service robot which collects patient medical histories through speech processing and image recognition. When a medical history is collected, in addition to queries via the voice input and output devices, input via the touch panel and a tongue photograph taken by the service robot's camera may also be used.

CN105078449 describes a service robot equipped with a tablet computer and a communication unit, by which cognitive function training or cognitive-psychological assessment can be performed to determine whether a patient suffers from Alzheimer's disease. The tablet computer records, according to a particular procedure, telephone conversations between the patient and his or her children and determines from the conversation record whether the patient has Alzheimer's disease.

Jaeschke et al. verified in 2018 whether gait assessment on a service robot using a Microsoft Kinect can deliver results comparable to established stationary gait assessment systems (the gold standard), which determine the position of joints and limbs, i.e. whether the parameters relevant for assessing gait training can be acquired in this way. Relevant parameters include cadence (steps per minute), walking speed, stride length, single- and double-support time, as well as extension and flexion of the ankle, knee and hip, pelvic sway, and forward or backward lean of the torso. The statistical comparison of the two systems showed that the relevant parameters can be acquired with the Kinect for clinical application, with no significant difference from the (stationary) gold standard.

Trinh et al. (2018) describe how a service robot recognizes a seated person and how it interacts with that person. In addition, the identification of a person using a walking aid by means of a 2D laser scanner is presented.

Vorndran et al. (2018) describe how a user performing gait training is tracked by a service robot traveling in front of the user. A camera with controllable orientation may also be used, which allows better tracking of the user. The person is tracked by LIDAR and RGB cameras. In addition, the service robot detects (future) users using both the laser scanner and an RGB-D camera (3D camera). The prediction of user behavior is used both to control the service robot and to aim the RGB camera via a PID controller.

Disclosure of Invention

The invention comprises a method and a system with therapy-supporting properties, for example a service robot that supports the treatment of a patient, in particular the training of a course of action. The service robot has a sensor device that captures the actions of body parts during the patient's movement and compares the captured actions with actions stored in the service robot's memory unit or in the cloud. Based on any deviations in the action, the service robot gives the patient advice on improving his course of action. The completed exercises and the data saved in the process can then be evaluated by the treating physician.

In one aspect, this document describes a computer-implemented method for acquiring a course of action of a person, where the course of action includes actions of a body part of the person. The method comprises acquiring a plurality of images of the person during an action (or during gait) by means of a non-contact sensor, the plurality of images showing the action of a body part of the person, creating at least one skeletal model including positions of limbs from at least some of the plurality of images, and calculating the course of action from the action of the body part by comparing changes in the positions of the limbs in the created at least one skeletal model. As described above, this method may also include comparing the calculated course of action with previously determined courses of action stored in a memory.

Assessing the course of action includes assessing the action of the body part over at least one complete gait cycle, so that the system obtains a complete picture of the course of action.

In another feature, the method also includes identifying at least one walking aid in the plurality of images by comparison with models of walking aids. The method may include jointly evaluating the at least one walking aid and at least one foot skeleton point derived from the skeletal model, including determining a difference between the at least one foot skeleton point and a ground-side end point of the at least one walking aid. The difference is determined, for example, in the sagittal plane and evaluated, for example, at the touchdown time.
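
To make this concrete, the following minimal sketch (not part of the patent text) computes such a sagittal-plane difference from 3D coordinates; the coordinate convention (y up, z as walking direction) and all numeric values are illustrative assumptions:

```python
import numpy as np

def crutch_foot_offset_sagittal(foot_point, crutch_tip, walking_dir):
    """Signed sagittal-plane distance between a foot skeleton point and the
    ground-side end point of a walking aid, e.g. evaluated at touchdown.

    walking_dir: unit vector of the walking direction in the ground plane;
    projecting the offset onto it yields the difference in the sagittal plane.
    """
    offset = crutch_tip - foot_point
    return float(np.dot(offset[[0, 2]], walking_dir[[0, 2]]))

# Hypothetical values (meters): the crutch tip is 0.3 m ahead of the foot point.
foot = np.array([0.00, 0.05, 1.0])
tip = np.array([0.02, 0.01, 1.3])
print(crutch_foot_offset_sagittal(foot, tip, np.array([0.0, 0.0, 1.0])))  # ~0.3
```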

The method may additionally include issuing a message notifying the patient of an erroneous action if the acquired course of action deviates from a predetermined course of action. The number of messages issued is related to the number and type of the acquired action deviations.

The human action process acquisition system comprises at least one sensor for contactlessly acquiring a plurality of images of a person during a course of action (or during a gait process), the plurality of images showing the action of a body part of the person, and an evaluation unit for creating a skeletal model including positions of limbs from at least some of the plurality of images and for evaluating the action of the body part by comparing the created skeletal models.

The evaluation unit can be equipped with a memory containing predetermined values for the position of the body part during a predetermined action and, in operation, compares these predetermined values with the action of the body part.

The evaluation unit evaluates, during operation, the symmetry of the body part position and/or the body part movement achieved with the aid of the walking aid in at least one gait cycle.

The system also has an output unit for outputting a message when a deviation between the movement of the body part and the predetermined course of movement is determined.

Another feature is that the system comprises a segmentation unit for identifying objects, such as walking aids or other objects, in the plurality of images.

Drawings

The invention will now be described in detail with the aid of the accompanying drawings. The figures show:

FIG. 1: Exemplary system architecture

FIG. 2: Top view of the castors of the service robot

FIG. 3: service robot management system

FIG. 4: exemplary exercise program

FIG. 5: GUI exercise plan configuration

FIG. 6: data exchange between patient management module, navigation module, permission card and service robot

FIG. 7: exercise process

FIG. 8: 3D data acquisition and evaluation

FIG. 9: decision matrix diagram based on course of action, course of action grading device for outputting instructions or feedback to patient

FIG. 10: self-learning method for adjusting a training plan

FIG. 11: patient action data evaluation matrix map for treating doctor

FIG. 12: method for improving course of motion correction and course of motion evaluation based on marked video order

FIG. 13: automatically improving course of action correction and course of action assessment

FIG. 14: torso of trunk flexion sequence (lean of trunk), hip flexion and knee flexion in right hip prosthesis (TEP) patients.

FIG. 15: support use conditions in the whole period

FIG. 16: standing duration sequence for right hip prosthesis patients

FIG. 17: error level histogram: symmetry of gait process

FIG. 18: interruption of exercise

FIG. 19: Angles of limbs and upper body

FIG. 20: Skeletal model

FIG. 21: Supplementary determination of foot skeleton points

FIG. 22: Three-point gait

FIG. 23: Two-point gait

Detailed Description

The service robot 17, as shown in fig. 3, may be designed in different software and hardware configurations, including different components and/or modules. The service robot 17 is an example of a system for acquiring the course of action of, for example, a patient, without the invention being limited thereto.

An exemplary system architecture is illustrated in fig. 1. As described in examples elsewhere in this document, there may be alternative features where various components and/or modules are supplemented and/or eliminated. In principle, the service robot 17 has at least one processor (in the PC or ARM architecture) and at least one memory connected to the processor.

The system architecture of fig. 1 is characterized by four layers: three software layers (an application layer 2010, a state layer 2020, and a service robot skills layer 2030) and a hardware layer 2080. Layers 2010, 2020 and 2030 in particular comprise individual modules, which for reasons of clarity are not all explicitly shown in fig. 1, nor necessarily mentioned explicitly at every point below. Layer 2030 reflects the service robot's skills, which in turn form the basis of the state layer 2020, which reflects the states of the service robot 17, while the application layer 2010 covers the applications. The application layer 2010 includes, for example, a gait training application in the motion training module 2011, in which the operating instructions for the patient are stored. However, the training module may also contain other training instructions that need not be associated with movement, such as memory training instructions. To this end, instructions such as exercise plans or voice and/or screen outputs are additionally provided to implement the exercise plan 2012, to evaluate the exercises 2013 of the exercise plan, and finally (optionally) patient data 2014 such as age, co-morbidities, the patient's room number, etc.

The four layers build on one another. Movement training, for example, requires specific robot skills, which in turn presuppose specific hardware components.

The state layer 2020 includes a user guidance module 2021 and a course of action correction module 2022. There is also a module 2023 that reflects how the service robot 17 approaches the patient, i.e. how the service robot 17 makes contact with the patient. Further states reflected on the state layer 2020 are moving to the destination 2024 and taking up the waiting position 2025.

On the service robot skills layer 2030 there are a number of modules with various important sub-modules. First, a person recognition module 2040 for detecting persons is provided. The person recognition module 2040 includes an identification module 2041 for identifying the identity of a person, a first person tracking module 2042 for visual person tracking via a 2D camera, and a second, LIDAR-based person tracking module 2043. There are also a re-identification module 2044 for re-identifying a person (patient) who has left the tracking area, a seat recognition module 2045 for recognizing a person (patient) sitting on a chair, and a skeleton recognition module 2046 for 3D skeleton recognition. The latter may use a 2D or 3D camera.

Another module on the service robot skills layer 2030 is the action assessment module 2050, which includes a course of action acquisition module 2051 for acquiring course-of-action features and a course of action assessment module 2052 for acquiring and assessing the patient's courses of action.

The navigation module 2060 includes a sub-module 2061 for 2D/3D acquisition, a mapping module 2061a for mapping the surroundings, and a map module 2061b holding a map of the environment in which the service robot 17 moves. In addition, the navigation module 2060 has a self-localization sub-module 2062, e.g. within the mapped environment. The navigation module 2060 also has a sub-module 2063 whose function is to keep the tracked person within the service robot's field of view at all times. The path planning module 2064 for metric path planning ensures that the service robot 17 can efficiently calculate the route it has to cover. The motion planning module 2065 employs an evolutionary algorithm; one feature is that the metric path planning results from the path planning module 2064 are additionally used to calculate an optimal route for the service robot 17 while taking various objective functions into account, including the predicted path of the patient. The user approach sub-module 2066 stores rules for how the service robot 17 addresses a person, such as a patient, during navigation. Sub-module 2067 ensures that a distance from the user (e.g., patient, treating physician, caregiver, or other person) is maintained that reflects both safety requirements and the personal, culturally influenced interpersonal distance the service robot 17 should keep when interacting with a person. To prevent the service robot 17 from locking up during navigation, i.e. ending up in spatial positions from which the conventional control algorithms can no longer free it, the service robot 17 has a mechanism 2068 for detecting and releasing such self-blocking. The module for determining the waiting position 2069 ensures that the service robot 17 occupies a waiting position that does not disturb anyone. The power supply sub-module 2060e ensures that the service robot 17 automatically searches for a charging station when the battery is low and docks with it to charge the storage battery.

The service robot skills layer 2030 is additionally provided with a module 2070 for human-service robot interaction. Its sub-modules cover a graphical user interface (GUI) 2071, a sub-module for establishing eye contact 2072 between the patient and the service robot 17 (if the service robot 17 has a head with eyes 2094), and two further sub-modules for speech synthesis 2073 and speech recognition 2074.

Different components are located on the hardware layer 2080. It comprises an odometry module 2081, i.e. a measurement and control unit for odometry functions, which is connected to the navigation module 2060 via an interface. The pressure-sensitive bumper 2082, a few centimeters above the ground, can be used for collision detection. If a collision is detected in the navigation module 2060, the differential drive 2090 is triggered to stop immediately. Otherwise, the differential drive 2090 generally ensures that the service robot 17 continues to move. The charging port 2091 with the corresponding charging electronics recharges the integrated battery with power supplied via an external charging device. Alternative energy sources, such as fuel cells, including direct methanol fuel cells or solid oxide fuel cells, may also power the service robot 17.

The service robot 17 has a LIDAR 2083 and a panoramic camera (2D, RGB) 2084. At the top of the service robot 17 there is also an RGB-D camera (3D) 2085 with zoom function, which can be aimed for tracking 2086. Two wireless interfaces, a WLAN module 2088 and an RFID read (and, if necessary, write) device 2089 for the authorization card, allow electronic data exchange.

The service robot 17 has a touch display 2087. At least one speaker 2092 can output the speech synthesis 2073, and at least one microphone 2093 can record speech signals, e.g. for speech recognition 2074 by means of natural language processing. Finally, a head (with 6 degrees of freedom) with controllable eyes 2094 can improve human-machine communication on the emotional level. Components 2087, 2092-2094 are primarily used for human-service robot interaction.

The display 2087 may also be used within the user guidance 2021 for the following purposes:

- Identification of the patient/treating physician by password

- Dialog with the patient/treating physician, or information queries/input

- Transmitting instructions to the patient, e.g. as feedback on the completion of an exercise and/or as a request to follow the service robot 17

- Demonstrating the exercises that the patient should perform; for this purpose, a stored visualization may be used, but a video of the demonstrated exercise may also be used

- Playing back recordings of completed exercises

- Displaying assessments of the exercises, e.g. for use by the treating physician.

One feature of the service robot 17 is the provision of light elements that can give the patient instructions or signal, for example, the direction in which the patient should turn during a particular gait exercise. Such light elements, for example LED lamps, are located e.g. in the upper area of the service robot 17.

One feature of the service robot 17 is that it is equipped with two drive wheels 6, which are arranged parallel and centered with respect to each other (see fig. 2). Two or more support wheels 5 are arranged around them, for example on a circular arc. This arrangement of the support wheels 5 allows the service robot 17 to turn on the spot by driving the drive wheels 6 in opposite directions. The horizontal axle of each support wheel 5 is mounted so that it can rotate 360 degrees about the vertical axis. When only two support wheels 5 are used, the distance between the drive wheels 6 is made greater than the distance shown in fig. 2, so that the service robot 17 cannot easily tip over.

Fig. 3 shows that the service robot 17 is connected to the cloud 18 through a wireless network connection, such as the WLAN module 2088. The cloud 18 may be either a public cloud or a private cloud ("on premise"). Via the terminal 13, the treating physician may access the patient management module processing unit 161 located in the cloud 18, which in turn is connected to the patient management module memory 162. The patient management module processing unit 161 and the patient management module memory 162 are collectively referred to as the patient management module 160. The treating physician may store patient data in the patient management module memory 162; alternatively, an interface may import patient data from at least one other patient data management system 170, which has a patient data management system processing unit 171 and a patient data management system memory 172. Such other systems include hospital management systems, hospital information systems (KIS) and/or patient data management systems commonly used in clinics or rehabilitation facilities. In addition, the treating physician may assign an exercise plan to the patient in the patient management module 160, modify the exercise plan over time, and view assessments of the exercises the patient has performed with the service robot 17 under the exercise plan; the service robot 17 transmits the results of these assessments to the patient management module memory 162 via the interface. Since the patient management module 160 knows the assessments received from the service robot 17, it can record the patient's treatment progress and transmit it to an external patient data management system 170, such as a hospital management system. In addition, the cloud 18 has a navigation system 180 containing navigation information, with a navigation system processing module 181 and a navigation system memory 182. This navigation information is connected via an interface to the navigation module 2060 of the service robot 17, in particular to a space plan module 2060r. In the space plan module 2060r, room numbers are assigned to the spatial coordinates within which the service robot 17 moves and which are mapped by the service robot 17; these room numbers are stored in the navigation system 180 and/or the space plan module 2060r.

The cloud 18 is connected to a rule set 150, which has a rule set processing unit 151 and a rule set memory 152. It primarily stores algorithms that may be used in the application layer 2010, the state layer 2020 and the service robot skills layer 2030 of the service robot 17 in fig. 1, as well as algorithms used in the patient management module 160. Examples include the algorithms used for evaluating the course of action in the course of action evaluation module 2052. This also means that, according to one feature, certain modules of fig. 1 can be stored only in the cloud 18, provided the service robot 17 has an online connection to the cloud 18, in particular when they are needed for navigation. Other algorithms in the rule set 150 may, for example, propose exercise plan adjustments to the treating physician.

Additionally, there is a learning module 190 in the cloud 18, which includes at least one learning module processing unit 191 and at least one learning module memory 192. Here, historical data are stored which were recorded by the service robot 17, generated by the treating physician when creating exercise plans in the patient management module 160, and/or transmitted to the patient management module 160 from an external patient data management system 170, and/or which come directly from the service robot 17. As far as these historical data relate to a patient, they are stored in anonymized form. They may be accessed and labeled via the terminal 12 (see fig. 3). These historical data are used to refine the algorithms in the rule set 150, as described in detail below.

The rule set 150 is configured to update the algorithms installed locally on the service robot 17, for example via a wireless interface such as the WLAN module 2088, by transferring them from the rule set memory 152. Likewise, the algorithms in the patient management module memory 162 may be updated via the cloud 18.

The illustrated example service robot 17 itself has a computer 9 and a memory 10, at least one sensor 3, at least one support wheel 5 and at least one drive wheel 6, and a power source 8.

Sensing mechanism

The service robot 17 may be equipped with alternative and/or complementary sensing mechanisms, including ultrasonic sensors and/or radar sensors in place of the pressure-sensitive bumper 2082. In addition, the service robot 17 may be equipped with one or more magnetic sensors arranged so that they can detect magnets in the floor, in order to spatially limit the operating radius, for example to ensure safety near stairs. Alternatively and/or additionally, the service robot 17 may be equipped with infrared, ultrasonic and/or radar sensors directed at the ground; one feature is that these sensors are configured to detect steps. The information from these infrared, ultrasonic and/or radar sensors may be taken into account in the created map, for example in the mapping module 2061.

At least one 3D camera (designed as an RGB-D or pure depth camera) can be used not only for the functions of the person recognition module 2040 but also for three-dimensional mapping in the mapping module 2061. At least one 2D RGB camera can be used for person tracking 2042 by means of a corresponding framework, such as OpenPose (Cao et al., 2017) or, for the Kinect and the Astra, NUITrack. In one feature, the 3D camera can thereby be replaced.

For the 3D camera, either time-of-flight technology, as in the Microsoft Kinect, or a structured-light sensor, as in the Orbbec Astra, may be used. For both technologies, corresponding evaluation algorithms exist in the current state of the art. Depending on the particular feature, one or more 3D cameras may also be used in place of the LIDAR 2083.

Person detection in the person recognition module 2040 is based on at least one optical sensor, such as the LIDAR 2083, the 2D camera 2084 and/or a 3D camera (designed as RGB-D camera 2085 or pure depth camera). Especially when a 2D LIDAR is used, the LIDAR 2083 is not used alone but in combination with at least one 2D camera 2084 and/or 3D camera. The distance of the patient from the service robot 17 is then determined by the LIDAR 2083, while the patient's movement and posture are acquired by the 2D camera 2084 or by the 3D camera; the latter can likewise provide data for determining the distance of the patient from the service robot 17. Alternatively, a 2D RGB camera and a separate 3D depth camera may be used, which requires additional effort in signal processing, in particular synchronization, compared to a (3D) RGB-D camera. The concept of "posture" here covers the position of the person in space, including his limbs/limb parts (body parts), as well as the position of the service robot 17 in space.

Prescribing an exercise plan

The starting point of therapeutic gait training is an exercise plan whose aim is to improve the patient's physical abilities over a certain period of time and thereby gradually improve his course of action. The service robot 17 is configured to assist therapeutic gait training by means of its sensing mechanism. For example, after a total hip replacement (hip TEP), attention must be paid on the one hand to relieving the operated area as prescribed by means of forearm crutches (UAGS), and on the other hand to restoring a movement that is as normal, i.e. physiological, as possible. The physiological course of action is well defined; detailed descriptions are provided, for example, in section 2 of "Understanding walking" (Thieme, 2016) and in section 6 of "Janda, Manual muscle function diagnosis" by Smolenski (Elsevier, 2016). For pain reasons, the patient "trains himself into" a deviating gait (i.e. course of action) before the operation; unlearning it again after the operation helps the patient achieve the best possible surgical and healing outcome. This trained gait is referred to below as the "deviating gait process" or "deviating course of action"; the corresponding characteristics of the course of action are influenced, for example, by pain, so that a step length differing from the physiological course of action is referred to as a deviating step length. Unlearning is achieved, for example, by targeted gait training (initially also with UAGS/walking aids), in which the interplay of numerous other limb parts or body parts is of primary interest. As mentioned above, this concerns the posture of the head, but also the inclination of the torso, the positions of the upper and lower arms, the hips and the thighs/feet.

Two gait patterns are used with UAGS during rehabilitation: the three-point gait and the two-point gait. In the three-point gait, both UAGS are moved forward simultaneously with the operated/affected leg, relieving the operated/affected leg. The foot of the operated leg then touches down. After that, the healthy leg is moved forward. This assisted walking (three-point gait) gives the greatest relief of weight on the operated leg, since the UAGS take over part of the support. The two-point gait, in contrast, is a way of accelerating the treatment process: it achieves a partial relief of one or both legs. Here, a crutch and the opposite leg are moved forward simultaneously or nearly simultaneously, then the other crutch and its opposite leg follow (right forearm crutch with left leg, left forearm crutch with right leg; this corresponds to a reciprocal course of action). Once the patient no longer needs the relief, the UAGS can be dispensed with after consultation with the doctor and the treating physician. With a smooth course of action, the transition from three-point to two-point gait is characterized by an increased stance duration, swing phase, step length and/or degree of synchronization between the two legs, and an increased walking speed. The criterion for transitioning to the two-point gait or for dispensing with the UAGS is fulfilled, for example, when the service robot 17 only has to issue few corrections to the patient (a threshold value is defined for this) and the walking speed determined by the service robot 17 also exceeds a threshold value that depends on various influencing parameters, such as the walking speed determined at the first training session, co-morbidities, etc.

This transition may also be made if the previously determined exercise parameters, and the exercise progress derived from them, are highly consistent with the interventions in the exercise plan made by the treating physician, and the patient's two-point gait is released. The exercise progress is determined from the differences in the parameters of the course of action, such as speed, stance duration, swing phase, step length, etc., and the symmetry values of the course of action. The service robot 17 can either itself switch the exercise plan from the three-point gait to the two-point gait, or it can make a suggestion to the treating physician, as illustrated in fig. 10 (described in detail below), by observing various other parameters.
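
As a minimal illustration (not from the patent text) of such a release criterion, the following sketch combines a correction count with a speed threshold; all threshold values and the scaling of the baseline speed are invented placeholders:

```python
def release_two_point_gait(corrections_per_session: int,
                           walking_speed: float,
                           baseline_speed: float,
                           correction_limit: int = 3,
                           speed_factor: float = 1.5) -> bool:
    """Return True if the two-point gait may be released.

    Sketch of the criterion described above: few corrections issued by the
    robot, and a walking speed above a threshold derived from influencing
    parameters (here simply a factor on the speed measured at the first
    training session).
    """
    return (corrections_per_session <= correction_limit
            and walking_speed >= speed_factor * baseline_speed)

# Example: 2 corrections in the session, 0.9 m/s now vs. 0.5 m/s initially.
print(release_two_point_gait(2, walking_speed=0.9, baseline_speed=0.5))  # True
```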

Fig. 4 illustrates such an exemplary exercise plan as completed by the patient. The rows represent the individual days, and the third column shows which exercises the patient has performed using the service robot 17. In this example, the three-point gait was exercised for 5-10 minutes on the first day, with the patient walking approximately 20-200 m. The second column records the tasks of the physiotherapist. The next day includes 20 minutes of physiotherapy and an evaluation of the exercise data of the service robot 17. The tasks and procedures for the other days can be read from the table accordingly.

Such exercise plans are created in the patient management module 160 outside the service robot 17. To do so, the treating physician may access a graphical user interface (GUI) (fig. 5) to configure an exercise plan for each day the patient spends in the clinic or rehabilitation facility. These exercise plans additionally include the patient ID as well as the date of surgery, the operated side (for surgery on the knee and/or hip), the release of particular walking patterns (such as two-point and/or three-point gait from a particular date), the release of stair training, and the extent of the exercises (frequency, duration and route length per day or exercise). Exercise plans specific to the clinic/institution, meeting the requirements of the respective surgeons, can be stored in the service robot 17. The aim is, for example, to relearn the correct course of action with UAGS after surgery on the hip/knee/ankle joint, according to the respective permitted load and course of treatment, and to learn the corresponding course of action of the joint. These exercise plans conform to the prescribed/agreed views of the surgeons, but may be actively modified or adjusted by the treating physician in the manner described below. If the assessment module shows the treating physician that there are too many deviations in the use of the UAGS (perhaps because the patient has difficulty with the two-point gait that the exercise plan requires to be learned from a certain date), the treating physician may "reset" the walking pattern to the three-point gait in the exercise plan and have the patient learn the two-point gait at a later point in time. In addition, the treating physician can see in the GUI which adjustments should be made to the exercise plan based on the evaluation of past data, as detailed elsewhere, i.e. dynamically based on the training results of past days, for example on the third day described below. The exercise plan may also be corrected automatically, as described elsewhere in this document.
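
The fields listed above can be pictured as a simple data record; the following sketch is illustrative only, with invented names and types rather than the patent's actual data format:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ExercisePlan:
    """Exercise plan record as described above (illustrative field names)."""
    patient_id: str
    surgery_date: date
    operated_side: str                         # "left" or "right" (knee/hip)
    released_gaits: List[str] = field(default_factory=list)  # e.g. ["three-point"]
    two_point_release_date: Optional[date] = None
    stair_training_released: bool = False
    sessions_per_day: int = 1
    minutes_per_session: int = 10
    route_length_m: int = 100

plan = ExercisePlan("p-0815", date(2020, 1, 7), "right",
                    released_gaits=["three-point"])
```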

Transmitting exercise plans to a service robot

After the exercise plan has been created or adjusted, the exercise plan or instructions related thereto are transmitted to the service robot 17. For this purpose, there are different solutions according to the specific features:

Fig. 6 illustrates the data exchange between the patient management module 160, the navigation system 180 containing spatial information (or a space plan module 2060r, which may optionally also contain such spatial information), a storage medium (such as an authorization card), and the service robot 17. The first step 605 is to create patient data for the patient in the patient management module 160. These patient data include name, ID, diagnosis (e.g., hip surgery), rehabilitation measures, patient age, etc., (optionally) the patient's room number, and the exercise plan. Patient data may also be obtained via an interface from an external system, such as a hospital information system. The spatial coordinates in the space plan module 2060r may be transmitted in step 610 via an interface to the navigation module 2060 of the service robot 17, or alternatively from a complementary module to the memory of the cloud-based navigation system 180, which in one feature of the method may be used to pick the patient up from his room. Alternatively and/or additionally, an area in which the service robot 17 meets the patient may be defined in this respect.

There are different ways of transmitting the data relating to the execution of the exercises from the patient management module 160 to the service robot 17. One is to transfer the exercise plan (and, if necessary, patient data) to an authorization card or storage medium (such as a USB stick) in step 610. This is handed to the patient. The patient transfers the data to the service robot 17 by placing the authorization card on the RFID read (and, if necessary, write) device 2089 of the service robot 17; the service robot 17 recognizes the authorization card in step 612 and reads it in step 615. If a memory such as a USB stick is used, a contact-based interface, such as a USB port, may be used instead of the contactless RFID read and/or write device 2089.

Another approach is to transmit only the patient ID to the authorization card in step 620. Alternatively and/or additionally, the ID of the authorization card in the patient management module 160 is linked to the patient and thus (at least when data are transmitted via the authorization card or, if necessary, via the memory) to the patient ID. The authorization card (or, if necessary, the memory) is recognized on the read and/or write device 2089 of the service robot 17 in step 612, and the patient ID is read from it in step 625; this is likewise done by the patient presenting the authorization card/memory. The service robot 17 then downloads the data required to perform the exercise, such as the exercise plan, from the patient management module 160 in the cloud 18 via the interface in step 660, the patient ID obtained being used to identify the relevant data record in the memory 162 of the patient management module 160. Instead of the authorization card and/or memory, a barcode containing the patient ID may also be generated in step 630 and given to the patient. If the barcode is held in front of at least one 2D camera 2084 or 3D camera, it is recognized by the service robot 17 in step 632. The barcode or patient ID is read in step 635. Based on this, the service robot 17 downloads the data required to perform the exercise, such as the exercise plan, from the patient management module 160 in the cloud 18 via the interface in step 660. The patient ID obtained in step 630 is used to identify the relevant data record in the database 162 of the patient management module 160. As an alternative to the identification methods on the service robot 17 described so far, the patient may also be given login data which are linked to the patient ID in the patient management module 160 in step 640. If the login data are entered at the service robot 17 in step 645, the service robot 17 downloads the data related to performing the exercises associated with the login data (and patient ID) from the patient management module 160 in step 660. Instead of login data, biometric features of the patient may also be used in step 650, such as an iris scan, a fingerprint or a face scan, which are associated with the patient in the patient management module 160. The patient can then identify himself accordingly on the service robot 17; if an iris scan or fingerprint is used, a corresponding reading device must be installed. For face scanning, the 3D camera of the service robot 17, for example the RGB-D camera 2085, may be configured accordingly. After steps 645 and 655, the service robot 17 downloads the data associated with the patient ID for performing the exercise.
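
The download in step 660 can be pictured as a simple lookup by patient ID; the following sketch is purely illustrative, with an invented endpoint URL and response format (the patent does not specify a protocol):

```python
import json
import urllib.request

def fetch_exercise_data(patient_id: str,
                        base_url: str = "https://cloud.example/patient-mgmt"):
    """Fetch the data record (exercise plan etc.) matching a patient ID from
    the cloud-based patient management module. URL and JSON layout invented."""
    with urllib.request.urlopen(f"{base_url}/records/{patient_id}") as resp:
        return json.load(resp)

# After the robot has read the ID from card, barcode, login or biometrics:
# record = fetch_exercise_data("p-0815")
```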

The service robot 17 then performs the exercise in step 665, records the results of the exercise in step 670, and analyzes the results against the exercise plan in step 675. These three steps 665, 670 and 675 are shown in detail elsewhere. After the exercise is completed, the data recorded by the service robot 17, mainly the evaluation results of the service robot 17 and, if necessary, a video recording of the exercise and the raw data of the skeleton recognition (described in detail below), are transmitted back to the patient management module 160. In the first case (steps 610-612-615), this can be done by transferring these data to the authorization card/memory in step 680; the patient's authorization card/memory is then handed over to the treating physician, who reads it in step 685 on the terminal of the patient management module 160. In the other described modes of operation (steps 620, 630, 640 and 650), the data of the service robot 17 may be transmitted in step 690 via the interface to the patient management module 160 in the cloud 18. Such transmission may take place at the end of the exercise, or in real time or intermittently while the exercise is being performed.

One feature of the data transmission is anonymity during the data exchange: no data that could identify the patient are transmitted to the service robot 17 (the person ID is not associated with the name), nor are data that might identify the patient stored on the service robot 17. If the service robot 17 creates video recordings of the patient, these are anonymized, as described elsewhere.

Basic exercise process

Fig. 7 shows the basic exercise process. One feature is that the service robot 17 is configured to pick the patient up at one location, accompany the patient to the exercise area where the exercise is completed, and, if necessary, accompany the patient back. These steps are illustrated in fig. 7 as dashed boxes because they are optional: the service robot 17 may also simply wait for the patient at one location.

In an optional step 405, the patient signals to the service robot 17 that an exercise is due. The patient may also choose to view the exercises in the plan (such as the date and time defined in the exercise plan) and/or a supplementary schedule. The next, likewise optional step 410 is that the service robot 17 searches for the patient; the information about the room, such as the patient's room number, comes from the navigation system 180 or the space plan module 2060r, is stored in the patient management module 160 and is transmitted to the service robot 17 together with the exercise plan, as described for fig. 6. The service robot 17 can localize and navigate using the space plan module 2060r and the map created in the mapping module 2061, for which purpose it uses its destination guidance module 2024 (the mapping procedure is described in detail in example 7). When the service robot 17 encounters the patient, the patient may identify himself to the service robot 17 in step 415, as previously described in detail for fig. 6. Alternatively, the patient may also walk to a place where the service robot 17 is waiting. Based on the identification in step 415, the service robot 17 uses its sensing mechanism (including LIDAR 2083, 2D camera 2084 and/or 3D camera (designed as RGB-D camera 2085 or pure depth camera)) to detect the patient in step 420, performing detection through the person recognition module 2040. First, the identification module 2041 performs person identification. Patient tracking is then performed in step 425, which may be accomplished by the first person tracking module 2042 and/or the second person tracking module 2043. If the person tracking is disturbed so that the service robot 17 loses sight of the patient, the identity is re-established by means of the re-identification module 2044. Example 4 shows a possible implementation in detail.

Through the dialog of step 430, the patient may transmit prompts for the training, or his own training wishes, to the service robot 17. Alternatively or additionally, the patient may also select the exercises he wishes to complete, as opposed to an exercise plan containing predefined exercises. One feature of the service robot 17 is to check, when the patient requests a particular exercise, whether this complies with the requirements of the treating physician. Since gait training usually follows a progression (e.g., the three-point gait precedes the two-point gait), the patient cannot arbitrarily skip treatment steps. However, in addition to the requirements of the treating physician, automatic training plan adjustment and/or release may also be considered, for example automatically determining the transition from the three-point to the two-point gait using historical data and/or the course-of-action corrections performed, together with other features of the gait training, such as the length of the route covered. For the patient this means that he can select exercises in the exercise plan configuration whose release is based on such automatic clearance. Alternatively and/or additionally, the automatic release may also refer to the number and/or type of corrections made during the course of action.

The service robot 17 may additionally provide information in the dialog of step 430 and/or demonstrate the exercises to be completed. Voice input and output may be provided through the graphical user interface 2071 via a screen (e.g., the touch-sensitive display 2087) and/or through the speaker 2092 with speech synthesis 2073, or the microphone 2093 with speech recognition 2074. Depending on the particular feature, the service robot 17 may maintain an optimal distance 2067 from the user. The service robot 17 then (optionally) navigates to the exercise area in step 435 with the assistance of the "move to destination" algorithm of the destination guidance module 2024.

The service robot 17 requests the patient to follow it by means of the output unit, LED lamps or the like, or indicates to the patient in which direction to move. The patient may move in front of the service robot 17 or follow it, both during navigation to the exercise area and during the exercise. Furthermore, the service robot 17 moves at a constant distance from the patient, which can improve the sensor-based acquisition of the exercises to be performed.

The service robot 17 calculates a route that the patient and the service robot 17 should or must complete. For this purpose, the metric path planning in the mapping module 2061 and the path planning module 2064 may be combined with the motion planning module 2065 and its evolutionary algorithm. Once the exercise area is reached, the exercise begins. The service robot 17 issues exercise/correction instructions based on the course of action correction module 2022 in step 440. In the further course, the exercise performance is acquired by the action assessment module 2050 (in the course of action acquisition module 2051) and assessed in the course of action assessment module 2052. If a deviation is identified (step 445, course of action evaluation) or a correction suggestion has not been properly implemented (step 450, output of course of action correction), step 440 may be triggered again. For example, prompts for the course of action, generated as speech output by speech synthesis (and/or output via the display screen), may include prompts for positioning the UAGS or correcting the upper body; they may comprise both instructions for correct execution and praise.

Steps 440-450 may be run through repeatedly during the exercise. At the end of the process, the exercise is evaluated in step 455, and the data are transferred to the patient management module in step 460, as described elsewhere (see fig. 6). An optional feature is that the service robot 17 may also accompany the patient back to a location, such as his room, in step 465. The patient is generally tracked continuously during navigation; if tracking is interrupted during this time, identification must be performed again.

Data acquisition and evaluation, particularly during exercise

The process of collecting sensor data and evaluating them is shown by way of example in fig. 8. One feature is the acquisition of the patient's movements by sensors such as the Kinect 2 or the Orbbec Astra, which are RGB-D sensors 2085. The depth image generated by the 3D depth camera from the sensor data in step 710 is converted in step 715 into data representing a 3D point cloud, in which each pixel of the 3D camera is assigned a spatial coordinate. In this way, a 3D representation of the environment information can be achieved.
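
As a minimal sketch of this back-projection (not from the patent text; the pinhole intrinsics fx, fy, cx, cy are assumed example values for a depth camera):

```python
import numpy as np

def depth_image_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Convert a depth image (meters per pixel) into an (N, 3) point cloud.

    Each pixel (u, v) with depth z is back-projected through a pinhole
    camera model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels without a valid depth reading

# Example with a synthetic 480 x 640 depth frame and made-up intrinsics:
depth = np.full((480, 640), 2.0)      # every pixel 2 m away
cloud = depth_image_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                    # (307200, 3)
```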

The next step is to evaluate this environment information at step 720 (characteristic acquisition) in order to create a skeletal model of the patient (step 725, creation of a skeletal model). Third-party software, such as NUITrack or the Kinect SDK, uses the 3D point cloud data to derive skeleton information from the acquired color and depth signals at step 725. These signals contain information about the skeleton points of the respective skeletal model, which describe, for example, the knee joint or the hip joint of the patient. FIG. 20 illustrates, by way of example, a skeletal model comprising a body 1703, skeleton points 1701, and connections 1702 between skeleton points, which (if oriented) may be output as direction vectors. If OpenPose or another architecture is used, a 2D camera, such as a common RGB camera, may be used instead of a 3D camera.

If there are several persons in the image (i.e., not only the patient but also, for example, the treating physician), each of them is represented by the software with their own skeletal model. The number of identified skeleton points depends on the one hand on whether the person is fully visible in the image and on the other hand on the software used. Skeleton points may also be identified if a person is only partially visible.

One feature is that the sensor data of the 3D camera can be evaluated such that the distance to each acquired object or object region is determined from the sensor observation, the accuracy of which depends on the resolution of the 3D camera and the distance of the object. Spatial coordinates may be assigned to the sensor data of the 3D camera from the viewing angle and the distance to the 3D camera. These spatial coordinates may then in turn be assigned to the skeleton points. With such mathematical vector operations, direction vectors between skeleton points can be defined in both direction and length (the length also corresponding to a distance), and the angles between them can be calculated.

The next step is to select joints at step 730, i.e., to continue processing only those skeleton points necessary for the calculations to be performed below. Then, an angle calculation is performed at step 735, such as the angle between the lower leg and the upper leg, or the deviation angle of the upper leg from the plumb line; for each of these, skeleton points are defined as a basis, and the positions of the limbs/torso and/or the plumb line, expressed as direction vectors, form the basis for the angle calculation (see also the description of fig. 19). Additionally, distances are determined, such as at step 760.
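
A minimal sketch of such an angle calculation between two direction vectors, assuming the skeleton points are available as 3D coordinates (the point names in the usage comment are purely illustrative):

```python
import numpy as np

def joint_angle(p_center, p_a, p_b):
    """Angle in degrees at skeleton point p_center, enclosed by the
    direction vectors towards p_a and p_b (e.g. knee flexion from the
    knee, hip and ankle skeleton points)."""
    v1 = np.asarray(p_a) - np.asarray(p_center)
    v2 = np.asarray(p_b) - np.asarray(p_center)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# e.g. knee angle alpha from hypothetical hip/knee/ankle coordinates:
# alpha = joint_angle(knee_xyz, hip_xyz, ankle_xyz)
```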

These include the so-called time-distance (spatio-temporal) parameters. Such parameters comprise, for example, step length, stance duration, stride width, and the flexion and extension of the hip and knee joints over time (depending on the subsequent treatment). They are usually determined in a two-step process; an exemplary illustration is given in example 1. Thus, for example, the step length can be determined as the Euclidean distance between the foot skeleton points 1950 in the sagittal plane at the moment both feet contact the ground (e.g., identified by the minimum height of the foot skeleton points 1950 above the ground).
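
A simplified sketch of this two-step determination of the step length, assuming foot skeleton point trajectories in which the height axis and the sagittal axis are already known; the axis convention and the ground threshold are illustrative assumptions:

```python
def step_lengths(left_foot, right_foot, ground_eps=0.02):
    """Estimate step lengths from per-frame foot skeleton point
    positions (sequences of [sagittal, height] coordinates in meters).
    Step 1 finds frames where both feet are grounded (height below the
    illustrative threshold ground_eps); step 2 measures the sagittal
    foot distance at those moments."""
    lengths = []
    for l, r in zip(left_foot, right_foot):
        if l[1] < ground_eps and r[1] < ground_eps:  # both feet grounded
            lengths.append(abs(l[0] - r[0]))         # sagittal distance
    return lengths
```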

In addition to the skeleton-based data processing, forearm crutch identification, also called UAGS identification, is used at step 740 (one feature of which is the inclusion of shoulder frames and/or other types of canes/walkers), so that gait and walking aids can also be evaluated together at a later point in time. The 3D point cloud acquired from the installed depth camera in step 710 also serves as the basis for this identification. At step 755, the UAGS can be found in the point cloud by a real-time, fault-tolerant segmentation algorithm in the segmentation unit. Knowledge about the patient's skeleton is included in order to pre-select suitable candidate regions at step 745, i.e., regions in space where the UAGS are most likely located. Since the UAGS can be assumed to be downward extensions of the arms, these regions are candidates for detecting the UAGS. The candidate regions are then checked at step 750 for consistency with model assumptions about the typical shape of a UAGS (long and narrow) and selected if appropriate. The position of the UAGS, or of its ground-side end point 1970, is in turn evaluated as a time-distance parameter within characteristic acquisition 720 during the determination of distances at step 760.
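
A rough sketch of the candidate pre-selection and the "long and narrow" shape check; the radius and extent thresholds are illustrative assumptions, and a production system would use a RANSAC-style segmentation as described later in example 1:

```python
import numpy as np

def crutch_candidate_points(cloud, hand_xyz, radius=0.35):
    """Select point cloud points near a hand skeleton point as a
    candidate region for one UAGS (cloud: Nx3 array; radius in
    meters, illustrative value)."""
    d = np.linalg.norm(cloud - np.asarray(hand_xyz), axis=1)
    return cloud[d < radius]

def looks_like_crutch(points, min_height=0.6, max_width=0.12):
    """Crude model-assumption check: the bounding box of a UAGS
    candidate should be tall and thin (y-axis assumed to point up;
    thresholds are illustrative)."""
    if len(points) == 0:
        return False
    extent = points.max(axis=0) - points.min(axis=0)  # (dx, dy, dz)
    return extent[1] > min_height and max(extent[0], extent[2]) < max_width
```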

The characteristics are then ranked (or otherwise evaluated) at step 765, which may be done according to different criteria. In this case, a previously determined, possibly patient-specific threshold may be exceeded or undershot, wherein the characteristic in question is evaluated in particular over the course of time. For example, if the patient is 1.75 m tall, the normally achievable step length is 65 cm, whereas the step length determined by the service robot 17 is 30 cm. The maximum permissible deviation for the step length ranking is 20 cm. The 30 cm step length is therefore ranked as too short, since the 65 cm standard step length minus the 30 cm step length measured by the service robot 17 exceeds the 20 cm threshold (65 - 30 = 35 > 20).
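
A minimal sketch of this threshold-based characteristic ranking, using the values from the example; the reference step length and the permissible deviation would in practice be patient-specific:

```python
def rank_step_length(measured_cm, reference_cm=65.0, max_dev_cm=20.0):
    """Rank a measured step length against a reference value
    (illustrative defaults taken from the example above)."""
    deviation = reference_cm - measured_cm
    if deviation > max_dev_cm:
        return "too short"
    if deviation < -max_dev_cm:
        return "too long"
    return "physiological"

# rank_step_length(30.0) -> "too short", since 65 - 30 = 35 > 20
```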

In addition, the symmetry of the measured values of the left and right body sides can be compared and ranked. For this purpose, temporally consecutive values of the right and left step lengths are stored. The difference between the two measured values is then determined and, if necessary, normalized to a reference variable, for example the measured value of the non-operated side. The resulting difference between the two body sides (the symmetry deviation) can then be evaluated against one or more thresholds. For the step length, a deviation of 20 cm (as an absolute value), or, for example, 20% of the physiological step length (i.e., the step length of the non-operated leg), can be used as a threshold against which the step length of the operated leg is compared.
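
One plausible normalization of the symmetry deviation is sketched below; the document does not fix the exact formula, so this is an assumption using the non-operated leg as the reference variable:

```python
def symmetry_deviation(step_operated_cm, step_healthy_cm):
    """Normalized left/right symmetry deviation (one plausible
    definition): 0.0 means perfect symmetry, negative values mean
    the operated leg steps shorter than the non-operated leg."""
    return (step_operated_cm - step_healthy_cm) / step_healthy_cm

# e.g. symmetry_deviation(30, 65) -> about -0.54; a threshold such as
# -0.17 (see example 1) would rank this as a deviation.
```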

After the characteristic ranking 765, the ranked characteristics are evaluated in combination (step 770, the action process ranking), i.e., individual characteristics such as step length or stride width are not considered in isolation; instead, combinations of characteristics present in the body posture are considered, one feature being the inclusion of the walking aid. In addition, conclusions can be drawn about compliance with a prescribed movement pattern (two-point gait, three-point gait). This takes place in the same manner as the characteristic ranking 765, in characteristic acquisition 720 or in the action process evaluation module 2052. The goal is to output action process corrections (step 450), i.e., prompts asking the patient to adjust his movements during his own course of action so as to conform to, or at least approximate, the physiological course of action. The service robot 17 thereby gives feedback to the patient (error indication, correction request, confirmation of status/progress). For this purpose, a set of rules is assigned to the action process ranking, which can be characterized as a decision matrix in which, for a specific posture (e.g., a combination of various characteristic rankings that together constitute a deviation/error in the course of action), a corresponding instruction for correcting the course of action is stored. This action process correction is shown in fig. 9 as decision ranking step 775.

To correct the three-point gait error "operated leg taken forward first, healthy leg trailing", the value identifying the step of the healthy leg must be ranked "too short" (i.e., below a certain threshold), while the distance of the UAGS from the healthy leg is ranked "not taken far enough forward" (i.e., above a certain threshold). If both apply, and only these two characteristic rankings occur in combination, the decision matrix triggers a correction as user feedback: "Please take the healthy leg forward first, then bring the operated leg forward over the connecting line between the crutches." A schematic diagram of this is shown in fig. 9 (see also the sketch below). Exercise guidance/correction is only performed when errors 1 and 3 (defined, for example, as deviations from a threshold) are identified; when errors 1, 2 and 4 occur in combination, correction notice 2 is triggered instead. Since, especially initially, a patient may show multiple deviations during the course of action, and not all errors can be corrected simultaneously, the action process corrections are prioritized by the treating physician (this will be discussed in the next section). In other words, the service robot 17 does not output exercise guidance/correction for every identified error, since a certain minimum time and minimum distance must lie between two corrections. The identified errors, i.e., the action process rankings 770 to which corrections are assigned, are nevertheless stored and made available to the treating physician, as will be described in further detail in the next section.
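
The decision matrix can be thought of as a lookup from error combinations to correction prompts. A hypothetical sketch; the error names and the second rule's prompt are illustrative placeholders, not taken from fig. 9:

```python
# Each rule maps a combination of characteristic rankings (errors)
# to one correction prompt, mirroring the decision matrix of fig. 9.
# Error identifiers and prompts are hypothetical.
RULES = [
    ({"healthy_step_too_short", "uags_not_forward_enough"},
     "Please take the healthy leg forward first, then bring the "
     "operated leg forward over the line between the crutches."),
    ({"error_1", "error_2", "error_4"},
     "Correction notice 2."),
]

def select_correction(active_errors):
    """Return the prompt of the first rule whose error combination
    matches the currently ranked deviations exactly, else None."""
    for errors, prompt in RULES:
        if errors == set(active_errors):
            return prompt
    return None
```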

The decision matrix in fig. 9 need not be deterministic, i.e., a correction output need not be stored for every detected deviation of the course of action from the physiological course of action. One feature is that dynamic output is also possible. In this case, the correction outputs may be performed in order of priority, for example. To this end, each deviation is assigned a priority score. The service robot may then perform only a certain number of outputs per time period, and only the output with the highest priority, i.e., the highest priority score, is performed. One feature here is that a defined delay period may be stored: after a deviation is acquired, the action process correction 450 with the highest priority within that delay period is output once the period has elapsed.
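
A minimal sketch of such prioritized, delayed output; the priority scores, deviation names, and window length are illustrative assumptions:

```python
import time

# Hypothetical priority scores per deviation type (higher = more urgent)
PRIORITY = {"uags_too_far_forward": 3, "step_too_short": 2,
            "upper_body_bent": 1}

class CorrectionScheduler:
    """Collects deviations during a delay window and emits only the
    highest-priority correction once the window has elapsed."""
    def __init__(self, delay_s=5.0):
        self.delay_s = delay_s
        self.pending = {}
        self.window_start = time.monotonic()

    def report(self, deviation):
        self.pending[deviation] = PRIORITY.get(deviation, 0)

    def poll(self):
        if time.monotonic() - self.window_start < self.delay_s:
            return None
        if not self.pending:
            return None
        best = max(self.pending, key=self.pending.get)
        self.pending.clear()
        self.window_start = time.monotonic()
        return best
```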

One feature is that the treating physician can influence the action process correction via settings in the patient management module, such as the decision matrix 770 for the action process ranking, in which certain postures/instructions are prioritized and others, if necessary, ignored; the action process correction is adapted accordingly. This information may be transmitted to the service robot 17 together with the exercise plan and then placed into the action training module 2011. Such settings may be learned and suggested to the treating physician, together with the exercise plan settings, through the suggestion function in the learning module 190; another feature is that they may also be applied automatically by the service robot 17, as described below.

Model for improving the advice algorithms of the patient management module, in particular regarding exercise plan and implementation advice

As described above, the treating physician can modify the patient's exercise plan after viewing the results of the action training performed by the service robot 17 with the patient, in order to improve the treatment outcome. One feature is that the system shown in FIG. 3 may suggest exercise plan adjustments to the treating physician based on past data. Thus, if the patient makes excessive errors during two-point gait, moves too slowly, loads the operated leg too heavily, etc., the rules or suggestions applied at this point can switch the walking pattern back from two-point gait to three-point gait. An alternative and/or complementary feature is that the service robot 17 even makes these exercise plan adjustments based on past data automatically. The basis for this capability of the system is the self-learning system shown in FIG. 10. The self-learning system can iteratively improve the quality of the treatment outcome, mainly in two ways: a) by collecting cases that were not previously accounted for because they occur rarely, and b) by increasing the number of cases. Both allow the node weights in the machine learning module and/or the neural network to be determined more accurately, improving the therapeutic effect of the service robot 17.

The patient management module 160 has the function of providing treatment suggestions to the treating physician 1330. In a first phase, the patient management module 160 obtains data about the patient, either from the treating physician and/or via input from an external system, such as a hospital information system (KIS)/patient data management system 170. Factors are collected that influence the treatment design, such as factors relating to the patient's general mobility or flexibility (degree of self-care, limb paralysis, need for assistive devices) 1310, comorbidities (heart failure, myocardial infarction, dizziness, diabetes, diseases with a high risk of falling such as Parkinson's disease) 1315, but primarily the operation underlying the exercises to be completed (such as a right hip TEP performed due to arthritis) 1320. One feature is that the surgical approach (direct anterior access, lateral access to the hip, etc.) 1325 can also be acquired; these approaches affect the musculature to different degrees during surgery and may therefore also require different post-operative treatment. Normally, the treating physician 1330 prescribes an exercise plan 1335 for each patient.

The associated standard exercise plans 1335 are identified in the rules store 152, transferred to the patient management module 160 via an interface (not shown in fig. 10 for reasons of simplicity), suggested to the treating physician 1330 in one embodiment, and selected automatically in another embodiment. Mobility 1305, the surgical site 1320 (e.g., knee, hip), and/or the surgical approach 1325 are taken into account accordingly, i.e., the exercise plan has different characteristics depending on these parameters; in one embodiment it may also be clinic-specific, for example when configured via the patient management module 160. The GUI of fig. 5 may display a preselection of options to the treating physician 1330, such as a three-point gait 1345 from the second day after surgery, a transition from three-point to two-point gait 1340 from day 4 after surgery, stair climbing 1350 at a later point, and a maximum distance of 300 m for two exercises per day. The exercise plan configuration 1355 comprises start date, exercise duration, distance, exercise frequency, exercise intensity, etc. All parameters defined by the treating physician, predefined, and/or automatically specified by the rule set 150 together constitute the exercise plan 1335.

One feature is that the exercise plan 1335 is transmitted to the service robot 17 through the interface 1360 and stored in the memory 1405, together with patient data such as height, comorbidities, surgical procedure (OP), and the like. The service robot 17 performs the gait training, automatically adjusting the exercise plan 1335 as necessary, as shown in fig. 7. The exercises are then evaluated within the service robot 17 in the module for evaluating the exercises 2013 (gait training); the evaluations are shown in the GUI illustration of fig. 11. These data are transmitted to the cloud through the interface 1360 and then flow into the learning module 190. Alternatively and/or additionally, the data collected by the service robot 17 are not yet consolidated into the exercise plan evaluation shown in fig. 11; instead, only the raw data (such as the measured step length) are transmitted to the patient management module 160 and/or the rule set 150, evaluated there, and consolidated as shown in fig. 11 (when consolidated in the rule set 150, the data are then transmitted to the patient management module 160). The measured step length deviation may then be set there in proportion to the normal (physiological) step length. Depending on the embodiment, the data consolidated in these ways may also be retransmitted to the service robot 17, in one or both cases, via the interface 1360.

The evaluation 1406 shown in fig. 11 displays how long the training lasts each day, how long the distance covered is, what the patient's speed is, and the progress achieved per day. In addition, it shows which specific deviations from the physiological course of action occurred, in particular with respect to key parameters determined by the service robot 17, including crutch use, stance duration, non-weight-bearing (swing) leg phase, upper body posture, step length, and/or stride width. The course of the treatment is likewise shown, and one feature is to show which corrective measures were performed during this period. In addition, one feature is that the treating physician 1330 can be prompted, for the individual treatments he or she performs, to pay particular attention to deviations from the desired course of action or to specifically examine this course.

To enable suggestions to the treating physician 1330 regarding exercise plan adjustments, or to enable automatic exercise plan adjustments, the gait training evaluations 1406 are transmitted over time to the cloud of the learning module 190. As mentioned above, these transmitted data also include the action process corrections 450 output by the service robot 17, or the action process deviations associated with them. These data are stored in the learning module memory 192 of the learning module 190 and, where available, are supplemented with historical data 1505 from previous exercises; these historical data 1505 may, and should, likewise come from more than one patient and more than one treating physician. The database also includes the exercise plans 1335 defined before the exercises, as well as plan adjustments made by a treating physician or previously created rules regarding exercise plan adjustments 1515.

The weights of the nodes in the machine learning model and/or neural network for exercise plan adjustments 1510 are determined by the machine learning model and/or the neural network. The evaluation results of the action training at a first point in time t, performed by the service robot 17 (in conjunction with the rule set 150 and/or the patient management module 160 as necessary), and patient data such as age, height, weight, comorbidities, type of surgery, etc. (1310-1325) serve as input variables; the settings of the exercise plan to be performed at the same point in time t or at a later point in time t+1 serve as output variables. Once such node weights have been determined, it can be predicted which treatment settings are likely or should be made on the basis of the action training evaluation results determined by the service robot 17 (possibly in conjunction with the rule set 150 and/or the patient management module 160), whether these settings are exercise plan adjustments or already defined exercise plans that retain default settings. Finally, default settings may also be defined.

The determined node weights are next transmitted to the rule set 150, where the previously determined node weights are updated. The rules derived from the node weights may then be updated in the next step on the basis of the node weights (e.g., extending the training duration when a particular speed is reached on a certain day, also taking other parameters into account).

These rules for exercise plan adjustments 1530 are then transmitted to the patient management module 160 at step 1535, where they are applied, i.e., the treating physician is provided with suggestions regarding the treatment and/or treatment adjustments based on the exercise results determined by the service robot 17, the exercise history, and patient parameters such as age, height, weight, mobility, type of surgery, etc., starting from a default exercise plan that may be specific to the clinic or treating physician; these suggestions may be confirmed, rejected, or modified by the treating physician. In addition, one feature is that the rules 1530 can also be transmitted directly to the service robot 17, so that the service robot 17 can adjust the exercise plan autonomously.

The system depicted in fig. 10 may recalculate the node weights after the end of each exercise, or only at certain intervals, temporarily storing the data relevant to the re-determination of the node weights in the meantime. Without recalculation of the node weights, the rules stored in the rule set 150 or in the corresponding module of the service robot (say 2012) are applied unchanged. In this way, the exercise plan can also be adjusted automatically via the exercise plan module 2012 in the patient management module, i.e., the suggestions 1535 regarding plan adjustment are not implemented as suggestions but directly and automatically adjust the plan, without requiring intervention by the treating physician.

Improved course of action assessment and course of action correction

The course of action evaluation in the action process evaluation module 2052 in the first phase, and the action process correction in the action process correction module 2022 in the second phase (both performed in the service robot 17), essentially determine the quality of the recognition and subsequent correction of errors in the course of action. Together, course of action evaluation and course of action correction play an important role in the treatment outcome. The algorithms used here can be continuously improved, as described below, mainly in two ways: a) by collecting cases that were not previously described because they occur rarely, and b) by increasing the number of cases. Collecting the previously undescribed cases and increasing the number of cases allows the node weights to be determined more accurately, also when the evaluation is performed by machine learning and/or neural networks.

Fig. 12 describes the system and the process behind it. In a first step, the service robot 17 performs action training with the patient. The characteristics of the action process are acquired in the action process acquisition module 2051, and the action process correction 450 is output by the action correction module 2022; this comprises the process chain of characteristic acquisition 720, characteristic ranking 765, action process ranking 770, decision ranking 775, and output of the action process correction 450.

To support this process, the service robot 17 collects data about the course of action of the person (i.e., the patient) and/or saves the collected data at step 1535a. The acquired data include the acquired action process parameters, such as the actions performed by the patient, including the performed action process rankings 770, the performed characteristic rankings 765, and/or the performed decision rankings 775, and/or the output of the performed action process corrections 450, and/or recordings of the patient's actions as video sequences 1425 from the RGB 2D camera 2084 and/or the RGB-D 3D camera 2085. In one embodiment, the action process parameters also include the action process ranking 770 and the characteristic ranking 765. These data are transmitted to the learning module 190 via the interface 1360 and stored there at step 1535a.

The video data and video sequences are anonymized beforehand (not shown in fig. 12) by pixelating facial features, so that the identity of a person cannot be recognized when viewing the video. Such solutions for automatically blurring (pixelating or blacking out) faces, license plates, etc. are known in the state of the art and are available as product solutions, for example from 3DIS GmbH or guardian projects.

Via a terminal, the treating physician can access, view, and annotate the recorded video data and the different characteristics of the action process. For example, the treating physician notes person-related parameters such as step length, stride width, and/or upper body posture, the latter also including the patient's shoulder region. The treating physician enters these person-related parameters (step length, stride width, posture, etc.) together with the corresponding video sequence into the action process evaluation module 2052, setting a so-called label 1535b for this information, i.e., the treating physician marks the recorded course of action and differentiates it physiologically for the purpose of course of action evaluation in the action process evaluation module 2052. In doing so, the treating physician distinguishes between normal, physiological step length, stride width, and posture on the one hand, and parameters deviating from these due to disease on the other. The labeling performed by the treating physician constitutes an action and/or characteristic ranking. After the initial collection and labeling, the action process and/or characteristic rankings continue to be re-evaluated at steps 1540 and 1542, as is the decision ranking at step 1545, which essentially corresponds to adjusting the rules in the decision matrix of fig. 9, if necessary.

This adjustment may also entail manual adjustment of the rules of the action process ranking 1570, the characteristic ranking 1572, and/or the decision ranking 1575. This means, for example, that for a particular action process ranking (i.e., a combined evaluation of UAGS, legs, torso, etc.) an action process correction (instructions output via speech synthesis and/or shown on a display screen) is redefined or specified, and/or that a threshold must be adjusted in a characteristic ranking (such as the step length). These specified adjustments are ultimately transmitted to the rule set 150, where the rules of the action process ranking 1590, the characteristic ranking 1592, and/or the decision ranking 1595 are updated. These updates of the rule set 150 are transmitted (via an interface, not shown) to the action process acquisition module 2051 for acquiring the characteristics of the action process, or to the action correction module 2022 of the service robot 17, so that the process chain of characteristic ranking 765, action process ranking 770, decision ranking 775, and output of the action process correction 450 can be updated if necessary. One feature is that re-evaluating the course of action may likewise require adjusting the characteristic acquisition of the course of action in the action process acquisition module 2051.

Alternatively and/or additionally, the action process ranking 1540 and/or the decision ranking 1545 may be re-evaluated automatically. FIG. 13 illustrates the process by which a model is trained and then used to evaluate the course of action and the course of action correction. The service robot 17 performs action training together with the patient. The action process characteristics are acquired in the action process acquisition module 2051, and exercise instructions/corrections are issued by the action correction module 2022. The service robot 17 collects the data and/or saves the collected data at step 1410. These include the action process parameters, i.e., the actions performed by the patient, and, in one feature, the performed action process rankings 770, characteristic rankings 765, and/or decision rankings 775. Additionally included is the evaluation of the displayed action training 1406 (as shown in FIG. 11). These action training evaluations are transmitted to the learning module 190 through the interface 1360 and stored there at step 1535a. One feature is that they are saved in a database in which the historical data already recorded by the service robot 17 in the action process ranking 1550, characteristic acquisition 1552, and/or decision ranking 1555 modules are supplemented with the newly saved data.

As shown in fig. 11, node weights for the action process ranking 1560, the characteristic ranking 1562, and/or the decision ranking 1565 may be determined in a learning model or neural network by including the evaluations of the action training over time 1406. For this purpose, similarly to the determination of node weights when adjusting the exercise plan, machine learning algorithms, such as cluster analysis methods, support vector machines, and regression methods, and/or neural networks, such as convolutional neural networks, may be used. The evaluations of the movement training, such as the distance covered, the stance duration, the non-weight-bearing leg phase, etc., are used as output variables, either as direct values and/or, after calculation, as improvements compared to the previous phase.

The determined action process characteristics, which in one feature include the performed action process rankings 770 and/or characteristic rankings 765 and/or decision rankings 775 and/or output action process corrections 450, and person-related patient parameters such as age, height, weight, mobility, type of surgery, etc. (not shown in fig. 13), are used as input variables. New node weights are generated as a result of this process and transmitted to the rule set 150, which results in updating the node weights on the side of the action process ranking 1580, the characteristic ranking 1582, and/or the decision ranking 1585. Updating the node weights in turn results in updating the rules of the action process ranking 1590, the characteristic ranking 1592, and/or the decision ranking 1595. These updates of the rule set 150 are transmitted (via an interface, not shown) to the action process acquisition module 2051 for acquiring the characteristics of the course of action, or to the action correction module 2022 of the service robot 17.

Both described node weight determination methods may be implemented by machine learning and/or neural networks, so that, for example, only those action process corrections are output which have a demonstrable effect on the progress of the treatment. This may result in fewer corrections by the service robot 17, possibly saving energy consumption and processing time of the service robot 17. By restricting the action process characteristics to a smaller set that needs to be identified, the recognition time can be shortened on the one hand and the computation-related energy consumption reduced on the other. In addition, compared to video-based labeling by the treating physician, less bandwidth is used when transferring data between the service robot and the cloud.

One feature is that re-evaluating the course of action may also require adjusting the characteristic acquisition 720 in the action process acquisition module 2051, i.e., adapting how the characteristics of the course of action are acquired.

One feature is that the methods of manually improving the course of action evaluation and the course of action correction can also be combined with the methods based on machine learning and/or neural networks.

Application examples

Example 1: identifying and evaluating three-point gait

The evaluation of the three-point gait by the service robot 17, in particular the evaluation of the course of action, first requires certain definitions. A person's step length is defined as the Euclidean distance between the foot skeleton points 1950 identified by the skeletal model within the gait cycle. One gait cycle comprises the swing phase and the stance phase of each leg. The swing phase begins when the foot lifts off the ground and lasts as long as the foot is in the air and the leg moves forward. Once the foot, in particular the heel, contacts the ground (initial contact), the stance phase begins. The stance phase of a leg, defined as the time during which the foot is on the ground, is likewise derived from the skeletal model, which also identifies a plane coinciding with the ground. During the gait cycle, a step length is derived for the left and the right leg respectively, always associated with the leg that first contacts the ground after ending its swing phase. In contrast to the step length, the stride width is defined as the distance between the centers of the two heels, which typically ranges from 5-13 cm; it is likewise determined from the distance between the foot skeleton points 1950 identified in the frontal plane.

As described above, the positions of the patient's joints are output at step 725 as skeleton points of a skeletal model in space. Although the Kinect 2 skeletal model used in this example does not output the orientations of body parts, these orientations can be modeled by connecting adjacent skeleton points identified by the Kinect, which is done in step 735 during characteristic acquisition 720.

Fig. 19 shows this modeling process. It also shows the identified skeleton points, by way of example, as filled circles. The direction vectors between the identified skeleton points are calculated by mathematical operations, e.g., by creating vectors between the 3D coordinates of adjacent skeleton points. This is drawn in fig. 19 as dashed arrows between adjacent skeleton points. Thus, as shown in fig. 19a), the angle α at the knee skeleton point 1930 can be defined by two direction vectors 1910 and 1920, which coincide with the orientations of the thigh and the lower leg. A first direction vector 1910 from the knee skeleton point to the hip skeleton point and a second direction vector 1920 from the knee skeleton point to the ankle skeleton point (or foot skeleton point) are calculated, i.e., by determining the connecting line between the knee skeleton point 1930 and the hip skeleton point 1940, and between the knee skeleton point 1930 and the ankle skeleton point 1960. Analogous calculations can also be performed for the arm, for example. The angle α, the knee flexion, is shown in fig. 19a). It can be determined, for example, in the weight-bearing leg phase, i.e., in the phase from the first contact of the heel with the ground until the load is transferred to the other leg; at this transition, the so-called non-weight-bearing leg phase begins for the observed leg.

To obtain flexion and extension values for the two hip joints, the corresponding shoulder and knee points are used. The angle is determined by two direction vectors, one from the hip skeleton point 1940 to the knee skeleton point 1930 and the other from the hip skeleton point 1940 to the shoulder skeleton point 1960 (on the respective side, i.e., the right hip skeleton point 1940r corresponds to the right shoulder skeleton point 1960r). If the leg is positioned forward of the plumb line in the walking direction, this is called flexion, in particular hip flexion; the flexion angle β1 is defined by the direction vector 1910 between the hip skeleton point 1940 and the knee skeleton point 1930 and the direction vector between the hip skeleton point 1940 and the shoulder skeleton point 1960 (shown in the opposite orientation), provided that the leg is positioned in front of the plumb line in the walking direction (see fig. 19b). Conversely, extension is defined as the leg being oriented backward, i.e., the extension angle β2 is defined by the direction vector towards the shoulder skeleton point 1960 and the direction vector 1910 between the hip skeleton point 1940 and the knee skeleton point 1930 (shown in the opposite orientation), provided that the leg lies behind the plumb line in the walking direction (see fig. 19c). One feature is that the angle can be determined on both sides.

The flexion and extension angles of the hip are in turn influenced, for example, by the forward inclination of the upper body, which has an effect on the course of action. Therefore, a forward tilt angle γ is additionally recorded, which is the angle enclosed by the direction vector from the central hip skeleton point 1940 to the central shoulder skeleton point 1960 and the plumb line passing through the central hip skeleton point 1940 (see fig. 19d).
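
A minimal sketch of this tilt determination, assuming the y-axis of the coordinate system points upward along the plumb line:

```python
import numpy as np

def tilt_from_plumb(hip_xyz, shoulder_xyz, up=(0.0, 1.0, 0.0)):
    """Tilt angle (degrees) between the hip->shoulder direction
    vector and the plumb line (assumed here to be the y-axis);
    usable for the forward tilt in the sagittal plane or, analogously,
    for the lateral tilt in the frontal plane."""
    v = np.asarray(shoulder_xyz) - np.asarray(hip_xyz)
    u = np.asarray(up)
    cos_g = np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u))
    return np.degrees(np.arccos(np.clip(cos_g, -1.0, 1.0)))
```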

These calculations are performed in the sagittal plane, but analogous calculations are also carried out in the frontal plane; fig. 19e) shows the lateral inclination of the upper body as an example. In this case, a direction vector parallel to the plumb line and a direction vector parallel to the spinal column are used, with the lateral inclination angle ε between them.

Optionally, calculations may be performed using the remaining, supplementary, and/or adjacent angles, and/or including plumb lines for determining extension and/or flexion of the knee, hip, or other limb parts. Examples of this are shown in figs. 19f) and g): hip extension can be identified via the angle δ1 in g), and hip flexion via the angle γ2 in f). In addition, these angles can be used in combination with the upper body tilt angle γ to obtain the angle β.

Fig. 19h) illustrates an example skeletal model in which the ground-side end point 1970 of a UAGS is shown; it is used, for example, analogously to the foot skeleton point 1950 in the position determination. The UAGS are drawn with dashed lines and denoted 1980, because they are not part of the skeletal model in architectures such as OpenPose.

Fig. 14 shows further evaluations of the patient's right side over time. The plotted curves show hip flexion (center) and knee flexion (bottom) over 23 s for a patient with a TEP (total endoprosthesis) on the right hip. The left knee flexes more strongly (i.e., shows a larger amplitude deviation) due to the asymmetry of the post-operative course of action, which otherwise manifests as a larger step length on the non-operated side.

The gait training accompanied by the service robot 17 is performed shortly after the operation, when the patient must first master a three-point gait and then a two-point gait, as described earlier. Three-point and two-point gait involve the use of forearm crutches (UAGS) to reduce the load on the joint after surgery. Before the patient begins robot-based gait training, the treating physician instructs the patient in the use of the service robot 17. In addition, the treating physician must explain to the patient how to handle the UAGS while sitting down and standing up, turning, and opening doors, as well as the sequence of crutch placement during the three-point gait. The treating physician then hands the training over to the service robot 17. Once the treating physician receives feedback via the patient management module that a switch to two-point gait is possible, the treating physician first shows the patient the correct crutch placement sequence and checks the recommendation of the service robot 17 before "releasing" the new walking pattern.

Therefore, the UAGS must also be identified. The service robot 17 uses the depth data from the Kinect 2 3D sensor for this purpose. In a first step 710, the depth image is converted into a point cloud, for which a point cloud library as described in the state of the art is used. In a next step 745, the point cloud is segmented into smaller point clouds based on the patient's skeletal model. This uses the assumption that the UAGS must be close to the lower arms and hands and roughly parallel to the legs, i.e., the selection follows the candidate regions. This effectively utilizes standard segmentation and matching algorithms, so that a smaller point cloud near the lower arms/hands/legs can be evaluated at step 755. A specific model assumption can also be applied at step 750, which takes into account the shape of the UAGS, i.e., that the UAGS is proportional to the patient's limbs and relatively thin, either in the segmentation algorithm or in the generation of the 3D data. For example, the RANSAC architecture can be used to segment the corresponding point cloud in real time (30 fps) with high accuracy and high stability. Alternatively and/or additionally, classification rules created from recordings of UAGS, e.g., from different perspectives, can be used to identify UAGS in the image.

Based on the identified UAGS, the UAGS can be evaluated in relation to the skeletal model, for which the action process ranking is used at step 770. The position of the feet is evaluated primarily relative to the determined position of the UAGS. In a correct three-point gait movement, the patient moves the operated leg forward simultaneously with both UAGS, which optimally relieves the operated leg. The UAGS relieve the operated leg during the entire stance phase and swing forward again together with it in the transition. Since the UAGS thus move largely in synchrony with the operated leg, a straight line between the two UAGS end points 1970 is defined, and the distance of the foot point from this line is evaluated (see the sketch below). An alternative or supplementary feature is that the patient's direction of motion, i.e., the orientation of the sagittal plane, can be determined, for example from the direction of motion of the arms and legs and/or the orientation of the back and/or shoulders. Alternatively and/or additionally, the patient is tracked over a defined period, and the orientation is derived from the evaluation of the patient over time. A line orthogonal to the patient's direction of motion or orientation, running through the UAGS positions, can then be determined, and the distance of the foot point from this line is evaluated. In this way, typical errors in three-point gait, such as placing the UAGS too early or too late and relieving the wrong leg, can be identified by statistical evaluation of the distance distribution. Deviations from the prescribed walking pattern or irregular/incorrect sequences can be identified by evaluating the UAGS position relative to the patient's body. According to the three-point gait definition above, during correct movement both UAGS are at the same height relative to the patient's sagittal plane. Deviations of this position exceeding a corresponding threshold can then be identified as errors in the sequence.
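
A sketch of the distance evaluation between a foot point and the connecting line of the two UAGS end points, assuming all points have been projected into the ground plane:

```python
import numpy as np

def foot_distance_from_crutch_line(uags_left, uags_right, foot):
    """Perpendicular distance of a foot skeleton point from the
    straight line through the two UAGS ground end points 1970.
    Inputs are 2D ground-plane coordinates (projection assumed)."""
    a, b, p = map(np.asarray, (uags_left, uags_right, foot))
    d = b - a
    q = p - a
    cross_z = d[0] * q[1] - d[1] * q[0]   # 2D cross product
    return abs(cross_z) / np.linalg.norm(d)
```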

Fig. 15 shows crutch use over time. The upper time profile shows the height of the crutches above the ground (left crutch: higher average amplitude, solid line; right crutch: smaller average amplitude, dashed line). This information, in particular the ground contact, is used to evaluate the distance of the crutch from the ankle, as shown in the lower time profile. That profile shows, over the course of time, how far the post-operative leg is placed forward relative to the so-called (invisible/imaginary) connecting line between the UAGS. The curve profile is such that its minima lie close to the minima of the upper curve (i.e., on the vertical dashed lines). Thus, according to the example illustrated in fig. 15, an undesired crutch placement sequence can be ruled out: the patient first places the crutches, then the operated leg, and then the healthy leg. The output action process correction 450 informs the patient that the UAGS should be placed forward as simultaneously as possible with the post-operative leg; in this way, the patient's movement becomes smoother and requires fewer interruptions. Further parameters can be derived from this time profile, including "crutch placed too short/too wide", which results in unequal step lengths, and "crutch placed too late", which results in an unexpectedly high load on the post-operative leg (the foot of the operated leg lands earlier than the UAGS).

In addition to acquiring the characteristics at step 760, the service robot 17 must distinguish deviating action process characteristics from physiological ones. This ranking must be performed in real time while the patient moves behind the service robot 17. The ranking of step length, stance duration, stride width, torso flexion, or joint deflection/motion within typical physiological ranges is specified by the therapist. For this purpose, the movements of healthy and physically impaired persons are recorded with the Kinect device and the 2D camera 2084, and the movements are labeled, using timestamps to synchronize labeling and recording. In doing so, both faulty characteristics and correct courses of action are labeled. The timestamps are used to synchronize the labeled data with the 3D recordings of the 3D camera.

Since physiological deviations of the motion characteristics can occur at different levels, an evaluation based on absolute threshold deviations is not particularly effective. It is more effective to assess the similarity of the motion of the two legs within one gait cycle (i.e., two steps). In this case, the similarity is determined by the ratios of step length, stance duration, and stride angle between the operated and non-operated legs. Figure 16 illustrates the stance duration sequence over four gait cycles, four days post-surgery, for a patient with a right hip prosthesis. The width of each vertical bar describes the stance duration of one leg; the intervals on the operated side are narrower (right stance duration). The right, post-operative leg of the patient of fig. 16 is thus loaded for a shorter time than the healthy leg, in order to minimize pain. The differing step lengths can also be seen in the graph: the displayed curve is determined from the distance between the ankles (sagittal plane). The value is smallest at the moment the ankles are level with each other, and largest when the maximum distance between the ankle of the forward leg and the ankle of the stance leg is reached. As the graph shows, the left, non-operated leg takes a significantly shorter step than the right leg. Equally large steps are the best way to gradually increase the load on the operated leg and achieve a smooth course of action. The ratios between the step lengths and the stance durations of the two legs can therefore be regarded as a suitable means of classifying the patient's walking as physiological or conspicuous, where a ranking as "deviating" results in the output of an action process correction 450.

To distinguish between physiological and deviating courses of action, an F1 score is evaluated, which measures how well two classes (error/deviation versus correctly performed action) are distinguished and which can be calculated for different thresholds. The F1 score is generally defined as

F1 = 2 × (precision × recall) / (precision + recall).

The best threshold corresponds to the highest F1 score. FIG. 17 illustrates the distribution of deviating and physiological step symmetry in a histogram, in which one class contains the erroneous (asymmetric) steps and the other the non-erroneous steps. A symmetry of 0.0 means perfect symmetry of the left and right legs, whereas a symmetry of ±1.0 means that one leg has more than double the step length of the other. The threshold that best distinguishes the two classes can be found by evaluating the F1 score. An optimal symmetry threshold of -0.17 (precision: 0.85, recall: 0.87) then means that steps with a symmetry below -0.17 are ranked as deviations from the normal course of action, so that the service robot 17 can initiate a correction.
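
A sketch of this threshold search over the F1 score, assuming per-step symmetry values and therapist-provided ground-truth labels as inputs:

```python
import numpy as np

def best_threshold(symmetry, is_deviation):
    """Grid-search the symmetry threshold that maximizes the F1 score.
    symmetry: array of per-step symmetry values; is_deviation:
    boolean labels from the therapist's annotations."""
    symmetry = np.asarray(symmetry, dtype=float)
    is_deviation = np.asarray(is_deviation, dtype=bool)
    best_f1, best_t = 0.0, None
    for t in np.linspace(symmetry.min(), symmetry.max(), 200):
        pred = symmetry < t                  # below threshold -> deviation
        tp = np.sum(pred & is_deviation)
        fp = np.sum(pred & ~is_deviation)
        fn = np.sum(~pred & is_deviation)
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_t, best_f1
```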

The evaluation of the three-point gait is summarized in fig. 22. Information about the leg to be relieved (operated side) is used first at step 1805 (such as is provided in the description of fig. 5). During characteristic acquisition 720, the direction vectors between the skeleton points and/or the UAGS are recorded as spatio-temporal parameters at step 1810. The position of the foot skeleton points 1950 at the time of contact with the ground is recorded at step 1815. Based on this, the stride width is determined at step 1825 by calculating the distance of the foot skeleton points 1950 in the frontal plane. The stride width, i.e., the distance between the foot skeleton points 1950, is evaluated at step 1865 during characteristic ranking 765.

Additionally, based on the positions of the foot skeleton points 1950 recorded at step 1815, the step length (i.e., the distance of the foot skeleton points 1950 measured in the sagittal plane at successive ground contacts) is determined at step 1830, and the ratio of the step lengths within a gait cycle 1870 is evaluated during characteristic ranking 765. Furthermore, as a result of step 1815, the stance duration within a stride is recorded by measuring time and evaluated at step 1875. Optionally, the inclination of the upper body may be determined at step 1840 and subsequently evaluated at step 1890 within characteristic ranking 765; likewise, the flexion and/or extension of the hip and/or knee joints may be recorded at step 1845 and subsequently evaluated at step 1895 in characteristic ranking 765.

Additionally, the positions of the UAGS end points 1970 are recorded at step 1820. Based on this, the distance between the UAGS end points 1970 at ground contact in the frontal plane can be measured at step 1851 and evaluated at step 1884 in characteristic ranking 765 (corresponding to the UAGS distance); this determines, for example, how far apart the UAGS are placed from each other. Alternatively and/or additionally, the distance between the UAGS end point 1970 and the foot skeleton point 1950 at ground contact in the sagittal and/or frontal plane can be determined at step 1863, after which this distance is evaluated at step 1885. This determines whether the UAGS are positioned too close to the body or too far from it in the sagittal and/or frontal plane compared to the foot skeleton point 1950. Corresponding thresholds may be applied: exceeding the threshold in the frontal plane indicates that the UAGS are placed too wide, falling below it indicates that they are placed too narrow, and exceeding the threshold in the sagittal plane indicates that they are placed too far forward. In addition, the ground contact time points of the UAGS and the foot skeleton points 1950 can be evaluated at step 1880 based on steps 1815 and 1820 (in order to assess whether the operated leg is moved at the right time; it should land as soon as possible after the UAGS have contacted the ground).

These characteristics assess the course of action of a person, for example when walking with UAGS. For detecting the three-point gait, the position of the UAGS end points 1970 at ground contact is acquired at step 1850, and it is determined at step 1852 whether the line between the UAGS ground contact points is parallel to the person's frontal plane. In addition, at step 1854, it is determined whether the relieved leg, or its foot, is between the UAGS. This can be achieved in at least two alternative and/or complementary ways. On the one hand, the connecting line of the UAGS positions on the ground can be determined at step 1855, and then the distance of the relieved leg's foot skeleton point 1950 from this connecting line can be determined at step 1857; during ranking 765, this distance is evaluated at step 1887. Alternatively, perpendiculars to the sagittal plane can be constructed through the UAGS end points 1970 and through the foot skeleton point 1950 of the relieved leg (step 1860), after which the mutual distances of these perpendiculars in the sagittal plane are determined at step 1862 and evaluated at step 1889.

An action process ranking 770 is then performed, in which the characteristic rankings are evaluated in combination. If the deviations match a rule stored in the action process ranking that relates to a defined fault (e.g., the fault "crutches placed too far forward" is assigned to UAGS placed too far forward while the upper body leans too far forward), an action process correction 450 (e.g., a prompt that the UAGS must not be placed too far forward) is output based on the decision matrix (fig. 9).

Example 2: machine learning/neural network based ranking

The characteristic ranking 765 and the action process ranking 770 have so far been defined deterministically, i.e., based on expert assessment. An optional feature is that machine learning and/or neural networks may be used for this purpose instead. If a support vector machine is used, the direction vectors from the characteristic acquisition mentioned above can form classes defined in a vector space (i.e., certain vectors within a class have a certain similarity, while dissimilar vectors differ). A characteristic with a given expression thus spans a region of the vector space. The same applies to the action process ranking. Other methods, such as k-NN, k-means, general clustering methods, or neural networks, function similarly. One feature is that the course of action involves the gait process.

Example 3: Identifying abnormalities in the motion state and involving participants

In one feature of the service robot 17, rules for identifying anomalies are stored in the action process evaluation module 2052 for evaluating courses of action and/or in the action process correction module 2022 for action process correction. An anomaly is a deviation of the motion state from the "normal" state, which here may itself already deviate from the physiological walking state. For example, a healthy leg may have a step length of 65 cm while the post-operative leg reaches only 30 cm; if the service robot 17 then measures not a 30 cm step length but only 10 cm, this deviates from the "normal deviating course of action". Alternatively and/or additionally, anomalies over time can be detected from the curve profiles of the skeleton points of the individual skeletal models, where the amplitude, the minima, the maxima, and/or the positions of the inflection points over time can characterize an anomaly. This identification takes place in the action process evaluation module 2052 for evaluating the course of action, in particular using the characteristic ranking 765 and the action process ranking 770.
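
A minimal sketch of both anomaly checks; the baseline and tolerance values are illustrative and would be patient-specific:

```python
import numpy as np

def is_anomalous_step(step_cm, expected_cm=30.0, tolerance_cm=10.0):
    """Flag a step length that deviates from the patient's own
    'normal deviating' baseline (e.g. the 30 cm post-operative
    step length from the example above; values illustrative)."""
    return abs(step_cm - expected_cm) > tolerance_cm

def curve_features(trajectory):
    """Simple curve features of a skeleton point trajectory
    (minimum, maximum, amplitude) that can be compared against
    stored reference profiles to detect anomalies over time."""
    t = np.asarray(trajectory, dtype=float)
    return {"min": t.min(), "max": t.max(), "amplitude": t.max() - t.min()}
```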

If one or more of these anomalies are identified, the rules stored in the module trigger an event. The event may first cause the service robot 17 to transmit information to the patient management module 160 via a wireless interface, which in turn may send a notification, for example via a wireless network (GSM, LTE, WLAN), that the mobile device of the treating physician can receive, thereby informing the treating physician of the event. This information may contain video sequences in which the anomaly is recorded. One feature is that the video sequence may also be stored in the patient management module 160, ensuring that the treating physician, based on the notification, can access the video sequence stored in the patient management module 160 via the mobile device.

An alternative and/or complementary feature is that a timestamp can be assigned to the anomaly, and a piece of information indicating that the video sequence transmitted to the learning module 190 contains the anomaly is stored in the memory of the learning module 190 together with the timestamp. In the GUI through which the treating physician can label the video sequences transmitted to the learning module 190, a higher priority can be set for sequences containing anomalies, for example by specifying priorities in a database. In this way, efficient labeling can be achieved in order to continuously improve the algorithms for course of action evaluation and course of action correction. One feature is that the acquisition, ranking, evaluation, and/or correction of the action involves the gait process.

Example 4: person identification, virtual person tracking, and re-identification of person identities

Person identification can be implemented in the person identification module 2041 through skeletal model recognition, e.g., using 2D and 3D cameras and/or evaluation frameworks such as OpenPose, OpenCV, etc. By synchronizing the recordings of sensors that enable skeletal model recognition with RGB recordings, the body regions of the tracked person can be associated with colors and/or color patterns and/or textures from the person's clothing. The person can then be tracked over time and re-identified using the color or color pattern parameters of each body region and/or the body measurements of the patient (height, arm length, leg length). Alternatively and/or additionally, an action process model, such as a gait process model, may be used. Tools such as OpenPTrack are available for this.

Face recognition may also be used in addition and/or instead.

In another feature, tags can be used for identifying, tracking and re-identifying the identity of a person. To identify the patient, markers may be positioned on the walking aid. These markers may be color-based patterns or light sources with specific frequencies. One feature is accordingly that the patient wears a vest on whose surface a barcode is printed that the service robot 17 can see. The RGB-2D camera 2084 can then identify the patient by the barcode. Another optional feature is that information about the barcode can be assigned to the patient accordingly, i.e. stored in the patient management module 160 or on a storage medium or authorization card.

Example 5: adjusting exercise plans based on multiple logistic regression

As an example of a machine learning scheme, multinomial logistic regression is presented. Such an evaluation model can estimate, based on a series of input variables, the likelihoods of different output variables that need not be directly related. In this way, the options of the exercise program derived from the GUI in FIG. 5, such as a three-point gait, a duration of 10 minutes and a distance of 100m, can be evaluated simultaneously as output variables. As input variables, for example, the relevant options of the exercise program previously determined by the service robot 17 can be considered, but also co-morbidities of the patient, such as general movement limitations, impairment of mental capacity, their height, the type of surgery performed, etc. (which may affect different muscle groups and thus represent different degrees of impairment in the post-operative phase).

In multinomial logistic regression, a linear prediction function is used that determines a score as a linear combination of regression weights and input variables, written as a dot product: score(X_i, k) = β_k · X_i, where X_i is the vector of input variables of observation i and β_k is the weight vector (regression coefficients) belonging to the output variable k. The score can in turn be converted directly into a likelihood value that observation i of the input variables yields the corresponding output variable k. The input variables are also referred to as independent variables and the output variables as dependent variables.

The type of surgery, the physical state, the post-operative duration, the distance travelled and the walking mode can then be used as independent variables, each with different characteristics. The settings or combinations of settings that the treating physician can make in the exercise program can be used as dependent variables, which in this case likewise have different characteristics. Specifically, a patient in normal physical condition may have undergone a left hip TEP with lateral entry into the hip as surgery, and the service robot 17 determines that three days post-operatively a distance of 250m was covered within 5 minutes in a three-point gait (these are the characteristics of the input variables); on this basis the treating physician may adjust the exercise plan so that the patient walks 10 minutes in the next exercise while keeping the three-point gait (these are the characteristics of the output variables). Based on the characteristics of each variable, a multinomial logistic model can be estimated. The regression coefficients determined in the model formation phase (the weights of the individual input variables such as physical state, hip entry, post-operative duration, distance, exercise duration, three-point gait) can be used to provide the treating physician with a recommendation as to which exercise plan configuration he should choose when acquiring further exercises, in particular of other patients. If the treating physician has made such settings frequently in the presence of the above-mentioned characteristics of the input variables, it is reasonable to suggest these settings; in other words, the corresponding weights take on significant values. This makes it possible to provide the treating physician with the concrete recommendation of the above measures: for a patient in normal physical condition with a left hip TEP with lateral entry into the hip, for whom the service robot determined a distance of 250m within 5 minutes in a three-point gait three days post-operatively, a gait duration of 10 minutes in a three-point gait is to be set. The suggestion is displayed in the GUI of FIG. 5, for example highlighted by color (not shown). Since this is a statistical model that only states the likelihood that the treating physician modifies the exercise program given the specific characteristics of the input variables, threshold values are stored in the rule set 150, so that the suggestion is only made to the treating physician when the estimated reliability of the combination of three-point gait and 10 minute gait duration exceeds, for example, 80%.
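
A minimal sketch of such a recommendation step in Python, assuming scikit-learn and toy numeric encodings of the input variables; the encodings and the class labels are illustrative assumptions, while the 0.8 threshold mirrors the 80% reliability mentioned above.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row encodes [surgery_type, physical_state,
# days_post_op, distance_m, duration_min] of a completed exercise.
X = np.array([
    [0, 1, 3, 250, 5],
    [0, 1, 4, 300, 6],
    [1, 0, 3, 120, 5],
    [1, 0, 5, 150, 7],
    [0, 1, 3, 260, 5],
])
# Plan configuration chosen by the physician afterwards, as a class label,
# e.g. 0 = "three-point gait, 10 min", 1 = "three-point gait, 5 min".
y = np.array([0, 0, 1, 1, 0])

model = LogisticRegression(multi_class="multinomial", max_iter=1000)
model.fit(X, y)

# New observation acquired by the service robot.
x_new = np.array([[0, 1, 3, 250, 5]])
probs = model.predict_proba(x_new)[0]
best = int(np.argmax(probs))
if probs[best] > 0.8:  # threshold stored in the rule set
    print(f"Suggest plan configuration {best} (p={probs[best]:.2f})")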

Alternative or complementary methods of determining the weights are naive Bayes, decision trees, or neural networks, such as long short-term memory (LSTM) recurrent neural networks.

Example 6: signal indicating the need for exercise

The signal may be transmitted to the patient management module 160 and/or the service robot 17 via a mobile terminal device used by the patient, which in turn can wirelessly transmit the relevant description to the service robot 17. Alternatively, in one feature, an in-house calling system, which is standard in clinics, or a fixed-mounted terminal can be used, which communicates with the patient management module 160 and/or the service robot 17. Such exercise requests may also first be entered into a database that has a set of mechanisms for planning the sequence of exercises. Rules can be stored in connection with this, which e.g. give priority to a particular patient.

Example 7: drawing/mapping

When the service robot 17 first enters an area, such as a clinic building, the service robot 17 maps its surroundings through the 2D/3D acquisition submodule and the mapping module 2061. For this purpose, all spaces are surveyed and environmental data are acquired by means of 2D and/or 3D sensors. The sensors considered here are at least the LIDAR 2083, the RGB-D 3D camera 2085, the RGB 2D camera 2084, and/or ultrasound and/or radar sensors. Combinations of these sensors may also be used. The (2D) RGB camera 2084 serves here only as an aid in evaluating color. Rules specifying into which areas the service robot 17 is allowed to move, and for what purpose, are saved in the mapping module 2061. This includes, for example, areas into which the service robot 17 is not allowed to move, and areas through which it is allowed to pass in order to complete training, such as action training.

One feature is that the 2D/3D acquisition and mapping module 2061 has an interface to the action training module 2011 for exchanging positioning data. Thus, a room number may be stored in the action training module 2011 and the information assigned to this room number may be stored in the mapping module 2061. The service robot 17 can thus recognize the position at which it meets the patient.

Example 8: enriching maps by CAD data

One feature is that the maps taken by the service robot 17 of the building in which it is located can be enriched with the building's CAD data, which are entered through an interface into the 2D/3D acquisition and mapping module 2061. They may optionally be entered into a separate module connected to the mapping module 2061 via an interface. CAD data here refers on the one hand to 2D or 3D floor plans originating from software programs used in building planning. Image data (such as PNG, JPEG, PDF) can also be used, from which the system can derive the corresponding floor plan information of the building. Taking these floor plans into account can support the service robot 17 in identifying passages and opening doors, and corresponding annotations can be made in the mapping module 2061 by interpreting the drawing symbols. In addition, the floor plans may be used to identify temporary or quasi-stationary obstacles in a building that are not part of the building and that may change their location or disappear altogether within a period of time.

Example 9: drawing and automatic positioning by electromagnetic waves.

One feature is that the service robot 17 may also take electromagnetic waves into account, such as light and/or wireless signals from WLAN access points, for better navigation within the building. For this purpose, photodetectors on the service robot 17 can be used to collect the light intensity (or daytime solar radiation) and/or the angle of incidence of the light, both during the surveying process and during regular navigation within the building. When calibrating the collected illumination against the mapped illumination, the time of day and the season as well as the longitude and latitude can be taken into consideration, so that the natural fluctuations of the sun's angle of incidence and intensity are accounted for. Alternatively and/or additionally, artificial light in the building can also be collected, both its light intensity and its spectrum.

Alternatively and/or additionally, the WLAN signals of a plurality of routers whose positions in the building are known and annotated in the map of the service robot 17 can be identified by the WLAN module 2088. The location within the building can then be determined more accurately by triangulation. In addition to the collected light incidence, the speed, the distance travelled and the direction of the service robot 17 in space are collected, stored and compared with the stored values. The methods described in this example can be combined with other methods described in this document.
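
To illustrate the triangulation step, the following Python sketch estimates a 2D position from distances to routers with known positions by linearized least squares; converting WLAN signal strength into distances is assumed to happen beforehand via a propagation model, which this example does not cover.

import numpy as np

def trilaterate(anchors, distances):
    """Estimate (x, y) from known router positions and measured distances.
    anchors: (n, 2) array of router coordinates; distances: (n,) array.
    Linearizes the circle equations against the first anchor and solves
    the resulting overdetermined system by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three routers annotated in the map and their estimated distances:
print(trilaterate([(0, 0), (10, 0), (0, 8)], [5.0, 8.06, 5.0]))
# -> approximately [3. 4.]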

Example 10: measuring the distance traveled by a patient by odometer and patient tracking

One feature is the determination of the patient's travel by means of the odometer module 2081 and optical sensors, such as the LIDAR 2083 and/or 3D cameras. At least one optical sensor collects the relative position of the patient to the service robot 17 during the motion training, while the path of the robot is determined by the odometer module 2081. According to a particular feature, the robot may have sensors with magnets which determine the number of revolutions of the caster 6, the distance travelled being calculated from the radius. Errors due to slip can be corrected by combining this with an appropriate inertial sensing mechanism, such as an acceleration sensor. When the patient is acquired using a sensor whose data are evaluated as a skeletal model, the patient position identified by the service robot 17 in the person identification module 2041 and tracked by the person tracking module 2043 or the 3D person tracking module 2046 may be based on the acquired center point between the legs (if acquired by the LIDAR 2083) and/or the acquired center point between hip, spine, foot joints 1950, etc. By observing the foot joints 1950, a scheme similar to that for the LIDAR can again be used. The patient's travel is then determined relative to the robot's travel.

One feature is that the positions of the foot skeleton points 1950 can only be insufficiently derived from the skeleton model, i.e. the positions are acquired with a higher degree of ambiguity compared to other skeleton points. To improve the acquisition accuracy, as shown in fig. 21, the foot skeleton point 1950 (fig. 21a) can alternatively and/or additionally be determined from the knee skeleton point 1930, a direction vector starting at the knee skeleton point and positioned parallel to the lower leg, and the height of the knee skeleton point 1930 above the ground when the direction vector runs along the plumb line; this height indicates the distance of the foot skeleton point 1950 from the knee skeleton point 1930 (see fig. 21b). To determine the direction vector parallel to the lower leg, one feature is that, alternatively or additionally, a segmentation method for detecting UAGS can be referenced, in order to combine the scatter plot acquired for lower leg detection with the skeleton model.
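
A minimal geometric sketch of this estimate in Python, assuming 3D coordinates for the knee point, a shin direction vector and a previously calibrated knee height; the names and frame conventions (z up, ground at z = 0) are assumptions for the example.

import numpy as np

def estimate_foot_point(knee, shin_direction, knee_height):
    """Estimate the foot skeleton point from the knee skeleton point.
    knee: (3,) knee position; shin_direction: (3,) vector parallel to
    the lower leg (pointing downward); knee_height: knee-to-foot distance,
    calibrated as the knee's height above ground while the shin is plumb."""
    u = np.asarray(shin_direction, dtype=float)
    u = u / np.linalg.norm(u)          # unit vector along the lower leg
    return np.asarray(knee, dtype=float) + knee_height * u

knee = [0.10, 0.00, 0.48]              # z up, ground at z = 0
shin = [0.05, 0.00, -0.47]             # roughly downward, slightly forward
print(estimate_foot_point(knee, shin, knee_height=0.48))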

Example 11: measuring the distance traveled by the patient by accumulating the acquired step lengths

In one feature, the patient's travel is determined by accumulating the step lengths collected during the motion training. The basis for this is identifying the positions of the ankle joints by a 3D camera, such as a Microsoft Kinect or an Astra Orbbec, and an evaluation framework associated therewith. Alternatively and/or additionally, the positions of the feet may also be determined by LIDAR.

Example 12: measuring the distance traveled by the patient by means of a coordinate system

One feature is that the patient's traversed path is determined by accumulating the Euclidean distances between coordinate points traversed by the patient. The service robot 17 creates a coordinate system of its environment in the mapping module 2061 in advance. The service robot 17 determines its position in space by self-localization 2062, which in turn is assigned to the corresponding coordinates. Optical sensors, such as the LIDAR 2083 and/or the 2D camera 2084 or a 3D camera, collect patient data during the exercise and determine the patient's position relative to the service robot 17. A spatial coordinate is likewise assigned to this position. By tracking the patient over time, a series of spatial coordinates is determined, and the Euclidean distances between successive coordinate points can be accumulated stepwise. When patient data are acquired using a sensor whose data are evaluated as a skeletal model, the patient center point may be based on the acquired center point between the legs (if acquired by the LIDAR 2083) and/or the acquired center point between hip, spine, etc. The position of the patient in space can thus also be determined from the patient coordinates, independently of the position of the service robot.
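
A minimal sketch of the stepwise accumulation in Python, assuming the tracked patient center points have already been transformed into the map's world coordinate system; the small-motion gate is an assumption to suppress tracking jitter.

import numpy as np

def travelled_distance(points, min_step=0.05):
    """Sum Euclidean distances between successive patient coordinates.
    points: (n, 2) world coordinates of the tracked patient center point;
    min_step: assumed jitter gate in meters - smaller moves are ignored."""
    points = np.asarray(points, dtype=float)
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return float(steps[steps >= min_step].sum())

track = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.1), (0.81, 0.1), (1.2, 0.2)]
print(f"{travelled_distance(track):.2f} m")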

Example 13: determining and outputting a trip still to be completed

The service robot 17 may compare the distance the patient has covered with the distance prescribed by the exercise plan and indicate to the patient via the speaker 2092 and the speech synthesis 2073 how much of the distance according to the exercise plan remains to be completed. Alternatively and/or additionally, output on the display screen 2087 is also possible.

Example 14: determining the velocity of a patient

The service robot 17 records the exercise time while recording the distance covered. In this way, the service robot 17 can determine the patient's speed and save it to the action evaluation module 2050. The patient's speed may be compared with a previously saved speed to calculate a difference, and the service robot 17 may indicate via the display screen and/or the speaker 2092 and speech synthesis 2073 how much the patient deviated from the historical values during the exercise.

Example 15: patient piloting

The service robot 17 navigates in front of the patient or follows the patient during the autonomous exercise. In both cases, one feature is that the patient's path is evaluated by an evolutionary algorithm using the motion planning module 2065. In both cases, the service robot 17 provides navigation for the patient within the exercise area. If the service robot 17 first fetches the patient from his room, it first navigates in the area where the patient does not complete exercises and then reaches the exercise area, which has few obstacles, including other moving people. The service robot 17 navigates the patient by means of voice and/or visual prompts emitted via the speaker 2092 or the display 2087; one feature is that the prompts can also be given by means of a signal light of the service robot 17. If the service robot 17 is located behind the patient, voice output is preferred. The patient is tracked and the position of the patient is calibrated within the building. When a deviation from the planned route occurs, the service robot 17 emits a direction correction, reminding the patient to turn at a specific point or, if the patient should not have turned, to turn back if necessary.

Example 16: action planning device

To efficiently calculate the path traversed by the service robot 17, different objectives are optimized simultaneously within the scope of the motion planning performed in the motion planning module 2065. First, the service robot 17 calculates the optimal path in the path planning module 2064. If its environment changes, in particular along the previously calculated path, the service robot 17 does not recalculate the entire optimal path to the destination, but only the dynamically changed section of the path. Second, if the service robot 17 moves directly to a destination, the dynamic motion planning also takes into account the orientation of the service robot 17 upon reaching the destination. Third, it is taken into account that the service robot 17 moves more slowly backwards than forwards. Fourth, the optimization also respects safe distances from static and dynamic obstacles. Fifth, the service robot 17 takes into account that it and the tracked person, here the patient, keep a certain distance. These target variables are treated jointly as a cost function, the individual target variables entering as a weighted sum, so that the path is optimized globally. The dynamic window approach known from the state of the art is used here (Fox et al. 1997). Furthermore, one feature is that, for the selection of the individual path segments, evolutionary algorithms can also be used which optimize the acceleration along the individual path segments.
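
The weighted-sum scoring of candidate velocity commands, in the spirit of the dynamic window approach, might look like the following Python sketch; the three cost terms and their weights are illustrative assumptions, not the exact terms used by the service robot.

import math

def score_candidate(heading_error, clearance, speed,
                    w_heading=1.0, w_clearance=0.8, w_speed=0.4):
    """Weighted sum over normalized objectives for one (v, w) candidate.
    heading_error: radians to the goal direction after simulating the
    command; clearance: distance in m to the nearest obstacle; speed:
    forward velocity in m/s. Higher score = better candidate."""
    h = 1.0 - abs(heading_error) / math.pi   # prefer goal-directed motion
    c = min(clearance, 2.0) / 2.0            # saturate clearance at 2 m
    s = max(speed, 0.0)                      # penalize backward motion
    return w_heading * h + w_clearance * c + w_speed * s

candidates = [
    {"v": 0.5, "heading_error": 0.2, "clearance": 1.5},
    {"v": 0.3, "heading_error": 0.05, "clearance": 0.4},
    {"v": -0.2, "heading_error": 0.0, "clearance": 1.8},
]
best = max(candidates,
           key=lambda c: score_candidate(c["heading_error"],
                                         c["clearance"], c["v"]))
print(best)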

By means of this motion planning, the camera can be adjusted taking into account the path of the service robot 17 and the direction of the patient's movement, ensuring, for example, that the camera keeps the patient centered in the image. For this purpose, a PID controller can be used that employs integrator clamping known from the state of the art (i.e. upper and lower limits on the integral term) and adjusts the horizontal camera angle. Alternatively and/or additionally, the angles determined by the PID controller may also be used to correct horizontal angles in the skeletal model that arise, compared to an orthogonal camera perspective, because the rotation of the service robot 17 is limited by its direction of travel.
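
A minimal sketch of such a PID controller with a clamped integrator, adjusting the horizontal camera angle so that the patient stays centered; the gains and limits are illustrative assumptions.

class ClampedPID:
    """PID controller with integrator clamping (anti-windup)."""

    def __init__(self, kp=1.2, ki=0.3, kd=0.05, i_min=-0.5, i_max=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """error: horizontal offset of the patient from the image center
        (e.g. in radians); returns the camera pan correction."""
        self.integral += error * dt
        # Clamp the integral term to its upper and lower limits.
        self.integral = max(self.i_min, min(self.i_max, self.integral))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = ClampedPID()
for err in (0.30, 0.22, 0.10, 0.02):
    print(f"pan correction: {pid.update(err, dt=0.1):+.3f} rad")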

Example 17: exercise area

In practice, a particular route is first defined by walking through the clinic or other area, and is saved as such in the space plan module 2060r. An area should first be selected that has few obstacles which the patient and/or the service robot 17 must avoid. One feature is that these spaces have a width corresponding to at least three times the width of the service robot 17. Obstacles also include persons other than the patient and the service robot 17 moving within the area, including clinic staff and medical staff, as well as beds, carts, seats, etc., as encountered in the daily work of a clinic. Few obstacles not only allow the patient to perform smoother, less painful motion exercises depending on their state of health, but also allow the service robot 17 to collect patient data better. Fewer evasive maneuvers are executed, which shortens the distance travelled and conserves the capacity of the portable storage battery. Furthermore, it happens less often that other persons move between the patient and the service robot 17. Such situations can lead to tracking problems, but also to the service robot 17 erroneously identifying another person as the patient. If such situations are avoided, the occurrence of situations in which the identity of the person must be re-identified using the re-identification module 2044 can be reduced.

One feature is that the service robot 17 can also automatically identify sub-areas suitable for exercising by navigating within a predefined area (or in a clinic). For this purpose it records the number and type of obstacles, their size (absolute and/or relative to the width of the walkway), and the density of obstacles across the whole area as a function of time, for example over the course of a day or week. These values are stored in the memory of the service robot 17, such as in the space plan module 2060r. The data may be collected either during "dry runs" of the service robot 17 (i.e. without concurrent motion training with a patient) or during exercises with patients. The data of the acquired obstacles may be processed within the service robot 17 or in a cloud to which the data are transmitted.

If an exercise with a patient is now required, the service robot may determine and/or predict, by accessing the historical data, which areas had the lowest obstacle density during past exercise periods (e.g. Friday 13:00-13:15). Determining the density may include clustering. One feature is that the service robot 17 also takes the distance to be travelled into account and then makes a routing decision based on the density. If, for example, a route length of 100m is prescribed, the service robot 17 may choose a 50m stretch to be walked back and forth, or the service robot 17 may identify, based on historical data, a partial stretch of 25m length that at a certain past time had a certain obstacle density (or a minimum density, for example among the lowest 90% of calculated results, or a density below a specific relative or absolute threshold). The service robot 17 then automatically selects the area of 25m length, which the patient should walk 4 times.
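
A sketch of the density-based selection over historical records in Python; the record structure (area, weekday, time slot, obstacle density) and the field names are assumptions, while the 100m route and 25m segment follow the example above.

from collections import defaultdict

# Historical records: (area_id, weekday, time_slot, obstacle_density)
history = [
    ("corridor_A", "Fri", "13:00", 0.8),
    ("corridor_A", "Fri", "13:00", 0.6),
    ("corridor_B", "Fri", "13:00", 0.2),
    ("corridor_B", "Fri", "13:00", 0.3),
]

def pick_exercise_area(history, weekday, time_slot,
                       route_m=100.0, segment_m=25.0):
    """Choose the area with the lowest mean historical obstacle density
    for the given slot and derive how often the segment must be walked."""
    densities = defaultdict(list)
    for area, day, slot, density in history:
        if day == weekday and slot == time_slot:
            densities[area].append(density)
    best = min(densities, key=lambda a: sum(densities[a]) / len(densities[a]))
    repetitions = round(route_m / segment_m)
    return best, repetitions

print(pick_exercise_area(history, "Fri", "13:00"))  # ('corridor_B', 4)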

Example 18: arrangement of cameras and distance from patient

To optimally acquire information about the patient, a 2D camera 2084 and/or a 3D camera is installed such that it captures the patient as centrally as possible. Assuming an average patient height of 175 to 180cm, the camera may be mounted at a height of 80-90cm.

Depending on the specific technology used by the 3D camera (time-of-flight on the Kinect, structured light on the Astra Orbbec), different relationships to the optimal distance from the patient can be derived. On the one hand, the distance is determined by the viewing angle of the 3D camera, within which the entire body of the patient can be acquired under optimal conditions. The Kinect has a working range of at most 4m that allows smooth acquisition of patient data, i.e. at this distance its body parts can still be identified. If the UAGS are to be tracked as well, the situation differs, and preferably a 3D camera using time-of-flight technology should be used; Guidi et al. reported in 2016 that the Kinect2 is more accurate than a series of structured light cameras measured for comparison. If a commercially available Astra Orbbec 3D camera is used, the horizontal field of view of the 3D camera is 60° and the vertical field of view 50°, at a resolution of 640x480. A picture with a horizontal width of about 231cm can thus be taken at 2m distance from the 3D camera. If the 3D camera is mounted at a height of 92cm, the maximum height of the acquired picture is about 184cm, which does not allow data to be collected for taller people. One feature is therefore that tracking may have to be done at greater distances from the patient, which, depending on the camera model used, may be problematic for UAGS identification. It is therefore a feature that, for the acquisition of UAGS, UAGS identification and UAGS tracking by the LIDAR 2083 and/or ultrasound or radar sensors is used, for which the signals of at least one camera system and of the LIDAR 2083, ultrasound or radar sensor are synchronized in time and combined accordingly. One feature is that a camera and the LIDAR 2083, ultrasound or radar may be used in combination to acquire and/or track the UAGS.
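
The field-of-view arithmetic behind these numbers can be reproduced with the following sketch (60° horizontal and 50° vertical field of view, 2m distance and 92cm mounting height, as above).

import math

def image_extent(distance_m, h_fov_deg=60.0, v_fov_deg=50.0,
                 mount_height_m=0.92):
    """Horizontal width and top edge of the captured area at a distance."""
    width = 2 * distance_m * math.tan(math.radians(h_fov_deg / 2))
    half_height = distance_m * math.tan(math.radians(v_fov_deg / 2))
    top = mount_height_m + half_height
    return width, top

width, top = image_extent(2.0)
print(f"width ~{width * 100:.0f} cm, top edge ~{top * 100:.0f} cm")
# width ~231 cm, top edge ~185 cm - close to the ~184 cm stated above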

Example 19: interruption of exercise

As shown in fig. 18, the patient may interrupt the exercise program at any time by selecting a break at step 510. The patient may also choose to cancel the exercise at step 515. For this purpose, the patient logs out of the service robot 17 at step 520. If the patient forgets to log out or is inactive for a defined period of time because of resting, the system automatically logs the patient out at step 525. To continue or restart the exercise, the patient needs to log in again at step 530.

Example 20: treatment record

The system records the treatment using the patient management module 160 and the service robot 17. The records themselves may be transferred through an interface to an external system such as a hospital information system or the patient data management system 170. The records include, on the one hand, personal information such as name, age, sex, previous conditions (such as advanced arthritis of the hip), the treatment performed (total hip endoprosthesis), and the operated side. Such information is primarily stored in the memory of the patient data management system 172. Within the patient management module 160 itself, the exercise program is recorded in accordance with figs. 4 and 5, as are the results of the completed exercises, such as those shown in fig. 11. This also includes interventions in the exercise program made by the treating physician or performed by the service robot 17 based on prescribed and/or learned rules.

Example 21: inform the treating doctor

One feature is the transmission of data to a mobile phone, such as one provided to the treating physician, through a server connected to the patient management module 160 via an interface. Deviations from threshold values can serve as triggers for such notifications when evaluating the course of action; these may, for example, be step lengths differing from the "typical" deviating step length, or a combination of movements of different limbs corresponding neither to the physiological course of action nor to the "typical" deviating course of action. In the first case, a step length of only 10% of the physiological step length may trigger a notification. In the second case, a step length of 20cm combined with an inclination of the upper body of more than 30° from the vertical may act as trigger. Furthermore, a notification may also be triggered when the patient data and exercise results currently acquired by the service robot 17 deviate below or above a certain threshold from historical values in the database, when a two-point gait is used instead of a three-point gait without clearance by the treating physician, or when the two-point gait is not performed in the respective phase of the exercise cycle at a frequency above a defined threshold; one feature is that the threshold is derived by evaluating the historical data. The exercise cycle here comprises a typical sequence of exercise plan elements, calculated from the first exercise completed by the patient through to the last.

Example 22: cloud-based speech recognition

One feature is that speech recognition 2074 may be accomplished through cloud-based third-party services that the service robot 17 can access wirelessly through APIs. The first step then employs a speech-to-text API such as Google Speech or Amazon Transcribe. In a second step, the generated text data may be evaluated through an API such as Amazon Comprehend, and the result may be converted into a response or instruction of the service robot 17, analogous to an instruction given via screen-based menu input. Combinations of these services may also be used through the various APIs.
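
As an illustration of the first step, a minimal sketch using the google-cloud-speech Python client, assuming valid credentials and 16 kHz linear PCM audio; the subsequent text evaluation is merely indicated.

from google.cloud import speech

def transcribe(audio_bytes, language="de-DE"):
    """Send one utterance to the cloud speech-to-text API and return
    the top transcript (empty string if nothing was recognized)."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=language,
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        return result.alternatives[0].transcript
    return ""

# The returned text would then be passed to a text-understanding API
# (e.g. Amazon Comprehend) or matched against the robot's menu commands.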

Example 23: anonymization of data transfer between cloud and service robots

The treating physician may assign a storage location, such as an authorization card, to the patient, i.e. the treating physician gives the patient an authorization card that is assigned to the patient in the patient management module 160. The authorization card contains a patient ID assigned to the patient, or its serial number and/or another tag ID is assigned to his patient ID. Using the authorization card or serial number and/or patient ID, the patient identity may be automatically identified on the service robot 17. The service robot 17 then downloads the exercise plan saved by the treating physician, but not the patient's personal data, from the cloud 18 via the interface and makes the correspondence via the patient ID. After the exercise is completed, the service robot 17 encrypts the data collected during the exercise and uploads the encrypted data, assigned to the patient ID, to the patient management module 160. The data are decrypted in the patient management module 160. In this way, no data are transmitted that contain the patient's name or address or could otherwise be traced back to the patient.

Another feature is that the treating physician transmits the exercise plan to a storage medium (e.g. an authorization card in the form of an RFID tag, or a USB stick) received by the patient in order to identify the person on the service robot 17. Data including the patient ID specified by the patient management module 160 are then transmitted from the storage medium to the service robot 17. After completion of the exercises, the service robot 17 transmits the collected exercise data back to the storage medium, so that the treating physician can transfer the data to the patient management module 160 when reading the storage medium.

The above schemes for data exchange via the cloud and via a storage medium (or authorization card) may also be combined.

Another feature is that the patient identity can be identified by login and/or password. They are used as or associated with the patient ID so that the service robot 17 can use this information to download other data such as exercise plans from the cloud. Another feature is that biometric features such as fingerprint scans, face scans or iris scans can be used to identify the patient.

Example 24: identification of leg burden by 2D/3D sensing mechanism

In some cases, one of the patient's legs may only bear a limited load. If it is difficult to identify the load accurately by optical means, cues for the load on the leg can be derived from some skeletal parameters. For this purpose, the following parameters can be tracked: a) the angle between upper and lower arm when walking, and/or b) the angle between lower leg and thigh and/or the extension and flexion angle, and/or c) the posture of the upper body. A load-relieving posture exists in the following cases: i) the angle between lower and upper arm is less than 170°, alternatively less than 160°, ii) the angle of the leg that may only bear a limited load is less than 172°, iii) the forward inclination of the upper body is less than 5°, and/or iv) the inclination of the upper body to the relevant side exceeds 5°. The more pronounced the characteristics i)-iv), the greater the degree of load relief of the leg. The angle of the extended arm is defined as 180°. The person here adopts a three-point gait, which is detailed in fig. 22. In addition to the evaluation described there, acquisition and evaluation are performed on the arms, while the other characteristics are evaluated in the characteristic rating 765 and the course of action rating 770.
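
A rule-based sketch of this posture check in Python, directly encoding thresholds i)-iv); the angle conventions (extended joint = 180°) follow the text, while using the simple count of satisfied criteria as a relief score is an assumption.

def relief_score(arm_angle, leg_angle, trunk_forward, trunk_side,
                 arm_limit=170.0):
    """Count how many of the load-relief criteria i)-iv) are satisfied.
    All angles in degrees; extended joints correspond to 180 degrees."""
    criteria = [
        arm_angle < arm_limit,     # i) elbow flexed while supporting
        leg_angle < 172.0,         # ii) protected leg not fully extended
        trunk_forward < 5.0,       # iii) little forward lean of the trunk
        trunk_side > 5.0,          # iv) lean toward the affected side
    ]
    return sum(criteria)

# Example frame from the skeletal model:
score = relief_score(arm_angle=158.0, leg_angle=165.0,
                     trunk_forward=3.0, trunk_side=7.0)
print(f"{score}/4 relief criteria met")  # higher = stronger relief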

The postures so determined are therefore rated during the course of action evaluation in the action process evaluation module 2052, the rules being stored in the rule set 150 as well as locally in the service robot 17. If a load limit is stored in the patient management module 160, which can also be found in the exercise plan input, the patient's leg load is continuously monitored by the processor of the service robot 17 while the other postures are evaluated; the patient can be alerted optically and/or audibly if the leg load exceeds or falls below a certain threshold.

Example 25: leg burden recognition by external sensors

How much load a person places on a leg can also be determined by external sensing mechanisms, for example wirelessly coupled with the service robot 17. In orthopedics there is a range of products for measuring the pressure on the sole of the foot. For example, insoles are known which are placed inside the patient's shoe and measure the pressure on the sole at different spatial resolutions. Pressure sensing mechanisms include capacitive, resistive (e.g. based on strain gauges), inductive, or piezoelectric pressure sensors. The sensor signal is amplified by a bridge circuit and processed by an analog-to-digital converter, ensuring that it can be transmitted by conventional radio standards such as Bluetooth, WLAN, etc.

The service robot 17 additionally has a software unit which can accumulate the measurements of the different sensors over time to determine the total leg load. These values are continuously compared (by a comparison unit) with the load parameters stored in the exercise program. If a deviation in the load is found that exceeds or falls below a defined threshold, the patient may be instructed optically and/or audibly to reduce or increase the load.
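
A minimal sketch of the accumulation and comparison step, assuming per-cell insole pressure readings arrive as arrays and a target load is taken from the exercise plan; the units and the tolerance band are illustrative assumptions.

import numpy as np

def check_leg_load(pressure_frames, target_n, tolerance=0.15):
    """Accumulate insole cell pressures over time and compare the mean
    total load against the target from the exercise plan.
    pressure_frames: (t, cells) array of readings in newtons."""
    totals = np.asarray(pressure_frames, dtype=float).sum(axis=1)
    mean_load = float(totals.mean())
    low, high = target_n * (1 - tolerance), target_n * (1 + tolerance)
    if mean_load > high:
        return mean_load, "reduce load"       # audible/optical prompt
    if mean_load < low:
        return mean_load, "increase load"
    return mean_load, "ok"

frames = [[40, 55, 60, 45], [42, 58, 63, 47], [39, 52, 59, 44]]
print(check_leg_load(frames, target_n=150.0))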

Example 26: radio location of a patient

The service robot 17 has at least one radio interface, such as Bluetooth, that enables wireless communication with IoT devices in the surrounding environment. The UAGS then has at least one sensor which is wirelessly connected to the service robot 17 via this interface. In the simplest case, the sensing consists merely of identifying its position. This can be done by active or passive positioning and the position triangulation associated with it. Passive positioning means backscattering a wireless signal originally transmitted by the service robot 17 when the sensor has no power source of its own. Such methods employing RFID technology are fully described in the current state of the art. Active positioning refers to a transmitter having its own power source. The service robot 17 collects the signals it transmits and determines the position of the transmitter by triangulation. This is preferably used instead of optical identification of the person when the patient and the service robot 17 are moving in an environment with many people, since the direct line of sight of the service robot to the patient is then frequently interrupted and optical identification would have to be renewed often.

Example 27: signal button on walking aid

Another feature of a walking aid such as a UAGS is that it has a button which, when pressed, can transmit a signal wirelessly to the service robot 17. The button is mounted such that the patient can reach it while completing the exercise, in particular without this leading to a change in the leg load. If the walking aid is of the UAGS type, the button is located on the distal end of the T-shaped handle gripped by the patient's hand while walking. The sensing mechanism of the walking aid is configured such that different numbers or frequencies of button presses can trigger different commands on the service robot 17. Pressing once can send a signal to the service robot 17 that the patient wishes to sit down; the exercise is then interrupted. Pressing twice can signal that the service robot 17 is following a person other than the patient (or that a person is moving back and forth between the patient and the service robot 17), and/or that re-identification (in the re-identification module 2044) was unsuccessful. After such a signal, the service robot 17 interrupts the training and continues it after the patient has logged in again. In this way it can be avoided that others take over the personalized training or gain unauthorized access to the data of the logged-in person.

Example 28: external sensing mechanism on patient

Furthermore, sensors (in particular acceleration sensors) can also be placed on the patient, in particular on the limbs, the trunk and/or the head, but also on the UAGS, which are connected to the service robot 17 via at least one interface. With these sensors, the patient's movements can be collected and evaluated. It is advantageous to install sensors on as many limb segments as are also acquired and further processed by 3D skeleton recognition, in particular during the characteristic acquisition 720. This may mean mounting one acceleration sensor on each thigh, on the torso, etc. Alternatively and/or additionally, the data collected in this way may supplement the 3D acquisition by an optical sensor (LIDAR 2083, 2D camera 2084 and/or 3D camera). Evaluating these data can in turn provide angle information, as can a 3D scatter plot, on which an algorithm for the 3D sensing mechanism can be built.

Example 29: fall identification by optical sensing mechanism

To ensure the safety of the patient, the service robot 17 has a fall recognition function. On the one hand, this can be achieved by the integrated optical sensors, i.e. based on the skeletal model. The system recognizes that the patient is currently in a fallen position, lying on the ground or kneeling, primarily from the angle and position of the upper body or the angles of head, shoulders and legs relative to the plumb line. Alternatively and/or additionally, the distances of the skeleton points identified in the skeletal model from the ground can be evaluated, i.e. a fall is identified as soon as they fall below a threshold value, for example a distance of 25cm for the knee.
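
A minimal sketch of the height-threshold variant, assuming skeleton points are available as 3D coordinates with the ground at z = 0; the per-joint thresholds other than the 25cm knee value are illustrative assumptions.

# Assumed per-joint height thresholds in meters (knee value from the text,
# the others illustrative): a fall is flagged when a joint drops below.
THRESHOLDS = {"knee": 0.25, "hip": 0.40, "head": 0.70}

def detect_fall(skeleton):
    """skeleton: dict mapping joint name to (x, y, z) with ground at z=0.
    Returns the list of joints whose height indicates a fall."""
    fallen = []
    for joint, limit in THRESHOLDS.items():
        if joint in skeleton and skeleton[joint][2] < limit:
            fallen.append(joint)
    return fallen

frame = {"knee": (0.2, 0.1, 0.05), "hip": (0.2, 0.1, 0.30),
         "head": (0.3, 0.1, 0.55)}
if detect_fall(frame):
    print("fall suspected:", detect_fall(frame))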

Example 30: tumble identification by means of an inertial sensing mechanism

Optionally, an acceleration/inertial sensor, optionally including a magnetometer, may be carried by the patient or integrated into the patient's walking aid. The identification of a fall, or of what is commonly referred to as a fall from standing height or from an object, based on inertial sensing mechanisms is described in US7450024 and US8279060. The sensing mechanism may transmit the determined acceleration to the service robot 17 by a wireless transmission technique such as RFID or Bluetooth, after which the service robot determines whether the acceleration exceeds a threshold. Alternatively and/or additionally, the threshold comparison is performed directly in the sensing mechanism, which only transmits the information whether the threshold was exceeded. The acceleration threshold may relate to an acceleration maximum, an acceleration angle, and/or a combination of these values.

Example 31: automatic action process ranking

This example shows how the action process rating 770 builds on exercise data collected and evaluated by the service robot. These data relate on the one hand to time-variable data such as the distance travelled, the speed, the standing duration, the swing (non-weight-bearing) leg phase and the exercises to be performed according to the exercise plan, and on the other hand to time-invariant patient data such as age, height, weight, type of surgery, date of surgery, etc. (as shown in fig. 11). All of these data are collected and transferred to the memory 192 of the learning module 190, where they are accumulated into a historical database and evaluated from time to time. The evaluation is based on the observation that during the patient's stay in the clinic (from the operation until the patient stops training with the robot), a course of action that initially contains many faults should converge toward the physiological course of action, i.e. the exercises should ideally produce an improvement that is reflected in the standard evaluation parameters. In particular, the distance travelled, the exercise speed and the walking time should increase significantly, and the frequency of corrections should decrease, by the end of the robot-assisted training. The standing duration and the swing leg phase values have to be compensated and normalized in the process with respect to the affected/operated side. This in turn depends on the exercises the patient has completed according to the exercise plan, but also on patient data such as age, co-morbidities, etc.

The course of action of a person is highly individual. However, many standard values are described in the literature, such as torques, angles and accelerations as well as load and property characteristics. They form the basis for a standardization of the form of gait training. Walking with UAGS or other walking aids is in itself already a fault or deviation from the so-called "normal course of action"; in the field of orthopedics/surgery, however, primarily supported gait patterns of the type described above are to be considered. Courses of action, including those with other walking aids, can readily be standardized and defined by a combination of different action parameters or characteristics over time, such as the placement of the UAGS or walking aids relative to the two feet, the corresponding step sequence, stride length, etc. These and other characteristics of each gait cycle, which describe the course of action itself or the characteristics of walking with UAGS/walking aids, are now recorded; one feature is the formation of an average value per exercise over the gait cycles, and it is also known whether the recording belongs to the patient's first, second or third exercise. This set of course of action characteristics thus becomes the set of variables to be rated, acting as dependent variables, while the exercise results (e.g. fig. 11), together with the recorded patient data and the exercise plan, are used as influencing (independent) variables that determine the rating.

This mode of operation calls for multi-objective or multi-output regression (for a summary of different approaches see Borchani et al., WIREs Data Mining Knowl Discov 2015, DOI: 10.1002/widm.1157). Since there is a metric set of course of action characteristics, this scheme is also referred to as multidimensional scaling. There are two principal modes of operation: a) transforming the problem so that the dependent variables are evaluated independently of each other, or b) adapting the algorithm so that they are evaluated jointly. The latter scheme takes the dependencies between the dependent variables into account and better reflects the actual situation of the course of action. In the latter case, decision trees or GLM evaluations (generalized linear models) can be used, such as in the glmnet package for R. An alternative approach is to use multivariate regression trees (abbreviated MRT), as implemented in the mvpart package of the R statistics software. Alternatively or additionally, the CLUS software package may be used, which uses decision trees for predictive clustering. The values obtained with these methods yield an updated course of action rating.
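
The jointly evaluated variant b) can be sketched in Python with a multi-output decision tree from scikit-learn; the feature and target columns are illustrative stand-ins for the exercise results and course of action characteristics, not the exact variables of this document.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Independent variables per exercise: e.g. [age, days_post_op,
# exercise_no, distance_m]; targets are several course of action
# characteristics rated jointly: [step_length_cm, stance_s, cadence].
X = np.array([
    [70, 3, 1, 120],
    [70, 5, 2, 200],
    [55, 3, 1, 180],
    [55, 6, 3, 300],
])
Y = np.array([
    [30.0, 1.9, 55.0],
    [38.0, 1.6, 70.0],
    [42.0, 1.4, 80.0],
    [55.0, 1.1, 95.0],
])

# A single decision tree fitted on a multi-column target evaluates all
# dependent variables jointly (variant b), preserving their dependencies.
model = DecisionTreeRegressor(max_depth=3).fit(X, Y)
print(model.predict([[60, 4, 2, 220]]))  # predicted characteristic vector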

Example 32: detecting and evaluating two-point gait

A two-point gait is detected in a manner similar to the three-point gait, in several steps. The differences lie partly in the characteristic acquisition 720, the characteristic rating 765, the course of action rating 770, the course of action correction, such as the decision rating 775, and the output of the course of action correction 450. A two-point gait is used either immediately post-operatively (as instructed by the surgeon) or after a three-point gait once the patient has recovered further. In a two-point gait, the two legs and the UAGS ideally move synchronously: each leg touches the ground simultaneously with the contralateral support, the two alternately stepping forward to the same height.

Fig. 23 shows the two-point gait. The evaluation proceeds analogously to the three-point gait. The main difference is that no information about the protected leg 1805 is required. Following the acquisition of the position of the UAGS end point 1970 on touching the ground 1850, it is first acquired at step 1856 whether the UAGS and the contralateral leg are positioned forward in the sagittal plane. From the ground-contacting UAGS end point 1970 and the contralateral foot skeleton point 1950, perpendiculars 1858 to the sagittal plane are formed, defining a distance 1859 between them in the sagittal plane. Alternatively or additionally, the minimum of the distance between the UAGS and the contralateral foot joint can be evaluated. In the characteristic rating at step 1886, this distance is evaluated, i.e. how large the deviation is and whether the two-point gait has been performed correctly (step 1888). For this purpose, for example, a threshold for the determined distance can be evaluated. These data then enter the course of action rating 770 and, for example, the decision rating 775, which assigns corrections to the collected action deviations. This is based partly on thresholds and characteristics that differ from those of the three-point gait, in order to trigger the appropriate output of course of action corrections 450, which are stored in the correction matrix (see fig. 9).
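
A minimal sketch of the distance check at steps 1856/1886, assuming the sagittal axis is the x-axis of the person's coordinate frame; the 10cm threshold is an illustrative assumption, not a value from this document.

def two_point_gait_ok(uags_tip_x, contralateral_foot_x, threshold_m=0.10):
    """Evaluate one ground contact of the two-point gait.
    uags_tip_x: sagittal coordinate of the UAGS end point 1970 at ground
    contact; contralateral_foot_x: sagittal coordinate of the opposite
    foot skeleton point 1950. In an ideal two-point gait both land at
    the same forward position, so the sagittal distance is near zero."""
    deviation = abs(uags_tip_x - contralateral_foot_x)
    return deviation <= threshold_m, deviation

ok, dev = two_point_gait_ok(uags_tip_x=0.62, contralateral_foot_x=0.48)
print(ok, f"deviation {dev:.2f} m")  # False -> candidate for correction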

Example 33: projecting notes by a service robot

One feature is that the service robot 17 is equipped with a projection unit by means of which it can project directions for the patient, for example onto the ground. These directions may contain prompts regarding speed, body posture or route (turns, reversals, etc.), and may be text-based and/or icon-based, including for example traffic-sign-like symbols (such as stop signs, turn arrows, etc.). For this purpose the service robot has a projection unit, for example a commercially available projector. It has a defined projection plane, for example a fixed area on the ground 2 to 3m in front of the service robot with a width of 1.2m. If the distance between the service robot and an obstacle (such as a wall) is smaller than the distance of the projection area from the service robot, part of the projection first falls onto the obstacle, which may impair the readability of the projected content. The service robot therefore calibrates the projection using its sensing mechanisms for detecting obstacles. One feature is that this can be implemented based on at least one map that the robot dynamically generates of its environment, containing fixed and moving obstacles. Alternatively and/or additionally, the sensor data can also be evaluated directly to ensure that no obstacle is detected in the projection area. The sensors used here may be LIDAR, camera, ultrasonic, infrared and/or radar sensors. The service robot is additionally configured so that it can adjust the output mode based on the type of obstacle found. Thus, if an output is required but the projection surface is at least partially obscured by an obstacle, the service robot may instead select display screen and/or voice output. Such directions may replace or supplement the output via display screen and/or voice output mentioned elsewhere in this document.

Example 34: two-point and three-point gait recognition

This example comprises a computer-implemented method of acquiring the course of action of a person, comprising contactlessly acquiring the person over a time course, creating a skeletal model of the person and evaluating skeleton points and/or orientation vectors of the limbs obtained from the skeletal model in order to rate the person's course of action. Alternatively and/or additionally, the course of action in a two-point gait or a three-point gait is rated. One feature is that an output of a course of action correction 450 is made to the person, such as via voice output, projection, and/or display screen. One feature is that the course of action correction 450 is output based on identified course of action deviations, which are defined by rules stored in a memory. One feature is that these deviations are determined as symmetry deviations of the course of action. Between the outputs of individual course of action corrections there is a minimum period of time, which is stored in a memory; one feature is that it depends on the number of corrections already made during the exercise. In the computer-implemented method, one feature is that a lower arm support (UAGS) is identified and spatially evaluated, in particular the end point 1970 of the UAGS that contacts the ground. One feature is that the end point 1970 of the UAGS is evaluated at least in relation to at least one foot skeleton point 1950 in the sagittal plane. The distance between the foot skeleton point 1950 and the UAGS end point 1970 is determined and compared with a threshold value stored in a memory. An alternative and/or complementary feature is acquiring whether the UAGS are approximately parallel to the frontal plane of the acquired person. Also, a line between the UAGS can be determined, as well as the distance of this line from the foot skeleton point 1950, where the foot skeleton point 1950 belongs, for example, to the leg to be protected.

An alternative and/or supplementary feature is to evaluate the position of the foot skeleton point 1950 relative to the contralateral UAGS end point 1970 placed in the sagittal plane, for example for the leg and UAGS positioned forward in the person's walking direction, and for example between the foot skeleton point 1950 and the UAGS at the moment when the associated foot and the UAGS contact the ground.

An alternative and/or complementary feature is that the method determines and rates person-related parameters such as step length, standing duration, step width, distance between the UAGS in the frontal plane, torso flexion, head inclination and/or joint flexion and/or joint extension. In one feature, the method acquires the person-related parameters over at least one gait cycle of the acquired person. One feature is that the gait cycle is determined and evaluated via symmetry deviations of the course of action. The method distinguishes between a physiological and a deviating course of action. One feature is that this can be done via the F1 score defined above. One feature is to take characteristics from the skeletal model parameters, perform a characteristic rating 765, a course of action rating 770, or a decision rating 775, and then output the course of action correction 450 as appropriate.

This method may additionally include determining the length of the exercise, the speed of the acquired person, and/or the support usage/walking aid type of the acquired person.

Example 35: method for detecting lower arm support

This example comprises a computer-implemented method of evaluating the relative position of a lower arm support (UAGS) and at least one foot skeleton point 1950 of a person. One feature is that the person is acquired contactlessly over the time course. A skeletal model 725 of the person is created. Sensor data in the spatial environment of at least one hand skeleton point are evaluated for UAGS. For this purpose, for example, a candidate region 745 is selected, which may extend primarily downward from the hand skeleton point. The scatter plot around the acquired hand skeleton point is then compared with data stored in a memory. One feature may also be to refer to the positioning of the lower arm, which is approximately parallel to the positioning of the UAGS. Model assumptions 750 relating to shape, for example, may also be considered. To detect the UAGS, a fault-tolerant segmentation algorithm 755 may be used. The scatter plot data are collected by at least one sensor that can acquire the person contactlessly, such as a 2D camera 2084, a 3D camera, the LIDAR 2083, radar and/or ultrasonic sensors. If more than one sensor is used, the data are synchronized in time and merged. One feature is evaluating the position of the UAGS end point at the location where the UAGS contacts the ground. This position is then evaluated relative to the position of the foot skeleton point 1950, for example when the foot contacts the ground. One feature of the evaluation in the sagittal plane is determining the distance between the foot skeleton point 1950 and the UAGS end point 1970. One feature is that the person's two-point gait or three-point gait is subsequently evaluated.
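
A sketch of the candidate region and a simple, outlier-tolerant axis estimate over the points below the hand skeleton point; the box dimensions and the use of a PCA axis fit in place of the document's segmentation algorithm 755 are assumptions.

import numpy as np

def crutch_axis(points, hand, width=0.30, depth=0.30, length=1.2):
    """Estimate the UAGS axis from a 3D scatter plot.
    points: (n, 3) sensor points; hand: (3,) hand skeleton point.
    Selects a box-shaped candidate region extending downward from the
    hand, then fits the principal axis of the selected points (PCA)."""
    points = np.asarray(points, dtype=float)
    hand = np.asarray(hand, dtype=float)
    rel = points - hand
    mask = ((np.abs(rel[:, 0]) < width / 2) &
            (np.abs(rel[:, 1]) < depth / 2) &
            (rel[:, 2] < 0) & (rel[:, 2] > -length))
    region = points[mask]
    if len(region) < 10:
        return None, None
    center = region.mean(axis=0)
    # Principal axis = direction of largest variance of the region points.
    _, _, vt = np.linalg.svd(region - center)
    return center, vt[0]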

Example 36: robot system

This example comprises a service robot with at least one computer, at least one memory 10, at least one camera, at least one device for detecting obstacles in the vicinity, an odometer unit 2081 and a display screen, such as a touch screen 2087, a module 2061 for 2D/3D acquisition of the environment, a map module 2061b, a self-localization module 2062, a module for checking and/or starting the automatic charging of the accumulators, and a motion planning module 2065. Alternatively and/or additionally, the service robot may include a mapping/surveying module 2061a, a metric path planning module 2064, a self-locking detection module 2068, a user speaking module 2066, a module 2067 for determining the optimal distance to the user, a space plan module 2060r, a module 2069 for determining a waiting position, a module 2043 for laser-based tracking of a person, a module 2044 for re-identifying the person's identity, an action evaluation module 2050, an action process acquisition module 2051, an action process evaluation module 2052, an action correction module 2022, a patient parameter storage module 2014, an exercise plan storage module 2012, and/or a module 2013 for evaluating exercises collected by the service robot's sensors. One feature of the service robot is that it may be equipped with an RFID interface 2089 for data exchange. One feature is that the motion planning module 2065 uses the metric path planning information 2064 to determine the optimal path of the robot taking into account different destinations and cost functions. Also, the motion planning module 2065 may determine the expected path of the patient. The person's identity may be re-identified by a color pattern, the person's height parameters, and/or a rating of the person's gait (module 2044). One feature is that the acquired, rated, evaluated and/or corrected actions involve gait processes.

Example 37: gait anomaly identification

This example comprises a computer-implemented method of identifying anomalies in a person's courses of action, having the functions of characteristic acquisition 720, characteristic rating 765, and course of action rating 770, where the characteristic acquisition 720 is based on a skeletal model of the person. The characteristic rating 765 is based on a spatio-temporal evaluation of the person's skeleton points and/or of orientation vectors fitted to the skeleton points. The anomaly characteristics are described by the amplitude height, the position of minima and/or maxima and/or inflection points, which are determined from observing the skeleton points over the time course. One feature is that anomalies are acquired by evaluating the gait cycle. Step length, standing duration, step width, torso flexion, head inclination and/or joint flexion are acquired and evaluated. In addition, one feature is to evaluate the relative position of the foot skeleton point 1950 to a detected walking aid, such as a lower arm support. The evaluation is made by determining the distance in the sagittal plane. The course of action rating 770 includes evaluating at least two parameters such as step length, standing duration, step width, torso or head inclination, joint flexion, and/or the positional relationship of the foot skeleton point 1950 and the walking aid. Anomalies are identified by the deviation of the determined parameters from rating parameters, such as defined threshold values. The threshold may be a value generated by evaluating previously recorded courses of action. One feature is that the rating can be performed by a machine learning method. An optional supplementary feature is that a notification is made via a terminal when the rated parameters deviate from the threshold.

One feature is that the data from which the skeletal model is created are recorded as video sequences. These video sequences are time-stamped when an anomaly is detected. The video sequences may, for example, be transferred to a memory, which may be located in the learning module 190. The priority of the video sequences may be determined in the memory. One feature is that the priority of a video sequence depends on the type and/or number of anomalies detected.

One feature is that the anomaly relates to deviations of the acquired, completed exercise from the exercise plan, for example in exercise duration, speed, or the use of a support/walking aid.

Example 38: automatic adjustment of the exercise plan upon determining the transition from a three-point to a two-point gait

This example includes a system for automatically determining the transition from a three-point to a two-point gait, comprising a processor and a memory, where processor and memory are connected via an interface, such as a data transfer interface, to a sensor for the contactless acquisition of person data. An alternative and/or supplementary feature is a computer-implemented method of adjusting an exercise plan based on the processed data of a sensor for the contactless acquisition of person data, one feature being that the adjustment prescribes the transition from a three-point to a two-point gait. One feature is that after the transition from three-point to two-point gait has been determined, an intervention is performed automatically, such as automatically adjusting the exercise plan and/or transmitting information through an interface. The system, or the computer-implemented method, compares values obtained by further processing the data determined by the sensor with corresponding reference values. These values may relate to person-related parameters such as speed, stance duration, swing leg phase, step length, etc., which are acquired, for example, by the contactless sensor. One feature is that the person-related parameter is a combination of speed, stance duration, swing leg phase and/or step length. The system or computer-implemented method may, for example, determine the symmetry between the walking movements of the left and right leg, for instance over gait cycles, and use the deviation from symmetry to prescribe the transition from three-point to two-point gait. One feature is that deviations of the acquired movement from the physiological course of motion are acquired and evaluated by type and frequency. One feature is that correction prompts stored in the memory, which are output while the person performs the exercise (e.g. by the system, or by a connected system with at least one sensor for the contactless acquisition of person data), can also be taken into account, such as the type and frequency of the correction prompts already issued.
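The symmetry evaluation could look like the following sketch; the tolerance, the parameter set and the requirement of consistency over all observed cycles are illustrative choices, not the source's specification.

```python
# Minimal sketch: decide the transition from three-point to two-point gait
# from the symmetry of left/right gait parameters over recent gait cycles.
def symmetry_ratio(left, right):
    """1.0 means perfectly symmetric; smaller means more asymmetric."""
    return min(left, right) / max(left, right)

def ready_for_two_point(cycles, tolerance=0.9):
    checks = []
    for c in cycles:
        checks.append(
            symmetry_ratio(c["step_len_l"], c["step_len_r"]) >= tolerance
            and symmetry_ratio(c["stance_l"], c["stance_r"]) >= tolerance)
    # require consistent symmetry over all observed cycles
    return all(checks)

cycles = [{"step_len_l": 0.52, "step_len_r": 0.50,
           "stance_l": 0.67, "stance_r": 0.70}]
if ready_for_two_point(cycles):
    print("adjust exercise plan: prescribe two-point gait")
```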

One feature is that these values are acquired over time, such as over multiple exercises. The system or computer-implemented method may, for example, compare multiple exercises differentially to determine the person's exercise progress. One feature is that the determined exercise progress is compared with previously determined exercise progress of other persons and/or with reference data stored in memory.

Example 39: method for self-selection of an exercise plan configuration taking historical data into account

This example includes a computer-implemented method by which a person selects an exercise plan configuration at their own discretion. The exercise plan configuration requested by the person is compared with the clearances defined in the exercise plan and can only be selected if clearance has been granted. In addition, the clearances in the exercise plan can be adjusted automatically. One feature is that data previously acquired and evaluated for the person by the contactless sensor is compared with historical data; clearance for an exercise plan configuration may then be granted when the acquired data agrees closely with the historical data. The exercise plan may, for example, involve gait training. Additionally, the type and/or number of course-of-action corrections 450 already output, such as walking corrections or basic course-of-action deviations, may be used to determine automatic clearance. One feature is that the computer-implemented method is implemented in a service robot, for example one that acquires and evaluates gait exercises.
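A minimal sketch of such a clearance check, with an illustrative agreement measure (a relative tolerance per parameter) and hypothetical configuration names:

```python
# Minimal sketch: grant clearance for a requested exercise plan configuration
# only when the person's recently acquired parameters agree closely with the
# stored historical data. The agreement measure is an assumption.
def agreement(current, historical, rel_tol=0.15):
    """True if every parameter lies within rel_tol of its historical value."""
    return all(abs(current[p] - historical[p]) <= rel_tol * abs(historical[p])
               for p in historical)

def clearance_granted(requested, released_configs, current, historical):
    return requested in released_configs and agreement(current, historical)

released = {"two_point_gait", "three_point_gait"}
hist = {"speed_mps": 0.8, "step_length_m": 0.55}
now = {"speed_mps": 0.78, "step_length_m": 0.57}
print(clearance_granted("two_point_gait", released, now, hist))
```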

Example 40: system and/or computer-implemented method for automatically adjusting an exercise plan

This example includes a system for automatically adjusting an exercise plan, comprising a wireless interface, at least one processor and at least one memory with at least one database in which exercise plans are stored. The system can acquire data via the wireless interface from a second system equipped with sensors for the contactless acquisition of person data, for example a camera, which can record and evaluate the exercises stored in the exercise plan and provide the first system with the recordings. Additionally, the system may obtain information from a third system in which the exercise plan is adjusted. Alternatively and/or in addition, a computer-implemented method may obtain, in connection with an exercise plan, exercise records of completed exercises. The exercise plan may contain information about the person, including flexibility, type of surgery, location of the surgery, height and/or weight. Adjustments to the exercise plan may concern, for example, the distance to be covered or the manner in which the support/walking aid is used. The recorded data are person-related parameters and can be saved in the system over time. The person-related parameters additionally include the person's stance duration, step width, swing leg phase, upper-body inclination, viewing direction and/or step length, as well as the symmetry value of the step length. Alternatively and/or additionally, the recorded data may include the use of a walking aid, such as a UAGS (forearm crutch), over time. These recorded data containing the person-related parameters may be added to historical data in a previously built database. These recorded data, as well as the data transmitted by the second system (which uses cameras to record persons) and the third system (which adjusts the exercise plan), may be evaluated by machine learning and/or neural networks. As input variables for these evaluations, person data can be used, such as the patient's age, height, weight, comorbidities, type of surgery, etc., as well as data acquired while recording the exercise. The settings of the exercise plan for an exercise at time t or a later time t+1 can be determined as output variables. The network weights determined in these calculations may be transmitted to the second system (which uses cameras to record persons) and/or the third system (which adjusts the exercise plan). These network weights may be used to automatically adjust the exercise plan. In one feature, the second system is a service robot that performs gait training.
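As one possible reading of this evaluation, the following sketch (assuming scikit-learn is available) trains a small neural network whose learned weights map person data and recorded session parameters to exercise plan settings for time t+1; all features, targets and numeric values are illustrative, not the source's data.

```python
# Minimal sketch: learn network weights that map person data and recorded
# exercise parameters to the plan settings of the next session (t+1).
import numpy as np
from sklearn.neural_network import MLPRegressor

# columns: age, height_cm, weight_kg, speed_mps, stance_s (session t)
X = np.array([[71, 172, 80, 0.6, 0.9],
              [65, 168, 74, 0.8, 0.7],
              [78, 180, 85, 0.5, 1.0],
              [69, 175, 78, 0.7, 0.8]])
# targets: planned distance_m and session duration_min for t+1
y = np.array([[120, 15], [200, 20], [90, 10], [150, 18]])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# model.coefs_ holds the learned network weights, which could be transmitted
# to the other systems to adjust their exercise plans, as described above.
print(model.predict([[70, 170, 76, 0.65, 0.85]]))
```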

Example 41: system for automatically adjusting an exercise plan

This example includes a system for automatically adjusting an exercise plan, comprising a patient management module 160, a wireless interface to a second system having at least one camera, a processor and a memory, and an interface to a learning module 190. The second system may be, for example, a service robot 17. In addition, the first system may have an interface to the rule set 150. The patient management module 160 stores an exercise plan that can be configured via the terminal 12. The exercise plan may take into account the person's flexibility, the type of surgery performed and/or the location of the surgery (area and side, such as the left knee). Flexibility includes the general condition of the person and/or their comorbidities. The exercise plan may include, for example, exercises based on a two-point gait, a three-point gait, and/or step climbing. The second system acquires exercise time, distance covered, exercise frequency and/or exercise intensity, etc., as results of the exercises addressed by the exercise plan. The exercise plan may be transmitted to the second system via the interface and stored there. The second system may, for example, guide a person's exercises, such as gait training, and output course-of-action corrections 450, such as walking corrections, in the process. The acquired data are saved and evaluated over time and, for example, transmitted to the learning module 190 via an interface. The learning module 190 holds historical data in the learning module memory, which is supplemented by the evaluated and transmitted data. The historical data is also supplemented with the settings made in the exercise plan; these settings relate to the exercise plan and/or to adjustments of the exercise plan based on the acquired data. Network weights for the exercise plan may be determined by machine learning and/or neural network methods. As input variables for determining the network weights, the exercise evaluations and the individual components of the exercise plan are used; as output parameters, the configuration of the exercise plan and/or modifications of the exercise plan are used. One feature is that the network weights determined here are transmitted to a rule set, where the stored network weights used to adjust the exercise plan may be replaced. Additionally, the rules of the exercise plan may be updated. Based on the rule set 150, suggestions for exercise plan adjustments may be provided, for example based on evaluations of the historical data. One feature is that these suggestions are provided via the terminal. An alternative and/or complementary feature is that the suggestions are based on data acquired by a system having at least one camera, a processor and a memory, and/or on the configuration of exercise plans. The exercise plan may in turn comprise gait training.

Example 42: computer-implemented method for evaluating a course of action

This example includes a computer-implemented method for evaluating a person's course of action, where the method may record the person's actions; acquire characteristics of the recorded actions (characteristic acquisition 720); grade the recorded actions (course-of-action grading 770); grade the acquired characteristics (characteristic grading 765); assign guidance to the detected action deviations (decision grading 775); and output course-of-action corrections 450, such as walking corrections. The characteristic grading 765 includes evaluating person-related parameters such as the acquired person's step length, swing leg phase, stance duration, etc. The course-of-action grading 770 includes a combined observation of different limbs and/or of forearm crutch use. Outputting course-of-action corrections (e.g. walking corrections) includes outputting instructions to the person, such as through a speaker, projection, and/or display screen. The recorded movements may, for example, relate to exercises during gait training. The course-of-action corrections 450 may also be output in priority order; one feature is that different deviations with different priority scores are compared within a time window, and only the deviation with the highest priority score is output in each case.
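The priority rule can be sketched as follows; the deviation names and priority scores are illustrative assumptions.

```python
# Minimal sketch: output only the highest-priority course-of-action
# correction within a time window. Priority scores are illustrative.
PRIORITIES = {"wrong_crutch_order": 3, "step_too_short": 2, "head_down": 1}

def select_correction(deviations_in_window):
    """Return the single deviation with the highest priority score, if any."""
    if not deviations_in_window:
        return None
    return max(deviations_in_window, key=lambda d: PRIORITIES.get(d, 0))

window = ["head_down", "wrong_crutch_order", "step_too_short"]
correction = select_correction(window)
if correction:
    print(f"output correction 450: {correction}")  # e.g. via speaker or display
```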

One feature is that the course-of-action grading 770, the characteristic grading 765, and/or the course-of-action correction data (such as the decision grading 775), together with the evaluation of the recorded actions, such as the evaluation of gait training, may be saved along with the corresponding historical data.

An alternative and/or complementary feature is that the acquired and evaluated data, i.e. the results and/or the recorded raw data, such as the course-of-action grading 770, the course-of-action corrections, and the decision grading 775, can be transmitted to the learning module 190. The method additionally includes accessing the data in the learning module, such as via the terminal, and re-evaluating the course-of-action grading (such as a gait process grading), the characteristic grading, and/or the course-of-action corrections (such as the decision grading), for example taking the output course-of-action corrections 450 into account. This re-evaluation may, for example, be performed manually. An optional and/or supplemental feature is that at least one manual rule adjustment is transmitted to the rule set 150. In the rule set 150, for example, the course-of-action grading (e.g. gait process grading), the characteristic grading, and/or the course-of-action corrections (e.g. decision grading) may be updated. Updated rules for the course-of-action grading (such as the gait process grading), the characteristic grading and/or the course-of-action corrections (such as decision grading rules) are transmitted to the system that records the person's actions. Where the recorded raw data are video recordings, these may optionally be anonymized, for example by pixelating the person's face. In the learning module 190, the video sequences can be labeled, in particular with regard to deviations and/or physiological courses of action; time stamps may be assigned in the process.

In addition, an automatic re-evaluation of the course of action and/or of the course-of-action corrections (e.g. the decision grading) can be performed, for example by machine learning and/or neural network methods. Evaluations of the acquired movements, such as exercises (e.g. gait training), can be used as input variables, and course-of-action gradings (such as gait process gradings), characteristic gradings and/or course-of-action corrections (such as decision gradings) can be used as output variables. The network weights determined in this automatic re-evaluation for the characteristic grading, the course-of-action grading (such as the gait process grading), and/or the course-of-action corrections (such as the decision grading) may be transmitted to the rule set 150. There, the stored network weights for these gradings and corrections, as well as the associated rules, may be updated. A further step may be to transmit the updated network weights and/or rules to the system that acquires the person data.

Example 43: service robot capable of self-selecting an exercise area

This example shows a service robot 17 with at least one optical sensor unit, one radar sensor and/or one ultrasonic sensor, and at least one memory containing a map of the service robot's environment, where the service robot 17 acquires the number and/or type of obstacles in its environment and evaluates them to identify sub-areas of the map with a low obstacle density. The map covers the area in which the service robot 17 mainly moves. Obstacle information is acquired over time, such as over the course of days, weeks and/or months. The obstacles may be dynamic and/or static. The size of an obstacle can also be acquired, for example absolutely and/or relative to the width of the corridor in which the service robot is moving. Obstacles are also recorded during exercises, while the service robot 17 moves within a sub-area. One feature is that the obstacle density over time is determined by the service robot itself; an optional feature is that the data are transmitted to an external system for this purpose, and the results are transmitted back from the external system to the service robot 17. The data evaluation includes, for example, clustering. In this way, time periods during which a particular sub-area has a low obstacle density can be identified from the acquired data, and predictions about future obstacle densities can be made. As a result, the service robot 17 can, for example, preferentially perform its pending tasks in sub-areas with a low obstacle density. Which tasks qualify may depend on a density threshold. The service robot 17 may also make routing decisions based on the determined density.
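One way to realize such an evaluation is sketched below: observations of (sub-area, hour, obstacle count) are averaged into a density estimate, from which low-density sub-areas and time slots are selected. The log format and the threshold are illustrative assumptions.

```python
# Minimal sketch: estimate per-sub-area obstacle density by hour of day from
# logged observations and choose a low-density slot for exercises.
from collections import defaultdict

# (sub_area, hour_of_day, obstacle_count) observations over days/weeks
log = [("corridor_A", 9, 6), ("corridor_A", 9, 8), ("corridor_A", 14, 1),
       ("corridor_B", 9, 2), ("corridor_B", 14, 5), ("corridor_A", 14, 2)]

sums = defaultdict(lambda: [0, 0])
for area, hour, count in log:
    sums[(area, hour)][0] += count
    sums[(area, hour)][1] += 1

density = {key: total / n for key, (total, n) in sums.items()}
threshold = 3.0
candidates = [key for key, d in density.items() if d < threshold]
# pick the sub-area/hour with the lowest expected obstacle density
print(min(density, key=density.get), candidates)
```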

Example 44: computer-implemented method for identifying leg load

This example includes a computer-implemented method for identifying the load on a person's leg, where the person's movement is acquired contactlessly, for example by a camera. One feature is that a skeletal model is created, and the skeleton points and/or the direction vectors between skeleton points are evaluated both temporally and spatially. Characteristic acquisition 720, characteristic grading 765, and/or course-of-action grading 770 may take place here. One feature is that the walking aid is also detected and its position relative to the person's foot skeleton points 1950 is evaluated. One feature is that the leg to be loaded is placed adjacent to at least one forearm crutch (UAGS) in the sagittal plane. One feature is that the leg to be relieved is located near the line connecting the UAGS. An alternative and/or complementary feature is that the gait process of the person whose leg is to be relieved is a three-point gait. The acquired actions may, for example, be compared with stored rules. A decision grading 775 is performed which, based on the detected deviations, assesses whether a correction should be output, and course-of-action corrections 450 are output, for example as voice commands to the person.

To identify the leg load, the angle between the lower and upper arm, the extension and/or flexion angles of the hips and knees, and/or the inclination of the upper body (in the frontal and sagittal planes) are evaluated. Criteria for detecting a relieved leg include: the angle between the lower arm and the upper arm is less than 170 degrees, or less than 160 degrees; the flexion angle of the leg that may only bear a limited load is less than 172 degrees; and the forward inclination of the upper body is less than 5 degrees.
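A minimal sketch of these criteria, computing the joint angles from 3D skeleton coordinates; the joint triplets and coordinate values are illustrative.

```python
# Minimal sketch: check the relief criteria stated above from skeleton points.
import math

def angle_deg(a, b, c):
    """Angle at point b formed by the segments b->a and b->c (degrees)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def leg_relieved(shoulder, elbow, wrist, hip, knee, ankle, trunk_lean_deg):
    elbow_angle = angle_deg(shoulder, elbow, wrist)   # lower vs. upper arm
    knee_angle = angle_deg(hip, knee, ankle)          # flexion of the leg
    return elbow_angle < 170 and knee_angle < 172 and trunk_lean_deg < 5

print(leg_relieved((0, 1.4, 0), (0.05, 1.15, 0.02), (0.2, 0.95, 0.1),
                   (0, 0.95, 0), (0.05, 0.5, 0.05), (0.1, 0.05, 0), 3.0))
```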

Example 45: walking aid with communication device

A walking aid is described that is equipped with a power supply, a control unit, a wireless interface and a button, which transmits a signal via the wireless interface when pressed. The receiver of the transmitted signal may be a service robot. One feature is that the walking aid is a forearm crutch (UAGS). The button is located on the distal end of the T-shaped handle gripped by the patient's hand. The control unit is configured to transmit different signals depending on how the button is pressed; the frequency and/or number of presses can encode the different signals. One feature is that pressing the button triggers a re-identification of the patient's identity by the service robot.
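A minimal sketch of how the press count within a short time window could be mapped to different signals; the window length and signal names are illustrative assumptions.

```python
# Minimal sketch: interpret button presses on the walking aid as different
# signals, based on the number of presses within a short window.
def classify_presses(timestamps, window_s=1.5):
    """timestamps: press times in seconds; returns a signal name."""
    if not timestamps:
        return None
    presses = [t for t in timestamps if t >= timestamps[-1] - window_s]
    return {1: "call_robot", 2: "confirm", 3: "request_reidentification"}.get(
        len(presses), "unknown")

print(classify_presses([10.0, 10.4, 10.9]))  # three presses in the window
```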

Example 46: method for evaluating three-point gait

This example includes a computer-implemented method of evaluating a three-point gait, comprising the steps of: acquiring skeleton points, direction vectors between skeleton points, and/or forearm crutches (UAGS) as spatio-temporal parameters; acquiring the position of the UAGS endpoints 1970 over time; and acquiring the position of the UAGS endpoints 1970 at the moment of ground contact. In addition, it is determined whether the UAGS contact points on the ground lie parallel to the person's frontal plane. The frontal plane can be determined from the direction of motion of the person being acquired. It is then detected whether the foot of the protected leg is positioned between the UAGS.

One feature is determining a connecting line between the UAGS ground-contact positions, determining the distance of the foot skeleton point 1950 of the protected leg from this connecting line, and evaluating that distance. Alternatively and/or additionally, the positions of the UAGS endpoint 1970 and the foot skeleton point 1950 of the protected leg in the sagittal plane are evaluated relative to each other, and the distance between these points is evaluated.
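The first evaluation reduces to the perpendicular distance of a point from a line on the ground plane, as the following sketch with illustrative coordinates shows.

```python
# Minimal sketch: distance of the protected leg's foot skeleton point 1950
# from the line connecting the two UAGS ground-contact points 1970, computed
# on the ground plane (x, y). Coordinates are illustrative.
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b (2D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

uags_left, uags_right = (0.35, 0.60), (-0.35, 0.58)   # contact points 1970
foot_point = (0.02, 0.45)                             # foot skeleton point 1950

d = point_to_line_distance(foot_point, uags_left, uags_right)
print(f"foot is {d:.2f} m from the UAGS connecting line")
```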

In addition, at least one of the following features can be determined and subsequently evaluated; if more than one feature is present, they can be considered individually and/or in combination: inclination of the upper body; extension and/or flexion of, for example, the knee or hip joints; step width, determined as the distance between the foot skeleton points 1950 in the frontal plane; step length, determined as the distance between the foot skeleton points 1950 in the sagittal plane, subsequently evaluated per single step within the gait cycle; stance duration; the ground-contact times of the UAGS endpoints 1970; the ground-contact times of the foot skeleton points 1950; the distance between the UAGS endpoints 1970 and the foot skeleton points 1950 in the sagittal and/or frontal plane at ground contact; the latter two features are then used to assess the use of the UAGS relative to the body position, such as whether the crutches are placed too close to and/or too far (sideways or forward) from the body.

In addition, a course-of-action grading 770 can be performed, in which at least two features are evaluated in combination and compared with the stored rules. If a deviation is identified during the action, a decision matrix mapped to the detected deviation is used to evaluate whether an instruction is given to the person.

Example 47: evaluating two-point gait

This example discloses a computer-implemented method of evaluating a two-point gait, comprising: acquiring skeleton points, direction vectors between skeleton points, and/or forearm crutches (UAGS) as spatio-temporal parameters; acquiring the position of the UAGS endpoints 1970 over time; and acquiring the position of the UAGS endpoints 1970 at the moment of ground contact. In addition, the UAGS and the leg positioned forward in the sagittal direction are detected; then either verticals are constructed through the ground-contacting UAGS endpoint 1970 and through the contralateral foot skeleton point 1950 and the distance between these two verticals in the sagittal plane is determined, or the ground-contacting UAGS endpoint 1970 and the foot skeleton point 1950 are taken in the sagittal plane and the distance between them is determined. One feature is that the distance between the verticals, or between the two points, in the sagittal plane is evaluated, and/or whether the UAGS and leg are used contralaterally, for example by comparison with thresholds. In addition, the other features of the previous example (the last two groups of aspects) may be used in a later step to evaluate the two-point gait.
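A minimal sketch of this check, with an illustrative data layout and threshold:

```python
# Minimal sketch: two-point gait check as described above. Tests whether
# crutch and forward leg are used contralaterally and measures the sagittal
# distance between the grounded UAGS endpoint 1970 and the contralateral
# foot skeleton point 1950. Layout and threshold are illustrative.
def evaluate_two_point(uags_side, forward_leg_side,
                       uags_sagittal, foot_sagittal, max_offset=0.15):
    """Sagittal coordinates are positions along the walking direction (m)."""
    contralateral = uags_side != forward_leg_side
    offset = abs(uags_sagittal - foot_sagittal)   # distance of the verticals
    return contralateral and offset <= max_offset

# right crutch grounded while the left leg swings forward
ok = evaluate_two_point("right", "left", uags_sagittal=0.48, foot_sagittal=0.41)
print("two-point gait pattern:", "ok" if ok else "deviation")
```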

Sources:

Jaeschke, B., Vorndran, A., Trinh, T.Q., Scheidig, A., Gross, H.-M., Sander, K., Layher, F. In: IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Enschede, Netherlands, IEEE 2018.

Trinh, T.Q., Wengefeld, T., Mueller, St., Vorndran, A., Volkhardt, M., Scheidig, A., Gross, H.-M.: Approaching and identifying a seated person with a mobile robot. In: International Symposium on Robotics (ISR), pp. 240-247, VDE Verlag 2018.

Vorndran, A., Trinh, T.Q., Mueller, St., Scheidig, A., Gross, H.-M.: How to always keep an eye on the user with a mobile robot? In: International Symposium on Robotics (ISR), pp. 219-225, VDE Verlag 2018.

Cao, Z., Simon, T., Wei, S.-E., Sheikh, Y.: OpenPose: realtime multi-person 2D pose estimation using part affinity fields. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7291-7299.

Guidi, G., Gonizzi, S., Micoli, L.: 3D capturing performances of low-cost range sensors for mass-market applications. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLI-B5, 2016, XXIII ISPRS Congress, 12-19 July 2016, Czech Republic.

Fox, D., Burgard, W., Thrun, S.: The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, Vol. 4, No. 1, pp. 23-33, March 1997.

Mueller, S., Trinh, T.Q., Gross, H.-M. (2017): Local real-time motion planning using evolutionary optimization. In: Gao, Y., Fallah, S., Jin, Y., Lekakou, C. (eds.): Towards Autonomous Robotic Systems (TAROS 2017). Lecture Notes in Computer Science, Vol. 10454. Springer, Cham.

Philippsen, R., Siegwart, R.: An interpolated dynamic navigation function. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, 2005, pp. 3782-.

Borchani, H., Varando, G., Bielza, C., Larrañaga, P. (2015): A survey on multi-output regression. WIREs Data Mining and Knowledge Discovery, 5: 216-233. DOI: 10.1002/widm.1157.

List of reference numerals

3 sensor

5 supporting wheels placed rotatably about a vertical axis

6 driving wheel

8 power supply

9 processing unit

10 memory

12 System management terminal

13 terminal

17 service robot

18 cloud

150 rule set

151 rule set processing unit

152 rule set memory

160 patient management module

161 patient management module processing unit

162 patient management module memory

170 patient data management system

171 patient data management system processing unit

172 patient data management system memory

180 navigation system

181 navigation system processing module

182 navigation system memory

190 learning module

191 learning module processing unit

192 learning module memory

1305 flexibility

1310 general condition

1315 comorbidities

1320 surgical site

1325 surgical method

1330 medical doctor

1335 exercise program

1340 two-point gait

1345 three-point gait

1350 step climbing

1355 exercise plan configuration

1360 interface

1505 historical data

1701 skeleton point

1702 connection between skeletal points

1703 the human body

1910 Knee skeleton point and hip skeleton point

1920 direction vector between knee joint and foot skeletal point

1930 Knee skeleton point

1940 hip skeleton point

1940r Right hip skeleton point

1940l left hip skeleton point

1950 foot skeleton point

1960 shoulder skeleton point

1960r Right shoulder skeleton Point

1960l left side shoulder skeleton point

1970 UAGS endpoint

1980 stylized UAGS

2010 application layer

2011 action training module

2012 exercise program

2013 exercise evaluation module

2020 state layer

2022 action correction module

2024 destination guidance module

2030 service robot skills

2040 personnel identification module

2041 personal identification module

2042 first personnel tracking Module

2043 second personnel tracking Module

2044 identity re-identification module

2045 seat identification module

2046 3D person tracking

2050 action evaluation Module

2051 action procedure acquisition Module

2052 action procedure evaluation Module

2060 navigation module

2061 surveying and mapping module

2060r space planning module

2064 metric path planning module

2065 action planning module

2071 Graphic User Interface (GUI)

2073 speech synthesis module

2074 speech recognition module

2081 odometer module

2082 pressure-sensitive bumper

2083 LIDAR

2084 2D camera

2085 RGB-3D camera

2086 camera pan/tilt and zoom unit

2087 touch display screen

2088 WLAN module

2089 RFID reading and writing device

2090 differential drive

2091 charging port and electronic device

2092 loudspeaker

2093 microphone

2094 head with eyes.
