Sight line detection method, surgical robot system, control method, and storage medium

Document No.: 519480    Publication date: 2021-06-01

Note: This technology, "Sight line detection method, surgical robot system, control method, and storage medium", was designed and created by Qi Jinbiao and Zhu Xiang on 2021-01-11. Abstract: The invention provides a sight line detection method, a surgical robot system, a control method and a storage medium. The sight line detection method comprises: step S11, acquiring face orientation information of a subject; step S12, determining, based on the face orientation information, whether the face of the subject is directed toward a target object; if yes, proceeding to step S13, otherwise proceeding to step S14; step S13, acquiring eye orientation information of the subject, and determining, based on the eye orientation information, whether the line of sight of the subject is directed toward the target object; if not, proceeding to step S14; and step S14, issuing a warning and/or issuing a state change instruction. When it is determined that the line of sight of the subject is not directed toward the target object, the invention automatically issues a warning and/or a state change instruction, thereby effectively preventing misoperation and improving safety during operation.

1. A line-of-sight detection method, characterized by comprising:

step S11, acquiring face orientation information of a subject;

step S12, determining, based on the face orientation information, whether the face of the subject is directed toward a target object;

if yes, proceeding to step S13; otherwise, proceeding to step S14;

step S13, acquiring eye orientation information of the subject, and determining, based on the eye orientation information, whether the line of sight of the subject is directed toward the target object;

if not, proceeding to step S14; and

step S14, issuing a warning and/or issuing a state change instruction.

2. The line-of-sight detection method according to claim 1, wherein the acquiring face orientation information of the subject comprises:

acquiring a face image of the subject; and

acquiring the face orientation information of the subject from the face image.

3. The line-of-sight detection method according to claim 2, wherein the acquiring the face orientation information of the subject from the face image comprises:

recognizing the face image to obtain facial feature point information of the subject; and

acquiring the face orientation information of the subject from the facial feature point information.

4. The line-of-sight detection method according to claim 3, wherein the facial feature point information includes the number of facial feature points and the positional relationships between the facial feature points;

the acquiring the face orientation information of the subject from the facial feature point information comprises:

acquiring the face orientation information of the subject based on the obtained number of facial feature points, the positional relationships between the facial feature points, and a pre-stored correspondence between face orientations and positional relationships between facial feature points.

5. The line-of-sight detection method according to claim 3, wherein the facial feature points include the corners of the eyes, the corners of the mouth, and/or the tip of the nose.

6. The line-of-sight detection method according to claim 1, wherein the acquiring eye orientation information of the subject comprises:

emitting a plurality of infrared rays toward an eye of the subject from different angles;

receiving the infrared rays reflected by the eye of the subject to obtain an iris image of the eye; and

acquiring the eye orientation information of the subject from the obtained iris image.

7. A control method of a surgical robot system including a display device and a robot arm, the control method comprising:

determining, using the line-of-sight detection method according to any one of claims 1 to 6, whether the line of sight of a subject is directed toward the display device; and

if it is determined that the line of sight of the subject is not directed toward the display device, issuing a warning and/or causing the robot arm to enter a locked state.

8. A surgical robot system comprising a display device, a controller and a robot arm, the display device being communicatively connected to the controller, the controller comprising a processor and a memory, the memory storing a computer program that, when executed by the processor, implements the line-of-sight detection method of any one of claims 1 to 6 or the control method of the surgical robot system of claim 7.

9. The surgical robot system according to claim 8, wherein the surgical robot system comprises a head-mounted device, the head-mounted device comprising a spectacle frame and a plurality of infrared emitters mounted on the spectacle frame, an infrared receiver and a camera being mounted on the display device, the infrared emitters, the infrared receiver, and the camera all being communicatively connected to the controller;

the camera is configured to acquire a face image of a subject and transmit the face image to the controller;

the infrared emitters are configured to emit infrared rays toward the eyes of the subject;

the infrared receiver is configured to receive the infrared rays reflected by the eyes of the subject to obtain an iris image of the eyes of the subject and transmit the iris image to the controller;

the controller is configured to acquire face orientation information of the subject based on the acquired face image, and to acquire eye orientation information of the subject based on the acquired iris image.

10. The surgical robot system of claim 8, comprising a surgeon console and a surgical console, the surgeon console comprising the display device, the controller and a master control arm, the surgical console comprising the robot arm, the controller being configured to establish a master-slave relationship between the master control arm and the robot arm when it determines that the line of sight of a subject is directed toward the display device.

11. A readable storage medium, characterized in that a computer program is stored therein, which when executed by a processor, implements the line-of-sight detection method according to any one of claims 1 to 6 or the control method of the surgical robot system according to claim 7.

Technical Field

The present invention relates to the field of line-of-sight detection technologies, and in particular, to a line-of-sight detection method, a surgical robot system, a control method, and a storage medium.

Background

When detecting a doctor's gaze, existing medical devices generally perform eye tracking by acquiring eye movement data. A common eye tracker illuminates the doctor's eye with an infrared illumination device to generate spots on the doctor's cornea, and then calculates the doctor's gaze direction from the spots and an image of the doctor's pupil. However, when line-of-sight detection is performed with an eye tracker, factors such as ambient stray light interfere significantly with the detection. In addition, the shooting range of the camera arranged on the eye tracker to capture eye movement is limited, which imposes certain requirements on the positional stability of the doctor during line-of-sight detection. Once the doctor deviates from the preset test range for some reason, the doctor's line-of-sight detection cannot be completed and the system has difficulty judging whether the doctor is looking at the display device; if the doctor's line of sight then deviates from the display device, misoperation may occur and the patient may be injured. No effective solution has yet been proposed for the problem in the prior art that the line of sight of a doctor cannot be accurately determined.

Disclosure of Invention

The invention aims to provide a sight line detection method, a surgical robot system, a control method and a storage medium, which can solve the problem that the sight line of a doctor cannot be accurately determined in the prior art.

In order to solve the above technical problem, the present invention provides a line-of-sight detection method, including the steps of:

step S11, acquiring face orientation information of a subject;

step S12, determining, based on the face orientation information, whether the face of the subject is directed toward a target object;

if yes, proceeding to step S13; otherwise, proceeding to step S14;

step S13, acquiring eye orientation information of the subject, and determining, based on the eye orientation information, whether the line of sight of the subject is directed toward the target object;

if not, proceeding to step S14; and

step S14, issuing a warning and/or issuing a state change instruction.

Optionally, the acquiring the face orientation information of the subject includes:

acquiring a face image of the subject; and

acquiring the face orientation information of the subject from the face image.

Optionally, the acquiring the face orientation information of the subject from the face image includes:

recognizing the face image to obtain facial feature point information of the subject; and

acquiring the face orientation information of the subject from the facial feature point information.

Optionally, the facial feature point information includes the number of facial feature points and the positional relationships between the facial feature points;

the acquiring the face orientation information of the subject from the facial feature point information includes:

acquiring the face orientation information of the subject based on the obtained number of facial feature points, the positional relationships between the facial feature points, and a pre-stored correspondence between face orientations and positional relationships between facial feature points.

Optionally, the facial feature points include the corners of the eyes, the corners of the mouth, and/or the tip of the nose.

Optionally, the acquiring the eye orientation information of the subject includes:

emitting a plurality of infrared rays toward an eye of the subject from different angles;

receiving the infrared rays reflected by the eye of the subject to obtain an iris image of the eye; and

acquiring the eye orientation information of the subject from the obtained iris image.

In order to solve the above technical problem, the present invention further provides a control method of a surgical robot system, where the surgical robot system includes a display device and a robot arm, the control method includes:

determining, using the above line-of-sight detection method, whether the line of sight of the subject is directed toward the display device; and

if it is determined that the line of sight of the subject is not directed toward the display device, issuing a warning and/or causing the robot arm to enter a locked state.
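The control behavior described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the class and method names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RoboticArm:
    """Minimal stand-in for the slave arm: the master-slave link can be severed."""
    master_slave_linked: bool = True

    def lock(self):
        # State-change instruction: enter the locked state
        self.master_slave_linked = False

@dataclass
class Console:
    arm: RoboticArm = field(default_factory=RoboticArm)
    warnings: list = field(default_factory=list)

    def on_gaze_result(self, gaze_on_display: bool):
        """If the line of sight leaves the display device, warn and lock the arm."""
        if not gaze_on_display:
            self.warnings.append("gaze off display")
            self.arm.lock()
```

A usage example: calling `on_gaze_result(False)` records a warning and severs the master-slave relationship, while `on_gaze_result(True)` leaves the arm operational.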

In order to solve the above technical problem, the present invention further provides a surgical robot system, including a display device, a controller and a mechanical arm, wherein the display device is in communication connection with the controller, the controller includes a processor and a memory, and the memory stores a computer program, and when the computer program is executed by the processor, the method for detecting a line of sight or the method for controlling a surgical robot system as described above is implemented.

Optionally, the surgical robot system includes a head-mounted device, the head-mounted device includes a spectacle frame on which a plurality of infrared emitters are mounted, an infrared receiver and a camera are mounted on the display device, and the infrared emitters, the infrared receiver and the camera are all communicatively connected to the controller;

the camera is configured to acquire a face image of the subject and transmit the face image to the controller;

the infrared emitters are configured to emit infrared rays toward the eyes of the subject;

the infrared receiver is configured to receive the infrared rays reflected by the eyes of the subject to obtain an iris image of the eyes of the subject and transmit the iris image to the controller;

the controller is configured to acquire face orientation information of the subject based on the acquired face image, and to acquire eye orientation information of the subject based on the acquired iris image.

Optionally, the surgical robot system includes a surgeon console and a surgical console, the surgeon console includes the display device, the controller and a master control arm, the surgical console includes the robot arm, and the controller is configured to establish a master-slave relationship between the master control arm and the robot arm when it determines that the line of sight of the subject is directed toward the display device.

In order to solve the above technical problem, the present invention further provides a readable storage medium, wherein a computer program is stored in the readable storage medium, and when the computer program is executed by a processor, the computer program implements the above-mentioned line-of-sight detection method or the above-mentioned control method of the surgical robot system.

Compared with the prior art, the sight line detection method, the surgical robot system, the control method and the storage medium provided by the invention have the following advantages:

The invention acquires face orientation information of a subject; determines, based on the acquired face orientation information, whether the face of the subject is directed toward a target object; if not, issues a warning and/or a state change instruction; if yes, acquires eye orientation information of the subject and determines, based on the acquired eye orientation information, whether the line of sight of the subject is directed toward the target object; and if not, issues a warning and/or a state change instruction. By first judging whether the subject's face is directed toward the target object, and only then, on the basis that the face is directed toward the target object, judging whether the subject's line of sight is directed toward it, the invention can determine the subject's line of sight more accurately. Compared with prior-art methods that perform line-of-sight detection by acquiring eye movement data, the invention places lower requirements on the positional stability of the subject during detection, does not fail simply because the subject deviates from a preset test range for some reason, and greatly improves the operability and accuracy of line-of-sight detection. In addition, when it is determined that the subject's line of sight is not directed toward the target object, the invention automatically issues a warning and/or a state change instruction, thereby effectively preventing misoperation and improving safety during operation.
For example, in a surgical robot system, when it is determined that the line of sight of a doctor (the subject) is not directed toward a display device (the target object), the invention automatically issues a warning and/or starts a protection mechanism that causes the robot arm performing the surgery to enter a locked state, i.e., severs the master-slave relationship. This effectively prevents the doctor from operating the robot arm while his line of sight is away from the display device, further improving safety during surgery and effectively preventing misoperation.

Drawings

Fig. 1 is a schematic flow chart of a gaze detection method in an embodiment of the present invention;

FIG. 2 is a diagram of facial feature points according to an embodiment of the present invention;

FIG. 3 is a schematic diagram illustrating the positional relationships between the facial feature points when the subject's head is held straight;

FIG. 4 is a schematic diagram of the facial feature points when the subject lowers his head, according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of the facial feature points when the subject turns his head to the side, according to an embodiment of the present invention;

fig. 6 is a schematic diagram illustrating a principle of measuring eye orientation information according to an embodiment of the present invention;

FIG. 7 is a schematic view of a head mounted device in an embodiment of the invention;

fig. 8 is a schematic view illustrating a subject's eye facing a target object according to an embodiment of the present invention;

fig. 9 is a schematic view illustrating a subject's eyes not facing a target object according to an embodiment of the present invention;

FIG. 10 is a schematic view of a surgical robotic system according to an embodiment of the present invention;

FIG. 11 is a schematic view of a physician's console in an embodiment of the present invention;

FIG. 12 is a block diagram of a controller according to an embodiment of the present invention;

FIG. 13 is a schematic structural diagram of a display device according to an embodiment of the present invention;

fig. 14 is a flowchart illustrating a control method of the surgical robot system according to an embodiment of the present invention.

Wherein the reference numbers are as follows:

a head-mounted device-110; a target object-120; a camera-121; an infrared emitter-111; an infrared receiver-122; a facial feature point-1; a reflection point-2; a surgeon console-10; an operation trolley-20; a surgical console-30; a robot arm-31; a master control arm-11; a display device-12; a processor-131; a communication interface-132; a memory-133; a communication bus-134.

Detailed Description

The line-of-sight detection method, the surgical robot system, the control method, and the storage medium according to the present invention will be described in further detail below with reference to figs. 1 to 14 and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and are not drawn to precise scale; they serve only to conveniently and clearly illustrate the embodiments of the present invention. The structures, proportions, sizes, and the like shown in the drawings and described in the specification are intended only to complement the disclosure of the specification, so that it can be understood and read by those skilled in the art; they have no substantive technical significance for limiting the conditions under which the invention can be implemented. Any structural modification, change in proportional relationships, or adjustment in size that does not affect the efficacy or the attainable objects of the invention should still fall within the scope of the present invention.

It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

The invention mainly aims to provide a sight line detection method, a surgical robot system, a control method and a storage medium, so as to solve the problem that the sight line of a doctor cannot be accurately determined in the prior art.

To achieve the above object, the present invention provides a line-of-sight detection method. Referring to fig. 1, which schematically shows a flow chart of the line-of-sight detection method according to an embodiment of the present invention, the method includes the following steps:

step S11, acquiring face orientation information of a subject;

step S12, determining, based on the face orientation information, whether the face of the subject is directed toward a target object;

if yes, proceeding to step S13; otherwise, proceeding to step S14;

step S13, acquiring eye orientation information of the subject, and determining, based on the eye orientation information, whether the line of sight of the subject is directed toward the target object;

if not, proceeding to step S14; and

step S14, issuing a warning and/or issuing a state change instruction.
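The branching logic of steps S11 to S14 can be sketched as follows. This is a minimal Python illustration, not part of the patent; the function names and callback structure are hypothetical:

```python
from typing import Callable

def gaze_check(face_on_target: Callable[[], bool],
               eyes_on_target: Callable[[], bool],
               alert: Callable[[], None]) -> bool:
    # Steps S11/S12: coarse check -- is the face directed toward the target?
    if not face_on_target():
        alert()              # Step S14: warning and/or state-change instruction
        return False
    # Step S13: finer eye-orientation check, performed only if the face check passes
    if not eyes_on_target():
        alert()              # Step S14
        return False
    return True              # line of sight is on the target object
```

Note that the eye check is never evaluated when the face check fails, mirroring the two-stage coarse-to-fine structure of the method.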

Therefore, by first determining whether the face of the subject is directed toward the target object 120, and then, on the basis that the face is so directed, determining whether the line of sight of the subject is directed toward the target object 120, the present invention can determine the subject's line of sight more accurately. Compared with the prior art, the method places lower requirements on the positional stability of the subject during line-of-sight detection, does not fail simply because the subject deviates from a preset test range for some reason, and greatly improves the operability and accuracy of line-of-sight detection. In addition, when it is determined that the subject's line of sight is directed toward the target object, the subject can proceed with the next operation; when it is determined that the subject's line of sight is not directed toward the target object 120, a warning and/or a state change instruction is issued automatically, which effectively prevents misoperation and improves safety during operation.

Further, in this embodiment, the step S11 of acquiring the face orientation information of the subject includes:

acquiring a face image of the subject; and

acquiring the face orientation information of the subject from the face image.

Specifically, the face image of the subject may be acquired by the camera 121 mounted on the target object 120. Of course, as will be understood by those skilled in the art, in other embodiments, the camera 121 may be mounted on other components besides the target object 120, as long as the image of the face of the subject can be obtained through the camera 121, which is not limited by the invention.

Further, the acquiring the face orientation information of the subject from the face image includes:

recognizing the face image to obtain facial feature point information of the subject; and

acquiring the face orientation information of the subject from the facial feature point information.

Specifically, the face image may be recognized using an existing face recognition technique to obtain the facial feature point information of the subject, where the facial feature points include the corners of the eyes, the corners of the mouth, and/or the tip of the nose. Referring to fig. 2, a schematic diagram of the facial feature points to be obtained according to an embodiment of the invention is shown. As shown in fig. 2, in the present embodiment, facial feature points such as the corners of the eyes (facial feature points 1A, 1B, 1C, 1D in fig. 2), the corners of the mouth (facial feature points 1F, 1G in fig. 2), and the tip of the nose (facial feature point 1E in fig. 2) of the subject are identified by the face recognition technique to acquire the facial feature point information of the subject.

Further, the facial feature point information includes the number of facial feature points and the positional relationship between the facial feature points.

The acquiring the face orientation information of the subject according to the facial feature point information includes:

acquiring the face orientation information of the subject based on the obtained number of facial feature points, the positional relationships between the facial feature points, and a pre-stored correspondence between face orientations and positional relationships between facial feature points.

Specifically, please refer to figs. 3 to 5, where fig. 3 schematically shows the positional relationships between the facial feature points when the subject's head is held straight; fig. 4 schematically shows the facial feature points when the subject lowers his head; and fig. 5 schematically shows the facial feature points when the subject turns his head to the side. As shown in fig. 3, when the subject's head is held straight, the four facial feature points 1A, 1B, 1C, 1D at the corners of the eyes, the two facial feature points 1F, 1G at the corners of the mouth, and the facial feature point 1E at the nose tip should all be recognizable in the acquired face image, and the facial feature points 1A, 1B, 1C, 1D, 1E, 1F, 1G have specific positional relationships, for example: the facial feature points 1B, 1C and 1E define an isosceles triangle, the facial feature points 1A, 1B, 1C and 1D lie approximately on the same straight line, and the facial feature points 1E, 1F and 1G define an isosceles triangle. As shown in fig. 4, when the subject lowers his head (i.e., the face is not directed toward the target object 120), although the four facial feature points 1A, 1B, 1C, 1D at the corners of the eyes, the two facial feature points 1F, 1G at the corners of the mouth, and the facial feature point 1E at the nose tip can still be recognized in the acquired face image, the positional relationships among the facial feature points 1A, 1B, 1C, 1D, 1E, 1F, 1G change significantly, for example: the facial feature points 1A, 1B, 1C, 1D no longer lie on one straight line. As shown in fig. 5, when the subject turns his head to the side (i.e., the face is not directed toward the target object 120), not only are facial feature points missing from the acquired face image, but the positional relationships between the remaining facial feature points also change. For example, when the subject's head turns to the left, the facial feature point 1D at the left corner of the left eye and the facial feature point 1G at the left corner of the mouth are missing from the obtained face image; when the turning angle is large, the facial feature point 1C at the right corner of the left eye is also missing. Thus, the face orientation information of the subject can be acquired based on the obtained number of facial feature points, the positional relationships between the facial feature points, and the pre-stored correspondence between face orientations and positional relationships between facial feature points, and whether the face of the subject is directed toward the target object 120 can then be determined from the face orientation information.
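The positional-relationship test described above (roughly collinear eye corners, two isosceles triangles, and missing points indicating a turned head) can be sketched as follows. This is an illustrative Python sketch under assumed 2D pixel coordinates and tolerances, not the patent's actual implementation:

```python
import math

def eye_corners_collinear(pts, tol=5.0):
    """Check that the four eye-corner points lie near one horizontal line
    (the straight-head condition illustrated in fig. 3)."""
    ys = [p[1] for p in pts]
    return max(ys) - min(ys) <= tol

def roughly_isosceles(a, b, apex, tol=0.1):
    """Check |apex-a| is approximately |apex-b|, the symmetry condition for the
    triangles 1B-1C-1E and 1E-1F-1G."""
    da, db = math.dist(apex, a), math.dist(apex, b)
    return abs(da - db) / max(da, db) <= tol

def face_toward_target(points):
    """points: dict mapping '1A'..'1G' (eye corners, nose tip, mouth corners)
    to (x, y). All seven points must be detected (fig. 5: missing points mean
    a turned head) and satisfy the stored positional relationships."""
    if len(points) < 7:
        return False
    eyes = [points[k] for k in ("1A", "1B", "1C", "1D")]
    return (eye_corners_collinear(eyes)
            and roughly_isosceles(points["1B"], points["1C"], points["1E"])
            and roughly_isosceles(points["1F"], points["1G"], points["1E"]))
```

For a frontal face all three relationship checks pass; lowering the head breaks the collinearity of the eye corners (fig. 4), and turning the head removes points from the dictionary (fig. 5), so either change makes the function return False.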

Preferably, in order to further improve the accuracy of line-of-sight detection, if not all of the facial feature points 1 are recognized in the face images acquired over 2 consecutive seconds, or if all of the facial feature points 1 are recognized but do not satisfy the specific positional relationships (which are stored in advance in a memory of the system and evaluated by a processor of the system), the system determines that the face of the subject is not directed toward the target object 120 and issues a warning and/or a state change instruction.
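The 2-second continuous-failure rule can be sketched as a simple per-frame timer. This is an illustrative Python sketch; the class name and interface are hypothetical:

```python
class FaceLossTimer:
    """Fire a warning only after the frontal-face condition has failed
    continuously for `timeout` seconds (2 s in this embodiment)."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.fail_since = None   # timestamp of the first failing frame, or None

    def update(self, face_ok: bool, now: float) -> bool:
        """Feed one frame result; returns True when the warning should fire."""
        if face_ok:
            self.fail_since = None       # condition restored, reset the window
            return False
        if self.fail_since is None:
            self.fail_since = now        # start of the failing interval
        return now - self.fail_since >= self.timeout
```

Any single frame in which the face is recognized resets the window, so brief detection glitches do not trigger the warning.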

Please refer to fig. 6, which schematically illustrates a measurement principle of the eye orientation information according to an embodiment of the present invention. As shown in fig. 6, in the present embodiment, the step S13 of acquiring the eye orientation information of the subject includes:

emitting a plurality of infrared rays toward an eye of the subject from different angles;

receiving the infrared rays reflected by the eye of the subject to obtain an iris image of the eye; and

acquiring the eye orientation information of the subject from the obtained iris image.

When infrared rays are emitted toward the left and right eyes of the subject, the rays reaching the iris regions of the left and right eyes are reflected by the irises; the reflected infrared rays are received to obtain iris images of the left and right eyes, and the eye orientation information of the subject can then be acquired by analyzing those iris images.

Specifically, in the present embodiment, the subject may wear a head-mounted device 110 on the head. The head-mounted device 110 is fitted with a plurality of infrared emitters 111, such as infrared LED lamps; the target object 120 is fitted with an infrared receiver 122, such as an infrared camera; and the infrared emitters 111 and the infrared receiver 122 are both communicatively connected to a processor. Thus, the infrared emitters 111 can emit infrared rays toward the left and right eyes of the subject, and the infrared receiver 122, for example an infrared camera, can receive the infrared rays reflected from the left and right eyes and form images, thereby obtaining iris images of the left and right eyes, from which the eye orientation information of the subject is acquired. Of course, as will be understood by those skilled in the art, in other embodiments the infrared receiver 122 may be mounted on components other than the target object 120, as long as it can receive the infrared rays reflected by the subject's eyes; the invention is not limited in this respect. Furthermore, as will be understood by those skilled in the art, in some embodiments there is a single infrared receiver 122, and the infrared rays reflected by both the left and right eyes are received by that same receiver 122. In other embodiments there are two infrared receivers 122; in that case, the infrared rays reflected by the subject's left eye are received by one receiver 122 to obtain an iris image of the left eye, and the infrared rays reflected by the right eye are received by the other receiver 122 to obtain an iris image of the right eye.
The processor acquires the eye orientation information of the subject from the acquired iris images of the left and right eyes, and determines from that information whether the subject's line of sight is directed toward the target object 120.

Referring to fig. 7, which schematically shows a head-mounted device 110 according to an embodiment of the invention: the head-mounted device 110 includes a spectacle frame, with four infrared emitters 111 mounted on each of the left and right rims. The four emitters on the left rim irradiate the subject's left eye from different angles, and the four on the right rim likewise irradiate the right eye. Fig. 8 schematically shows the case where the subject's eye faces the target object 120. As shown in fig. 8, when the left eye faces the target object 120, the four infrared rays directed at it are all reflected within the iris region, forming four reflection points 2 (one ray per reflection point 2); the left-eye iris image captured by the infrared receiver 122 therefore contains four bright points (the images of the reflection points 2). If the left eye is not open, none of the four rays is reflected, and the infrared receiver 122 captures no iris image of the left eye at all. Fig. 9 schematically shows the case where the subject's eye does not face the target object 120.
As shown in fig. 9, when the left eye turns away from the target object 120, the iris region shifts a certain distance, and at least one of the four rays no longer lands on it; the number of bright points in the left-eye iris image captured by the infrared receiver 122 then falls below four. In fig. 9, for example, only one of the four emitters 111 on the left rim produces a reflection point 2 in the iris region. Thus, by analyzing the left-eye iris image collected by the infrared receiver 122, it can be determined whether the left eye faces the target object 120, is not open, or has turned away from it, thereby obtaining the eye orientation information of the left eye and judging whether its line of sight is directed at the target object 120. Similarly, by analyzing the right-eye iris image collected by the infrared receiver 122, the eye orientation information of the right eye is obtained and it is judged whether the line of sight of the right eye is directed at the target object 120.
Although the present embodiment is described taking four infrared emitters 111 on each of the left and right rims as an example, those skilled in the art will understand that in other embodiments fewer or more than four infrared emitters 111 may be mounted on each rim; the present invention is not limited thereto.
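The glint-counting logic described above can be sketched as follows. This is an illustrative Python sketch only, under the Fig. 7 assumption of four emitters per eye; `count_glints` stands in for real blob detection on the infrared frame, and all names are hypothetical rather than part of the patent.

```python
# Classify one eye's gaze state from the number of bright glints
# (reflection points 2) detected in its iris image.
# EXPECTED_GLINTS matches the number of infrared emitters per rim
# (four in the embodiment of Fig. 7).

EXPECTED_GLINTS = 4

def count_glints(iris_image):
    # Stand-in for real glint extraction (e.g. thresholding plus
    # connected-component analysis on the infrared frame); here the
    # "image" is simply a list of per-emitter hit flags.
    return sum(1 for p in iris_image if p)

def classify_eye(iris_image, expected=EXPECTED_GLINTS):
    """Return 'toward', 'away', or 'closed' for a single eye.

    iris_image is None when the receiver captured no iris at all
    (eye not open), mirroring the no-reflection case in the text.
    """
    if iris_image is None:
        return "closed"        # no ray reflected: eye not open
    if count_glints(iris_image) >= expected:
        return "toward"        # all rays reflect inside the iris region
    return "away"              # iris shifted: at least one ray misses it
```

For example, a frame in which only one of the four rays lands on the iris (the Fig. 9 case) would classify as "away".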

Preferably, to further improve the accuracy of the sight line detection, if no iris image of the subject's eye is acquired for 3 consecutive seconds, or if the iris images acquired over 3 consecutive seconds never contain a number of reflection points 2 matching the number of infrared emitters 111, the system determines that the subject's line of sight is not directed at the target object 120, and then sends out the warning information and/or the state change instruction.
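The 3-second rule above amounts to a simple watchdog over the frame stream. The following is a hedged sketch, not the patent's implementation; the class name and the injected clock are assumptions made so the logic is testable.

```python
import time

# Watchdog for the 3-second rule: if no frame with a full set of
# glints is seen for WINDOW seconds, the gaze is treated as lost and
# the warning / state-change path should be taken.

WINDOW = 3.0  # seconds of continuous failure before triggering

class GazeWatchdog:
    def __init__(self, window=WINDOW, now=time.monotonic):
        self._now = now              # injectable clock for testing
        self._window = window
        self._last_valid = now()

    def report(self, glints_found, expected):
        """Call once per captured frame; None means no iris image."""
        if glints_found is not None and glints_found >= expected:
            self._last_valid = self._now()

    def gaze_lost(self):
        """True once the failure window has been exceeded."""
        return self._now() - self._last_valid > self._window
```

A single bad frame therefore never triggers the warning; only a continuous 3-second run of missing or incomplete glint sets does.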

To further improve the accuracy of the sight line detection, the plurality of infrared emitters 111 mounted on the left rim of the spectacle frame are arranged uniformly along the circumference of the left rim, and the plurality of infrared emitters 111 mounted on the right rim are arranged uniformly along the circumference of the right rim.
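The uniform circumferential arrangement can be made concrete with a small geometry sketch; this is purely illustrative (the function name and unit radius are assumptions, not from the patent).

```python
import math

# Positions of n infrared emitters spaced uniformly along the
# circumference of a circular rim of radius r. Uniform spacing gives
# the rays evenly distributed angles of incidence on the eye.

def emitter_positions(n, r=1.0):
    """Return (x, y) coordinates of n evenly spaced emitters on a rim."""
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

With `n = 4` this places the emitters 90 degrees apart around the rim, matching the four-emitter embodiment.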

The target object in the above sight line detection method is not particularly limited: it may be, for example, a display device or any component other than a display device, and the method can be used in any scenario where it must be identified whether a line of sight falls within a predetermined range. The present invention is described below taking a display device as the target object and a surgical robot system as the application scene, but is not limited thereto.

Referring to fig. 10 and 11, fig. 10 schematically shows the overall structure of a surgical robot system according to an embodiment of the present invention, and fig. 11 schematically shows a doctor console according to an embodiment of the present invention. As shown in fig. 10 and 11, the surgical robot system includes a control end and an execution end. The control end includes a doctor console 10 provided with a master control arm 11; the execution end includes a surgical cart 20, a surgical console 30, and other devices, a patient lying on the surgical cart 20 for surgery. The surgical console 30 is provided with mechanical arms 31 for mounting surgical instruments and an endoscope. The mechanical arms 31, the surgical instruments, the endoscope, and the master control arm 11 have a predetermined mapping relationship, forming a master-slave relationship: once a surgical instrument is mounted on a mechanical arm 31, the system drives the instrument in every direction according to the movement of the master control arm 11 so as to perform the surgery. The doctor console 10 includes a multi-axis robot arm (namely the master control arm 11), a display device 12 (the target object), and a controller, the display device 12 being communicatively connected to the controller. A doctor (the subject) remotely controls the mechanical arms 31 to perform the surgical operation by operating the master control arm 11, and during surgery the display device 12 displays the intra-abdominal procedure captured by the endoscope. The surgical console 30 includes two or more mechanical arms 31; the doctor controls two of them through the master control arm 11 of the doctor console 10, and their operation of the surgical instruments (for example, grasping and cutting a lesion) is captured by the endoscope camera and displayed on the display device 12 of the doctor console 10.

Referring to fig. 12, which schematically shows a block diagram of the controller in the present embodiment: the controller includes a processor 131 and a memory 133, the memory 133 stores a computer program, and when the computer program is executed by the processor 131, the following steps are implemented:

step S21, obtaining face orientation information of the tested person;

step S22, judging whether the face of the tested person faces the display device according to the face orientation information;

if yes, go to step S23, otherwise go to step S24;

step S23, obtaining the eye orientation information of the tested person, and judging whether the sight line of the tested person is oriented to the display device according to the eye orientation information;

if not, go to step S24;

and step S24, sending out early warning information and/or placing the mechanical arm in a locked state.
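The two-stage check of steps S21 to S24 can be sketched as follows. This is an illustrative sketch only; the callables stand in for the camera-based face check and infrared-based eye check described elsewhere in the specification, and all names are hypothetical.

```python
# Steps S21-S24: check the face orientation first, and only when the
# face is toward the display proceed to the finer eye-orientation
# check. Either failure takes the step-S24 path.

def check_operator(face_toward_display, eyes_toward_display,
                   warn, lock_arm):
    """Return True when master-slave operation may continue."""
    if not face_toward_display():      # steps S21/S22
        warn()                         # step S24: early warning ...
        lock_arm()                     # ... and/or lock the arm
        return False
    if not eyes_toward_display():      # step S23, reached only if face OK
        warn()
        lock_arm()
        return False
    return True
```

Note the ordering: the cheap face check gates the eye check, so the infrared-based stage runs only when it can meaningfully succeed.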

Therefore, by first judging whether the subject's face is directed at the display device, and only then, on the basis that it is, judging whether the subject's line of sight is directed at the display device, the invention can determine the subject's line of sight more accurately. Compared with prior-art approaches, such as sight line detection based on collecting eye movement data, this places a much lower requirement on the positional stability of the subject during detection: the detection does not fail merely because the subject has, for some reason, drifted out of a predetermined test range, which greatly improves both the operability and the accuracy of the sight line detection. In addition, when it is determined that the subject's line of sight is not directed at the display device, the invention automatically sends out early warning information and/or places the mechanical arm performing the surgery in a locked state, that is, disconnects the master-slave relationship. This effectively prevents a doctor from operating the mechanical arm while not looking at the display device, thereby further improving safety during surgery and effectively preventing misoperation.

As shown in fig. 12, the processor 131, the communication interface 132, and the memory 133 communicate with one another via a communication bus 134. The communication bus 134 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or one type of bus. The communication interface 132 is used for communication between the controller and other devices.

The processor 131 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor 131 is the control center of the controller and is connected to the various parts of the controller by various interfaces and lines.

The memory 133 may be used to store the computer program, and the processor 131 implements the various functions of the controller by running the computer program stored in the memory 133 and calling the data stored in the memory 133.

The memory 133 may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Referring to fig. 13, which schematically shows a display device according to an embodiment of the invention: a camera 121 and an infrared receiver 122 are disposed on the display device, and both are communicatively connected to the controller. During sight line detection, the doctor (i.e., the subject) wears the head-mounted device 110, which is communicatively connected to the controller, and maintains a normal operating posture. The camera 121 on the display device collects facial images of the doctor and transmits them to the controller, which identifies the facial feature point information in the images and derives the doctor's face orientation information, so as to judge whether the doctor's face is directed at the display device. If it is, then, under the control of the controller, the plurality of infrared emitters 111 on the head-mounted device emit infrared rays from different angles toward the doctor's left and right eyes; the rays reflected by the irises of the two eyes are received by the infrared receiver 122 mounted on the display device to obtain iris images of the left and right eyes, which the receiver transmits to the controller. The controller analyzes these iris images to obtain the doctor's eye orientation information and thereby determines whether the doctor's line of sight is directed at the display device.
If the judgment result is that the doctor's line of sight faces the display device 12, a master-slave relationship is established between the master control arm 11 and the mechanical arms 31 and the system enters the master-slave connection state, so that the doctor can operate the master control arm 11 normally and control the mechanical arms 31 to perform the surgery. If the judgment result is that the doctor's line of sight does not face the display device 12, the system automatically sends out alarm information and/or starts a protection mechanism: the master control arm 11 can no longer operate the mechanical arms 31, which enter a locked state. The doctor must then touch the on-screen unlock key, squeeze the joint at the end of the master control arm 11, or perform another unlocking action, after which the sight line detection is performed again; only when the result shows that the doctor's line of sight faces the display device 12 are the mechanical arms 31 unlocked so that the doctor can continue the surgery normally. Therefore, with the surgical robot system provided by the invention, the master-slave control relationship between the master control arm 11 and the mechanical arms 31 is available only while the doctor's line of sight faces the display device 12; otherwise the mechanical arms 31 are automatically locked, which effectively avoids misoperation and improves safety during the surgical process.
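The lock/unlock behaviour of the protection mechanism described above amounts to a small state machine. The sketch below is a hedged illustration under stated assumptions (class and method names are invented for clarity; the real controller logic is not disclosed at this level of detail).

```python
# Master-slave link protection: the link is available only while the
# surgeon's line of sight faces the display. Losing the gaze locks the
# arm; unlocking requires an explicit action plus a fresh gaze check.

class MasterSlaveLink:
    def __init__(self):
        self.locked = True   # arm starts locked until gaze is verified

    def on_gaze_result(self, gaze_on_display):
        """Feed each sight line detection result into the mechanism."""
        if not gaze_on_display:
            self.locked = True      # break the master-slave relationship

    def try_unlock(self, unlock_action_done, gaze_on_display):
        """Unlock only after an unlock action (touching the unlock key,
        squeezing the end joint, etc.) AND a repeated, passing
        sight line detection. Returns True if the link is usable."""
        if unlock_action_done and gaze_on_display:
            self.locked = False
        return not self.locked
```

The key design point mirrored here is that the unlock action alone is never sufficient: the gaze check is always re-run before the arm is released.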

Corresponding to the surgical robot system described above, the present invention further provides a control method for a surgical robot system. Referring to fig. 14, which schematically shows a flowchart of the control method according to an embodiment of the present invention, the control method includes the following steps:

step S21, obtaining face orientation information of the tested person;

step S22, judging whether the face of the tested person faces the display device according to the face orientation information;

if yes, go to step S23, otherwise go to step S24;

step S23, obtaining the eye orientation information of the tested person, and judging whether the sight line of the tested person is oriented to the display device according to the eye orientation information;

if not, go to step S24;

and step S24, sending out early warning information and/or placing the mechanical arm in a locked state.

Therefore, by first judging whether the subject's face is directed at the display device, and only then, on the basis that it is, judging whether the subject's line of sight is directed at the display device, the invention can determine the subject's line of sight more accurately. Compared with prior-art approaches, such as sight line detection based on collecting eye movement data, this places a much lower requirement on the positional stability of the subject during detection: the detection does not fail merely because the subject has drifted out of a predetermined test range, which greatly improves both the operability and the accuracy of the sight line detection. In addition, when it is determined that the subject's line of sight is not directed at the display device, the invention automatically sends out early warning information and/or places the mechanical arm performing the surgery in a locked state, that is, disconnects the master-slave relationship, thereby effectively preventing a doctor from operating the mechanical arm while not looking at the display device, further improving safety during surgery, and effectively preventing misoperation.

The present invention also provides a readable storage medium storing a computer program which, when executed by a processor, implements the sight line detection method or the control method of a surgical robot system described above. By first judging whether the subject's face is directed at the target object, and only then, on that basis, judging whether the subject's line of sight is, the invention can determine the subject's line of sight more accurately. Compared with prior-art approaches, such as sight line detection based on collecting eye movement data, this places a much lower requirement on the positional stability of the subject during detection: the detection does not fail merely because the subject has drifted out of a predetermined test range, which greatly improves both its operability and its accuracy. In addition, when it is determined that the subject's line of sight is not directed at the target object, the invention automatically issues early warning information and/or a state change instruction, effectively preventing misoperation and improving safety during operation. For example, in a surgical robot system, when it is determined that the line of sight of the doctor (the subject) is not directed at the display device (the target object), the invention automatically issues early warning information and/or starts a protection mechanism that places the mechanical arms performing the surgery in a locked state, that is, disconnects the master-slave relationship, effectively preventing the doctor from operating the mechanical arms while not looking at the display device.

The readable storage media of embodiments of the present invention may take any combination of one or more computer-readable media. The readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this context, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

In summary, compared with the prior art, the sight line detection method, the surgical robot system, the control method and the storage medium provided by the invention have the following advantages:

the invention obtains the face orientation information of the tested person; judging whether the face of the tested person faces to the target object or not according to the acquired face facing information; if not, sending out early warning information and/or sending out a state change instruction; if yes, obtaining eye orientation information of the tested person, and judging whether the sight of the tested person faces the target object or not according to the obtained eye orientation information of the tested person; if not, sending out early warning information and/or sending out a state change instruction. Therefore, the invention can determine the sight line of the tested person more accurately by judging whether the face of the tested person faces to the target object or not and then judging whether the sight line of the tested person faces to the target object or not on the basis that the face of the tested person faces to the target object. Compared with the prior art, the method for realizing the sight line detection by acquiring the eye movement data has the advantages that the requirement on the position stability of the detected person in the sight line detection process is low, the sight line detection cannot be finished because the detected person deviates from the preset test range due to some reason, and the operability and the accuracy of the sight line detection are greatly improved. In addition, when the vision of the tested person is judged not to face the target object, the invention can automatically send out early warning information and/or send out a state change instruction, thereby effectively preventing misoperation and improving the safety in the operation process. 
For example, for a surgical robot system, when it is determined that the sight line of a doctor (a subject) is not facing a display device (a target object), the invention automatically sends out early warning information and/or starts a protection mechanism to enable a mechanical arm for performing a surgery to enter a locked state, i.e., to disconnect the master-slave relationship, so that the doctor can be effectively prevented from operating the mechanical arm when the sight line of the doctor is not facing the display device, the safety in the surgery process is further improved, and misoperation is effectively prevented.

It should be noted that the apparatuses and methods disclosed in the embodiments herein may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments herein. In this regard, each block in the flowchart or block diagrams may represent a module, a program, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

In addition, the functional modules in the embodiments herein may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.

The above description is only for the purpose of describing the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention, and any variations and modifications made by those skilled in the art based on the above disclosure are within the scope of the present invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, it is intended that the present invention also include such modifications and variations as come within the scope of the invention and their equivalents.
