Guiding device, medical examination device, guiding method and storage medium

Document No.: 91535  Publication date: 2021-10-12

Description: This invention, "Guiding device, medical examination device, guiding method and storage medium" (一种导引装置、医学检查装置、导引方法以及存储介质), was created by 龚琛辉, 张欣宇, and 佘铭钢 on 2020-03-20. Abstract: The invention relates to a guidance device for a medical examination apparatus, comprising: an image acquisition module configured to acquire a spatial image of the medical examination apparatus and the object toward which it is directed; an information processing module configured to extract feature information from the spatial image; a position determination module configured to determine the positioning of the object according to the feature information; a classification module configured to classify according to the feature information; and an information presentation module configured to generate and present guidance information according to the positioning and the classification.

1. A guidance device for a medical examination apparatus, the device comprising:

an image acquisition module configured to acquire a spatial image of the medical examination apparatus and the object toward which it is directed;

an information processing module configured to extract feature information from the spatial image;

a position determination module configured to determine the positioning of the object according to the feature information;

a classification module configured to classify according to the feature information; and

an information presentation module configured to generate and present guidance information according to the positioning and the classification.

2. The guidance device of claim 1, wherein the information processing module comprises a counting unit, wherein the counting unit is configured to detect the number of objects in the spatial image, and wherein the feature information comprises the number.

3. The guidance device of claim 1, wherein the information processing module comprises a joint detection unit configured to detect joints of the object, and the information processing module extracts the feature information, comprising posture information and position information of the object, from the joints.

4. The guidance device of claim 3, wherein the information processing module comprises an error repair unit, wherein the error repair unit is configured to repair at least one of: missed detection of a joint, duplicate detection of a joint, absence of depth information in the spatial image, an error in depth information in the spatial image, and an erroneous joint node.

5. The guidance device of claim 1, wherein the information processing module comprises a data conversion unit, wherein the data conversion unit is configured to unify different coordinate systems, different dimensions, and/or different units when extracting the feature information.

6. The guidance device of claim 1, wherein the position determination module is configured to determine a relative positional relationship between the object and the medical examination apparatus.

7. The guidance device of claim 1, wherein the classification module comprises a classification model.

8. The guidance device of claim 7, wherein the classification model is configured to classify according to the feature information relating to the physical signs of the object.

9. The guidance device of claim 8, wherein the classifications of the classification model comprise: correct and incorrect.

10. A medical examination apparatus, characterized in that the medical examination apparatus comprises a guidance device according to any one of claims 1-9.

11. A medical examination guidance method, characterized in that the method comprises the steps of:

acquiring a spatial image of a medical examination apparatus and an object toward which it is directed;

processing the spatial image and extracting feature information from the spatial image;

determining the positioning of the object according to the feature information;

classifying according to the feature information; and

generating and presenting guidance information according to the positioning and the classification.

12. The guidance method of claim 11, wherein the step of processing the spatial image and extracting feature information therefrom comprises: detecting the number of objects in the spatial image, the feature information comprising the number.

13. The guidance method of claim 11, wherein the step of processing the spatial image and extracting feature information therefrom comprises: detecting joints of the object, and extracting the feature information according to the joints; wherein the feature information comprises posture information and position information of the object.

14. The guidance method of claim 13, wherein the step of processing the spatial image and extracting feature information therefrom comprises: repairing at least one of: missed detection of a joint, duplicate detection of a joint, absence of depth information in the spatial image, an error in depth information in the spatial image, and an erroneous joint node.

15. The guidance method of claim 11, wherein the step of processing the spatial image and extracting feature information therefrom comprises: unifying different coordinate systems, different dimensions, and/or different units when extracting the feature information.

16. The guidance method of claim 11, wherein the step of determining the positioning of the object according to the feature information comprises: determining a relative positional relationship between the object and the medical examination apparatus.

17. The guidance method of claim 11, wherein the classification is performed by a classification model based on the feature information.

18. The guidance method of claim 17, wherein the classification model classifies according to the feature information relating to the physical signs of the object.

19. The guidance method of claim 18, wherein the classifications of the classification model comprise: correct and incorrect.

20. A computer-readable storage medium having instructions stored therein, which when executed by a processor, cause the processor to perform the method of any one of claims 11-19.

Technical Field

The present invention relates to a guidance device for a medical examination apparatus, a medical examination guidance method, and a computer-readable storage medium, and more particularly to a mechanism for achieving medical purposes using image processing techniques.

Background

The use of medical imaging systems, such as digital radiography systems (hereinafter DR systems), has become increasingly widespread, and medical imaging systems have become an important tool for medical workers in making diagnoses. However, the complexity of medical imaging systems is also increasing. Imaging quality is closely related both to the system itself and to patient positioning, and is therefore highly dependent on the skill of the system operator.

To obtain satisfactory imaging quality from a medical imaging system, the operator must receive professional training and, in addition, practice extensively over a long period to master the system's operating specifications. Even a well-practiced operator must maintain a high level of concentration while performing a medical examination, yet long-term clinical work inevitably leads to fatigue-induced errors. In some cases, even when the operator has completed a correct positioning, the patient may move unexpectedly after the operator leaves to perform further operations, causing a positioning error; because the operator is then far from the patient, the error often cannot be discovered in time. Taking a DR system as an example, frequent improper positioning of the patient may lead the system to set exposure parameters improperly, so that the quality of the acquired image is too poor to meet diagnostic requirements.

Disclosure of Invention

The present invention aims to provide a mechanism capable of assisting medical activities by means of guidance information, so as to reduce the workload of medical examiners. Specifically:

according to an aspect of the present invention, there is provided a guidance device for a medical examination apparatus, comprising: an image acquisition module configured to acquire a spatial image of the medical examination apparatus and the object toward which it is directed; an information processing module configured to extract feature information from the spatial image; a position determination module configured to determine the positioning of the object according to the feature information; a classification module configured to classify according to the feature information; and an information presentation module configured to generate and present guidance information according to the positioning and the classification.

In other examples of the present application, optionally, the information processing module includes a counting unit, wherein the counting unit is configured to detect the number of objects in the spatial image, and the feature information includes the number.

In other examples of the present application, optionally, the information processing module includes a joint detection unit configured to detect joints of the object, and the information processing module extracts the feature information, which includes posture information and position information of the object, from the joints.

In other examples of the present application, optionally, the information processing module comprises an error repair unit, wherein the error repair unit is configured to repair at least one of: missed detection of a joint, duplicate detection of a joint, absence of depth information in the spatial image, an error in depth information in the spatial image, and an erroneous joint node.

In other examples of the present application, optionally, the information processing module includes a data conversion unit, wherein the data conversion unit is configured to unify different coordinate systems, different dimensions, and/or different units when extracting the feature information.

In other examples of the application, optionally, the position determination module is configured to determine a relative positional relationship between the object and the medical examination apparatus.

In other examples of the present application, optionally, the classification module comprises a classification model.

In other examples of the application, optionally, the classification model classifies according to the feature information relating to the physical signs of the object.

In other examples of the present application, optionally, the classifications of the classification model comprise: correct and incorrect.

According to another aspect of the invention, a medical examination apparatus is provided, comprising a guide device as described in any of the above.

According to another aspect of the present invention, there is provided a medical examination guidance method, comprising the steps of: acquiring a spatial image of a medical examination apparatus and an object toward which it is directed; processing the spatial image and extracting feature information from the spatial image; determining the positioning of the object according to the feature information; classifying according to the feature information; and generating and presenting guidance information according to the positioning and the classification.

In other examples of the present application, optionally, the step of processing the spatial image and extracting feature information therefrom includes: detecting the number of objects in the spatial image, the feature information including the number.

In other examples of the present application, optionally, the step of processing the spatial image and extracting feature information therefrom includes: detecting joints of the object, and extracting the feature information according to the joints; wherein the feature information includes posture information and position information of the object.

In other examples of the present application, optionally, the step of processing the spatial image and extracting feature information therefrom includes: repairing at least one of: missed detection of a joint, duplicate detection of a joint, absence of depth information in the spatial image, an error in depth information in the spatial image, and an erroneous joint node.

In other examples of the present application, optionally, the step of processing the spatial image and extracting feature information therefrom includes: unifying different coordinate systems, different dimensions, and/or different units when extracting the feature information.

In other examples of the present application, optionally, the step of determining the positioning of the object according to the feature information includes: determining a relative positional relationship between the object and the medical examination apparatus.

In other examples of the present application, optionally, the classification is performed by a classification model according to the feature information.

In other examples of the present application, optionally, the classifications of the classification model comprise: correct and incorrect.

According to another aspect of the present invention, there is provided a computer-readable storage medium having instructions stored therein which, when executed by a processor, cause the processor to perform any of the guidance methods described above.

Drawings

The above and other objects and advantages of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals.

Fig. 1 shows a schematic view of a guide device for a medical examination apparatus according to an embodiment of the invention.

Fig. 2 shows a schematic view of a guiding device for a medical examination apparatus according to an embodiment of the invention.

Fig. 3 shows a schematic view of a guiding device for a medical examination apparatus according to an embodiment of the invention.

Fig. 4 shows an exemplary diagram of the basic principle of a medical examination according to an embodiment of the present invention.

Fig. 5 shows an exemplary diagram of the basic principle of a medical examination according to an embodiment of the present invention.

FIG. 6 shows a schematic diagram of a classification model according to an embodiment of the invention.

Fig. 7 shows a schematic view of a medical examination guidance method according to an embodiment of the invention.

Fig. 8 shows an exemplary diagram of the basic principle of a medical examination according to an embodiment of the present invention.

Fig. 9 shows an exemplary diagram of the basic principle of a medical examination according to an embodiment of the present invention.

Fig. 10 shows an exemplary diagram of the basic principle of a medical examination according to an embodiment of the present invention.

Detailed Description

For purposes of brevity and explanation, the principles of the present invention are described herein primarily with reference to exemplary embodiments. However, those skilled in the art will readily recognize that the same principles are equally applicable to all types of guidance devices for medical examination apparatuses, medical examination guidance methods, and computer-readable storage media, and may be implemented therein; any such variations do not depart from the true spirit and scope of the present patent application.

In conventional schemes in this field, a two-dimensional solution consisting only of text, pictures, or video is used to help the operator improve operating accuracy. Such two-dimensional solutions are not closely integrated with the actual workflow and cannot compare the current operation with the standard operation; their practicality is therefore limited, and work efficiency may suffer.

Another conventional approach in the art is to image tissue of a patient (e.g., a lesion) by means of three-dimensional imaging principles. However, imaging only the tissue (e.g., a lesion) is not sufficient: other parts of the patient's body and the positioning of the examination apparatus are also crucial to the examination, and parts of the patient's body outside the field of view of the imaging device may also affect the tissue within the field of view (e.g., a lesion). Note that positioning, in the context of the present application, includes parameters such as posture (specifically, the degree to which the posture conforms to the standard) and position.

According to an aspect of the invention, a guidance device for a medical examination apparatus is provided. Fig. 1 shows a schematic view of a guidance device for a medical examination apparatus according to an embodiment of the present invention. As shown, the guidance device 10 comprises an image acquisition module 102, an information processing module 104, a position determination module 106, a classification module 108, and an information presentation module 110.

The image acquisition module 102 is configured to acquire a spatial image of the medical examination apparatus and the object toward which it is directed. The medical examination apparatus may be an X-ray machine, a magnetic resonance system, a CT machine, or the like; the corresponding medical operations are an X-ray examination, a magnetic resonance examination, a CT examination, and so on. The object toward which the medical examination apparatus is directed refers to the object to which the corresponding medical operation is applied, including but not limited to humans and animals. It is worth noting that although the medical examination may in fact be performed on only part of the object's tissue (e.g., a lesion), the image acquisition module 102 acquires a spatial image of the object as a whole; that is, the field of view of the spatial image acquired by the image acquisition module 102 includes the entire object rather than only the examined tissue (e.g., a lesion). Thus, unless otherwise specified, an object in this application refers to the indivisible individual (e.g., human or animal) to which the examined site belongs, and may include one or more such individuals.
A spatial image is either a three-dimensional stereoscopic image containing depth/spatial position information, or a planar image from which depth/spatial position information of all or part of the targets can be derived. For example, in some cases depth/spatial position information can be derived from inter-frame differences of consecutively acquired planar images; the derived information may then be used directly to construct a three-dimensional image, or may participate in downstream operations and processing as auxiliary information accompanying the spatial image. In some examples of the invention, not only color information (and possibly grayscale information) of the imaged targets but also their spatial depth information (sometimes referred to as a depth map) can be derived from the spatial image. As a non-limiting example, the image acquisition module 102 may be a binocular stereo depth camera. In other examples, the image acquisition module 102 may be a multi-view camera, where a "view" may capture not only visible light but also invisible light such as infrared.
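As an illustrative sketch (not part of the claimed device), the stereo relation that lets a binocular camera recover depth from disparity is Z = f·B/d; the parameter values used below are hypothetical:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Recover depth Z (meters) from stereo disparity: Z = f * B / d.

    disparity_px    -- horizontal pixel offset of a point between the two views
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

With a hypothetical 0.1 m baseline and a 1000 px focal length, a 100 px disparity corresponds to a depth of 1 m; nearer points produce larger disparities.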

As an example, referring to fig. 4, a multi-view camera 302 is used to capture a spatial image of an object 306 on a stage 304. The field of view FOV2 of the multi-view camera 302 includes the entirety of the object 306 rather than only the examined tissue (e.g., a lesion). For example, if the object 306 rolls over and lies on its side, the shape of the examined region (the legs) may be affected; but if the field of view of the multi-view camera 302 were FOV1, covering only the examined region (the legs), the roll-over would not be detected, and the quality of the resulting medical diagnostic image might be poor. Similarly, the field of view FOV2 shown in fig. 5 also includes the entirety of the object 306 rather than only the examined tissue.

The information processing module 104 is configured to extract feature information about the object and the medical examination apparatus from the spatial image. Feature information refers to features of the image obtained by image processing, such as the edge contour, RGB values, or depth values of a target. As those skilled in the art will appreciate from this disclosure, such feature information can further be used to determine basic features and parameters of the imaged targets, from which basic information of interest (e.g., spatial dimensions, relative positions) for each target in the spatial image can be determined. The examples of the present application do not limit the specific content of the feature information required to achieve their functions; some possible feature information is described in detail below. The methods for extracting the feature information can be implemented using common techniques in the field of image processing and are not described further here.
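As a hedged illustration of one such common image processing technique, a minimal Sobel gradient edge extractor over a grayscale image (represented here as a list of rows; the threshold is an assumed tuning parameter) might look like this:

```python
def sobel_edges(img, threshold=1.0):
    """Return a binary edge map for a 2-D grayscale image (list of lists).

    A pixel is marked as an edge when the magnitude of its Sobel
    gradient (gx, gy) meets the threshold; border pixels are skipped.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
            gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges
```

A vertical intensity step produces edge pixels along the step, while a uniform image produces none, which is the behavior a contour extractor relies on.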

The position determination module 106 is configured to determine the positioning of the object according to the feature information. Whether the object is properly positioned is judged relative to the medical examination apparatus, and the position determination module 106 evaluates the positioning of the object by examining its position relative to the apparatus. For example, referring to fig. 5, if the medical examination apparatus is a vertical X-ray machine, the position determination module 106 may first determine, according to the feature information, whether the Bucky height of the machine is appropriate. Here the feature information may include contour information of the object, from which the object's height can be determined. The contour of the object can be obtained, for example, by edge extraction, and the Bucky height can be determined, for example, from the feature points of the Bucky's corners in the spatial image. If, for instance, the chest of a subject (e.g., a patient) is to be examined, the position of the chest can be estimated from the subject's height. The cassette is housed in the Bucky, so the Bucky should be at the height of the subject's chest. If the Bucky height deviates substantially from the chest position, it can be concluded that the object is not positioned properly, and information about this determination can be generated and passed downstream.
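The Bucky-height check just described can be sketched as follows; the chest-height ratio and the tolerance are illustrative assumptions, not values taken from the invention:

```python
CHEST_HEIGHT_RATIO = 0.72  # assumed fraction of standing height at which the chest lies
DEFAULT_TOL_M = 0.05       # assumed acceptable deviation, in meters

def check_bucky_alignment(subject_height_m: float,
                          bucky_height_m: float,
                          tol_m: float = DEFAULT_TOL_M):
    """Return (is_aligned, estimated_chest_height_m).

    The chest height is estimated from the subject's standing height
    (itself derived from contour information); the Bucky is considered
    properly positioned when it lies within tol_m of that estimate.
    """
    chest_m = subject_height_m * CHEST_HEIGHT_RATIO
    return abs(bucky_height_m - chest_m) <= tol_m, chest_m
```

For a hypothetical 1.75 m subject the estimated chest height is about 1.26 m, so a Bucky at 1.26 m passes the check while one at 1.60 m would be flagged as a positioning error.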

In other examples of the invention, the position of the subject's chest may be determined from other forms of feature information. For example, when examining a female patient, the position of the breasts may be determined from the subject's contour information and the chest position estimated from it; generally, the contour of the breast appears as a convex feature relative to the contour of the body.

In other examples of the invention, the Bucky height may be determined from other forms of feature information. For example, a marker may be affixed to the cassette as a feature. The feature information then includes the position of the marker, and since the marker is fixed relative to the Bucky, the Bucky height can be estimated from that position.

Although the above examples describe, for illustrative purposes, some ways of determining the positioning of the object, other feature information may also be used for this determination, and the present application is not limited in this respect.

Unlike the position determination module 106, the classification module 108 does not perform quantitative operations with direct physical meaning on the feature information; it is configured to classify according to the feature information. Although the classification module 108 performs no such quantitative analysis, its classification still proceeds according to specific, well-defined steps. The results of the classification may be passed downstream.

The information presentation module 110 is configured to generate and present guidance information according to the positioning determined by the position determination module 106 and the classification produced by the classification module 108. Because the information presentation module 110 draws simultaneously on the analysis results of two dimensions, errors that might arise from a single-dimension analysis are avoided. The two analysis approaches of the position determination module 106 and the classification module 108 complement and cross-check each other, improving analysis accuracy and yielding results more accurate than those of existing schemes.

Fig. 2 shows a schematic view of a guidance device for a medical examination apparatus according to an embodiment of the invention. Unlike fig. 1, in which the information processing module 104 sends the processed information to the position determination module 106 and the classification module 108 in parallel, in fig. 2 the information processing module 104 sends the processed information to the position determination module 106 first, which then forwards it to the classification module 108. For other details of this embodiment, refer to the embodiment corresponding to fig. 1; they are not repeated here.

Building on the example corresponding to fig. 4, fig. 8 shows the screen displayed by an augmented reality device 606 serving as the information presentation module, where the augmented reality device 606 presents guidance information 604 and guidance information 602. The guidance information 602 indicates whether the object's positioning is reasonable (in the figure the positioning is not reasonable, and the reasonable position is indicated by the guidance information 602), while the guidance information 604 indicates the classification of the feature information ("error"). Further, figs. 9 and 10 show other ways of presenting guidance information: the display device 704 presents guidance information to the operator, the display device 708 presents guidance information directly to the subject, and the acoustic device 706 presents guidance information directly to the subject and/or the operator (e.g., by outputting a spoken prompt). One or more of the display device 704, the acoustic device 706, and the display device 708 may be selected according to actual needs.

In other examples of the present application, as shown in fig. 3, the information processing module 24 includes a counting unit 202 configured to detect the number of objects in the spatial image, the feature information including that number. For example, when more than one object is present, the medical examination may be disturbed: the guidance device 10 may be unable to identify which object actually requires examination, and the operator of the medical examination apparatus may need to be informed. The classification module 108 may therefore classify based on feature information that includes the number of objects; the result of the classification may be that the feature information is erroneous, and this result is passed downstream. The information presentation module 110 then generates and presents guidance information according to the classification.
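A minimal sketch of the counting check, assuming the spatial image has already passed through a person detector whose output format (label/score dictionaries) is hypothetical:

```python
def count_objects(detections, score_threshold=0.5):
    """Count person detections above a confidence threshold and flag ambiguity.

    detections -- list of dicts like {"label": "person", "score": 0.9}
    Returns (count, is_unique); is_unique is False when count != 1, which
    is the case the guidance device should report to the operator.
    """
    count = sum(1 for d in detections
                if d["label"] == "person" and d["score"] >= score_threshold)
    return count, count == 1
```

When two people stand in the field of view (e.g., a parent accompanying a child), `is_unique` is False and a warning can be generated downstream.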

In other examples of the present application, as shown in fig. 3, the information processing module 24 includes a joint detection unit 204 configured to detect the joints of the object; the feature information then includes posture information and position information of the object. The above describes determining the characteristics of an object from its contour, but contour features are sometimes coarse and insufficiently accurate. The joint detection unit 204 therefore detects the joints of the object, using image processing methods known in the art. After determining the joint positions, the information processing module 24 further determines the object's posture and the positions of its body parts from those joint positions, and the extracted feature information, including the posture and body-part positions, is passed downstream for analysis.
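Once joint positions are available, posture features such as the angle at a joint follow from elementary geometry. This sketch (with a hypothetical keypoint format of (x, y) tuples) computes the angle at joint b formed by the segments b→a and b→c:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c (2-D coordinates)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("degenerate joint configuration")
    # clamp to guard against floating-point drift outside [-1, 1]
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

Applied to, say, shoulder-elbow-wrist keypoints, such angles form a compact posture descriptor that downstream modules can consume.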

In other examples of the present application, as shown in fig. 3, the information processing module 24 includes an error repair unit 206, which may be configured to repair at least one of: missed detection of a joint, duplicate detection of a joint, missing depth information, erroneous depth information, and an erroneous joint node. Owing to limitations of the imaging element's hardware and/or algorithms, the generated spatial image may differ from the actual physical state, and the feature information and intermediate data produced downstream from it may consequently be wrong. Some examples of the application repair and/or correct the spatial image so that it reflects the actual physical state more faithfully; other examples also repair and/or correct intermediate analysis data derived from the spatial image. For example, the "joints" detected from the spatial image are products of the joint detection unit 204 described above and belong to the intermediate analysis data; the error repair unit 206 may repair and/or correct missed joint detections, duplicate joint detections, erroneous joint nodes, and the like. Depth information may likewise become intermediate analysis data, and the error repair unit 206 may repair and/or correct missing or erroneous depth information.
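One plausible way to repair a missed joint detection, sketched here under the assumption that a joint coordinate is tracked over consecutive frames, is linear interpolation between the nearest valid frames:

```python
def repair_missing(track):
    """Fill None gaps in a per-frame joint-coordinate track.

    Interior gaps are linearly interpolated between the nearest valid
    frames; gaps at either end are filled by holding the nearest value.
    """
    out = list(track)
    for i, v in enumerate(out):
        if v is not None:
            continue
        lo = next((j for j in range(i - 1, -1, -1) if out[j] is not None), None)
        hi = next((j for j in range(i + 1, len(track)) if track[j] is not None), None)
        if lo is not None and hi is not None:
            t = (i - lo) / (hi - lo)
            out[i] = out[lo] + t * (track[hi] - out[lo])
        elif lo is not None:
            out[i] = out[lo]
        elif hi is not None:
            out[i] = track[hi]
    return out
```

The same idea extends to 2-D/3-D joint positions by interpolating each coordinate, and to missing depth values along a scanline.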

In other examples of the present application, as shown in fig. 3, the information processing module 24 includes a data conversion unit 208 configured to unify different coordinate systems, different dimensions, and/or different units. For example, the pixel coordinates of each target in the spatial image may be uniformly converted into actual physical coordinates in the world coordinate system (this operation may be described as unifying in the world coordinate system), and the spatial distances between targets in the physical world may then be estimated. The mapping from the pixel coordinate system to the world coordinate system can be performed according to the prior art and is not described further here. As another example, if the Bucky height in the world coordinate system is measured as A decimeters and the subject's chest is measured as B meters above the ground, the chest position can be expressed as 10 × B decimeters above the ground, making the two directly comparable. As yet another example, if target A is measured to occupy 15 sub-blocks in the image and target B occupies 1 macro-block, the number of sub-blocks occupied by target B can be estimated from the macro-block size so that the sizes of the two can be compared.
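These conversions can be sketched with the standard pinhole back-projection plus a unit conversion; the intrinsic parameters below (fx, fy, cx, cy) are hypothetical calibration values, not values specified by the invention:

```python
def pixel_to_world(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera-frame 3-D
    coordinates (meters) using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

def meters_to_decimeters(value_m):
    """Unify units, e.g. to compare a chest height measured in meters
    against a Bucky height measured in decimeters."""
    return value_m * 10.0
```

A pixel at the principal point back-projects to (0, 0, Z), and a 1.26 m chest height becomes 12.6 dm for direct comparison with a Bucky height in decimeters.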

It is noted that although the information processing module 24 in fig. 3 includes the counting unit 202, the joint detection unit 204, the error repair unit 206, and the data conversion unit 208, in some examples the information processing module 24 does not necessarily include all of the above.

In other examples of the application, the position determination module 106 is configured to determine the relative positional relationship between the object and the medical examination apparatus, which the information presentation module 110 may present. Specifically, for example, the position determination module 106 may determine the relative positional relationship in actual space between the subject and the Bucky of the medical examination apparatus and provide it downstream, and the information presentation module 110 may present that relative positional relationship.

In other examples of the present application, classification module 108 includes a classification model. A classification model here refers to a network that can classify according to its input, and may be, for example, a neural network, a support vector machine, or the like. Using a neural network as the classifier offers the following advantages: high classification accuracy; strong parallel processing capability; strong distributed storage and learning capability; and strong robustness, so that it is not easily affected by noise. Owing to these features, neural networks are well suited to processing images. Furthermore, in some examples of the present application, the neural network here may also be replaced with a support vector machine. Fig. 6 shows a schematic diagram of a classification model, in particular a neural network, according to an embodiment of the present invention, in which an input layer L1 comprising n input nodes, an output layer L3 comprising j output nodes, and a hidden layer L2 are schematically shown. As an example of the present application, the information fed to the input nodes may be the angle information of each joint, and the output nodes may give a classification of whether the placement and the like are correct. It will be appreciated by those skilled in the art that the particular form of the neural network may be adapted to the actual circumstances, and the general principles of neural networks as classifiers may be found in the art.
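As a concrete illustration of the classifier shape in fig. 6, the sketch below runs one forward pass through a tiny fully connected network: n joint angles in, one hidden layer, j class scores out. All weights, layer sizes, and the activation function are placeholder assumptions; a real classification model would be trained on labeled placement data.

```python
import math

# A tiny forward pass with the shape of the classifier in fig. 6: n input
# nodes fed with joint angles, one hidden layer, j output nodes.  All
# weights below are placeholder assumptions, not trained values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(angles, w_hidden, w_out):
    """One forward pass: n angles -> hidden activations -> j class scores."""
    hidden = [sigmoid(sum(w * a for w, a in zip(row, angles)))
              for row in w_hidden]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    exp_scores = [math.exp(s) for s in scores]  # softmax over j outputs
    total = sum(exp_scores)
    return [e / total for e in exp_scores]

# n = 3 joint angles, 2 hidden nodes, j = 2 classes ("correct", "error").
probs = forward([0.1, 0.5, -0.2],
                w_hidden=[[1.0, -1.0, 0.5], [0.3, 0.2, -0.4]],
                w_out=[[1.0, -1.0], [-1.0, 1.0]])
label = "correct" if probs[0] > probs[1] else "error"
```

The softmax output sums to one, so the j output nodes can be read as class probabilities for the placement categories.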

In other examples of the present application, the classification model is used to classify based on feature information about the subject's physical signs. For example, the information fed to the input nodes may be the angle information of each joint, and the output nodes give a classification of whether the subject under examination is standing with hands on hips (akimbo).

In other examples of the present application, the classes into which the classification model sorts the feature information include: correct (corresponding to a standard/acceptable placement) and wrong (corresponding to a placement for which adjustment is recommended); accordingly, the classification model has 2 output nodes. For example, if a non-akimbo posture is classified as correct, then an akimbo posture is classified as wrong. Returning to fig. 8, the guidance information there classifies the feature information as "error". In other examples there may be more classes; for example, the classes may include: standard, acceptable, and adjustment recommended, and accordingly the classification model has 3 output nodes.

According to another aspect of the invention, a medical examination apparatus is provided, which comprises a guiding device as described in any of the above examples.

According to another aspect of the present invention, there is provided a medical examination guidance method; as shown in fig. 7, the guidance method includes steps 502-510, described in detail below. In step 502, a spatial image of the medical examination apparatus and of the object at which it is directed is acquired. The medical examination apparatus may be an X-ray machine, a magnetic resonance system, a CT machine, or the like, and the corresponding medical operation is an X-ray examination, a magnetic resonance examination, a CT examination, or the like. The objects at which the medical examination apparatus is directed include, but are not limited to, humans, animals, and the like. It is worth mentioning that although in practice the medical examination may concern only part of the subject's tissue (e.g. a lesion, etc.), the method of the present application acquires a spatial image of the subject as a whole; that is, the field of view of the spatial image acquired at this step covers the whole subject rather than only part of the subject's tissue (e.g. a lesion, etc.). Thus, unless otherwise specified, an object in this application refers to the indivisible individual (e.g. human, animal, etc.) to which the examined site belongs, and the objects referred to in this application may include one or more such individuals.
The spatial image refers either to a three-dimensional stereoscopic image containing depth/spatial position information, or to a planar image from which the depth/spatial position information of all or some of the targets can be derived. For example, in some cases the depth/spatial position information of all or some of the targets may be derived from the inter-frame differences of consecutively acquired planar images; the derived depth/spatial position information may then be used directly to construct a three-dimensional stereoscopic image, or may accompany the spatial image as auxiliary information in downstream operations and processing. In some examples of the invention, not only the color information (and possibly grayscale information) of the imaged target but also its spatial depth information (sometimes referred to as a depth map) may be obtained from the spatial image.

As an example, referring to fig. 4, a multi-view camera 302 is used to capture a spatial image of an object 306 on a stage 304. The field of view FOV2 of the multi-view camera 302 covers the entirety of the object 306 rather than only the examined part of its tissue (e.g. a lesion, etc.). For example, if the subject 306 rolls over onto his/her side, this may affect the shape of the examined region (the legs); but if the field of view of the multi-view camera 302 were FOV1, covering only the examined region (the legs), the roll-over might not be detected, and the quality of the resulting medical diagnostic image might suffer.

In step 504, the spatial image is processed and feature information about the object and the medical examination apparatus is extracted from it. Feature information refers to features obtained from the image by image processing, for example the edge contour, RGB values, or depth values of a target. As those skilled in the art will appreciate from the present disclosure, such feature information can further be used to determine basic features, parameters, and the like of the imaged targets, and thus the basic information of interest for each target in the spatial image (e.g. spatial dimensions, relative positions, etc.) can be determined from them. Some examples of the present application do not limit the specific content of the feature information used to achieve the functions of the example; some possible forms of feature information are described in detail below. The methods for extracting the feature information may be those commonly used in the field of image processing and are not described again here.

In step 506, the placement of the object is determined based on the feature information. Whether the object is reasonably positioned is judged relative to the medical examination apparatus; this step determines the placement by examining the position of the object relative to the medical examination apparatus. For example, if the medical examination apparatus is a vertical X-ray machine, it may be determined from the feature information whether the Bucky height of the vertical X-ray machine is appropriate. In this case the feature information may include contour information of the subject, from which the subject's height can be determined. The contour of the subject may be obtained, for example, by edge extraction, and the height of the Bucky may be determined, for example, from the feature points at the corners of the Bucky in the spatial image. If, say, the chest of a subject (e.g. a patient) is to be examined, the position of the chest may be estimated from the subject's height. The cassette is housed in the Bucky, so the Bucky should be at the height of the subject's chest. If the Bucky height deviates far from the chest position, it may be concluded that the subject is not positioned properly, and information recording this determination may be generated and passed downstream.
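A minimal sketch of this height comparison is given below. The chest-height ratio, the tolerance, and the function names are illustrative assumptions, not values taken from the application.

```python
# Hypothetical sketch of the positioning check in step 506: the chest
# position is estimated from the subject's measured height and compared
# with the Bucky height.  The 0.72 chest-height ratio and the 0.10 m
# tolerance are illustrative assumptions.

CHEST_RATIO = 0.72  # assumed fraction of stature at mid-chest

def positioning_ok(subject_height_m, bucky_height_m, tol_m=0.10):
    """Return (within_tolerance, estimated_chest_height_m)."""
    chest_m = subject_height_m * CHEST_RATIO
    deviation = abs(bucky_height_m - chest_m)
    return deviation <= tol_m, chest_m

ok, chest = positioning_ok(subject_height_m=1.75, bucky_height_m=1.30)
# chest estimated near 1.26 m, deviation about 0.04 m -> within tolerance
```

When the deviation exceeds the tolerance, the determination that the subject is not positioned properly would be recorded and passed downstream, as described above.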

In other examples of the invention, the position of the subject's chest may be determined from other forms of feature information. For example, when a female patient is to be examined, the position of the breast may be determined from the subject's contour information and the chest position estimated from it, since the contour of the breast generally appears as a convex feature relative to the contour of the body.

In other examples of the invention, the height of the Bucky may be determined from other forms of feature information. For example, a marker serving as a feature may be provided on the cassette. The feature information then includes the position of the marker, and since the marker is fixed relative to the Bucky, the Bucky height can be estimated from that position.
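The fixed-offset reasoning above can be sketched in a few lines; the offset value and the function name are illustrative assumptions.

```python
# Hypothetical sketch of the marker-based estimate: the marker's height
# in the world coordinate system is recovered from the image, and the
# Bucky height follows from the fixed marker-to-Bucky offset.  The
# 0.05 m offset is an illustrative assumption.

def bucky_height_from_marker(marker_height_m, marker_offset_m=0.05):
    """The marker is assumed to sit a fixed, known offset above the Bucky centre."""
    return marker_height_m - marker_offset_m

height = bucky_height_from_marker(1.35)  # approximately 1.30 m
```

Because the offset is rigid, any error in the estimate comes only from locating the marker in the image, which is why a high-contrast marker simplifies the measurement.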

Although the above examples describe, for illustrative purposes, several ways of determining the placement of the object, other feature information may also be used to determine the placement, and the present application is not limited in this respect.

In step 508, classification is performed according to the feature information. This step does not perform, on the feature information, quantitative operations having a direct physical meaning; instead, it classifies according to the feature information. Although this step lacks the directly physically meaningful quantitative analysis described above, the classification is still carried out according to specific, well-defined steps, and its results can be passed downstream.

In step 510, guidance information is generated and presented according to the placement and the classification. This step uses the analysis results of two dimensions simultaneously to generate the guidance information, thereby avoiding errors that analysis along a single dimension might introduce. The two analyses of step 506 and step 508 complement and corroborate each other, which improves the accuracy of the analysis, so that the generated result is more accurate than in existing schemes.
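The combination of the two dimensions in this step might be sketched as follows; the message strings and the handling of disagreement between the analyses are illustrative assumptions.

```python
# A sketch of how the results of step 506 (positioning) and step 508
# (classification) might be combined into guidance information.  The
# messages and the disagreement policy are illustrative assumptions.

def generate_guidance(positioning_ok, classification):
    """positioning_ok: bool from step 506; classification: label from step 508."""
    if positioning_ok and classification == "correct":
        return "Placement acceptable; proceed with the examination."
    if not positioning_ok and classification == "error":
        return "Please adjust the subject's placement."
    # The two dimensions disagree: surface this instead of guessing.
    return "Analyses disagree; operator review recommended."

print(generate_guidance(False, "error"))  # prints the adjustment prompt
```

Keeping the disagreement branch explicit is one way the two analyses corroborate each other: a single-dimension error produces a review prompt rather than a wrong instruction.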

In other examples of the present application, the step of processing the spatial image and extracting feature information from it comprises: detecting the number of objects in the spatial image, the feature information including that number. For example, when the number of objects present is not exactly one, the medical examination may be disturbed and the object actually to be examined may not be accurately identified; the operator of the medical examination apparatus may then need to be notified. Classification can thus further be made based on feature information that includes the number of objects. The result of the classification may be that the feature information is in error, and this result is passed downstream. Guidance information may then be generated and presented according to the classification.
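A minimal sketch of this counting check, assuming person detections are already available as bounding boxes; the dictionary layout and labels are illustrative assumptions.

```python
# Hypothetical sketch of the counting step: person detections produced
# by some upstream detector are counted, and when the count is not
# exactly one the feature information is classified as erroneous so
# that guidance can be generated downstream.

def count_check(detections):
    """detections: list of person bounding boxes (x, y, w, h)."""
    n = len(detections)
    return {
        "count": n,
        # A unique subject is required; any other count disturbs the exam.
        "classification": "correct" if n == 1 else "error",
    }

info = count_check([(10, 20, 50, 120), (200, 25, 48, 118)])
# two subjects in the field of view -> count 2, classification "error"
```

An empty field of view is flagged the same way as a crowded one, since in both cases the object actually to be examined cannot be identified.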

In other examples of the present application, the step of processing the spatial image and extracting feature information from it comprises: detecting the joints of the object, the feature information including posture information and position information of the object. The above describes determining the characteristics of an object from its contour, but contour features are sometimes coarse, and their accuracy may be insufficient to characterize the object. The joints of the object can then additionally be detected; joint detection may be performed according to image processing methods in the prior art. Once the joint positions of the object have been determined, the posture of the object and the placement of each body part can be further determined from them, and the extracted feature information includes that posture and those placements for downstream analysis.

In other examples of the present application, the step of processing the spatial image and extracting feature information from it comprises repairing at least one of: missing detection of a joint, duplicate detection of a joint, missing depth information, erroneous depth information, and an erroneous joint node. Owing to limitations of the hardware characteristics and/or algorithms of the imaging element, the generated spatial image may differ from the actual physical state, and the various feature information and intermediate data that downstream processing derives from it may accordingly be erroneous. Some examples of the application repair and/or correct the spatial image so that it more faithfully reflects the actual physical state; in other examples of the present application, some intermediate analysis data derived from the spatial image may also be repaired and/or corrected. For example, the "joints" detected from the spatial image also belong to the intermediate analysis data, so missing detection of joints, duplicate detection of joints, errors in joint nodes, and the like can further be repaired and/or corrected. Likewise, the depth information may also constitute intermediate analysis data, so missing depth information, erroneous depth information, and the like may further be repaired and/or corrected.
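One of the repairs listed above, filling in a joint missed by detection, might be sketched as follows. The skeleton layout, the joint names, and the midpoint-interpolation rule are illustrative assumptions; a real repair might use kinematic constraints or temporal smoothing instead.

```python
# Hypothetical sketch of one repair: a joint missed by detection is
# filled in by interpolating the midpoint of its two neighbours along
# the limb.  Joint names and the midpoint rule are assumptions.

def repair_missing_joint(joints, missing, left, right):
    """Fill joints[missing] with the midpoint of its two neighbour joints."""
    if joints.get(missing) is None:
        (x1, y1), (x2, y2) = joints[left], joints[right]
        joints[missing] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return joints

skeleton = {"shoulder": (0.0, 1.4), "elbow": None, "wrist": (0.4, 0.8)}
repair_missing_joint(skeleton, "elbow", "shoulder", "wrist")
# elbow interpolated to approximately (0.2, 1.1)
```

Joints that were detected are left untouched, so the repair only fills gaps rather than overriding the detector.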

In other examples of the present application, the step of processing the spatial image and extracting feature information from it comprises: unifying different coordinate systems, different dimensions, and/or different units. For example, the pixel coordinates of each target on the spatial image may be uniformly converted into actual physical coordinates in the world coordinate system (this operation may be described as unifying in the world coordinate system), so that the spatial distance between targets in the actual physical world can be estimated. The mapping from the pixel coordinate system to the world coordinate system can be performed according to the prior art and is not described further here. For another example, if the height of the Bucky in the world coordinate system is measured to be A decimeters, and the chest of the subject is measured to be B meters above the ground, then the chest position can be expressed as 10 × B decimeters above the ground in the world coordinate system, so that the two can be conveniently compared. For yet another example, if the A target is measured to occupy 15 sub-blocks in the image and the B target to occupy 1 macro-block, the number of sub-blocks occupied by the B target may be estimated from the size of a macro-block, so that the sizes of the two can be compared.

In another example of the present application, determining the placement of the object based on the feature information comprises: determining the relative positional relationship between the object and the medical examination apparatus. Specifically, for example, the relative distance in actual space between the subject and the Bucky of the medical examination apparatus may be determined and provided downstream, where the relative positional relationship between them can be presented.

In other examples of the present application, the classification is performed by a classification model according to the feature information. Using, for example, a neural network as the classifier offers the following advantages: high classification accuracy; strong parallel processing capability; strong distributed storage and learning capability; and strong robustness, so that it is not easily affected by noise. Owing to these features, neural networks are well suited to processing images. Fig. 6 shows a schematic diagram of a neural network according to an embodiment of the present invention, in which an input layer L1 comprising n input nodes, an output layer L3 comprising j output nodes, and a hidden layer L2 are schematically shown. As an example of the present application, the information fed to the input nodes may be the angle information of each joint, and the output nodes may give a classification of whether the placement and the like are correct. It will be appreciated by those skilled in the art that the particular form of the neural network may be adapted to the actual circumstances, and the general principles of neural networks as classifiers may be found in the art.

In other examples of the present application, the classification model is used to classify based on feature information about the subject's physical signs. For example, the information fed to the input nodes may be the angle information of each joint, and the output nodes give a classification of whether the subject under examination is standing with hands on hips (akimbo).

In other examples of the present application, the classes into which the classification model sorts the feature information include: correct (corresponding to a standard/acceptable placement) and wrong (corresponding to a placement for which adjustment is recommended); accordingly, the classification model has 2 output nodes. For example, if a non-akimbo posture is classified as correct, then an akimbo posture is classified as wrong. Returning to fig. 8, the guidance information there classifies the feature information as "error". In other examples there may be more classes; for example, the classes may include: standard, acceptable, and adjustment recommended, and accordingly the classification model has 3 output nodes.

According to another aspect of the present invention, there is provided a computer-readable storage medium having stored therein instructions which, when executed by a processor, cause the processor to perform any of the guidance methods described above. Computer-readable media, as referred to herein, include all types of computer storage media, which can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, computer-readable media may include RAM, ROM, EPROM, EEPROM, registers, hard disks, removable disks, CD-ROMs or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general purpose or special purpose computer, or any other transitory or non-transitory medium that is accessible by a general or special purpose processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In summary, the mechanism of the present application, which applies image processing technology to a medical purpose, images the object as a whole in an attempt to judge overall whether the object's placement and the like are reasonable. The mechanism also incorporates a classification approach and generates guidance information from the analysis results of two dimensions, thereby avoiding errors that analysis along a single dimension might introduce. It should be noted that some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities; these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

The above examples mainly illustrate the guiding device for a medical examination apparatus, the medical examination guidance method, and the computer-readable storage medium of the present invention. Although only a few embodiments of the present invention have been described, those skilled in the art will appreciate that the present invention may be embodied in many other forms without departing from its spirit or scope. Accordingly, the present examples and embodiments are to be considered illustrative and not restrictive, and various modifications and substitutions may be made without departing from the spirit and scope of the present invention as defined by the appended claims.
