Ultrasound scanning guidance method and system based on augmented reality


Abstract (一种基于增强现实的超声扫查引导方法及系统; created by 朱瑞星, 兰璐, 夏炎, and 彭君湜, 2021-08-25): The invention provides an ultrasound scanning guidance method and system based on augmented reality, comprising: capturing and displaying, by an augmented reality device, real scene images of a real scanning scene in real time, and tracking a background positioning marker and a probe positioning marker according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe; superimposing and rendering, by the augmented reality device, a scanned target region on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region of the subject, and superimposing and rendering a probe position region on the real scene image according to the second pose information; and generating and displaying, by the augmented reality device during the ultrasound scan, a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region, so as to guide the operator through the ultrasound scan. The beneficial effect is that standardized guidance is provided for ultrasound scanning, improving its popularity.

1. An ultrasound scanning guidance method based on augmented reality, characterized in that a real scanning scene is built in advance, at least one background positioning marker for indicating the scanning pose of a subject and an ultrasound probe for performing ultrasound scanning on the subject are arranged in the real scanning scene, and a probe positioning marker is arranged on the ultrasound probe;

the ultrasound scanning guidance method comprises the following steps:

step S1, capturing and displaying, by an augmented reality device, real scene images of the real scanning scene in real time, and tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe;

step S2, superimposing and rendering, by the augmented reality device, a scanned target region of the subject on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region, and superimposing and rendering a probe position region on the real scene image according to the second pose information;

step S3, generating and displaying, by the augmented reality device, a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, so as to guide the operator through the ultrasound scan.

2. The ultrasound scanning guidance method according to claim 1, wherein, before step S1 is executed, the method comprises a process of acquiring the spatial position calibration information, comprising:

step A1, capturing and displaying, by the augmented reality device, a first scene image containing the subject and the background positioning marker;

step A2, tracking, by the augmented reality device, the background positioning marker according to the first scene image to obtain current pose information of the background positioning marker;

step A3, processing, by the augmented reality device, the current pose information to obtain the spatial position calibration information of the scanned target region of the subject.

3. The ultrasound scanning guidance method according to claim 2, wherein, in step A3, the augmented reality device processes the current pose information together with preset initial calibration information representing the spatial position relationship between the background positioning marker and the scanned target region, to obtain the spatial position calibration information of the scanned target region of the subject.

4. The ultrasound scanning guidance method according to claim 1, wherein step S1 further comprises:

superimposing and rendering, by the augmented reality device, an indicator at the position of the background positioning marker on the real scene image according to the first pose information, so as to indicate the first pose information.

5. The ultrasound scanning guidance method according to claim 4, further comprising a tracking adjustment process before step S2 is executed, comprising:

judging, by the augmented reality device, whether the real-time indication state of the indicator is consistent with a preset standard indication state:

if yes, going to step S2;

if not, giving an adjustment prompt and then returning to step S1.

6. The ultrasound scanning guidance method according to claim 1, further comprising, before step S3 is executed:

receiving, by the augmented reality device, an external manual adjustment instruction and adjusting the position of the overlay-rendered scanned target region accordingly.

7. The ultrasound scanning guidance method according to claim 1, wherein the scanned target region comprises at least one target sub-region;

step S3 comprises:

step S31, for each target sub-region, judging, by the augmented reality device during the ultrasound scan, whether the probe position region currently corresponding to the ultrasound probe coincides with the target sub-region:

if yes, going to step S32;

if not, giving a first scanning prompt to guide the operator to adjust the position of the ultrasound probe, and then returning to step S31;

step S32, counting, by the augmented reality device, the coincidence time between the probe position region and the target sub-region, and judging whether the coincidence time is less than a time threshold:

if yes, giving a second scanning prompt to guide the operator to scan the target sub-region again, and then returning to step S31;

if not, giving a third scanning prompt to inform the operator that the target sub-region has been scanned successfully, and then going to step S33;

step S33, judging, by the augmented reality device, whether all target sub-regions in the scanned target region have been scanned:

if yes, exiting;

if not, giving a fourth scanning prompt to guide the operator to scan the next target sub-region, and then returning to step S31.

8. The ultrasound scanning guidance method according to claim 1, wherein step S3 further comprises:

receiving and displaying, by the augmented reality device, the ultrasound image acquired by the ultrasound probe in real time during the ultrasound scan.

9. The ultrasound scanning guidance method according to claim 1, wherein the background positioning marker and the probe positioning marker are two-dimensional codes.

10. An ultrasound scanning guidance system based on augmented reality, characterized in that it applies the ultrasound scanning guidance method according to any one of claims 1 to 9, the ultrasound scanning guidance system comprising:

at least one background positioning marker, arranged in a pre-built real scanning scene and used for indicating the scanning pose of a subject;

an ultrasound probe, provided with a probe positioning marker;

an augmented reality device, connected to the ultrasound probe, the augmented reality device comprising:

an image acquisition module, connected to an image display module, for capturing real scene images of the real scanning scene in real time and sending them to the image display module for display;

a first processing module, connected to the image acquisition module, for tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe;

a second processing module, connected to the first processing module and the image display module respectively, for superimposing and rendering a scanned target region on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region of the subject, superimposing and rendering a probe position region on the real scene image according to the second pose information, and displaying them through the image display module;

a scanning guidance module, connected to the second processing module and the image display module respectively, for generating a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, and displaying the scanning prompt through the image display module, so as to guide the operator through the ultrasound scan.

Technical Field

The invention relates to the technical field of ultrasound scanning, and in particular to an ultrasound scanning guidance method and system based on augmented reality.

Background

Medical ultrasound imaging is a non-invasive imaging modality, and medical ultrasound scanning is currently an important method for the early screening, detection, and diagnosis of many diseases; it is commonly used for non-invasive screening and disease diagnosis of human organs, for example thyroid nodule scanning. However, medical ultrasound scanning today is a technology that depends heavily on the manual operating experience of the sonographer and places high demands on the sonographer's skill level and training time.

A sonographer requires extensive training and practice to place an ultrasound probe skillfully at a target location on a subject's body and obtain ultrasound images of sufficient quality for screening and diagnosis. In addition, because current medical ultrasound scanning depends so heavily on the sonographer's manual operating experience, the variability of the acquired ultrasound data is large.

In practice, however, a sonographer may lack the opportunity or the conditions to receive on-site training and high-quality on-site guidance for a sufficiently long time, so the quality of the ultrasound data he or she acquires varies widely, which hinders standardized image analysis and disease diagnosis. A guidance method that standardizes ultrasound scanning is therefore needed: one that improves the accessibility of high-quality training, reduces quality differences in image acquisition, helps more sonographers obtain standardized medical ultrasound scanning image data, expands the application scenarios of ultrasound scanning, and improves its popularity.

Disclosure of Invention

Aiming at the problems in the prior art, the invention provides an ultrasound scanning guidance method based on augmented reality, in which a real scanning scene is built in advance, at least one background positioning marker for indicating the scanning pose of a subject and an ultrasound probe for performing ultrasound scanning on the subject are arranged in the real scanning scene, and a probe positioning marker is arranged on the ultrasound probe;

the ultrasound scanning guidance method comprises the following steps:

step S1, capturing and displaying, by an augmented reality device, real scene images of the real scanning scene in real time, and tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe;

step S2, superimposing and rendering, by the augmented reality device, a scanned target region of the subject on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region, and superimposing and rendering a probe position region on the real scene image according to the second pose information;

step S3, generating and displaying, by the augmented reality device, a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, so as to guide the operator through the ultrasound scan.

Preferably, before step S1 is executed, the method comprises a process of acquiring the spatial position calibration information, comprising:

step A1, capturing and displaying, by the augmented reality device, a first scene image containing the subject and the background positioning marker;

step A2, tracking, by the augmented reality device, the background positioning marker according to the first scene image to obtain current pose information of the background positioning marker;

step A3, processing, by the augmented reality device, the current pose information to obtain the spatial position calibration information of the scanned target region of the subject.

Preferably, in step A3, the augmented reality device processes the current pose information together with preset initial calibration information representing the spatial position relationship between the background positioning marker and the scanned target region, to obtain the spatial position calibration information of the scanned target region of the subject.

Preferably, step S1 further comprises:

superimposing and rendering, by the augmented reality device, an indicator at the position of the background positioning marker on the real scene image according to the first pose information, so as to indicate the first pose information.

Preferably, a tracking adjustment process is further included before step S2 is executed, comprising:

judging, by the augmented reality device, whether the real-time indication state of the indicator is consistent with a preset standard indication state:

if yes, going to step S2;

if not, giving an adjustment prompt and then returning to step S1.

Preferably, before step S3 is executed, the method further comprises:

receiving, by the augmented reality device, an external manual adjustment instruction and adjusting the position of the overlay-rendered scanned target region accordingly.

Preferably, the scanned target region comprises at least one target sub-region;

step S3 comprises:

step S31, for each target sub-region, judging, by the augmented reality device during the ultrasound scan, whether the probe position region currently corresponding to the ultrasound probe coincides with the target sub-region:

if yes, going to step S32;

if not, giving a first scanning prompt to guide the operator to adjust the position of the ultrasound probe, and then returning to step S31;

step S32, counting, by the augmented reality device, the coincidence time between the probe position region and the target sub-region, and judging whether the coincidence time is less than a time threshold:

if yes, giving a second scanning prompt to guide the operator to scan the target sub-region again, and then returning to step S31;

if not, giving a third scanning prompt to inform the operator that the target sub-region has been scanned successfully, and then going to step S33;

step S33, judging, by the augmented reality device, whether all target sub-regions in the scanned target region have been scanned:

if yes, exiting;

if not, giving a fourth scanning prompt to guide the operator to scan the next target sub-region, and then returning to step S31.

Preferably, step S3 further comprises:

receiving and displaying, by the augmented reality device, the ultrasound image acquired by the ultrasound probe in real time during the ultrasound scan.

Preferably, the background positioning marker and the probe positioning marker are two-dimensional codes.

The invention further provides an ultrasound scanning guidance system based on augmented reality, which applies the above ultrasound scanning guidance method and comprises:

at least one background positioning marker, arranged in a pre-built real scanning scene and used for indicating the scanning pose of a subject;

an ultrasound probe, provided with a probe positioning marker;

an augmented reality device, connected to the ultrasound probe, the augmented reality device comprising:

an image acquisition module, connected to an image display module, for capturing real scene images of the real scanning scene in real time and sending them to the image display module for display;

a first processing module, connected to the image acquisition module, for tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe;

a second processing module, connected to the first processing module and the image display module respectively, for superimposing and rendering a scanned target region on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region of the subject, superimposing and rendering a probe position region on the real scene image according to the second pose information, and displaying them through the image display module;

a scanning guidance module, connected to the second processing module and the image display module respectively, for generating a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, and displaying the scanning prompt through the image display module, so as to guide the operator through the ultrasound scan.

The above technical solution has the following advantages or beneficial effects: a real scanning scene is built, real scene images of the subject are captured in real time, the background positioning marker and the probe positioning marker are tracked on the basis of these images, and guidance information is then rendered as an overlay on the real scene image. This provides standardized guidance for ultrasound scanning, improves the accessibility of high-quality training, reduces quality differences in image acquisition, and helps more sonographers obtain standardized medical ultrasound scanning image data, thereby expanding the application scenarios of ultrasound scanning and improving its popularity.

Drawings

FIG. 1 is a schematic diagram of a real scanned scene according to a preferred embodiment of the present invention;

FIG. 2 is a schematic flowchart of an ultrasound scanning guidance method based on augmented reality according to a preferred embodiment of the present invention;

FIG. 3 is a schematic diagram of a display interface of an augmented reality device according to a preferred embodiment of the invention;

FIG. 4 is a flowchart illustrating a process of obtaining spatial location calibration information according to a preferred embodiment of the present invention;

FIG. 5 is a flowchart illustrating a tracking adjustment process according to a preferred embodiment of the present invention;

FIG. 6 is a schematic diagram of a display interface of an augmented reality device according to a preferred embodiment of the invention;

FIG. 7 is a flow chart illustrating a scanning guidance process according to a preferred embodiment of the present invention;

FIG. 8 is a schematic structural diagram of an ultrasound scanning guidance system based on augmented reality according to a preferred embodiment of the present invention.

Detailed Description

The invention is described in detail below with reference to the figures and specific embodiments. The present invention is not limited to these embodiments; other embodiments also fall within the scope of the present invention as long as they conform to its gist.

In a preferred embodiment of the present invention, in view of the above problems in the prior art, an ultrasound scanning guidance method based on augmented reality is provided. As shown in FIG. 1, a real scanning scene is built in advance, at least one background positioning marker 110 for indicating the scanning pose of a subject 102 and an ultrasound probe 106 for performing ultrasound scanning on the subject are arranged in the real scanning scene, and a probe positioning marker 108 is arranged on the ultrasound probe 106;

as shown in FIG. 2, the ultrasound scanning guidance method comprises:

step S1, capturing and displaying, by an augmented reality device, real scene images of the real scanning scene in real time, and tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe;

step S2, superimposing and rendering, by the augmented reality device, a scanned target region on the real scene image according to the first pose information and pre-acquired spatial position calibration information of the scanned target region of the subject, and superimposing and rendering a probe position region on the real scene image according to the second pose information;

step S3, generating and displaying, by the augmented reality device, a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, so as to guide the operator through the ultrasound scan.

Specifically, in this embodiment, as shown in FIG. 1, the background positioning marker 110 and the probe positioning marker 108 may be two-dimensional codes. The augmented reality device is provided with a camera and an image processor; multiple frames of the scanning scene are captured in real time by the camera and processed by the image processor to obtain the real scene images. The augmented reality device 104 may be, but is not limited to, a smartphone, a tablet computer, a portable mobile device, or augmented reality glasses; it has image acquisition and image processing capabilities and runs the related software. The operator 101 may hold or wear the augmented reality device 104 and scan the scanned target region with the ultrasound probe 106, guided by the real scene image, superimposed with the probe position region and the scanned target region, and by the scanning prompts displayed on the augmented reality device 104.

Further, based on the constructed real scanning scene, any region of interest, such as the neck or the abdomen, can be scanned accurately and with high quality using the ultrasound probe combined with the augmented reality device. Preferably, a background plate 112 may also be arranged in the real scanning scene for mounting or attaching the background positioning marker 110; the background plate 112 may be a flat plate perpendicular to the ground. As shown in FIG. 1, the subject 102 is instructed to sit in a specific posture in front of the vertical, flat background plate 112, and the background positioning marker 110 is mounted or attached to the background plate 112 at a designated position near the subject's seat. As shown in FIG. 3, there may be two background positioning markers, a first marker 1101 and a second marker 1102, arranged on either side of the subject's head; they indicate the scanning pose of the subject 102 and simultaneously serve as position limits for the subject's seating, i.e., the subject 102 is instructed to keep the head between the first marker 1101 and the second marker 1102 when seated. It should be noted that the position and number of the background positioning markers 110 are not limited: when scanning different regions of interest, different background positioning markers 110 may be arranged according to human physiological proportions, and the spatial position of the region of interest is then deduced by tracking the background positioning markers 110, so as to provide standardized ultrasound scanning guidance.

Taking thyroid ultrasound scanning as an example, the operator 101 may sit directly in front of and facing the subject 102, hold or wear the augmented reality device 104, and capture the subject 102, the background positioning marker 110, and the scanning scene through the camera of the augmented reality device 104 to obtain a real scene image, which is displayed on the device's screen. The operator 101 then places the ultrasound probe 106 in the scanning area and ensures that it can be captured by the camera. The camera tracks the ultrasound probe 106 via the probe positioning marker 108 arranged on it and, based on this tracking, generates the guidance information shown in FIG. 3: an indicator 201 for the background positioning marker, a probe position region 202, and a scanned target region 204, together with the scanning prompts given as the ultrasound probe 106 moves during the scan. This guides the operator 101 through the acquisition of an ultrasound image, effectively reduces dependence on the operator's manual experience, reduces quality differences in image acquisition, and helps more sonographers obtain standardized medical ultrasound scanning image data, thereby expanding the application scenarios of ultrasound scanning and improving its popularity.

As a preferred embodiment, when overlay rendering is performed, the background positioning marker 110, the probe positioning marker 108, the scanned target region 204, and the ultrasound probe 106 may be defined in the same scanning spatial coordinate system; the scanning spatial coordinate system is a three-dimensional rectangular coordinate system defined by the augmented reality device 104. One possible definition of this coordinate system is: the origin is located at the camera position of the augmented reality device 104, the X-axis and Y-axis are parallel to the horizontal and vertical axes of the camera's imaging plane respectively, and the Z-axis is parallel to the depth direction of the camera. It should be noted that the coordinate system definition described in this embodiment is only an example and is not specifically limited.

The spatial position calibration information represents offset information of the scanned target region 204 with respect to the background positioning marker 110, including an offset position and an offset orientation. Further preferably, the offset information may be defined in a background coordinate system, which is a three-dimensional rectangular coordinate system defined by the background positioning two-dimensional code. One possible definition of this coordinate system is: the origin is located at the center of one of the background positioning markers, the X-axis and Y-axis are parallel to the horizontal and vertical axes of that background positioning marker respectively, and the Z-axis is perpendicular to it. On this basis, one possible definition of the offset information is: the position of the center point of the scanned target region 204 relative to the origin of the background coordinate system is the offset position, and the orientation of the scanned target region 204 relative to the Z-axis of the background coordinate system is the offset orientation. It should be noted that the definitions of the coordinate system and the offset information described in this embodiment are only examples and are not specifically limited.

Based on this, taking the overlay rendering of the scanned target region 204 as an example: using the initial calibration values of the spatial position calibration information preset in the mobile device, including the shape and size of each scanning target position, the relative positional relationships among scanning target positions, and the offset position and offset orientation of each scanning target position in the background coordinate system, the scanned target region 204 is superimposed and rendered on the real scene image. Using the spatial position calibration information obtained by tracking, the position and orientation of the scanned target region can be converted from the background coordinate system into the scanning spatial coordinate system by three-dimensional coordinate transformation. Further, given the camera parameters of the augmented reality device 104 (which mainly describe the imaging characteristics of the camera and the projection relationship between the camera's three-dimensional coordinates and the two-dimensional imaging plane), the position and orientation of the scanned target region in the scanning spatial coordinate system can be projected onto the corresponding position in the real scene image by the camera imaging and projection principle.
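As a concrete illustration of this transform-and-project step, the following is a minimal sketch assuming OpenCV and NumPy are available; the marker pose, target offset, and camera intrinsics below are placeholder values for illustration, not values taken from the patent:

```python
import numpy as np
import cv2

# Pose of the background positioning marker in the scanning spatial (camera)
# coordinate system, as produced by marker tracking: R_bc and t_bc map
# background-frame points into the camera frame (placeholder values).
R_bc = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))[0]
t_bc = np.array([[0.02], [0.10], [0.60]])  # metres

# Offset position of the scanned target region's center in the background
# coordinate system (part of the spatial position calibration information).
p_target_bg = np.array([[0.05], [-0.15], [0.0]])

# 1) Background coordinate system -> scanning spatial coordinate system.
p_target_cam = R_bc @ p_target_bg + t_bc

# 2) Scanning spatial coordinate system -> image pixels via the camera
#    intrinsics (placeholder matrix, no lens distortion assumed).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pixel, _ = cv2.projectPoints(p_target_cam.reshape(1, 1, 3),
                             np.zeros(3), np.zeros(3), K, np.zeros(5))
print("render the target region overlay at pixel:", pixel.ravel())
```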

In a preferred embodiment of the present invention, before step S1 is executed, the method includes a process of acquiring the spatial position calibration information, as shown in FIG. 4, which comprises:

step A1, capturing and displaying, by the augmented reality device, a first scene image containing the subject and the background positioning marker;

step A2, tracking, by the augmented reality device, the background positioning marker according to the first scene image to obtain current pose information of the background positioning marker;

step A3, processing, by the augmented reality device, the current pose information to obtain the spatial position calibration information of the scanned target region of the subject.

Specifically, in this embodiment, before the ultrasound scan, the background positioning marker is first tracked to obtain the spatial position calibration information of the scanned target region of the subject. As described above, the subject 102 sits in a specific posture at the designated position indicated by the background positioning marker 110, and the display screen of the augmented reality device 104 held or worn by the operator 101 displays the first scene image in real time. The first scene image is acquired by the RGB camera of the augmented reality device 104 and contains only the subject and the background positioning marker 110, without other objects, so as to eliminate interference. After the first scene image is obtained, the augmented reality device 104 may recognize and track the background positioning marker 110 using a visible-light image recognition method, and derive the spatial position of the background positioning marker 110 relative to the camera by analyzing and comparing the size and orientation of the marker captured during operation; this processing of the first scene image may be implemented on the basis of the perspective principle (objects appear larger when near and smaller when far). It should be noted that this embodiment only gives examples of tracking means for the background positioning marker 110, and the tracking method is not particularly limited.
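The patent deliberately leaves the tracking method open. As one possible realization, the following sketch assumes the two-dimensional codes are ArUco markers and that OpenCV 4.7 or later with its ArUco module is available; the camera intrinsics and marker side length are placeholder assumptions:

```python
import cv2
import numpy as np

# Assumed camera intrinsics and marker size; real values would come from
# calibrating the augmented reality device's camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)
MARKER_SIDE_M = 0.05  # 5 cm square two-dimensional code

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def track_marker(frame_bgr):
    """Return (rvec, tvec) of the first detected marker, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    # Corner coordinates in the marker's own frame (Z = 0 plane).
    half = MARKER_SIDE_M / 2.0
    obj = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, DIST)
    return (rvec, tvec) if ok else None
```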


In a preferred embodiment of the present invention, in step A3, the augmented reality device processes the current pose information together with preset initial calibration information representing the spatial position relationship between the background positioning marker and the scanned target region, to obtain the spatial position calibration information of the scanned target region of the subject.

Specifically, in this embodiment, the initial calibration information is obtained from a standardized test environment and typical values of human anatomy. The standardized test environment corresponds to the real scanning scene of this technical solution, and the relative positions of the background plate 112, the first marker 1101, and the second marker 1102 in the standardized test environment are standardized fixed values. More specifically, when a subject sits at the defined position, the position of the subject's body part (e.g., the head) relative to the background positioning marker can be considered close to a standardized fixed value; and, from typical values of human anatomy, the subject's scanned target region can likewise be considered close to a standardized fixed value relative to that body part. Therefore, after the subject is seated at the position defined by the standardized test environment, the relative position and orientation of the scanned target region with respect to the background positioning marker, i.e., the initial calibration information, are close to standardized fixed values, and the scanned target region rendered from this initial calibration information is close to the subject's actual scanned target region. On this basis, after the current pose information of the background positioning marker is obtained, it is combined with the initial calibration information to obtain the spatial position calibration information of the scanned target region of the subject. Preferably, the spatial position calibration information is stored in the augmented reality device for use in the subsequent scanning process.
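A minimal sketch of this composition step, representing poses as 4x4 homogeneous matrices with NumPy and OpenCV; all numeric values are placeholders rather than calibration values from the patent:

```python
import numpy as np
import cv2

def to_homogeneous(rvec, tvec):
    """Build a 4x4 pose matrix from a rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(np.asarray(rvec, dtype=float))[0]
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

# Current pose of the background marker in the camera frame (from tracking).
T_cam_marker = to_homogeneous([0.1, -0.2, 0.05], [0.02, 0.10, 0.60])

# Preset initial calibration: pose of the scanned target region relative to
# the background marker, taken from the standardized test environment and
# typical human anatomy (placeholder numbers).
T_marker_target = to_homogeneous([0.0, 0.0, 0.0], [0.05, -0.15, 0.0])

# Spatial position calibration resolved into the camera frame by composing
# the two rigid transforms.
T_cam_target = T_cam_marker @ T_marker_target
print("target region center in camera frame:", T_cam_target[:3, 3])
```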

In a preferred embodiment of the present invention, step S1 further comprises:

superimposing and rendering, by the augmented reality device, an indicator at the position of the background positioning marker on the real scene image according to the first pose information, so as to indicate the first pose information.

Specifically, in this embodiment, the first pose information and the second pose information may be obtained by tracking methods including, but not limited to, visible-light image recognition. After the first pose information is obtained, it is indicated by superimposing and rendering a directional three-dimensional geometric object over the background positioning marker 110 in the real scene image as the indicator 201; the first pose information includes the real-time position and orientation. The position and orientation of the indicator 201 rendered in the interface are bound to the tracking result of the background positioning marker 110 and are the same as that tracking result.

In a preferred embodiment of the present invention, a tracking adjustment process is further included before step S2 is executed, as shown in FIG. 5, which comprises:

the augmented reality device judges whether the real-time indication state of the indicator is consistent with a preset standard indication state:

if yes, go to step S2;

if not, an adjustment prompt is given, and the process returns to step S1.

Preferably, under normal tracking conditions, the center of the indicator 201 coincides with the center of the background positioning marker 110 and its direction points upward along the sides of the background positioning marker 110; this is the standard indication state. The indicator 201 therefore serves as a tracking-status indication for the background positioning marker 110: when its position or direction does not meet the standard indication state, the augmented reality device 104 can be considered to be tracking the background positioning marker 110 in the real scene image inaccurately, or the tracking result can be considered incorrect. In this situation the operator 101 is alerted to adjust the augmented reality device 104 and re-track the background positioning marker 110 until the overlay-rendered indicator 201 again conforms to the normal tracking state.
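One way such a consistency check could be implemented is sketched below; the tolerances and the vector representation of the indication state are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical tolerances defining the standard indication state.
MAX_CENTER_OFFSET_M = 0.01  # indicator center vs. marker center
MAX_ANGLE_DEG = 5.0         # indicator up-axis vs. marker up-axis

def tracking_is_valid(indicator_center, marker_center, indicator_up, marker_up):
    """Compare the real-time indication state with the standard state."""
    offset = np.linalg.norm(np.asarray(indicator_center, dtype=float) -
                            np.asarray(marker_center, dtype=float))
    u = np.asarray(indicator_up, dtype=float)
    v = np.asarray(marker_up, dtype=float)
    cos_a = np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))
    return offset <= MAX_CENTER_OFFSET_M and angle_deg <= MAX_ANGLE_DEG

# If this returns False, the device gives an adjustment prompt and the
# operator re-tracks the background positioning marker (return to step S1).
```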

In a preferred embodiment of the present invention, before step S3 is executed, the method further comprises:

receiving, by the augmented reality device, an external manual adjustment instruction and adjusting the position of the overlay-rendered scanned target region accordingly.

Specifically, in this embodiment, as described above, the scanned target region rendered from the initial calibration values of the spatial position calibration information is close to the subject's actual scanned target region, but differences between the test environment and the individual subject introduce errors, so the rendered region may deviate from the actual one. As shown in FIG. 6, if there is a deviation, the operator 101 may tap the manual adjustment button 208 on the screen interface of the augmented reality device 104 to adjust the spatial position of the scanned target region 204. The augmented reality device 104 then updates the scanned target region 204 according to the adjusted spatial position calibration information until the rendered result matches the scanned target region of the subject's body in the real scene image. Preferably, after the adjustment is completed, the operator 101 may tap the confirmation button 210 on the screen interface to confirm and save the adjusted scanned target region.
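The patent leaves the adjustment interaction to the interface; as a minimal sketch under the 4x4 pose representation used in the earlier sketches, a manual adjustment might be applied as a translation delta to the stored calibration (the function name and offset value are hypothetical):

```python
import numpy as np

def apply_manual_adjustment(T_marker_target, delta_xyz_m):
    """Shift the stored target-region calibration by an operator-chosen
    offset expressed in the background marker frame."""
    T_adj = T_marker_target.copy()
    T_adj[:3, 3] += np.asarray(delta_xyz_m, dtype=float)
    return T_adj

# E.g. the operator taps button 208 and nudges the region 1 cm downward;
# the device re-renders, and button 210 confirms and saves the result.
T_marker_target = np.eye(4)
T_marker_target = apply_manual_adjustment(T_marker_target, [0.0, -0.01, 0.0])
```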

In a preferred embodiment of the present invention, the scanned target region comprises at least one target sub-region;

as shown in FIG. 7, step S3 comprises:

step S31, for each target sub-region, judging, by the augmented reality device during the ultrasound scan, whether the probe position region currently corresponding to the ultrasound probe coincides with the target sub-region:

if yes, going to step S32;

if not, giving a first scanning prompt to guide the operator to adjust the position of the ultrasound probe, and then returning to step S31;

step S32, counting, by the augmented reality device, the coincidence time between the probe position region and the target sub-region, and judging whether the coincidence time is less than a time threshold:

if yes, giving a second scanning prompt to guide the operator to scan the target sub-region again, and then returning to step S31;

if not, giving a third scanning prompt to inform the operator that the target sub-region has been scanned successfully, and then going to step S33;

step S33, judging, by the augmented reality device, whether all target sub-regions in the scanned target region have been scanned:

if yes, exiting;

if not, giving a fourth scanning prompt to guide the operator to scan the next target sub-region, and then returning to step S31.

Specifically, in this embodiment, the target sub-regions may be divided according to the actual scanning requirements. As shown in FIG. 6, a scanning progress bar 206 is preferably also provided on the display screen of the augmented reality device 104; the scanning progress bar 206 can display the scanning progress of each target sub-region.

As shown in FIG. 6, take five target sub-regions as an example, target sub-region 1 through target sub-region 5. When scanning target sub-region 1, the operator 101 moves the ultrasound probe according to the probe position region 202 and the scanned target region 204 tracked and displayed on the display screen of the augmented reality device 104, so that the probe position region coincides with target sub-region 1. When it is detected that the probe position region does not coincide with target sub-region 1, the ultrasound probe has not yet moved onto the target sub-region; a first scanning prompt is then given to guide the operator to adjust the position of the ultrasound probe until the probe position region coincides with target sub-region 1. Ultrasound scanning of target sub-region 1 can then proceed, while the dwell time of the ultrasound probe on target sub-region 1 is counted as the coincidence time. When the coincidence time reaches a preset time threshold, the scanning of target sub-region 1 is judged to have completed successfully; otherwise a second scanning prompt is given to guide the operator to scan target sub-region 1 again until it is scanned successfully. The methods for judging the coincidence of the probe position region with target sub-region 1 include, but are not limited to, calculating the overlap area between the probe position region and target sub-region 1.
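The following sketch illustrates the S31-S32 coincidence-and-dwell logic for one target sub-region, using the overlap-area criterion mentioned above; the axis-aligned rectangle representation, the coverage ratio, and the time threshold are illustrative assumptions:

```python
import time

def overlap_area(a, b):
    """Overlap of two rectangles given as (x_min, y_min, x_max, y_max)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def coincides(probe_rect, sub_rect, min_ratio=0.8):
    """Judge coincidence as the overlap covering most of the sub-region."""
    sub_area = (sub_rect[2] - sub_rect[0]) * (sub_rect[3] - sub_rect[1])
    return overlap_area(probe_rect, sub_rect) >= min_ratio * sub_area

def guide_sub_region(get_probe_rect, sub_rect, time_threshold_s=3.0):
    """Steps S31-S32 for one target sub-region; prompts shown as prints."""
    dwell_start = None
    while True:
        probe_rect = get_probe_rect()  # latest tracked probe position region
        if not coincides(probe_rect, sub_rect):
            if dwell_start is None:
                print("first prompt: adjust the probe position")     # S31
            else:
                print("second prompt: scan this sub-region again")   # S32
            dwell_start = None
        else:
            if dwell_start is None:
                dwell_start = time.monotonic()
            elif time.monotonic() - dwell_start >= time_threshold_s:
                print("third prompt: sub-region scanned successfully")
                return
        time.sleep(0.05)  # polling interval
```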

Preferably, during the scanning process, the scanning progress bar 206 may show the scanning progress, which may include the total number of target sub-regions and the number of target sub-regions already scanned, and may further include an indication of the scanning completion state of each target sub-region; for example, a red mark is shown when target sub-region 1 has not been scanned or its scan failed, and a corresponding green mark is shown when its scan succeeds. The completion indication may also be given as text, which is not limited here. Further preferably, an indication of the scanning completion state may also be given synchronously at the position of the corresponding target sub-region within the scanned target region 204; the manner of this indication is not limited, as long as the operator 101 can view it clearly.

In a preferred embodiment of the present invention, step S3 further comprises:

receiving and displaying, by the augmented reality device, the ultrasound image acquired by the ultrasound probe in real time during the ultrasound scan.

It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated otherwise herein, there is no strict ordering restriction on these steps, and they may be executed in other orders. Moreover, at least some of the steps in the above flowcharts may comprise multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments, and which are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.

The invention further provides an augmented-reality-based ultrasound scanning guidance system that applies the above ultrasound scanning guidance method. As shown in FIGS. 1 and 8, the ultrasound scanning guidance system comprises:

at least one background positioning marker 110, arranged in a pre-built real scanning scene and used for indicating the scanning pose of a subject 102;

an ultrasound probe 106, provided with a probe positioning marker 108;

an augmented reality device 104, connected to the ultrasound probe 106, the augmented reality device 104 comprising:

an image acquisition module 1041, connected to an image display module 1042, for capturing real scene images of the real scanning scene in real time and sending them to the image display module for display;

a first processing module 1043, connected to the image acquisition module 1041, for tracking the background positioning marker and the probe positioning marker respectively according to the real scene images to obtain first pose information of the background positioning marker and second pose information of the ultrasound probe, and for further processing the first pose information to obtain the spatial position calibration information of the scanned target region of the subject;

a second processing module 1044, connected to the first processing module 1043 and the image display module 1042 respectively, for superimposing and rendering a probe position region on the real scene image according to the second pose information, superimposing and rendering a scanned target region on the real scene image according to the spatial position calibration information, and displaying them through the image display module;

a scanning guidance module 1045, connected to the second processing module 1044 and the image display module 1042 respectively, for generating a corresponding scanning prompt according to the coincidence state between the probe position region and the scanned target region during the ultrasound scan, and displaying the scanning prompt through the image display module 1042, so as to guide the operator through the ultrasound scan.
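A structural sketch of the FIG. 8 modules as they might be organized in software follows; the class and method names are hypothetical, and the method bodies would follow the earlier sketches:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Pose as an OpenCV-style rotation vector plus translation vector."""
    rvec: tuple
    tvec: tuple

class ImageAcquisitionModule:  # cf. 1041
    def capture_frame(self):
        """Return the current real scene image from the camera."""
        raise NotImplementedError

class FirstProcessingModule:  # cf. 1043
    def track(self, frame):
        """Track both markers; return (first_pose, second_pose)."""
        raise NotImplementedError

class SecondProcessingModule:  # cf. 1044
    def render_overlays(self, frame, second_pose, calibration):
        """Superimpose the probe position region (from the second pose) and
        the scanned target region (from the calibration) onto the frame."""
        raise NotImplementedError

class ScanningGuidanceModule:  # cf. 1045
    def prompt(self, probe_region, target_region):
        """Derive a scanning prompt from the coincidence state."""
        raise NotImplementedError
```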

Specifically, in this embodiment, taking thyroid ultrasound scanning as an example, the real scanning scene may comprise a seat for the subject and a vertical, flat background plate on which a background positioning marker is arranged to guide the subject to sit in a specific posture and position. The augmented reality device 104 is configured with a camera as the image acquisition module 1041, and multiple frames of the scanning scene captured by the camera in real time are processed to obtain the real scene images. The augmented reality device 104 may be, but is not limited to, a smartphone, a tablet computer, a portable mobile device, or augmented reality glasses; it has image acquisition and image processing capabilities and runs the related software. The operator 101 may hold or wear the augmented reality device 104 and scan the scanned target region with the ultrasound probe 106, guided by the real scene image, superimposed with the probe position region and the scanned target region, and by the scanning prompts displayed on the augmented reality device 104.

It should be understood that, although the components in the above system structure diagram are connected as indicated by the arrows, information transfer between these components is not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, there is no strict ordering restriction on the transfer of information and data between these components, and the transfer operations may be performed in other orders.

The ultrasound scanning guidance method and system provided by this application further comprise a computer program product comprising a computer program which, when executed by a processor, implements all the functions of the ultrasound scanning method and system relating to the generation, feedback, storage, and interaction of standardized ultrasound scanning guidance information.

While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
