Novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling

Document No.: 1029205    Publication date: 2020-10-30

Reading note: this technology, a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling, was designed and created by Wang Yufeng (王玉峰) on 2019-04-25. Its main content is as follows: the invention discloses a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling, comprising an image acquisition unit, a three-dimensional UWB receiver system, an image processing unit and a capsule intestinal robot. Using the acquired image of the patient's whole intestine and the precise coordinates of the lesion, the capsule intestinal robot can be navigated along the patient's intestinal path to find the lesion position quickly and automatically and take a biopsy sample. Because a picture model of the whole intestine is available, the robot's motion does not damage the patient's intestinal tract and does not cause the patient unnecessary pain; the effect is good, and the system is easy to popularize and apply in industry.

Claims

1. A novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling, characterized in that

the method comprises the following steps: an image acquisition unit, a three-dimensional UWB receiver system, an image processing unit and a capsule intestinal robot,

the image acquisition unit is an enteroscope (11) provided with a high-speed camera (2) and an RFID tag (3) capable of sending data;

the three-dimensional UWB receiver system comprises six signal receivers with fixed positions, which locate the camera in three dimensions from the differences in the times at which the six receivers receive the tag's signal; by photographing repeatedly and marking each position, a coordinate-tagged gallery of the interior of the patient's intestinal tract is obtained;

the image processing unit fuses the pictures, using image-fusion technology together with the positioning information and picture content of each picture, into a complete internal map of the patient's intestinal tract, from which the position coordinates of the lesion are determined;

the capsule intestinal robot uses the patient's intestinal map and the position coordinates of the lesion to find the lesion position quickly and automatically and take a biopsy sample.

Technical Field

The invention relates to the technical field of medical equipment, in particular to a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling.

Background

With the development and popularization of intelligent medicine, medical examination and surgery are moving toward painless and noninvasive methods, yet anorectal examination remains unavoidable for patients with intestinal polyps. Intestinal polyps are abnormally growing tissues protruding from the surface of the intestinal mucosa; before their pathological nature is determined, they are collectively referred to as polyps. Polyps are mainly classified into inflammatory and adenomatous types. Inflammatory polyps can disappear by themselves after the inflammation is cured; adenomatous polyps generally do not disappear by themselves and tend to become malignant. The most effective means of detecting polyps and determining the nature of their lesions is to conduct a full-length colonoscopic (including pathological) examination at regular intervals and to conduct interventional treatments under enteroscopy. However, the accuracy of enteroscopy alone is not sufficient; a biopsy is still required to determine whether a polyp is prone to becoming cancerous, and this brings secondary injury to the patient. If the biopsy sample could be carried out of the body by an intestinal robot, or the diseased tissue excised directly, the patient's pain would be greatly relieved.

Before the intestinal robot performs its task, the structure of the patient's intestinal tract and the position of the pathological tissue must be known, so that the navigation control can guide the robot to the pathological tissue smoothly without damaging the intestinal structure. Developing such a novel endoscope intelligent navigator system is therefore very meaningful.

Disclosure of Invention

The invention aims to provide a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling.

To achieve this aim, the invention provides a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling,

the method comprises the following steps: an image acquisition unit, a three-dimensional UWB receiver system, an image processing unit and a capsule intestinal robot,

the image acquisition unit is an enteroscope (11) provided with a high-speed camera (2) and an RFID tag (3) capable of sending data;

the three-dimensional UWB receiver system comprises six signal receivers with fixed positions, which locate the camera in three dimensions from the differences in the times at which the six receivers receive the tag's signal; by photographing repeatedly and marking each position, a coordinate-tagged gallery of the interior of the patient's intestinal tract is obtained;

the image processing unit fuses the pictures, using image-fusion technology together with the positioning information and picture content of each picture, into a complete internal map of the patient's intestinal tract, from which the position coordinates of the lesion are determined;

The capsule intestinal robot uses the patient's intestinal map and the position coordinates of the lesion to find the lesion position quickly and automatically and take a biopsy sample.

Compared with the prior art, the invention has the following advantages: using the acquired image of the patient's whole intestine and the precise coordinates of the lesion, the capsule intestinal robot can be navigated along the patient's intestinal path to find the lesion position quickly and automatically and take a biopsy sample; because a picture model of the whole intestine is available, the robot's motion does not damage the patient's intestinal tract and does not cause the patient unnecessary pain; the effect is good, and the system is easy to popularize and apply in industry.

Drawings

FIG. 1 is a schematic representation of the operation of the enteroscope of the present application;

FIG. 2 is a schematic diagram of the layout of a three-dimensional UWB receiver system of the present application;

FIG. 3 is a schematic view illustrating the automatic navigation of the intestinal robot to find a lesion according to the present invention.

Detailed Description

The invention is described in further detail below with reference to the figures and specific examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

It should be noted that "connected" and words used in this application to express "connected," such as "connected," "connected," and the like, include both direct connection of one element to another element and connection of one element to another element through another element.

It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should further be understood that terms used in this specification specify the presence of the stated features, steps, operations, elements, modules, components and/or combinations thereof.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

Spatially relative terms, such as "above", "over", "on top of", "upper" and the like, may be used herein for ease of description to describe the spatial relationship of one component, module or feature to another component, module or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the component or module in use or operation in addition to the orientation depicted in the figures. For example, if a component or module in the figures is turned over, components or modules described as "above" or "over" other components, modules or configurations would then be oriented "below" or "beneath" them. Thus, the exemplary term "above" can encompass both an orientation of "above" and an orientation of "below". The components or modules may also be oriented in other ways (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein are interpreted accordingly.

It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.

Referring to FIGS. 1-2, the invention relates to a novel endoscope intelligent navigator system based on image recognition and 3D-SLAM real-time modeling.

First, images of the patient's intestine must be acquired, and each image must be associated with the specific location in the intestine at which it was taken. This requires the positioning system and the photographing system to work together to reconstruct the condition of the patient's intestinal tract; that is, 3D-SLAM real-time modeling is carried out. An RFID tag 3 capable of sending data is mounted on the enteroscope fitted with the high-speed camera 2, and the enteroscope is inserted into the patient's intestinal tract 1 for observation, with the RFID tag continuously transmitting a burst of data at a very high frequency. A three-dimensional UWB receiver system is also built, provided with six signal receivers whose positions are fixed; as shown in FIG. 2, they are marked with reference numbers 5, 6, 7, 8, 9 and 10. From the signals the receivers pick up at the same instant, the differences in the signal's arrival times at each receiver can be calculated, so the position at which the camera is capturing the intestinal content can be determined. By photographing continuously and marking the positions, a coordinate-tagged gallery of the interior of the patient's intestinal tract is obtained.

Using image-processing and fusion technology, the pictures are fused, according to their positioning information and picture content, into a complete internal map of the patient's intestinal tract. The condition inside the patient's intestinal tract can thus be observed visually as a whole, and the position where a lesion 15 may exist can be found accurately. Using the whole-intestine image of the patient and the precise coordinates of the lesion, the capsule intestinal robot 14 can be navigated along the path of the patient's intestine 13 to find the lesion quickly and automatically and take a biopsy sample. Because a picture model of the whole intestine is available, the robot's motion does not damage the patient's intestinal tract and does not cause the patient unnecessary pain.
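
To make the positioning step concrete, the following is a minimal sketch of how the camera position could be estimated from the differences in the arrival times of the tag's pulse at the six fixed receivers. The receiver coordinates, the propagation speed and the least-squares solver are illustrative assumptions; the disclosure only states that six receivers with fixed positions are used and that the time differences are evaluated.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative fixed receiver positions (metres); the patent only states
# that six receivers with known, fixed positions are used.
RECEIVERS = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
])
C = 3.0e8  # assumed propagation speed of the UWB pulse (m/s)

def locate_tag(arrival_times):
    """Estimate the 3-D tag position from the six measured arrival times.

    Only the *differences* between arrival times matter, so the residual
    compares range differences relative to receiver 0 (classic TDOA).
    """
    t = np.asarray(arrival_times, dtype=float)
    measured_diff = C * (t[1:] - t[0])          # range differences (m)

    def residual(p):
        ranges = np.linalg.norm(RECEIVERS - p, axis=1)
        return (ranges[1:] - ranges[0]) - measured_diff

    guess = RECEIVERS.mean(axis=0)              # start in the middle of the array
    return least_squares(residual, guess).x

# Example: a tag at (0.4, 0.6, 0.3) m produces these noise-free arrival times
true_pos = np.array([0.4, 0.6, 0.3])
times = np.linalg.norm(RECEIVERS - true_pos, axis=1) / C
print(locate_tag(times))   # approximately [0.4, 0.6, 0.3]
```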
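
Similarly, the assembly of the coordinate-tagged gallery into one traversable map can be sketched as follows. The data structures and the nearest-frame lookup are assumptions made only for illustration; the disclosure merely states that the pictures are fused into a complete internal map according to their positioning and picture information.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Frame:
    position: np.ndarray        # 3-D camera coordinate from the UWB system
    image: np.ndarray           # picture taken at that coordinate

@dataclass
class IntestinalMap:
    frames: list = field(default_factory=list)

    def add(self, position, image):
        """Record one coordinate-tagged picture."""
        self.frames.append(Frame(np.asarray(position, dtype=float), image))

    def ordered_path(self):
        """Return frame positions ordered along the insertion trajectory.

        Frames are captured sequentially as the enteroscope advances, so the
        capture order already approximates the centre line of the intestine.
        """
        return np.array([f.position for f in self.frames])

    def nearest_frame(self, point):
        """Look up the picture closest to an arbitrary 3-D point (e.g. a lesion)."""
        path = self.ordered_path()
        idx = int(np.argmin(np.linalg.norm(path - np.asarray(point, dtype=float), axis=1)))
        return idx, self.frames[idx]

# Usage: m = IntestinalMap(); m.add((0.1, 0.2, 0.3), picture); m.nearest_frame(lesion_xyz)
```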
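
The disclosure does not specify which image-recognition method flags the candidate lesion positions in the map. Purely as a hypothetical placeholder, the toy heuristic below scores each coordinate-tagged frame by its red-channel dominance and flags statistical outliers; a real system would use a trained classifier.

```python
import numpy as np

def flag_candidate_lesions(gallery, z_threshold=2.0):
    """Flag coordinate-tagged frames that look abnormal (toy heuristic).

    `gallery` is a list of (position, image) pairs, where `image` is an
    H x W x 3 uint8 array in RGB channel order.  The score is the mean
    red-channel fraction of each frame; frames more than `z_threshold`
    standard deviations from the gallery mean are returned with their
    coordinates and z-scores.
    """
    scores = np.array([
        img[..., 0].astype(float).mean() / max(img.astype(float).mean(), 1e-9)
        for _, img in gallery
    ])
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return [(pos, float(zi)) for (pos, _), zi in zip(gallery, z) if abs(zi) > z_threshold]
```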
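
Finally, one way the capsule robot could be guided along the stored intestinal path toward the lesion coordinate is sketched below. The waypoint selection, the `move_to` actuator interface and the tolerance are hypothetical; the disclosure only states that the robot uses the intestinal map and the lesion coordinates to reach the lesion automatically.

```python
import numpy as np

def waypoints_to_lesion(path, lesion, start_index=0):
    """Return the ordered path points the capsule should visit.

    `path` is the centre line of the intestinal map (N x 3 array) and
    `lesion` the lesion coordinate read off that map.  The capsule simply
    follows the recorded path up to the point nearest the lesion, so it
    never leaves the known intestinal geometry.
    """
    path = np.asarray(path, dtype=float)
    target = int(np.argmin(np.linalg.norm(path - np.asarray(lesion, dtype=float), axis=1)))
    return path[start_index:target + 1]

def follow(waypoints, move_to, tolerance=0.005, max_steps=1000):
    """Drive the capsule through the waypoints.

    `move_to(p)` stands for whatever low-level command nudges the capsule
    towards point p and returns its newly measured position (a hypothetical
    actuator interface, not part of the disclosure).
    """
    for wp in waypoints:
        for _ in range(max_steps):
            pos = move_to(wp)
            if np.linalg.norm(np.asarray(pos) - wp) <= tolerance:
                break
```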
