Device and method for diagnosing stomach pathological changes by deep learning of stomach endoscope images

Document No.: 639465 Publication date: 2021-05-11

Reading note: This technology, "Device and method for diagnosing gastric lesions by deep learning of gastric endoscope images", was designed and created by 赵凡柱, 方昌锡, 朴世雨, 李在浚, 崔在镐, 洪锡焕 and 刘容倬 on 2019-09-25. Main content: The present invention relates to a method for diagnosing gastric lesions from endoscopic images, comprising: a step of obtaining a plurality of images of gastric lesions; a step of generating a data set by connecting the plurality of gastric lesion images and patient information; a step of preprocessing the data set so that it can be used by a deep learning algorithm; and a step of constructing an artificial neural network by learning with the preprocessed data set as an input and items concerning the gastric lesion diagnosis result as an output.

1. A lesion diagnostic method, in a method of diagnosing a gastric lesion from an endoscopic image, comprising:

a step of obtaining a plurality of images of gastric lesions;

a step of generating a data set by connecting the plurality of stomach lesion images and patient information;

a step of preprocessing the data set so that it can be used by a deep learning algorithm;

and a step of constructing an artificial neural network by learning with the preprocessed data set as an input and items concerning the gastric lesion classification result as an output.

2. The method for diagnosing a lesion according to claim 1, wherein:

the method further comprises a step of performing gastric lesion diagnosis through the artificial neural network after a new data set has been subjected to the preprocessing process.

3. The method for diagnosing a lesion according to claim 1, wherein:

the data set generation step divides the data set into a learning data set necessary for training the artificial neural network and a verification data set for verifying the progress of the training, and generates the data sets accordingly.

4. The method of diagnosing a lesion of claim 3, wherein:

the verification dataset is a dataset that does not overlap with the learning dataset.

5. The method of diagnosing a lesion of claim 3, wherein:

the verification data set is data used for evaluating the performance of the artificial neural network when the new data set is input to the artificial neural network after the preprocessing process.

6. The method for diagnosing a lesion according to claim 1, wherein:

the image obtaining step receives a gastric lesion image obtained from an imaging device provided in the endoscope device.

7. The method for diagnosing a lesion according to claim 1, wherein:

the preprocessing step comprises:

a step of cropping, from each gastric lesion image included in the data set, the peripheral region that does not contain the gastric lesion, centered on the lesion, to a size usable by the deep learning algorithm;

moving the stomach lesion image in parallel in the vertical and horizontal directions;

rotating the stomach lesion image;

flipping the stomach lesion image; and

adjusting the color of the stomach lesion image;

wherein at least one of these preprocessing processes is performed to bring the stomach lesion image into a state usable by the deep learning algorithm.

8. The method of diagnosing a lesion of claim 7, wherein:

the preprocessing step further comprises:

a step of augmenting the image data to increase the number of gastric lesion images;

wherein the augmentation step expands the gastric lesion image data using at least one of rotation, flipping, cropping, and noise addition of the gastric lesion image.

9. The method for diagnosing a lesion according to claim 1, wherein:

the artificial neural network construction step constructs a diagnosis model through learning of a convolutional neural network and a fully-connected neural network, with the preprocessed data set as input and items related to the gastric lesion classification result as output.

10. The method of diagnosing a lesion according to claim 9, wherein:

the preprocessed data set is used as the input of the convolutional neural network, and the fully-connected neural network takes the output of the convolutional neural network and the patient information as the input.

11. The method of diagnosing a lesion of claim 10, wherein:

the convolutional neural network outputs a plurality of feature patterns from the plurality of gastric lesion images, and the plurality of feature patterns are finally classified by the fully-connected neural network.

12. The method of diagnosing a lesion according to claim 9, wherein:

the artificial neural network construction step uses the training data in a deep learning algorithm structure including a convolutional neural network and a fully-connected neural network, and learns by feeding the error back through a backpropagation algorithm that incrementally adjusts the weight values of the neural network structure in accordance with the error.

13. The method of diagnosing a lesion according to claim 2, wherein:

the step of performing a diagnosis of gastric lesions classifies gastric lesions as at least one of advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and non-neoplasm.

14. A lesion diagnostic apparatus in an apparatus for diagnosing a lesion from an endoscopic image, comprising:

an image obtaining unit that obtains a plurality of stomach lesion images;

a data generating unit for generating a data set by connecting the plurality of gastric lesion images and patient information;

a data preprocessing unit for preprocessing the data set so as to be usable for a deep learning algorithm; and

a learning part for constructing an artificial neural network by learning with the preprocessed data set as an input and items regarding the gastric lesion classification result as an output.

15. The lesion diagnostic device according to claim 14, wherein:

the gastric lesion diagnosis device further comprises a lesion diagnosis part for performing gastric lesion diagnosis through the artificial neural network after a new data set has been subjected to the preprocessing process.

16. A computer-readable recording medium recording a program for executing the method of any one of claims 1 to 13 in a computer.

Technical Field

The present invention claims priority to Korean Patent Application No. 10-2018-0117823, filed on 10/2/2018, and the entire contents of the specification and drawings disclosed in that application are incorporated herein by reference.

The present invention relates to a device and a method for diagnosing gastric lesions by deep learning using an endoscopic image of the stomach.

Background

Cells, the smallest units that constitute the human body, normally maintain a balanced cell count through intracellular regulation of growth, division, and death. When cells are damaged for some reason, they can recover through repair and resume the role of normal cells; when they cannot recover, they die on their own. However, when cells proliferate abnormally and excessively because proliferation can no longer be regulated or inhibited for various reasons, forming tumors that invade surrounding tissues and organs and destroy normal tissue, this state is defined as cancer. Because such uninhibited cell proliferation destroys the structure and function of normal cells and organs, the diagnosis and treatment of cancer are extremely important.

Cancer is a disease in which the function of normal cells is impaired by the unlimited proliferation of cells; typical examples are lung cancer, gastric cancer (GC), breast cancer (BRC), and colorectal cancer (CRC), but cancer may occur in any tissue. Early cancer diagnosis relied on external changes of living tissue caused by the growth of cancer cells, but recently diagnosis and detection using small amounts of biological material present in blood or cells, such as sugar chains (glycans) and deoxyribonucleic acid (DNA), have been attempted. However, the most commonly used cancer diagnosis methods remain diagnosis using tissue samples obtained by biopsy and diagnosis using images.

Gastric cancer occurs frequently in Korea, Japan, and elsewhere in East Asia, but its incidence is low in Western countries such as the United States and Europe. In Korea, gastric cancer ranks first in incidence, and its mortality is second only to that of lung cancer. By type, 95% of all gastric cancers are adenocarcinomas arising in the glandular cells of the stomach-wall mucosa. In addition, there are lymphomas arising in the lymphatic system and gastrointestinal stromal tumors arising in the stromal tissue.

Among these, biopsy causes great pain to patients, is expensive, and takes a long time before a diagnosis is reached. In addition, when a patient actually has cancer, there is a risk that the biopsy itself induces metastasis, and when a tissue sample cannot be obtained by biopsy, the disease cannot be diagnosed until the suspicious tissue is surgically removed.

In image-based diagnosis, cancer is determined from an X-ray image, a nuclear magnetic resonance (NMR) image obtained using a contrast medium to which a disease-targeting substance is attached, or the like. Such image diagnosis, however, carries a risk of misdiagnosis depending on the proficiency of the clinician or reader, and its accuracy depends heavily on the precision of the imaging apparatus. Even the most precise instruments cannot detect tumors smaller than a few millimeters, making detection difficult in the early stage of disease onset. Moreover, to obtain the images, the patient or at-risk person is exposed to high-energy electromagnetic radiation that can induce gene mutations and cause other diseases, so the number of image-based diagnoses that can be performed is limited.

Most early gastric cancer (EGC) presents no clinical symptoms or signs, so without a screening strategy it is difficult to detect and treat in time. Meanwhile, patients with precancerous conditions such as gastric dysplasia carry a considerable risk of developing gastric cancer.

In the prior art, a doctor performing gastric endoscopy preliminarily judges whether a neoplasm arising in the stomach is gastric cancer based on the shape and size of the stomach interior shown in the endoscopic image, and confirms the diagnosis by tissue examination. However, this method has the problems that different diagnoses are reached depending on each doctor's experience, and that accurate diagnosis is unavailable in areas with no doctor.

In addition, the detection of abnormal lesions with endoscopic devices generally depends on the abnormal shape of the lesion or color changes of the mucosa, and diagnostic accuracy is improved by training, optical techniques, and chromoendoscopy. Endoscopic imaging techniques such as narrow-band imaging, confocal imaging, and magnification (so-called image-enhanced endoscopy) can further improve diagnostic accuracy.

However, examination with a white-light endoscope alone remains the most common examination method, and image-enhanced endoscopy requires standardization of procedures and analysis workflows to resolve variability between observers and between endoscopes.

The background art of the present invention is disclosed in Korean Laid-open Patent Publication No. 10-2018-0053957.

Disclosure of Invention

Problems to be solved by the invention

The present invention has been made to overcome the drawbacks of the prior art, and an object of the present invention is to provide a gastric lesion diagnosis apparatus that can collect white-light gastric endoscopic images obtained from an endoscopic imaging device and diagnose gastric lesions using a deep learning algorithm.

The present invention is directed to overcome the deficiencies of the prior art and to provide a gastric lesion diagnostic device that provides a deep learning model for automatically classifying gastric tumors based on endoscopic images of the stomach.

The present invention has been made to overcome the disadvantages of the prior art, and an object of the present invention is to provide a gastric lesion diagnostic apparatus capable of evaluating in real time a plurality of image data obtained when a doctor (user) examines gastric tumors using an endoscopic apparatus, thereby diagnosing gastric tumors that may be overlooked.

The present invention has been made to overcome the drawbacks of the prior art, and an object of the present invention is to provide a gastric lesion diagnostic apparatus which can automatically classify a gastric neoplasm based on an endoscopic image of a stomach obtained in real time, thereby diagnosing and predicting gastric cancer, gastric dysplasia, or the like.

However, the technical problems to be achieved by the present invention and the embodiments of the present invention are not limited to the above technical problems, and other technical problems may be present.

Means for solving the problems

As an aspect of solving the above technical problem, a method for diagnosing a gastric lesion from an endoscopic image according to an embodiment of the present invention may include: a step of obtaining a plurality of gastric lesion images; a step of generating a data set by connecting the plurality of gastric lesion images and patient information; a step of preprocessing the data set so that it can be used by a deep learning algorithm; and a step of constructing an artificial neural network by learning with the preprocessed data set as an input and items concerning the gastric lesion classification result as an output.

The method for diagnosing gastric lesion from endoscopic images according to an embodiment of the present invention further includes the step of performing gastric lesion diagnosis through the artificial neural network after the preprocessing process is performed on the new data set.

The data set generation step according to an embodiment of the present invention may generate the data set by dividing the data set into a learning data set necessary for learning the artificial neural network and a verification data set for verifying a degree of progress of learning of the artificial neural network.

The verification dataset according to an embodiment of the present invention may be a dataset that does not overlap with the learning dataset.

The data set for verification according to an embodiment of the present invention may be data used for performance evaluation of the artificial neural network when the new data set becomes an input of the artificial neural network after the preprocessing process.

The image obtaining step according to an embodiment of the present invention may receive a gastric lesion image obtained from an imaging device provided in an endoscope device.

The preprocessing step according to an embodiment of the present invention may include: a step of cropping (crop), from each gastric lesion image included in the data set, the peripheral region that does not contain the gastric lesion, centered on the lesion, to a size usable by the deep learning algorithm; a step of shifting (shift) the gastric lesion image vertically and horizontally; a step of rotating (rotation) the gastric lesion image; a step of flipping (flipping) the gastric lesion image; and a step of adjusting the color (color adjustment) of the gastric lesion image; wherein at least one of these preprocessing processes is performed to bring the gastric lesion image into a state usable by the deep learning algorithm.

According to an embodiment of the present invention, the preprocessing step may further include a step of augmenting the image data to increase the number of gastric lesion images, wherein the augmentation step expands the gastric lesion image data using at least one of rotation, flipping, cropping, and noise addition of the gastric lesion image.

The artificial Neural network constructing step according to an embodiment of the present invention constructs a diagnostic model through learning of a Convolutional Neural network (Convolutional Neural Networks) and a Fully-connected Neural network (Fully-connected Neural Networks) that take the data set subjected to the preprocessing process as an input and take items related to the classification result of the gastric lesion as an output.

The preprocessed data set may be input to the convolutional neural network, and the fully-connected neural network may take as input the output of the convolutional neural network and the patient information.

According to an embodiment of the present invention, the convolutional neural network may output a plurality of feature patterns from the plurality of gastric lesion images, and the plurality of feature patterns are finally classified by the fully-connected neural network.

According to the artificial neural network construction step of an embodiment of the present invention, the training data may be used in a deep learning algorithm structure including a Convolutional Neural network (Convolutional Neural network) and a Fully-connected Neural network (Fully-connected Neural network), and learning may be performed by feeding the error back through a backpropagation (backpropagation) algorithm that incrementally adjusts the weight values of the neural network structure in accordance with the error.

The above-described step of performing a gastric lesion diagnosis according to an embodiment of the present invention may classify gastric lesions into at least one of advanced gastric cancer (advanced gastric cancer), early gastric cancer (early gastric cancer), high-grade dysplasia (high-grade dysplasia), low-grade dysplasia (low-grade dysplasia), and non-neoplasm (non-neoplasia).

A lesion diagnostic apparatus according to an embodiment of the present invention, in an apparatus for diagnosing a lesion from an endoscopic image, may include: an image obtaining unit that obtains a plurality of stomach lesion images; a data generating unit for generating a data set by connecting the plurality of gastric lesion images and patient information; a data preprocessing unit for preprocessing the data set so as to be usable for a deep learning algorithm; and a learning unit that constructs an artificial neural network by learning with the data set subjected to the preprocessing process as an input and items regarding the classification result of the gastric lesion as an output.

The apparatus for diagnosing a lesion from an endoscopic image according to an embodiment of the present invention may further include a lesion diagnosing part for performing a diagnosis of a gastric lesion through the artificial neural network after the preprocessing process is performed on the new data set.

The above-described solutions are exemplary only and should not be construed as limiting the present invention. In addition to the exemplary embodiments described above, additional embodiments may exist in the drawings and the detailed description of the invention.

Effects of the invention

According to the solution of the above-described object of the present invention, white-light gastric endoscopic images obtained from an endoscopic imaging device can be collected, and gastric lesions can be diagnosed using a deep learning algorithm.

According to the solution of the present invention, it is possible to provide a deep learning model for automatically classifying gastric tumors based on endoscopic images of the stomach and evaluating the generated artificial neural network.

According to the solution of the above-described object of the present invention, it is possible to learn in real time a plurality of image data obtained when a doctor (user) inspects gastric neoplasia with an endoscope apparatus, and to diagnose gastric neoplasia that may be overlooked.

According to the solution of the above-described problems of the present invention, compared with conventional endoscopic interpretation, which relies heavily on experience, learning the images obtained by the endoscopic imaging device and classifying gastric lesions can significantly save cost and labor.

According to the solution of the above-described object of the present invention, by predicting and diagnosing gastric lesions from gastric endoscope images obtained from an endoscopic imaging device, the gastric lesion diagnosis device can produce objective and consistent interpretation results, reduce the possibility of reader error and misinterpretation, and serve as a clinical decision aid.

However, the effects of the present invention are not limited to the above-mentioned effects, and other effects may be present.

Brief description of the drawings

Fig. 1 is a schematic configuration diagram of a lesion diagnostic apparatus according to an embodiment of the present invention;

fig. 2 is a schematic block diagram of a lesion diagnostic apparatus according to an embodiment of the present invention;

fig. 3 is a schematic view for explaining an embodiment of constructing an artificial neural network in a lesion diagnostic apparatus according to an embodiment of the present invention;

fig. 4 is a flowchart illustrating an operation of a lesion diagnostic method according to an embodiment of the present invention.

Detailed Description

Embodiments of the present invention will be described in detail below with reference to the accompanying drawings to assist those skilled in the art in easily carrying out the invention. The present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to more clearly explain the present invention, the contents irrelevant to the explanation are omitted, and the same or similar structures are given the same reference numerals throughout the specification.

In the present invention, the term "connected" to a certain portion includes not only the case of "directly connected" but also the case of "electrically connected" or "indirectly connected" to another portion through another member.

In the present invention, when a certain component is referred to as being "on", "above", "upper", "lower", or "below" another component throughout the specification, this includes not only the case where the component directly contacts the other component but also the case where other components exist between the two components.

When it is said throughout the specification of the present invention that a certain portion "includes" a certain element, this means that other elements may also be included, rather than excluded, unless specifically stated otherwise.

The present invention relates to a device and a method for diagnosing gastric lesions, including a deep learning model that classifies gastric tumors based on gastric endoscopic images obtained from an endoscopic device and evaluates its performance. The invention interprets gastric endoscope pictures based on Convolutional Neural Networks to automatically diagnose gastric neoplasms.

The present invention applies a deep learning algorithm known as a convolutional neural network to an image data set of gastric endoscope pictures for computer-based learning, then interprets newly input gastric endoscope pictures; through this process, the gastric neoplasms in the pictures are automatically classified, and gastric cancer, gastric dysplasia, and the like can be diagnosed or predicted.

Fig. 1 is a schematic configuration diagram of a lesion diagnostic apparatus according to an embodiment of the present invention.

As shown in fig. 1, the lesion diagnostic apparatus 10, the endoscope apparatus 20, and the display apparatus 23 can transmit and receive data (images, videos, text) and various communication signals via a network. The lesion diagnostic system 1 may include any kind of server, terminal, or device having data storage and processing functions.

Examples of the network used for information sharing among the lesion diagnostic apparatus 10, the gastric endoscope apparatus 20, and the display apparatus 23 include, but are not limited to, a 3GPP (3rd Generation Partnership Project) network, an LTE (Long Term Evolution) network, a 5G network, a WiMAX (Worldwide Interoperability for Microwave Access) network, wired or wireless Internet, a LAN (Local Area Network), a wireless LAN (Wireless Local Area Network), a WAN (Wide Area Network), a PAN (Personal Area Network), a Bluetooth network, a Wi-Fi network, an NFC (Near Field Communication) network, a satellite broadcasting network, an analog broadcasting network, and a DMB (Digital Multimedia Broadcasting) network.

The endoscope apparatus 20 may be a device used in gastric endoscopy, and may include a main body portion 22 that is inserted into the body and an operation portion 21 provided at the rear end of the main body portion 22. The main body 22 may include an imaging unit for imaging the inside of the body, an illumination unit for illuminating the imaging field, a water jet unit for cleaning the inside of the body to facilitate imaging, and a suction unit for sucking foreign matter, air, and the like out of the body, and channels corresponding to these units may be provided in the main body 22. In addition, a biopsy channel (biopsy channel) may be provided in the insertion portion, and an endoscope operator may insert a surgical knife through the biopsy channel to collect tissue inside the body. The imaging unit (i.e., the camera) provided in the endoscope apparatus 20 for imaging the inside of the body may be a small camera, and can obtain white-light endoscopic images.

The imaging section of the endoscope apparatus 20 can transmit the obtained gastric lesion image to the lesion diagnostic apparatus 10 through the network. The lesion diagnostic apparatus 10 may generate a control signal for controlling the biopsy unit based on the gastric lesion diagnosis result. The biopsy unit is a unit that collects tissue inside the body; the collected tissue can be judged positive or negative, and cancerous tissue can also be removed by harvesting it from within the body. For example, the lesion diagnostic device 10 may operate together with the endoscope apparatus 20, which obtains gastric endoscopic images and collects tissue inside the body. In other words, the gastric endoscopic image obtained in real time from the endoscope apparatus 20 is input to the artificial neural network via the learning means and classified into at least one item related to the gastric lesion diagnosis result, so that diagnosis and prediction of gastric lesions can be realized.

According to another embodiment of the present invention, the endoscope apparatus 20 may be formed in a capsule form. For example, the endoscope apparatus 20 formed as a capsule can be inserted into the body of the subject to obtain endoscopic images of the stomach. The capsule endoscope apparatus 20 can provide position information for the esophagus, stomach, small intestine, and large intestine of the subject. In other words, the capsule endoscope apparatus 20, located inside the body of the subject (patient), can provide the video (images) obtained in real time to the lesion diagnostic apparatus 10 through the network. At this time, because the capsule endoscope apparatus 20 provides not only the gastric endoscopic image but also the position information where the image was obtained, when the diagnostic classification result of the lesion diagnostic apparatus 10 belongs to at least one of advanced gastric cancer (advanced gastric cancer), early gastric cancer (early gastric cancer), high-grade dysplasia (high-grade dysplasia), and low-grade dysplasia (low-grade dysplasia), in other words, when the classification result corresponds to a dangerous neoplasm, the user (doctor) can locate the corresponding lesion position and directly perform the resection operation.

According to an embodiment of the present invention, the lesion diagnostic apparatus 10 performs a diagnosis of a gastric lesion using the endoscopic image of the gastric lesion obtained in real time by the endoscope apparatus 20 as input to the trained algorithm, and for a neoplastic lesion, the endoscope apparatus 20 can excise the corresponding lesion by endoscopic mucosal resection or endoscopic submucosal dissection.

The display device 23 may include, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, or a micro-electro-mechanical system (MEMS) display. The display device 23 can show the user the gastric endoscopic image obtained from the endoscope device 20 and the gastric lesion diagnostic information produced by the lesion diagnostic device 10. The display device 23 may include a touch screen and may receive, for example, touch, gesture, proximity, or hovering input using an electronic pen or a part of the user's body. The display device 23 can output the gastric lesion image obtained by the endoscope device 20, as well as the gastric lesion diagnosis result.

Fig. 2 is a schematic block diagram of a lesion diagnostic apparatus according to an embodiment of the present invention, and fig. 3 is a schematic diagram for explaining an embodiment of constructing an artificial neural network in the lesion diagnostic apparatus according to an embodiment of the present invention.

As shown in fig. 2, the lesion diagnostic apparatus 10 may include an image obtaining section 11, a data generating section 12, a data preprocessing section 13, a learning section 14, and a lesion diagnostic section 15. However, the structure of the lesion diagnostic device 10 is not limited to the above disclosure. For example, the lesion diagnostic device 10 may further include a database for storing information.

The image obtaining section 11 can obtain a plurality of gastric lesion images. The image obtaining unit 11 can receive gastric lesion images from an imaging device provided in the endoscope apparatus 20, i.e., images obtained by the endoscopic imaging device (digital camera) during gastric endoscopic diagnosis and treatment. The image obtaining unit 11 can collect endoscopic white-light images of pathologically confirmed gastric lesions, and can also receive gastric lesion images from image storage devices and database systems of multiple hospitals. These image storage devices store the gastric lesion images obtained when gastric endoscopy is performed at those hospitals.

The image obtaining unit 11 may obtain images captured while varying the angle, direction, or distance with respect to a first region of the stomach to be examined. The image obtaining section 11 can obtain gastric lesion images in JPEG form; each image may be captured with a 35-degree field of view at a resolution of 1280x640 pixels. The image obtaining unit 11 can obtain images from which discrete marker information has been extracted for each gastric lesion image, and images in which the lesion is centered and the black frame region of the gastric lesion image has been removed.
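
As a purely illustrative sketch, not part of the claimed invention, the removal of the black frame region could proceed as follows; the file name, intensity threshold, and choice of PIL/NumPy are assumptions made for illustration:

```python
# Illustrative sketch: trim the near-black frame from an endoscopic JPEG so
# that the lesion-centered region remains. Threshold and file name are
# assumptions.
import numpy as np
from PIL import Image

def trim_black_frame(path: str, threshold: int = 10) -> Image.Image:
    """Remove near-black border rows and columns from an endoscopic image."""
    img = np.asarray(Image.open(path).convert("RGB"))
    mask = img.max(axis=2) > threshold       # True where the pixel is not black
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return Image.fromarray(img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1])

lesion_img = trim_black_frame("gastric_lesion_0001.jpg")  # hypothetical file
```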

In contrast, when the image obtaining unit 11 obtains a low-quality or low-resolution image during image acquisition, such as a defocused image or one with artifacts, the image may be discarded. In other words, the image obtaining section 11 may discard images that are unsuitable for the deep learning algorithm.

According to an embodiment of the present invention, the endoscope apparatus 20 can control the imaging section using the operation section 21. The operation unit 21 can receive an operation input signal from the user so that the position of a target lesion comes within the field of view of the imaging unit, and can control the position of the imaging section based on that signal. In addition, when the field of view of the imaging section is at the position of the target lesion, the operation section 21 can obtain an operation input signal for capturing the image and generate a signal for capturing the corresponding gastric lesion image.

According to another embodiment of the present invention, the endoscope apparatus 20 may be formed as a capsule. The capsule endoscope apparatus 20 is inserted into the body of the subject (inspection object) and can be operated remotely. The gastric lesion images obtained from the capsule endoscope apparatus include not only images of the area the user wishes to capture but also data obtained by converting the entire captured video into images. The capsule endoscope apparatus 20 may include an imaging section and an operating section; the imaging section is inserted into the body and is controlled inside the body based on operation signals from the operation section.

The data generating unit 12 may generate a data set by connecting the plurality of gastric lesion images with patient information. The patient information may include various information such as the sex, age, height, weight, race, nationality, smoking amount, drinking amount, and family history of the subject. Additionally, the patient information may include clinical information, meaning all data that the diagnosing physician uses for a particular diagnosis; in particular, this includes electronic medical record data such as the sex and age data generated during diagnosis and treatment, data on whether special treatment was given, and insurance claim and prescription data. The clinical information may also include biological data such as genetic information, and the biological data may include personal health information such as heart rate, electrocardiogram, activity level, oxygen saturation, blood pressure, weight, and diabetes data.

The patient information is data that is input to the fully-connected neural network together with the output of the convolutional neural network in the learning unit 14 described below; by giving the artificial neural network information beyond the gastric lesion image, accuracy can be further improved.
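
A minimal sketch of how one record of such a data set might connect an image with patient information; all field names are illustrative assumptions, not the patent's schema:

```python
# Illustrative record "connecting" a gastric lesion image with patient
# information; field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LesionRecord:
    image_path: str                  # white-light endoscopic image
    sex: str                         # patient information fed to the network
    age: int
    smoking: Optional[str] = None
    family_history: Optional[str] = None
    label: Optional[str] = None      # e.g. "EGC"; filled in for training data

record = LesionRecord("gastric_lesion_0001.jpg", sex="F", age=63, label="EGC")
```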

The data generation unit 12 may generate a learning data set and a verification data set for the deep learning algorithm, dividing the data into a learning data set needed to train the artificial neural network and a verification data set for verifying the progress of training. For example, the data generating unit 12 may randomly select, from the gastric lesion images obtained by the image obtaining unit 11, the images for the verification data set, and use the remaining data as the learning data set. The verification data set may be selected at random, and the ratio of verification to learning data may be determined based on a predetermined reference value; for example, the reference may be set so that the verification data set makes up 10% and the learning data set 90% of the data, but the ratio is not limited thereto.
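
The random 90/10 split described above might be sketched as follows; the ratio is the reference value given in the text, and everything else is an assumption:

```python
# Illustrative 90/10 random split into learning and verification sets.
import random

def split_dataset(records, val_ratio=0.10, seed=42):
    shuffled = records[:]                        # copy; original order is kept
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_ratio)
    return shuffled[n_val:], shuffled[:n_val]    # (learning, verification)

all_records = list(range(1000))                  # stand-in for lesion records
learning_set, verification_set = split_dataset(all_records)
```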

The data generating unit 12 separates the learning data set from the verification data set in order to prevent overfitting. For example, depending on the learning characteristics of the neural network structure, the network may overfit the learning data set, and the data generation unit 12 can use the verification data set to keep the artificial neural network from entering an overfitted state.

In this case, the verification data set may be a data set that does not overlap with the learning data set. Since the verification data are not used to construct the artificial neural network, they are first exposed to the network only when the verification operation is performed. Therefore, the verification data set is suitable for evaluating the performance of the artificial neural network on new images (images not used for learning).

The preprocessing section 13 preprocesses the data set so that it can be used by the deep learning algorithm. The preprocessing unit 13 may preprocess the data set to improve recognition performance in the deep learning algorithm and to reduce image similarity between patients. The deep learning algorithm can be composed of two parts: a Convolutional Neural network (Convolutional Neural Networks) structure and a Fully-connected Neural network (Fully-connected Neural Networks) structure.

According to an embodiment of the present invention, the preprocessing section 13 may perform a five-step preprocessing process. First, the preprocessing portion 13 may perform a cropping (crop) step, which removes the unnecessary edge portion (black background) of the gastric lesion image obtained from the image obtaining section 11, keeping the lesion at the center. For example, the preprocessing unit 13 may crop the gastric lesion image to an arbitrarily specified pixel size (e.g., 299x299 pixels or 244x244 pixels); in other words, the preprocessing section 13 may crop the gastric lesion image to a size usable by the deep learning algorithm.

Next, the preprocessing section 13 may perform a parallel shift (shift) step, moving the gastric lesion image vertically and horizontally. The preprocessing section 13 may also perform a flipping (flipping) step: for example, it may flip the gastric lesion image vertically, or flip it vertically and then horizontally.

In addition, the preprocessing section 13 may perform a color adjustment (color adjustment) step. For example, the preprocessing section 13 may adjust the colors of an image by mean subtraction, using the average RGB values computed over the entire data set, or may adjust the color of the gastric lesion image randomly.

The preprocessing section 13 can turn a gastric lesion image into a data set usable by the deep learning algorithm either by performing all five preprocessing steps or by performing any one of them.
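
An illustrative sketch of the crop, shift, flip, and color adjustment steps using PIL/NumPy; the 299x299 target size and mean-RGB subtraction follow the text, while the shift offsets, mean values, and file name are assumptions:

```python
# Illustrative preprocessing: lesion-centered crop to 299x299, small shift,
# vertical-then-horizontal flip, and mean-RGB color adjustment.
import numpy as np
from PIL import Image

def preprocess(img, mean_rgb):
    # 1. Crop: keep a centered square and scale it to 299x299 pixels.
    w, h = img.size
    side = min(w, h)
    box = ((w - side) // 2, (h - side) // 2, (w + side) // 2, (h + side) // 2)
    arr = np.asarray(img.crop(box).resize((299, 299)), dtype=np.float32)
    # 2. Shift: translate a few pixels vertically and horizontally.
    arr = np.roll(arr, shift=(4, -4), axis=(0, 1))
    # 3. Flip: mirror vertically, then horizontally.
    arr = arr[::-1, ::-1, :]
    # 4. Color adjustment: subtract the data set's average RGB values.
    return arr - mean_rgb

mean_rgb = np.array([120.0, 80.0, 70.0], dtype=np.float32)   # hypothetical
x = preprocess(Image.open("gastric_lesion_0001.jpg"), mean_rgb)
```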

In addition, the preprocessing section 13 may also perform a scaling (resizing) step, which enlarges or reduces the gastric lesion image to a preset size.

The preprocessing section 13 may include an augmentation section (not shown) that augments the image data to increase the number of gastric lesion images.

According to an embodiment of the present invention, a deep learning algorithm that includes a convolutional neural network performs better as the amount of data grows, but far fewer gastric endoscopic photographs are taken than images in other kinds of examinations, and the amount of gastric lesion image data collected by the image obtaining portion 11 falls far short of what a convolutional neural network requires. Accordingly, the augmentation section (not shown) may perform a data augmentation (augmentation) process on the learning data set, using at least one of rotation, flipping, cropping, and noise addition of the gastric lesion image.
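
A sketch of this augmentation step under the same assumptions; each operation yields one additional training image, and the parameter values are illustrative:

```python
# Illustrative augmentation: rotation, flipping, cropping, and noise
# addition, each producing one extra training image.
import numpy as np

def augment(arr, rng):
    yield np.rot90(arr, k=int(rng.integers(1, 4)))      # rotation
    yield arr[:, ::-1, :]                               # horizontal flip
    yield arr[8:-8, 8:-8, :]                            # border crop
    yield arr + rng.normal(0.0, 5.0, arr.shape)         # Gaussian noise

rng = np.random.default_rng(0)
arr = np.zeros((299, 299, 3), np.float32)   # stand-in preprocessed image
extra_images = list(augment(arr, rng))
```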

The preprocessing section 13 performs the preprocessing process according to a preset reference value, which may be a value arbitrarily designated by the user or a value determined from the average of the obtained gastric lesion images. The data set passed through the preprocessing section 13 may then be supplied to the learning section 14.

The learning section 14 may construct an artificial neural network by learning with the data set subjected to the preprocessing process as an input and the items on the classification result of the gastric lesion as an output.

According to an embodiment of the present invention, the learning unit 14 may output the gastric lesion classification result using a deep learning algorithm consisting of a Convolutional Neural network (Convolutional Neural Networks) structure and a Fully-connected Neural network (Fully-connected Neural Networks) structure. In the fully-connected neural network, nodes are connected two-dimensionally across layers, with no connections between nodes in the same layer and connections only between nodes in adjacent layers.

The learning section 14 may construct a training model by training the convolutional neural network with the preprocessed learning data set as input, and training the fully-connected neural network with the output of the convolutional neural network as its input.

According to an embodiment of the invention, the convolutional neural network can output a plurality of specific feature patterns for analyzing the gastric lesion image, and the extracted feature patterns may be used for the final classification in the fully-connected neural network.

Convolutional Neural Networks (Convolutional Neural Networks) are a type of neural network used mainly for speech recognition and image recognition. They can process multi-dimensional array data and are particularly suited to multi-dimensional arrays such as color images. Therefore, most deep learning techniques in the field of image recognition are based on convolutional neural networks.

For example, as shown in fig. 3, a Convolutional Neural Network (CNN) processes an image not as a single block of data but by dividing it into multiple local patches. In this way, local features of the image can be extracted even if the image is distorted, so that correct performance can be obtained.

The convolutional neural network may be composed of a plurality of layers, each built from convolutional layers, activation functions, max pooling layers, and dropout layers. A convolutional layer acts as a set of filters called kernels, processing the entire image (or previously generated feature patterns) piece by piece to extract feature patterns (feature patterns) of the same size as the input. The values in a feature pattern may then be transformed by an activation function so that they are easier to process. The max pooling layer samples (sampling) part of the gastric lesion image to resize it, thereby reducing the size of the image. Although the convolutional neural network reduces the size of the feature patterns through convolutional layers and max pooling layers, it can extract many feature patterns by using multiple kernels. The dropout layer is a method of intentionally ignoring some weight values during training for more effective learning; it is not used when actual testing is performed with the trained model.
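
For illustration only, the layer types named above (convolution, activation, max pooling, dropout) could be assembled in Keras as follows; the filter counts and sizes are assumptions, not the architecture claimed by the patent:

```python
# Illustrative Keras stack of the layer types named above; filter counts
# and kernel sizes are assumptions.
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same",
                           input_shape=(299, 299, 3)),  # kernel filters
    tf.keras.layers.Activation("relu"),                 # activation function
    tf.keras.layers.MaxPooling2D(2),                    # down-samples the map
    tf.keras.layers.Conv2D(64, 3, padding="same"),
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Dropout(0.5),            # ignored when testing the model
    tf.keras.layers.GlobalAveragePooling2D(),          # feature patterns out
])
```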

The feature patterns (feature patterns) extracted by the convolutional neural network are passed to the fully-connected neural network, the next stage, for the classification operation. The number of layers in the convolutional neural network can be adjusted according to the amount of training data used for model training, which makes it possible to build a more stable model.

The learning unit 14 may construct a diagnosis (training) model by learning, with the preprocessed learning data set input to the convolutional neural network, and the output of the convolutional neural network together with the patient information input to the fully-connected neural network. Put differently, the learning unit 14 first feeds the preprocessed image data to the convolutional neural network, then feeds the result output by the convolutional neural network to the fully-connected neural network. The learning unit 14 may also feed arbitrarily extracted features (features) directly to the fully-connected neural network without passing through the convolutional neural network.
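
A hedged sketch of this combined structure: the convolutional network's output is concatenated with the patient information before the fully-connected network; "cnn" is the Sequential model from the sketch above, and all sizes are assumptions:

```python
# Illustrative combined model: CNN features concatenated with patient
# information, then classified by a fully-connected network.
import tensorflow as tf

image_in = tf.keras.Input(shape=(299, 299, 3), name="lesion_image")
info_in = tf.keras.Input(shape=(8,), name="patient_info")   # age, sex, ...

features = cnn(image_in)                       # cnn from the sketch above
merged = tf.keras.layers.Concatenate()([features, info_in])
hidden = tf.keras.layers.Dense(128, activation="relu")(merged)
output = tf.keras.layers.Dense(5, activation="softmax")(hidden)  # 5 classes

model = tf.keras.Model([image_in, info_in], output)
```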

In this case, the patient information is as described above for the data generating unit 12: information such as the subject's sex, age, height, weight, race, nationality, smoking amount, drinking amount, and family history, together with clinical information (all data used by the diagnosing physician for a particular diagnosis, including electronic medical record data generated during diagnosis and treatment, data on special treatment, and insurance claim and prescription data) and biological data such as genetic information and personal health information (heart rate, electrocardiogram, activity level, oxygen saturation, blood pressure, weight, diabetes, and the like).

The patient information is input to the fully-connected neural network together with the output of the convolutional neural network in the learning unit 14; using it as an additional input to the artificial neural network can further improve accuracy compared with results derived from the gastric lesion image alone.

For example, if it is learned from the clinical information in the learning data set that cancer occurs mostly in elderly people, then when an age such as 42 or 79 is input together with the image features and a gastric lesion is hard to classify as cancer or benign, the result for the elderly patient may be weighted toward cancer.

The learning unit 14 may perform learning by comparing the result derived from the training data by the deep learning algorithm structure (a structure including the convolutional neural network and the fully-connected neural network) with the actual result, and feeding the error back through a backpropagation (back propagation) algorithm that incrementally adjusts the weight values of the neural network structure according to the error. The backpropagation algorithm adjusts the weight value from each node to the next node so as to reduce the error of the result (the difference between the actual value and the output value). The learning unit 14 may train the neural network using the learning data set and the verification data set, and derive the final diagnosis model by obtaining the weight parameters.
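
The training step described above, sketched with stand-in data; the optimizer, loss, and hyperparameters are illustrative assumptions, and "model" is the combined model from the previous sketch:

```python
# Illustrative training step: errors are propagated backwards to adjust the
# weight values, and the verification set monitors overfitting.
import numpy as np

train_images = np.zeros((16, 299, 299, 3), np.float32)   # stand-in data
train_info = np.zeros((16, 8), np.float32)
train_labels = np.zeros((16,), np.int64)

model.compile(optimizer="adam",                        # gradient-based update
              loss="sparse_categorical_crossentropy",  # error to propagate
              metrics=["accuracy"])

model.fit({"lesion_image": train_images, "patient_info": train_info},
          train_labels,
          validation_data=({"lesion_image": train_images,
                            "patient_info": train_info}, train_labels),
          epochs=30, batch_size=32)
```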

The lesion diagnostic section 15 performs gastric lesion diagnosis through the artificial neural network after a new data set has been preprocessed. In other words, the lesion diagnostic unit 15 can derive a diagnosis for new data using the final diagnosis model derived by the learning unit 14 described above. The new data may contain gastric lesion images that the user wishes to diagnose, and the new data set may be generated by connecting the new gastric lesion images with patient information. The new data set is preprocessed by the preprocessing process of the preprocessing section 13 into a state usable by the deep learning algorithm; it is then input to the learning section 14, and the gastric lesion image is diagnosed based on the learned parameters.

According to an embodiment of the present invention, the lesion diagnostic unit 15 may classify the gastric lesion into at least one of advanced gastric cancer (advanced gastric cancer), early gastric cancer (early gastric cancer), high-grade dysplasia (high-grade dysplasia), low-grade dysplasia (low-grade dysplasia), and non-neoplasm (non-neoplasia). The lesion diagnostic portion 15 may also classify lesions into cancer and non-cancer, or distinguish gastric lesion diagnoses into the two broader ranges of neoplasm and non-neoplasm. The neoplasm range may include AGC, EGC, HGD, and LGD; the non-neoplasm range may include lesions such as gastritis, benign ulcers, malformations, polyps, intestinal metaplasia, or epithelial tumors.
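
A sketch of mapping the five diagnostic categories to the model's softmax output; the class list follows the text, while "model" and the validation arrays are stand-ins from the previous sketches:

```python
# Illustrative mapping of the five categories to the softmax output; the
# first four form the neoplasm range.
import numpy as np

CLASSES = ["advanced gastric cancer", "early gastric cancer",
           "high-grade dysplasia", "low-grade dysplasia", "non-neoplasm"]
NEOPLASM = set(CLASSES[:4])                  # AGC, EGC, HGD, LGD

val_images = np.zeros((4, 299, 299, 3), np.float32)   # stand-in data
val_info = np.zeros((4, 8), np.float32)
scores = model.predict({"lesion_image": val_images, "patient_info": val_info})
for s in scores:
    label = CLASSES[int(np.argmax(s))]
    print(label, "->", "neoplasm" if label in NEOPLASM else "non-neoplasm")
```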

By classifying and diagnosing ambiguous lesions, the lesion diagnostic apparatus 10 can reduce the side effects of unnecessary biopsy or endoscopic resection: it analyzes the images obtained by the endoscope apparatus 20, automatically classifies and diagnoses the ambiguous lesion, and endoscopic resection is performed only in the case of a neoplasm (dangerous tumor).

According to another embodiment of the present invention, the endoscope apparatus 20 may include an operation portion 21, a main body portion 22, a control portion 23, a lesion position obtaining portion 24, and a display portion 25.

The operation unit 21 is provided at the rear end of the main body 22 and operates based on input information from the user. The operation unit 21 is the part held by the endoscope operator and is used to steer the main body unit 22 inserted into the body of the subject and to operate the plural unit devices housed in the main body 22 that are required for endoscopic procedures. The operation portion 21 may include a rotation control portion, which may include parts responsible for generating control signals and providing rotational force (e.g., a motor). The operation section 21 may also include buttons for operating the photographing section (not shown); these buttons control the position of the imaging unit, allowing the user to move the main body 22 up, down, left, right, forward, and backward.

The main body 22 is a portion to be inserted into a body of a test subject, and can accommodate a plurality of unit devices. The plurality of unit devices may include at least one of a photographing part (not shown) photographing the inside of the body of the test object, an air supply unit supplying air to the inside of the body, a water supply unit supplying water to the inside of the body, an illumination unit irradiating light to the inside of the body, a biopsy (biopsy) unit collecting or treating a portion of tissue inside the body, and a suction unit sucking air or foreign substances from the inside of the body. The biopsy unit may include various medical instruments such as a surgical knife, a needle, etc. that collect a portion of tissue from a living body, and is inserted into the body through a biopsy channel by an endoscope operator and collects cells in the body.

The imaging unit (not shown) can house a camera sized to the diameter of the main body 22. It is provided at the distal end of the main body 22, captures the gastric lesion image, and supplies the captured image to the lesion diagnostic apparatus 10 and the display unit 25 via the network.

The control unit 23 can generate a control signal for controlling the operation of the main body unit 22 based on the user input information supplied from the operation unit 21 and the diagnosis result of the lesion diagnostic apparatus 10. When a selection input from the user is received through a button on the operation unit 21, the control unit 23 generates the corresponding control signal for the main body unit 22. For example, when the user presses the button for advancing the main body 22, the control unit 23 generates a control signal that advances the main body 22 at a constant speed inside the body of the subject (patient), and the main body portion 22 advances accordingly.

The control unit 23 may generate a control signal for controlling the operation of the imaging unit (not shown); such a signal causes the imaging unit positioned at a lesion region to capture a gastric lesion image. In other words, when the user wishes to obtain an image with the imaging unit located at a specific lesion region, the user may press the capture button. The control unit 23 then generates a control signal, based on the input information obtained from the operation unit 21, so that the imaging unit obtains an image of the corresponding lesion region; it may also generate a control signal causing the imaging unit to extract a specific gastric lesion image from the video being captured.

Further, the control unit 23 generates a control signal to control the operation of the biopsy unit, which collects part of the tissue of the subject's body, based on the diagnosis result of the lesion diagnostic apparatus 10. When the diagnosis result of the lesion diagnostic apparatus 10 is at least one of advanced gastric cancer (advanced gastric cancer), early gastric cancer (early gastric cancer), high-grade dysplasia (high-grade dysplasia), and low-grade dysplasia (low-grade dysplasia), the control unit 23 generates a control signal that causes the biopsy unit to perform the resection operation. The biopsy unit may include medical instruments such as a surgical knife or needle that collect part of the tissue of a living body; it is inserted into the body through the biopsy channel by the endoscope operator and collects cells in the body. The control unit 23 may also generate such control signals from the user input signals supplied by the operation unit 21, so that the user can collect, excise, or remove cells in the body using the operation unit 21.
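
The decision logic just described might be sketched as follows; the function and set names are hypothetical:

```python
# Illustrative control logic: a biopsy/resection control signal is generated
# only when the diagnosis falls in the neoplasm range.
RESECTION_CLASSES = {"advanced gastric cancer", "early gastric cancer",
                     "high-grade dysplasia", "low-grade dysplasia"}

def biopsy_control_signal(diagnosis: str) -> bool:
    """Return True when the biopsy unit should perform a resection."""
    return diagnosis in RESECTION_CLASSES
```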

According to an embodiment of the present invention, the lesion position obtaining unit 24 may generate gastric lesion information by connecting the gastric lesion image supplied from the imaging unit (not shown) with position information, i.e., the position where the main body portion 22 is currently located inside the body. In other words, when the main body portion 22 is located at a first position in the stomach of the subject (patient) and obtains a gastric lesion image there, the lesion position obtaining portion 24 may generate the gastric lesion information by connecting that image with the position information.

The lesion position obtaining unit 24 may provide the user (doctor) with the gastric lesion information generated by connecting the obtained gastric lesion image with the position information. By presenting the diagnosis result of the lesion diagnostic apparatus 10 and the lesion information of the lesion position obtaining unit 24 to the user through the display unit 25, an excision (removal) operation can be prevented from being performed at a site other than the lesion position.

Further, when the biopsy unit is not located at the corresponding lesion position, the control unit 23 may use the position information supplied from the lesion position obtaining unit 24 to generate a control signal that adjusts the position of the biopsy unit.

Because the control signal for the biopsy unit is generated from the diagnosis result of the lesion diagnostic apparatus 10, cells can be collected or removed in the body immediately, allowing tissue examination to be completed more quickly. In addition, tissue for diagnosing gastric cancer can be excised directly during the endoscopic examination, enabling rapid treatment.

The operation flow of the present invention will be briefly described based on the above-described details.

Fig. 4 is a flowchart illustrating the operation of a method for diagnosing gastric lesion in an endoscopic image according to an embodiment of the present invention.

The method of diagnosing a gastric lesion in an endoscopic image shown in Fig. 4 can be performed by the lesion diagnostic apparatus 10 described above. Therefore, even where details are omitted below, the description of the lesion diagnostic apparatus 10 applies equally to the method for diagnosing a gastric lesion in an endoscopic image.

In step S401, the lesion diagnostic apparatus 10 may obtain a plurality of stomach lesion images. The lesion diagnostic apparatus 10 can receive a gastric lesion image obtained from an imaging device provided in the endoscope apparatus 20. The stomach lesion image may be a white light image.

In step S402, the lesion diagnostic apparatus 10 may generate a data set by connecting the plurality of stomach lesion images with the patient information. The lesion diagnostic apparatus 10 divides the data set into a learning data set necessary for learning the artificial neural network and a verification data set for verifying the learning progress of the artificial neural network. In this case, the verification data set may be a data set that does not overlap with the learning data set. The verification data set may be used to evaluate the performance of the artificial neural network when a new data set is input to the artificial neural network after the preprocessing process.
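
A minimal Python sketch of such a non-overlapping split is shown below; the validation ratio and random seed are assumed values that the patent does not fix:

    import random

    def split_dataset(records, val_ratio=0.1, seed=42):
        """Divide the connected (image, patient information) records into a
        learning set and a disjoint verification set."""
        rng = random.Random(seed)
        shuffled = list(records)
        rng.shuffle(shuffled)
        n_val = int(len(shuffled) * val_ratio)
        return shuffled[n_val:], shuffled[:n_val]  # (learning, verification)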

In step S403, the lesion diagnostic apparatus 10 preprocesses the data set so that it can be used for the deep learning algorithm. Using a gastric lesion image contained in the data set, the lesion diagnostic apparatus 10 may crop (crop) away the peripheral region that does not contain the gastric lesion, centered on the lesion, to a size usable by the deep learning algorithm. In addition, the lesion diagnostic apparatus 10 can shift (Shift) the stomach lesion image in parallel in the vertical and horizontal directions. In addition, the lesion diagnostic apparatus 10 may flip (flipping) the stomach lesion image. In addition, the lesion diagnostic apparatus 10 may adjust the color of the stomach lesion image. The lesion diagnostic apparatus 10 may preprocess the stomach lesion image into a state usable for the deep learning algorithm by performing at least one of these preprocessing processes.
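
The preprocessing operations above might be sketched in Python roughly as follows; the patch size, the wrap-around shift, and the color-scaling factor are illustrative assumptions rather than the patent's implementation:

    import numpy as np

    def crop_around_lesion(img: np.ndarray, cy: int, cx: int, size: int = 224) -> np.ndarray:
        # Cut away the periphery so the patch is centered on the lesion (cy, cx);
        # 224 x 224 is an assumed network input size.
        half = size // 2
        return img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]

    def shift(img: np.ndarray, dy: int, dx: int) -> np.ndarray:
        # Parallel shift in the vertical and horizontal directions.
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    def flip(img: np.ndarray, horizontal: bool = True) -> np.ndarray:
        return img[:, ::-1] if horizontal else img[::-1, :]

    def adjust_color(img: np.ndarray, factor: float = 1.1) -> np.ndarray:
        # Simple channel scaling as a stand-in for the color adjustment step.
        return np.clip(img.astype(np.float32) * factor, 0, 255).astype(img.dtype)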

In addition, the lesion diagnostic apparatus 10 may augment the image data to increase the number of stomach lesion images. The lesion diagnostic apparatus 10 may augment the image data using at least one of rotation, flipping, cropping, and noise addition applied to the stomach lesion image.
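
One possible augmentation routine, assuming right-angle rotations and Gaussian noise (neither of which is fixed by the patent), is sketched below:

    import numpy as np

    def augment(img: np.ndarray, rng: np.random.Generator) -> list:
        """Produce extra copies of one lesion image by rotation, flipping,
        and noise addition; the noise scale is an assumption."""
        out = [np.rot90(img, k) for k in (1, 2, 3)]           # 90/180/270 degrees
        out.append(img[:, ::-1])                              # horizontal flip
        noisy = img.astype(np.float32) + rng.normal(0.0, 5.0, img.shape)
        out.append(np.clip(noisy, 0, 255).astype(img.dtype))  # noise-added copy
        return out

For example, augment(image, np.random.default_rng(0)) yields five additional training samples from a single image.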

In step S404, the lesion diagnostic apparatus 10 may construct an artificial neural network by learning, with the data set subjected to the preprocessing process as input and items on the classification result of the gastric lesion as output. The lesion diagnostic apparatus 10 may construct a training model by learning a convolutional neural network (Convolutional Neural Network) and a fully-connected neural network (Fully-connected Neural Network) that take the preprocessed data set as input and produce items regarding the classification result of the gastric lesion as output.

In addition, the lesion diagnostic apparatus 10 inputs the preprocessed data set to the convolutional neural network, and the fully-connected neural network takes the output of the convolutional neural network together with the patient information as its inputs. The convolutional neural network can output a plurality of feature patterns from the plurality of stomach lesion images, and these feature patterns are finally classified by the fully-connected neural network.
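
A compact PyTorch sketch of this two-stage architecture follows; the layer sizes and the number of patient-information features are assumptions chosen for illustration, while the five output classes follow the categories listed in step S405:

    import torch
    import torch.nn as nn

    class LesionClassifier(nn.Module):
        """The convolutional network extracts feature patterns from the image;
        the fully-connected network classifies those features concatenated
        with the patient information."""
        def __init__(self, n_patient_features: int = 4, n_classes: int = 5):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (batch, 32) features
            )
            self.fc = nn.Sequential(
                nn.Linear(32 + n_patient_features, 64), nn.ReLU(),
                nn.Linear(64, n_classes),                     # class logits
            )

        def forward(self, image: torch.Tensor, patient: torch.Tensor) -> torch.Tensor:
            features = self.cnn(image)
            return self.fc(torch.cat([features, patient], dim=1))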

In step S405, the lesion diagnostic apparatus 10 performs a diagnosis of the gastric lesion through the artificial neural network after a new data set is subjected to the preprocessing process. The lesion diagnostic apparatus 10 may classify a gastric lesion into at least one of advanced gastric cancer (advanced gastric cancer), early gastric cancer (early gastric cancer), high-grade dysplasia (high-grade dysplasia), low-grade dysplasia (low-grade dysplasia), and non-neoplasm (non-neoplasm).
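
Continuing the LesionClassifier sketch above, inference on one preprocessed input could be expressed as follows; the ordering of the class names is an assumption:

    import torch

    CLASS_NAMES = ["advanced gastric cancer", "early gastric cancer",
                   "high-grade dysplasia", "low-grade dysplasia", "non-neoplasm"]

    @torch.no_grad()
    def diagnose(model, image: torch.Tensor, patient: torch.Tensor) -> str:
        """Run the trained network on one input and map the highest logit
        to a lesion category."""
        model.eval()
        logits = model(image, patient)
        return CLASS_NAMES[logits.argmax(dim=1).item()]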

In the above description, steps S401 to S405 may be further divided into additional steps or combined into fewer steps, depending on the embodiment of the present invention. In addition, some steps may be omitted as necessary, and the order of the steps may be changed.

The method for diagnosing a gastric lesion in an endoscopic image according to an embodiment of the present invention may be implemented in the form of program commands executable by various computer devices and recorded on a computer-readable recording medium. The computer-readable medium may include program commands, data files, data structures, and the like, alone or in combination. The program commands recorded on the medium may be specially designed and constructed for the present invention, or may be publicly available in the field of computer software. The computer-readable recording medium includes magnetic media (Magnetic Media) such as hard disks, floppy disks, and magnetic tapes; optical media (Optical Media) such as CD-ROMs and DVDs; magneto-optical media (Magneto-Optical Media) such as floptical disks; and hardware devices such as ROMs, RAMs, and flash memories that are configured to store and execute program commands. The program commands include not only machine language code generated by a compiler but also high-level language code executed on a computer using an interpreter or the like. The above-described hardware devices may be configured as one or more software modules that perform the operations of the present invention, and vice versa.

In addition, the aforementioned method of diagnosing a gastric lesion in an endoscopic image may also be implemented in the form of a computer program or application stored in a recording medium and implemented by a computer.

The above-described embodiments are intended to be illustrative only and not limiting, and it will be appreciated by those of ordinary skill in the art that changes, modifications, and equivalents may be made; the description should therefore be construed to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. For example, the individual components may be implemented in a dispersed manner, and the dispersed components may be combined.

The scope of the present invention is defined by the claims rather than by the foregoing description, and all modifications or variations derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.
