Dental target detection method and image integration method and device using dental target

Document No.: 1835376; Publication date: 2021-11-12

Note: This technology, Dental target detection method and image integration method and device using dental target (牙齿目标检测方法及利用牙齿目标的影像整合方法及装置), was designed and created by 金镇喆 and 金镇柏 on 2020-02-26. Its main content is as follows: The present invention relates to an image integration method and device using a dental target. An image integration method using a dental target according to an embodiment of the present invention includes: a step of generating a plurality of mutually spaced reference points in each of an oral scan image and a computed tomography (CT) image of an integration object; and a step of integrating the oral scan image and the computed tomography image of the integration object by using the reference points of the oral scan image (first reference points) and the reference points of the computed tomography image (second reference points), wherein the first and second reference points include a reference point for the foremost 1 tooth of the front tooth region and reference points for the rearmost 2 teeth on both sides of the molar region, and the first reference points are derived from targets that are simplified shapes of the teeth.

1. A method for detecting dental targets in an oral scan image, characterized by comprising the following steps:

a step of extracting a region of interest including teeth from an oral scan image of a learning object;

a step of generating a learning model by learning with learning data in which a target is formed for each tooth in a plurality of directions of the extracted region of interest; and

a step of detecting targets in a plurality of directions for each tooth in an oral scan image of a detection object by using the learning model.

2. The method as claimed in claim 1, further comprising a step of extracting information about the position, center point, and size of each detected target.

3. The method as claimed in claim 1, further comprising a step of displaying each detected target in the oral scan image of the detection object.

4. The method of claim 1, wherein the learning data includes targets in 2 or more different directions for each tooth, that is, targets in specific directions for each tooth.

5. The method of claim 4, wherein the specific directions include a planar direction and a direction other than the planar direction.

6. The method of claim 4, further comprising a step of forming, for each detected tooth, a three-dimensional target that includes 2 or more detected targets as its faces, and displaying the three-dimensional target on the oral scan image of the detection object.

7. The method as claimed in claim 6, wherein the step of displaying the three-dimensional target includes a step of displaying at least one of the position, center point, and size information of each detected target together with each detected target in the oral scan image of the detection object.

8. An image integration method using a dental target, characterized by comprising the following steps:

a step of generating a plurality of mutually spaced reference points in each of an oral scan image and a computed tomography image of an integration object; and

a step of integrating the oral scan image and the computed tomography image of the integration object by using the reference points of the oral scan image and the reference points of the computed tomography image, wherein the reference points of the oral scan image are first reference points and the reference points of the computed tomography image are second reference points,

the first and second reference points include a reference point for the foremost 1 tooth of the front tooth region and reference points for the rearmost 2 teeth on both sides of the molar region,

the first reference point is derived from a target that is a simplified shape of the tooth.

9. The image integration method using a dental target according to claim 8, wherein the generating of the reference points includes:

a step of generating a learning model by learning with learning data in which a target is formed for each tooth in a plurality of directions of an oral scan image of a learning object;

a step of detecting targets in a plurality of directions for each tooth in an oral scan image of the integration object using the generated learning model; and

a step of selecting a reference target from among the detected targets and generating a first reference point at the selected reference target.

10. The image integration method using a dental target according to claim 8, wherein the step of generating the reference points includes:

a step of generating a learning model by learning with learning data in which a target is formed, in a plurality of directions of an oral scan image of a learning object, for each of n teeth spaced apart from each other as a part of all teeth, wherein the n teeth are the subject teeth and n is a natural number of 3 or more;

a step of detecting targets in a plurality of directions for each of the subject teeth in an oral scan image of the integration object using the generated learning model; and

a step of selecting the detected targets as reference targets and generating a first reference point at each selected reference target.

11. The image integration method using a dental target according to claim 9 or 10, wherein the reference targets include a target for the foremost 1 tooth of the front tooth region and targets for the rearmost 2 teeth on both sides of the molar region.

12. The image integration method using a dental target according to claim 9 or 10, wherein the step of generating the reference points further comprises a step of selecting a center point of the selected reference target as the first reference point.

13. An image integration device, characterized by comprising:

a storage unit for storing an oral scan image and a computed tomography image of an integration object; and

a control unit for generating a plurality of mutually spaced reference points in each of the stored oral scan image and computed tomography image, and then integrating the oral scan image and the computed tomography image of the integration object by using the reference points of the oral scan image and the reference points of the computed tomography image, wherein the reference points of the oral scan image are first reference points and the reference points of the computed tomography image are second reference points,

the first and second reference points include a reference point for the foremost 1 tooth of the front tooth region and reference points for the rearmost 2 teeth on both sides of the molar region,

the first reference point is derived from a target that is a simplified shape of the tooth.

14. The image integration device according to claim 13, wherein the control unit detects targets in a plurality of directions for each tooth in the oral scan image of the integration object by using a learning model generated by learning with learning data in which a target is formed for each tooth in a plurality of directions of an oral scan image and a computed tomography image of a learning object, and then selects a reference target from among the detected targets and generates a first reference point at the selected reference target.

15. The image integration device according to claim 13, wherein the control unit detects targets in a plurality of directions for each of n subject teeth, spaced apart from each other as a part of all teeth, in the oral scan image of the integration object by using a learning model generated by learning with learning data in which a target is formed for each of the subject teeth in a plurality of directions of an oral scan image and a computed tomography image of a learning object, and then selects the detected targets as reference targets and generates a first reference point at each selected reference target, wherein n is a natural number of 3 or more.

Technical Field

The present invention relates to a method for detecting a target (object) for each tooth in an oral scan image (oral scan image), and to a method and apparatus that can perform image integration between an oral scan image and a computed tomography (CT) image of the inside of an oral cavity using the target of each tooth.

Background

In the dental field, various procedures are performed using various images of the interior of the oral cavity. Such images include oral scan images, computed tomography images, magnetic resonance images (MRI), and the like. The oral scan image is a three-dimensional image showing the surface state of the teeth, unlike computed tomography images and magnetic resonance images, which are three-dimensional images showing the internal state of the teeth.

Meanwhile, in order to use each tooth as a reference point for integration with a computed tomography image, to determine an implant placement position, to determine the arch shape, and the like, it is necessary to detect each tooth separately in the oral scan image. Conventionally, each tooth has been detected using curvature information in the oral scan image. However, in the conventional detection method, because the boundaries between teeth are blurred and the curvature information of teeth and gums is similar, detection errors occur frequently; moreover, the computational load (load) of the detection is large, which increases detection time and reduces effectiveness.

Meanwhile, in the field of computer vision, when the same object is photographed at different times, with different measurement methods, or from different viewpoints, images with different coordinate systems are acquired; image integration refers to processing that presents such different images in one coordinate system.

In particular, in the dental field, it is necessary to perform image integration between an oral scan image and a computed tomography image before procedures such as implant surgery. In this case, the integrated image can be important data for determining an optimal implant placement position by identifying bone tissue and nerve canal positions, and the like.

However, conventional methods for image integration between an oral scan image and a computed tomography image either perform integration using markers manually designated by a user in each image, or perform integration by comparing the distances between all vertices (vertex) included in each image. As a result, the conventional methods suffer from slow image integration due to the large computational load, and from inaccurate integration due to imprecise manual marking and the characteristics of the vertices.

Disclosure of Invention

Technical problem

In order to solve the above-described problems of the prior art, it is an object of the present invention to provide a method for detecting an object that can correspond to each tooth in an oral cavity scan image of the inside of the oral cavity.

Another object of the present invention is to provide a method and an apparatus for quickly and accurately performing image integration between an oral cavity scan image and a computed tomography scan image by using an object of each tooth in the oral cavity scan image and the computed tomography scan image of the inside of the oral cavity.

However, the problems to be solved by the present invention are not limited to the above-mentioned problems, and other problems not mentioned can be clearly understood from the following description by those skilled in the art to which the present invention pertains.

Technical scheme

The method for detecting dental targets in an oral scan image according to an embodiment of the present invention for solving the above problems includes: (1) a step of extracting a region of interest including teeth from an oral scan image of a learning object; (2) a step of generating a learning model by learning with learning data in which a target is formed for each tooth in a plurality of directions of the extracted region of interest; and (3) a step of detecting targets in a plurality of directions for each tooth in an oral scan image of a detection object by using the learning model.

The method for detecting dental targets in an oral scan image according to an embodiment of the present invention may further include a step of extracting information on the position, center point, and size of each detected target.

The method for detecting dental targets in an oral scan image according to an embodiment of the present invention may further include a step of displaying each detected target in the oral scan image of the detection object.

The learning data may contain targets in 2 or more different directions for each tooth, that is, targets in specific directions for each tooth.

The specific directions may include a planar direction and a direction other than the planar direction.

The method for detecting dental targets in an oral scan image according to an embodiment of the present invention may further include a step of forming, for each detected tooth, a three-dimensional target that includes 2 or more detected targets as its faces, and displaying the three-dimensional target on the oral scan image of the detection object.

The displaying step may include a step of displaying at least one of the position, center point, and size information of each detected target together with each detected target in the oral scan image of the detection object.

In order to solve the above problems, an image integration method using a dental target according to an embodiment of the present invention includes: (1) a step of generating a plurality of mutually spaced reference points in each of an oral scan image and a computed tomography image of an integration object; and (2) a step of integrating the oral scan image and the computed tomography image of the integration object by using the reference points (first reference points) of the oral scan image and the reference points (second reference points) of the computed tomography image.

The first and second reference points may include reference points for the foremost 1 tooth of the front tooth region and reference points for the rearmost 2 teeth on both sides of the molar region.

The first reference point may be derived from a target that is a simplified shape of the tooth.

The generating step may include: (1) a step of generating a learning model by learning with learning data in which a target is formed for each tooth in a plurality of directions of an oral scan image of a learning object; (2) a step of detecting targets in a plurality of directions for each tooth in an oral scan image of the integration object using the generated learning model; and (3) a step of selecting a reference target from among the detected targets and generating a first reference point at the selected reference target.

Alternatively, the generating step may include: (1) a step of generating a learning model by learning with learning data in which a target is formed for each of n teeth (subject teeth) spaced apart from each other as a part of all teeth (where n is a natural number of 3 or more) in a plurality of directions of an oral scan image of a learning object; (2) a step of detecting targets in a plurality of directions for each of the subject teeth in an oral scan image of the integration object using the generated learning model; and (3) a step of selecting the detected targets as reference targets and generating a first reference point at each selected reference target.

The reference targets may include a target for the foremost 1 tooth of the front tooth region and targets for the rearmost 2 teeth on both sides of the molar region.

The generating step may further include the step of selecting a center point of the selected reference target as the first reference point.

An image integration apparatus according to an embodiment of the present invention includes: (1) a storage unit for storing an oral scan image and a computed tomography image of an integration object; and (2) a control unit that generates a plurality of mutually spaced reference points in each of the stored oral scan image and computed tomography image and then integrates the oral scan image and the computed tomography image of the integration object using the reference points (first reference points) of the oral scan image and the reference points (second reference points) of the computed tomography image.

The first and second reference points may include reference points for the foremost 1 tooth of the front tooth region and reference points for the rearmost 2 teeth on both sides of the molar region.

The first reference point may be derived from a target that is a simplified shape of the tooth.

The control unit may detect targets in a plurality of directions for each tooth in the oral scan image of the integration object by using a learning model generated by learning with learning data in which a target is formed for each tooth in a plurality of directions of an oral scan image and a computed tomography image of a learning object, and may then select a reference target from among the detected targets and generate a first reference point at the selected reference target.

Alternatively, the control unit may detect targets in a plurality of directions for each of n subject teeth, spaced apart from each other as a part of all teeth (where n is a natural number of 3 or more), in the oral scan image of the integration object by using a learning model generated by learning with learning data in which a target is formed for each of the subject teeth in a plurality of directions of an oral scan image and a computed tomography image of a learning object, and may then select the detected targets as reference targets and generate a first reference point at each selected reference target.

Advantageous Effects of Invention

The present invention as described above has an effect that targets corresponding to each tooth can be easily detected in an oral scan image of the inside of the oral cavity, thereby reducing detection time and improving detection effectiveness.

Furthermore, the present invention can provide extraction information such as the position, center point, and size of each detected target, and this extraction information can be used for operations such as generating reference points for integration with a computed tomography image, determining an implant placement position, and determining the arch shape, thereby increasing usability.

Further, the present invention as described above can perform image integration between an oral scan image and a computed tomography image of the inside of the oral cavity using the target of each tooth, which can be extracted quickly and accurately, thereby improving the speed and accuracy of the image integration.

Drawings

Fig. 1 is a block diagram of an image integration apparatus 100 according to an embodiment of the invention.

Fig. 2 is a flowchart illustrating an image integration method using a dental target according to an embodiment of the present invention.

Fig. 3 shows a tooth area including a front tooth area FA and a molar area BA.

Fig. 4 is a detailed flowchart of step S100 of the image integration method using a dental target according to an embodiment of the present invention.

Fig. 5 shows a state in which the region of interest is extracted from the first learning object oral scan image.

Fig. 6 shows a state in which targets for 4 directions are set in the region of interest ROI of the second learning object oral cavity scan image.

Fig. 7 shows a state in which targets are detected in the first to fourth integration object oral scan images.

Fig. 8 shows a state in which a three-dimensional target is detected from targets in a plurality of directions in the fifth integration object oral scan image.

Fig. 9 and 10 show a state of an image integration process between an oral cavity scan image and a computed tomography scan image of an integration object.

Fig. 11 shows a state in which the oral scan image and the computed tomography image of the integration object have been integrated.

Detailed Description

The above objects, elements and effects based on the present invention will become more apparent from the following detailed description in connection with the accompanying drawings, whereby those skilled in the art to which the present invention pertains can easily carry out the present invention. In describing the present invention, when it is determined that a detailed description of a known technology related to the present invention makes the gist of the present invention unclear, a detailed description thereof will be omitted.

The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. In this specification, singular forms include plural forms unless the context clearly indicates otherwise. In the present specification, the terms "comprising", "forming", "providing", "having", and the like do not exclude the presence or addition of one or more structural elements other than the mentioned structural elements.

In this specification, the term "or", "at least one", etc. means one of the words listed together, or a combination of two or more. For example, "a or B", "at least one of a and B" may include only one of a or B, or may include both a and B.

In the present specification, when a property, variable, or value is described with expressions such as "for example," the disclosed information may not be exactly consistent with actual values, and embodiments of the present invention are not limited by distortions such as allowable tolerances, measurement errors, limitations of measurement accuracy, and other generally known factors.

In the present specification, when one component is "connected" or "coupled" to another component, the component may be directly connected or coupled to the other component, or the other component may be present therebetween. In contrast, when a structural element is "directly connected" or "directly coupled" to other structural elements, there are no other structural elements present therebetween.

In the present specification, when one component is located "on" or "in contact with" another component, it may be directly in contact with or connected to the other component, or another component may be present between them. In contrast, when one component is located "directly on" or "directly contacting" another component, no other component is present between them. Other expressions describing the relationship between components, for example, "between" and "directly between", are to be interpreted in the same manner.

In the present specification, terms of "first", "second", and the like may be used to describe various structural elements, and the corresponding structural elements are not limited to the above terms. The above terms are not intended to limit the order of the respective constituent elements, but to distinguish one constituent element from another constituent element. For example, a "first structural element" may be named a "second structural element", and similarly, a "second structural element" may also be named a "first structural element".

Unless otherwise defined, all terms used in the present specification have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. Also, commonly used terms defined in dictionaries are not to be interpreted abnormally or excessively unless they are specifically defined otherwise.

Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.

Fig. 1 is a block diagram of an image integration apparatus 100 according to an embodiment of the invention.

The image integration apparatus 100 according to an embodiment of the present invention is an electronic apparatus for performing image integration between an oral cavity scan image and a computed tomography scan image of an interior of an oral cavity.

The oral scan image is an image providing information on the shape of the crown portion of the tooth exposed to the outside and the shape of the gum around the tooth. In this case, the oral scan image may be acquired by directly scanning the inside of the oral cavity of the patient with an oral scanner (oral scanner) or the like, or by scanning an impression model that molds the inside of the patient's oral cavity in intaglio or a plaster model cast from that impression, and a scan image of the impression model may be inverted and used as the oral scan image.

A computed tomography image is an image captured by a computed tomography apparatus using radiation. That is, the computed tomography image can represent the distribution of internal tissues such as crown, root, and alveolar bone in the oral cavity, bone density information, and the like based on the transmittance of radiation.

Referring to fig. 1, an embodiment of an image integration apparatus 100 may include a communication unit 110, an input unit 120, a display unit 130, a storage unit 140, and a control unit 150.

The communication unit 110 is configured to communicate with an external device such as an image acquisition device (not shown) or a server (not shown), and is capable of receiving image data. For example, the communication unit 110 may perform wireless communication such as fifth generation (5G) communication, long term evolution-advanced (LTE-A), long term evolution (LTE), Bluetooth Low Energy (BLE), and near field communication (NFC), and may also perform wired communication such as cable communication.

In this case, the image data may include intraoral scan image data, computed tomography image data, and the like.

The input unit 120 generates input data in accordance with the input of the user. The input part 120 includes at least one input unit. For example, the input part 120 may include a keyboard (key board), a keypad (key pad), a dome switch (dome switch), a touch pad (touch panel), a touch key (touch key), a mouse (mouse), a menu button (menu button), and the like.

The display unit 130 displays display data based on the operation of the image integration apparatus 100. Such display data may include image data. For example, the display unit 130 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a micro electro mechanical systems (MEMS) display, or an electronic paper (electronic paper) display. Also, the display unit 130 may be combined with the input unit 120 and embodied as a touch screen (touch screen) or the like.

The storage unit 140 stores various information and programs for the operation of the image integration apparatus 100. For example, the storage unit 140 may store image data received from an image acquisition device or the like, and an algorithm related to the image integration method using a dental target according to an embodiment of the present invention. Also, the storage unit 140 may store the learning model.

The control unit 150 performs image integration between the oral scan image and the computed tomography image received from the image acquisition device or the server, or stored in advance in the storage unit 140. To this end, the control unit 150 may receive image data from the image acquisition device, the server, or the like and store it in the storage unit 140. The control unit 150 also controls the operations of the communication unit 110, the input unit 120, the display unit 130, and the storage unit 140.

Hereinafter, an image integration method using a dental target according to an embodiment of the present invention in which the operation is controlled by the control unit 150 will be described.

Fig. 2 is a flowchart illustrating an image integration method using a dental target according to an embodiment of the present invention.

Referring to fig. 2, an image integration method using a dental target according to an embodiment of the present invention may include steps S100 and S200 of performing image processing on image data.

First, in step S100, the control unit 150 generates a plurality of reference points spaced apart from each other in the oral cavity scan image and the computed tomography image to be integrated. That is, the control unit 150 generates a reference point (hereinafter, referred to as a "first reference point") on the oral cavity scan image to be integrated, and generates a reference point (hereinafter, referred to as a "second reference point") on the computed tomography image to be integrated.

Thereafter, in step S200, the control unit 150 performs image integration by transforming the integration object oral scan image or the integration object computed tomography image so that the first reference points and the second reference points coincide with each other.

In this case, a reference point is a point indicating a position with respect to a specific tooth (for example, a position corresponding to the center point of the specific tooth) and is used when the images are integrated. That is, in step S200, the control unit 150 performs image integration between the integration object oral scan image and the integration object computed tomography image by changing the angle, size, position, and the like of these images so that each first reference point coincides with the second reference point corresponding to it.
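
For illustration, once three matched pairs of reference points exist, the change of angle, size, and position can be computed in closed form as a similarity transformation. The sketch below uses the Kabsch/Umeyama method in NumPy; it is not the patent's specified procedure, and all coordinate values are hypothetical:

```python
import numpy as np

def estimate_similarity_transform(src, dst, with_scale=True):
    """Estimate s, R, t such that dst ~ s * R @ src + t (Kabsch/Umeyama).

    src, dst: (N, 3) arrays of corresponding reference points, N >= 3.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - src_mean, dst - dst_mean

    # Cross-covariance between the centered point sets and its SVD.
    H = src0.T @ dst0 / len(src)
    U, S, Vt = np.linalg.svd(H)

    # Reflection guard so that det(R) = +1.
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:
        D[2, 2] = -1.0

    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / src0.var(axis=0).sum() if with_scale else 1.0
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Hypothetical reference points: foremost anterior tooth, rearmost left molar,
# rearmost right molar, in oral-scan and CT coordinates respectively.
first_ref = np.array([[0.0, 40.0, 5.0], [-25.0, -10.0, 4.0], [25.0, -10.0, 4.0]])
second_ref = np.array([[2.0, 41.5, 6.0], [-23.0, -8.0, 5.0], [27.0, -8.5, 5.2]])

s, R, t = estimate_similarity_transform(first_ref, second_ref)
aligned = (s * (R @ first_ref.T)).T + t  # oral-scan reference points in CT coordinates
print(np.round(aligned - second_ref, 3))  # residual after alignment
```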

Fig. 3 shows a tooth area including a front tooth area FA and a molar area BA.

Referring to fig. 3, the integration object oral scan image and the integration object computed tomography image include tooth areas representing the shapes of a plurality of teeth. In this case, the tooth area includes an anterior tooth area FA located at the front and a molar area BA located behind the anterior tooth area FA. For example, the anterior tooth area FA may be the area where the No. 1 to No. 3 teeth are located, and the molar area BA may be the area where the No. 4 to No. 8 teeth are located.

In this case, the reference points may include a reference point for the foremost 1 tooth FT of the anterior tooth area FA (hereinafter, referred to as the "foremost reference point") and reference points for the rearmost 2 teeth BT1, BT2 on both sides of the molar area BA (hereinafter, referred to as the "rearmost reference points"). That is, of the rearmost reference points, one is a reference point for a tooth selected from the right teeth of the molar area BA, and the other is a reference point for a tooth selected from the left teeth of the molar area BA.

Such a foremost reference point and rearmost reference points are generated in both the integration object oral scan image and the integration object computed tomography image. As a result, the 1 foremost reference point and the 2 rearmost reference points in each image form the vertices of a triangle, and when the images are integrated between the integration object oral scan image and the integration object computed tomography image, they provide a reference that makes the changes in angle, size, position, and the like of these images easier and more accurate.

In this case, the second reference points can be derived easily, manually or by various algorithms, from the three-dimensional coordinate information of the integration object computed tomography image, which can present information on the structure, size, position, and the like of the inside of the teeth. The derived second reference points may be points representing the positions of the center points of the 1 foremost tooth FT and the 2 rearmost teeth BT1, BT2. Of course, the second reference points may also be derived by the method of deriving (detecting) the first reference points described later; in that case, "oral scan image" in the description may be replaced with "computed tomography image".

On the other hand, first reference points corresponding to the derived second reference points must also be derived. However, the first reference points have to be derived from the oral scan image, which presents information only about the surfaces of the teeth; therefore, with a manual method or a conventional algorithm (using curvature information in the oral scan image), their accuracy is necessarily reduced.

Thus, the present invention derives the first reference points using objects OB, which are simplified shapes of the teeth. This will be described in more detail with reference to steps S101 to S104 below.

Fig. 4 is a detailed flowchart of step S100 of the image integration method using a dental target according to an embodiment of the present invention. Fig. 5 shows a state in which the region of interest ROI is extracted from the first learning object oral scan image.

Referring to fig. 5, in step S101, the control unit 150 extracts a region of interest ROI including teeth from an oral scan image of a learning object (hereinafter, referred to as a "learning oral scan image"). That is, the learning oral scan image may include a tooth region and a region outside the tooth region, and in step S101 the control unit 150 may extract the tooth region as the region of interest ROI.

Then, in step S102, the control unit 150 generates a learning model by forming learning data (training data) in which an object OB1 is set for each tooth in a plurality of directions of the extracted region of interest ROI and learning with that data. In this case, the control unit 150 may learn the prepared learning data by a machine learning (machine learning) method.

For example, the machine learning method may be a supervised learning (supervised learning) method, such as one of artificial neural networks, boosting, Bayesian statistics, decision trees, Gaussian process regression, nearest neighbor algorithms, support vector machines, random forests, symbolic machine learning, ensembles of classifiers, deep learning, and the like.

That is, the learning data may include a learning oral scan image of the extracted region of interest ROI as an input value, and the objects OB1 formed for each tooth in the corresponding image as result values (target values), the two forming a set (set) corresponding to each other. In this case, the objects OB1 may be set in a plurality of directions in the region of interest ROI of the learning oral scan image, and, to correspond to the shape of each tooth, may have any of various shapes that simplify the shape of the corresponding tooth in the corresponding direction (shapes that cover the area of the corresponding tooth in that direction and are simpler than the tooth shape), such as a circle or a polygon. Also, the result values of the learning data for such an object OB1 may include the position information, center point information, and size information occupied by the corresponding object OB1 in the region of interest ROI.
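
For illustration only, one such input-value/result-value pair could be laid out as below; the class names, the tooth numbering scheme, and the corner-plus-size box encoding are assumptions, not the patent's specified format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ToothTarget:
    """One simplified-shape object (here a bounding box) for one tooth in one view."""
    tooth_number: int                  # hypothetical per-jaw tooth label, e.g., 3
    direction: str                     # "plane", "front", "left", or "right"
    position: Tuple[float, float]      # top-left corner (x, y) within the ROI image
    size: Tuple[float, float]          # (width, height) of the box

    @property
    def center(self) -> Tuple[float, float]:
        x, y = self.position
        w, h = self.size
        return (x + w / 2.0, y + h / 2.0)

@dataclass
class TrainingSample:
    """Input value (ROI view image) and result values (objects OB1) forming one set."""
    roi_image_path: str
    targets: List[ToothTarget]
```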

The learning model includes a rule function for matching input values to their corresponding result values, and is trained in a supervised manner on the learning data by the machine learning method.
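
The patent does not name a concrete detector, so as a hedged illustration the rule function could be realized by fine-tuning an off-the-shelf 2D detector such as torchvision's Faster R-CNN on the per-direction view images; the class count, dataset format, and hyperparameters below are assumptions:

```python
import torch
import torchvision

# Background + one class per tooth position; 16 teeth per jaw is an assumption.
NUM_CLASSES = 1 + 16

# torchvision >= 0.13 signature; older releases use pretrained= instead of weights=.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """loader yields (images, targets): images are C x H x W tensors, targets are
    dicts with 'boxes' (N, 4) and 'labels' (N,), torchvision's detection format."""
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # per-component detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```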

Fig. 6 shows a state in which objects OB1 for 4 directions are set in the region of interest ROI of the second learning object oral scan image. Part (a) of fig. 6 shows the planar direction, part (b) the front direction, part (c) the left direction, and part (d) the right direction.

For example, as shown in fig. 6, in the region of interest ROI of the learning oral scan image, objects OB1 can be set for 4 directions, that is, the planar, front, left, and right directions. In this case, the user can set the objects OB1 through the input unit 120. As a result, the learning data may contain objects OB1 in at least 2 directions (specific directions) for each tooth. In this case, the specific directions may include a planar direction and a direction other than the planar direction (a front direction, a left direction, or a right direction).

That is, for the No. 1 to No. 3 teeth appearing in the front direction of the region of interest ROI, objects OB1 can be set in the planar direction and the front direction of the region of interest ROI. Further, for the left No. 4 to No. 8 teeth appearing in the left direction of the region of interest ROI, objects OB1 can be formed in the left direction and the front direction of the region of interest ROI, respectively. Likewise, for the right No. 4 to No. 8 teeth appearing in the right direction of the region of interest ROI, objects OB1 can be formed in the right direction and the front direction of the region of interest ROI, respectively. And, as described above, objects OB1 can be formed for all teeth in the planar direction of the region of interest ROI.

The planar-direction object OB1 is a shape that simplifies the shape of the corresponding tooth in the planar direction, and the out-of-plane-direction object OB1 is a shape that simplifies the shape of the corresponding tooth in a direction other than the planar direction. Thus, the planar-direction object OB1 may be a medium that provides information on the plane of the corresponding tooth, and the out-of-plane-direction object OB1 may be a medium that provides information on the lateral side (height, etc.) of the corresponding tooth.

For example, an object OB1 for the planar direction of the teeth of the lower jaw may be a medium providing information on the upper face of the corresponding teeth, and an object OB1 for the direction other than the planar direction of the teeth of the lower jaw may be a medium providing information on the lateral face of the corresponding teeth. Also, object OB1 for the planar direction of the teeth of the upper jaw may be a medium providing information on the lower face of the corresponding teeth, and object OB1 for the direction other than the planar direction of the teeth of the upper jaw may be a medium providing information on the lateral face of the corresponding teeth.

Then, in step S103, the control unit 150 detects the objects OB2 in the plurality of directions for each tooth in the oral cavity scan image to be integrated using the learning model generated in step S102. That is, the control unit 150 may input the integration target oral cavity scan image as an input value to the learning model, and as a result, the learning model may output the object OB2 corresponding to the integration target oral cavity scan image as a result value thereof.
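
Continuing the same assumed setup, inference in step S103 would then pass each rendered view of the integration object oral scan image through the trained detector; the model construction mirrors the training sketch above, and the 0.5 confidence cutoff is an assumed post-processing choice:

```python
import torch
import torchvision

# Rebuild the hypothetical detector; in practice the fine-tuned weights would be loaded.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=17)
model.eval()

view_images = [torch.rand(3, 480, 640)]  # placeholder for one rendered view
with torch.no_grad():
    predictions = model(view_images)  # list of dicts: 'boxes', 'labels', 'scores'

# Keep confident detections only (these become the objects OB2 for this view).
kept = [{k: v[p["scores"] > 0.5] for k, v in p.items()} for p in predictions]
```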

Fig. 7 shows a state in which objects OB2 are detected in the first to fourth integration object oral scan images. Fig. 8 shows a state in which a three-dimensional object OB3 is detected from objects in a plurality of directions in the fifth integration object oral scan image.

Referring to fig. 7, the learning model may output (detect) the objects OB2 for the integration object oral scan image, corresponding to the plurality of directions of step S102.

That is, for the No. 1 to No. 3 teeth appearing in the front direction of the integration object oral scan image, the corresponding objects OB2 can be detected in the planar direction and the front direction of the integration object oral scan image, respectively. Further, for the left No. 4 to No. 8 teeth appearing in the left direction of the integration object oral scan image, objects OB2 can be detected in the left direction and the front direction of the integration object oral scan image. Likewise, for the right No. 4 to No. 8 teeth appearing in the right direction of the integration object oral scan image, objects OB2 can be detected in the right direction and the front direction of the integration object oral scan image, respectively. And, as described above, objects OB2 for all teeth can be detected in the planar direction of the integration object oral scan image.

On the other hand, in step S103, the control unit 150 can generate (detect) a three-dimensional object OB3 using 2 or more objects OB2 for each detected tooth. That is, the control unit 150 generates, for each detected tooth, a three-dimensional shape that includes 2 or more objects OB2 as its faces, and can detect the corresponding three-dimensional shape as the three-dimensional object OB3.

In this case, like the object OB1 used to create the learning model, the object OB2 may have any of various shapes that simplify the shape of the corresponding tooth in the corresponding direction (shapes that cover the area of the corresponding tooth in that direction and are simpler than the tooth shape), that is, shapes such as a circle or a polygon, so as to correspond to the shape of each tooth. The three-dimensional object OB3 may have any of various three-dimensional shapes that simplify the three-dimensional shape of the corresponding tooth (shapes that cover the volume of the corresponding tooth and are simpler than the tooth's three-dimensional shape), that is, shapes such as a circular cylinder, an elliptic cylinder, a polygonal prism, a cone, or a polygonal pyramid, so as to correspond to the three-dimensional shape of each tooth.

For example, as shown in fig. 8, when the three-dimensional object OB3 is a regular hexahedron, the control unit 150 generates a hexahedron having 2 objects OB2 for a given tooth as its first and second faces and virtual faces perpendicular to the first and second faces as its remaining faces, and can thereby detect a hexahedron-shaped three-dimensional object OB3 for the corresponding tooth.

That is, the planar-direction object OB2 represents a shape in the planar direction of the corresponding tooth, and the out-of-plane-direction object OB2 represents a shape in the out-of-plane-direction of the corresponding tooth. Thus, the planar-direction object OB2 may be a medium that provides information on the plane of the corresponding tooth, and the out-of-plane-direction object OB2 may be a medium that provides information on the lateral side (height, etc.) of the corresponding tooth.

As a result, the control unit 150 can detect a three-dimensional object OB3 having the planar-direction object OB2 as the upper or lower face of the corresponding tooth and the out-of-plane-direction object OB2 as one side face of the corresponding tooth. In this case, the control unit 150 may add, as virtual faces, the remaining faces other than those formed by the planar-direction object OB2 and the out-of-plane-direction object OB2. That is, the control unit 150 may add one or more virtual faces perpendicular to the planar-direction object OB2 as the other side faces, and add a virtual face parallel to the planar-direction object OB2 as the other plane (the lower face when the corresponding tooth is in the lower jaw, and the upper face when the corresponding tooth is in the upper jaw).

Then, in step S103, the control unit 150 may extract the position information (position coordinates in the oral scan image), the center point information (center point coordinates in the oral scan image), and the size information of each of the objects OB2 and OB3. In this case, the position information, center point information, and size information of each object OB2 may be output together with the object OB2 as result values of the learning model for the given input values.

The control unit 150 can extract the position information, center point information, and size information of a given three-dimensional object OB3 using the position information, center point information, and size information of the 2 or more related objects OB2 used to generate that three-dimensional object OB3.

For example, as shown in fig. 8, when the three-dimensional object OB3 is a regular hexahedron, the control unit 150 may extract the position information, center point information, and size information of the three-dimensional object OB3 using the position information, center point information, and size information of the objects OB2 constituting the first and second faces of the hexahedron.
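
As a minimal sketch of this construction, under an assumed axis convention (x lateral, y anterior-posterior, z tooth height), a plan-view box supplies the x/y extent and a front-view box supplies the z extent of an axis-aligned hexahedral object, from which the center point and size follow directly; all coordinate values are hypothetical:

```python
import numpy as np

def box3d_from_two_views(plane_box, front_box):
    """Combine a plan-view box and a front-view box of one tooth into an
    axis-aligned hexahedral object; returns its center point and size.

    plane_box: (x_min, y_min, x_max, y_max) in the plan view  -> x/y extent
    front_box: (x_min, z_min, x_max, z_max) in the front view -> z extent
    The faces not given by the two boxes are the 'virtual' faces.
    """
    x0, y0, x1, y1 = plane_box
    _, z0, _, z1 = front_box  # the x extent of the front view is redundant here
    lo = np.array([x0, y0, z0], dtype=float)
    hi = np.array([x1, y1, z1], dtype=float)
    return (lo + hi) / 2.0, hi - lo  # (center, size)

# Hypothetical boxes for one lower molar, in scan coordinates (mm).
center, size = box3d_from_two_views((24.0, -12.0, 34.0, -2.0), (24.5, 0.0, 33.5, 8.0))
print(center, size)  # the center is a candidate reference point for integration
```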

Then, the control unit 150 may display the detected objects OB2 and OB3 on the integration object oral scan image. That is, the control unit 150 can display each of the objects OB2 and OB3 in the integration object oral scan image using the detected position information, center point information, and size information of each object. In order to clearly distinguish the teeth from each other, the control unit 150 may display the detected objects OB2 and OB3 in a different color for each tooth on the integration object oral scan image, as shown in fig. 6.

In this case, the control unit 150 may display at least one of the position information, center point information, and size information of each detected object OB2, OB3 together with the detected objects OB2 and OB3 in the integration object oral scan image.

On the other hand, fig. 5 to 8 only show the teeth of the lower jaw, but the present invention is not limited thereto, and the object detection operation of the present invention can be applied to the teeth of the upper jaw as well.

Fig. 9 and 10 show a state of the image integration process between the oral scan image and the computed tomography image of the integration object. In fig. 9 and 10, the dashed squares represent the regions of the foremost tooth FT and the rearmost teeth BT1, BT2 in each image. That is, fig. 9 and 10 show a state in which the image integration is performed so that the first reference points and the second reference points match in each image. Fig. 11 shows a state in which the oral scan image and the computed tomography image of the integration object have been integrated.

Thereafter, in step S104, the control unit 150 selects reference targets from among the detected objects OB2 and OB3 and generates a first reference point at each selected reference target. In this case, the reference targets may include a target for the foremost 1 tooth FT of the anterior tooth area FA (hereinafter, referred to as the "foremost target") and targets for the rearmost 2 teeth BT1, BT2 on both sides of the molar area BA (hereinafter, referred to as the "rearmost targets"). Also, the control unit 150 may select the center point of each reference target (the center point of its face or volume) as the first reference point.
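
A simple way to pick the foremost and the two rearmost reference targets from the detected center points is sketched below, assuming a coordinate convention in which +y points toward the anterior teeth and the x median separates the left and right sides of the arch; the input coordinates are hypothetical:

```python
import numpy as np

def pick_reference_points(centers):
    """centers: (N, 3) array of detected object center points.
    Returns the foremost point and the rearmost point on each side."""
    centers = np.asarray(centers, dtype=float)
    foremost = centers[np.argmax(centers[:, 1])]   # most anterior center
    midline = np.median(centers[:, 0])
    left = centers[centers[:, 0] < midline]
    right = centers[centers[:, 0] >= midline]
    rear_left = left[np.argmin(left[:, 1])]        # most posterior, left side
    rear_right = right[np.argmin(right[:, 1])]     # most posterior, right side
    return np.stack([foremost, rear_left, rear_right])

first_reference_points = pick_reference_points(
    [[0, 40, 5], [-20, 20, 5], [20, 20, 5], [-25, -10, 4], [25, -12, 4]]
)
```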

The present invention can easily detect the objects OB2 and OB3 corresponding to each tooth in an oral scan image of the inside of the oral cavity, thereby reducing detection time and improving detection effectiveness.

Furthermore, the present invention can provide extraction information such as the position, center point, and size of each detected object OB2, OB3, and this extraction information can be used to set reference points for image integration with a computed tomography image.

That is, referring to fig. 9 and 10, when integrating with a computed tomography image, the center point information of the quickly extractable objects OB2 and OB3 can be used as the first reference points for image integration. These reference points are more accurate than those obtained by the conventional tooth detection method using curvature information, and as a result, as shown in fig. 11, the speed and accuracy of the image integration can be improved.

On the other hand, in step S100, the control unit 150 may cause the learning model to output the foremost target and the rearmost targets as its output values.

In this case, in step S102, the control unit 150 generates a learning model by forming learning data in which objects OB1 are set for n mutually spaced teeth (hereinafter, referred to as "subject teeth"; n is a natural number of 3 or more), which are a part of all teeth, in a plurality of directions of the extracted region of interest ROI, and learning according to the machine learning method. That is, the subject teeth include the foremost tooth FT and the rearmost teeth BT1, BT2, so the learning model learns using learning data in which the objects OB1 corresponding to the foremost target and the rearmost targets are formed. In step S103, the control unit 150 detects objects OB2 in a plurality of directions for each subject tooth in the integration object oral scan image using the learning model generated in step S102, and then detects a three-dimensional object OB3 from the objects OB2. As a result, in step S104, the control unit 150 selects the detected objects OB2, OB3 as reference targets and generates a first reference point at each selected reference target. The remaining details are the same as in steps S101 to S104 described above.

While the detailed description of the present invention has described specific embodiments, various modifications may be made without departing from the scope of the present invention. Therefore, the scope of the present invention is not to be limited to the embodiments described above, but is to be defined by the claims set forth below and their equivalents.

Industrial applicability

The dental target detection method and the image integration method and device using the dental target can be used in various dental treatment fields such as dental implant surgery and the like.
