Method for tracking, predicting and pre-correcting malocclusions and related problems

Document No.: 1967906    Publication date: 2021-12-17

Reading note: This technology, "Method for tracking, predicting and pre-correcting malocclusions and related problems", was designed and created by D. Mason and D. Brandt on 2016-01-05. Abstract: The present invention relates to methods for tracking, predicting and pre-correcting malocclusions and related problems. The present invention provides a computer-implemented method for calculating a future position of an intraoral object within a patient's oral cavity. The method may include receiving first and second digital data representing an actual state within the oral cavity at first and second points in time. The method may include processing data including the first and second digital data to determine a velocity of an intraoral object within the oral cavity at the first and second points in time. A future position of the intraoral object at a future point in time is determined based on the velocity.

1. A computer-implemented method for calculating a future condition of an intraoral object of a patient's intraoral cavity, the method comprising:

receiving first digital data representative of the intraoral cavity at a first point in time;

receiving second digital data representative of the intraoral cavity at a second point in time;

evaluating a characteristic of an intraoral object within the oral cavity at the first time point and the second time point; and

determining that a future condition of the intraoral object occurs at a third point in time, wherein the third point in time occurs after the first point in time and the second point in time;

wherein the future condition is determined based at least in part on the evaluated characteristic, and wherein the future condition is determined prior to occurrence of the future condition.

2. The method of claim 1, wherein the characteristics of the intraoral object are selected from one or more of: a position of the intraoral object, a shape of the intraoral object, an orientation of the intraoral object, a size of the intraoral object, and a color of the intraoral object.

3. The method of claim 1, wherein evaluating the characteristic of the intraoral object comprises determining a change in the characteristic of the intraoral object between the first time point and the second time point.

4. The method of claim 3, wherein the change in characteristic comprises a velocity.

5. The method of claim 1, wherein the future condition is selected from one or more of: crowding of teeth, spacing of teeth, overbite, overjet, underbite, cross-bite, open bite, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, temporomandibular joint abnormalities, bruxism, blocked airways, and sleep apnea.

6. The method of claim 1, wherein the future condition comprises an undesirable dental or orthodontic condition predicted to occur at the third point in time without treatment of the patient's oral cavity.

7. The method of claim 1, wherein the future condition is a tooth collision.

8. The method of claim 1, wherein the first digital data represents an intraoral actual state at the first point in time.

9. The method of claim 1, wherein the second digital data represents an intraoral actual state at the second point in time.

10. The method of claim 1, wherein the first and second digital data comprise one or more of surface data or subsurface data within the oral cavity of the patient.

11. The method of claim 1, further comprising displaying a graphical representation of the intraoral object at the third point in time on a user interface shown on a display.

12. The method of claim 1, further comprising:

determining a change in position of the intraoral object between the first and second points in time based on the first and second digital data; and

evaluating whether the change in position exceeds a predetermined threshold.

13. The method of claim 12, further comprising outputting an alert to a user in response to an evaluation that the change in position exceeds the predetermined threshold.

14. The method of claim 12, wherein the predetermined threshold is input by a user.

15. The method of claim 12, wherein the predetermined threshold is determined based on one or more of user preferences or patient characteristics.

16. The method of claim 12, wherein the predetermined threshold is indicative of an undesirable dental or orthodontic condition.

17. The method of claim 12, further comprising:

generating a plurality of options for producing a desired dental or orthodontic result in response to the evaluation that the change in position exceeds the predetermined threshold; and

displaying the plurality of options on a user interface shown on the display.

18. The method of claim 17, wherein the plurality of options includes a plurality of treatment options for an undesirable dental or orthodontic condition, and wherein displaying the plurality of options includes displaying one or more of pricing information, treatment time information, or treatment complication information associated with each of the plurality of treatment options.

19. The method of claim 1, wherein the intraoral object comprises one or more of a dental crown, a dental root, a gum, an airway, a palate, a tongue, or a jaw.

20. The method of claim 1, further comprising processing the first digital data and the second digital data to determine a rate of change of the characteristic.

21. The method of claim 20, wherein the intraoral object comprises a tooth and the rate of change comprises a tooth shape change rate.

22. The method of claim 20, wherein the intraoral object comprises a tooth and the rate of change comprises a speed of change of tooth position.

23. The method of claim 20, wherein the intraoral object comprises a tooth and the rate of change comprises a speed of change of tooth orientation.

24. The method of claim 20, wherein the intraoral object comprises a tooth and the rate of change comprises a tooth size change rate.

25. The method of claim 20 wherein the intraoral object comprises a gum and the rate of change comprises a rate of gum shape change.

26. The method of claim 3, wherein evaluating the characteristic of the intraoral object comprises determining a motion trajectory of the intraoral object based on the change in the characteristic.

27. The method of claim 26, wherein the motion trajectory is linear.

28. The method of claim 1, further comprising:

receiving fourth digital data representative of the intraoral cavity at a fourth time point different from the first and second time points; and

processing the first digital data, the second digital data, and the fourth digital data to determine a change in a characteristic of the intraoral object between the first point in time, the second point in time, and the fourth point in time.

29. The method of claim 28, further comprising:

determining a motion trajectory of the intraoral object based on the velocity of the intraoral object during the first time point, the second time point, and the fourth time point, wherein the motion trajectory is non-linear; and

determining a future position of the intraoral object based on a non-linear motion trajectory.

30. The method of claim 29, wherein the non-linear motion trajectory comprises one or more of a change in direction of motion or a change in speed of motion.

31. The method of claim 28, further comprising processing the first digital data, the second digital data, and the fourth digital data to determine a force vector associated with the intraoral object during the first point in time, the second point in time, and the fourth point in time.

32. The method of claim 1, wherein determining that the future condition of the intraoral object occurs at the third point in time comprises extrapolating a rate of change of the characteristic to the third point in time using a linear or non-linear extrapolation algorithm.

33. A tangible computer readable storage medium having computer executable instructions stored thereon that, if executed, cause a computer system to perform a computer implemented method for calculating a future condition of an intraoral object of a patient's intraoral cavity, the method comprising:

receiving first digital data representative of the intraoral cavity at a first point in time;

receiving second digital data representative of the intraoral cavity at a second point in time;

evaluating a characteristic of an intraoral object within the oral cavity at the first time point and the second time point; and

determining that a future condition of the intraoral object occurs at a third point in time, wherein the third point in time occurs after the first point in time and the second point in time;

wherein the future condition is determined based at least in part on the evaluated characteristic, and wherein the future condition is determined prior to occurrence of the future condition.

34. A computer-implemented method for visualizing a patient's oral cavity, the method comprising:

receiving first digital data representative of the intraoral cavity at a first point in time;

receiving second digital data representative of the intraoral cavity at a second point in time;

processing the first and second digital data to assess characteristics of an intraoral object of an oral cavity at the first and second points in time, wherein processing the first and second digital data comprises determining a change in characteristics during the first and second points in time; and

delivering therapy data, wherein the therapy data comprises a first visualization of a change in a characteristic during the first and second points in time.

35. The method of claim 34, wherein the first visualization includes the first digital data and the second digital data overlapping each other.

36. The method of claim 34, wherein the change in characteristic is a change selected from one or more of: a position of the intraoral object, a shape of the intraoral object, an orientation of the intraoral object, a size of the intraoral object, a color of the intraoral object, a velocity of the intraoral object, or a combination thereof.

37. The method of claim 34, further comprising determining a future position of the intraoral object at a future point in time based on a rate of change of the characteristic, wherein the future position is determined before the intraoral object is located at the future position.

38. The method of claim 37, further comprising providing a second visualization representing a future position of the intraoral object.

39. The method of claim 34, further comprising determining a future condition of the intraoral object at a future point in time based on the rate of change of the characteristic, wherein the future condition is determined before the intraoral object is in the future condition.

40. The method of claim 39, wherein the future condition is selected from one or more of: crowding of teeth, spacing of teeth, overbite, overjet, underbite, cross-bite, open bite, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, temporomandibular joint abnormalities, bruxism, blocked airways, sleep apnea, and collision of an intraoral object.

41. The method of claim 39, wherein the future condition comprises an undesirable dental or orthodontic condition predicted to occur at the future point in time without treatment of the patient's oral cavity.

42. The method of claim 34, wherein said first digital data represents an intraoral actual state at said first point in time.

43. The method of claim 34, wherein the second digital data represents an intraoral actual state at the second point in time.

44. The method of claim 34, wherein the first and second digital data comprise one or more of surface data or subsurface data within the oral cavity of the patient.

45. The method of claim 34, further comprising evaluating whether the change in the characteristic exceeds a predetermined threshold.

46. The method of claim 45, further comprising outputting an alert to a user in response to the evaluation that the change exceeds the predetermined threshold.

47. The method of claim 45, wherein the predetermined threshold is input by a user.

48. The method of claim 45, wherein the predetermined threshold is determined based on one or more of user preferences or patient characteristics.

49. The method of claim 45, wherein the predetermined threshold is indicative of an undesirable dental or orthodontic condition.

50. The method of claim 45, further comprising:

generating a plurality of options for producing a desired dental or orthodontic result in response to the evaluation that the change exceeds the predetermined threshold; and

displaying the plurality of options on a user interface shown on the display.

51. The method of claim 50, wherein the plurality of options includes a plurality of treatment options for an undesirable dental or orthodontic condition, and wherein displaying the plurality of options includes displaying one or more of pricing information, treatment time information, or treatment complication information associated with each of the plurality of treatment options.

52. The method of claim 34, wherein the intraoral object comprises one or more of a dental crown, a dental root, a gum, an airway, a palate, a tongue, or a jaw.

53. The method of claim 34, further comprising processing the first digital data and the second digital data to determine a rate of change of the characteristic.

54. The method of claim 53, wherein the intraoral object comprises a tooth and the rate of change comprises a tooth shape change rate.

55. The method of claim 53, wherein the intraoral object comprises a tooth and the rate of change comprises a speed of change of tooth position.

56. The method of claim 53, wherein the intraoral object comprises a tooth and the rate of change comprises a speed of change of tooth orientation.

57. The method of claim 53, wherein the intraoral object comprises a tooth and the rate of change comprises a speed of tooth size change.

58. The method of claim 53 wherein the intraoral object comprises a gum and the rate of change comprises a rate of gum shape change.

59. The method of claim 34, wherein determining the change in the characteristic during the first and second points in time comprises determining a motion trajectory of an intraoral object based on the change in the characteristic.

60. The method of claim 59, wherein the motion trajectory is linear.

61. The method of claim 44, wherein determining the change in the characteristic during the first and second points in time comprises extrapolating the rate of change of the characteristic to a future point in time using a linear or non-linear extrapolation algorithm.

62. The method of claim 61, further comprising providing a third visualization of said intraoral object at said future point in time based, at least in part, on a rate of change of said characteristic.

63. The method of claim 61, wherein the first visualization and the third visualization are presentable to a user on a same interface.

64. A tangible computer readable storage medium having computer executable instructions stored thereon that, if executed, cause a computer system to perform a computer-implemented method for visualizing a patient's oral cavity, the method comprising:

receiving first digital data representative of the intraoral cavity at a first point in time;

receiving second digital data representative of the intraoral cavity at a second point in time;

processing the first and second digital data to assess characteristics of an intraoral object of an oral cavity at the first and second points in time, wherein processing the first and second digital data comprises determining a change in characteristics during the first and second points in time; and

delivering therapy data, wherein the therapy data comprises a first visualization of a change in a characteristic during the first and second points in time.

65. A computer-implemented method for predicting orthodontic treatment, the method comprising:

receiving a plurality of orthodontic conditions respectively associated with a plurality of digital data representations;

receiving first digital data representative of an oral cavity of a first patient;

identifying, by one or more computer processors, an orthodontic condition from the plurality of orthodontic conditions that matches the first patient, based on the first digital data and one or more matching algorithms; and

transmitting the identified orthodontic condition for display.

66. The method of claim 65, further comprising providing for display a list of orthodontic conditions.

67. The method of claim 65, further comprising:

receiving a plurality of ranges respectively associated with the plurality of orthodontic conditions;

receiving second digital data representative of the oral cavity of a second patient;

identifying, by the one or more processors, an orthodontic condition from the plurality of orthodontic conditions by determining whether the second digital data falls within a range associated with one of the plurality of orthodontic conditions, and

transmitting the identified orthodontic condition for display.

68. The method of claim 67, further comprising providing for display a list of orthodontic conditions.

69. The method of claim 65, further comprising displaying a parameter associated with the identified orthodontic condition.

70. The method of claim 65, further comprising:

displaying parameter options to a user; and

receiving, from the user, a selection of one or more of the parameter options.

71. The method of claim 65, further comprising predicting a future condition of the first patient based on the identified orthodontic condition.

72. The method of claim 71, further comprising displaying the predicted future condition to a user.

73. The method of claim 65, further comprising:

detecting an intraoral feature associated with the first digital data; and

obtaining measurements of the oral cavity based on the detected intraoral features.

74. The method of claim 73, further comprising displaying the measurements of the oral cavity to a user.

Background

Existing methods and devices for identifying and characterizing orthodontic problems are less than ideal in at least some situations. For example, previous practices may passively diagnose and treat orthodontic problems only after a patient has developed symptoms (e.g., pain, malocclusion), and may also be less than ideal for actively identifying and correcting problems that will necessarily occur in the future. Furthermore, while medical personnel can identify orthodontic problems through visual inspection under certain conditions, the amount and rate of change of a patient's dentition are not readily characterized and communicated. To name a few examples, medical personnel may indicate that a patient's teeth may continue to become more crooked, that spacing between teeth may continue to increase, that wear of the enamel on the teeth may continue to worsen (e.g., due to nocturnal bruxism), that the patient's jaw may lock, that the patient may have difficulty sleeping (e.g., sleep apnea), or that gum surgery may become necessary (e.g., upon identifying gum degradation). However, it can be difficult for medical personnel to determine the specific, subtle changes that will occur and the problems that may arise in the future due to these changes. Furthermore, existing methods are less than ideal for tracking and predicting changes in a patient's dentition that are undetectable by visual inspection (e.g., changes in the roots of the teeth).

Existing techniques for characterizing and reporting the subtle changes associated with many orthodontic problems are likewise less than ideal in at least some situations. For example, the prior art may not allow medical personnel or patients to visualize (e.g., three-dimensionally) the expected progression of an orthodontic problem. Other techniques provide only static views of orthodontic problems and do not provide predictive information.

In view of the foregoing, it would be desirable to provide improved methods and apparatus for tracking, predicting and pre-correcting dental or orthodontic conditions, such as malocclusions. Ideally, such methods and apparatus would allow for accurate identification, prediction, quantification, and visualization of changes in a patient's dentition.

Disclosure of Invention

Embodiments of the present invention provide systems, methods and apparatus for predicting a patient's future dental or orthodontic conditions. In some embodiments, a digital data representation of the patient's oral cavity is received at a plurality of different time points and used to generate a predicted digital representation of the oral cavity at a future time point. For example, changes in one or more intraoral objects (e.g., teeth, gums, airways) may be determined and extrapolated to a future point in time using the digital data, thereby improving the accuracy of the prediction compared to predictions based solely on visual inspection. Based on the predictive digital representation, a patient's future undesirable dental or orthodontic condition can be determined, allowing for pre-diagnosis and treatment before the problem has actually occurred or has progressed to a further state. Further, the systems, methods, and devices herein can be used to generate one or more treatment options for a predicted problem to facilitate decision making by the patient and/or physician. Optionally, a digital representation of the predicted outcome obtained with the selected treatment option may be generated and displayed to provide further guidance to the patient and/or physician. Advantageously, the methods provided herein enable active diagnosis and correction of various dental or orthodontic conditions, which can be beneficial in reducing treatment costs, treatment duration, and/or treatment difficulty.

In one aspect, a computer-implemented method for calculating a future position of an intraoral object of a patient's intraoral cavity includes receiving first digital data representing an actual state within the intraoral cavity at a first point in time, and receiving second digital data representing the actual state within the intraoral cavity at a second point in time different from the first point in time. The method may include processing data comprising the first and second digital data to determine a velocity of an intraoral object within the intraoral cavity during the first and second points in time. A future position of the intraoral object at a future point in time may be determined based on the velocity. The future position may be determined before the intraoral object is at the future position.

Other objects and features of the present invention will become more fully apparent by referring to the specification, claims and accompanying drawings.

Citations of documents

All publications, patents and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated by reference.

Drawings

The novel features of the invention are set forth with particularity in the appended claims. The features and advantages of the present invention will be better understood by referring to the following detailed description of illustrative embodiments that utilize the principles of the invention and the accompanying drawings, in which:

fig. 1 is a front view illustrating anatomical relationships of a jaw of a patient according to various embodiments;

FIG. 2A illustrates a patient's mandible in more detail, according to various embodiments, and provides a general illustration of how teeth may be moved;

FIG. 2B illustrates a single tooth of FIG. 2A and defines how tooth movement distance is determined, in accordance with various embodiments;

fig. 3 illustrates a system for predicting a patient's future dental or orthodontic condition, in accordance with various embodiments;

FIG. 4A shows a schematic view of a set of teeth according to various embodiments;

FIG. 4B shows a schematic view of the set of teeth of FIG. 4A having been moved, in accordance with various embodiments;

FIG. 4C illustrates a schematic view of the set of teeth of FIG. 4A compared to the set of teeth of FIG. 4B to determine a change in position of the set of teeth, in accordance with various embodiments;

FIG. 4D illustrates a schematic diagram of predicted future positions of the set of teeth of FIG. 4B based on previously determined trajectories and amplitudes of changes, in accordance with various embodiments;

FIG. 5A shows a schematic view of a set of teeth including roots, according to various embodiments;

FIG. 5B shows a schematic view of the set of teeth of FIG. 5A having been moved, in accordance with various embodiments;

FIG. 5C illustrates a schematic of the set of teeth of FIG. 5A compared to the set of teeth of FIG. 5B to determine changes in position of the set of teeth based on root motion, in accordance with various embodiments;

FIG. 5D illustrates a schematic diagram of predicted future positions of the set of teeth of FIG. 5B based on previously determined trajectories and amplitudes of changes, in accordance with various embodiments;

FIG. 6A shows a schematic view of a tooth according to various embodiments;

FIG. 6B is a schematic diagram illustrating the shape and size of the tooth of FIG. 6A over time according to various embodiments;

FIG. 6C illustrates a schematic representation of the tooth of FIG. 6A compared to the tooth of FIG. 6B to determine changes in the shape and size of the tooth, in accordance with various embodiments;

FIG. 6D illustrates a schematic diagram of a predicted future shape of the tooth in FIG. 6B based on previously determined shape and size change trajectories and amplitudes, in accordance with various embodiments;

fig. 7A shows a schematic view of a gum line according to various embodiments;

fig. 7B shows a schematic representation of the location and shape of the gum line of fig. 7A over time, in accordance with various embodiments;

FIG. 7C is a schematic view of the gum line of FIG. 7A compared to the gum line of FIG. 7B to determine variations in the location and shape of the gum line;

fig. 7D illustrates a schematic diagram of a predicted future position and shape of the gum line of fig. 7B based on a previously determined trajectory and magnitude of shape change, in accordance with various embodiments;

FIG. 8A shows a schematic diagram of a linear extrapolation algorithm for tooth trajectory according to various embodiments;

FIG. 8B shows a schematic diagram of a non-linear extrapolation algorithm for tooth trajectory according to various embodiments;

FIG. 9 illustrates a method for generating a predictive digital representation of a patient's oral cavity in order to determine a future problem for the patient, in accordance with various embodiments;

FIG. 10A illustrates an algorithm for predicting and treating future problems of a patient according to various embodiments;

FIG. 10B is a continuation of the algorithm of FIG. 10A according to various embodiments;

FIGS. 11A-11G illustrate user interfaces for predicting a patient's future dental or orthodontic condition according to various embodiments;

fig. 12 illustrates a schematic diagram of a system for predicting a patient's future dental or orthodontic condition, in accordance with various embodiments; and

fig. 13 illustrates a method for calculating a change in an intraoral object in order to determine a future state of the intraoral object, in accordance with various embodiments.

Detailed Description

The present invention provides an improved and more efficient method and system for early detection and/or prediction of dental or orthodontic conditions. The methods and devices disclosed herein can be combined in a variety of ways and used to diagnose or treat one or more of a variety of oral problems. In some embodiments, the methods and apparatus herein may be used to detect and predict various types of dental or orthodontic conditions that may be present in a patient, determine an appropriate treatment product and/or procedure to prevent or correct the problem, and/or display a predicted outcome of performing the treatment product and/or procedure. The prediction method may involve comparing surface scan data and/or sub-surface data of the tooth at multiple points in time to determine changes in the position and/or shape of the tooth over time, and then generating a prediction of the future position and/or shape of the tooth, e.g., based on the determined changes.
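As an illustration only, the following Python sketch shows one way such a velocity-based prediction could be computed from two scans; the names ObjectState and predict_position, the units, and the example coordinates are assumptions made for this sketch and are not taken from the disclosure.

# Minimal sketch (not the disclosed implementation): estimate the velocity of an
# intraoral object from its position in two scans and linearly extrapolate it.
from dataclasses import dataclass

import numpy as np


@dataclass
class ObjectState:
    """Position of an intraoral object (e.g., a tooth centroid) at one scan time."""
    time_days: float          # time of the scan, in days
    position_mm: np.ndarray   # 3D position in a common coordinate system, in mm


def predict_position(first: ObjectState, second: ObjectState,
                     future_time_days: float) -> np.ndarray:
    """Linearly extrapolate the object's position to a future point in time."""
    dt = second.time_days - first.time_days
    if dt <= 0:
        raise ValueError("the second scan must be later than the first")
    velocity = (second.position_mm - first.position_mm) / dt   # mm per day
    return second.position_mm + velocity * (future_time_days - second.time_days)


# Example: a tooth centroid that drifted 0.3 mm over 90 days.
scan1 = ObjectState(0.0, np.array([12.0, 4.0, 1.0]))
scan2 = ObjectState(90.0, np.array([12.3, 4.0, 1.0]))
print(predict_position(scan1, scan2, future_time_days=365.0))   # predicted position at ~1 year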

These methods may improve prediction accuracy compared to methods that rely solely on visual inspection or static data. Based on photographs, study models, and/or records of previous clinical examinations, it is difficult to visually assess changes within the oral cavity, as, among other issues, the changes are often subtle and/or slow to develop. Because the differences between scans may be very small, visual detection of changes based on intraoral scans can also be difficult without the aid of numerical geometric evaluation. Over time, however, such subtle changes accumulate and, given enough time, can develop into larger and more significant problems. These problems may be prevented, reduced, or resolved based on embodiments of the present invention.

With the methods and systems of the present invention, dental or orthodontic conditions can be identified and predicted before they become more severe and/or less amenable to treatment. For example, in a child whose deciduous teeth are being lost and replaced by permanent teeth, potential tooth crowding and the appropriate timing of treatment can be identified early, and lighter forces may be applied (e.g., because the teeth may not yet be firmly attached to the bony structures of the mouth, or because the erupting teeth are smaller or fewer in number). In contrast to conventional, passive treatment methods, the predictive methods of the present invention allow a dental or orthodontic condition to be detected before the problem has actually occurred or has developed to a more serious state, allowing medical personnel to anticipate it and proactively initiate early treatment. Advantageously, treatment of dental or orthodontic conditions detected at an early stage may be less difficult, expensive, time consuming, and/or painful than treatment of conditions detected at a later stage.

In one aspect, a computer-implemented method for calculating a future position of an intraoral object of a patient's intraoral cavity includes receiving first digital data representing an actual state of the intraoral cavity at a first point in time and receiving second digital data representing the actual state of the intraoral cavity at a second point in time different from the first point in time. The method may include processing data comprising the first and second digital data to determine a velocity of an intraoral object within the oral cavity between the first and second points in time. A future position of the intraoral object at a future point in time may be determined based on the velocity. The future position may be determined before the intraoral object is at the future position.

In some embodiments, the first and second digital data each comprise one or more of surface data or subsurface data within the oral cavity.

In some embodiments, the method further comprises displaying a graphical representation of the intraoral object at its future position on a user interface shown on a display.

In some embodiments, the method further comprises determining a change in position of the intraoral object between the first and second points in time based on the first and second digital data, and evaluating whether the change in position exceeds a predetermined threshold. The predetermined threshold may be received or determined in various ways. For example, the predetermined threshold may be input by a user. Alternatively or in combination, the predetermined threshold may be determined based on one or more of user preferences, patient characteristics, or values from the dental or orthodontic literature. For example, the predetermined threshold may reflect an undesirable dental or orthodontic condition.

If the change in position exceeds the predetermined threshold, various actions may be performed. In some embodiments, the method further comprises outputting an alert to a user in response to evaluating that the change in position exceeds the predetermined threshold. Alternatively or in combination, the method further comprises generating a plurality of options for producing a desired dental or orthodontic result in response to evaluating that the change in position exceeds the predetermined threshold. The plurality of options may be displayed on a user interface shown on the display. The plurality of options may include a plurality of treatment options for an undesirable dental or orthodontic condition. In some embodiments, displaying the plurality of options includes displaying one or more of pricing information, treatment time information, treatment complication information, or insurance reimbursement information associated with each of the plurality of treatment options.
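A minimal Python sketch of this thresholding behavior follows; the function name, the threshold value, and the listed treatment options are hypothetical placeholders rather than the disclosed system.

# Illustrative sketch: compare the measured change in position against a
# predetermined threshold and, if exceeded, return an alert flag together with
# example treatment options (placeholders only).
import numpy as np


def evaluate_position_change(pos_first_mm, pos_second_mm, threshold_mm):
    """Return the displacement, an alert flag, and placeholder options."""
    displacement = float(np.linalg.norm(np.asarray(pos_second_mm) - np.asarray(pos_first_mm)))
    exceeded = displacement > threshold_mm
    options = []
    if exceeded:
        # A real system would draw options, pricing, and duration from a
        # treatment-planning database; these entries are purely illustrative.
        options = [
            {"treatment": "clear aligner course", "duration_months": 6},
            {"treatment": "fixed retainer", "duration_months": 1},
        ]
    return {"displacement_mm": displacement, "alert": exceeded, "options": options}


result = evaluate_position_change([12.0, 4.0, 1.0], [12.6, 4.1, 1.0], threshold_mm=0.5)
print(result["alert"], result["options"])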

The present invention is applicable to various types of intraoral objects located within and/or associated with the oral cavity. For example, the intraoral object may include one or more of a crown, a root, a gum, an airway, a palate, a tongue, or a jaw. The method may further include processing data comprising the first and second digital data to determine a rate of change of one or more of a shape, a size, or a color of the intraoral object. For example, the intraoral object may include a tooth, and the rate of change may include a rate of change of tooth shape. As another example, the intraoral object may include a gum, and the rate of change may include a rate of change of gum shape.

The systems, methods, and devices presented herein can be used to predict linear and/or non-linear motion of intraoral objects. In some embodiments, determining the future position of the intraoral object includes determining a motion trajectory of the intraoral object based on the velocity. The motion trajectory may be linear. In some embodiments, the method further comprises receiving third digital data representing the actual state within the oral cavity at a third point in time, and determining a motion trajectory of the intraoral object based on the velocities during the first, second, and third points in time. The motion trajectory may be non-linear. For example, a non-linear motion trajectory may include one or more of a change in direction of motion or a change in speed of motion. The future position of the intraoral object may be determined based on the non-linear motion trajectory. Optionally, the method further comprises processing data comprising the first, second, and third digital data to determine a force vector associated with the intraoral object during the first, second, and third points in time.
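The non-linear case can be pictured with a short Python sketch that fits a low-order polynomial to three or more observations of the same object and evaluates it at a future time; the helper name, polynomial degree, and sample coordinates are assumptions for illustration, not the patented algorithm.

# Illustrative sketch: fit a quadratic to each coordinate of an object observed
# at three time points, so that changes in direction or speed of motion are
# captured, then extrapolate to a future time.
import numpy as np


def predict_nonlinear(times_days, positions_mm, future_time_days, degree=2):
    """Fit a degree-`degree` polynomial per coordinate and evaluate it at the future time."""
    times_days = np.asarray(times_days, dtype=float)
    positions_mm = np.asarray(positions_mm, dtype=float)
    return np.array([
        np.polyval(np.polyfit(times_days, positions_mm[:, axis], degree), future_time_days)
        for axis in range(positions_mm.shape[1])
    ])


times = [0.0, 90.0, 180.0]                       # three scan dates, in days
coords = [[12.0, 4.0, 1.0],
          [12.3, 4.0, 1.0],
          [12.4, 4.1, 1.0]]                      # decelerating, slightly curving motion
print(predict_nonlinear(times, coords, future_time_days=365.0))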

In some embodiments, the method further comprises generating a predictive digital representation of the intraoral cavity at a future point in time after the first and second points in time based on the future position of the intraoral object.

In some embodiments, determining the future position of the intraoral object includes extrapolating the velocity to a future point in time using a linear or non-linear extrapolation algorithm.

In some embodiments, the method further comprises determining a future condition within the oral cavity based on the future position of the intraoral object. The future condition may include an undesirable dental or orthodontic condition that would be predicted to occur at a future point in time if not treated within the oral cavity. The future condition may be determined before the future condition occurs.

In some embodiments, one or more of the first digital data is received, the second digital data is received, the data is processed, or the future location is determined with the aid of one or more processors.

In another aspect, a computer system for calculating a future position of an intraoral object of a patient's intraoral cavity includes one or more processors and memory. The memory may include instructions that, when executed by the one or more processors, cause the system to receive first digital data representing an actual state within the oral cavity at a first point in time and second digital data representing the actual state within the oral cavity at a second point in time different from the first point in time. The instructions may cause the system to process data comprising the first and second digital data to determine a velocity of an intraoral object within the oral cavity during the first and second points in time. The instructions may cause the system to determine a future position of the intraoral object at a future point in time based on the velocity, wherein the future position is determined before the intraoral object is at the future position.

In another aspect, a computer-implemented method for calculating a change in position over time of an intraoral object of a patient's intraoral cavity includes receiving first digital data representing an actual state of the intraoral cavity at a first point in time and receiving second digital data representing the actual state of the intraoral cavity at a second point in time different from the first point in time. The method may include processing data comprising the first and second digital data to determine a change in position of the intraoral object between the first and second points in time. The method may include evaluating whether the change in position exceeds a predetermined threshold.

In some embodiments, the method further comprises outputting an alert to a user in response to evaluating that the change in position exceeds the predetermined threshold. Optionally, the method may include generating a plurality of options for producing a desired dental or orthodontic result in response to evaluating that the change in position exceeds the predetermined threshold, and displaying the plurality of options on a user interface shown on the display. The predetermined threshold may be determined based on one or more of user preferences, patient characteristics, or values from the dental or orthodontic literature.

In another aspect, a computer system for calculating a change in position over time of an intraoral object of a patient's intraoral cavity includes one or more processors and memory. The memory may include instructions that, when executed by the one or more processors, cause the system to receive first digital data representing an actual state within the oral cavity at a first point in time and receive second digital data representing the actual state within the oral cavity at a second point in time different from the first point in time. The instructions may cause the system to process data comprising the first and second digital data to determine a change in position of the intraoral object between the first and second points in time. The instructions may cause the system to evaluate whether the change in position exceeds a predetermined threshold.

In another aspect, a method is provided for generating a predictive digital representation of the patient's oral cavity in order to determine the patient's future condition. The method may include receiving first digital data representative of the intraoral cavity at a first point in time and receiving second digital data representative of the intraoral cavity at a second point in time different from the first point in time. A predictive digital representation of the intraoral cavity at a future point in time after the first and second points in time may be generated based on the first and second digital data. The future condition within the oral cavity can be determined from the predictive digital representation. The future condition may include an undesirable dental or orthodontic condition that is predicted to occur at the future point in time if the oral cavity is not treated. The future condition may be determined before the future condition occurs.

Various types of digital data are suitable for use with the present invention. In some embodiments, each of the first and second digital data comprises three-dimensional data of the oral cavity. Alternatively or in combination, each of the first and second digital data comprises two-dimensional data of the oral cavity. In some embodiments, each of the first and second digital data comprises one or more scans of the intraoral cavity. In some embodiments, each of the first and second digital data includes intraoral surface data, and the surface data includes scan data representing a three-dimensional surface topography within the intraoral cavity.

Alternatively or in combination, each of the first and second digital data comprises sub-surface data of the oral cavity. The sub-surface data may include one or more of X-ray data, Cone Beam Computed Tomography (CBCT) data, CAT scan data, Magnetic Resonance Imaging (MRI) data, or ultrasound data. For example, the sub-surface data may comprise a representation of one or more roots of the patient's teeth. Accordingly, the predicted digital representation of the oral cavity may include a predicted digital representation of the one or more roots at the future point in time based on the sub-surface data, and the future condition may be determined based on the predicted digital representation of the one or more roots. Optionally, the sub-surface data comprises a representation of one or more of the patient's airway, jaw, or bone.

In some embodiments, the first and second time points differ by at least 1 month, at least 3 months, at least 6 months, or at least 1 year. The future time point may be at least 1 month, at least 3 months, at least 6 months, at least 1 year, at least 2 years, or at least 5 years after the first and second time points.

Digital data may be obtained and analyzed from a plurality of different time points in order to monitor the progress of the patient's oral cavity over time and predict its future state. For example, in some embodiments, the method further comprises receiving third digital data representative of the intraoral cavity at a third point in time that is different from the first point in time and the second point in time. The predicted digital representation may be generated based on the first, second, and third digital data.

In addition to intraoral digital data, the prediction methods herein may utilize other types of data. In some embodiments, the method further comprises receiving additional data of the patient and generating the predictive digital representation based on the additional data. The additional data may include one or more of demographic information, lifestyle information, medical history, family medical history, or genetic factors.

The prediction techniques described herein may be implemented in a number of ways. In some embodiments, generating the predicted digital representation includes generating a comparison of the first and second digital data. Generating the comparison may comprise registering the first and second digital data with each other in a common coordinate system. Optionally, generating the comparison comprises: measuring a characteristic of an intraoral object at the first point in time; measuring the characteristic of the intraoral object at the second point in time; and determining a change in the characteristic of the intraoral object between the first point in time and the second point in time. The intraoral object may include one or more teeth or gums, and the characteristic may include, for example, one or more of a position, orientation, shape, size, or color of a tooth or gum.
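One common way to register two scans into a common coordinate system is a rigid (Kabsch) alignment on corresponding stable landmarks; the sketch below uses that approach purely to illustrate the registration and measurement step, with made-up landmark and tooth coordinates.

# Illustrative sketch: rigidly align scan 2 to scan 1 using corresponding
# landmark points assumed to be stable between scans, then measure how far a
# tooth centroid moved between the two registered scans.
import numpy as np


def rigid_align(src, dst):
    """Return rotation R and translation t mapping landmark set src onto dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t


landmarks_scan1 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 8.0, 0.0], [3.0, 3.0, 2.0]])
landmarks_scan2 = landmarks_scan1 + np.array([1.0, -0.5, 0.2])    # scan 2 is offset from scan 1

R, t = rigid_align(landmarks_scan2, landmarks_scan1)               # map scan 2 into scan 1's frame
tooth_scan1 = np.array([12.0, 4.0, 1.0])
tooth_scan2 = np.array([13.3, 3.6, 1.2])                           # raw coordinates in scan 2
tooth_scan2_aligned = R @ tooth_scan2 + t
print(np.linalg.norm(tooth_scan2_aligned - tooth_scan1))           # change in tooth position, mm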

In some embodiments, the method further comprises comparing the characteristic measured at one or more of the first point in time or the second point in time to measured characteristics from a patient information database. The measured characteristics at one or more of the first or second points in time may be compared to measured characteristics from the patient information database based on one or more of demographic information, lifestyle information, medical history, family medical history, or genetic factors.
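The database comparison can be pictured as a simple cohort lookup; in the Python sketch below the records, field names, and demographic filter are hypothetical and stand in for whatever patient information database is actually used.

# Illustrative sketch: average a measured characteristic over database records
# that match the patient's demographics, and compare the patient's value to it.
from statistics import mean

DATABASE = [  # hypothetical records, not real patient data
    {"age": 12, "sex": "F", "tooth_movement_mm_per_year": 0.8},
    {"age": 13, "sex": "F", "tooth_movement_mm_per_year": 1.1},
    {"age": 45, "sex": "M", "tooth_movement_mm_per_year": 0.2},
]


def cohort_mean(records, age_range, sex, key):
    """Mean of the given characteristic over records matching the demographic filter."""
    matches = [r[key] for r in records
               if age_range[0] <= r["age"] <= age_range[1] and r["sex"] == sex]
    return mean(matches) if matches else None


patient_rate = 1.4   # this patient's measured rate of tooth movement, mm/year
print(patient_rate, "vs cohort mean",
      cohort_mean(DATABASE, (10, 15), "F", "tooth_movement_mm_per_year"))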

Changes to one or more intraoral objects may be used as a basis for predicting a future state of the patient's intraoral cavity. For example, in some embodiments, the method further comprises predicting a future change to the characteristic of the intraoral object in response to the determined change, and generating the predicted digital representation based on the future change. The future change may be predicted for a selected time interval.

In some embodiments, the predictive digital representation is generated in response to a determined change in a characteristic of the intraoral object between the first and second points in time. Determining the characteristic change may include determining one or more of a tooth movement speed, a tooth shape change speed, a tooth size change speed, or a gum shape change speed. For example, generating the predictive digital representation based on the determined change between the first and second points in time may include determining a rate of change of a characteristic of the intraoral object. The rate of change is extrapolated to a future point in time to predict characteristics of the intraoral object at the future point in time.

The present invention may be used to predict the future state of various types of intraoral objects, such as the future position of one or more teeth. In some embodiments, for example, the predicted digital representation represents the patient's teeth in a predicted arrangement at a future point in time. Accordingly, generating the predicted digital representation may include generating a first digital model representing the patient's teeth in a first arrangement at the first point in time, generating a second digital model representing the patient's teeth in a second arrangement at the second point in time, and calculating a speed of motion of one or more teeth between the first and second arrangements. The one or more teeth of the second digital model may be repositioned according to the speed of tooth motion to generate the predicted arrangement of the patient's teeth at the future point in time. Optionally, repositioning the one or more teeth of the second digital model may include detecting a collision occurring between the one or more teeth during the repositioning and modifying the speed of tooth movement in response to the detected collision.
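A simplified Python sketch of this repositioning step is given below; teeth are reduced to spheres and the collision handling rule (halving the faster tooth's motion) is an assumption chosen for illustration, not the mesh-based collision detection a real system would likely use.

# Illustrative sketch: advance each tooth center along its estimated velocity to
# the prediction horizon; when two predicted teeth overlap (sphere test), damp
# the faster tooth's motion and repeat until no collisions remain.
import numpy as np


def reposition_with_collisions(centers_mm, radii_mm, velocities_mm_per_day,
                               horizon_days, damping=0.5, max_iter=20):
    """Return predicted tooth centers, reducing motion where collisions are detected."""
    centers_mm = np.asarray(centers_mm, float)
    radii_mm = np.asarray(radii_mm, float)
    velocities = np.asarray(velocities_mm_per_day, float)
    scale = np.ones(len(centers_mm))                    # per-tooth motion scale factor
    predicted = centers_mm.copy()
    for _ in range(max_iter):
        predicted = centers_mm + (scale[:, None] * velocities) * horizon_days
        collided = False
        for i in range(len(predicted)):
            for j in range(i + 1, len(predicted)):
                if np.linalg.norm(predicted[i] - predicted[j]) < radii_mm[i] + radii_mm[j]:
                    speed_i = scale[i] * np.linalg.norm(velocities[i])
                    speed_j = scale[j] * np.linalg.norm(velocities[j])
                    faster = i if speed_i >= speed_j else j
                    scale[faster] *= damping            # slow the faster tooth (illustrative rule)
                    collided = True
        if not collided:
            break
    return predicted


centers = [[0.0, 0.0, 0.0], [8.0, 0.0, 0.0]]
radii = [3.5, 3.5]
velocities = [[0.01, 0.0, 0.0], [0.0, 0.0, 0.0]]        # the left tooth drifts toward the right one
print(reposition_with_collisions(centers, radii, velocities, horizon_days=365.0))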

Many different types of dental or orthodontic conditions can be predicted using the embodiments provided herein. In some embodiments, the undesirable dental or orthodontic condition includes one or more of: malocclusion, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, temporomandibular joint disorder, bruxism, obstruction of the airway, or sleep apnea. By measuring one or more parameters of the predicted digital representation indicative of an undesirable dental or orthodontic condition, a future condition can be determined. The one or more parameters may include one or more of: an amount of overbite, an amount of underbite, an amount of tooth tilt, an amount of tooth extrusion, an amount of tooth intrusion, an amount of tooth rotation, an amount of tooth translation, an amount of tooth spacing, an amount of tooth crowding, an amount of tooth wear, an amount of gum atrophy, a jaw width or a palate width. In some embodiments, the method further comprises comparing the one or more parameters to one or more thresholds to determine whether an undesirable dental or orthodontic condition is predicted to occur at a future point in time. An alert may be generated if an anomaly is detected in one or more parameters. Optionally, the method may further include generating a user interface displayed on the display, the user interface configured to display the one or more parameters.
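As a schematic illustration of this parameter check, the short Python sketch below compares measured parameters of a predicted model against per-condition thresholds; the parameter names and numeric thresholds are invented for the example and are not clinical values.

# Illustrative sketch: flag any condition whose associated parameter, measured
# on the predicted digital representation, exceeds its threshold.
PREDICTED_PARAMETERS = {"overjet_mm": 4.5, "crowding_mm": 2.0, "gum_recession_mm": 0.5}

CONDITION_THRESHOLDS = {
    "excessive overjet": ("overjet_mm", 3.0),
    "crowding": ("crowding_mm", 4.0),
    "gingival recession": ("gum_recession_mm", 1.0),
}


def predicted_conditions(params, thresholds):
    """Return the conditions whose associated parameter exceeds its threshold."""
    flagged = []
    for condition, (param, limit) in thresholds.items():
        if params.get(param, 0.0) > limit:
            flagged.append(condition)
    return flagged


print(predicted_conditions(PREDICTED_PARAMETERS, CONDITION_THRESHOLDS))
# -> ['excessive overjet'] for the example values above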

The results of the prediction techniques presented herein may be displayed to a user. In some embodiments, the method further comprises generating a user interface displayed on the display. The user interface may be configured to display a predicted three-dimensional model representation of the predicted digital representation within the oral cavity at a future point in time. The user interface may be configured to display a first three-dimensional model representation of the oral cavity at a first point in time and a second three-dimensional model representation of the oral cavity at a second point in time. Optionally, the user interface is configured to display a superposition of two or more of the first three-dimensional model, the second three-dimensional model, or the predicted three-dimensional model.

Once a future condition is identified, potential treatment options may be generated and displayed to the user. Accordingly, in some embodiments, the method further comprises generating one or more treatment options for the future condition and displaying the one or more treatment options on a user interface shown on the display. The one or more treatment options may include a list of one or more treatment products or procedures for the future condition. The list of one or more treatment products or procedures may include one or more of pricing information, treatment time information, treatment complication information, or insurance reimbursement information. Optionally, the method may further comprise generating a comparison of the one or more treatment options and displaying the comparison on the user interface shown on the display. Generating the comparison may include comparing one or more of treatment effectiveness, cost, duration, or predicted outcome.

Some embodiments of the invention provide for predicting the treatment outcome that would be achieved by administering one or more treatment options to the patient. In some embodiments, the method further comprises generating a second predicted digital representation of the oral cavity after implementing at least one of the one or more treatment options, and displaying, on a user interface shown on the display, a second predicted three-dimensional model representing the second predicted digital representation over time.

In some embodiments, the method further comprises receiving user input selecting at least one of the one or more treatment options via a user interface shown on the display, and generating a command for the at least one treatment option in response to the received user input. Optionally, the command is generated based on one or more of the first or second digital data.

In another aspect, the invention provides a system for generating a predictive digital representation of a patient's oral cavity to determine a future condition of the patient. The system may include one or more processors and memory. The memory may include instructions executable by the one or more processors to cause the system to receive first digital data representative of the intraoral cavity at a first point in time and receive second digital data representative of the intraoral cavity at a second point in time different from the first point in time. The instructions may cause the system to generate a predicted digital representation of the intraoral cavity at a future point in time subsequent to the first and second points in time based on the first and second digital data. The instructions may cause the system to determine a future condition within the oral cavity based on the predictive digital representation. The future condition may include an undesirable dental or orthodontic condition that is predicted to occur at the future point in time if the oral cavity is not treated. The future condition may be determined before the future condition occurs.

In some embodiments, the system further comprises a display, and the instructions cause the display to generate the user interface. The user interface may be configured to display one or more of the following: a predicted three-dimensional model representing a predicted digital representation of the oral cavity, a first three-dimensional model representing the oral cavity at a first time point, a second three-dimensional model representing the oral cavity at a second time point, or one or more treatment options for a future condition. In some embodiments, the user interface is configured to display one or more treatment options and receive a user input selecting at least one of the one or more treatment options. Optionally, the user interface is configured to display a second predicted three-dimensional model representing a second predicted digital representation within the oral cavity after implementing at least the treatment option. In some embodiments, the user interface is configured to display a comparison of the one or more treatment options.

Although certain embodiments herein describe predicting a future tooth arrangement, this is not intended to be limiting, and it should be understood that the systems, methods, and apparatus of the present invention may also be applied to estimate future states of other types of tissues or objects within the oral cavity, such as, for example, the gums, jaws, palate, tongue, and/or airway.

Referring now to the drawings, FIG. 1 shows a skull 10 having an upper jawbone 22 and a lower jawbone 20. The mandible 20 is hinged to the skull 10 at joint 30. The joint 30 is known as the temporomandibular joint (TMJ). The upper jawbone 22 is associated with the upper jaw 101, while the lower jawbone 20 is associated with the lower jaw 100.

Computer models of the jaws 100 and 101 are generated, and these models simulate the interaction between the teeth on the jaws 100 and 101. The computer simulation may allow the system to focus on motions involving contact between teeth mounted on the jaws. The simulation may also allow the system to provide realistic jaw movements that are physically correct when the jaws 100 and 101 are in contact with each other. Further, the model may be used to simulate jaw movements, including protrusive, lateral, and "tooth-guided" movements in which the path of the lower jaw 100 is guided by tooth contact rather than by the structural limits of the jaws 100 and 101. Movement may be determined for a single jaw, or for both jaws together so as to indicate the occlusion.

Referring now to fig. 2A, for example, the lower jaw 100 includes a plurality of teeth 102. At least some of the teeth may be moved from an initial tooth arrangement to a subsequent tooth arrangement. As a frame of reference for describing how a tooth may move, an arbitrary centerline (CL) is drawn through the tooth 102. With reference to the centerline (CL), the motion of each tooth can be tracked in the orthogonal directions represented by axes 104, 106, and 108 (where 104 is the centerline). The centerline may be rotated about the axis 108 (root angulation) and the axis 104 (torque), as indicated by arrows 110 and 112, respectively. In addition, the tooth may rotate about the centerline. Thus, all possible free-form movements of the tooth can be tracked. These movements include translation (e.g., motion along one or more of the X-axis or Y-axis), rotation (e.g., motion about the Z-axis), extrusion (e.g., motion along the Z-axis), or tilting (e.g., motion about one or more of the X-axis or Y-axis), among others. In addition to tooth motion, a model such as the model 100 may be used to track movement of the gum line 114. In some embodiments, the model includes X-ray information of the jaw so that the motion of the roots of the teeth can also be tracked.

Fig. 2B illustrates how the magnitude of any tooth movement may be defined in terms of the maximum linear translation of any point P on a tooth 102. Each point P1 may undergo a cumulative translation as the tooth is moved in any of the orthogonal or rotational directions defined in fig. 2A. That is, although a point will generally follow a non-linear path, a linear distance can be determined between the positions of any point on the tooth at any two times during treatment. Thus, in practice, an arbitrary point P1 may undergo a true side-to-side translation as indicated by arrow d1, while at the same time a second arbitrary point P2 may follow an arcuate path, resulting in a final translation d2. Many aspects of the invention are defined in terms of the maximum permissible movement of a point P1 induced on any particular tooth. Accordingly, this maximum tooth movement may be defined as the maximum linear translation of the point P1 on the tooth that undergoes the maximum movement for that tooth in any treatment step.
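The "maximum linear translation of any point" measure can be computed directly once the rigid transform of a tooth between two time points is known; the Python sketch below does this for a few sampled crown points, all of which are illustrative values rather than data from the disclosure.

# Illustrative sketch: apply a rigid transform (rotation R, translation t) to a
# set of sampled points on a tooth and report the largest straight-line
# displacement of any point, i.e. the maximum tooth movement for that step.
import numpy as np


def max_point_translation(points_mm, rotation, translation_mm):
    """Largest linear displacement of any sampled point under the transform (R, t)."""
    points_mm = np.asarray(points_mm, float)
    moved = points_mm @ np.asarray(rotation, float).T + np.asarray(translation_mm, float)
    return float(np.max(np.linalg.norm(moved - points_mm, axis=1)))


# A 5-degree rotation about the tooth's long axis plus a 0.2 mm translation.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.2, 0.0, 0.0])
crown_points = [[3.0, 0.0, 8.0], [-3.0, 0.0, 8.0], [0.0, 2.5, 8.0]]
print(max_point_translation(crown_points, R, t))    # maximum displacement, in mm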

The present invention provides a system, method and apparatus for detecting and tracking changes in one or more structures in the oral cavity, including but not limited to the teeth, gums, jaws, temporomandibular joints, tongue, palate and airway. Examples of such changes include one or more of the following: movement (e.g., extrusion, intrusion, rotation, torsion, tilting, or translation) of one or more teeth; a change in size, shape, and/or color of one or more teeth; a change in size, shape, and/or color of gums associated with one or more teeth; a change in the occlusal relationship ("bite") between the upper and lower jaws; a change in jaw and/or palate width; a change in tongue positioning; or a change in airway shape.

In some embodiments, changes in the patient's oral cavity may cause and/or indicate one or more dental or orthodontic conditions. As used herein, the term "condition" refers to a disease, abnormality, or other undesirable, abnormal and/or dysfunctional condition exhibited by a patient. Examples of such conditions include, but are not limited to: malocclusions (e.g., crowding of teeth, spacing of teeth, overbite, overjet, backjet, cross-bite, open bite), tooth decay, loss of one or more teeth, root resorption, periodontal disease (e.g., gingivitis, periodontitis), gingival atrophy, temporomandibular joint abnormalities, bruxism, blocked airways, and sleep apnea. As used herein, a condition can be distinguished from the results of unsatisfactory or unsuccessful treatment or other therapeutic intervention (e.g., deviation of teeth from a tooth arrangement specified in an orthodontic treatment plan, failure of sleep apnea treatment to achieve an intended result, etc.).

In some embodiments, certain changes within the oral cavity are indicative of and/or associated with inevitable future undesired dental or orthodontic conditions. For example, some tooth movements, if allowed to develop, may result in a future malocclusion. As another example, some changes in gum shape may indicate future gum atrophy. In yet another example, insufficient width of the patient's dental arch and/or palate may be associated with an increased likelihood of sleep apnea, for example, due to posterior displacement of the tongue. In yet another example, a change in tooth and/or gum staining may be indicative of tooth decay and/or periodontal disease.

Accordingly, the present invention provides a system, method and apparatus for predicting a patient's future condition by detecting and tracking changes in the oral cavity over time. In some embodiments, to detect changes within the oral cavity, data representing the intraoral cavity is captured at a plurality of different points in time. Based on the detected changes, a prediction of the state within the oral cavity (e.g., location, shape, size, etc. of one or more objects within the oral cavity) may be made for a future point in time. The predicted future state may be analyzed to identify any future dental or orthodontic condition that may occur at the future point in time, for example, future malocclusion, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, temporomandibular joint abnormalities, bruxism, blocked airways and/or sleep apnea. Once a future condition is determined, potential treatment options (e.g., for preventing or correcting the condition) may be generated and presented to medical personnel and/or the patient for review. In some embodiments, these approaches are implemented as computer-implemented methods with numerical modeling, enabling detection of intraoral changes and prediction of future conditions with greater sensitivity and accuracy than traditional visual inspection.

In some embodiments, the systems, methods, and devices of the present invention are used to predict future conditions before they occur. For example, some embodiments herein may be used to predict a future malocclusion of a patient's teeth even if the patient's current tooth arrangement is normal. As another example, some embodiments herein may be used to predict that a patient will suffer from sleep apnea in the future, even if the patient is not currently experiencing any sleep apnea events. The methods herein allow a future dental or orthodontic condition to be predicted months or even years before the condition actually manifests in the patient, thereby enabling early and proactive treatment of such conditions.

Fig. 3 illustrates a system 300 for predicting a patient's future dental or orthodontic condition. The system 300 includes a plurality of data sets representing the patient's intraoral cavity at a plurality of different points in time, such as first digital data 302 representing the intraoral cavity at a first point in time and second digital data 304 representing the intraoral cavity at a second point in time (e.g., after the first point in time). Digital data for additional time points (e.g., third digital data for a third time point, fourth digital data for a fourth time point, etc.) may also be included, if desired. The digital data may provide a representation of the actual state of one or more intraoral objects (e.g., teeth, gums, jaws, TMJ joints, tongue, palate, etc.), such as the positioning, shape, size, coloration, etc. of the intraoral object at a particular point in time. For example, the first digital data 302 may include a three-dimensional model representing an arrangement of one or more teeth and/or gums at a first point in time, and the second digital data 304 may include a three-dimensional model representing one or more teeth and/or gums at a second point in time. Alternatively or in combination, the digital data may provide data for other parts of the oral cavity in addition to the teeth and surrounding tissue (e.g., the patient's jaw or airway). These data provide a more comprehensive understanding of how various structures within the oral cavity interact to produce a dental or orthodontic condition, and how to correct these interactions to reduce or treat the dental or orthodontic condition. For example, treatment of sleep apnea may involve correction of tooth position (e.g., palatal expansion to move the tongue forward) and correction of the patient's jaw and bite alignment (e.g., jaw advancement to tighten the tissues of the airway). It should be understood that data representing the actual state within the oral cavity may be distinguished from data representing an expected, desired, or ideal state within the oral cavity, such as data representing a desired state achieved by implementing a dental or orthodontic treatment, e.g., a target arrangement of teeth in an orthodontic treatment plan.

Various types of digital data are suitable for use with the embodiments presented herein, such as two-dimensional data (e.g., photographs, radiographs, or other types of images) or three-dimensional data (e.g., scan data, surface topography data, three-dimensional models constructed from two-dimensional data). The digital data may be static (e.g., images) or dynamic (e.g., video). The digital data may provide a representation of the size, shape, and/or surface topography of one or more intraoral objects, such as scans or images depicting the position and orientation of the patient's teeth, gums, and the like. Additionally, the digital data may also provide a representation of the spatial relationship (e.g., relative position and orientation) of different intraoral objects to one another, such as bite registration data representing the occlusal relationship between the upper and lower jaws, cephalometric analysis data, face-bow measurement data representing the position of the TMJ relative to the dental arch, and the like. A number of different types of digital data may be combined with one another to form a digital model that accurately represents the surface and/or subsurface structures of the patient's intraoral cavity at a particular point in time. For example, two-dimensional data may be combined with three-dimensional data as further described herein.

In some embodiments, the digital data includes scan data, such as one or more three-dimensional scans of the patient's intraoral cavity. Three-dimensional intraoral scanning can be performed, for example, by an intraoral scanner that uses confocal focusing of an array of light beams to determine surface topography (e.g., the iTero™ and iOC™ scanning systems available from Align Technology, Inc. of San Jose, California). The scan data may provide a digital representation of the three-dimensional surface topography of intraoral objects, such as teeth and/or gums. For example, small changes in the dentition can be accurately captured and observed in a relatively non-invasive manner through three-dimensional intraoral scanning. Three-dimensional intraoral scanning may not use ionizing radiation, making the procedure very safe relative to other techniques that may use X-rays, such as Cone Beam Computed Tomography (CBCT) or CAT scanning. Three-dimensional intraoral scans may also have sufficient accuracy and resolution (e.g., 20-50 microns) to detect small or subtle changes that are difficult to detect with alternative methods (e.g., dentition impressions such as silicone or alginate impressions, or visual inspection). In some embodiments, the scan data is segmented to separate individual teeth from each other and/or from the gums, thereby providing an operable three-dimensional representation of each tooth. Alternatively, unsegmented scan data may be used.

In some embodiments, the digital data provides surface data representing one or more visible external surfaces within the oral cavity (e.g., tooth surfaces located above the gum line, such as the crowns, gum surfaces, tongue, etc.). As described above and herein, surface data may be obtained using intraoral scanning. Alternatively or in combination, the digital data may provide subsurface data representing one or more subsurface structures within the oral cavity (e.g., portions of the teeth below the gum line that are not visible in surface scan data, such as the tooth roots, bone, muscle, jaws, airway, etc.). The subsurface data may include one or more of X-ray data (e.g., bitewing, periapical, cephalometric, or panoramic X-rays), CBCT data, CAT scan data, Magnetic Resonance Imaging (MRI) data, or ultrasound data. The subsurface data may be two-dimensional (e.g., images) or three-dimensional (e.g., volumetric data).

In some embodiments, the subsurface data may be combined with the surface data to generate a three-dimensional digital representation of the intraoral cavity including both surface and subsurface structures. For example, the surface data and the subsurface data may be combined to generate a three-dimensional representation of the entire tooth structure including the crown, the root, and the gums. Data on the tooth roots is useful in improving the understanding of tooth movement, particularly non-linear changes in the speed and/or direction of movement, and therefore may be useful in more accurately predicting future tooth movement. For example, impaction of the tooth roots, changes in root shape due to decay or resorption, and the like can affect the movement of the teeth. Some types of tooth movement may be "root first" movements, in which movement of the root drives a corresponding movement of the crown. In some embodiments, a scan may be used to obtain three-dimensional surface data of one or more tooth crowns. Three-dimensional subsurface data for one or more tooth roots corresponding to the crowns may be obtained using CBCT scan data. Alternatively or in combination, two-dimensional X-rays or other images may be stitched together to form a three-dimensional representation of the tooth roots. Optionally, the tooth crown data and tooth root data may each be segmented into separate tooth components to allow the components to be manipulated separately. The tooth crown data and tooth root data may then be digitally combined to generate a three-dimensional model of the entire tooth, for example, using surface matching. In some embodiments, a surface matching algorithm uses surface data of the tooth crowns to match and orient the tooth root data to the correct position for each tooth. Matching may be performed in three-dimensional space based on landmarks (e.g., gingival margin, occlusal ridges). The accuracy and speed of the matching may vary during the matching process based on the amount of surface data used. Once the roots are matched and in the correct positions, the algorithm can sample the root data to create root surface data. The root surface data may then be stitched to the crown surface data to generate a tooth model.
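
By way of illustration only, the following minimal Python sketch shows how a landmark-based rigid alignment of this kind could be computed. The landmark coordinates, the variable names, and the use of a simple least-squares (Kabsch) fit are assumptions made for the sketch rather than details taken from the description.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src landmarks onto dst landmarks (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical landmarks (e.g., gingival margin and occlusal ridge points)
# picked on the CBCT-derived root data and on the intraoral-scan crown surface.
root_side_landmarks = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 6.0, 1.0]])
scan_side_landmarks = np.array([[1.0, 0.5, 0.0], [5.0, 0.5, 0.1], [1.1, 6.4, 1.0]])

R, t = rigid_transform(root_side_landmarks, scan_side_landmarks)

# Any root surface point can now be carried into the crown-scan coordinate
# system before the two surfaces are stitched into one tooth model.
root_points = np.array([[2.0, 3.0, -8.0]])
aligned_root_points = root_points @ R.T + t
print(aligned_root_points)
```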

The points in time at which the digital data are generated and/or obtained may vary as desired. For example, each time point may differ by at least 1 month, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 7 months, at least 8 months, at least 9 months, at least 10 months, at least 11 months, at least 1 year, at least 2 years, at least 5 years, or any other extended time interval sufficient to accurately detect changes and/or make predictions. The interval between time points may be the same or may vary, as desired. For example, the intervals between time points may be shorter for patients for which more change is expected (e.g., pediatric patients) and longer for patients for which less change is expected (e.g., adult patients). In some embodiments, each digital data set is obtained at a different point in time, while in other embodiments, at least some digital data sets are obtained at the same point in time. In some embodiments, the digital data is obtained during periodic dental examinations (e.g., annual or semi-annual examinations) such that the points in time correspond to the times of the examinations. Alternatively, the digital data may be obtained before and/or after a surgical procedure within the patient's oral cavity, which facilitates obtaining digital data over a very short time interval for more accurate monitoring.

Optionally, the system 300 may include one or more sets of additional data 306. Additional data 306 may include any data of the patient potentially relating to dental or orthodontic health, such as demographic information (e.g., age, gender, ethnicity), lifestyle information (e.g., physical activity level, smoking status, drug intake status, drinking status, eating habits, oral hygiene habits), medical information (e.g., height, weight, Body Mass Index (BMI)), medical history, family medical history, and/or genetic factors. For example, these patient characteristics may affect the likelihood of certain dental or orthodontic conditions occurring. The additional data may be obtained at a single point in time or at a plurality of different points in time, and the points in time may or may not correspond to the points in time of the digital data 302, 304.

In some embodiments, the digital data and/or additional data used to generate the prediction as described herein includes one or more of the following: two-dimensional images, three-dimensional images generated from one or more two-dimensional images, CBCT data, three-dimensional scan data, video data, prospective analysis data, plaster model analysis data, growth predictions, bite relationships, historical data of similar patients and/or treatments, family history, gender, age, eating habits, whether the patient experiences sleep difficulties, whether the patient snores, sleep apnea diagnostic data, titration test data for oral sleep appliances, bruxism, and/or analysis data from a treatment professional.

The first digital data 302, the second digital data 304, and/or the additional data 306 are used to generate a predicted digital representation 308 of the patient's intraoral cavity. The predicted digital representation 308 may be a two-dimensional or three-dimensional model of the patient's oral cavity at a future time point subsequent to the first and second time points. The future time point may be at least 1 month, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 7 months, at least 8 months, at least 9 months, at least 10 months, at least 11 months, at least 1 year, at least 2 years, at least 5 years, at least 10 years, at least 20 years, or any other desired length of time following the last time point at which digital data was obtained. In some embodiments, the predicted digital representation 308 represents a predicted state (e.g., position, shape, size, coloring, etc.) of one or more intraoral objects at the future point in time.

The predicted digital representation 308 may be generated in various ways. In some embodiments, a comparison of the digital data (e.g., first digital data 302 and second digital data 304) representing the intraoral cavity at a plurality of different points in time is generated to determine changes in the oral cavity over time. For example, using the digital data, one or more characteristics (e.g., position, orientation, shape, size, color) of one or more intraoral objects (e.g., teeth, gums, jaws, TMJ, palate, airway) are measured at each of the plurality of different points in time. By comparing measurements obtained at different points in time, the rate, magnitude, direction, location, etc. of changes in an intraoral object can be determined. These changes can be extrapolated to a future point in time to predict the future state of the intraoral object. Thus, by repeating this process for each intraoral object of interest, the predicted digital representation 308 of the intraoral cavity is generated. Exemplary methods for generating the predicted digital representation 308 are described in more detail below.
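
As a rough illustration of this comparison-and-extrapolation step, the following Python sketch computes an average rate of change for a single measured characteristic and projects it to a future date. The dates, the position values, and the purely linear projection are assumptions made for the example, not values from the description.

```python
from datetime import date

def rate_of_change(value_1, value_2, t1, t2):
    """Average change per day of a measured characteristic between two visits."""
    days = (t2 - t1).days
    return (value_2 - value_1) / days

def extrapolate(value_2, rate, t2, t_future):
    """Linear projection of the characteristic to a future date."""
    return value_2 + rate * (t_future - t2).days

# Hypothetical example: position of one tooth landmark (mm along the arch).
t1, t2, t_future = date(2015, 1, 5), date(2015, 7, 5), date(2016, 7, 5)
pos_1, pos_2 = 12.0, 12.4          # measured from first and second digital data

v = rate_of_change(pos_1, pos_2, t1, t2)
predicted_pos = extrapolate(pos_2, v, t2, t_future)
print(f"velocity {v:.4f} mm/day -> predicted position {predicted_pos:.2f} mm")
```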

Alternatively or in combination, the predictive digital representation 308 is generated based on a comparison of patient data (e.g., digital data, additional data, measured characteristics, determined changes) with historical data (e.g., stored in a patient information database) of similar patients. For example, the historical data may be patient data having characteristics similar or closely matching the current patient's condition. The similarity may include bite, tooth position, tooth shape, tooth movement speed, tooth shape change speed, and the like. Alternatively, similarity may be based on additional patient-specific factors described herein, e.g., patients with similar demographic information, lifestyle information, medical history, family medical history, and/or genetic factors. The determined changes in the patient's oral cavity may be compared to data from a patient database for similar changes in similar patients. In some embodiments, the determined rate, magnitude, direction, and/or location of change of one or more intraoral objects may be adjusted based on historical patient data. The historical patient data may be used to predict future outcomes of changes in the patient's oral cavity in order to generate a predicted digital representation 308.

The predicted digital representation 308 is used to predict a future condition 310 within the patient's oral cavity. As described above and herein, the future condition 310 may be a dental or orthodontic condition predicted to occur at a future point in time (e.g., malocclusion, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, TMJ abnormalities, bruxism, blocked airways, sleep apnea, etc.) if the oral cavity is left untreated. As used herein, "untreated" means that no treatment is being administered for the particular condition, and does not necessarily imply that the patient is not receiving treatment for other conditions. For example, if the patient is not receiving treatment to correct or prevent malocclusions, the future condition 310 may be a malocclusion that is predicted to occur in the future. As another example, if the patient is not being treated to correct or prevent gum atrophy, the future condition 310 may be gum atrophy that is predicted to occur in the future.

In some embodiments, the future condition 310 is predicted by analyzing the predicted digital representation 308 to identify whether any undesirable dental or orthodontic conditions are present at the future point in time. For example, the positions of one or more teeth in the predicted digital representation 308 may be evaluated to determine whether a malocclusion exists. As another example, the location of the gum line in the predicted digital representation 308 may be evaluated to determine whether excessive gingival atrophy has occurred. Alternatively, one or more parameters indicative of an undesirable condition may be measured using the predicted digital representation 308, such as an amount of overbite, an amount of underbite, an amount of tooth tilt, an amount of tooth extrusion, an amount of tooth intrusion, an amount of tooth rotation, an amount of tooth translation, an amount of tooth spacing, an amount of tooth crowding, an amount of tooth wear, an amount of gum recession, a jaw width, and/or a palate width. The measured parameters may be compared to expected ranges and/or threshold values for the parameters to determine whether an undesirable condition may occur at the future point in time.
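
A minimal sketch of such a threshold comparison is shown below. The parameter names and numeric ranges are placeholders chosen for illustration and are not clinical limits taken from the description.

```python
# Hypothetical expected ranges for a few measured parameters (placeholders).
EXPECTED_RANGES = {
    "overjet_mm":       (0.5, 3.0),
    "crowding_mm":      (0.0, 2.0),
    "gum_recession_mm": (0.0, 1.0),
}

def flag_future_conditions(measured):
    """Compare parameters measured on the predicted digital representation
    against expected ranges and report any that fall outside them."""
    flags = []
    for name, value in measured.items():
        low, high = EXPECTED_RANGES[name]
        if not (low <= value <= high):
            flags.append((name, value))
    return flags

predicted_measurements = {"overjet_mm": 4.2, "crowding_mm": 1.1, "gum_recession_mm": 1.6}
for name, value in flag_future_conditions(predicted_measurements):
    print(f"possible future condition: {name} = {value}")
```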

Based on the predicted future condition 310, one or more treatment options 312 may be generated. Treatment options may include products and/or procedures for correcting and/or preventing the predicted dental or orthodontic condition. Exemplary treatment products include, but are not limited to, orthodontic appliances (e.g., tooth repositioning appliances such as aligners or braces, retainers, sleep apnea devices, mouth guards, dental splints, bite plates), implants and restorations (e.g., prostheses such as crowns or bridges, fillings), and drugs (e.g., antibiotics, mouthwash, toothpaste). Exemplary treatment procedures include, but are not limited to, corrective surgery (e.g., orthognathic surgery, periodontal surgery), changes to the dentition and/or other intraoral objects (e.g., orthodontics, tooth extraction, space maintenance, space monitoring, space restoration, interproximal reduction (IPR), distalization, palatal expansion, bite adjustment), changes in oral hygiene habits (e.g., brushing, flossing, using mouthwash), and lifestyle changes (e.g., eating habits, physical activity level, smoking status, drug intake status, alcohol intake status).

Some examples of dental or orthodontic conditions and corresponding treatment options include: blocked airway or sleep apnea (e.g., a sleep apnea device such as a mandibular advancement appliance, corrective surgery, palatal expansion, etc.), tooth crowding (e.g., tooth extraction, IPR, distalization, palatal expansion, orthodontics, etc.), at least one missing tooth (e.g., space closure, implants, corrective surgery, etc.), tooth spacing problems (e.g., space closure, IPR, tooth extraction, palatal expansion, corrective surgery, etc.), gum disease (e.g., corrective surgery, improved hygiene recommendations, mouthwash, etc.), gum atrophy (e.g., corrective surgery, improved hygiene recommendations, mouthwash, etc.), TMJ abnormalities (e.g., jaw repositioning surgery, jaw repositioner, etc.), underbite (e.g., corrective surgery, corrective appliance, etc.), overbite (e.g., corrective appliance, orthodontics, etc.), cross bite (e.g., orthodontic appliances, palatal expansion, orthodontics, etc.), open bite (e.g., orthodontic appliances, orthodontic surgery, orthodontics, etc.), overjet (e.g., orthodontic appliances, orthodontic surgery, etc.), backjet (e.g., orthodontic appliances, orthodontic surgery, etc.), malocclusion (e.g., space maintenance, space monitoring or space restoration, orthodontic surgery, etc.), root resorption (e.g., implants, etc.), or bruxism (e.g., orthodontic appliances, bite adjustment, etc.).

Treatment options 312 may be provided as a list of treatment products and/or procedures. Optionally, the list may also include one or more of pricing information, treatment time information, treatment complication information, or insurance reimbursement information for the treatment options. The listed treatment options may be arranged in a variety of ways, such as by therapeutic effectiveness, cost, treatment time, required patient compliance, insurance reimbursement, and so forth. The list may also include hyperlinks to preferred suppliers of the treatment products and/or procedures and/or to medical personnel. For example, an airway problem may be identified, and one or more airway specialists may be recommended.

Optionally, a predicted outcome 314 of one or more treatment options may be generated. The predicted outcome 314 may represent a predicted state (e.g., location, shape, size, coloration, etc.) of one or more intraoral objects at a future point in time subsequent to delivery of the selected treatment option. The future time point may be at least 1 month, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 7 months, at least 8 months, at least 9 months, at least 10 months, at least 11 months, at least 1 year, at least 2 years, at least 5 years, at least 10 years, at least 20 years, or any other desired length of time subsequent to administration of the treatment option. Optionally, the future time point is determined based on patient activities or life events (e.g., a wedding, vacation, business trip, etc.). In some embodiments, generating the predicted outcome 314 involves generating one or more models (e.g., two-dimensional or three-dimensional models) representative of the patient's oral cavity after treatment is administered.

The predicted treatment outcome 314 may be generated using various techniques. For example, the results 314 may be generated based on previously obtained data (e.g., the first digital data 302, the second digital data 304, and/or the additional data 306) of the patient's intraoral cavity and/or other related patients. In some embodiments, the treatment outcome 314 is determined based on a comparison to historical data of similar patients (e.g., patients with similar characteristics to the current patient). Alternatively, historical treatment data representing the results of similar treatments may be used to predict the results of applying the treatment to the current patient.

Optionally, in some embodiments, an existing condition 316 within the patient's oral cavity is also determined. The existing condition may be an undesirable dental or orthodontic condition (e.g., malocclusion, tooth decay, loss of one or more teeth, root resorption, periodontal disease, gingival atrophy, temporomandibular joint abnormalities, bruxism, blocked airways, sleep apnea, etc.) that has already occurred and is currently present within the patient's oral cavity. In some embodiments, the existing condition 316 is identified by analyzing previous and/or current data representing the patient's oral cavity (e.g., the first digital data 302 and the second digital data 304) to identify whether any undesirable dental or orthodontic conditions are currently present. Identifying an existing condition from the digital data is similar to identifying a future condition from the predicted digital representation as described above and herein. Similar to the procedures described herein for the future condition 310, treatment options 312 and/or predicted outcomes 314 for the existing condition 316 may be generated. It should be understood that the embodiments presented herein for detecting and treating a predicted future condition are equally applicable to detecting and treating an existing condition.

As described above and herein, by comparing digital data of the oral cavity obtained at different time points, a predicted digital representation of the patient's oral cavity at a future time point can be generated. In some embodiments, the digital data is compared to determine changes in one or more characteristics (e.g., location, size, shape, color, etc.) of an intraoral object over time. The digital data sets may be compared using various methods to identify changes in the intraoral objects. In some embodiments, two or more sets of digital data are registered with each other within a common coordinate system. The registration may be between a two-dimensional digital data set and another two-dimensional digital data set (e.g., two images), between a three-dimensional digital data set and another three-dimensional digital data set (e.g., two three-dimensional digital models), and/or between a two-dimensional digital data set and a three-dimensional digital data set (e.g., an image with a three-dimensional model), as desired. By registering the data with each other in a single coordinate system, a reference frame for the measurements is established.

For example, digital data of the teeth obtained at different points in time (e.g., the stitched three-dimensional models of tooth crowns and tooth roots described herein) may be processed to determine unique identifiers or identifying features such as cusps, edges of teeth, or the facial axis of the clinical crown (FACC). These identifiers may be matched with corresponding identifiers in other digital representations in order to determine the transformation that has occurred between the different points in time. Alternatively or in combination, the digital data are registered to each other using a surface matching algorithm. In some embodiments, the matching process roughly positions the two teeth based on the crown centers and the local coordinate system of each tooth. Then, for each tooth, a matching operation is performed. The matching operation may be an iterative process that minimizes error values while attempting to find a suitable tooth position. In some embodiments, the process finds points on the crown of the original tooth and corresponding points on the current tooth and calculates the distances between these points. The process then determines the transform that minimizes the sum of the squares of these errors. The tooth is repositioned and the process repeated: a new set of points is selected, and the process finds the differences and determines the transform that minimizes the error. These steps may be iterated until the error is less than a termination criterion or a maximum number of iterations is reached.
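
The iterative point-matching loop described above can be sketched as follows. This simplified Python version uses brute-force nearest-neighbour correspondences and a least-squares rigid fit; the point sets, tolerance, and iteration limit are assumptions made for the illustration, not details from the description.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Rigid transform minimizing the sum of squared distances between
    corresponding points P and Q."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - Pc).T @ (Q - Qc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, Qc - R @ Pc

def register_tooth(src, dst, max_iter=50, tol=1e-6):
    """Iteratively match points on the earlier crown (src) to the nearest
    points on the later crown (dst) and accumulate the aligning transform."""
    src = src.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        # nearest-neighbour correspondences (brute force for brevity)
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        err = np.mean(np.sum((src - matched) ** 2, axis=1))
        if abs(prev_err - err) < tol:       # termination criterion
            break
        prev_err = err
    return src, err

# Hypothetical sparse crown samples at two time points (the second is a shifted copy).
crown_t0 = np.random.default_rng(0).random((40, 3))
crown_t1 = crown_t0 + np.array([0.3, -0.1, 0.0])
aligned, residual = register_tooth(crown_t0, crown_t1)
print(residual)
```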

Once the digital data are registered with each other, changes between the different data sets can be determined. For example, the initial and subsequent digital data may be superimposed in a common coordinate system to determine volumetric differences between the data in three dimensions, and thereby determine the tooth changes that occurred between the time points. By comparing the magnitude of a change with the time interval over which the change occurred, the rate of change can be determined. In some embodiments, the rate of change may include one or more of a speed of tooth movement, a speed of tooth shape change, a speed of tooth size change, a speed of gum shape change, and the like. For example, the speed of tooth shape change can be calculated where tooth wear is identified, and the speed of gum shape change can be calculated where gum atrophy or inflammation is identified. These changes may be represented as one or more vectors that represent the magnitude and/or direction of the change over time.

Subsequently, a predicted digital representation may be generated by extrapolating the determined rate of change to a future point in time. The extrapolation assumes that the intraoral object will continue to change at a rate consistent with the determined rate of change. For example, in the case of tooth motion, it is assumed that unless an obstacle is encountered, the tooth will continue to move in the direction and at the speed specified by the current tooth motion vector. Thus, dead reckoning can be used to predict the trajectory of the tooth and thereby determine its future position. As further described herein, the extrapolation algorithm used may be linear (e.g., assuming that the rate of change is constant) or non-linear (e.g., the rate of change may change over time). Linear extrapolation may be performed using data from at least two different time points, while non-linear extrapolation may be performed using data from at least three different time points. For example, non-linear prediction may be used to predict tooth motion along a curved path and/or tooth motion that accelerates or decelerates. In some embodiments, non-linear tooth motion occurs if tooth surfaces and/or subsurface structures collide, or due to changes in the patient's physiology, diet, age, and the like.
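
As a simple illustration of the difference between linear and non-linear extrapolation, the following sketch fits a constant-velocity model to two observations and a quadratic model to three. The times, positions, and the choice of a quadratic as the non-linear model are assumptions made for the example.

```python
import numpy as np

# Hypothetical coordinate of one crown landmark (mm) at successive visits,
# with time expressed in months from the first scan.
times = np.array([0.0, 6.0, 12.0])
positions = np.array([12.0, 12.5, 13.3])

# Two time points suffice for a linear (constant-velocity) fit; three or more
# allow a non-linear (here quadratic) fit that can capture accelerating or
# decelerating movement.
linear = np.polyfit(times[:2], positions[:2], deg=1)
quadratic = np.polyfit(times, positions, deg=2)

t_future = 24.0   # months after the first scan
print("linear prediction   :", np.polyval(linear, t_future))
print("quadratic prediction:", np.polyval(quadratic, t_future))
```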

For example, fig. 4A-4D illustrate how the motion of a set of teeth 400 may be tracked and predicted. The set of teeth 400 may include a first tooth, a second tooth, and a third tooth. Fig. 4A shows a first tooth 401a, a second tooth 402a, and a third tooth 403a at an initial point in time. Fig. 4B shows the first tooth 401b, the second tooth 402b, and the third tooth 403b at a subsequent point in time. As shown in fig. 4B, the set of teeth 400 has moved from its positions at the initial point in time.

As shown in fig. 4C, the positions of the set of teeth 400 at the initial and subsequent time points are compared. For example, three-dimensional models of the teeth 400 may be superimposed on each other and compared. Motion vectors of the teeth between the initial and subsequent time points may be determined. The motion may be a translation of the first tooth between the initial point in time (tooth 401a) and the subsequent point in time (tooth 401b), and a corresponding motion vector 411 may be determined. The motion may be an extrusion of the second tooth between the initial point in time (tooth 402a) and the subsequent point in time (tooth 402b), and a corresponding motion vector 412 may be determined. The motion may be a tilt of the third tooth between the initial point in time (tooth 403a) and the subsequent point in time (tooth 403b), and a corresponding motion vector 413 may be determined.

As shown in fig. 4D, the motion vectors 411, 412, and 413 may be used to determine the positions of the teeth 400 at a later point in time. In some embodiments, it is assumed that the teeth 400 will continue to move along the trajectories represented by motion vectors 411, 412, and 413. For example, the first tooth 401c may be translated further at a rate represented by the first motion vector 411, the second tooth 402c may be extruded further at a rate represented by the second motion vector 412, and the third tooth 403c may be tilted further at a rate represented by the third motion vector 413. For clarity, teeth 401a, 401b, and 401c are the same first tooth of the set 400 at different points in time; teeth 402a, 402b, and 402c are the same second tooth at different points in time; and teeth 403a, 403b, and 403c are the same third tooth of the set 400 at different points in time.

Although translation, extrusion, and tilting are shown in isolation in fig. 4A-4D, the teeth may move in other ways, such as rotation, or in any combination of ways. For example, a tooth may be tilted and translated, a tooth may be extruded and rotated, or a tooth may be tilted, translated and extruded; these are just a few of the possible tooth movements that may be tracked to determine future movement. Trackable tooth movements include one or more of tooth extrusion, intrusion, rotation, torsion, tilting, or translation. Alternatively or in combination, other types of changes to the teeth, such as root resorption, enamel decay, and/or caries formation, may also be tracked using the methods herein.

In some embodiments, as described above and herein, the motion of a set of teeth is tracked and predicted using subsurface data in addition to surface scan data. Fig. 5A to 5D illustrate how the motion of a set of teeth 500 may be tracked and predicted. The set of teeth 500 may include a first tooth and a second tooth. Fig. 5A shows a first tooth 501a and a second tooth 502a at an initial point in time. Each tooth includes a portion above the gum line 503 (crowns 504a and 505a, respectively) and a portion below the gum line 503 (roots 506a and 507a, respectively). Fig. 5B shows the first tooth 501b and the second tooth 502b at a subsequent point in time. As shown in fig. 5B, the set of teeth 500 has moved from its positions at the initial point in time, such that the positions of the crowns 504b, 505b and roots 506b, 507b at the subsequent point in time differ from those at the initial point in time. In some embodiments, three-dimensional scanning is used to determine the positions of the crowns of the teeth 500 at different points in time, while other types of data, such as subsurface data (e.g., X-ray, CBCT scan, CT scan, MRI, ultrasound, etc.), are used to determine the positions of the roots of the teeth 500 at different points in time. Thus, the entirety of each tooth, including both visible and invisible portions, may be represented digitally, e.g., as a three-dimensional model.

As shown in fig. 5C, the positions of the teeth 500 at the initial and subsequent points in time may be compared. For example, three-dimensional models of the teeth 500, including the crowns and roots, may be superimposed and compared with each other. The comparison may involve comparing the positions of the crowns of the teeth 500 at the initial and subsequent time points and comparing the positions of the roots of the teeth 500 at the initial and subsequent time points. Motion vectors 510, 511 for the teeth between the initial and subsequent points in time may be determined. The motion vectors 510, 511 may be based on changes in the positions of the crowns and/or roots of the teeth 500. As shown in fig. 5C, the teeth 500 move over time at the rates and along the trajectories represented by vectors 510, 511. The motion vectors 510, 511 may be used to determine the positions of the teeth 500 at a later point in time. For example, as shown in fig. 5D, if it is assumed that the teeth will continue to move according to the motion vectors 510, 511, it is predicted that the first tooth 501c and the second tooth 502c will begin to decelerate and collide with each other at a subsequent point in time. Optionally, a vector representing a change in root shape (e.g., root shrinkage due to resorption) may also be determined and used to predict the future position of a tooth.

In some embodiments, future tooth positions are predicted by analyzing and estimating root motion and/or shape changes. This approach may be advantageous compared to methods that rely solely on surface scan data, and thus are limited to analysis of the tooth crowns, as the positioning and configuration of the tooth roots can have a significant effect on the motion of the tooth. Thus, combining surface scan data and sub-surface data to determine overall changes in tooth structure as described herein may improve the accuracy of estimating future tooth positions.

As described above and herein, changes in the shape and/or size of the teeth may also be determined. Changes in the shape and/or size (e.g., length, width, height, surface area, volume, etc.) of a tooth may be associated with conditions such as bruxism, malocclusion, or incorrect bite alignment. Fig. 6A shows a tooth 600a at an initial point in time, and fig. 6B shows the same tooth 600b at a subsequent point in time. As described above and herein, the tooth 600a, 600b may be scanned at the different points in time to produce three-dimensional models of its shape and/or size over time. As shown in fig. 6C (showing the models of the tooth 600a, 600b superimposed on one another to show the change in shape and/or size over time), the models of the tooth 600a, 600b may be registered with one another. A vector 610 representing the trajectory and magnitude of the change may be determined by comparing the surfaces of the tooth 600a and 600b. As shown in fig. 6B and 6C, the tooth 600b is worn away over time at the rate and along the trajectory represented by vector 610. Based on vector 610, a future shape and/or size of the tooth 600b, as shown by tooth 600c in fig. 6D, may be determined. For example, it may be assumed that the tooth continues to wear at a rate similar to that represented by vector 610. Alternatively or in combination, the volumes of the tooth models 600a and 600b can be compared to each other to determine the rate of change of the tooth's volume (e.g., the percent change from the initial volume). The size and/or shape of the future tooth 600c can then be determined by estimating the volume change at the future point in time.
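
A minimal sketch of such a volume-based wear estimate is given below. The volumes, the time intervals, and the assumption that the fractional loss rate stays constant are illustrative only.

```python
def predict_volume(vol_t0, vol_t1, months_between, months_ahead):
    """Extrapolate tooth volume assuming wear continues at the observed
    per-month fractional rate (a simple constant-rate model)."""
    monthly_loss_fraction = (vol_t0 - vol_t1) / vol_t0 / months_between
    return vol_t1 * (1.0 - monthly_loss_fraction * months_ahead)

# Hypothetical crown volumes (mm^3) from the two registered tooth models,
# corresponding to about 2% loss over 12 months.
v0, v1 = 420.0, 411.6
print(predict_volume(v0, v1, months_between=12, months_ahead=24))
```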

Although vertical wear of the teeth is shown separately in fig. 6A-6D, other changes in shape and/or size, e.g., wear of the sides of the teeth, and combinations thereof, may be tracked to determine future motion. Changes in shape and/or size may be tracked in conjunction with tooth motion to determine the shape and position of the tooth at a future point in time. The predictive methods described herein may allow for earlier and more accurate detection of tooth shape and/or size changes, allowing for earlier diagnosis and correction of conditions such as malocclusion, root resorption, enamel decay, caries formation, etc., as compared to methods that rely on visual inspection.

As described above and herein, changes to the gums and other tissue in the vicinity of the teeth can also be determined. Changes in the positioning and/or shape of the gums may be associated with a gum-related condition such as gum atrophy or gingivitis. Fig. 7A shows a gum line 700a of a tooth 702 at an initial point in time, and fig. 7B shows the same tooth 702 and gum line 700b at a subsequent point in time. As described above and herein, the tooth 702 and gum lines 700a, 700b can be scanned at different points in time to produce three-dimensional models of their position and/or shape over time. As shown in fig. 7C (showing models of the tooth 702 and gum lines 700a, 700b superimposed on each other to show the change in position and/or shape over time), the models of the tooth 702 and gum lines 700a, 700b may be registered with each other. A vector 710 representing the trajectory and magnitude of the change in the position and/or shape of the gum line may be determined. As shown in fig. 7B and 7C, the gum line recedes over time at the rate and along the trajectory represented by vector 710. Based on the vector 710, a future position and/or shape of the gum line 700b, as shown by gum line 700c in fig. 7D, may be determined. For example, the position and/or shape of the future gum line 700c is calculated based on the assumption that the gum will continue to recede according to vector 710.

Alternatively or in combination, the trajectory and magnitude of the gum line changes may be determined by tracking the size (e.g., length, width, height, surface area, volume, etc.) of the corresponding tooth 702. For example, an increase in the surface area and/or height of a tooth (e.g., the distance from the top of the crown to the gum line) may indicate gingival atrophy. As shown in fig. 7A and 7B, the tooth 702 has an initial height 704a at a first point in time and an increased height 704b at a second point in time. As shown in fig. 7C, the difference 705 between the heights 704a, 704b can be used to calculate the rate of change of the height of the tooth 702. The rate of change of height may be used to predict a future height 704c of the tooth 702 at a future point in time, as shown in fig. 7D. For example, a tooth height above a threshold may indicate gingival atrophy or gingivitis. The prediction methods described herein may allow for earlier and more accurate detection of changes in gum positioning and/or shape than other methods (e.g., visual inspection, measurement of gingival pockets).

As described above and herein, changes in the shape and/or positioning of an intraoral object (e.g., a tooth) are tracked to determine a current trajectory of shape changes and/or motion of the object. Some changes in the shape and/or location of intraoral objects may be linear and may be tracked by comparing data obtained from at least two points in time (e.g., scan data, sub-surface data, etc.). Some changes in the shape and/or location of intraoral objects may be non-linear (e.g., exponential) and may be tracked by comparing data obtained from at least three points in time. Subsequently, linear or non-linear dead reckoning techniques may be used to determine an expected trajectory of the object in order to predict a shape and/or location at a future point in time.

Fig. 8A shows a tooth at a first, initial point in time 800 (time t = 0) and the same tooth at a second, subsequent point in time 801 (time t = t1). As described above and herein, scan data (e.g., a three-dimensional scan) can be obtained for the tooth and can be compared to determine how the tooth may change at a future point in time. The scan data may be combined with additional data (e.g., subsurface data of the root of the tooth) at one or more points in time. The difference between the tooth at the first time point 800 and the second time point 801 may show that the tooth motion and/or shape change can be defined linearly, and the linear rate of motion and/or shape change may be extrapolated to determine the position and/or shape of the tooth at a future point in time. Fig. 8A also shows the tooth at a future, subsequent point in time 899 (time t = tn).

Fig. 8B shows a tooth at a first, initial point in time 800' (time t = 0), the same tooth at a second, subsequent point in time 801' (time t = t1), and the same tooth at a third, subsequent point in time 802' (time t = t2). As described above and herein, scan data can be obtained for the tooth and can be compared to determine how the tooth may change at a future point in time. The scan data may be combined with additional data (e.g., subsurface data of the root of the tooth) at one or more points in time. The differences between the tooth at the first time point 800', the second time point 801', and the third time point 802' may show that the tooth motion and/or shape change can be defined non-linearly, and the non-linear rate of motion and/or shape change may be extrapolated to determine the position and/or shape of the tooth at a future point in time. Fig. 8B also shows the tooth at a future, subsequent point in time 899' (time t = tn).

Optionally, the predicted digital representation is generated using a computational procedure that adjusts the expected trajectory of an intraoral object based on obstructions that the structure may encounter (e.g., other teeth, colliding roots, the bite, root resorption or decay, gingival atrophy), e.g., by collision avoidance algorithms as described herein. For example, in some embodiments, digital models of different arrangements of the patient's teeth obtained at different points in time are used to generate a predicted digital representation of the patient's teeth at a future point in time. As described herein, the tooth movement velocity of one or more teeth between the different arrangements can be calculated. The teeth in the digital model are then repositioned based on the calculated velocities to produce a future tooth arrangement. During repositioning, a collision detection process may be used to detect whether the expected motion will cause a collision with an adjacent tooth. For example, the collision detection process may determine at each time step whether any geometric shapes that describe the tooth surfaces intersect. It can be assumed that unless a collision occurs, the teeth will continue to move according to the calculated velocities.
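
The repositioning loop described above might be sketched as follows. The tooth identifiers, the sphere-based placeholder collision test, and the rule that a detected collision zeroes the velocity are assumptions made for the illustration and stand in for the more detailed collision detection described below.

```python
import numpy as np

def step_teeth(positions, velocities, dt, collide):
    """Advance every tooth by one time step at its calculated velocity,
    rejecting motion that the collision test flags and zeroing that velocity."""
    new_positions = dict(positions)
    for tooth_id, p in positions.items():
        candidate = p + velocities[tooth_id] * dt
        if collide(tooth_id, candidate, new_positions):
            velocities[tooth_id] = np.zeros(3)   # assume the collision stops the motion
        else:
            new_positions[tooth_id] = candidate
    return new_positions

# Hypothetical placeholder collision test: treat teeth as 4 mm spheres.
def sphere_collision(tooth_id, candidate, positions, radius=4.0):
    return any(o != tooth_id and np.linalg.norm(candidate - q) < 2 * radius
               for o, q in positions.items())

positions = {"UR1": np.array([0.0, 0.0, 0.0]), "UR2": np.array([9.0, 0.0, 0.0])}
velocities = {"UR1": np.array([0.02, 0.0, 0.0]), "UR2": np.array([-0.01, 0.0, 0.0])}
for _ in range(100):                              # e.g., 100 daily steps
    positions = step_teeth(positions, velocities, dt=1.0, collide=sphere_collision)
print(positions)
```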

Various techniques may be implemented to detect expected collisions between teeth. For example, in some embodiments, the collision detection algorithm is based on a recursive subdivision of the space occupied by an object, organized in a binary-tree-like fashion. Triangles are used to represent the teeth in the digital data set. Each node of the tree is called an Oriented Bounding Box (OBB) and contains a subset of the triangles that appear in its parent node. All of the triangle data stored in a parent node is divided among the child nodes of that parent node.

The bounding box of a node is oriented so that it fits tightly around all of the triangles in that node. Leaf nodes in the tree preferably contain a single triangle, but may contain more than one triangle. Detecting a collision between two objects involves determining whether the OBB trees of the objects intersect. If the OBBs of the root nodes of the trees overlap, the child nodes of the roots are checked for overlap. The algorithm proceeds in a recursive manner until leaf nodes are reached. At this point, a robust triangle intersection routine is used to determine whether the triangles at the leaves are involved in the collision.
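
For illustration, the recursive descent over a bounding-volume tree can be sketched as follows. For brevity this sketch uses axis-aligned boxes rather than oriented bounding boxes, treats any overlapping pair of leaves as a collision instead of running a robust triangle-triangle test, and uses randomly generated triangles; all of these are simplifications of the approach described above.

```python
import numpy as np

class BoxNode:
    """Node of a bounding-volume tree over triangles.  The boxes here are
    axis-aligned for simplicity; the description uses oriented bounding boxes."""
    def __init__(self, triangles):
        pts = triangles.reshape(-1, 3)
        self.lo, self.hi = pts.min(axis=0), pts.max(axis=0)
        self.children = []
        if len(triangles) > 1:                    # split until leaves hold one triangle
            axis = int(np.argmax(self.hi - self.lo))
            order = np.argsort(triangles[:, :, axis].mean(axis=1))
            half = len(order) // 2
            self.children = [BoxNode(triangles[order[:half]]),
                             BoxNode(triangles[order[half:]])]

def boxes_overlap(a, b):
    return bool(np.all(a.lo <= b.hi) and np.all(b.lo <= a.hi))

def trees_collide(a, b):
    """Recursive descent: only expand children whose boxes overlap; at the
    leaves a robust triangle-triangle test would normally decide the collision."""
    if not boxes_overlap(a, b):
        return False
    if not a.children and not b.children:
        return True                    # placeholder for the triangle intersection routine
    if a.children:
        return any(trees_collide(c, b) for c in a.children)
    return any(trees_collide(a, c) for c in b.children)

tri_a = np.random.default_rng(1).random((8, 3, 3))          # hypothetical tooth A mesh
tri_b = np.random.default_rng(2).random((8, 3, 3)) + 2.0    # hypothetical tooth B mesh, displaced
print(trees_collide(BoxNode(tri_a), BoxNode(tri_b)))
```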

In some embodiments, the OBB tree is constructed in a lazy manner to save memory and time. This approach stems from the observation that certain parts of the model will never participate in collisions, and therefore there is no need to compute the OBB tree for those parts of the model. During the recursive collision determination algorithm, the OBB tree is expanded by partitioning the internal nodes of the tree only as necessary. In particular, triangles in the model that are not needed for collision detection may be disregarded when constructing the OBB tree. In addition, movement may be considered on two levels: an object may be considered "moving" in a global sense, or "moving" relative to other objects. This additional information reduces the time taken for collision detection by avoiding recalculation of collision information between objects that are stationary with respect to each other, because the state of the collision between those objects has not changed.

In an alternative embodiment, the collision detection algorithm calculates a "collision buffer" oriented along a z-axis along which the two teeth are located. A collision buffer is calculated for each step or position along the motion trajectory at which collision detection is required. To create the buffer, an x, y plane is defined between the teeth. The plane may be "neutral" with respect to the two teeth. Ideally, the neutral plane is defined so as not to intersect either of the teeth. If intersection with one or both teeth is unavoidable, the neutral plane is oriented such that the teeth are, as much as possible, on opposite sides of the plane. In other words, the neutral plane minimizes the amount of surface area of each tooth that is on the same side of the plane as the other tooth. On the plane is a grid of discrete points, the resolution of which depends on the resolution required by the collision detection routine. A typical high-resolution collision buffer comprises a 400 x 400 grid; a typical low-resolution buffer comprises a 20 x 20 grid. The z-axis is defined by a line perpendicular to the plane.

The relative positions of the teeth are determined by calculating, for each point in the grid, the linear distance parallel to the z-axis between the plane and the nearest surface of each tooth. For example, at any given grid point (M, N), the plane and the nearest surface of the posterior tooth are separated by a distance represented by the value Z1(M, N), while the plane and the nearest surface of the anterior tooth are separated by a distance represented by the value Z2(M, N). If the collision buffer is defined such that the plane is at Z = 0 and positive values of Z are toward the posterior tooth, then at any grid point (M, N) the teeth collide at that plane location when Z1(M, N) ≤ Z2(M, N).
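
A minimal sketch of the grid-based buffer test follows. The grid size, the distance values, and the synthetic "bulge" are placeholders chosen only to illustrate the Z1(M, N) ≤ Z2(M, N) comparison described above.

```python
import numpy as np

def buffer_collision(z_back, z_front):
    """Grid-based collision buffer test.  z_back[M, N] is the distance from the
    neutral plane to the nearest surface of the posterior tooth, and
    z_front[M, N] the corresponding distance for the anterior tooth; the teeth
    collide wherever z_back <= z_front."""
    return z_back <= z_front

# Hypothetical low-resolution (20 x 20) buffer with a small contact patch.
M = N = 20
z_back = np.full((M, N), 1.5)
z_front = np.full((M, N), 0.8)
z_front[9:11, 9:11] = 1.6        # the anterior tooth bulges past the posterior surface here

mask = buffer_collision(z_back, z_front)
print("collision detected:", bool(mask.any()), "at", int(mask.sum()), "grid points")
```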

If a collision between teeth is detected, the speed and/or trajectory of movement of one or both teeth may be altered. For example, in some embodiments, if a collision occurs, a "push" vector is created to deflect the expected path of motion away from the collision. Based on the push vector, the current tooth "bounces" off the collision and a new tooth motion is generated. Alternatively, it may be assumed that the collision stops some or all further movement of the colliding teeth, e.g., the speed of movement is reduced or set to zero. Optionally, in some embodiments, the dead reckoning process is stopped if a collision is detected (e.g., to alert the user).
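
One possible sketch of such a "push" adjustment is shown below. The contact normal, the velocity values, and the simple removal of the velocity component along the contact normal are assumptions made for the illustration, not details taken from the description.

```python
import numpy as np

def push_adjusted_velocity(velocity, contact_normal):
    """Deflect a tooth's motion away from a detected collision by adding a
    'push' vector that cancels the velocity component along the contact
    normal (a minimal sketch of one way a push vector might be applied)."""
    normal = contact_normal / np.linalg.norm(contact_normal)
    push = -np.dot(velocity, normal) * normal
    return velocity + push

v = np.array([0.02, 0.0, 0.0])     # hypothetical mm/day motion toward a neighbour
n = np.array([1.0, 0.2, 0.0])      # hypothetical contact normal at the collision
print(push_adjusted_velocity(v, n))
```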

The extrapolation of surface scan data and/or other types of digital data obtained at multiple different points in time may provide an earlier and more reliable prediction than is possible through visual inspection or review of a static image or model. A malocclusion may develop through a cascade of small, incremental tooth movements. Because of tight contacts between teeth, an individual tooth may move in small, barely detectable increments. A tight contact may cause one tooth to move first, creating a tight contact at a different location, which may cause additional movement there, which may in turn create tight contacts elsewhere, and so on. Predicting this "domino" effect can be quite difficult, as visual inspection by medical personnel often does not take into account the effects of jaw movements and musculature. By obtaining digital data, such as a three-dimensional scan, at an initial point in time and allowing sufficient time to pass before second and/or subsequent digital data are taken (e.g., 6 months or 1 year later), the actual effects of the bite, jaw, and soft tissue become evident in the alignment of the teeth themselves. The effect of any dental restorative work can also be determined.

Fig. 9 illustrates a method 900 for generating a predicted digital representation of the patient's oral cavity to determine the patient's future condition. As with all other methods provided herein, method 900 may be used in conjunction with any embodiment of the systems and apparatus of the present invention. For example, some or all of the steps of method 900 may be performed by one or more processors of a computer system, as further described herein.

In step 905, first digital data representing the intraoral cavity is received. The first digital data may represent an actual state of the oral cavity or one or more objects thereof (e.g., teeth, gums, jaws, palate, tongue, airway, TMJ, etc.) at a first point in time. In some embodiments, the first time point is an initial time point for dental and orthodontic monitoring of the patient's teeth. In some embodiments, the initial time point occurs after the eruption of the deciduous teeth and before the eruption of the permanent teeth, e.g., between 4 and 6 years of age. Obtaining scan data of the teeth at an earlier age facilitates detection of a dental or orthodontic condition before it could be discovered by visual inspection.

In step 910, second digital data representing the intraoral cavity is received. The second digital data may represent an actual state of the intraoral cavity or one or more structures thereof at a second point in time different from the first point in time. The first and second time points may differ by, e.g., at least 1 month, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 7 months, at least 8 months, at least 9 months, at least 10 months, at least 11 months, at least 1 year, at least 2 years, at least 5 years, or any other time interval suitable for generating accurate measurements and/or predictions as described herein. The second point in time may be subsequent to the first point in time. The second time point may occur after the permanent teeth have begun to erupt but before all of the deciduous teeth have fallen out, for example between 6 and 12 years of age.

In some embodiments, for example, the first and second digital data are three-dimensional scans of the intraoral cavity and/or include data representing the three-dimensional surface topography of the patient's dentition and/or surrounding tissue (e.g., gums, palate, tongue, airway). Each scan may include the upper and/or lower dental arch. As described above and herein, a three-dimensional model of the teeth may be generated based on the scan data. Scan data taken at two or more different times is used to evaluate the future position and/or shape of an intraoral object, e.g., teeth, gums, the bite, etc., as further described herein. Alternatively or in combination, other types of digital data may be used in addition to the scan data, such as subsurface data of the tooth roots, so that the location and/or shape of intraoral objects that are not visible (e.g., tooth roots, jaws, airway) may also be determined.

Optionally, digital data representative of the intraoral cavity may be received at other points in time after the second point in time, e.g., third digital data obtained at a third point in time, fourth digital data obtained at a fourth point in time, etc. As described below and herein, any number of data sets at any number of points in time may be received and subsequently analyzed. The digital data herein may be obtained at any combination of time points during the development of the patient's dentition (e.g., time points before, during, and/or after the development of the dentition). For example, subsequent digital data may be obtained after all of the deciduous teeth have fallen out (e.g., after 12 years of age) and/or once the permanent dentition is complete (e.g., after eruption of the third molars, which typically occurs between 17 and 25 years of age).

In step 915, additional data is received. In some embodiments, the additional data provides further information for identifying a current condition and/or predicting a future condition of the patient. For example, the additional data may include demographic information, lifestyle information, medical history, family medical history, and/or genetic factors. The additional data may be received at one or more points in time, which may or may not be the same points in time at which the first and second digital data were collected. For example, additional data may be received at multiple time points during tooth development and/or concurrently with the scan data. Alternatively, the additional data may be received at a single point in time.

In step 920, a predicted digital representation of the intraoral cavity is generated. The predicted digital representation may be a two-dimensional or three-dimensional model of the predicted future state of the intraoral cavity, or of one or more structures thereof, at a future time point subsequent to the first and second time points. The future time point may be at least 1 month, at least 2 months, at least 3 months, at least 4 months, at least 5 months, at least 6 months, at least 7 months, at least 8 months, at least 9 months, at least 10 months, at least 11 months, at least 1 year, at least 2 years, at least 5 years, at least 10 years, at least 20 years, or any other desired length of time after the first and/or second time point. The future point in time may correspond to a life event, for example, when the patient turns 18 years old, marries, moves, retires, travels, competes in an event, etc.

As described above and herein, various methods may be used to generate the predicted digital representation based on the first and second digital data. For example, in some embodiments, digital data from two or more points in time are compared to produce a comparison. Any number of data sets at any number of points in time may be compared. In some embodiments, generating the comparison involves registering the first and second digital data with each other in a common coordinate system. Generating the comparison may involve measuring a characteristic of the intraoral object at the first point in time (e.g., using the first digital data), measuring the characteristic of the intraoral object at the second point in time (e.g., using the second digital data), and determining a change in the characteristic between the first and second points in time. Optionally, the characteristic is also measured at subsequent points in time using the corresponding digital data, in order to determine how the characteristic changes over those subsequent intervals.
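
By way of illustration, the following is a minimal sketch of one common way to register corresponding landmark points from two scans into a common coordinate system (a Kabsch/SVD rigid alignment). The function name, the use of NumPy, and the assumption that corresponding landmarks (e.g., cusp tips) have already been extracted from each scan are illustrative assumptions rather than requirements of the method.

```python
import numpy as np

def register_landmarks(points_t0, points_t1):
    """Rigidly align landmark points from a second scan to a first scan
    (Kabsch algorithm). Both arrays are N x 3, with rows in correspondence."""
    p0 = np.asarray(points_t0, dtype=float)
    p1 = np.asarray(points_t1, dtype=float)
    c0, c1 = p0.mean(axis=0), p1.mean(axis=0)      # centroids of each landmark set
    H = (p1 - c1).T @ (p0 - c0)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # rotation mapping scan 1 onto scan 0
    t = c0 - R @ c1                                # translation component
    return R, t

# Usage: aligned = (R @ points_t1.T).T + t places the second scan's landmarks
# in the first scan's coordinate system before any comparison is made.
```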

Future changes in the characteristics of the intraoral object may be predicted based on the determined changes and/or a selected time interval, and used to generate the predicted digital representation. For example, as described above and herein, the rate of change of a particular characteristic may be determined and extrapolated to a selected future point in time in order to predict the characteristic of the intraoral object at that future point in time. By repeating this process for multiple intraoral objects (e.g., each tooth in the patient's intraoral cavity), a digital representation of the intraoral cavity at the future point in time can be predicted and generated.
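
A simple worked sketch of this extrapolation step is given below, assuming the characteristic is a single scalar coordinate measured on two dated scans; the dates, values, and function name are illustrative.

```python
from datetime import date

def extrapolate_characteristic(value_t0, value_t1, t0, t1, t_future):
    """Linearly extrapolate a measured characteristic (e.g., a tooth landmark's
    coordinate in millimetres) to a future date from two dated observations."""
    rate = (value_t1 - value_t0) / (t1 - t0).days   # change per day
    return value_t1 + rate * (t_future - t1).days

# Example: a landmark drifted 0.6 mm over one year; the same rate projected
# five years ahead predicts roughly a further 3.0 mm of drift.
predicted = extrapolate_characteristic(
    12.4, 13.0, date(2015, 1, 5), date(2016, 1, 5), date(2021, 1, 5))
```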

In an exemplary embodiment, the methods herein can be used to generate predictions of a patient's teeth at future points in time. In some embodiments, the position and/or shape of a tooth at a first point in time may be measured (e.g., from scan data), and the position and/or shape of the tooth at a second point in time may be measured (e.g., from scan data). Optionally, the position and/or shape of the tooth at additional points in time (e.g., a third point in time) may be measured, e.g., from scan data obtained at those additional points in time. In some embodiments, the location and/or shape of the tissue surrounding the teeth, e.g., the location and/or shape of the gums, tongue, palate, airway, etc., can also be measured.

Many changes to the teeth may have occurred between two or more time points. For example, one or more teeth may have moved, one or more teeth may have worn or fallen out, one or more teeth may have been restored or replaced, and some or all of the gums around the teeth may have receded or swollen. By registering and comparing the teeth and/or surrounding tissue at the different points in time, the movement and/or change in shape of one or more teeth and/or the surrounding tissue during the interval between the first and second points in time can be determined, along with the speed of such movement or change. For example, the upper arch at the first time point may be registered to and compared with the upper arch at the second time point, while the lower arch at the first time point may be registered to and compared with the lower arch at the second time point.

Each person's mouth is unique in its size, shape, and bite. Natural deterioration in tooth position results from differing rates of tooth movement and differing areas of tooth wear. For example, in some patients the bite deepens as the posterior occlusal surfaces wear, bringing the anterior teeth into heavier contact and increasing crowding as inward pressure pushes the anterior teeth to collapse toward the tongue. In another example, pressure from the tongue may cause increased spacing and an anterior open bite; such changes may begin at a rate that is initially barely perceptible but may amplify into significant changes over time. The speed of such changes for each tooth can be determined by comparing the three-dimensional scans. Individual teeth may be identified and labeled. The X, Y, and/or Z position of each tooth, and/or the difference in one or more identified landmarks of the tooth between the two scans, may be determined. The respective trajectories of the teeth can then be calculated, and the rate, magnitude, and direction of tooth motion can be computed as a three-dimensional vector. The rate and location of tooth shape change can also be determined; for example, an area that has worn may continue to wear. In some embodiments, the determined rate, magnitude, direction, and/or position of the change in one or more teeth is adjusted based on additional information, such as a comparison of tooth root positions or gum lines. For example, a comparison of root positions at two different points in time may be extrapolated to determine the root position at a future point in time, and this determined root position may be used, at least in part, to determine the position of the tooth at that future point in time. In another example, a comparison of gum lines at two different points in time may be extrapolated to determine the gum line at a future point in time, and the determined gum line can be used, at least in part, to determine a change in the tooth at that future point in time. The determination and application of the speed and trajectory of tooth changes based on scan data is described above and further herein.
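
The per-tooth vector computation described above might be sketched as follows, assuming the two scans have already been registered and a common landmark (e.g., a cusp tip) has been located on each tooth; the coordinate values and units are illustrative.

```python
import numpy as np

def tooth_motion_vector(landmark_t0, landmark_t1, months_elapsed):
    """Compute the displacement of one labeled tooth landmark between two
    registered scans as a 3-D vector, plus its magnitude, direction, and rate."""
    d = np.asarray(landmark_t1, float) - np.asarray(landmark_t0, float)  # X, Y, Z difference
    magnitude = float(np.linalg.norm(d))                # total movement, in scan units (e.g. mm)
    direction = d / magnitude if magnitude > 0 else d   # unit vector along the trajectory
    rate = magnitude / months_elapsed                   # e.g. mm per month
    return {"vector": d, "magnitude": magnitude, "direction": direction, "rate": rate}

# e.g. a lower incisor cusp tip that moved from (1.2, 0.4, 0.0) to (1.2, 0.9, 0.1)
# over six months yields about 0.51 mm of displacement at roughly 0.085 mm/month.
motion = tooth_motion_vector((1.2, 0.4, 0.0), (1.2, 0.9, 0.1), months_elapsed=6)
```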

Alternatively, the changes in the teeth may be compared to data obtained from a patient database regarding similar changes in similar patients. For example, based on the speed of movement or change in shape of a tooth determined between the first and second points in time, and/or patient data from a patient database (e.g., similar movements or changes in similar patients), the position and/or shape of the tooth at a future point in time may be predicted. For example, the position and/or shape at the future point in time may be predicted by linearly extrapolating the motion vector and/or velocity derived from the position and/or shape data at the first and second points in time. As further described herein, a third (or further) three-dimensional scan at a third (or further) point in time may be utilized to detect any non-linear changes and perform a non-linear extrapolation. For example, tooth movement along a curved path and/or acceleration or deceleration of tooth movement may be predicted.
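
A hedged sketch of such a non-linear extrapolation is given below, assuming three or more registered scans and using a low-order polynomial fit per coordinate; this is only one of several possible non-linear models, and the times and positions shown are illustrative.

```python
import numpy as np

def extrapolate_position(times, positions, t_future, degree=2):
    """Fit each coordinate of a tooth landmark across three or more scan times
    with a low-order polynomial and evaluate it at a future time, capturing
    curved paths and acceleration/deceleration that a straight line would miss."""
    times = np.asarray(times, float)            # e.g. months since the first scan
    positions = np.asarray(positions, float)    # shape (n_scans, 3)
    future = []
    for axis in range(positions.shape[1]):
        coeffs = np.polyfit(times, positions[:, axis], deg=degree)
        future.append(np.polyval(coeffs, t_future))
    return np.array(future)

# Three scans at 0, 6 and 12 months; predict the landmark position at 24 months.
predicted = extrapolate_position(
    times=[0.0, 6.0, 12.0],
    positions=[(1.2, 0.4, 0.0), (1.2, 0.9, 0.1), (1.2, 1.2, 0.2)],
    t_future=24.0)
```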

In some embodiments, while the expected trajectory of each tooth may be determined, obstacles such as adjacent teeth may prevent the expected tooth movement from fully occurring. For example, teeth may overlap in the three-dimensional geometry when the changes are rendered by the computer system. Embodiments of the present invention may therefore apply a tooth collision avoidance algorithm when predicting the future position and/or shape of the teeth, as described above.
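
One deliberately simplified way to flag such collisions is sketched below; it approximates each crown as a sphere around its centroid, which is far coarser than a mesh-based intersection test but illustrates the idea. All names, radii, and tolerances are illustrative assumptions.

```python
import numpy as np

def find_collisions(predicted_centers, crown_radii, min_gap=0.0):
    """Flag pairs of predicted crown positions that would interpenetrate, so the
    extrapolated movement can be truncated or redistributed before rendering."""
    centers = np.asarray(predicted_centers, float)
    radii = np.asarray(crown_radii, float)
    collisions = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dist = np.linalg.norm(centers[i] - centers[j])
            limit = radii[i] + radii[j] + min_gap
            if dist < limit:
                collisions.append((i, j, limit - dist))  # (tooth_i, tooth_j, overlap depth)
    return collisions
```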

In some embodiments, the predicted digital representation is displayed to the user, for example, as a three-dimensional model of the oral cavity at the future point in time. The three-dimensional model may include one or more of an expected tooth movement, an expected change in tooth orientation, an expected change in tooth shape, an expected change in gum line, an expected change in root position, an expected change in bite position, an expected change in the airway, or an expected change in tongue position, among others. The three-dimensional model data may be displayed for medical personnel to observe and verify the predicted future positions, sizes, shapes, etc. of the teeth and/or other intraoral objects. These models may also be stored in a patient database for later use.

In step 925, a future condition within the oral cavity is determined. As described above and herein, the future condition may be an undesirable dental or orthodontic condition that is predicted to occur at a future point in time if the oral cavity is not treated. In some embodiments, the future condition is determined before it occurs. Alternatively, the future condition is determined after it occurs but before it is visually detectable. In some embodiments, the condition is predicted based on the predicted digital representation (e.g., by measuring one or more parameters of the representation to determine whether an anomaly exists). Additional information such as patient age, dental developmental age, molar relationships, incisor crowding and/or spacing, arch form, facial type, airway, and horizontal/vertical overbite may be considered when predicting a dental or orthodontic condition.

For example, the computer system may automatically predict a dental or orthodontic condition, or automatically provide a list of applicable dental or orthodontic conditions, if the current patient data falls within a range associated with a dental or orthodontic condition, if the current patient data closely matches data of dental or orthodontic conditions previously identified in the database, or if the current patient data closely matches data from the same patient during a prior occurrence of the condition. Medical personnel using the computer system may then select one or more conditions from the list. Alternatively or in combination, the computer system may provide and display key parameters (e.g., measurements) that medical personnel may evaluate in diagnosing a dental or orthodontic condition. Alternatively, medical personnel may select certain key parameters for monitoring, and the computer system may generate an alert if a problem or abnormality associated with those parameters is detected during the prediction process. Predicting a dental or orthodontic condition and treating the condition can include any of the conditions and their corresponding treatments described above and herein. In some embodiments, the methods described herein may also be used to identify existing dental or orthodontic conditions. U.S. Provisional Application No. 62/079,451, which is incorporated herein by reference, describes other methods of identifying or predicting, and treating, dental or orthodontic conditions.

In some embodiments, step 925 involves the automatic establishment and/or detection of reference data and/or features on the predicted digital representation. Such reference data and/or features may be used to obtain various dental measurements that are automatically calculated to facilitate diagnosis and treatment. The selected reference data and features may be identified through the use of databases, libraries, and other similar stores containing reference data and tooth features that enable automatic identification of those reference data and features by computer-implemented methods. For example, in some embodiments, automatically establishing and/or detecting reference data and/or features may include automatically establishing reference objects, automatically establishing a reference frame, automatically detecting dental anatomical features, and/or automatically constructing orthodontic references. Any one or more of these features, frames, and references may be used to automatically calculate appropriate measurements for predicting future conditions.

In some embodiments, automatically calculating dental measurements may include calculating a number of tooth dimensions, e.g., size, shape, and other tooth characteristics, as well as arch dimensions and the like, e.g., the relative positions of teeth along the arch. For example, automatically calculating dental measurements may include calculating angular relative positions, such as crown angulation (tip), crown inclination (torque), and/or crown rotation (about the tooth axis). Further, automatically calculating dental measurements may include calculating the translational relative position of each tooth with respect to other teeth, such as crown height and/or crown protrusion. Automatically calculating dental measurements may also include calculating relative overlap, e.g., local crowding or the extent to which teeth obstruct one another. A further dental measurement may include relative coherence, derived from the angular relative position and translational relative position measurements. For example, the relative coherence of two adjacent teeth is the difference in their relative positions with respect to the angular and translational components.
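
As a concrete illustration of combining tooth dimensions with arch dimensions, a classic tooth-size/arch-length discrepancy (a crowding or spacing measure) could be computed as sketched below; the widths and arch length shown are illustrative values, not measurements from any particular patient.

```python
def arch_length_discrepancy(mesiodistal_widths_mm, available_arch_length_mm):
    """Required space (sum of mesiodistal crown widths) minus available arch
    length. Positive values indicate crowding; negative values indicate spacing."""
    required = sum(mesiodistal_widths_mm)
    return required - available_arch_length_mm

# e.g. six anterior teeth needing 45.0 mm of space in a 42.5 mm arch segment
# gives +2.5 mm of crowding.
discrepancy = arch_length_discrepancy([8.5, 6.5, 7.5, 7.5, 6.5, 8.5], 42.5)
```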

In some embodiments, the calculated dental measurements include characteristics such as crown shape, mesial-distal width, facial-lingual width, crown height (the length of the crown along an axis perpendicular to the occlusal plane), and other similar tooth characteristics, such as a spline curve around the base of the crown (a spline curve along the boundary between the crown and gum surfaces). Automatically calculating dental measurements can also include calculating various other dental features such as incisor, canine, premolar, and molar characteristics, including point features (described by a center point and a region surrounding the center point, e.g., a cusp) and/or elongated features (described by a center curve and a region surrounding the center curve, e.g., grooves and ridges). Automatically calculating dental measurements may include calculating tooth dimensions and/or arch dimensions, etc., based on one or more of the established, detected, and/or constructed static or dynamic dental features, reference objects, and/or reference frames.

In some embodiments, automatically calculating dental measurements may also include determining dental features such as incisal edge alignment angles, mandibular posterior alignment features, maxillary posterior alignment features, relative marginal ridge heights of posterior teeth, buccal-lingual inclination of posterior teeth, and/or interproximal contacts. Further, automatically calculating dental measurements may include automatically calculating bite characteristics including, for example, bite contacts and bite relationships along the dental arch.

The dental measurements described herein can be used to detect the presence of an undesirable dental or orthodontic condition. In some embodiments, the degree and amount of malocclusion (e.g., crowding, spacing, overjet, open bite, cross bite, Angle classification, bite contacts, and/or the like) may be determined from the various dental measurements and then displayed to the user to facilitate dental treatment and planning, as described above and herein. For example, overjet may be determined, with the jaws in the closed position, as the distance from the intersection of a curve passing through the midpoints of the mandibular incisal edges with the median plane of the mandibular arch to the facial surface of the maxillary incisors. As another example, overbite may be defined as the percentage of the facial surface of the maxillary incisors lying above the curve passing through the midpoints of the mandibular incisal edges. In this manner, malocclusion may be determined automatically, accurately, reliably, and/or efficiently.
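
A deliberately simplified sketch of overjet/overbite measurement from two registered incisal-edge points is given below. The axis conventions, the choice of single representative points, and the treatment of overbite as a percentage of crown height are simplifying assumptions and do not reproduce the full curve-based definitions given above.

```python
import numpy as np

def overjet_overbite(max_incisal_edge, mand_incisal_edge, max_crown_height_mm,
                     anterior_axis=(0.0, 1.0, 0.0), vertical_axis=(0.0, 0.0, 1.0)):
    """Simplified overjet/overbite from two registered incisal-edge points taken
    in the closed-bite position. Overjet is the anterior-posterior separation of
    the edges; overbite is the vertical overlap expressed as a percentage of the
    maxillary incisor crown height (assumes +Y is anterior and +Z is superior)."""
    upper = np.asarray(max_incisal_edge, float)
    lower = np.asarray(mand_incisal_edge, float)
    overjet = float(np.dot(upper - lower, np.asarray(anterior_axis, float)))
    dz = float(np.dot(upper - lower, np.asarray(vertical_axis, float)))
    overbite_pct = max(0.0, -dz) / max_crown_height_mm * 100.0  # upper edge below lower edge = overlap
    return overjet, overbite_pct
```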

In some embodiments, the measurements may be used for automated calculations, such as evaluating orthodontic or dental indices that predict future conditions. For example, automatic calculation of orthodontic or dental indices such as the Peer Assessment Rating (PAR) index, the ABO discrepancy index, the ABO objective grading system, and the like may be accomplished based on the automatically calculated measurements. The automated methods provided herein may reduce or eliminate the errors, inaccuracies, and time delays associated with manual calculation or visual assessment of such indices. In some embodiments, the PAR index is automatically calculated by determining and evaluating measures such as, for example, locating anterior contact points, determining the posterior occlusion category, detecting or measuring a posterior open bite or cross bite, calculating anterior overbite, and/or measuring midline discrepancy. Calculating such an index allows the user to efficiently assess the complexity of a case and/or measure the quality of the treatment outcome at any stage of treatment. Furthermore, the use of such an index may allow treatment cases to be scored or evaluated before, during, and/or after treatment.
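
In the spirit of such an index, a weighted-sum calculation might look like the sketch below. The component names and the weights are placeholders chosen for illustration; they are not asserted to be the published PAR weighting, and the component scores would in practice come from the automatically calculated measurements described above.

```python
def weighted_alignment_index(components, weights):
    """Illustrative PAR-style scoring: each automatically measured component
    (anterior contact-point displacements, buccal occlusion, overjet, overbite,
    centreline) contributes a score multiplied by a weight; the sum is the index."""
    return sum(weights[name] * score for name, score in components.items())

components = {"anterior_contacts": 4, "buccal_occlusion": 2,
              "overjet": 1, "overbite": 2, "centreline": 1}
weights = {"anterior_contacts": 1, "buccal_occlusion": 1,
           "overjet": 6, "overbite": 2, "centreline": 4}   # placeholder weights
par_like_score = weighted_alignment_index(components, weights)
```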

In step 930, treatment options for the future condition are generated and/or displayed. For example, a list of recommended treatment products and/or procedures may be generated and displayed, e.g., to medical personnel. The recommended treatment products and/or procedures may be customized based on the severity of the condition, patient-specific characteristics (e.g., demographic information, lifestyle information, etc.), desired parameters for the cost and duration of treatment, and the like. Medical personnel can select one or more of the displayed treatment products and/or procedures for use in treating the predicted dental or orthodontic condition.

The recommended treatment products and/or procedures may be generated based on the severity of the condition or predicted condition. In some embodiments, the severity of the condition or predicted condition is evaluated to determine whether correction is required. For example, if the severity exceeds a threshold (e.g., for distance, rotation, arch form, overbite, underbite, crowding, spacing, gingival recession, etc.), then correction may be recommended. The threshold may vary according to the preference and judgment of the medical personnel. Optionally, the severity of the patient's condition or predicted condition is displayed to the medical personnel, with or without the corresponding threshold, to allow the medical personnel to determine whether correction is needed. If the medical personnel determine that correction is needed, a list of recommended products and/or procedures as described herein may be generated and displayed. If the medical personnel determine that no correction is required, steps 935 and 945 may be omitted. In other embodiments, if a condition or predicted condition is identified, a recommended treatment product and/or procedure may be generated regardless of severity. For example, it may be advantageous to prescribe a treatment as soon as the condition is first detected: providing a bite plate at the first sign of tooth wear due to bruxism to limit TMJ problems, or, if gingivitis or the like is predicted, changing oral hygiene habits (e.g., brushing habits, toothbrush type, use of mouthwash).

In step 935, a predicted outcome of a treatment option is generated and/or displayed. Various methods may be used to predict the outcome of administering a treatment option to the patient. In some embodiments, the extrapolation techniques presented herein may also be used to predict the treatment outcome. For example, to predict a future arrangement of the patient's teeth after wearing an appliance, the changes in tooth position produced by the tooth positioning appliance may be extrapolated to a future point in time. Alternatively or in combination, data mining of patient history information may be used to predict the treatment outcome. For example, data mining may be used to retrieve historical data for patients of the same type from a patient database (e.g., matched by type of condition, severity of condition, demographic information, lifestyle information, medical history, family medical history, genetic factors, etc.), and for treatments of the same type from a treatment database. The historical patient and/or treatment data can then be used as a basis for predicting how the patient will respond to the proposed course of treatment, as well as the effectiveness, cost, and duration of such treatment.
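
A toy sketch of the similar-patient retrieval step is shown below, assuming historical records have already been reduced to a few numeric features; the feature names, record layout, and Euclidean distance metric are illustrative choices rather than requirements.

```python
import numpy as np

def most_similar_patients(query, records, k=5):
    """Rank historical records by distance over shared numeric features (e.g.
    age, severity score, months of observed change) and return the k closest,
    whose recorded treatments and outcomes can then inform the prediction."""
    names = sorted(query)                                   # shared feature names
    q = np.array([query[n] for n in names], float)
    scored = []
    for rec in records:
        x = np.array([rec["features"][n] for n in names], float)
        scored.append((float(np.linalg.norm(q - x)), rec))
    scored.sort(key=lambda pair: pair[0])
    return [rec for _, rec in scored[:k]]

# Hypothetical record layout: {"features": {...}, "treatment": ..., "outcome": ...}
```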

The predicted outcomes of different treatment products and/or procedures may be compared to one another to facilitate selection of an optimal course of treatment (e.g., with respect to treatment effectiveness, duration, cost, patient preference, aesthetics, etc.). For example, two-dimensional or three-dimensional models of the oral cavity as it is expected to appear after application of the treatment products and/or procedures may be generated. In some embodiments, a plurality of different models representing the outcomes of different treatment options are displayed and compared. Alternatively, a model of the untreated intraoral cavity at a future point in time may be displayed alongside the corresponding treated model to medical personnel and/or the patient as a point of comparison.

In step 940, a treatment option for the future condition is selected and ordered. The treatment option may be ordered from a treatment provider or product manufacturer. In embodiments where the selected option is a treatment product (e.g., an appliance, a prosthesis, etc.), the ordering procedure may involve the use of patient-specific data to design and/or generate the treatment product. In some embodiments, the patient-specific data includes the digital data and/or additional data obtained in steps 905-915. For example, three-dimensional digital data representing the patient's current tooth arrangement may be used to design a shell aligner for repositioning the patient's teeth. As another example, digital representations of the patient's teeth and jaws may be used as a basis for producing a mandibular advancement appliance for treating the patient's sleep apnea. The patient-specific data may be transmitted, for example, to a manufacturing machine and/or manufacturing facility for manufacturing the product.

It should be understood that the method 900 described above is provided as an example. One or more steps of the method 900 may include one or more sub-steps. Method 900 may include further steps or sub-steps, one or more steps or sub-steps may be omitted, and/or one or more steps or sub-steps may be repeated. For example, three or more sets of digital data may be employed and analyzed rather than two sets, and non-linear trajectories, magnitudes, or other changes in tooth and/or gum motion may be determined and analyzed. In some embodiments, one or more of steps 915, 930, 935, or 940 are optional. The steps of the method 900 may be performed in any order; for example, the order of steps 905-940 may be changed as desired. One or more steps of method 900 may be performed by a computer system as described below and herein.

Fig. 10A and 10B illustrate an algorithm 1000 for predicting and treating a patient's future condition. Algorithm 1000 may be used in conjunction with any embodiment of the systems, methods, and apparatus described herein. Further, although algorithm 1000 is discussed herein in the context of a three-dimensional scan, it should be understood that algorithm 1000 may also be applied to other types of digital data within a patient's oral cavity, such as, for example, two-dimensional image data.

The algorithm starts at step 1002. In step 1004, a first three-dimensional scan is received at a first point in time (t0) and placed into a coordinate system. The three-dimensional scan may be segmented to isolate target intraoral objects (e.g., individual teeth) to facilitate measurement and analysis of the objects. The scan can be processed to identify characteristics (e.g., gum line, jaw width, occlusal surfaces of teeth, opposing jaws, etc.) and/or landmarks associated with one or more intraoral objects within the oral cavity that serve as a basis for comparison with other scans, as described above and herein.

In some embodiments, the three-dimensional scan provides intraoral surface data, such as data of the dental crowns and gums. Optionally, in step 1006, the algorithm checks whether any sub-surface data has been received, for example data of the tooth roots. If sub-surface data has been received, the algorithm determines whether the data is two-dimensional or three-dimensional in step 1008. Three-dimensional sub-surface data may be stitched directly to the three-dimensional scan, as shown in step 1010. Two-dimensional sub-surface data may be additionally processed before being combined with the three-dimensional scan data, as shown in step 1012. For example, an algorithm such as a surface matching algorithm may be used to identify features in the two-dimensional data in order to reconstruct a three-dimensional image. Alternatively, the two-dimensional data may be mapped into the same coordinate system as the first three-dimensional scan. The sub-surface data may then be combined with the three-dimensional scan data in step 1010.

In step 1014, a second three-dimensional scan is received at a second point in time (t1). The second scan may be placed in the same coordinate system as the first scan, segmented, and processed to identify characteristics and/or landmarks, similar to step 1004 discussed above. Optionally, sub-surface data may be acquired and stitched into the second three-dimensional scan data, similar to the process described herein with respect to steps 1006 to 1010.

In step 1016, the algorithm detects whether there is any change in the intraoral objects depicted by the scans. Changes may be detected by comparing the first three-dimensional scan with the second three-dimensional scan, according to the various methods described herein. For example, the variation between corresponding characteristics and/or landmarks of one or more intraoral objects between the two scans may be calculated. If a change is detected, the algorithm calculates and displays the change (e.g., on a graphical user interface), as shown in step 1018.

If no change is detected, the algorithm generates a prediction of the future state within the oral cavity, as shown in step 1020. Step 1020 may involve generating a predicted digital representation of the intraoral cavity at a future point in time after the time points t0 and t1. The prediction may be generated based on the changes in the one or more intraoral objects determined in step 1016, for example, using linear and/or non-linear prediction techniques.

In step 1022, the algorithm evaluates whether any dental or orthodontic conditions are detected in the predicted future state within the oral cavity. If no condition is detected, the algorithm proceeds to step 1024, in which scan data is received at a subsequent point in time (tx), placed into the coordinate system, segmented, and/or processed to determine characteristics and/or landmarks. Optionally, sub-surface data is obtained and stitched to the additional three-dimensional scan data, similar to the process described herein with respect to steps 1006 to 1010. The algorithm may then continue to execute steps 1016 through 1022 to determine whether there is any change between the old and new scans, predict the future intraoral state, and detect whether any conditions exist in that future state. Steps 1016 through 1024 may be repeated as necessary to update the predictions as new scans are obtained. For example, as described above and herein, three or more scans may be compared to make a non-linear prediction of the detected changes.

If a predicted dental or orthodontic condition is detected in step 1022, the predicted condition is displayed (e.g., as a three-dimensional model and/or a list of conditions in the user interface), as shown in step 1026. In step 1028, potential treatment options for the predicted condition are generated and displayed. In step 1030, the algorithm detects whether a user input is received selecting one or more of the displayed treatment options. If no selection is made and/or the user declines the options, the algorithm proceeds to step 1032, in which the user is prompted to schedule the next intraoral scan. Additional scans are received and processed at step 1024 as discussed above. Alternatively, if no additional scans are scheduled, the algorithm ends at step 1034.

If a treatment option is selected, the user is given the option to view information about the selected treatment option in step 1036. In step 1038, the information is displayed (e.g., on a graphical user interface). For example, the user interface may provide a link to a website of the treatment provider containing the information.

In step 1040, the user is given the option to view the predicted outcome of the treatment option. The predicted outcome may include a predicted treatment cost, duration, result, etc., and may be displayed in step 1042, e.g., on a graphical user interface. In some embodiments, the predicted outcome may be displayed as a two-dimensional or three-dimensional model representing the predicted future state within the oral cavity if the treatment option is applied. Optionally, the user may be given the option to compare the predicted outcomes of different treatment options and/or the untreated state of the oral cavity.

In step 1044, the user is given the option to order the treatment option. If the user does not select this option, the algorithm ends at step 1046. Alternatively, the algorithm may return to step 1024 to receive additional scan data and continue the prediction process. If the user decides to order the treatment option, the option is ordered in step 1048. Optionally, digital data related to the treatment option (e.g., scans of the patient's oral cavity) may be uploaded and/or transmitted to a treatment provider to facilitate generation of the treatment product. The algorithm then ends at step 1050. In an alternative embodiment, after the patient has been treated, additional scans are received in step 1024 to continue monitoring of the patient's oral cavity.

Fig. 13 illustrates a method 1300 for calculating a change in an intraoral object of a patient's intraoral cavity in order to determine a future state of the intraoral object, in accordance with various embodiments. Method 1300 can be used in conjunction with any embodiment of the systems, methods, and apparatuses described herein. In some embodiments, method 1300 is a computer-implemented method such that some or all of the steps of method 1300 are performed by means of a computing system or device (e.g., one or more processors). For example, method 1300 may be performed by a computer system including one or more processors and memory having instructions executable by the processors to cause the system to perform the steps described herein.

In step 1310, first digital data representative of the intraoral cavity at a first point in time is received. In step 1320, second digital data representative of the intraoral cavity at a second point in time is received. Steps 1310 and 1320 may be performed similarly to steps 905 and 910 of method 900 previously described herein. Likewise, digital data for additional points in time may be received and used in subsequent steps of method 1300. The received digital data may include any data within the oral cavity, such as surface data and/or sub-surface data. The received digital data may represent the actual state within the oral cavity, such as the actual position, orientation, size, shape, color, etc. of one or more intraoral objects at a particular point in time, and thus may be distinguished from data representing an expected, desired, or ideal state within the oral cavity.

In step 1330, the data is processed to determine a change in an intraoral object of the intraoral cavity between the first and second time points. The processed data may include, for example, the first and second digital data obtained in steps 1310 and 1320, as well as data from other points in time and/or any other additional data that may be relevant to dental or orthodontic health. The intraoral object may be any object described herein, for example, one or more of a dental crown, a tooth root, the gums, the airway, the palate, the tongue, or the jaw. The change in the intraoral object may be a change in any characteristic of the object (e.g., position, orientation, shape, size, and/or color). For example, step 1330 may involve determining a change in position of the intraoral object between the first and second points in time.

Alternatively or in combination, the data may be processed to determine a rate of change of the intraoral object between the first and second points in time. As described above and herein, the rate of change may relate to one or more of the position, orientation, shape, size, and/or color of the intraoral object. For example, step 1330 may involve determining a velocity (e.g., a speed of movement) of the intraoral object. As another example, step 1330 can involve determining a rate of shape change of a tooth and/or a rate of shape change of the gums. In some embodiments, determining the rate of change involves determining a vector representing, for example, the trajectory and magnitude of the change over the time interval, similar to FIGS. 4A-D, 5A-D, 6A-D, and 7A-D.

In step 1340, it is evaluated whether the change determined in step 1330 exceeds a predetermined threshold. For example, step 1330 may involve determining a change in position of the intraoral object between the first and second points in time based on the first and second digital data, and step 1340 may involve evaluating whether that change in position exceeds a predetermined threshold. The predetermined threshold may be indicative of an undesirable dental or orthodontic condition, such that if the change exceeds the threshold it may be determined that the patient has the condition or is at risk of developing the condition at a future point in time. The value of the predetermined threshold may be determined in various ways (e.g., based on user preferences, patient characteristics, and/or values from the dental or orthodontic literature). In some embodiments, the predetermined threshold is input by a user (e.g., a practitioner or treatment professional).
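
A minimal sketch of this threshold check is shown below; the 0.5 mm value and the function name are purely illustrative, since thresholds are chosen by the practitioner as described above.

```python
def change_exceeds_threshold(change_mm, threshold_mm):
    """Return True when a measured positional change exceeds a chosen threshold."""
    return change_mm > threshold_mm

# e.g. flag any tooth whose crown centroid moved more than 0.5 mm between scans
if change_exceeds_threshold(change_mm=0.72, threshold_mm=0.5):
    print("Alert: movement exceeds threshold; review for possible malocclusion risk")
```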

In some embodiments, if the determined change exceeds the predetermined threshold, an alert is output to the user, for example, through a user interface shown on a display. The alert may indicate to the user that the patient has developed, or is at risk of developing, an undesirable dental or orthodontic condition. Alternatively, in response to an evaluation that the change exceeds the predetermined threshold, a plurality of options for achieving a desired dental or orthodontic outcome may be generated and displayed to the user on the user interface shown on the display. The plurality of options may be treatment options for treating an existing, or predicted future, undesirable dental or orthodontic condition. The displayed treatment options may also include relevant pricing information, treatment time information, treatment complication information, and/or insurance reimbursement information, as described above and herein.

In alternative embodiments, other criteria are used to evaluate the determined changes, including but not limited to: whether the determined change is less than a predetermined threshold, whether the determined change is substantially equal to a predetermined value, whether the determined change falls within a predetermined range, whether the determined change is outside of a predetermined range, or a combination thereof.

In step 1350, a future state of the intraoral object is determined based on the determined change. The future state may be a future position, orientation, shape, size, color, etc. of the intraoral object. For example, in some embodiments, step 1350 involves determining a future position of the intraoral object by determining a motion trajectory based on the speed of movement calculated in step 1330. The motion trajectory may be linear (see, e.g., fig. 8A) or non-linear (see, e.g., fig. 8B). Thus, optionally, the future position of the intraoral object is determined using linear or non-linear extrapolation of the velocity of the intraoral object to a future point in time. In some embodiments, a non-linear motion trajectory is determined based on digital data from more than two points in time (e.g., first, second, and third digital data representing the actual state of the intraoral cavity at first, second, and third points in time, respectively). Such a non-linear motion trajectory may, for example, relate to a change in the direction of movement and/or a change in the speed of movement of the object. Such non-linear changes can be determined and extrapolated using data from more than two points in time. Furthermore, data from more than two points in time may be used to determine other parameters, such as a force vector associated with the intraoral object over the first, second, and third points in time.
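
One possible sketch of using three time points to detect non-linearity is given below, assuming registered landmark positions and month-based intervals; treating the acceleration direction as a crude proxy for the net force on the tooth (under a constant-mass assumption) is an illustrative interpretation, not a stated requirement of the method.

```python
import numpy as np

def velocity_and_acceleration(p0, p1, p2, dt01_months, dt12_months):
    """From three registered positions of the same landmark, estimate the velocity
    over each interval and a finite-difference acceleration vector. A non-zero
    acceleration suggests the trajectory should be extrapolated non-linearly, and
    its direction can serve as a rough indicator of the net force on the tooth."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    v01 = (p1 - p0) / dt01_months                      # velocity over the first interval
    v12 = (p2 - p1) / dt12_months                      # velocity over the second interval
    accel = (v12 - v01) / (0.5 * (dt01_months + dt12_months))
    return v01, v12, accel
```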

Optionally, a graphical representation of the future state of the intraoral object is displayed to the user to facilitate visualization and diagnosis of current or future dental or orthodontic conditions. For example, if a future position is predicted, the graphical representation may display the intraoral object in its future position within the oral cavity. As discussed further herein, the graphical representation may be provided as part of an interactive graphical user interface shown on a display operatively connected to the computer system.

The future state of the intraoral object may be used to determine a future undesirable dental or orthodontic condition that is predicted to occur at a future point in time if the intraoral cavity is not treated. As described above and herein, future conditions may be predicted before they occur to allow for pre-emptive treatment. Similar to other embodiments herein, a predicted digital representation of the intraoral cavity at the future time point may be generated based on the predicted future state of the intraoral object and used to predict the future condition.

It should be understood that the method 1300 described above is provided by way of example. One or more steps of method 1300 may include one or more sub-steps. Method 1300 may include further steps or sub-steps, may omit one or more steps or sub-steps, and/or may repeat one or more steps or sub-steps. For example, three or more sets of digital data may be employed and analyzed rather than two sets. Some steps are optional, for example, steps 1340 and/or 1350. For example, in some embodiments, step 1350 is omitted such that method 1300 does not involve predicting a future state of an intraoral object and/or of the intraoral cavity. In these embodiments, evaluating whether the change exceeds a predetermined threshold is sufficient to indicate whether an undesirable dental or orthodontic condition is currently present, or to predict whether one will occur in the future.

In some embodiments, systems, methods, and devices for predicting a future dental or orthodontic condition implement one or more user interfaces to output data to a user and/or receive user input. For example, a computer system configured to predict a patient's future condition may include instructions for generating a graphical user interface on a display (e.g., monitor, screen, etc.). The computer system may be operatively connected to one or more input devices (e.g., keyboard, mouse, touch screen, joystick, etc.) for receiving user inputs to enable interaction with the user interface. The user interface may enable the user to visualize changes in the patient's oral cavity, any predicted future conditions, potential treatment options, and/or predicted outcomes of treatment options.

Fig. 11A-11G illustrate a user interface 1100 for predicting a future condition of a patient. The user interface 1100 may be generated by one or more processors of a computer system and displayed to a user on a display. The user interface 1100 may include one or more drop down menus 1102 that allow a user to access various functions, a display window 1104 for displaying a digital model or other patient data within the oral cavity, a timeline 1106 representing chronological information for displaying the patient data, a list 1108 of identified and/or predicted dental or orthodontic conditions, a list of potential treatment options or solutions for the conditions 1110, buttons 1112 or other interactive elements for displaying cost information related to one or more treatment options, and/or a navigation dial 1114 or other interactive element for manipulating the view in the display window 1104. The user may interact with various elements of user interface 1100 (e.g., by clicking, keyboard entry, etc.) to perform various operations related to predicting future conditions.

Fig. 11A shows the user interface 1100 displaying digital data of a patient's oral cavity. The digital data may be generated from any suitable combination of the types of digital data described herein (e.g., scan data, sub-surface data, etc.) and may be displayed in any suitable format via the user interface 1100. In the depicted embodiment, the digital data is displayed in the display window 1104 as a three-dimensional model of the patient's upper and lower dental arches 1116a, 1116b. In alternative embodiments, the digital data may be displayed in other formats, such as a two-dimensional image of the oral cavity. Models of other portions of the oral cavity, such as models of the jaw, palate, airway, tongue, and/or TMJ, may also be displayed, if desired. The user interface 1100 can include various types of tools and settings to allow the user to control what data is presented in the display window 1104 and how the data is displayed. For example, the user may have the option to switch between different display formats (e.g., switch between a two-dimensional view and a three-dimensional view). In some embodiments, for example by selecting the appropriate field in the "View" option 1118 of the drop-down menu 1102, the user may control which portions of the oral cavity are displayed. For example, the user may choose to view only the lower arch, only the upper arch, or the upper and lower arches together (e.g., in occlusion or separated). Alternatively, if such data is available, the user may also choose whether to view certain tissues, such as teeth, tooth roots, gums, tongue, jaw, palate, airway, etc. Additional manipulation of the displayed data may be performed using the navigation dial 1114. For example, the navigation dial 1114 may control the position, orientation, and/or zoom level of the data displayed in the display window in three-dimensional space.

As discussed herein, digital data of the patient's oral cavity may be received at a plurality of time points. Thus, the user interface 1100 may be used to display and compare data from different points in time. In some embodiments, the timeline 1106 displays a chronological ordering of the digital data available for a particular patient, e.g., scans taken over time. The user may select one or more points in time using the timeline 1106 to display the data for the selected points in time. For example, as shown in FIG. 11A, a single point in time, as indicated by the marker 1120, has been selected, and the digital data presented in the display window 1104 corresponds to the intraoral data obtained at that point in time.

In some embodiments, if multiple points in time are selected, the display window 1104 displays the corresponding sets of digital data overlaid on each other, thereby facilitating visual comparison of the data at different points in time. For example, changes in the oral cavity between different points in time may be visually represented using highlighting, colors, shading, markings, etc., according to the user's preferences. In some embodiments, the user interface 1100 also displays measurement data, such as size, force, and/or vector data, that quantifies the changes between the different points in time. Optionally, the user interface 1100 can display an animation of the progression of the patient's oral cavity over one or more selected time intervals. In these embodiments, the user interface 1100 may include one or more animation controls (not shown) that allow the user to play, stop, rewind, and/or fast-forward the animation.

The user interface 1100 may display a list 1108 of existing dental or orthodontic conditions present within the patient's oral cavity at the selected point in time. As described above and herein, the methods presented herein can identify an existing condition based on the intraoral digital data. For example, as shown in FIG. 11A, a minor spacing condition and a minor rotation condition are identified at the selected point in time. In some embodiments, the portions of the oral cavity associated with the listed conditions are identified on the data shown in the display window 1104 using visual indicators such as labels, colors, shading, or the like. For example, in the depicted embodiment, the affected areas of the patient's upper and lower dental arches 1116a-b are marked with circles and labels.

In some embodiments, the user interface 1100 may display a list 1110 of potential treatment options (also referred to herein as treatment regimens) for the identified dental or orthodontic conditions. The list 1110 may indicate which condition or conditions each regimen addresses. Optionally, the listed treatment regimens may be displayed as hyperlinks, and the user may click on or select a hyperlink to view additional information about each regimen, such as descriptions, images of treatment products or procedures, predicted costs, predicted treatment duration, insurance and reimbursement information, treatment provider lists, treatment provider web pages, ordering information, and the like. Alternatively or in combination, the predicted cost of a particular regimen may be displayed in response to the user selecting the "cost" button 1112. In some embodiments, if a treatment regimen is selected, the user interface 1100 displays a prediction of the treatment outcome, as discussed further herein. Optionally, the user interface 1100 may generate and display suggested times for one or more appointments, e.g., for monitoring the progression of a detected condition and/or implementing a selected treatment regimen.

Fig. 11B shows the user interface 1100 displaying digital data obtained at an additional point in time. In the depicted embodiment, the timeline 1106 is updated to include the additional point in time. The list of identified conditions 1108 and the list of potential treatment regimens 1110 are updated to reflect the progression within the oral cavity. For example, the patient data at the later point in time shown in FIG. 11B reflects an increase in the number and severity of the identified conditions 1108 as compared to the patient data shown in FIG. 11A. This is also reflected by an increase in the number and aggressiveness of the treatment regimens 1110 shown.

Fig. 11C illustrates the user interface 1100 displaying a predicted digital representation of the patient's intraoral cavity at a future point in time. As described above and herein, the predicted digital representation may be generated from the intraoral digital data obtained at previous points in time. In some embodiments, the predicted digital representation is generated in response to the user selecting the "predict" option 1122 from the drop-down menu 1102. The user may indicate the future point in time for which a prediction should be generated by selecting from one or more preset options or by entering a custom date.

Once the time interval is selected, the digital data from the previous points in time is used to generate the predicted digital representation according to the methods described herein. Optionally, the user may select which points in time to use to generate the prediction, for example, by selecting the desired points in time from the timeline 1106. The resulting predicted digital representation may be displayed within the display window 1104 as a two-dimensional or three-dimensional model of the oral cavity, e.g., a model of the patient's lower and upper arches 1124a, 1124b. The user may adjust how the predicted representation is displayed in the display window 1104 (e.g., by adjusting the position, orientation, zoom, view, etc.), as discussed herein with respect to fig. 11A. The timeline 1106 may be updated to show the chronological relationship between the future point in time represented by the prediction (e.g., as indicated by the marker 1126) and the points in time corresponding to the digital data used in the prediction. Additionally, the user interface 1100 may incorporate tools that allow the user to compare the predicted state of the oral cavity with previous and/or current states of the oral cavity. For example, the predicted digital representation may be superimposed on the digital data from a previous point in time, with changes between the previous and future states visually represented using highlighting, colors, shading, markings, etc., according to user preferences. If desired, quantitative measures of change can be calculated and displayed. As another example, the user interface 1100 may be configured to display an animated representation of the progression of the patient's intraoral cavity from a previous point in time to the future point in time.

As described above and herein, the predicted digital representation can be used to predict one or more orthodontic or dental conditions that may occur within the intraoral cavity at the selected future point in time. The predicted future conditions may be displayed to the user in the list 1108 and/or indicated on the model in the display window 1104. Further, if applicable, potential treatment options for the identified conditions may be displayed in the list 1110, with links to associated resources. For example, the prediction shown in fig. 11C indicates that, five years after the last time point, the patient will have a major spacing condition and a minor crowding condition in addition to the minor spacing and minor rotation conditions that were present at the last time point (see fig. 11A). The user interface 1100 also indicates that orthodontic treatment is a potential treatment regimen for treating and/or preventing the identified conditions.

Fig. 11D illustrates the user interface 1100 displaying the predicted outcome of a selected treatment regimen. The predicted outcome may be displayed as a two-dimensional or three-dimensional model representing the state of the oral cavity at a particular point in time after implementation of the selected regimen. For example, in the depicted embodiment, models of the patient's upper and lower dental arches 1128a, 1128b are displayed in the display window 1104 and correspond to the predicted outcome of the selected treatment regimen 1130 at a future point in time 1132. The user may adjust how the model representing the predicted outcome is displayed in the display window 1104 (e.g., by adjusting position, orientation, zoom, view, etc.), as discussed herein. Optionally, the interface 1100 may also display other types of data related to the predicted outcome, such as predicted cost, predicted treatment duration, insurance and reimbursement information, treatment provider lists, treatment provider web pages, ordering information, and the like.

In some embodiments, the user interface 1100 allows the user to compare the predicted outcome with previous and/or current states of the intraoral cavity (e.g., by selecting the corresponding points in time in the timeline 1106) and with the predicted future state of the untreated intraoral cavity. For example, the model representing the predicted outcome may be overlaid on the digital data from a previous point in time and/or on the predicted digital representation of the untreated state. Changes between the various digital models may be visually represented by, for example, highlighting, color, shading, markings, and the like. If desired, quantitative measures of change can be calculated and displayed.

Optionally, if multiple treatment protocols are available, the interface 1100 may include tools for comparing characteristics (e.g., cost, duration, etc.) and/or results of different treatments for decision-making. For example, the interface 1100 may display multiple models within the oral cavity representing the results of different treatment regimens (e.g., overlapping one another), thereby facilitating visual comparison of the efficacy of different treatments. As another example, the predicted cost, duration, and/or any other relevant parameters for each treatment regimen may be displayed and compared, for example, in a list or table format.

Fig. 11E shows a user interface 1100 displaying a comparison of the digital data within the patient's mouth. Although the embodiments describe the comparison of intraoral data obtained at two different points in time, it should be understood that the embodiments herein are equally applicable to comparison between other types of data, such as comparison between previously obtained intraoral data and predicted future intraoral states. Further, the methods herein can be used to compare more than two data sets (e.g., three, four, five, or more data sets), if desired.

In some embodiments, the comparison is presented to the user in the display window 1104 as an overlay of a first model 1134 and a second model 1136. The user may choose which data sets to compare (e.g., by making an appropriate selection in the timeline 1106). As shown in fig. 11E, the first model 1134 corresponds to intraoral digital data obtained at a first point in time 1138, and the second model 1136 corresponds to data obtained at a subsequent point in time 1140. Similar to other embodiments herein, the user may manipulate the data displayed in the display window 1104, for example, by selecting which portions of the oral cavity are to be compared (e.g., via the "view" option 1118 and/or the display settings 1142) and/or by adjusting the display view (e.g., via the navigation dial 1114).

The different models shown in the display window 1104 may be visually distinguished from each other by color, shading, contour lines, highlighting, and the like. For example, the first model 1134 may be depicted in dashed outline, while the second model 1136 is depicted as a volumetric representation. Further, any differences or changes between the displayed models may be shown by visual identifiers, e.g., highlights, colors, shading, markings, labels, etc. Optionally, if the differences or changes represent an existing or predicted future condition, the condition is indicated to the user in the display window 1104 and/or the condition list 1108. For example, in the depicted embodiment, certain of the patient's teeth have worn between the first time point 1138 and the second time point 1140, and "bruxism" is accordingly displayed in the condition list 1108.

Fig. 11F illustrates the user interface 1100 displaying a model 1144 of the patient's dental arches in a bite state. As described above and herein, data representing the spatial relationship between the patient's upper and lower dental arches can be obtained and used to model the patient's bite. Thus, the user interface 1100 may be used to display the patient's dental arches in the bite state, alternatively or in combination with displaying the arches separately (e.g., as shown in figs. 11A-11E). Displaying the arches in the bite state is advantageous for displaying the progression and/or prediction of bite-related conditions (e.g., underbite, overbite, cross bite, or similar class II or class III malocclusions). Optionally, the bite data is supplemented with data for other parts of the jaw in order to provide a more complete representation of the patient's intraoral anatomy, thereby facilitating diagnosis and treatment of complex conditions involving multiple regions of the oral cavity. For example, data for the TMJ and tooth roots may be displayed in conjunction with the bite data, e.g., to facilitate viewing of root movement and of bruxism problems that may cause TMJ pain.

Fig. 11G illustrates a user interface 1100 displaying a digital model 1146 of a treatment regimen for sleep apnea. In some embodiments, the treatment regimen for sleep apnea involves applying an oral appliance (e.g., a mandibular advancement splint) to the patient's jaws to advance the patient's lower jaw relative to the upper jaw during sleep. Mandibular advancement may reduce the occurrence of sleep apnea events, for example, by moving the tongue away from the upper airway. Thus, the user interface 1100 may be used to display a model 1146 representing the relative position of the patient's jaws produced by wearing the appliance. The model 1146 may be generated using digital data of the patient's oral cavity (e.g., scan data of the teeth and/or bite data representing the bite relationship between the jaws) obtained at a current or previous point in time.
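One way such an appliance-held jaw position might be simulated from existing scan and bite data is a simple forward translation of the lower-jaw model, as in the sketch below; the advancement amount, anterior axis, and function name are illustrative assumptions and not part of the original disclosure.

```python
import numpy as np

def simulate_mandibular_advancement(lower_jaw_points, advancement_mm,
                                    anterior_axis=(0.0, 1.0, 0.0)):
    """Shift the lower-jaw model forward to approximate the appliance-held position.

    lower_jaw_points: Nx3 vertices of the lower jaw in the bite coordinate frame.
    advancement_mm: protrusion produced by the appliance (a prescribed value).
    anterior_axis: unit vector pointing anteriorly in that frame (an assumption).
    """
    points = np.asarray(lower_jaw_points, dtype=float)
    direction = np.asarray(anterior_axis, dtype=float)
    direction = direction / np.linalg.norm(direction)
    return points + advancement_mm * direction

advanced = simulate_mandibular_advancement([[0.0, 6.0, 2.0]], advancement_mm=5.0)
print(advanced)  # [[ 0. 11.  2.]]
```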

The present invention also provides a computer system that can be programmed or otherwise configured to implement the methods provided herein.

Fig. 12 schematically illustrates a system 1200 that includes a computer server ("server") 1201 programmed to implement the methods described herein. The server 1201 may be referred to as a "computer system". The server 1201 includes a central processing unit (CPU, also referred to herein as a "processor") 1205, which can be a single-core or multi-core processor, or a plurality of processors for parallel processing. The server 1201 also includes memory 1210 (e.g., random access memory, read-only memory, flash memory), an electronic storage unit 1215 (e.g., a hard disk), a communication interface 1220 (e.g., a network adapter) for communicating with one or more other systems, and peripheral devices 1225, such as cache, other memory, data storage, and/or an electronic display adapter. The memory 1210, storage unit 1215, interface 1220, and peripheral devices 1225 communicate with the CPU 1205 through a communication bus (solid lines), such as that of a motherboard. The storage unit 1215 can be a data storage unit (or data repository) for storing data. The server 1201 is operatively coupled to a computer network ("network") 1230 with the aid of the communication interface 1220. The network 1230 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet. In some cases, the network 1230 is a telecommunications and/or data network. The network 1230 can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network 1230, in some cases with the aid of the server 1201, can implement a peer-to-peer network, which may enable devices coupled to the server 1201 to behave as clients or servers. The server 1201 is in communication with an imaging device 1245, such as an intraoral scanning device or system. The server 1201 may communicate with the imaging device 1245 through the network 1230, or alternatively, through direct communication with the imaging device 1245.

The storage unit 1215 can store files, such as computer-readable files (e.g., 3D intraoral scan files). In some cases, the server 1201 can include one or more additional data storage units that are external to the server 1201, such as located on a remote server that is in communication with the server 1201 through an intranet or the Internet.

In some cases, system 1200 includes a single server 1201. In other instances, system 1200 includes multiple servers in communication with each other via an intranet and/or the internet.

The methods described herein can be implemented by way of machine (or computer processor) executable code (or software) stored on an electronic storage location of the server 1201, such as in the memory 1210 or the electronic storage unit 1215. During use, the code can be executed by the processor 1205. In some cases, the code can be retrieved from the storage unit 1215 and stored in the memory 1210 for ready access by the processor 1205. In some cases, the electronic storage unit 1215 may be omitted, and the machine-executable instructions are stored in the memory 1210. Alternatively, the code can be executed on a remote computer system.

The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or it can be compiled at runtime. The code can be supplied in a programming language that is selected to enable the code to execute in a pre-compiled or as-compiled fashion.

Aspects of the systems and methods provided herein, such as the server 1201, can be embodied in programming. Various aspects of the technology may be thought of as "products" or "articles of manufacture", typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. The machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random access memory, flash memory) or a hard disk. "Storage"-type media can include any or all of the tangible memory of a computer, processor, or the like, or associated modules thereof (such as various semiconductor memories, tape drives, disk drives, and the like), which may provide non-transitory storage for the software programming at any time. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may carry the software elements includes optical, electrical, and electromagnetic waves, such as those used across physical interfaces between local devices, through wired and optical landline networks, and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, may also be considered media carrying the software. As used herein, unless restricted to a non-transitory, tangible "storage" medium, terms such as computer or machine "readable medium" refer to any medium that participates in providing instructions to a processor for execution.

Hence, a machine-readable medium, such as computer-executable code, may take many forms, including but not limited to a tangible storage medium, a carrier-wave medium, or a physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer or the like, which may be used to implement the databases and the like shown in the drawings. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.

The server 1201 can be configured for data mining, extract, transform, and load (ETL), or crawling operations (including web crawling, in which the system retrieves data from a remote system over a network and accesses an application programming interface or parses result tags), which may allow the system to load information from a raw data source (or mined data) into a data warehouse. The data warehouse may be configured for use with a business intelligence system.
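A minimal sketch of an extract-transform-load step of the kind mentioned above, loading hypothetical scan metadata into a SQLite table standing in for a data warehouse; the schema, file name, and records are illustrative assumptions only.

```python
import sqlite3

# Hypothetical ETL step: records pulled from a raw data source (e.g., scan
# metadata retrieved over the network or via an API) are loaded into a
# warehouse table for later analysis and reporting.
raw_records = [
    {"patient": "P-001", "scan_date": "2015-01-05", "scanner": "intraoral"},
    {"patient": "P-001", "scan_date": "2015-07-05", "scanner": "intraoral"},
]

conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS scans (patient TEXT, scan_date TEXT, scanner TEXT)"
)
conn.executemany(
    "INSERT INTO scans (patient, scan_date, scanner) "
    "VALUES (:patient, :scan_date, :scanner)",
    raw_records,
)
conn.commit()

# A business-intelligence layer could then query the warehouse, e.g.:
for row in conn.execute("SELECT patient, COUNT(*) FROM scans GROUP BY patient"):
    print(row)
conn.close()
```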

The results of the method of the present invention may be displayed to a user (e.g., a healthcare provider) on a User Interface (UI) (e.g., a Graphical User Interface (GUI)) of the user's electronic device. A UI, such as a GUI, may be provided on a display of the user's electronic device. The display may be a capacitive or resistive touch display. Such a display may be used with other systems and methods of the present invention.

The one or more processors may be programmed to perform the various steps and methods described in the various embodiments and implementations of the invention. The system described in the embodiments of the present application may be composed of various modules, for example, as described above. Each module may include various subroutines, programs, and macros. Each module may be separately compiled and linked into a single executable program.

It will be apparent that the number of steps used in such methods is not limited to those described above. Moreover, the methods do not require that all of the described steps be present. Although the methods above are described as discrete steps, one or more steps may be added, combined, or even deleted without departing from the intended functionality of the embodiments. For example, the steps may be performed in a different order. It will also be apparent that the above-described methods may be carried out in a partially or substantially automated manner.

As will be appreciated by one skilled in the art, the methods of the present invention may be embodied, at least in part, in software and executed in a computer system or other data processing system. Thus, in some exemplary embodiments, hardware may incorporate software instructions to implement the invention. Any process descriptions, elements, or blocks in the flow diagrams described herein and/or shown in the figures should be understood as potentially representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or elements in the process. Furthermore, the functions described in one or more examples may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium and executed by a hardware-based processing unit, such as one or more processors, including general purpose microprocessors, application specific integrated circuits, field programmable logic arrays, or other logic circuitry.

As used herein, the term "and/or" is used as a functional word to mean two words or expressions used together or separately. For example, A and/or B includes A alone, B alone, and A and B together.

While preferred embodiments have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments described herein may be employed in practicing the invention. By way of non-limiting example, it will be appreciated by those skilled in the art that a particular feature or characteristic described with reference to one drawing or embodiment may be combined with a feature or characteristic described with reference to another drawing or embodiment. The following claims define the scope of the invention and methods and structures within the scope of these claims and their equivalents are therefore intended to be covered thereby.
