System and method for analyzing surgical video

Document No.: 1926691  Publication date: 2021-12-03

Note: This technology, "System and method for analyzing surgical video," was created by T. Wolf and D. Asselmann on 2020-02-20. Abstract: Systems and methods for analyzing and reviewing surgical videos are disclosed. The systems and methods may include indexing characterized intraoperative surgical events, analyzing and cataloging surgical clips based on complexity, generating intraoperative surgical event summaries, superimposing a timeline on surgical video, and/or generating surgical event clip compilations. The systems and methods may also include analysis of surgical video to estimate surgical pressure, estimate the source and extent of fluid leaks, detect skipped surgical events, predict a patient's post-discharge risk, update predicted outcomes, provide real-time recommendations to surgeons, determine insurance claims, adjust operating room schedules, and/or fill out post-operative reports.

1. A computer-implemented method for reviewing a surgical video, the method comprising:

accessing at least one video of a surgical procedure;

causing the at least one video to be output for display;

superimposing a surgical timeline over the at least one video output for display, wherein the surgical timeline includes markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision-making node; and

enabling a surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker.
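
The marker-and-jump behavior recited in claim 1 can be pictured with a minimal Python sketch. The `TimelineMarker` fields and the `TimelinePlayer` interface below are hypothetical illustrations, not the patent's implementation.

```python
# Minimal sketch of a surgical timeline with selectable markers (claim 1).
# Field names and the player interface are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class TimelineMarker:
    time_sec: float       # offset into the surgical video
    kind: str             # "phase" | "intraoperative_event" | "decision_node"
    label: str            # text shown on the timeline
    criticality: int = 0  # optional coding of the marker (see claim 2)

class TimelinePlayer:
    """Wraps a video position and jumps playback to a selected marker."""

    def __init__(self, markers: List[TimelineMarker]):
        self.markers = sorted(markers, key=lambda m: m.time_sec)
        self.position_sec = 0.0

    def select_marker(self, index: int) -> float:
        # Jump the display of the video to the location associated with the marker.
        self.position_sec = self.markers[index].time_sec
        return self.position_sec

markers = [
    TimelineMarker(0.0, "phase", "Access"),
    TimelineMarker(612.5, "intraoperative_event", "Bleeding", criticality=2),
    TimelineMarker(1440.0, "decision_node", "Convert to open?"),
]
player = TimelinePlayer(markers)
print(player.select_marker(1))  # playback jumps to 612.5 seconds
```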

2. The method of claim 1, wherein the markers are coded by at least one of a color or a criticality level.

3. The method of claim 1, wherein the surgical timeline includes textual information identifying portions of the surgical procedure.

4. The method of claim 1, wherein the at least one video comprises a compilation of clips from a plurality of surgical procedures, arranged in the chronological order of the procedures, wherein the compilation depicts complications from the plurality of surgical procedures; and wherein the one or more markers are associated with the plurality of surgical procedures and displayed on a common timeline.

5. The method of claim 1, wherein the one or more markers comprise decision-making node markers that correspond to decision-making nodes of the surgical procedure, selection of which enables a surgeon to view two or more alternative video segments from two or more corresponding other surgical procedures; and wherein the two or more video segments exhibit different behavior.

6. The method of claim 1, wherein the one or more markers comprise a decision-making node marker corresponding to a decision-making node of the surgical procedure; and wherein selection of the decision-making node marker causes display of one or more alternative possible decisions related to the selected decision-making node marker.

7. The method of claim 6, wherein one or more estimated outcomes associated with the one or more alternative possible decisions are displayed in conjunction with the display of the one or more alternative possible decisions.

8. The method of claim 7, wherein the one or more estimated outcomes are results of an analysis of a plurality of past surgical videos that include respective similar decision-making nodes.

9. The method of claim 6, wherein information related to a distribution of past decisions made at respective similar past decision-making nodes is displayed in conjunction with the display of the alternative possible decisions.

10. The method of claim 9, wherein the decision-making node of the surgical procedure is associated with a first patient, and the respective similar past decision-making node is selected from past surgical procedures associated with patients having characteristics similar to the first patient.

11. The method of claim 9, wherein the decision-making node of the surgical procedure is associated with a first medical professional, and the respective similar past decision-making node is selected from past surgical procedures associated with medical professionals having characteristics similar to the first medical professional.

12. The method of claim 9, wherein the decision-making node of the surgical procedure is associated with a first prior event in the surgical procedure, and the similar past decision-making node is selected from past surgical procedures that include prior events similar to the first prior event.
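
A hedged sketch of the idea behind claims 9-12: the distribution of past decisions is computed only over past decision-making nodes whose associated patients resemble the current one. The records, fields, and similarity thresholds below are invented for illustration.

```python
# Illustrative sketch of claims 9-12: distribution of past decisions at similar nodes,
# filtered by patient similarity. Records and thresholds are hypothetical.
from collections import Counter

past_nodes = [
    {"decision": "convert_to_open", "patient_age": 67, "bmi": 31},
    {"decision": "continue_laparoscopic", "patient_age": 44, "bmi": 24},
    {"decision": "continue_laparoscopic", "patient_age": 62, "bmi": 29},
]

def is_similar(record, patient, max_age_gap=10, max_bmi_gap=5):
    # A crude similarity test; the claims do not specify the actual criteria.
    return (abs(record["patient_age"] - patient["age"]) <= max_age_gap
            and abs(record["bmi"] - patient["bmi"]) <= max_bmi_gap)

def decision_distribution(past_nodes, patient):
    decisions = [r["decision"] for r in past_nodes if is_similar(r, patient)]
    counts = Counter(decisions)
    total = sum(counts.values()) or 1
    return {decision: n / total for decision, n in counts.items()}

print(decision_distribution(past_nodes, {"age": 65, "bmi": 30}))
# -> {'convert_to_open': 0.5, 'continue_laparoscopic': 0.5}
```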

13. The method of claim 1, wherein the markers include an intraoperative surgical event marker, selection of which enables the surgeon to view alternative video clips from different surgical procedures, and wherein the alternative video clips present different ways of handling the selected intraoperative surgical event.

14. The method of claim 1, wherein the overlay on the video output is displayed prior to the end of the surgical procedure depicted in the displayed video.

15. The method of claim 8, wherein the analysis is based on one or more electronic medical records associated with the plurality of past surgical videos.

16. The method of claim 8, wherein the respective similar decision-making nodes are similar to the decision-making node of the surgical procedure according to a similarity index.

17. The method of claim 8, wherein the analyzing comprises using an implementation of a computer vision algorithm.

18. The method of claim 1, wherein the markers relate to an intraoperative surgical event, and selection of an intraoperative surgical event marker enables the surgeon to view alternative video clips from different surgical procedures.

19. A system for reviewing surgical video, the system comprising:

at least one processor configured to:

access at least one video of a surgical procedure;

cause the at least one video to be output for display;

superimpose a surgical timeline over the at least one video output for display, wherein the surgical timeline includes markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision-making node; and

enable a surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker.

20. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable surgical video review, the operations comprising:

accessing at least one video of a surgical procedure;

causing the at least one video to be output for display;

superimposing a surgical timeline over the at least one video output for display, wherein the surgical timeline includes markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision-making node; and

enabling a surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker.

21. A computer-implemented method for video indexing, the method comprising:

accessing a video clip to be indexed, the video clip comprising footage of a particular surgical procedure;

analyzing the video clip to identify a video clip location associated with a surgical stage of the particular surgical procedure;

generating a stage tag associated with the surgical stage;

associating the stage tag with the video clip location;

analyzing the video clip to identify an event location of a particular intraoperative surgical event within the surgical stage;

associating an event tag with the event location of the particular intraoperative surgical event;

storing event characteristics associated with the particular intraoperative surgical event;

associating at least a portion of the video clip of the particular surgical procedure with the stage tag, the event tag, and the event characteristics in a data structure containing additional video clips of other surgical procedures, wherein the data structure further includes respective stage tags, respective event tags, and respective event characteristics associated with one or more of the other surgical procedures;

enabling a user to access the data structure by selecting a selected stage tag, a selected event tag, and a selected event characteristic of video clips for display;

performing a lookup in the data structure for surgical video clips matching the at least one selected stage tag, selected event tag, and selected event characteristic to identify a matching subset of stored video clips; and

causing the matching subset of the stored video clips to be displayed to the user, thereby enabling the user to view a surgical clip of at least one intraoperative surgical event sharing the selected event characteristic while skipping playback of video clips lacking the selected event characteristic.
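
The indexing and lookup recited in claim 21 can be pictured as a tagged data structure queried by stage tag, event tag, and event characteristic. The following Python sketch uses an in-memory list and invented field names; it is an assumption-level illustration, not the claimed implementation.

```python
# Minimal sketch of the indexing data structure in claim 21; field names and the
# in-memory list are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class IndexedClip:
    video_id: str
    stage_tag: str                      # e.g. "dissection"
    event_tag: str                      # e.g. "bleeding"
    event_characteristics: Dict[str, str] = field(default_factory=dict)
    start_sec: float = 0.0
    end_sec: float = 0.0

class SurgicalVideoIndex:
    def __init__(self) -> None:
        self._clips: List[IndexedClip] = []

    def add(self, clip: IndexedClip) -> None:
        self._clips.append(clip)

    def lookup(self, stage_tag: Optional[str] = None, event_tag: Optional[str] = None,
               **characteristics: str) -> List[IndexedClip]:
        # Return the matching subset; non-matching clips are simply not played back.
        result = []
        for clip in self._clips:
            if stage_tag and clip.stage_tag != stage_tag:
                continue
            if event_tag and clip.event_tag != event_tag:
                continue
            if any(clip.event_characteristics.get(k) != v for k, v in characteristics.items()):
                continue
            result.append(clip)
        return result

index = SurgicalVideoIndex()
index.add(IndexedClip("proc_001", "dissection", "bleeding",
                      {"technique": "clip_and_cut"}, 300.0, 345.0))
index.add(IndexedClip("proc_002", "closure", "suturing",
                      {"technique": "running_suture"}, 1800.0, 1900.0))
print(index.lookup(event_tag="bleeding", technique="clip_and_cut"))
```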

22. The method of claim 21, wherein enabling the user to view a surgical clip of at least one intraoperative surgical event having the selected event characteristic while skipping playback of a portion of a selected surgical event lacking the selected event characteristic comprises: sequentially presenting to the user portions of surgical clips of a plurality of intraoperative surgical events that share the selected event characteristic while skipping playback of portions of selected surgical events that lack the selected event characteristic.

23. The method of claim 21, wherein the stored event characteristics include an adverse outcome of the surgical event, and wherein causing the matching subset to be displayed comprises: enabling the user to view a surgical clip of a selected adverse outcome while skipping playback of a surgical event lacking the selected adverse outcome.

24. The method of claim 21, wherein the stored event characteristics include surgical techniques, and wherein causing the matching subset to be displayed comprises: enabling the user to view a surgical clip of a selected surgical technique while skipping playback of a surgical clip not associated with the selected surgical technique.

25. The method of claim 21, wherein the stored event characteristics include surgeon skill level, and wherein causing the matching subset to be displayed comprises: enabling the user to view a clip exhibiting a selected surgeon skill level while skipping playback of a clip lacking the selected surgeon skill level.

26. The method of claim 21, wherein the stored event characteristics include physical patient characteristics, and wherein causing the matching subset to be displayed comprises: enabling the user to view a clip exhibiting a selected physical patient characteristic while skipping playback of a clip lacking the selected physical patient characteristic.

27. The method of claim 21, wherein the stored event characteristics include an identity of a particular surgeon, and wherein causing the matching subset to be displayed comprises: enabling the user to view a clip showing activity of a selected surgeon while skipping playback of a clip lacking activity of the selected surgeon.

28. The method of claim 21, wherein the stored event characteristics include physiological responses, and wherein causing the matching subset to be displayed comprises: enabling the user to view a clip exhibiting a selected physiological response while skipping playback of a clip lacking the selected physiological response.

29. The method of claim 21, wherein analyzing the video clip to identify the video clip location associated with at least one of the surgical event or the surgical stage comprises: performing computer image analysis on the video clip to identify at least one of a playback start location of the surgical stage or a playback start location of a surgical event.

30. The method of claim 21, further comprising: accessing summary data relating to a plurality of surgical procedures similar to the particular surgical procedure, and presenting to the user statistical information associated with the selected event characteristic.

31. The method of claim 21, wherein the accessed video clip includes video captured via at least one image sensor located at least one of above an operating table, within a surgical cavity of a patient, within an organ of a patient, or within a vasculature of a patient.

32. The method of claim 21, wherein identifying the video clip location is based on user input.

33. The method of claim 21, wherein identifying the video clip location comprises: analyzing frames of the video clip using computer analysis.

34. The method of claim 29, wherein the computer image analysis includes using a neural network model trained with example video frames including previously identified surgical stages, to thereby identify at least one of a video clip location or a stage tag.
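
As a rough illustration of claims 29 and 34, per-frame stage predictions from a trained classifier can be scanned to find the playback start location of a surgical stage. The `find_stage_start` helper and the synthetic predictions below are hypothetical.

```python
# Hedged sketch of claims 29/34: take the first frame classified as the target stage
# as the playback start location. The per-frame labels stand in for the output of a
# neural network trained on previously identified surgical stages.
from typing import Iterable, Optional

def find_stage_start(frame_labels: Iterable[str], target_stage: str,
                     fps: float = 30.0) -> Optional[float]:
    """Return the timestamp (seconds) of the first frame classified as target_stage."""
    for i, label in enumerate(frame_labels):
        if label == target_stage:
            return i / fps
    return None

# Illustrative per-frame predictions only.
predictions = ["preparation"] * 90 + ["dissection"] * 120 + ["closure"] * 60
print(find_stage_start(predictions, "dissection"))  # -> 3.0
```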

35. The method of claim 21, further comprising: determining the stored event characteristics based on user input.

36. The method of claim 21, further comprising: determining the stored event characteristics based on a computer analysis of a video clip depicting the particular intraoperative surgical event.

37. The method of claim 21, wherein generating the stage tag is based on computer analysis of a video clip depicting the surgical stage.

38. The method of claim 21, wherein identifying the matching subset of the stored video clips comprises: using computer analysis to determine a degree of similarity between the matching subset of the stored video clips and the selected event characteristic.

39. A surgical video indexing system, the surgical video indexing system comprising:

at least one processor configured to:

access a video clip to be indexed, the video clip comprising footage of a particular surgical procedure;

analyze the video clip to generate a stage tag;

identify a video clip location associated with a surgical stage of the surgical procedure;

associate the stage tag with the video clip location;

analyze the video clip to identify an event location of a particular intraoperative surgical event;

associate an event tag with the event location of the particular intraoperative surgical event;

store event characteristics of the particular intraoperative surgical event;

associate at least a portion of the video clip of the particular surgical procedure with the stage tag, the event tag, and the event characteristics in a data structure containing additional video clips of other surgical procedures, wherein the data structure further comprises respective stage tags, respective event tags, and respective event characteristics associated with one or more other surgical procedures;

enable a user to access the data structure by selecting a selected stage tag, a selected event tag, and a selected event characteristic of video clips for display;

perform a lookup in the data structure for surgical video clips matching the at least one selected stage tag, selected event tag, or selected event characteristic to identify a matching subset of stored video clips; and

cause the matching subset of the stored video clips to be displayed to the user, thereby enabling the user to view surgical clips of at least one intraoperative surgical event sharing the selected event characteristic while skipping playback of video clips lacking the selected event characteristic.

40. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable video indexing, the operations comprising:

accessing a video clip to be indexed, the video clip comprising footage of a particular surgical procedure;

analyzing the video clip to generate a stage tag;

identifying a video clip location associated with a surgical stage of the surgical procedure;

associating the stage tag with the video clip location;

analyzing the video clip to identify an event location of a particular intraoperative surgical event;

associating an event tag with the event location of the particular intraoperative surgical event;

storing event characteristics of the particular intraoperative surgical event;

associating at least a portion of the video clip of the particular surgical procedure with the stage tag, the event tag, and the event characteristics in a data structure containing additional video clips of other surgical procedures, wherein the data structure further comprises respective stage tags, respective event tags, and respective event characteristics associated with at least one other surgical procedure;

enabling a user to access the data structure by selecting a selected stage tag, a selected event tag, and a selected event characteristic of video clips for display;

performing a lookup in the data structure for surgical video clips matching the at least one selected stage tag, selected event tag, and selected event characteristic to identify a matching subset of stored video clips; and

causing the display of the matching subset of the stored video clips to the user, thereby enabling the user to view surgical clips of at least one other intraoperative surgical event sharing the selected event characteristic while skipping playback of video clips lacking the selected event characteristic.

41. A computer-implemented method of generating a surgical summary snippet, the method comprising:

accessing a particular surgical clip that includes a first set of frames associated with at least one intraoperative surgical event and a second set of frames that are not associated with surgical activity;

accessing historical data based on historical surgical clips of prior surgical procedures, wherein the historical data includes information that distinguishes portions of a historical surgical clip into frames associated with intraoperative surgical events and frames not associated with surgical activity;

distinguishing the first set of frames from the second set of frames in the particular surgical clip based on the information of the historical data; and

upon a user request, presenting to the user a summary of the first set of frames of the particular surgical clip while skipping presenting to the user the second set of frames.
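
One way to picture the summary step of claim 41: frames flagged as associated with intraoperative surgical events are merged into playable segments, and unflagged frames are skipped. The flags here are synthetic; in the claimed method they would follow from the historical data.

```python
# Sketch of the claim 41 summary: keep only flagged frames and collapse them into
# (start_sec, end_sec) segments for playback. Flag values are illustrative.
from typing import List, Tuple

def summarize(event_flags: List[bool], fps: float = 30.0) -> List[Tuple[float, float]]:
    """Collapse flagged frames into playback segments, skipping unflagged frames."""
    segments, start = [], None
    for i, flagged in enumerate(event_flags + [False]):  # sentinel closes the last run
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            segments.append((start / fps, i / fps))
            start = None
    return segments

flags = [False] * 60 + [True] * 90 + [False] * 300 + [True] * 30
print(summarize(flags))  # -> [(2.0, 5.0), (15.0, 16.0)]
```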

42. The method of claim 41, wherein the information distinguishing portions of the historical surgical clip into frames associated with intraoperative surgical events includes an indicator of at least one of presence or movement of a surgical tool.

43. The method of claim 41, wherein the information distinguishing portions of the historical surgical clip into frames associated with intraoperative surgical events includes detected tools and anatomical features in the associated frames.

44. The method of claim 41, wherein the user's request includes an indication of at least one type of intraoperative surgical event of interest, and wherein the first set of frames depicts at least one of the at least one type of intraoperative surgical event of interest.

45. The method of claim 41, wherein the user's request comprises a request to view a plurality of intraoperative surgical events in the particular surgical clip, and wherein presenting the user with a summary of the first set of frames comprises: displaying the first set of frames chronologically while skipping the second set of frames.

46. The method of claim 41, wherein:

the historical data further includes historical surgical outcome data and corresponding historical cause data;

the first set of frames comprises a set of cause frames and a set of outcome frames;

the second set of frames comprises a set of intermediate frames; and

wherein the method further comprises:

analyzing the particular surgical clip to identify a surgical outcome and a corresponding cause of the surgical outcome, the identification being based on the historical outcome data and corresponding historical cause data;

detecting the set of outcome frames in the particular surgical clip based on the analysis, the set of outcome frames being within an outcome phase of the surgical procedure;

detecting the set of cause frames in the particular surgical clip based on the analysis, the set of cause frames being within a cause phase of the surgical procedure that is temporally distant from the outcome phase, and wherein the set of intermediate frames is within an intermediate phase between the set of cause frames and the set of outcome frames;

generating a causal summary of the surgical clip, wherein the causal summary comprises the set of cause frames and the set of outcome frames and skips over the set of intermediate frames; and

wherein the summary of the first set of frames presented to the user includes the causal summary.
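
A minimal sketch of the causal summary in claim 46: the cause frames and the outcome frames are concatenated, and the intermediate frames between them are dropped. The frame ranges below are illustrative only.

```python
# Sketch of the causal summary in claim 46: keep cause frames and outcome frames,
# skip the intermediate frames between them. Frame ranges are illustrative.
def causal_summary(cause_frames, intermediate_frames, outcome_frames):
    # The intermediate frames are deliberately omitted from the summary.
    return list(cause_frames) + list(outcome_frames)

cause = range(1000, 1150)         # frames within the cause phase
intermediate = range(1150, 9000)  # temporally distant filler between cause and outcome
outcome = range(9000, 9120)       # frames within the outcome phase
summary = causal_summary(cause, intermediate, outcome)
print(len(summary))               # 270 frames instead of 8120
```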

47. The method of claim 46, wherein the cause phase comprises a surgical phase in which the cause occurs, and wherein the set of cause frames is a subset of frames in the cause phase.

48. The method of claim 46, wherein the outcome phase comprises a surgical phase in which the outcome can be observed, and wherein the set of outcome frames is a subset of the frames in the outcome phase.

49. The method of claim 46, wherein the method further comprises: analyzing the particular surgical clip using a machine learning model trained, using the historical data, to identify surgical outcomes and corresponding causes of the surgical outcomes.

50. The method of claim 41, wherein the particular surgical clip depicts a surgical procedure performed on a patient and captured by at least one image sensor in an operating room, and wherein the method further comprises: exporting the first set of frames for storage in a medical record of the patient.

51. The method of claim 50, wherein the method further comprises: generating an index of the at least one intraoperative surgical event, and wherein exporting the first set of frames comprises: generating a compilation of the first set of frames that includes the index and is configured to enable viewing of the at least one intraoperative surgical event based on selection of one or more index entries.

52. The method of claim 51, wherein the compilation comprises a series of frames of different intraoperative events stored as a continuous video.

53. The method of claim 50, further comprising: associating the first set of frames with a unique patient identifier, and updating a medical record that includes the unique patient identifier.

54. The method of claim 51, wherein the location of the at least one image sensor is at least one of above an operating table in the operating room or within the patient.

55. The method of claim 51, wherein distinguishing the first set of frames from the second set of frames in the particular surgical clip comprises:

analyzing the particular surgical clip to detect a medical instrument;

analyzing the particular surgical clip to detect an anatomical structure;

analyzing the video to detect relative movement between the detected medical instrument and the detected anatomical structure; and

distinguishing the first set of frames from the second set of frames based on the relative movement, wherein the first set of frames includes surgical activity frames and the second set of frames includes non-surgical activity frames, and wherein presenting the summary thereby enables a surgeon preparing for surgery to skip the non-surgical activity frames during a video review of an abbreviated presentation.
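
A rough sketch of the relative-movement test in claim 55: a frame is flagged as surgical activity when the distance between a detected instrument position and a detected anatomical landmark changes. The synthetic (x, y) detections and the threshold are assumptions; real positions would come from a computer-vision model.

```python
# Illustrative sketch of claim 55: flag frames where the instrument moves relative to
# the anatomy. Positions and the threshold are invented for illustration only.
import math

def relative_distances(tool_xy, anatomy_xy):
    return [math.dist(t, a) for t, a in zip(tool_xy, anatomy_xy)]

def activity_flags(tool_xy, anatomy_xy, motion_threshold=2.0):
    distances = relative_distances(tool_xy, anatomy_xy)
    flags = [False]  # first frame has no previous frame to compare against
    for prev, cur in zip(distances, distances[1:]):
        flags.append(abs(cur - prev) > motion_threshold)  # distance change => activity
    return flags

tool = [(10, 10), (10, 10), (14, 10), (20, 10)]
anatomy = [(30, 10)] * 4
print(activity_flags(tool, anatomy))  # -> [False, False, True, True]
```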

56. The method of claim 55, wherein distinguishing the first set of frames from the second set of frames is further based on the detected relative position between the medical instrument and the anatomical structure.

57. The method of claim 55, wherein distinguishing the first set of frames from the second set of frames is further based on the detected interaction between the medical instrument and the anatomical structure.

58. The method of claim 55, wherein skipping the non-surgical activity frames comprises: skipping a majority of the frames that capture non-surgical activity.

59. A system for generating a surgical summary snippet, the system comprising:

at least one processor configured to:

access a particular surgical clip that includes a first set of frames associated with at least one intraoperative surgical event and a second set of frames that are not associated with surgical activity;

access historical data associated with a historical surgical clip of a prior surgical procedure, wherein the historical data includes information that distinguishes portions of the historical surgical clip into frames associated with intraoperative surgical events and frames not associated with surgical activity;

distinguish the first set of frames from the second set of frames in the particular surgical clip based on the information of the historical data; and

upon a user request, present to the user a summary of the first set of frames of the particular surgical clip while skipping presenting to the user the second set of frames.

60. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable generation of a surgical summary snippet, the operations comprising:

accessing a particular surgical clip that includes a first set of frames associated with at least one intraoperative surgical event and a second set of frames that are not associated with surgical activity;

accessing historical data associated with a historical surgical clip of a prior surgical procedure, wherein the historical data includes information that distinguishes portions of the historical surgical clip into frames associated with intraoperative surgical events and frames not associated with surgical activity;

distinguishing the first set of frames from the second set of frames in the particular surgical clip based on the information of the historical data; and

upon a user request, presenting to the user a summary of the first set of frames of the particular surgical clip while skipping presenting to the user the second set of frames.

61. A computer-implemented method of surgical preparation, the method comprising:

accessing a repository of a plurality of sets of surgical video clips that reflect a plurality of surgical procedures performed on different patients and that include intra-operative surgical events, surgical results, patient characteristics, surgeon characteristics, and intra-operative surgical event characteristics;

enabling a surgeon preparing for a contemplated surgical procedure to input case-specific information corresponding to the contemplated surgical procedure;

comparing the case-specific information to data associated with the plurality of sets of surgical video clips to identify a set of intra-operative events that are likely to be encountered during the contemplated surgical procedure;

identifying, using the case-specific information and the identified set of intra-operative events likely to be encountered, a particular frame in a particular set of the plurality of sets of surgical video clips that corresponds to the identified set of intra-operative events, wherein the identified particular frame comprises frames from the plurality of surgical procedures performed on different patients;

determining that first and second sets of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic;

skipping inclusion of the second set from a compilation to be presented to the surgeon and including the first set in the compilation to be presented to the surgeon; and

enabling the surgeon to view a presentation comprising the compilation containing frames from different surgeries performed on different patients.
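
The de-duplication step of claim 61 can be sketched as follows: when two candidate clip sets from different patients share a common event characteristic, only the first is included in the compilation presented to the surgeon. The record keys and values are invented for illustration.

```python
# Hedged sketch of claim 61: omit redundant clip sets that share a characteristic
# already covered by an earlier set. Records are hypothetical.
def build_compilation(candidate_sets):
    seen_characteristics = set()
    compilation = []
    for clip_set in candidate_sets:
        key = clip_set["event_characteristic"]
        if key in seen_characteristics:
            continue  # skip inclusion of the redundant set
        seen_characteristics.add(key)
        compilation.append(clip_set)
    return compilation

candidates = [
    {"patient": "A", "event_characteristic": "adhesions_dense", "frames": [101, 102]},
    {"patient": "B", "event_characteristic": "adhesions_dense", "frames": [955, 956]},
    {"patient": "C", "event_characteristic": "bleeding_minor", "frames": [402, 403]},
]
print([c["patient"] for c in build_compilation(candidates)])  # -> ['A', 'C']
```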

62. The method of claim 61, further comprising: enabling display of a common surgical timeline alongside the presentation, the common surgical timeline including one or more chronological markers corresponding to one or more of the identified particular frames.

63. The method of claim 61, wherein enabling the surgeon to view the presentation comprises: sequentially displaying discrete sets of video clips of different surgical procedures performed on different patients.

64. The method of claim 63, wherein sequentially displaying the discrete sets of video clips comprises: displaying an index of the discrete sets of video clips such that the surgeon can select one or more of the discrete sets of video clips.

65. The method of claim 64, wherein the index includes a parsing of the discrete sets into a timeline of corresponding surgical stages and textual stage indicators.

66. The method of claim 65, wherein the timeline includes an intraoperative surgical event marker corresponding to an intraoperative surgical event, and wherein the surgeon is enabled to click on the intraoperative surgical event marker to display at least one frame depicting the corresponding intraoperative surgical event.

67. The method of claim 61, wherein the case specific information corresponding to the contemplated surgical procedure is received from an external device.

68. The method of claim 61, wherein comparing the case-specific information to the data associated with the plurality of sets of surgical video clips comprises: using an artificial neural network to identify the set of intraoperative events likely to be encountered during the contemplated surgical procedure.

69. The method of claim 68, wherein using the artificial neural network comprises: providing the case-specific information as input to the artificial neural network.

70. The method of claim 61, wherein the case-specific information includes characteristics of a patient associated with the contemplated procedure.

71. The method of claim 70, wherein the patient characteristic is received from a medical record of the patient.

72. The method of claim 71, wherein the case-specific information comprises information related to a surgical tool.

73. The method of claim 72, wherein the information related to the surgical tool comprises at least one of a tool type or a tool model.

74. The method of claim 71, wherein the common characteristic comprises a characteristic of the different patients.

75. The method of claim 61, wherein the common characteristic comprises an intra-operative surgical event characteristic of the contemplated surgical procedure.

76. The method of claim 61, wherein determining that the first and second sets of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic comprises: identifying the common characteristic using an implementation of a machine learning model.

77. The method of claim 76, wherein the method further comprises: training the machine learning model using example video clips to determine whether two sets of video clips share the common characteristic, and wherein implementing the machine learning model comprises: implementing the trained machine learning model.

78. The method of claim 61, wherein the method further comprises: training a machine learning model, based on the intraoperative surgical events, the surgical results, the patient characteristics, the surgeon characteristics, and the intraoperative surgical event characteristics, to generate an index of the repository; and generating the index of the repository; and wherein comparing the case-specific information to the data associated with the plurality of sets comprises: searching the index.

79. A surgical preparation system, the surgical preparation system comprising:

at least one processor configured to:

access a repository of a plurality of sets of surgical video clips that reflect a plurality of surgical procedures performed on different patients and that include intraoperative surgical events, surgical results, patient characteristics, surgeon characteristics, and intraoperative surgical event characteristics;

enable a surgeon preparing for a contemplated surgical procedure to input case-specific information corresponding to the contemplated surgical procedure;

compare the case-specific information to data associated with the plurality of sets of surgical video clips to identify a set of intraoperative events that are likely to be encountered during the contemplated surgical procedure;

identify, using the case-specific information and the identified set of intraoperative events likely to be encountered, a particular frame in a particular set of the plurality of sets of surgical video clips corresponding to the identified set of intraoperative events, wherein the identified particular frame comprises frames from a plurality of surgical procedures performed on different patients;

determine that first and second sets of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic;

skip inclusion of the second set from the compilation to be presented to the surgeon and include the first set in the compilation to be presented to the surgeon; and

enable the surgeon to view a presentation comprising the compilation, the presentation comprising frames from different surgeries performed on different patients.

80. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable surgical preparation, the operations comprising:

accessing a repository of a plurality of sets of surgical video clips that reflect a plurality of surgical procedures performed on different patients and that include intra-operative surgical events, surgical results, patient characteristics, surgeon characteristics, and intra-operative surgical event characteristics;

enabling a surgeon preparing for a contemplated surgical procedure to input case-specific information corresponding to the contemplated surgical procedure;

comparing the case-specific information to data associated with the plurality of sets of surgical video clips to identify a set of intra-operative events that are likely to be encountered during the contemplated surgical procedure;

identifying, using the case-specific information and the identified set of intraoperative events likely to be encountered, a particular frame in a particular set of the plurality of sets of surgical video clips corresponding to the identified set of intraoperative events, wherein the identified particular frame comprises frames from a plurality of surgical procedures performed on different patients;

determining that the first and second sets of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic;

skipping inclusion of the second set from the compilation to be presented to the surgeon and including the first set in the compilation to be presented to the surgeon; and

enabling the surgeon to view a presentation comprising the compilation, and the presentation comprising frames from different surgeries performed on different patients.

81. A computer-implemented method of analyzing the complexity of a surgical clip, the method comprising:

analyzing frames of the surgical clip to identify anatomical structures in a first set of frames;

accessing first historical data, the first historical data based on an analysis of first frame data captured from a first set of prior surgical procedures;

analyzing the first set of frames using the first historical data and using the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames;

analyzing the frames of the surgical clip to identify a medical tool, the anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames;

accessing second historical data, the second historical data based on an analysis of second frame data captured from a second set of prior surgical procedures; and

analyzing the second set of frames using the second historical data and using the identified interactions to determine a second surgical complexity level associated with the second set of frames.
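
A minimal, assumption-laden sketch of the two-pass complexity determination in claim 81: a first complexity level derived from the identified anatomy and historical statistics, and a second derived from the tool/anatomy interaction. The lookup tables are invented placeholders.

```python
# Illustrative sketch of claim 81: complexity levels looked up from historical
# statistics. Table contents and the default level are assumptions only.
ANATOMY_COMPLEXITY = {"gallbladder": 1, "inflamed_gallbladder": 3}
INTERACTION_COMPLEXITY = {("grasper", "gallbladder"): 1,
                          ("hook_cautery", "inflamed_gallbladder"): 4}

def first_complexity(anatomy: str) -> int:
    # First surgical complexity level, based on the identified anatomical structure.
    return ANATOMY_COMPLEXITY.get(anatomy, 2)  # default mid-level when unknown

def second_complexity(tool: str, anatomy: str) -> int:
    # Second surgical complexity level, based on the tool/anatomy interaction.
    return INTERACTION_COMPLEXITY.get((tool, anatomy), first_complexity(anatomy))

print(first_complexity("inflamed_gallbladder"))                   # -> 3
print(second_complexity("hook_cautery", "inflamed_gallbladder"))  # -> 4
```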

82. The method of claim 81, wherein the step of determining the first surgical complexity further comprises: identifying a medical tool in the first set of frames.

83. The method of claim 81, wherein determining the second surgical complexity level is based on an elapsed time from the first set of frames to the second set of frames.

84. The method of claim 81, wherein at least one of determining the first complexity level or determining the second complexity level is based on a physiological response.

85. The method of claim 81, wherein the method further comprises: determining a skill level exhibited by a healthcare provider in the surgical clip, and wherein at least one of determining the first complexity level or determining the second complexity level is based on the determined skill level exhibited by the healthcare provider.

86. The method of claim 81, further comprising: determining that the first surgical complexity is less than a selected threshold, determining that the second surgical complexity exceeds the selected threshold, and in response to determining that the first surgical complexity is less than the selected threshold and determining that the second surgical complexity exceeds the selected threshold, storing the second set of frames in a data structure while skipping the first set of frames from the data structure.

87. The method of claim 81, wherein identifying the anatomical structure in the first set of frames is based on an identification of a medical tool and a first interaction between the medical tool and the anatomical structure.

88. The method of claim 81, further comprising:

tagging the first set of frames with the first surgical complexity level;

tagging the second set of frames with the second surgical complexity level; and

generating a data structure that includes the first set of frames with the first tag and the second set of frames with the second tag, to enable a surgeon to select the second surgical complexity level and thereby cause the second set of frames to be displayed while skipping display of the first set of frames.

89. The method of claim 81, further comprising: determining at least one of the first surgical complexity level or the second surgical complexity level using a machine learning model trained to identify surgical complexity levels using frame data captured from a prior surgical procedure.

90. The method of claim 81, wherein determining the second surgical complexity level is based on events occurring between the first set of frames and the second set of frames.

91. The method of claim 81, wherein determining at least one of the first surgical complexity or the second surgical complexity is based on a condition of the anatomical structure.

92. The method of claim 81, wherein determining at least one of the first surgical complexity level or the second surgical complexity level is based on an analysis of an electronic medical record.

93. The method of claim 81, wherein determining the first surgical complexity level is based on events occurring after the first set of frames.

94. The method of claim 81, wherein determining at least one of the first surgical complexity level or the second surgical complexity level is based on a skill level of a surgeon associated with the surgical clip.

95. The method of claim 81, wherein determining the second surgical complexity is based on an indication that an additional surgeon is summoned after the first set of frames.

96. The method of claim 81, wherein determining the second surgical complexity level is based on an indication that a particular medication was administered after the first set of frames.

97. The method of claim 81, wherein the first historical data comprises a machine learning model trained using the first frame data captured from the first set of prior surgical procedures.

98. The method of claim 81, wherein the first historical data comprises an indication of a statistical relationship between a particular anatomical structure and a particular surgical complexity level.

99. A system for analyzing the complexity of a surgical clip, the system comprising:

at least one processor configured to:

analyze frames of the surgical clip to identify anatomical structures in a first set of frames;

access first historical data, the first historical data based on an analysis of first frame data captured from a first set of prior surgical procedures;

analyze the first set of frames using the first historical data and using the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames;

analyze the frames of the surgical clip to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames;

access second historical data, the second historical data based on an analysis of second frame data captured from a second set of prior surgical procedures; and

analyze the second set of frames using the second historical data and using the identified interactions to determine a second surgical complexity level associated with the second set of frames.

100. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable surgical video review, the operations comprising:

analyzing frames of a surgical clip to identify anatomical structures in a first set of frames;

accessing first historical data, the first historical data based on an analysis of first frame data captured from a first set of prior surgical procedures;

analyzing the first set of frames using the first historical data and using the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames;

analyzing the frames of the surgical clip to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames;

accessing second historical data, the second historical data based on an analysis of second frame data captured from a second set of prior surgical procedures; and

analyzing the second set of frames using the second historical data and using the identified interactions to determine a second surgical complexity level associated with the second set of frames.

101. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform a method for enabling an adjustment of an operating room schedule, the method comprising the steps of:

receiving visual data tracking an ongoing surgical procedure from an image sensor located in a surgical operating room;

accessing a data structure containing information based on historical surgical data;

analyzing the visual data of the ongoing surgical procedure using the data structure to determine an estimated time to completion of the ongoing surgical procedure;

accessing a schedule of the surgical operating room, the schedule including scheduled times associated with completion of the ongoing surgical procedure;

calculating, based on the estimated completion time of the ongoing surgical procedure, whether an expected completion time is likely to result in a discrepancy relative to a scheduled time associated with completion; and

outputting a notification regarding the calculated discrepancy, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly.
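
Claims 101 and 106 can be sketched as a comparison between an estimated completion time and the scheduled completion time, with a notification emitted only when the expected delay exceeds a threshold. The threshold value and the message are assumptions.

```python
# Sketch of claims 101/106: notify only when the expected delay relative to the
# scheduled completion time exceeds a threshold. Threshold and message are assumptions.
from datetime import datetime, timedelta
from typing import Optional

def maybe_notify(estimated_completion: datetime, scheduled_completion: datetime,
                 threshold: timedelta = timedelta(minutes=15)) -> Optional[str]:
    delay = estimated_completion - scheduled_completion
    if delay >= threshold:
        minutes_late = int(delay.total_seconds() // 60)
        return f"Operating room expected to run {minutes_late} minutes late."
    return None  # forgo outputting the notification

scheduled = datetime(2021, 12, 3, 11, 0)
estimated = datetime(2021, 12, 3, 11, 40)
print(maybe_notify(estimated, scheduled))  # -> "Operating room expected to run 40 minutes late."
```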

102. The non-transitory computer-readable medium of claim 101, wherein the notification includes an updated operating room schedule.

103. The non-transitory computer-readable medium of claim 102, wherein the updated operating room schedule enables a queued healthcare professional to prepare for a subsequent surgical procedure.

104. The non-transitory computer-readable medium of claim 101, wherein the method further comprises: electronically sending the notification to a device associated with a subsequently scheduled user of the surgical operating room.

105. The non-transitory computer-readable medium of claim 101, wherein the method further comprises:

determining a degree of the discrepancy relative to the scheduled time associated with completion;

outputting the notification in response to a first determined degree; and

in response to a second determined degree, forgoing outputting the notification.

106. The non-transitory computer-readable medium of claim 101, wherein the method further comprises:

determining whether the expected completion time is likely to result in a delay of at least a selected threshold amount of time relative to a scheduled time associated with completion;

outputting the notification in response to determining that the expected completion time is likely to result in a delay of at least the selected threshold amount of time; and

responsive to determining that the expected completion time is unlikely to result in a delay of at least the selected threshold amount of time, forgoing outputting the notification.

107. The non-transitory computer-readable medium of claim 101, wherein determining the estimated completion time is based on one or more stored characteristics associated with a healthcare professional performing the ongoing surgical procedure.

108. The non-transitory computer-readable medium of claim 101, wherein the method further comprises: updating a historical average completion time based on the determined actual time to complete the ongoing surgical procedure.

109. The non-transitory computer readable medium of claim 101, wherein the image sensor is positioned over a patient.

110. The non-transitory computer readable medium of claim 101, wherein the image sensor is positioned on a surgical tool.

111. The non-transitory computer readable medium of claim 101, wherein the analyzing step further comprises: detecting a characteristic event in the received visual data, accessing the information based on historical surgical data to determine an expected completion time of the surgical procedure after occurrence of the characteristic event in the historical surgical data, and determining the estimated completion time based on the determined expected completion time.

112. The non-transitory computer-readable medium of claim 111, wherein the method further comprises: training a machine learning model using historical visual data to detect the characteristic event.

113. The non-transitory computer-readable medium of claim 101, wherein the method further comprises: training a machine learning model using historical visual data to estimate a completion time, and wherein calculating the estimated completion time comprises: implementing the trained machine learning model.

114. The non-transitory computer-readable medium of claim 101, wherein the method further comprises: determining the estimated completion time using an average historical completion time.

115. The non-transitory computer-readable medium of claim 101, wherein the method further comprises: detecting a medical tool in the visual data, and wherein calculating the estimated completion time is based on the detected medical tool.

116. The non-transitory computer readable medium of claim 101, wherein the analyzing step further comprises: detecting an anatomical structure in the visual data, and wherein calculating the estimated completion time is based on the detected anatomical structure.

117. The non-transitory computer readable medium of claim 101, wherein the analyzing step further comprises: detecting an interaction between an anatomical structure and a medical tool in the visual data, and wherein calculating the estimated completion time is based on the detected interaction.

118. The non-transitory computer readable medium of claim 101, wherein the analyzing step further comprises: determining a skill level of a surgeon in the visual data, and wherein calculating the estimated completion time is based on the determined skill level.

119. A system for enabling adjustment of an operating room schedule, the system comprising:

at least one processor configured to:

receive visual data tracking an ongoing surgical procedure from an image sensor located in a surgical operating room;

access a data structure containing information based on historical surgical data;

analyze the visual data of the ongoing surgical procedure using the data structure to determine an estimated time to completion of the ongoing surgical procedure;

access a schedule of the surgical operating room, the schedule including scheduled times associated with completion of the ongoing surgical procedure;

calculate, based on the estimated completion time of the ongoing surgical procedure, whether an expected completion time is likely to result in a difference relative to a scheduled time associated with completion; and

output a notification regarding the calculation of the difference, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly.

120. A computer-implemented method of enabling adjustment of an operating room schedule, the method comprising:

receiving visual data tracking an ongoing surgical procedure from an image sensor located in a surgical operating room;

accessing a data structure containing information based on historical surgical data;

analyzing the visual data of the ongoing surgical procedure using the data structure to determine an estimated time to completion of the ongoing surgical procedure;

accessing a schedule of the surgical operating room, the schedule including scheduled times associated with completion of the ongoing surgical procedure;

calculating, based on the estimated completion time of the ongoing surgical procedure, whether an expected completion time is likely to result in a difference relative to a scheduled time associated with completion; and

outputting a notification regarding the calculation of the difference, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly.

121. A computer-implemented method of analyzing a surgical image to determine insurance claims, the method comprising:

accessing video frames taken during a surgical procedure for a patient;

analyzing the video frames taken during the surgical procedure to identify at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames;

accessing a database of claim criteria associated with the medical instrument, the anatomical structure, and the interaction between the medical instrument and the anatomical structure;

comparing the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure to information in the claim criteria database to determine at least one claim criterion associated with the surgical procedure; and

outputting the at least one claim criterion for use in obtaining an insurance claim for the surgical procedure.
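
The comparison step of claim 121 can be pictured as a lookup that maps detected (instrument, anatomical structure, interaction) triples to claim criteria. The table below contains invented placeholder entries, not real reimbursement codes.

```python
# Hedged sketch of claim 121: map detected instrument/anatomy/interaction triples to
# claim criteria via a lookup table. Table entries are invented placeholders.
CLAIM_CRITERIA_DB = {
    ("stapler", "stomach", "resection"): "criterion_gastric_resection",
    ("clip_applier", "cystic_duct", "clipping"): "criterion_cholecystectomy_step",
}

def determine_claim_criteria(detections):
    criteria = []
    for instrument, anatomy, interaction in detections:
        criterion = CLAIM_CRITERIA_DB.get((instrument, anatomy, interaction))
        if criterion:
            criteria.append(criterion)
    return criteria

detected = [("clip_applier", "cystic_duct", "clipping"),
            ("grasper", "gallbladder", "retraction")]
print(determine_claim_criteria(detected))  # -> ['criterion_cholecystectomy_step']
```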

122. The method of claim 121, wherein the output at least one claim criterion comprises a plurality of output claim criteria.

123. The method of claim 122, wherein at least two of the plurality of output claim criteria are based on different interactions with a common anatomical structure.

124. The method of claim 123, wherein the at least two output claim criteria are determined based in part on detection of two different medical instruments.

125. The method of claim 121, wherein determining the at least one claim criterion is further based on analysis of post-operative surgical reports.

126. The method of claim 121, wherein the video frames are taken from an image sensor positioned above the patient.

127. The method of claim 121, wherein the video frames are taken from an image sensor associated with a medical device.

128. The method of claim 121, further comprising: updating the database by associating the at least one claim criterion with the surgical procedure.

129. The method of claim 121, further comprising: generating a correlation between the processed claim criteria and at least one of a plurality of medical instruments in a historical video snippet, a plurality of anatomical structures in the historical video snippet, or a plurality of interactions between medical instruments and anatomical structures in the historical video snippet; and updating the database based on the generated correlations.

130. The method of claim 129, wherein generating the correlation comprises: implementing a statistical model.

131. The method of claim 129, further comprising: detecting at least one of a plurality of medical instruments, a plurality of anatomical structures, or a plurality of interactions between medical instruments and anatomical structures in the historical video snippets using a machine learning model.

132. The method of claim 121, further comprising: analyzing video frames taken during the surgical procedure to determine a condition of the patient's anatomy; and determining the at least one claim criterion associated with the surgical procedure based on the determined condition of the anatomical structure.

133. The method of claim 121, further comprising: analyzing video frames taken during the surgical procedure to determine a change in condition of the patient's anatomy during the surgical procedure; and determining the at least one claim criterion associated with the surgical procedure based on the determined change in condition of the anatomical structure.

134. The method of claim 121, further comprising: analyzing video frames taken during the surgical procedure to determine usage of a particular medical device; and determining the at least one claim criterion associated with the surgical procedure based on the determined usage of the particular medical device.

135. The method of claim 134, further comprising: analyzing video frames taken during the surgical procedure to determine a type of use of the particular medical device; determining at least a first claim criterion associated with the surgical procedure in response to a first determined type of use; and determining at least a second claim criterion associated with the surgical procedure in response to a second determined type of use, the at least first claim criterion being different from the at least second claim criterion.

136. The method of claim 121, further comprising: receiving processed claim criteria associated with the surgical procedure, and updating the database based on the processed claim criteria.

137. The method of claim 136, wherein the processed claim criteria are different from corresponding claim criteria of the at least one claim criteria.

138. The method of claim 121, further comprising: analyzing video frames taken during the surgical procedure to determine an amount of a particular type of medical supply used in the surgical procedure; and determining the at least one claim criterion associated with the surgical procedure based on the determined amount.

139. A surgical image analysis system to determine insurance claims, the system comprising:

at least one processor configured to:

access video frames taken during a surgical procedure for a patient;

analyze the video frames taken during the surgical procedure to identify at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames;

access a database of claim criteria associated with the medical instrument, the anatomical structure, and the interaction between the medical instrument and the anatomical structure;

compare the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure to information in the claim criteria database to determine at least one claim criterion associated with the surgical procedure; and

output the at least one claim criterion for use in obtaining an insurance claim for the surgical procedure.

140. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable determination of insurance claims, the operations comprising:

accessing video frames taken during a surgical procedure for a patient;

analyzing video frames taken during the surgical procedure to identify at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames;

accessing a database of claim criteria associated with the medical instrument, the anatomical structure, and the interaction between the medical instrument and the anatomical structure;

comparing the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure to information in the claim criteria database to determine at least one claim criterion associated with the surgical procedure; and

outputting the at least one claim criterion for use in obtaining an insurance claim for the surgical procedure.
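By way of illustration only, the following minimal Python sketch shows the kind of comparison recited in the claims above: detected instrument-anatomy interactions are looked up in a claim-criteria database to determine reimbursement codes. The instrument names, anatomy names, and codes are hypothetical, and a practical system would derive the interactions from the video analysis described above rather than from hard-coded values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    """A detected interaction between a medical instrument and an anatomical structure."""
    instrument: str
    anatomy: str
    action: str

# Hypothetical claim-criteria database: interaction -> reimbursement code.
CLAIM_CRITERIA_DB = {
    Interaction("grasper", "gallbladder", "retract"): "CODE-1001",
    Interaction("clip_applier", "cystic_duct", "clip"): "CODE-1002",
    Interaction("electrocautery", "cystic_artery", "cauterize"): "CODE-1003",
}

def determine_claim_criteria(detected, database):
    """Compare detected interactions against the claim-criteria database
    and return the matching reimbursement codes."""
    return sorted({database[i] for i in detected if i in database})

if __name__ == "__main__":
    detected_interactions = [
        Interaction("grasper", "gallbladder", "retract"),
        Interaction("clip_applier", "cystic_duct", "clip"),
    ]
    print(determine_claim_criteria(detected_interactions, CLAIM_CRITERIA_DB))
    # ['CODE-1001', 'CODE-1002']
```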

141. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable automatic completion of a post-operative report of a surgical procedure, the operations comprising:

receiving input of an identifier of a patient;

receiving an input of an identifier of a healthcare provider;

receiving input of surgical footage of a surgical procedure performed by the healthcare provider on the patient;

analyzing a plurality of frames of the surgical footage to derive image-based information for populating a post-operative report of the surgical procedure; and

causing the derived image-based information to populate the post-operative report of the surgical procedure.

142. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: analyzing the surgical footage to identify one or more stages of the surgical procedure and to identify characteristics of at least one of the identified stages; and wherein the derived image-based information is based on the identified at least one stage and the identified characteristics of the at least one stage.

143. The non-transitory computer-readable medium of claim 142, wherein the operations further comprise: analyzing the surgical footage to associate a name with the at least one stage; and wherein the derived image-based information comprises the name associated with the at least one stage.

144. The non-transitory computer-readable medium of claim 142, wherein the operations further comprise: determining at least a start of the at least one phase; and wherein the derived image-based information is based on the determined start.

145. The non-transitory computer-readable medium of claim 142, wherein the operations further comprise: associating a time stamp with the at least one stage; and wherein the derived image-based information comprises the time stamp associated with the at least one stage.

146. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: transmitting data to the healthcare provider, the transmitted data including the patient identifier and the derived image-based information.

147. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: analyzing the surgical footage to identify at least one recommendation for post-operative treatment; and providing the identified at least one recommendation.

148. The non-transitory computer-readable medium of claim 141, wherein the resulting population of the post-operative report of the surgical procedure is configured to enable a healthcare provider to alter at least a portion of the derived image-based information in the post-operative report.

149. The non-transitory computer-readable medium of claim 141, wherein the resulting population of the post-operative report of the surgical procedure is configured to cause at least a portion of the derived image-based information to be identified in the post-operative report as automatically generated data.

150. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: analyzing the surgical footage to identify a surgical event within the surgical footage and to identify a characteristic of the identified surgical event; and wherein the derived image-based information is based on the identified surgical event and the identified characteristic.

151. The non-transitory computer-readable medium of claim 150, wherein the operations further comprise: analyzing the surgical footage to determine an event name for the identified surgical event; and wherein the derived image-based information includes the determined event name.

152. The non-transitory computer-readable medium of claim 150, wherein the operations further comprise: associating a time stamp with the identified surgical event; and wherein the derived image-based information comprises the time stamp.

153. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: providing the derived image-based information in a form that enables updating of an electronic medical record.

154. The non-transitory computer-readable medium of claim 141, wherein the derived image-based information is based in part on user input.

155. The non-transitory computer-readable medium of claim 141, wherein the derived image-based information includes a first portion associated with a first portion of the surgical procedure and a second portion associated with a second portion of the surgical procedure, and wherein the operations further comprise:

receiving a preliminary post-operative report;

analyzing the preliminary post-operative report to select a first location and a second location within the preliminary post-operative report, the first location associated with the first portion of the surgical procedure and the second location associated with the second portion of the surgical procedure; and

inserting the first portion of the derived image-based information into the selected first location and inserting the second portion of the derived image-based information into the selected second location.

156. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise: analyzing the surgical footage to select at least a portion of at least one frame of the surgical footage; and causing the selected at least a portion of the at least one frame of the surgical footage to be included in the post-operative report of the surgical procedure.

157. The non-transitory computer-readable medium of claim 156, wherein the operations further comprise:

receiving a preliminary post-operative report; and

analyzing the preliminary post-operative report and the surgical footage to select the at least a portion of the at least one frame of the surgical footage.

158. The non-transitory computer-readable medium of claim 141, wherein the operations further comprise:

receiving a preliminary post-operative report;

analyzing the preliminary post-operative report and the surgical footage to identify at least one inconsistency between the preliminary post-operative report and the surgical footage; and

providing an indication of the identified at least one inconsistency.

159. A computer-implemented method of populating a postoperative report of a surgical procedure, the method comprising:

receiving input of an identifier of a patient;

receiving an input of an identifier of a healthcare provider;

receiving input of surgical footage of a surgical procedure performed by the healthcare provider on the patient;

analyzing a plurality of frames of the surgical footage to identify stages of the surgical procedure based on detected interactions between a medical instrument and biological structures, and associating a name with each identified stage based on the interactions;

determining at least the start of each identified stage;

associating a time stamp with the start of each identified stage;

transmitting data to the healthcare provider, the transmitted data including the patient identifier, the name of the identified stage of the surgical procedure, and a time stamp associated with the identified stage; and

populating a post-operative report with the transmitted data in a manner that enables the healthcare provider to change stage names in the post-operative report.

160. A system for automatically populating a post-operative report of a surgical procedure, the system comprising:

at least one processor configured to:

receive input of an identifier of a patient;

receive input of an identifier of a healthcare provider;

receive input of surgical footage of a surgical procedure performed by the healthcare provider on the patient;

analyze a plurality of frames of the surgical footage to identify stages of the surgical procedure based on detected interactions between a medical instrument and biological structures, and associate a name with each identified stage based on the interactions;

determine at least the start of each identified stage;

associate a time stamp with the start of each identified stage;

transmit data to the healthcare provider, the transmitted data including the patient identifier, the name of the identified stage of the surgical procedure, and the time stamp of the identified stage; and

populate a post-operative report with the transmitted data in a manner that enables the healthcare provider to change stage names in the post-operative report.
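As a non-limiting illustration of populating an editable post-operative report from image-derived stage information, the following Python sketch uses hypothetical stage names and timestamps; the dictionary stands in for whatever report or electronic-medical-record format an actual implementation would use.

```python
from dataclasses import dataclass

@dataclass
class DetectedStage:
    """A surgical stage identified from the footage, with the time (in seconds
    from the start of the footage) at which it was determined to begin."""
    name: str
    start_seconds: int

def populate_postoperative_report(patient_id, provider_id, stages):
    """Build an editable post-operative report from image-derived stage data.
    Automatically generated fields are flagged so a reviewer can distinguish
    (and change) them, as contemplated by the claims above."""
    return {
        "patient_id": patient_id,
        "provider_id": provider_id,
        "stages": [
            {
                "name": s.name,                      # editable by the healthcare provider
                "start_timestamp_seconds": s.start_seconds,
                "auto_generated": True,              # marked as automatically generated data
            }
            for s in stages
        ],
    }

if __name__ == "__main__":
    stages = [DetectedStage("access", 0), DetectedStage("dissection", 540)]
    report = populate_postoperative_report("patient-123", "provider-456", stages)
    report["stages"][0]["name"] = "port placement"   # provider edits a stage name
    print(report)
```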

161. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable determination and notification of skipped events in a surgical procedure, the operations comprising:

accessing video frames taken during a particular surgical procedure;

accessing stored data identifying a proposed sequence of events for the surgical procedure;

comparing the accessed frames to the proposed sequence of events to identify an indication of a deviation between the particular surgical procedure and the proposed sequence of events for the surgical procedure;

determining a name of an intraoperative surgical event associated with the deviation; and

providing a notification of the deviation, the notification including the name of the intraoperative surgical event associated with the deviation.

162. The non-transitory computer-readable medium of claim 161, wherein identifying the indication of the deviation and providing the notification occur in real-time during the surgical procedure.

163. The non-transitory computer-readable medium of claim 161, wherein the operations further comprise:

receiving an indication that a particular action is to occur in the particular surgical procedure;

identifying, using the proposed sequence of events, a preliminary action that precedes the particular action;

determining, based on an analysis of the accessed frames, that the identified preliminary action has not occurred; and

in response to determining that the identified preliminary action has not occurred, identifying the indication of the deviation.

164. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a cholecystectomy.

165. The non-transitory computer-readable medium of claim 161, wherein the proposed sequence of events is based on a critical view of safety.

166. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is an appendectomy.

167. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a hernia repair procedure.

168. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a hysterectomy.

169. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a radical prostatectomy.

170. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a partial nephrectomy and the deviation includes disregarding the identified hilum.

171. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a thyroidectomy and the deviation includes disregarding identified recurrent laryngeal nerves.

172. The non-transitory computer-readable medium of claim 161, wherein the operations further comprise: identifying a set of frames associated with the deviation, and wherein the step of providing the notification comprises: displaying the identified set of frames associated with the deviation.

173. The non-transitory computer-readable medium of claim 163, wherein the indication that the particular action is to occur is based on input from a surgeon performing the particular surgical procedure.

174. The non-transitory computer readable medium of claim 163, wherein the indication that the particular action is about to occur is entry of a particular medical instrument into a selected region of interest.

175. The non-transitory computer readable medium of claim 161, wherein identifying the deviation includes: determining that the surgical tool is in a particular anatomical region.

176. The non-transitory computer readable medium of claim 161, wherein the particular surgical procedure is a segmental colectomy.

177. The non-transitory computer readable medium of claim 176, wherein the deviation includes disregarding performance of an anastomosis.

178. The non-transitory computer-readable medium of claim 161, wherein identifying the indication of the deviation is based on an elapsed time associated with an intraoperative surgical procedure.

179. A computer-implemented method of enabling determination and notification of a skipped event in a surgical procedure, the method comprising:

accessing video frames taken during a particular surgical procedure;

accessing stored data identifying a proposed sequence of events for the surgical procedure;

comparing the accessed frames to the proposed sequence of events to identify a deviation between the particular surgical procedure and the proposed sequence of events for the surgical procedure;

determining a name of an intraoperative surgical event associated with the deviation; and

providing a notification of the deviation, the notification including the name of the intraoperative surgical event associated with the deviation.

180. A system for enabling determination and notification of a skipped event in a surgical procedure, the system comprising:

at least one processor configured to:

access video frames taken during a particular surgical procedure;

access stored data identifying a proposed sequence of events for the surgical procedure;

compare the accessed frames to the proposed sequence of events to identify a deviation between the particular surgical procedure and the proposed sequence of events for the surgical procedure;

determine a name of an intraoperative surgical event associated with the deviation; and

provide a notification of the deviation, the notification including the name of the intraoperative surgical event associated with the deviation.
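The following Python sketch illustrates, under simplifying assumptions, one way a deviation from a proposed sequence of events might be identified and reported by name: an event is treated as skipped when a later event in the proposed sequence has already been observed. The event names are hypothetical.

```python
def find_skipped_events(proposed_sequence, detected_events):
    """Compare the events detected in the footage against a proposed sequence of
    events and return the names of events that appear to have been skipped."""
    detected = set(detected_events)
    skipped = []
    for name in proposed_sequence:
        if name in detected:
            continue
        # Only report an event as skipped if a later event in the
        # proposed sequence has already been observed.
        later = proposed_sequence[proposed_sequence.index(name) + 1:]
        if any(event in detected for event in later):
            skipped.append(name)
    return skipped

def notify(skipped):
    for name in skipped:
        print(f"Deviation detected: expected event '{name}' was not observed.")

if __name__ == "__main__":
    proposed = ["port placement", "critical view of safety", "clipping", "division"]
    observed = ["port placement", "clipping"]
    notify(find_skipped_events(proposed, observed))
    # Deviation detected: expected event 'critical view of safety' was not observed.
```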

181. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations of providing decision support for a surgical procedure, the operations comprising:

receiving a video clip of a surgical procedure performed on a patient by a surgeon in an operating room;

accessing at least one data structure comprising image-related data characterizing a surgical procedure;

analyzing the received video clips using the image-related data to determine the presence of a surgical decision-making node;

accessing, in the at least one data structure, a correlation between outcomes and specific actions taken at the decision-making node; and

outputting, to a user, a recommendation to take the particular action based on the determined presence of the decision-making node and the accessed correlation.

182. The non-transitory computer-readable medium of claim 181, wherein the instructions are configured to cause the at least one processor to perform the operations in real-time during the surgical procedure, and wherein the user is a surgeon.

183. The non-transitory computer readable medium of claim 181, wherein the decision-making node is determined by analysis of a plurality of different historical procedures in which different courses of action occur after a common surgical situation.

184. The non-transitory computer-readable medium of claim 181, wherein the video clip comprises images from at least one of an endoscope and an intra-body camera.

185. The non-transitory computer-readable medium of claim 181, wherein the recommendation comprises a recommendation to conduct a medical test.

186. The non-transitory computer-readable medium of claim 185, wherein the operations further comprise:

receiving results of the medical test; and

outputting, to the user, a second recommendation to take a particular action based on the determined presence of the decision-making node, the accessed correlation, and the received results of the medical test.

187. The non-transitory computer-readable medium of claim 181, wherein the particular action includes bringing an additional surgeon to the operating room.

188. The non-transitory computer readable medium of claim 181, wherein the decision-making node comprises improper access or exposure, retraction of an anatomical structure, a false determination of an anatomical structure, or a fluid leak.

189. The non-transitory computer-readable medium of claim 181, wherein the recommendation includes a level of confidence that a desired surgical outcome will occur if the particular action is taken.

190. The non-transitory computer-readable medium of claim 181, wherein the recommendation includes a level of confidence that a desired result will not occur if the particular action is not taken.

191. The non-transitory computer-readable medium of claim 181, wherein the recommendation is based on an elapsed time from a particular point in the surgical procedure.

192. The non-transitory computer-readable medium of claim 181, wherein the recommendation includes an indication that an undesirable surgical result is likely to occur if the particular action is not taken.

193. The non-transitory computer-readable medium of claim 181, wherein the recommendation is based on a skill level of the surgeon.

194. The non-transitory computer-readable medium of claim 181, wherein the recommendation is based on a surgical event occurring in the surgical procedure prior to the decision-making node.

195. The non-transitory computer-readable medium of claim 181, wherein the particular action includes a plurality of steps.

196. The non-transitory computer-readable medium of claim 181, wherein the determination of the presence of the surgical decision-making node is based on at least one of a detected physiological response of an anatomical structure and a motion associated with a surgical tool.

197. The non-transitory computer-readable medium of claim 181, wherein the operations further comprise: receiving a vital sign of the patient, and wherein the recommendation is based on the accessed correlations and the vital sign.

198. The non-transitory computer readable medium of claim 181, wherein the surgeon is a surgical robot and the recommendation is provided in the form of instructions for the surgical robot.

199. The non-transitory computer-readable medium of claim 181, wherein the recommendation is based on a condition of a tissue of the patient.

200. The non-transitory computer-readable medium of claim 181, wherein the recommended particular action includes creating a stoma.

201. A computer-implemented method of providing decision support for a surgical procedure, the method comprising:

receiving a video clip of a surgical procedure performed on a patient by a surgeon in an operating room;

accessing at least one data structure comprising image-related data characterizing a surgical procedure;

analyzing the received video clips using the image-related data to determine the presence of a surgical decision-making node;

accessing, in the at least one data structure, a correlation between outcomes and specific actions taken at the decision-making node; and

outputting a recommendation to the surgeon to take the particular action or avoid the particular action based on the determined presence of the decision-making node and the accessed correlation.

202. A system for providing decision support for a surgical procedure, the system comprising:

at least one processor configured to:

receive a video clip of a surgical procedure performed on a patient by a surgeon in an operating room;

access at least one data structure comprising image-related data characterizing a surgical procedure;

analyze the received video clips using the image-related data to determine the presence of a surgical decision-making node;

access, in the at least one data structure, a correlation between outcomes and specific actions taken at the decision-making node; and

output a recommendation to the surgeon to take the particular action or avoid the particular action based on the determined presence of the decision-making node and the accessed correlation.
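For illustration, the sketch below assumes a small, hypothetical table correlating actions taken at a decision-making node with the historical likelihood of a desired outcome, and outputs the action with the highest likelihood together with a confidence level, in the spirit of the claims above. The node names, actions, and probabilities are invented for the example.

```python
# Hypothetical correlations between actions taken at a decision-making node and
# the fraction of historical procedures in which a desired outcome followed.
OUTCOME_CORRELATIONS = {
    "unclear_anatomy": {
        "convert_to_open": 0.92,
        "perform_cholangiography": 0.88,
        "continue_dissection": 0.61,
    },
    "significant_bleeding": {
        "apply_pressure_and_call_senior_surgeon": 0.90,
        "continue_current_plan": 0.55,
    },
}

def recommend_action(decision_node, correlations, minimum_confidence=0.8):
    """Given a detected decision-making node, return the action with the highest
    historical likelihood of a desired outcome, together with that likelihood,
    or None if no action clears the confidence floor."""
    actions = correlations.get(decision_node, {})
    if not actions:
        return None
    best_action, confidence = max(actions.items(), key=lambda item: item[1])
    return (best_action, confidence) if confidence >= minimum_confidence else None

if __name__ == "__main__":
    suggestion = recommend_action("unclear_anatomy", OUTCOME_CORRELATIONS)
    if suggestion:
        action, confidence = suggestion
        print(f"Recommended action: {action} (confidence of desired outcome: {confidence:.0%})")
```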

203. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations capable of estimating a contact force on an anatomical structure during a surgical procedure, the operations comprising:

receiving image data of a surgical procedure from at least one image sensor in an operating room;

analyzing the received image data to determine an identity of an anatomical structure and to determine a condition of the anatomical structure reflected in the image data;

selecting a contact force threshold associated with the anatomical structure, the selected contact force threshold being based on the determined condition of the anatomical structure;

receiving an indication of an actual contact force on the anatomical structure;

comparing the indication of actual contact force to the selected contact force threshold; and

outputting a notification based on determining that the indication of actual contact force exceeds the selected contact force threshold.

204. The non-transitory computer-readable medium of claim 203, wherein the contact force threshold is associated with a tension level.

205. The non-transitory computer-readable medium of claim 203, wherein the contact force threshold is associated with a level of compression.

206. The non-transitory computer readable medium of claim 203, wherein the actual contact force is associated with contact between a medical instrument and the anatomical structure.

207. The non-transitory computer-readable medium of claim 203, wherein the indication of actual contact force is estimated based on an image analysis of the image data.

208. The non-transitory computer-readable medium of claim 203, wherein outputting the notification comprises: providing real-time warnings to a surgeon performing the surgical procedure.

209. The non-transitory computer readable medium of claim 203, wherein the notification is an instruction to a surgical robot.

210. The non-transitory computer-readable medium of claim 203, wherein the operations further comprise: determining from the image data that the surgical procedure is in a combat mode, and wherein the notification is suspended during the combat mode.

211. The non-transitory computer-readable medium of claim 203, wherein the operations further comprise: determining from the image data that the surgeon is operating in a mode ignoring contact force notifications, and suspending, at least temporarily, further contact force notifications based on determining that the surgeon is operating in a mode ignoring contact force notifications.

212. The non-transitory computer-readable medium of claim 203, wherein selecting the contact force threshold is based on a contact location between the anatomical structure and a medical instrument.

213. The non-transitory computer-readable medium of claim 203, wherein selecting the contact force threshold is based on an angle of contact between the anatomical structure and a medical instrument.

214. The non-transitory computer-readable medium of claim 203, wherein selecting the contact force threshold comprises: providing a condition of the anatomical structure as an input to a regression model, and selecting the contact force threshold based on an output of the regression model.

215. The non-transitory computer-readable medium of claim 203, wherein selecting the contact force threshold is based on a table of anatomical structures that includes corresponding contact force thresholds.

216. The non-transitory computer-readable medium of claim 203, wherein selecting the contact force threshold is based on an action performed by a surgeon.

217. The non-transitory computer-readable medium of claim 203, wherein the indication of actual contact force is received from a surgical tool.

218. The non-transitory computer-readable medium of claim 203, wherein the indication of actual contact force is received from a surgical robot.

219. The non-transitory computer-readable medium of claim 203, wherein the operations further comprise: determining a condition of the anatomical structure in the image data using a machine learning model trained using training examples.

220. The non-transitory computer-readable medium of claim 203, wherein the operations further comprise: selecting the contact force threshold using a machine learning model trained with training examples.

221. A computer-implemented method of estimating a contact force on an anatomical structure during a surgical procedure, the method comprising:

receiving image data of a surgical procedure from at least one image sensor in an operating room;

analyzing the received image data to determine an identity of an anatomical structure and to determine a condition of the anatomical structure reflected in the image data;

selecting a contact force threshold associated with the anatomical structure, the selected contact force threshold being based on the determined condition of the anatomical structure;

receiving an indication of an actual contact force on the anatomical structure;

comparing the indication of actual contact force to the selected contact force threshold; and

outputting a notification based on determining that the indication of actual contact force exceeds the selected contact force threshold.

222. A system for estimating contact force on an anatomical structure during a surgical procedure, the system comprising:

at least one processor configured to:

receive image data of a surgical procedure from at least one image sensor in an operating room;

analyze the received image data to determine an identity of an anatomical structure and to determine a condition of the anatomical structure reflected in the image data;

select a contact force threshold associated with the anatomical structure, the selected contact force threshold being based on the determined condition of the anatomical structure;

receive an indication of an actual contact force on the anatomical structure;

compare the indication of actual contact force to the selected contact force threshold; and

output a notification based on determining that the indication of actual contact force exceeds the selected contact force threshold.
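The following sketch illustrates selecting a contact force threshold from a hypothetical table keyed by anatomical structure and its determined condition, and comparing an indication of actual contact force against that threshold. The structures, conditions, and threshold values are illustrative only.

```python
# Hypothetical contact force thresholds (in newtons) keyed by anatomical
# structure and its determined condition; real values would come from the
# kind of threshold table or regression model referenced in the claims.
CONTACT_FORCE_THRESHOLDS = {
    ("liver", "healthy"): 4.0,
    ("liver", "cirrhotic"): 2.5,
    ("bowel", "inflamed"): 1.5,
}

def select_threshold(structure, condition, table, default=1.0):
    """Select a contact force threshold based on the identified structure
    and its determined condition, falling back to a conservative default."""
    return table.get((structure, condition), default)

def check_contact_force(actual_force, structure, condition, table):
    """Compare an indication of actual contact force with the selected
    threshold and return a notification string when it is exceeded."""
    threshold = select_threshold(structure, condition, table)
    if actual_force > threshold:
        return (f"Warning: contact force {actual_force:.1f} N on {structure} "
                f"({condition}) exceeds threshold {threshold:.1f} N")
    return None

if __name__ == "__main__":
    print(check_contact_force(3.1, "liver", "cirrhotic", CONTACT_FORCE_THRESHOLDS))
```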

223. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations capable of updating a prediction outcome during a surgical procedure, the operations comprising:

receiving, from at least one image sensor configured to capture an image of a surgical procedure, image data associated with a first event during the surgical procedure;

determining a predicted outcome associated with the surgical procedure based on the received image data associated with the first event;

receiving, from at least one image sensor configured to capture an image of a surgical procedure, image data associated with a second event during the surgical procedure;

determining, based on the received image data associated with the second event, a change in the predicted outcome that reduces the predicted outcome below a threshold;

accessing a data structure based on image-related data of a prior surgical procedure;

identifying a suggested remedial action based on the accessed image-related data; and

outputting the suggested remedial action.

224. The non-transitory computer-readable medium of claim 223, wherein the suggested remedial action includes suggesting that a surgeon leave the surgical procedure to rest.

225. The non-transitory computer-readable medium of claim 223, wherein the suggested remedial action includes a suggestion to request assistance from another surgeon.

226. The non-transitory computer-readable medium of claim 223, wherein the suggested remedial action includes a revision to the surgical procedure.

227. The non-transitory computer-readable medium of claim 223, wherein the prediction comprises a likelihood of readmission.

228. The non-transitory computer-readable medium of claim 223, wherein determining the change in prediction is based on a magnitude of bleeding.

229. The non-transitory computer-readable medium of claim 223, wherein identifying the remedial action is based on an indication that the remedial action is likely to elevate the predicted outcome above the threshold.

230. The non-transitory computer-readable medium of claim 223, wherein identifying the remedial action comprises: using a machine learning model trained on historical examples of remedial actions and surgical outcomes to identify the remedial action.

231. The non-transitory computer readable medium of claim 223, wherein determining the prediction comprises: using a machine learning model trained based on historical surgical videos and information indicative of surgical outcomes corresponding to the historical surgical videos to determine a predicted outcome.

232. The non-transitory computer-readable medium of claim 223, wherein determining the predicted outcome comprises: identifying interactions between a surgical tool and an anatomical structure, and determining the predicted outcome based on the identified interactions.

233. The non-transitory computer-readable medium of claim 223, wherein determining the prediction outcome is based on a skill level of a surgeon depicted in the image data.

234. The non-transitory computer-readable medium of claim 223, wherein the operations further comprise: determining a skill level of a surgeon depicted in the image data; and wherein determining the change in the predicted outcome is based on the skill level.

235. The non-transitory computer-readable medium of claim 223, wherein the operations further comprise: updating a scheduling record associated with an operating room associated with the surgical procedure in response to the predicted outcome decreasing below the threshold.

236. The non-transitory computer-readable medium of claim 223, wherein determining the change in prediction is based on a time elapsed between a particular point in the surgical procedure and the second event.

237. The non-transitory computer-readable medium of claim 223, wherein determining the prediction is based on a condition of an anatomical structure depicted in the image data.

238. The non-transitory computer-readable medium of claim 237, wherein the operations further comprise: determining a condition of the anatomical structure.

239. The non-transitory computer-readable medium of claim 223, wherein determining a change in the prediction is based on a change in color of at least a portion of the anatomical structure.

240. The non-transitory computer-readable medium of claim 223, wherein determining the change in prediction is based on a change in appearance of at least a portion of the anatomical structure.

241. A computer-implemented method of updating a predicted outcome during a surgical procedure, the method comprising:

receiving, from at least one image sensor configured to capture an image of a surgical procedure, image data associated with a first event during the surgical procedure;

determining a predicted outcome associated with the surgical procedure based on the received image data associated with the first event;

receiving image data associated with a second event during the surgical procedure from at least one image sensor configured to capture an image of the surgical procedure;

determining, based on the received image data associated with the second event, a change in the predicted outcome that reduces the predicted outcome below a threshold;

accessing a data structure based on image-related data of a prior surgical procedure;

identifying a suggested remedial action based on the data structure; and

outputting the suggested remedial action.

242. A system for updating a predicted outcome during a surgical procedure, the system comprising:

at least one processor configured to:

receive, from at least one image sensor configured to capture an image of a surgical procedure, image data associated with a first event during the surgical procedure;

determine a predicted outcome associated with the surgical procedure based on the received image data associated with the first event;

receive image data associated with a second event during the surgical procedure from at least one image sensor configured to capture an image of the surgical procedure;

determine, based on the received image data associated with the second event, a change in the predicted outcome that reduces the predicted outcome below a threshold;

access a data structure based on image-related data of a prior surgical procedure;

identify a suggested remedial action based on the data structure; and

output the suggested remedial action.
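As a simplified illustration of updating a predicted outcome and suggesting a remedial action when the prediction falls below a threshold, the sketch below uses hypothetical per-event adjustments and remedial-action scores in place of the image analysis and historical data structure described above.

```python
def update_predicted_outcome(current_prediction, event_adjustments, event):
    """Adjust a predicted outcome score (0..1) using a hypothetical lookup of
    per-event adjustments derived from analysis of the second event."""
    return max(0.0, min(1.0, current_prediction + event_adjustments.get(event, 0.0)))

def suggest_remedial_action(prediction, threshold, remedial_actions):
    """If the updated prediction falls below the threshold, return the remedial
    action whose (hypothetical) historical data suggests the largest improvement."""
    if prediction >= threshold:
        return None
    return max(remedial_actions, key=remedial_actions.get)

if __name__ == "__main__":
    adjustments = {"uneventful_dissection": +0.02, "major_bleeding": -0.30}
    actions = {"request assistance from another surgeon": 0.15,
               "modify the surgical plan": 0.10}

    prediction = 0.85                                   # after the first event
    prediction = update_predicted_outcome(prediction, adjustments, "major_bleeding")
    action = suggest_remedial_action(prediction, threshold=0.70, remedial_actions=actions)
    if action:
        print(f"Predicted outcome fell to {prediction:.2f}; suggested remedial action: {action}")
```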

243. A computer-implemented method of analyzing fluid leakage during a surgical procedure, the method comprising:

receiving intra-cavity video of a surgical operation in real time;

analyzing frames of the intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video; and

enacting a remedial action when the abnormal fluid leakage condition is determined.

244. The method of claim 243, wherein the fluid comprises at least one of blood, bile, or urine.

245. The method of claim 243, wherein the analyzing step comprises: analyzing frames of the intra-cavity video to identify blood splatter and at least one characteristic of the blood splatter, and wherein selection of the remedial action is dependent on the at least one characteristic of the identified blood splatter.

246. The method of claim 245, wherein the at least one characteristic is associated with a source of the blood splatter.

247. The method of claim 245, wherein the at least one characteristic is associated with an intensity of the blood splatter.

248. The method of claim 245, wherein the at least one characteristic is associated with a volume of the blood splatter.

249. The method of claim 243, wherein analyzing the frames of intra-cavity video comprises: determining a characteristic of the abnormal fluid leakage condition, and wherein the selection of the remedial action is dependent on the determined characteristic.

250. The method of claim 249, wherein the characteristic is associated with a volume of the fluid leak.

251. The method of claim 249, wherein the characteristic is associated with a color of the fluid leak.

252. The method of claim 249, wherein the characteristic is associated with a fluid type associated with the fluid leak.

253. The method of claim 249, wherein the characteristic is associated with a fluid leakage rate.

254. The method of claim 243, wherein the method further comprises: storing the intra-cavity video, and upon determining that the abnormal fluid leakage condition exists, analyzing previous frames of the stored intra-cavity video to determine a source of the leak.

255. The method of claim 243, wherein enacting the remedial action comprises: providing notification of the source of the leak.

256. The method of claim 255, wherein determining the source of the leak comprises: a ruptured anatomical organ is identified.

257. The method of claim 243, wherein the method further comprises: determining a flow rate associated with the fluid leakage condition, and wherein enacting the remedial action is based on the flow rate.

258. The method of claim 243, wherein the method further comprises: determining an amount of fluid loss associated with the fluid leakage condition, and wherein enacting the remedial action is based on the amount of fluid loss.

259. The method of claim 243, wherein analyzing frames of the intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video comprises: determining whether a determined fluid leakage condition is an abnormal fluid leakage condition, and wherein the method further comprises:

in response to determining that the determined fluid leakage condition is an abnormal fluid leakage condition, enacting the remedial action; and

in response to determining that the determined fluid leakage condition is a normal fluid leakage condition, forgoing enactment of the remedial action.

260. The method of claim 243, wherein the intra-cavity video depicts a surgical robot performing the surgical procedure, and the remedial action comprises sending an instruction to the robot.

261. A surgical system for analyzing fluid leaks, the system comprising:

at least one processor configured to:

receive intra-cavity video of a surgical operation in real time;

analyze frames of the intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video; and

enact a remedial action when the abnormal fluid leakage condition is determined.

262. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable fluid leak detection, the operations comprising:

receiving intra-cavity video of a surgical operation in real time;

analyzing frames of the intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video; and

enacting a remedial action when the abnormal fluid leakage condition is determined.
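The sketch below illustrates, with invented characteristics and thresholds, how a remedial action might be selected based on properties of a detected fluid leak (fluid type, flow rate, source) and forgone when the leak appears normal.

```python
from dataclasses import dataclass

@dataclass
class LeakObservation:
    """Characteristics of a fluid leak estimated from intra-cavity video frames."""
    fluid_type: str            # e.g. "blood", "bile", "urine"
    flow_rate_ml_per_min: float
    source: str                # e.g. an identified anatomical structure

def choose_remedial_action(leak, normal_rate_threshold=5.0):
    """Return a remedial action when the observed leak appears abnormal,
    selecting the action based on the leak's characteristics; return None
    for a leak judged to be normal."""
    if leak.flow_rate_ml_per_min <= normal_rate_threshold:
        return None                                   # normal leakage: forgo remedial action
    if leak.fluid_type == "blood" and leak.flow_rate_ml_per_min > 50.0:
        return f"Alert surgeon: significant bleeding from {leak.source}"
    return f"Notify surgeon: abnormal {leak.fluid_type} leakage from {leak.source}"

if __name__ == "__main__":
    observation = LeakObservation("blood", flow_rate_ml_per_min=80.0, source="cystic artery")
    print(choose_remedial_action(observation))
```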

263. A computer-implemented method of predicting risk after discharge, the method comprising:

accessing video frames taken during a particular surgical procedure for a patient;

accessing stored historical data identifying intraoperative events and associated results;

analyzing the accessed frames and identifying at least one specific intra-operative event in the accessed frames based on information obtained from the historical data;

determining a predicted outcome associated with the particular surgical procedure based on information obtained from the historical data and the identified at least one intraoperative event; and

outputting the prediction result in a manner that associates the prediction result with the patient.

264. The method of claim 263, wherein identifying the at least one particular intra-operative event is based on at least one of: a detected surgical tool in the accessed frames, a detected anatomical structure in the accessed frames, an interaction between a surgical tool and an anatomical structure in the accessed frames, or a detected abnormal fluid leak condition in the accessed frames.

265. The method of claim 263, wherein the at least one particular intra-operative event in the accessed frame is identified using a machine learning model, the machine learning model being trained using example training data.

266. The method of claim 263, wherein determining the prediction outcome is based on at least one of a characteristic of the patient, an electronic medical record, or a post-operative surgical report.

267. The method of claim 263, wherein the predicted outcome associated with the particular surgical procedure is determined based on intra-operative events using a machine learning model, the machine learning model being trained using training examples.

268. The method of claim 267, wherein determining the predicted outcome comprises: predicting a surgical outcome based on the identified intraoperative events and the identified features of the patient using a trained machine learning model.

269. The method of claim 267, wherein the method further comprises: receiving information identifying a surgical outcome achieved after the surgical procedure, and updating the machine learning model by training the machine learning model using the received information.

270. The method of claim 263, wherein the method further comprises: identifying a characteristic of the patient, and wherein the prediction result is determined based further on the identified patient characteristic.

271. The method of claim 270, wherein the patient characteristics are derived from an electronic medical record.

272. The method of claim 270, wherein the step of identifying the patient characteristic comprises: analyzing the accessed frames, using a machine learning model trained on training examples of historical surgical procedures and corresponding historical patient characteristics, to identify the patient characteristic.

273. The method of claim 263, wherein the predicted outcome comprises at least one of a post-discharge accident, a post-discharge adverse event, a post-discharge complication, or a risk assessment of readmission.

274. The method of claim 263, the method further comprising: accessing a data structure containing a proposed sequence of surgical events, and wherein identifying the at least one particular intraoperative event is based on identification of a deviation between the proposed sequence of events of the surgical procedure identified in the data structure and an actual sequence of events detected in the accessed frames.

275. The method of claim 274, wherein the identification of the deviation is based on at least one of a detected surgical tool in the accessed frames, a detected anatomical structure in the accessed frames, or an interaction between a surgical tool and an anatomical structure in the accessed frames.

276. The method of claim 274, wherein the identifying of the deviation comprises: identifying a deviation from a suggested sequence of events using a machine learning model trained based on historical surgical video clips, historical suggested sequences of events, and information identifying deviations from the historical suggested sequences of events in the historical video clips.

277. The method of claim 274, wherein identifying the deviation comprises: comparing the accessed frames to reference frames depicting the proposed sequence of events.

278. The method of claim 263, wherein outputting the prediction comprises: updating an electronic medical record associated with the patient.

279. The method of claim 263, wherein outputting the prediction comprises: transmitting the prediction result to a data receiving device associated with a healthcare provider.

280. The method of claim 263, wherein the method further comprises: determining, based on the accessed frames, at least one action likely to improve the predicted outcome, and providing a recommendation based on the determined at least one action.

281. A system for predicting risk after discharge, the system comprising:

at least one processor configured to:

access video frames taken during a particular surgical procedure for a patient;

access stored historical data identifying intraoperative events and associated results;

analyze the accessed frames and identify at least one specific intraoperative event in the accessed frames based on information obtained from the historical data;

determine a predicted outcome associated with the particular surgical procedure based on information obtained from the historical data and the identified at least one intraoperative event; and

output the predicted outcome in a manner that associates the predicted outcome with the patient.

282. A non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable predicting risk post discharge, the operations comprising:

accessing video frames taken during a particular surgical procedure for a patient;

accessing stored historical data identifying intraoperative events and associated results;

analyzing the accessed frames and identifying at least one specific intra-operative event in the accessed frames based on information obtained from the historical data;

determining a predicted outcome associated with the particular surgical procedure based on information obtained from the historical data and the identified at least one intraoperative event; and

outputting the prediction result in a manner that associates the prediction result with the patient.
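As an illustrative sketch of the post-discharge risk prediction described in the claims above, the following code combines a baseline risk with hypothetical historical readmission rates for identified intraoperative events; the simple "take the maximum" combination and all numeric values are assumptions for the example, not the claimed method.

```python
# Hypothetical historical data: for each intraoperative event, the fraction of
# past patients who were readmitted after procedures containing that event.
HISTORICAL_READMISSION_RATES = {
    "bile leak": 0.22,
    "excessive bleeding": 0.18,
    "uneventful closure": 0.03,
}

def predict_readmission_risk(identified_events, historical_rates, baseline=0.05):
    """Combine a baseline risk with the historical rates of the identified
    intraoperative events; the 'max' combination is only illustrative."""
    rates = [historical_rates.get(event, baseline) for event in identified_events]
    return max([baseline] + rates)

def output_prediction(patient_id, risk):
    """Associate the prediction with the patient, e.g. for an electronic medical record."""
    return {"patient_id": patient_id, "predicted_readmission_risk": round(risk, 2)}

if __name__ == "__main__":
    events = ["bile leak"]
    risk = predict_readmission_risk(events, HISTORICAL_READMISSION_RATES)
    print(output_prediction("patient-123", risk))
    # {'patient_id': 'patient-123', 'predicted_readmission_risk': 0.22}
```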

Technical Field

The disclosed embodiments relate generally to systems and methods for analyzing video of a surgical procedure.

Background

In preparation for surgery, it may be beneficial for the surgeon to view video clips depicting certain surgical events, including events that may have certain characteristics. In addition, during surgery, it may be helpful to capture and analyze videos to provide various types of decision support to the surgeon. In addition, it may be helpful to analyze surgical video to facilitate post-operative activities.

Accordingly, there is a need for unconventional methods to efficiently and effectively analyze surgical videos, enabling surgeons to review surgical events, receive decision support, and streamline post-operative activities.

Disclosure of Invention

Embodiments consistent with the present disclosure provide systems and methods for analyzing surgical videos. The disclosed systems and methods may be implemented using a combination of conventional hardware and software, as well as special purpose hardware and software, such as a machine specially constructed and/or programmed to perform the functions associated with the disclosed method steps. Consistent with other disclosed embodiments, a non-transitory computer-readable storage medium may store program instructions that are executable by at least one processing device and perform any of the steps and/or methods described herein.

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to reviewing surgical videos are disclosed. Embodiments may include accessing at least one video of a surgical procedure and causing the at least one video to be output for display. Embodiments may also include overlaying a surgical timeline on the at least one video output for display. The surgical timeline may include markers identifying at least one of a surgical phase, an intraoperative surgical event, and a decision-making node. The surgical timeline may enable a surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker.

In one embodiment, the one or more indicia may include decision-making node indicia corresponding to decision-making nodes of the surgical procedure. The selection of the decision-making node indicia may enable the surgeon to view two or more alternative video clips from two or more corresponding other surgeries. Further, the two or more video clips may exhibit different behavior. In another embodiment, selection of a decision-making node indicia may cause one or more alternative possible decisions to be displayed in relation to the selected decision-making node indicia.

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to video indexing are disclosed. The video indexing may include accessing a video clip to be indexed (including a clip of a particular surgical procedure), which may be analyzed to identify a video clip location associated with a surgical stage of the particular surgical procedure. A stage tag may be generated and associated with the video clip location. The video indexing may include analyzing the video clip to identify an event location of a particular intraoperative surgical event within the surgical stage; and associating an event tag with the event location of the particular intraoperative surgical event. Additionally, event characteristics associated with the particular intraoperative surgical event may be stored.

Video indexing may also include associating at least a portion of the video clip of the particular surgical procedure with the stage tag, the event tag, and the event characteristics in a data structure containing additional video clips of other surgical procedures. The data structure may also include respective stage tags, respective event tags, and respective event characteristics associated with one or more of the other surgical procedures. The data structure may be made accessible to enable a user to select, for display, a selected stage tag, a selected event tag, and a selected event characteristic. A lookup matching the at least one selected stage tag, selected event tag, and selected event characteristic may then be performed in the data structure of surgical video clips to identify a matching subset of the stored video clips. The matching subset of stored video clips may be caused to be displayed to the user, thereby enabling the user to view surgical clips of at least one intraoperative surgical event that share the selected event characteristic while skipping playback of video clips lacking the selected event characteristic.
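A minimal sketch of the indexing data structure and lookup described above follows; the tags, characteristics, and video identifiers are hypothetical, and a real index would be populated by the video analysis rather than by hand.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedClip:
    """One indexed portion of surgical footage."""
    video_id: str
    location_seconds: int
    stage_tag: str
    event_tag: str
    event_characteristics: set = field(default_factory=set)

def lookup(index, stage_tag, event_tag, characteristic):
    """Return the subset of indexed clips matching the selected stage tag,
    event tag, and event characteristic, so that only matching footage is played back."""
    return [clip for clip in index
            if clip.stage_tag == stage_tag
            and clip.event_tag == event_tag
            and characteristic in clip.event_characteristics]

if __name__ == "__main__":
    index = [
        IndexedClip("video-1", 300, "dissection", "bleeding", {"arterial", "minor"}),
        IndexedClip("video-2", 720, "dissection", "bleeding", {"venous"}),
        IndexedClip("video-3", 90, "closure", "suturing", {"interrupted"}),
    ]
    for clip in lookup(index, "dissection", "bleeding", "arterial"):
        print(clip.video_id, clip.location_seconds)   # video-1 300
```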

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to generating surgical summary snippets are disclosed. The embodiment may include accessing a particular surgical clip that includes a first set of frames associated with at least one intraoperative surgical event and a second set of frames not associated with surgical activity. The embodiment may also include accessing historical data associated with historical surgical clips of prior surgeries, wherein the historical data includes information that distinguishes portions of the historical surgical clips into frames associated with intra-operative surgical events and frames not associated with surgical activities. The first set of frames and the second set of frames in the particular surgical clip may be distinguished based on information from the historical data. Upon request by the user, a summary of the first set of frames of the particular surgical clip may be presented to the user, while the second set of frames may be omitted from the presentation to the user.

In some embodiments, the disclosed embodiments can also include analyzing the particular surgical clip to identify a surgical outcome and a corresponding cause of the surgical outcome. The identification may be based on historical outcome data and corresponding historical cause data. A set of outcome frames in the particular surgical clip may be detected based on the analysis. The set of outcome frames may be within an outcome phase of the surgical procedure. Further, based on the analysis, a set of cause frames in the particular surgical clip may be detected. The set of cause frames may be in a cause phase of the surgical procedure that is temporally distant from the outcome phase, while a set of intermediate frames may be in an intermediate phase between the set of cause frames and the set of outcome frames. A cause-and-effect summary of the surgical clip can then be generated, wherein the cause-and-effect summary includes the set of cause frames and the set of outcome frames and skips over the set of intermediate frames. The summary of the first set of frames presented to the user may include the cause-and-effect summary.
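The following sketch shows, in simplified form, how a cause-and-effect summary might be assembled from detected cause and outcome frame ranges while skipping the intermediate frames; the frame numbers are illustrative.

```python
def cause_effect_summary(cause_range, outcome_range):
    """Return the frame indices of a cause-and-effect summary: the set of cause
    frames and the set of outcome frames are kept, while the intermediate frames
    between them are skipped."""
    cause_frames = list(range(*cause_range))
    outcome_frames = list(range(*outcome_range))
    return cause_frames + outcome_frames

if __name__ == "__main__":
    # Frames 100-199 were detected as the cause phase and frames 900-999 as the
    # outcome phase; frames 200-899 are the skipped intermediate phase.
    summary = cause_effect_summary(cause_range=(100, 200), outcome_range=(900, 1000))
    print(len(summary), summary[0], summary[-1])   # 200 100 999
```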

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to surgical preparation are disclosed. The embodiment may include accessing a repository of multiple sets of surgical video clips that reflect multiple surgical procedures performed on different patients and include intraoperative surgical events, surgical outcomes, patient characteristics, surgeon characteristics, and intraoperative surgical event characteristics. The method may further comprise enabling a surgeon preparing for a contemplated surgical procedure to enter case-specific information corresponding to the contemplated surgical procedure. The case-specific information can be compared to data associated with the multiple sets of surgical video clips to identify a set of intraoperative events that are likely to be encountered during the contemplated surgical procedure. Further, the case-specific information and the identified set of intraoperative events likely to be encountered can be used to identify, in particular sets of the multiple sets of surgical video clips, particular frames that correspond to the identified set of intraoperative events. The identified particular frames may include frames from the multiple surgical procedures performed on different patients.

Embodiments may also include determining that a first set and a second set of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic; omitting the second set from a compilation to be presented to the surgeon; and including the first set in the compilation to be presented to the surgeon. Finally, embodiments may include enabling the surgeon to view a presentation that includes the compilation containing frames from the different surgical procedures performed on different patients.
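The sketch below illustrates one way near-duplicate clip sets could be filtered out of a compilation when they share a common event characteristic, keeping only the first set encountered; the patient identifiers, characteristics, and frame ranges are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ClipSet:
    """A set of frames from one patient's footage depicting an intraoperative event."""
    patient_id: str
    event_characteristic: str
    frame_range: tuple

def build_compilation(clip_sets):
    """Include only one clip set per shared event characteristic, so the
    presentation shown to the surgeon does not repeat near-duplicate events
    from different patients."""
    compilation, seen = [], set()
    for clips in clip_sets:
        if clips.event_characteristic in seen:
            continue                      # a set with this characteristic is already included
        seen.add(clips.event_characteristic)
        compilation.append(clips)
    return compilation

if __name__ == "__main__":
    candidates = [
        ClipSet("patient-A", "minor arterial bleeding", (1200, 1500)),
        ClipSet("patient-B", "minor arterial bleeding", (300, 610)),   # duplicate characteristic
        ClipSet("patient-C", "adhesiolysis", (40, 280)),
    ]
    for clips in build_compilation(candidates):
        print(clips.patient_id, clips.event_characteristic)
    # patient-A minor arterial bleeding
    # patient-C adhesiolysis
```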

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to analyzing the complexity of surgical clips are disclosed. Embodiments may include analyzing frames of a surgical clip to identify an anatomical structure in a first set of frames. The disclosed embodiments may also include accessing first historical data. The first historical data may be based on an analysis of first frame data captured from a first set of prior surgical procedures. The first set of frames may be analyzed using the first historical data and using the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames.

Some embodiments may also include analyzing the frames of the surgical clip to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames. The disclosed embodiments may include accessing second historical data based on an analysis of second frame data captured from a second set of prior surgical procedures. The second set of frames may be analyzed using the second historical data and using the identified interaction to determine a second surgical complexity level associated with the second set of frames.

Embodiments may also include tagging the first set of frames with the first surgical complexity level; tagging the second set of frames with the second surgical complexity level; and generating a data structure that includes the first set of frames with the first tag and the second set of frames with the second tag. The generated data structure may enable a surgeon to select the second surgical complexity level and thereby display the second set of frames while bypassing display of the first set of frames.
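As an illustration of selecting frames by complexity level, the following sketch groups tagged frame sets by level so that requesting one level returns only the corresponding frames; frame indices and levels are hypothetical.

```python
def build_complexity_index(tagged_frame_sets):
    """Group frame sets by their surgical complexity level so that a surgeon can
    request one level and view only the corresponding frames."""
    index = {}
    for frames, level in tagged_frame_sets:
        index.setdefault(level, []).append(frames)
    return index

def frames_for_level(index, level):
    """Return the frame sets tagged with the requested complexity level,
    bypassing frame sets tagged with other levels."""
    return index.get(level, [])

if __name__ == "__main__":
    first_set = list(range(0, 100))       # tagged with complexity level 2
    second_set = list(range(100, 260))    # tagged with complexity level 4
    index = build_complexity_index([(first_set, 2), (second_set, 4)])
    print([len(s) for s in frames_for_level(index, 4)])   # [160]
```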

Consistent with the disclosed embodiments, systems, methods, and computer-readable media are disclosed that enable adjustment of operating room schedules. Adjusting the operating room schedule may include receiving, from an image sensor located in a surgical operating room, visual data tracking an ongoing surgical procedure; accessing a data structure containing data on historical surgical events; and analyzing the visual data of the ongoing surgical procedure and the historical surgical data to determine an estimated completion time of the ongoing surgical procedure. Adjusting the operating room schedule may also include accessing a schedule for the surgical operating room. The schedule may include a predetermined time associated with completion of the ongoing surgical procedure. Further, adjusting the operating room schedule may include calculating, based on the estimated completion time of the ongoing surgical procedure, whether the estimated completion time is likely to result in a variance relative to the predetermined time associated with completion; and outputting a notification based on the calculated variance, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly.
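The following sketch illustrates, under the simplifying assumption that completion time can be extrapolated from an estimated completed fraction of the procedure, how an estimated completion time might be compared with the scheduled time and a notification produced when the variance exceeds a tolerance; the numbers are illustrative.

```python
def estimate_completion_minutes(elapsed_minutes, completed_fraction):
    """Roughly extrapolate a completion time from how much of the procedure the
    video analysis estimates has been completed so far."""
    if completed_fraction <= 0:
        raise ValueError("completed_fraction must be positive")
    return elapsed_minutes / completed_fraction

def schedule_notification(estimated_total, scheduled_total, tolerance_minutes=15):
    """Return a notification when the estimated completion time differs from the
    scheduled completion time by more than the tolerance."""
    difference = estimated_total - scheduled_total
    if abs(difference) <= tolerance_minutes:
        return None
    direction = "later" if difference > 0 else "earlier"
    return (f"Ongoing procedure expected to finish about {abs(difference):.0f} minutes "
            f"{direction} than scheduled; subsequent users may wish to adjust their schedules.")

if __name__ == "__main__":
    estimated = estimate_completion_minutes(elapsed_minutes=90, completed_fraction=0.6)  # 150 min
    print(schedule_notification(estimated, scheduled_total=120))
```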

Consistent with the disclosed embodiments, systems, methods, and computer-readable media for analyzing surgical images to determine insurance reimbursements are disclosed. The operation of analyzing the surgical image to determine insurance claims may include: accessing video frames taken during a surgical procedure for a patient; analyzing video frames taken during a surgical procedure to identify at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames; and accessing a database of claim criteria associated with the medical instrument, the anatomical structure, and the interaction between the medical instrument and the anatomical structure. The operations may further include comparing the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure to information in a claim criteria database to determine at least one claim criteria associated with the surgical procedure.
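A minimal, hypothetical sketch of the comparison step is shown below; the claim-criteria entries and the (instrument, anatomy, interaction) triples are illustrative placeholders, not actual reimbursement rules.

```python
# Hypothetical claim-criteria lookup keyed by (instrument, anatomy, interaction).
CLAIM_CRITERIA_DB = {
    ("grasper", "gallbladder", "retraction"): ["criterion-A"],
    ("stapler", "stomach", "resection"): ["criterion-B", "criterion-C"],
}

def match_claim_criteria(detections):
    """Map interactions identified in the video frames to claim criteria.
    `detections` is a list of (instrument, anatomy, interaction) triples produced
    by upstream video analysis (assumed here, not implemented)."""
    criteria = set()
    for triple in detections:
        criteria.update(CLAIM_CRITERIA_DB.get(triple, []))
    return sorted(criteria)

print(match_claim_criteria([("grasper", "gallbladder", "retraction")]))
```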

Consistent with the disclosed embodiments, systems, methods, and computer-readable media for populating a postoperative report of a surgical procedure are disclosed. The operations for populating the postoperative report of the surgical procedure may include: receiving an input of a patient identifier; receiving an input of an identifier of a healthcare provider; and receiving an input of surgical footage of a surgical procedure performed by the healthcare provider on the patient. The operations may further include analyzing a plurality of frames of the surgical footage to derive image-based information for populating the postoperative report of the surgical procedure; and causing the derived image-based information to populate the postoperative report of the surgical procedure.

Consistent with the disclosed embodiments, systems, methods, and computer-readable media are disclosed that enable determination and notification of skipped events in a surgical procedure. Operations that enable determination and notification of skipped events may include: accessing video frames taken during a particular surgical procedure; accessing stored data identifying a proposed sequence of events for the surgical procedure; comparing the accessed frames to the proposed sequence of events to identify an indication of a deviation between the particular surgical procedure and the proposed sequence of events for that surgical procedure; determining a name of an intraoperative surgical event associated with the deviation; and providing a notification of the deviation, the notification including the name of the intraoperative surgical event associated with the deviation.
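The comparison of detected events against a proposed sequence can be sketched as follows; the event names below are hypothetical and the upstream video analysis producing the detected events is assumed.

```python
def find_skipped_events(proposed_sequence, detected_events):
    """Return events from the proposed sequence that never appear among the events
    detected in the accessed video frames (a simple membership comparison;
    the event names are hypothetical)."""
    detected = set(detected_events)
    return [event for event in proposed_sequence if event not in detected]

proposed = ["incision", "calot_triangle_dissection", "critical_view_of_safety",
            "cystic_duct_clipping", "gallbladder_removal", "closure"]
detected = ["incision", "calot_triangle_dissection", "cystic_duct_clipping",
            "gallbladder_removal", "closure"]

for name in find_skipped_events(proposed, detected):
    print(f"Notification: intraoperative surgical event '{name}' appears to have been skipped.")
```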

Some embodiments of the present disclosure include systems, methods, and computer-readable media for providing real-time decision support for surgical procedures. Some of such embodiments may involve at least one processor. Such embodiments may involve receiving a video clip of a surgical procedure performed on a patient by a surgeon in an operating room; and accessing at least one data structure comprising image-related data characterizing the surgical procedure. In addition, the received video clip may be analyzed using the image-related data to determine, in real time, the presence of a surgical decision-making node. The at least one data structure may also be accessed to obtain a correlation between an outcome and a particular action taken at the decision-making node. Based on the determined presence of the decision-making node and the accessed correlation, a recommendation may be output to the surgeon to take the particular action or to avoid the particular action.

Embodiments of the present disclosure include the disclosed systems, methods, and computer-readable media for estimating contact force on an anatomical structure during a surgical procedure. Embodiments may relate to receiving image data of a surgical procedure from at least one image sensor in an operating room; and analyzing the received image data to determine the identity of the anatomical structure and to determine a condition of the anatomical structure reflected in the image data. A contact force threshold associated with the anatomical structure may be selected based on the determined condition of the anatomical structure. The actual contact force on the anatomical structure may be determined and compared to the selected contact force threshold. Thereafter, a notification may be output based on an indication that the actual contact force exceeds the selected contact force threshold.
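A minimal sketch of the threshold selection and comparison is shown below; the condition labels and force values are illustrative assumptions, not clinically validated thresholds.

```python
def check_contact_force(structure_condition: str, actual_force_newtons: float):
    """Select a contact force threshold based on the determined condition of the
    anatomical structure and compare it with the determined actual contact force.
    The condition labels and threshold values are hypothetical."""
    thresholds = {"healthy": 3.0, "inflamed": 1.5, "friable": 0.8}  # newtons, illustrative
    threshold = thresholds.get(structure_condition, 1.0)
    if actual_force_newtons > threshold:
        return (f"Warning: contact force {actual_force_newtons:.1f} N exceeds "
                f"threshold {threshold:.1f} N")
    return None  # no notification needed

print(check_contact_force("inflamed", 2.1))
```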

Some embodiments of the present disclosure relate to systems, methods, and computer-readable media for updating a predicted outcome during a surgical procedure. The embodiments may involve receiving image data associated with a first event during a surgical procedure from at least one image sensor configured to capture images of the surgical procedure. Embodiments may determine a predicted outcome associated with the surgical procedure based on the received image data associated with the first event; and may receive image data associated with a second event during the surgical procedure from the at least one image sensor. Embodiments may then determine, based on the received image data associated with the second event, a change that reduces the predicted outcome below a threshold. A remedial action may be identified and suggested based on image-related data, regarding prior surgical procedures, contained in a data structure.
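The thresholding step can be illustrated with the following sketch, in which the predicted outcome is modeled as a probability of a desired result; the threshold value and the remedial actions are hypothetical.

```python
SUCCESS_THRESHOLD = 0.7  # hypothetical threshold on the predicted outcome

def maybe_recommend(outcome_after_first_event: float,
                    outcome_after_second_event: float,
                    remedial_actions):
    """If the second event drops the predicted outcome below the threshold,
    return a suggested remedial action drawn from data about prior surgeries
    (here a simple list lookup; the action names are hypothetical)."""
    if (outcome_after_second_event < SUCCESS_THRESHOLD
            <= outcome_after_first_event):
        return remedial_actions[0] if remedial_actions else None
    return None

print(maybe_recommend(0.85, 0.55,
                      ["convert to open procedure", "request senior assistance"]))
```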

Some embodiments of the present disclosure relate to systems, methods, and computer-readable media for enabling fluid leak detection during a surgical procedure. Embodiments may involve receiving intra-cavity video of a surgical procedure in real time. The processor may be configured to analyze frames of the intra-cavity video to determine an abnormal fluid leakage situation in the intra-cavity video. Embodiments may initiate a remedial action upon determining the abnormal fluid leakage situation.

Consistent with the disclosed embodiments, systems, methods, and computer-readable media related to predicting risk after discharge are disclosed. The operations for predicting risk after discharge may include: accessing video frames taken during a particular surgical procedure performed on a patient; accessing stored historical data identifying intraoperative events and associated outcomes; analyzing the accessed frames and identifying at least one specific intraoperative event in the accessed frames based on information obtained from the historical data; determining a predicted outcome associated with the particular surgical procedure based on information obtained from the historical data and the identified at least one intraoperative event; and outputting the predicted outcome in a manner that associates the predicted outcome with the patient.

The foregoing summary provides a few examples of the disclosed embodiments to illustrate features of the present disclosure, and is not intended to summarize all aspects of the disclosed embodiments. Furthermore, the following detailed description is exemplary and explanatory only and is not restrictive of the claims.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various disclosed embodiments. In the drawings:

FIG. 1 is a perspective view of an example operating room consistent with the disclosed embodiments.

Fig. 2 is a perspective view of a camera consistent with the disclosed embodiments.

Fig. 3 is a perspective view of an example of a surgical instrument consistent with the disclosed embodiments.

FIG. 4 illustrates an example timeline superimposed over a video of a surgical procedure consistent with the disclosed embodiments.

Fig. 5 is a flow chart illustrating an example process of reviewing a surgical video consistent with the disclosed embodiments.

FIG. 6 is a schematic illustration of an example data structure consistent with the disclosed embodiments.

FIG. 7 is a schematic illustration of an example user interface for selecting indexed video footage for display consistent with the disclosed embodiments.

Fig. 8A and 8B are flow diagrams illustrating an example process for video indexing consistent with the disclosed embodiments.

Fig. 9 is a flow chart illustrating an example process of distinguishing a first group of frames from a second group of frames consistent with the disclosed embodiments.

FIG. 10 is a flow chart illustrating an example process of generating a cause and effect summary consistent with the disclosed embodiments.

Fig. 11 is a flow chart illustrating an example process of generating a surgical summary snippet consistent with the disclosed embodiments.

Fig. 12 is a flow chart illustrating an exemplary process of performing surgical preparation consistent with the disclosed embodiments.

Fig. 13 is a flow chart illustrating an exemplary process of analyzing the complexity of a surgical clip consistent with the disclosed embodiments.

Fig. 14 is a schematic illustration of an exemplary system for managing various data collected during a surgical procedure and controlling various sensors, consistent with the disclosed embodiments.

FIG. 15 is an exemplary schedule consistent with the disclosed embodiments.

FIG. 16 is an exemplary form of entering information into a schedule consistent with the disclosed embodiments.

FIG. 17A illustrates an exemplary data structure consistent with the disclosed embodiments.

FIG. 17B illustrates an exemplary plot of data for historical completion time consistent with the disclosed embodiments.

FIG. 18 illustrates an example of a machine learning model consistent with the disclosed embodiments.

Fig. 19 illustrates an exemplary process of adjusting an operating room schedule consistent with the disclosed embodiments.

FIG. 20 is an exemplary data structure storing correlations of claim criteria with information obtained from surgical clips, consistent with the disclosed embodiments.

FIG. 21 is a block diagram of an exemplary machine learning method consistent with the disclosed embodiments.

FIG. 22 is a flow chart of an exemplary process of analyzing a surgical image to determine an insurance claim, consistent with the disclosed embodiments.

FIG. 23 is an example post-operative report containing fields consistent with the disclosed embodiments.

Fig. 24A is an example of a process of populating a post-operative report including a structure consistent with disclosed embodiments.

Fig. 24B is another example of a process of populating a post-operative report including a structure consistent with disclosed embodiments.

FIG. 25 is a flow chart of an exemplary process of populating a post-operative report consistent with disclosed embodiments.

FIG. 26 is a schematic illustration of an exemplary sequence of events consistent with the disclosed embodiments.

FIG. 27 illustrates an exemplary comparison of a sequence of events consistent with the disclosed embodiments.

FIG. 28 illustrates an exemplary process that enables determination and notification of skipped events consistent with the disclosed embodiments.

Fig. 29 is a flow chart illustrating an exemplary process of performing surgical decision support consistent with the disclosed embodiments.

Fig. 30 is a flow chart illustrating an exemplary process of estimating contact force on an anatomical structure during a surgical procedure consistent with the disclosed embodiments.

FIG. 31 is a flow chart illustrating an exemplary process of updating a predicted outcome during a surgical procedure consistent with the disclosed embodiments.

FIG. 32 is a flow chart illustrating an exemplary process for enabling fluid leak detection during a surgical procedure consistent with the disclosed embodiments.

FIG. 32A is an exemplary graph illustrating relationships between intraoperative events and results consistent with the disclosed embodiments.

Fig. 32B is an exemplary probability distribution graph of events that differ in the presence and absence of intra-operative events consistent with the disclosed embodiments.

FIG. 33 illustrates an exemplary probability distribution graph of different events consistent with the disclosed embodiments.

FIG. 34 illustrates an exemplary probability distribution graph of different events according to event characteristics consistent with the disclosed embodiments.

FIG. 35A illustrates an exemplary machine learning model consistent with the disclosed embodiments.

FIG. 35B illustrates exemplary inputs to a machine learning model consistent with the disclosed embodiments.

FIG. 36 illustrates an exemplary process of predicting risk after discharge consistent with the disclosed embodiments.

Detailed Description

Unless specifically stated otherwise, as apparent from the following description, throughout this specification discussions utilizing terms such as "processing," "calculating," "determining," "generating," "setting," "configuring," "selecting," "defining," "applying," "obtaining," "monitoring," "providing," "identifying," "segmenting," "classifying," "analyzing," "associating," "extracting," "storing," "receiving," "sending," or the like, include the actions and/or processes of a computer that manipulate and/or transform data into other data, represented as physical quantities (e.g., electronic quantities) and/or data representing physical objects. The terms "computer", "processor", "controller", "processing unit", "computing unit" and "processing module" should be broadly construed to encompass any kind of electronic device, component or unit having data processing capabilities, including, as non-limiting examples: personal computers, wearable computers, smart glasses, tablets, smartphones, servers, computing systems, cloud computing platforms, communication devices, processors (e.g., Digital Signal Processors (DSPs), Image Signal Processors (ISPs), microcontrollers, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Central Processing Units (CPUs), Graphics Processing Units (GPUs), Visual Processing Units (VPUs), etc.), single-core processors, multi-core processors, cores within processors, any other electronic computing device, or any combination thereof, possibly with embedded memory.

Operations in accordance with the teachings herein may be performed by a computer that is specially constructed or programmed to perform the described functions.

As used herein, the terms "for example," "such as," "for instance," and variations thereof describe non-limiting embodiments of the presently disclosed subject matter. Reference in the specification to features of "an embodiment," "one instance," "some instances," "other instances," or variations thereof means that a particular feature, structure, or characteristic described can be included in at least one embodiment of the presently disclosed subject matter. Thus, appearances of such terms are not necessarily referring to the same embodiment. As used herein, the term "and/or" includes any one or more of the associated listed items and all combinations thereof.

For the sake of brevity, features of the presently disclosed subject matter are described in the context of a particular embodiment. However, it is to be understood that features described in connection with one embodiment are also applicable to other embodiments. Likewise, features described in the context of a particular combination may be considered a separate embodiment either alone or in addition to the particular combination.

In embodiments of the presently disclosed subject matter, one or more of the stages shown in the figures may be performed in a different order and/or one or more groups of stages may be performed concurrently, or vice versa. The figures illustrate a general schematic of a system architecture according to embodiments of the presently disclosed subject matter. Each of the modules in the figures may be comprised of a combination of software, hardware, and/or firmware that performs the functions as defined and described herein. The modules in the figure may be centralized in one location, or may be distributed over more than one location.

Examples of the presently disclosed subject matter are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The subject matter may be practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.

In this document, elements in the drawings that are not described in the context of a given drawing but are labeled with numerals described in previous figures may have the same use and description as in those previous figures.

The drawings in this document may not be drawn to any scale. Different scales may be used for different figures, or even within the same figure, for example different scales for different views of the same object, or different scales for two adjacent objects.

Consistent with the disclosed embodiments, "at least one processor" may constitute any physical device or group of devices having circuitry to perform logical operations on one or more inputs. For example, the at least one processor may include one or more Integrated Circuits (ICs) including: an Application Specific Integrated Circuit (ASIC), a microchip, a microcontroller, a microprocessor, all or part of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a server, a virtual server, or other circuitry suitable for executing instructions or performing logical operations. The instructions executed by the at least one processor may be preloaded into a memory integrated with or embedded in the controller, for example, or may be stored in a separate memory. The memory may include: random Access Memory (RAM), Read Only Memory (ROM), hard disk, optical disk, magnetic media, flash memory, other permanent, fixed, or volatile memory, or any other mechanism capable of storing instructions. In some embodiments, the at least one processor may comprise more than one processor. Each processor may have a similar configuration, or the processors may have different configurations that are electrically connected or disconnected from each other. For example, the processor may be a separate circuit or integrated in a single circuit. When more than one processor is used, the processors may be configured to operate independently or in concert. The processor may be electrically coupled, magnetically coupled, optically coupled, acoustically coupled, mechanically coupled, or coupled by other means that allow them to interact.

The disclosed embodiments may include and/or access data structures. A data structure consistent with the present disclosure may include any collection of data values and relationships between them. The data may be stored in a manner that is linear, horizontal, hierarchical, relational, non-relational, one-dimensional, multidimensional, operational, ordered, unordered, object-oriented, centralized, decentralized, distributed, or custom, or in any way that enables data access. As non-limiting examples, the data structure may include: arrays, associative arrays, linked lists, binary trees, balanced trees, heaps, stacks, queues, collections, hash tables, records, tagged unions, ER models, and graphs. For example, the data structure may include: XML databases, RDBMS databases, SQL databases or NoSQL alternatives for data storage/search, such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elastic Search, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase and Neo4J. The data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). The data in the data structure may be stored in contiguous or non-contiguous memory. Furthermore, as used herein, a data structure does not require that information be co-located. For example, the data structure may be distributed over multiple servers, which may be owned or operated by the same or different entities. Thus, the term "data structure" as used herein in the singular includes a plurality of data structures.

In some implementations, a machine learning algorithm (also referred to as a machine learning model in this disclosure) can be trained using training examples, such as in the context described below. Some non-limiting examples of such machine learning algorithms may include: classification algorithms, data regression algorithms, image segmentation algorithms, vision detection algorithms (such as object detectors, face detectors, body detectors, motion detectors, edge detectors, etc.), vision recognition algorithms (such as face recognition, body recognition, object recognition, etc.), speech recognition algorithms, mathematical embedding algorithms, natural language processing algorithms, support vector machines, random forests, nearest neighbor algorithms, deep learning algorithms, artificial neural network algorithms, convolutional neural network algorithms, recursive neural network algorithms, linear machine learning models, non-linear machine learning models, integration algorithms (ensemble algorithms), etc. For example, the trained machine learning algorithm may include inference models, such as predictive models, classification models, regression models, clustering models, segmentation models, artificial neural networks (such as deep neural networks, convolutional neural networks, recursive neural networks, etc.), random forests, support vector machines, and so forth. In some examples, the training examples may include example inputs and desired outputs corresponding to the example inputs. Further, in some examples, training the machine learning algorithm using the training examples may generate a trained machine learning algorithm, and the trained machine learning algorithm may be used to estimate an output of an input that is not included in the training examples. In some examples, engineers, scientists, processes, and machines training machine learning algorithms may further use verification examples and/or test examples. For example, the verification examples and/or test examples may include example inputs and desired outputs corresponding to the example inputs, a trained machine learning algorithm and/or an intermediately trained machine learning algorithm may be used to estimate outputs of the example inputs of the verification examples and/or test examples, the estimated outputs may be compared to the corresponding desired outputs, and the trained machine learning algorithm and/or the intermediately trained machine learning algorithm may be evaluated based on results of the comparison. In some examples, the machine learning algorithm may have parameters and hyper-parameters, wherein the hyper-parameters are set manually by a human or automatically by a process external to the machine learning algorithm (such as a hyper-parameter search algorithm), and the parameters of the machine learning algorithm are set by the machine learning algorithm according to the training examples. In some implementations, the hyper-parameters are set according to the training examples and the verification examples, and the parameters are set according to the training examples and the selected hyper-parameters.
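As a non-limiting illustration of the training, validation, and test flow described above, the following Python sketch uses scikit-learn (an assumption; the disclosure does not mandate any particular library), with synthetic data standing in for image-derived features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))                  # example inputs (stand-ins for frame features)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # desired outputs for the example inputs

# Split into training, validation, and test examples.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Hyper-parameters are chosen using the validation examples; parameters are
# learned by the algorithm from the training examples.
best_model, best_score = None, -1.0
for n_estimators in (10, 50, 100):              # hyper-parameter search
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    if score > best_score:
        best_model, best_score = model, score

# The trained algorithm is finally evaluated on held-out test examples.
print("validation accuracy:", round(best_score, 3))
print("test accuracy:", round(accuracy_score(y_test, best_model.predict(X_test)), 3))
```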

In some implementations, a trained machine learning algorithm (also referred to in this disclosure as a trained machine learning model) can be used to analyze the inputs and generate the outputs, such as in the case described below. In some examples, a trained machine learning algorithm may be used as an inference model to which an inference output is generated when an input is provided. For example, the trained machine learning algorithm may include a classification algorithm, the input may include samples, and the inference output may include a classification of the samples (such as inference labels, inference tags, etc.). In another example, the trained machine learning algorithm may include a regression model, the input may include samples, and the inference output may include inferred values of the samples. In yet another example, the trained machine learning algorithm may include a clustering model, the input may include samples, and the inferring the output may include assigning the samples to at least one cluster. In additional examples, the trained machine learning algorithm may include a classification algorithm, the input may include an image, and the inferred output may include a classification of an item depicted in the image. In yet another example, the trained machine learning algorithm may include a regression model, the input may include an image, and the inferred output may include an inferred value of an item depicted in the image (such as an estimated characteristic of the item (such as size, volume, age of a person depicted in the image), cost of a product depicted in the image, and so forth). In additional examples, the trained machine learning algorithm may include an image segmentation model, the input may include an image, and the inference output may include a segmentation of the image. In yet another example, the trained machine learning algorithm may include an object detector, the input may include images, and the inferred output may include one or more detected objects in the images and/or one or more locations of objects within the images. In some examples, the trained machine learning algorithm may include one or more formulas and/or one or more functions and/or one or more rules and/or one or more programs, the input may be used as an input to the formulas and/or functions and/or rules and/or programs, and the inference output may be based on an output of the formulas and/or functions and/or rules and/or programs (e.g., selecting one of the outputs of the formulas and/or functions and/or rules and/or programs, using a statistical measure of the output of the formulas and/or functions and/or rules and/or programs, etc.).

In some implementations, an artificial neural network can be configured to analyze inputs and generate corresponding outputs. Some non-limiting examples of such artificial neural networks may include: shallow artificial neural networks, deep artificial neural networks, feedback artificial neural networks, feedforward artificial neural networks, automatic encoder artificial neural networks, probabilistic artificial neural networks, time-delay artificial neural networks, convolutional artificial neural networks, recursive artificial neural networks, long-and-short-term memory artificial neural networks, and the like. In some examples, the artificial neural network may be manually configured. For example, the structure of the artificial neural network may be manually selected, the type of artificial neuron of the artificial neural network may be manually selected, a parameter of the artificial neural network (such as a parameter of the artificial neuron of the artificial neural network) may be manually selected, and so forth. In some examples, the artificial neural network may be configured using machine learning algorithms. For example, the user may select hyper-parameters for the artificial neural network and/or the machine learning algorithm, and the machine learning algorithm may use the hyper-parameters and training examples to determine parameters of the artificial neural network, such as using back propagation, using gradient descent, using random gradient descent, using small-lot gradient descent, and so forth. In some examples, an artificial neural network may be created from two or more other artificial neural networks by combining the two or more other artificial neural networks into a single artificial neural network.
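The following sketch, assuming PyTorch, configures a small feed-forward artificial neural network whose structure and learning rate are manually chosen hyper-parameters and whose parameters are set by gradient descent with back propagation; the data are synthetic placeholders.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # gradient descent
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)                 # stand-in for image-derived features
y = (X[:, 0] > 0).long()                 # stand-in labels

for _ in range(100):                     # parameters updated via back propagation
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```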

In some embodiments, analyzing the image data (e.g., by the methods, steps, and modules described herein) may comprise: the image data is analyzed to obtain pre-processed image data, and subsequently the image data and/or the pre-processed image data is analyzed to obtain a desired result. Some non-limiting examples of such image data may include: one or more images, videos, frames, clips, 2D image data, 3D image data, and the like. One of ordinary skill in the art will recognize that the following is an example, and that the image data may be pre-processed using other kinds of pre-processing methods. In some examples, the image data may be pre-processed by transforming the image data using a transformation function to obtain transformed image data, and the pre-processed image data may include the transformed image data. For example, the transformed image data may include one or more convolutions of the image data. For example, the transformation function may include one or more image filters, such as a low pass filter, a high pass filter, a band pass filter, an all pass filter, and the like. In some examples, the transformation function may include a non-linear function. In some examples, the image data may be pre-processed by smoothing at least a portion of the image data, for example, using gaussian convolution, using a median filter, and so on. In some examples, the image data may be pre-processed to obtain different representations of the image data. For example, the pre-processed image data may include: a representation of at least part of the image data in the frequency domain; a discrete fourier transform of at least part of the image data; a discrete wavelet transform of at least part of the image data; at least a portion of the image data is represented in time/frequency; a representation of at least part of the image data in a low dimension; a lossy representation of at least a portion of the image data; a lossless representation of at least part of the image data; a time-sequential series of any of the above; any combination of the above; and so on. In some examples, the image data may be pre-processed to extract edges, and the pre-processed image data may include information based on and/or related to the extracted edges. In some examples, the image data may be pre-processed to extract image features from the image data. Some non-limiting examples of such image features may include: edge-based information and/or edge-related information; a corner portion; speckle; a ridge; scale Invariant Feature Transform (SIFT) features; a temporal characteristic; and so on.
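A few of the listed pre-processing steps are sketched below, assuming OpenCV and NumPy; the kernel sizes and thresholds are illustrative only, and a random array stands in for a captured video frame.

```python
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in for a video frame

smoothed_gauss = cv2.GaussianBlur(frame, (5, 5), 1.0)   # smoothing via Gaussian convolution
smoothed_median = cv2.medianBlur(frame, 5)              # smoothing via a median filter
edges = cv2.Canny(frame, 50, 150)                       # edge extraction

# Representation of the frame in the frequency domain via a discrete Fourier transform.
spectrum = np.fft.fftshift(np.fft.fft2(frame.astype(np.float32)))

print(edges.shape, spectrum.dtype)
```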

In some embodiments, analyzing the image data (e.g., by the methods, steps, and modules described herein) may include analyzing the image data and/or the pre-processed image data using one or more of: rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and the like. Some non-limiting examples of such inference models may include: a manually pre-programmed inference model; a classification model; a regression model; a result of training an algorithm (such as a machine learning algorithm and/or a deep learning algorithm) on training examples, where the training examples may include examples of data instances, and in some cases a data instance may be labeled with a corresponding desired label and/or result; and so on.

In some implementations, analyzing the image data (e.g., by the methods, steps, and modules described herein) may include analyzing pixels, voxels, point clouds, range data, and so forth included in the image data.

Fig. 1 illustrates an example operating room 101 consistent with the disclosed embodiments. The patient 143 is illustrated as being on the surgical table 141. The room 101 may include: audio sensors, video/image sensors, chemical sensors and other sensors, and various light sources (e.g., light source 119 shown in fig. 1) for facilitating the capture of video and audio data and data from other sensors during a surgical procedure. For example, room 101 may include one or more microphones (e.g., audio sensor 111, as shown in FIG. 1), a plurality of cameras (e.g., overhead cameras 115, 121, and 123, and table-side camera 125) for capturing video/image data during a surgical procedure. While some of these cameras (e.g., cameras 115, 123, and 125) may capture video/image data of the surgical table 141 (e.g., the cameras may capture video/image data at the location 127 of the body of the patient 143 where the surgery is performed), the camera 121 may capture video/image data of other portions of the operating room 101. For example, the camera 121 may capture video/image data of a surgeon 131 performing the surgical procedure. In some cases, the camera may capture video/image data associated with a surgical team member, such as an anesthesiologist, nurse, surgical technician, etc. located in the operating room 101. Additionally, the operating room camera may capture video/image data associated with the medical devices located within the room.

In various embodiments, one or more of cameras 115, 121, 123, and 125 may be movable. For example, as shown in fig. 1, the camera 115 may be rotated as indicated by arrow 135A showing the pitch direction of the camera 115 and arrow 135B showing the yaw direction. In various embodiments, the pitch and yaw angles of the camera (e.g., camera 115) may be electronically controlled such that the camera 115 is pointed at a region of interest (ROI) where video/image data needs to be captured. For example, the camera 115 may be configured to track the movement of surgical instruments (also referred to as surgical tools), anatomical structures, the hands of surgeon 131, incisions, and the like within the location 127. In various embodiments, the camera 115 may be equipped with a laser 137 (e.g., an infrared laser) for precise tracking. In some cases, the camera 115 may be controlled automatically via a computer-based camera control application that uses image recognition algorithms to position the camera to capture video/image data of the ROI. For example, the camera control application may identify the anatomy, a surgical tool at a particular location within the anatomy, the surgeon's hand, bleeding, motion, etc., and track that location with the camera 115 by rotating the camera 115 to the appropriate yaw and pitch angles. In some embodiments, the camera control application may control the position (i.e., yaw and pitch) of the various cameras 115, 121, 123, and 125 to capture video/image data from different ROIs during a surgical procedure. Additionally or alternatively, a human operator may control the position of the various cameras 115, 121, 123, and 125, and/or a human operator may supervise the camera control application while it controls the position of the cameras.

Cameras 115, 121, 123, and 125 may also include zoom lenses for focusing and zooming in on one or more ROIs. In an example embodiment, the camera 115 may include a zoom lens 138 for zooming near the ROI (e.g., a surgical tool near the anatomical structure). The camera 121 may include a zoom lens 139 for capturing video/image data from a larger area around the ROI. For example, camera 121 may capture video/image data of the entire location 127. In some embodiments, video/image data obtained from the camera 121 may be analyzed to identify a ROI during surgery, and the camera control application may be configured to zoom the camera 115 toward the ROI identified by the camera 121.

In various embodiments, a camera control application may be configured to coordinate the position, focus, and magnification of the various cameras during a surgical procedure. For example, the camera control application may direct the camera 115 to track the anatomy and may direct the cameras 121 and 125 to track the surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., a surgical instrument) from different perspectives. For example, video/image data obtained from different perspectives may be used to determine the position of a surgical instrument relative to the surface of the anatomical structure, determine the condition of the anatomical structure, determine the pressure applied to the anatomical structure, or determine any other information for which multiple perspectives may be beneficial. As another example, one camera may detect bleeding, and one or more other cameras may be used to identify the source of the bleeding.

In various embodiments, the control of the position, orientation, setting, and/or zoom of the cameras 115, 121, 123, and 125 may be rule-based and follow algorithms developed for a given surgical procedure. For example, a camera control application may be configured to direct the camera 115 to track the surgical instrument, the camera 121 to the location 127, the camera 123 to track the movement of the surgeon's hand, and the camera 125 to the anatomy. The algorithm may include any suitable logic statements that determine the position, orientation, setting, and/or zoom of the cameras 115, 121, 123, and 125 from various events during the surgical procedure. For example, the algorithm may direct at least one camera to a region of the anatomy where bleeding occurred during surgery. Some non-limiting examples of settings of cameras 115, 121, 123, and 125 that may be controlled (e.g., by a camera control application) may include: image pixel resolution, frame rate, image and/or color correction and/or enhancement algorithms, zoom, position, orientation, aspect ratio, shutter speed, aperture, focus, etc.
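One possible rule-based assignment of cameras to regions of interest is sketched below; the camera identifiers follow FIG. 1, while the events and the rules themselves are hypothetical.

```python
# Hypothetical rule table mapping a detected event to the ROI each camera should track.
CAMERA_RULES = {
    "default": {115: "surgical instrument", 121: "location 127",
                123: "surgeon hands", 125: "anatomy"},
    "bleeding": {115: "bleeding source", 121: "location 127",
                 123: "surgeon hands", 125: "anatomy"},
}

def assign_cameras(active_event: str):
    """Return the ROI each camera should track for the current event, falling
    back to the default rule set when the event has no dedicated rule."""
    return CAMERA_RULES.get(active_event, CAMERA_RULES["default"])

print(assign_cameras("bleeding")[115])  # camera 115 redirected to the bleeding source
```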

In various instances, when a camera (e.g., camera 115) tracks a moving or deformed object (e.g., when camera 115 tracks a moving surgical instrument or a moving/pulsating anatomical structure), the camera control application may determine a maximum allowed zoom of camera 115 such that the moving or deformed object does not fall out of the field of view of the camera. In an example embodiment, the camera control application may initially select a first zoom of the camera 115, evaluate whether a moving or deformed object is out of the field of view of the camera, and adjust the zoom of the camera as needed to prevent the moving or deformed object from being out of the field of view of the camera. In various embodiments, the camera zoom may be readjusted based on the direction and speed of the moving or deforming object.

In various embodiments, the one or more image sensors may include movable cameras 115, 121, 123, and 125. The cameras 115, 121, 123, and 125 may be used to determine the size of the anatomical structure and to determine the distance between different ROIs, for example using triangulation. For example, FIG. 2 shows exemplary cameras 115 (115, view 1, as shown in FIG. 2) and 121 supported by a movable member such that the distance between the two cameras is D1, as shown in fig. 2. Both cameras are directed at the ROI 223. By knowing the positions of cameras 115 and 121 and the orientation of the object relative to the cameras (e.g., by knowing angles A1 and A2, as shown in fig. 2, for example based on the correspondence between pixels depicting the same object or the same real-world point in the images taken by 115 and 121), the law of sines and the known distance D1 between the two cameras can be used to calculate the distances D2 and D3. In an exemplary embodiment, when camera 115 (115, view 2) is rotated by a small angle A3 (measured in radians) to point at the ROI 225, the distance between the ROI 223 and the ROI 225 can be approximated by A3·D2 (for small angles A3). Higher accuracy can be obtained using another triangulation process. Knowing the distance between the ROI 223 and the ROI 225 enables determination of the length scale of the anatomical structure. Further, distances between various points of the anatomical structure and from the various points to one or more cameras may be measured to determine a point cloud representing the surface of the anatomical structure. Such a point cloud may be used to reconstruct a three-dimensional model of the anatomical structure. Further, distances between one or more surgical instruments and different points of the anatomy may be measured to determine the appropriate location of the one or more surgical instruments in the vicinity of the anatomy. In some other examples, one or more of cameras 115, 121, 123, and 125 may include a 3D camera (such as a stereo camera, an active stereo camera, a time-of-flight camera, a light detection and ranging camera, etc.), and the actual and/or relative positions and/or sizes of objects within the operating room 101, and/or the actual distances between objects, may be determined based on the 3D information captured by the 3D cameras.
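The law-of-sines computation described above can be sketched as follows, assuming the baseline D1 and the viewing angles A1 and A2 are known; the numeric values are illustrative.

```python
import math

def triangulate(D1, A1, A2):
    """Return (D2, D3), the distances from cameras 115 and 121 to the ROI,
    using the law of sines with the known baseline D1 (angles in radians)."""
    A_roi = math.pi - A1 - A2                   # angle of the triangle at the ROI
    D2 = D1 * math.sin(A2) / math.sin(A_roi)    # distance from camera 115 to the ROI
    D3 = D1 * math.sin(A1) / math.sin(A_roi)    # distance from camera 121 to the ROI
    return D2, D3

D2, D3 = triangulate(D1=0.8, A1=math.radians(70), A2=math.radians(65))

# Small-angle approximation: rotating camera 115 by A3 radians toward ROI 225
# gives an approximate distance of A3 * D2 between ROI 223 and ROI 225.
A3 = math.radians(2)
print(round(D2, 3), round(D3, 3), round(A3 * D2, 4))
```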

Returning to fig. 1, the light source (e.g., light source 119) may also be movable to track one or more ROIs. In an example embodiment, the light source 119 may be rotated at yaw and pitch angles, and in some cases may extend toward or away from the ROI (e.g., position 127). In some cases, the light source 119 may include one or more optical components (e.g., a lens, a flat or curved mirror, etc.) to focus light on the ROI. In some cases, the light source 119 may be configured to control the color of the light (e.g., the color of the light may include different types of white light, light having a selected spectrum, etc.). In an example embodiment, the light 119 may be configured such that the spectrum and intensity of the light may be altered on the surface of the anatomical structure illuminated by the light. For example, in some cases, the light 119 may include infrared wavelengths, which may cause at least some portions of the surface of the anatomical structure to heat up.

In some embodiments, the operating room may include sensors embedded in various components that may or may not be depicted in fig. 1. Examples of such sensors may include: an audio sensor; an image sensor; a motion sensor; a positioning sensor; a chemical sensor; a temperature sensor; a barometer; a pressure sensor; a proximity sensor; an electrical impedance sensor; a voltage sensor; a current sensor; or any other detector capable of providing feedback regarding the environment or the surgical procedure, including, for example, any kind of medical or physiological sensor configured to monitor the patient 143.

In some implementations, the operating room 101 may include one or more audio sensors (e.g., audio sensor 111) configured to capture audio by converting sound into digital information.

In various embodiments, the temperature sensor may include an infrared camera (e.g., infrared camera 117 shown in fig. 1) for thermal imaging. The infrared camera 117 may allow the surface temperature of the anatomical structure to be measured at different points of the structure. Similar to the visible light cameras 115, 121, 123, and 125, the infrared camera 117 may be rotated at a yaw angle or a pitch angle. Additionally or alternatively, the camera 117 may include an image sensor configured to capture images in any spectrum, including: an infrared image sensor, a hyperspectral image sensor, and the like.

Fig. 1 includes a display screen 113 that may show views from different cameras 115, 121, 123, and 125, as well as other information. For example, the display screen 113 may show a magnified image of the tip of the surgical instrument and surrounding tissue of the anatomy near the surgical instrument.

Fig. 3 illustrates an example embodiment of a surgical instrument 301 that may include a plurality of sensors and light emitting sources. Consistent with this embodiment, a surgical instrument may refer to a medical device, a medical instrument, an electrical or mechanical tool, a surgical tool, a diagnostic tool, and/or any other tool that may be used during a surgical procedure. As shown, the instrument 301 may include: cameras 311A and 311B, light sources 313A and 313B, and tips 323A and 323B for contacting tissue 331. The cameras 311A and 311B may be connected to a data transmission device 321 via data connections 319A and 319B. In an example embodiment, the device 321 may transmit data to the data receiving device using wireless communication or using wired communication. In an example embodiment, the device 321 may use WiFi, bluetooth, NFC communication, inductive communication, or any other suitable wireless communication for sending data to a data receiving device. The data receiving means may comprise any form of receiver capable of receiving a data transmission. Additionally or alternatively, the device 321 may transmit data to a data receiving device using an optical signal (e.g., the device 321 may use an optical signal transmitted over the air or via an optical fiber). In some embodiments, the apparatus 301 may include a local memory for storing at least some of the data received from the sensors 311A and 311B. Additionally, the device 301 may include a processor to compress the video/image data prior to sending the data to the data receiving device.

In various implementations, such as when the apparatus 301 is wireless, it may include an internal power source (e.g., a battery, a rechargeable battery, etc.) and/or a port for charging the battery, an indicator for indicating the remaining power of the power source, and one or more input controls (e.g., buttons) for controlling the operation of the apparatus 301. In some implementations, control of the device 301 may be achieved using an external device (e.g., a smartphone, a tablet, smart glasses) that communicates with the device 301 via any suitable connection (e.g., WiFi, bluetooth, etc.). In an example embodiment, the input controls of the apparatus 301 may be used to control various parameters of the sensor or light source. For example, input controls may be used to dim/brighten the light sources 313A and 313B, to move the light sources if they can be moved (e.g., the light sources may be rotated at yaw and pitch angles), to control the color of the light sources, to control the focus of the light sources, to control the motion of the cameras 311A and 311B if they can be moved (e.g., the cameras may be rotated at yaw and pitch angles), to control zoom and/or capture parameters of the cameras 311A and 311B, or to change any other suitable parameters of the cameras 311A to 311B and the light sources 313A to 313B. It should be noted that camera 311A may have a first set of parameters, camera 311B may have a second set of parameters different from the first set of parameters, and appropriate input controls may be used to select these parameters. Similarly, the light source 313A may have a first set of parameters and the light source 313B may have a second set of parameters different from the first set of parameters, and these parameters may be selected using appropriate input controls.

Additionally, instrument 301 may be configured to measure data relating to various characteristics of tissue 331 via tips 323A and 323B and transmit the measured data to device 321. For example, tips 323A and 323B can be used to measure the resistance and/or impedance of tissue 331, the temperature of tissue 331, mechanical properties of tissue 331, and the like. For example, to determine the elastic properties of tissue 331, tips 323A and 323B may first be separated by angle 317 and applied to tissue 331. The tip may be configured to move so as to reduce the angle 317, and the movement of the tip may cause pressure on the tissue 331. Such pressure may be measured (e.g., via piezoelectric member 327, which may be located between first branch 312A and second branch 312B of instrument 301), and based on the change in angle 317 (i.e., strain) and the measured pressure (i.e., stress), the elastic properties of tissue 331 may be measured. Also, based on angle 317, the distance between tips 323A and 323B may be measured and transmitted to device 321. Such distance measurements may be used as a length scale for various video/image data that may be captured by the various cameras 115, 121, 123, and 125, as shown in fig. 1.

Instrument 301 is but one example of a possible surgical instrument, and other surgical instruments, such as scalpels, graspers (e.g., forceps), clips and occluders, needles, retractors, cutters, dilators, suction tips and pipettes, occlusion devices, irrigation and injection needles, scopes and probes, etc., may include any suitable sensor and light emitting source. In various instances, the type of sensor and light emitting source may depend on the type of surgical instrument being used for surgery. In various instances, these other surgical instruments may include devices similar to device 301 (shown in FIG. 3) for collecting and transmitting data to any suitable data receiving device.

In preparing a surgical procedure, it may be beneficial for a surgeon to review video clips of a surgical procedure having similar surgical events. However, it may be too time consuming for the surgeon to view the entire video or jump around to find the relevant portion of the surgical clip. Thus, there is a need for an unconventional method that efficiently and effectively enables a surgeon to view a short surgical video summary summarizing relevant surgical events while skipping over other irrelevant short clips.

Aspects of the present disclosure may relate to reviewing surgical videos, including methods, systems, devices, and computer-readable media. The interface may enable surgeons to review surgical videos (their own surgeries, surgeries of others, or compilations) while displaying surgical timelines. The timeline may include markers entered for activities or events occurring during the surgical procedure. These markings may enable the surgeon to jump to a particular activity, thereby simplifying review of the surgical procedure. In some embodiments, critical decision-making nodes may be flagged and the surgeon may be allowed to view alternative actions taken at those decision-making nodes.

For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method may involve accessing at least one video of a surgical procedure. As described in more detail above, video may include any form of recorded visual media, including recorded images and/or sound. The video may be stored as a video file, such as an Audio Video Interleave (AVI) file, a Flash Video Format (FLV) file, a QuickTime File Format (MOV) file, an MPEG (MPG, MP4, M4P, etc.) file, a Windows Media Video (WMV) file, a Material Exchange Format (MXF) file, or any other suitable video file format, for example as described above.

The surgical procedure may include any medical procedure associated with or involving a manual procedure or an operative procedure on the body of a patient. Surgery may include cutting, abrading, suturing, or other techniques involving physically altering body tissues and organs. Examples of such surgical procedures are provided above. The video of a surgical procedure may include any series of still images taken during the surgical procedure and associated with the surgical procedure. In some embodiments, at least a portion of the surgical procedure may be depicted in one or more of the still images included in the video. For example, video of a surgical procedure may be recorded by an image capture device (such as a camera) in an operating room or in a body cavity of a patient. Accessing video of a surgical procedure may include retrieving the video from a storage device (such as one or more memory units, a video server, a cloud storage platform, or any other storage platform), receiving the video from another device through a communication device, capturing the video using an image sensor, or any other suitable means of electronically accessing data or files.

Some aspects of the disclosure may relate to causing the at least one video to be output for display. Outputting the at least one video may include any process of generating, transmitting, or providing a video using a computer or at least one processor. As used herein, "display" may refer to any manner in which a video may be presented to a user for playback. In some implementations, outputting the video may include presenting the video using a display device, such as a screen (e.g., an OLED, QLED, LCD, plasma, CRT, DLP, electronic paper, or similar display technology), a projector (e.g., a video projector or a slide projector), a 3D display, a screen of a mobile device, electronic glasses, or any other form of visual and/or audio presentation. In other embodiments, outputting the video for display may include storing the video in a location accessible to one or more other computing devices. Such storage locations may include local storage (such as a hard disk or flash memory), network locations (such as servers or databases), cloud computing platforms, or any other accessible storage location. The video may be accessed from a separate computing device for display on the separate computing device. In some implementations, outputting the video can include sending the video to an external device. For example, outputting the video for display may include sending the video over a network to a user device for playback on the user device.

Embodiments of the present disclosure may also include overlaying a surgical timeline on the at least one video output for display. As used herein, a "timeline" may refer to any depiction from which a sequence of events may be tracked or divided. In some implementations, the timeline may be a graphical representation of events, for example, using a thin bar or line representing time with markers or other event indicators along the bar. The timeline may also be a text-based list of events arranged in chronological order. The surgical timeline may be a timeline representing events associated with a surgical procedure. As one example, a surgical timeline may be a timeline of events or actions that occur during a surgical procedure, as described in detail above. In some embodiments, the surgical timeline may include textual information identifying portions of the surgery. For example, the surgical timeline may be a list of descriptions of intraoperative surgical events or surgical stages within a surgical procedure. In other embodiments, the descriptor associated with the marker may be visualized by hovering over or otherwise actuating the graphical marker on the timeline.

Overlaying a surgical timeline on the at least one video may include displaying the surgical timeline in any manner such that it can be viewed simultaneously with the at least one video. In some implementations, the overlaying may include displaying the surgical timeline such that it at least partially overlaps the video. For example, the surgical timeline may be presented as a horizontal bar along the top or bottom of the video or as a vertical bar along one side of the video. In other embodiments, the superimposing may include presenting the surgical timeline alongside the video. For example, a video may be presented on the display and a surgical timeline presented above, below, and/or to one side of the video. The surgical timeline may be overlaid on the video as the video is played. Thus, "superimposed" as used herein more generally refers to simultaneous display. The simultaneous display may or may not be constant. For example, the overlay may appear with the video output before the surgical procedure depicted in the displayed video is completed. Alternatively, the overlay may appear during substantially all of the video of the surgical procedure.

FIG. 4 illustrates an example timeline 420 superimposed over a video of a surgical procedure consistent with the disclosed embodiments. The video may be presented in a video playback zone 410, which video playback zone 410 may sequentially display one or more frames of the video. In the example shown in fig. 4, the timeline 420 may be displayed as a horizontal bar representing time, with the leftmost portion of the bar representing the start time of the video and the rightmost portion of the bar representing the end time. Timeline 420 may include a position indicator 424 that indicates the current playback position of the video relative to the timeline. A colored region 422 of the timeline 420 may represent progress within the timeline 420 (e.g., corresponding to video that the user has already viewed, or to video that comes before a currently presented frame). In some implementations, the location indicator 424 can be interactive such that a user can move to a different location within the video by moving the location indicator 424. In some embodiments, the surgical timeline may include indicia identifying at least one of a surgical phase, an intraoperative surgical event, and a decision-making node. For example, the timeline 420 may also include one or more markers 432, 434, and/or 436. Such markers are described in more detail below.

In the example shown in fig. 4, the timeline 420 may be displayed such that it overlaps the video playback zone 410 spatially, temporally, or both. In some implementations, the timeline 420 may not be displayed at all times. As one example, the timeline 420 may automatically switch to a collapsed or hidden view while the user is watching a video, and may return to the expanded view shown in fig. 4 when the user takes action to interact with the timeline 420. For example, the user may move a mouse pointer while viewing the video, move the mouse pointer over a collapsed timeline, move the mouse pointer to a particular region, click or tap on the video playback zone, or perform any other action that may indicate an intent to interact with the timeline 420. As discussed above, the timeline 420 may be displayed in various other locations relative to the video playback zone 410, including: on top of the video playback zone 410, above or below the video playback zone 410, or within the control bar 412. In some implementations, the timeline 420 can be displayed separately from the video progress bar. For example, a separate video progress bar (including position indicator 424 and colored region 422) may be displayed in control bar 412, and timeline 420 may be a separate timeline of events associated with the surgical procedure. In such implementations, the timeline 420 may not have the same time scale or range as the video or video progress bar. For example, the video progress bar may represent the time scale and range of the video, while timeline 420 may represent the time frame of the surgery, which may be different (e.g., where the video includes a surgical summary, as discussed in detail above). In some implementations, the video playback zone 410 can include a search icon 440, which can allow a user to search for video clips, for example, through the user interface 700, as described above with reference to fig. 7. The surgical timeline shown in fig. 4 is provided by way of example only, and those skilled in the art will appreciate various other configurations that may be used.

Embodiments of the present disclosure may further include: enabling the surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby causing the display of the video to jump to a location associated with the selected marker. As used herein, "playback" may include any presentation of a video in which one or more frames of the video are displayed to a user. Typically, playback will include displaying images sequentially to reproduce moving images and/or sound, however, playback may also include displaying individual frames.

Consistent with the disclosed embodiments, a "marker" may include any visual indicator associated with a location within a surgical timeline. As described above, the location may refer to any particular location within the video. For example, the location may be a particular frame or range of frames in the video, a particular timestamp, or any other location indicator within the video. The indicia may be represented on the timeline in various ways. In some implementations, the markers can be icons or other graphical representations displayed at various locations along the timeline. The indicia may be displayed as lines, stripes, dots, geometric shapes (such as diamonds, squares, triangles, or any other shape), bubbles, or any other graphical or visual representation. In some implementations, the markup can be text-based. For example, the tag may include textual information, such as a name, description, code, timestamp, and the like. In another example, the surgical timeline may be displayed as a list, as described above. Thus, the markers may include a text-based title or description that indicates a particular location of the video. Markers 432, 434, and 436 are shown by way of example in fig. 4. The marker may be represented as a callout bubble, including an icon indicating the type of marker associated with the location. The markers may point to particular points along the timeline 420 that indicate locations in the video.

The selection of a marker may include any action by the user that points to a particular marker. In some embodiments, selecting a marker may include: clicking or tapping the marker through a user interface, touching the marker on a touch-sensitive screen, viewing the marker through smart glasses, indicating the marker through a voice interface, indicating the marker with a gesture, or taking any other action that results in the marker being selected. Selection of a marker may thereby cause the display of the video to jump to the location associated with the selected marker. As used herein, jumping may include selectively displaying a particular frame within the video. This may include ceasing to display the frame at the current location in the video (e.g., if the video is currently playing) and displaying the frame at the location associated with the selected marker. For example, if the user clicks or otherwise selects the marker 432 (as shown in fig. 4), the frame at the location associated with the marker 432 may be displayed in the video playback zone 410. In some embodiments, the video may continue to play from that location. The position indicator 424 may move to the position within the timeline 420 associated with the marker 432, and the colored region 422 may be updated accordingly. While the present embodiment is described as enabling the surgeon to select the one or more markers, it should be understood that this is merely an example and the present disclosure is not limited to any form of user. Various other users may view and interact with the overlaid timeline, including surgical technicians, nurses, physician assistants, anesthesiologists, doctors or any other medical professionals, as well as patients, insurance companies, medical students, and the like. Other examples of users are provided herein.

According to embodiments of the present disclosure, markers may be automatically generated based on information in the video at a given location and included in the timeline. In some implementations, computer analysis can be used to analyze frames of the video footage and identify markers to be included at various locations in the timeline. Computer analysis may include any form of electronic analysis using a computing device. In some implementations, the computer analysis can include identifying features of one or more frames of the video footage using one or more image recognition algorithms. Computer analysis may be performed on a single frame, or may be performed across multiple frames, e.g., to detect motion or other changes between frames. In some implementations, the computer analysis may include an object detection algorithm, such as Viola-Jones object detection, Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG) features, a Convolutional Neural Network (CNN), or any other form of object detection algorithm. Other example algorithms may include: a video tracking algorithm, a motion detection algorithm, a feature detection algorithm, a color-based detection algorithm, a texture-based detection algorithm, a shape-based detection algorithm, a boosting-based detection algorithm, a face detection algorithm, or any other suitable algorithm for analyzing video frames. In one example, a machine learning model may be trained using training examples to generate markers for videos, and the trained machine learning model may be used to analyze a video and generate markers for it. The markers so generated may include the location of the marker within the video, the type of marker, a characteristic of the marker, and so forth. Examples of such training examples may include video footage depicting at least part of a surgical procedure, together with a list of desired markers to be generated, possibly together with information about the respective desired markers, such as the location of the marker in the video, the type of marker, a characteristic of the marker, and so forth.
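A minimal, illustrative Python sketch of the post-processing step described above is shown below: per-frame event predictions (as might be produced by a trained model) are collapsed into one marker per contiguous run of the same event type. The `Marker` class, the label strings, and the sample predictions are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Marker:
    frame: int          # first frame of the detected occurrence
    event_type: str     # e.g. "incision", "bleeding", "decision_node"

def frames_to_markers(predictions: List[Optional[str]]) -> List[Marker]:
    """Collapse per-frame event predictions into one marker per contiguous
    run of the same predicted event type (None means background)."""
    markers: List[Marker] = []
    previous: Optional[str] = None
    for frame_index, label in enumerate(predictions):
        if label is not None and label != previous:
            markers.append(Marker(frame=frame_index, event_type=label))
        previous = label
    return markers

# Hypothetical per-frame output of a trained classifier.
predicted = [None, None, "incision", "incision", None, "bleeding", "bleeding"]
print(frames_to_markers(predicted))
# [Marker(frame=2, event_type='incision'), Marker(frame=5, event_type='bleeding')]
```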

Such computer analysis may be used to identify surgical stages, intraoperative events, event characteristics, and/or other features appearing in the video footage. For example, in some embodiments, computer analysis may be used to identify one or more medical instruments used in the surgical procedure, e.g., as described above. Based on the identification of a medical instrument, a particular intraoperative event may be identified at a location in the video footage associated with the medical instrument. For example, a scalpel or other instrument may indicate that an incision is being made, and a marker identifying the incision may be included in the timeline at that location. In some embodiments, anatomical structures may be identified in the video footage using computer analysis, e.g., as described above. For example, the disclosed methods may include identifying an organ, tissue, fluid, or other structure of the patient to determine the markers, and their respective locations, to include in the timeline. In some embodiments, the location of a marker may be determined based on an interaction between a medical instrument and an anatomical structure, which may indicate a particular intraoperative event, the type of surgical procedure, a characteristic of the event, or other information useful in identifying the location of the marker. For example, a visual motion recognition algorithm may be used to analyze the video and detect interactions between medical instruments and anatomical structures. Other examples of features that may be detected in the video footage and used to place markers may include: motions of the surgeon or other medical professionals, patient characteristics, characteristics of the surgeon or of other medical professionals, the sequence of the surgical procedure being performed, the timing of a procedure or event, characteristics of the anatomical structure, medical conditions, or any other information that can be used to identify a particular surgical procedure, surgical stage, intraoperative event, and/or event characteristic appearing in the video footage.
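As a rough illustration of the instrument-anatomy interaction cue mentioned above, the sketch below flags frames in which a detected instrument bounding box overlaps a detected anatomical-structure bounding box. The detector output, object names, and coordinates are hypothetical; a real system would likely use richer geometric and temporal criteria than simple box overlap.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def boxes_overlap(a: Box, b: Box) -> bool:
    """True if the two axis-aligned bounding boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def interaction_frames(
    detections: Dict[int, Dict[str, List[Box]]],
    instrument: str,
    anatomy: str,
) -> List[int]:
    """Return frame numbers in which an instrument box overlaps an
    anatomical-structure box (a crude interaction cue)."""
    frames = []
    for frame, objects in sorted(detections.items()):
        hit = any(
            boxes_overlap(i_box, a_box)
            for i_box in objects.get(instrument, [])
            for a_box in objects.get(anatomy, [])
        )
        if hit:
            frames.append(frame)
    return frames

# Hypothetical detector output for three frames.
detections = {
    10: {"scalpel": [(0.1, 0.1, 0.3, 0.3)], "liver": [(0.5, 0.5, 0.9, 0.9)]},
    11: {"scalpel": [(0.45, 0.45, 0.6, 0.6)], "liver": [(0.5, 0.5, 0.9, 0.9)]},
    12: {"liver": [(0.5, 0.5, 0.9, 0.9)]},
}
print(interaction_frames(detections, "scalpel", "liver"))  # [11]
```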

In some implementations, the marker locations can be identified using a trained machine learning model. For example, a machine learning model may be trained using training examples, each of which may include video footage known to be associated with a surgical procedure, a surgical stage, an intraoperative event, and/or an event characteristic, together with a label indicating a location within the footage. Using the trained machine learning model, similar stages and events can be identified in other video footage to determine marker positions. Various machine learning models may be used, including logistic regression models, linear regression models, random forest models, K-nearest neighbor (KNN) models, K-means models, decision trees, Cox proportional hazards regression models, naive Bayes models, Support Vector Machine (SVM) models, gradient boosting algorithms, artificial neural networks (such as deep neural networks, convolutional neural networks, etc.), or any other form of machine learning model or algorithm.
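The sketch below shows one of the listed model families (a random forest, here via scikit-learn) trained on synthetic per-frame feature vectors labeled with a surgical stage; this is only a minimal stand-in for the training setup described above, and the feature construction, sizes, and accuracy signal are fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-frame feature vectors (e.g. pooled image
# embeddings) labeled with the surgical stage they belong to.
rng = np.random.default_rng(0)
n_frames, n_features, n_stages = 600, 32, 4
X = rng.normal(size=(n_frames, n_features))
y = rng.integers(0, n_stages, size=n_frames)
X += y[:, None] * 0.75  # shift each stage's features so a signal exists

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# At inference time, per-frame stage predictions can be turned into
# candidate marker locations (see the grouping sketch earlier).
predicted_stages = model.predict(X_test[:10])
```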

In some implementations, markers may be identified in conjunction with the video indexing techniques discussed above. As described above, video footage may be indexed based on the surgical stages, intraoperative events, and/or event characteristics identified in the footage. This information may be stored in a data structure, such as data structure 600 described with reference to FIG. 6. The data structure may include footage locations and/or event locations associated with the stages and events within the video footage. In some implementations, the markers displayed in the timeline may correspond to these locations in the video. Thus, any of the techniques or processes described above for indexing video footage may similarly be applied to determine marker positions for presentation in the timeline.
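As a small sketch of how indexed entries could be turned into timeline markers, the snippet below converts hypothetical index rows (event name plus start/end frames, loosely analogous to the data structure described above) into fractional positions along a timeline bar. The field names and frame numbers are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IndexedEvent:
    """One hypothetical index entry: an event name plus its footage location."""
    name: str
    start_frame: int
    end_frame: int

def timeline_positions(events: List[IndexedEvent], total_frames: int):
    """Convert indexed event locations into fractional positions (0.0-1.0)
    along a timeline bar of arbitrary on-screen width."""
    return [
        {"name": e.name, "position": e.start_frame / total_frames}
        for e in sorted(events, key=lambda e: e.start_frame)
    ]

index = [
    IndexedEvent("calot_triangle_dissection", 1800, 5400),
    IndexedEvent("bleeding", 4200, 4350),
    IndexedEvent("clipping_and_cutting", 5400, 7200),
]
for marker in timeline_positions(index, total_frames=9000):
    print(marker)
```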

According to various exemplary embodiments of the present disclosure, the markers may be encoded by at least one of a color or a criticality level. The encoding of a marker may be any indicator of the type, nature, or character of the marker. This encoding may be useful for a user to visually determine which locations of the video may be of interest. Where a marker is encoded by color, the color of the marker displayed on the surgical timeline may indicate characteristics or features of the marker based on a predetermined color scheme. For example, markers may have different colors depending on what type of intraoperative surgical event they represent. In some example embodiments, markers associated with incisions, resections, ligations, grafts, or various other events may each be displayed in a different color. In other embodiments, intraoperative adverse events may be associated with one color (e.g., red), while planned events may be associated with another color (e.g., green). In some embodiments, color scales may be used. For example, the severity of an adverse event may be represented by a color scale ranging from yellow to red, or by another suitable color scale.

In some implementations, the location and/or size of the marker can be associated with a criticality level. The criticality level may indicate the relative importance of the event, action, technique, phase, or other occurrence identified by the marker. Thus, as used herein, the term "criticality level" refers to any measure of immediate need for action to prevent dangerous consequences within a surgical procedure. For example, the criticality level may include, for example, a numerical measure (such as "1.12", "3.84", "7", "-4.01", etc.) that is within a particular range of values. In another example, the criticality level may include a limited number of discrete levels (such as "level 0", "level 1", "level 2", "high criticality", "low criticality", "non-criticality", etc.).
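A minimal sketch of the yellow-to-red severity scale and discrete criticality levels discussed above is shown below. The 0-10 range, the thresholds, and the hex colors are illustrative choices rather than values prescribed by the disclosure.

```python
def criticality_to_color(level: float) -> str:
    """Map a numeric criticality score onto a yellow-to-red scale.

    The 0-10 range and the thresholds below are illustrative assumptions.
    """
    clamped = max(0.0, min(10.0, level))
    if clamped < 3.0:
        return "#F5D90A"   # yellow: low criticality
    if clamped < 7.0:
        return "#F59E0B"   # orange: intermediate criticality
    return "#DC2626"       # red: high criticality

for score in (1.12, 3.84, 7.0):
    print(score, criticality_to_color(score))
```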

While color is provided as one example of differentiating the appearance of markers to represent information, various other techniques may be used. For example, markers may have variable sizes, shapes, positions, orientations, font sizes, font types, font colors, marker animations, or other visual characteristics. In some implementations, markers can be associated with different icons according to the type of event, action, or stage with which they are associated. For example, as shown in fig. 4, a marker 432 that may be associated with a decision-making node may have a different icon than a marker 434, which may be associated with another type of event, such as a complication. The icon may represent the type of intraoperative event associated with the location. For example, marker 436 may indicate that an incision occurred at that location in the video. Icons (or other visual characteristics) may be used to distinguish between unplanned events and planned events, the type of error (e.g., miscommunication, misjudgment, or otherwise), the particular adverse event that occurred, the type of technique being performed, the surgical stage being performed, the location of the surgical event during the procedure (e.g., at the abdominal wall, etc.), the surgeon performing the procedure, the outcome of the procedure, or various other information.

In some exemplary embodiments, the one or more indicia may include decision-making node indicia corresponding to decision-making nodes of the surgical procedure. In some embodiments, such decision-making node indicia may be visually distinct from other forms or types of indicia. As an illustrative example, the decision-making node indicia may have an icon indicating that a location is associated with the decision-making node, as shown in fig. 4 by indicia 432. As used herein, a decision-making node may refer to any part of a procedure in which a decision is made, or in which a decision in a selected type of decision or a decision in a plurality of selected types of decisions is made. For example, a decision-making node marker may indicate a location of a video depicting a surgical procedure where multiple courses of action may be performed and the surgeon chooses to follow one course instead of another. For example, the surgeon may decide whether to deviate from the planned surgery, take preventative action, remove an organ or tissue, use a particular instrument, use a particular surgical technique, or any other intra-operative decision that the surgeon may encounter. In one example, a decision-making node may refer to a portion of a procedure where decisions are made that have a significant impact on the outcome of the procedure. In another example, a decision-making node may refer to a portion of a procedure where decisions must be made without explicit decision guidance. In yet another example, a decision-making node may refer to a portion of a procedure where a surgeon is faced with two or more viable alternatives, and selection of a better alternative among the two or more viable alternatives (e.g., an alternative predicted to reduce a particular risk, an alternative predicted to improve outcomes, an alternative predicted to reduce costs, etc.) is based at least on a particular number of factors (e.g., based at least on two factors, based at least on five factors, based at least on ten factors, based at least on a hundred factors, etc.). In additional examples, a decision-making node may refer to a portion of an operation where a surgeon is faced with a particular type of decision, and where the particular type is included in a set of selected decision types.

Decision-making nodes may be detected using the computer analysis described above. In some embodiments, the video footage may be analyzed to identify a particular action or sequence of actions performed by the surgeon that may indicate that a decision has been made. For example, if the surgeon pauses during the procedure, starts using a different medical instrument, or changes to a different course of action, this may indicate that a decision has been made. In some embodiments, the decision-making node may be identified based on a surgical stage or intraoperative event identified at the location in the video footage. For example, an adverse event such as bleeding may be detected, which may indicate that a decision must be made as to how to address the adverse event. As another example, a particular stage of a surgical procedure may be associated with a number of possible courses of action. Thus, detection of that surgical stage in the video footage may indicate a decision-making node. In some implementations, a trained machine learning model can be used to identify decision-making nodes. For example, a machine learning model may be trained using training examples to detect decision-making nodes in videos, and the trained machine learning model may be used to analyze the video and detect the decision-making nodes. Examples of such training examples may include video footage, together with a marker indicating the location of a decision-making node within the footage, or together with a label indicating that no decision-making node is present in the footage.

The selection of decision-making node markers may enable a surgeon to view two or more alternative video clips from two or more corresponding other surgeries, thereby enabling the viewer to compare alternative methods. An alternative video clip may be any video clip that exemplifies a procedure other than the one currently being displayed to the user. Such alternative video clips may be extracted from other video clips that are not included in the current video being output for display. Alternatively, if the current video clip includes a compilation of different procedures, then an alternative clip may be extracted from different locations of the current video clip being displayed. The other surgical procedure may be any other surgical procedure than the particular procedure depicted in the current video being output for display. In some embodiments, the other surgical procedures may be the same type of surgical procedure depicted in the video being output for display, but performed at different times, for different patients, and/or by different surgeons. In some embodiments, the other surgical procedures may not be the same type of procedure, but may share the same or similar decision-making nodes as the surgical procedures identified by the decision-making node indicia. In some implementations, the two or more video clips may exhibit different behaviors. For example, the two or more video clips may represent alternative action selections compared to actions taken in the current video, as represented by decision-making node markers.

The alternative video clips may be presented in various ways. In some embodiments, selecting a decision-making node marker may automatically cause display of the two or more alternative video clips. For example, one or more of the alternative video clips can be displayed in the video playback zone 410. In some implementations, the video playback zone can be split or divided to show one or more of the alternative video clips and/or the current video. In some implementations, the alternative video clips can be displayed in another area, such as above, below, or to one side of the video playback zone 410. In some implementations, the alternative video clips may be displayed in a second window, on another screen, or in any other space outside the video playback zone 410. According to other embodiments, selecting the decision-making node marker may open a menu or otherwise display options for viewing the alternative video clips. For example, selecting the decision-making node marker may bring up a menu of alternative videos containing depictions of the behavior shown in each associated alternative video clip. Alternative video clips may be presented as thumbnails, text-based descriptions, video previews (e.g., playing a smaller-resolution version or a shortened clip), and so forth. The menu may be superimposed on the video, may be displayed in conjunction with the video, or may be displayed in a separate area.

In accordance with embodiments of the present disclosure, selection of a decision-making node indicia may result in the display of one or more alternative possible decisions related to the selected decision-making node indicia. Similar to alternative videos, alternative possible decisions may be superimposed on the timeline and/or video, or may be displayed in a separate area, such as above, below, and/or to one side of the video, in a separate window, on a separate screen, or in any other suitable manner. An alternative possible decision may be a list of alternative decisions that the surgeon may have made at the decision-making node. The list may further include: images (e.g., depicting alternative actions), flow charts, statistics (e.g., success rate, failure rate, usage rate, or other statistical information), detailed descriptions, hyperlinks, or other information associated with alternative possible decisions that may be relevant to the surgeon viewing the playback. Such a list may be interactive, enabling a viewer to select an alternative course of action from the list, and thereby cause a video clip of the alternative course of action to be displayed.

Further, in some embodiments, one or more estimates associated with the one or more alternative possible decisions may be displayed in conjunction with the display of the one or more alternative possible decisions. For example, the list of alternative possible decisions may include an estimated outcome for each of the alternative possible decisions. The estimated outcomes may include results that are predicted to have occurred had the surgeon taken the alternative possible decision. Such information may be helpful for training purposes. For example, the surgeon may determine that a more appropriate action could have been taken than the one shown in the video, and may plan future surgeries accordingly. In some embodiments, each of the alternative possible decisions may be associated with multiple estimated outcomes, and a probability of each may be provided. The one or more estimates may be determined in various ways. In some embodiments, the estimated outcome may be based on known probabilities associated with the alternative possible decisions. For example, summary data from previous surgical procedures with similar decision-making nodes may be used to predict the outcome of an alternative possible decision associated with the marker. In some embodiments, the probabilities and/or data may be tailored to one or more features or characteristics of the current surgical procedure. For example, patient characteristics (such as the patient's medical condition, age, weight, medical history, or other characteristics), the surgeon's skill level, the difficulty of the surgical procedure, the type of procedure, or other factors may be considered in determining the estimated outcome. Other features may also be analyzed, including the event characteristics described above with reference to video indexing.
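The sketch below illustrates, in the simplest possible form, how summary data from past cases might be filtered by a patient characteristic (here, only age) and aggregated into per-decision outcome rates. The record fields, the age window, and the binary outcome are assumptions for illustration; a real system would condition on many more characteristics and richer outcome measures.

```python
from collections import defaultdict
from typing import Dict, List

def estimate_outcomes(
    past_cases: List[Dict],
    patient_age: int,
    age_window: int = 10,
) -> Dict[str, float]:
    """For each alternative decision, return the fraction of similar past
    cases (patients within an age window) with a positive outcome."""
    counts = defaultdict(lambda: [0, 0])  # decision -> [positive, total]
    for case in past_cases:
        if abs(case["age"] - patient_age) <= age_window:
            stats = counts[case["decision"]]
            stats[1] += 1
            if case["outcome_positive"]:
                stats[0] += 1
    return {
        decision: positive / total
        for decision, (positive, total) in counts.items()
        if total > 0
    }

# Hypothetical summary data extracted from past surgical videos/records.
history = [
    {"decision": "repair", "age": 58, "outcome_positive": True},
    {"decision": "repair", "age": 61, "outcome_positive": True},
    {"decision": "resect", "age": 63, "outcome_positive": False},
    {"decision": "resect", "age": 35, "outcome_positive": True},
]
print(estimate_outcomes(history, patient_age=60))
# {'repair': 1.0, 'resect': 0.0}  (the age-35 case falls outside the window)
```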

In accordance with the present disclosure, a decision-making node of the surgical procedure may be associated with a first patient, and the corresponding similar decision-making nodes may be selected from past surgical procedures associated with patients having similar characteristics to the first patient. Past surgical procedures may be pre-selected or automatically selected based on estimates similar to those of the corresponding similar decision-making nodes, or because of similarities between the patient in the current video and the patients in the past surgical procedures. These similarities or characteristics may include the patient's sex, age, weight, height, fitness, heart rate, blood pressure, body temperature, whether the patient exhibits a particular medical condition or disease, medical history, or any other significant trait or condition that may be relevant.

Similarly, in some embodiments, a decision-making node of the surgical procedure can be associated with a first medical professional, and corresponding similar past decision-making nodes can be selected from past surgical procedures associated with medical professionals having similar characteristics to the first medical professional. These characteristics may include, but are not limited to: the medical professional's age, medical background, level of experience (e.g., the number of times the surgeon has performed this or a similar surgical procedure, the total number of surgical procedures the surgeon has performed, etc.), skill level, training history, success rate in this or other surgical procedures, or other characteristics that may be relevant.

In some exemplary embodiments, the decision-making node of the surgical procedure is associated with a first prior event in the surgical procedure, and the similar past decision-making nodes are selected from past surgical procedures that include prior events similar to the first prior event. In one example, a prior event may be determined to be similar to the first prior event based on, e.g., the type of the prior event, characteristics of the prior event, and so forth. For example, the prior event and the first prior event may be determined to be similar when a similarity measure between the two is above a selected threshold. Some non-limiting examples of such similarity measures are described above. The occurrence and/or characteristics of prior events may be relevant to determining an estimated outcome for an alternative possible decision. For example, if a surgeon encounters a complication with a patient, the complication may be at least partially determinative of the most appropriate outcome, while in the absence of the complication a different outcome may be appropriate. The first prior event may include, but is not limited to, any of the intraoperative events detailed above. Some non-limiting characteristics of the first prior event may include any of the event characteristics described above. For example, the first prior event may include an adverse event or complication, such as bleeding, mesenteric emphysema, an injury, conversion to unplanned open surgery, an incision significantly larger than planned, hypertension, hypotension, bradycardia, hypoxemia, adhesions, hernias, atypical anatomy, dural tears, perforator injury, arterial infarction, and the like. The first prior event may also include a determined or planned event, such as a successful incision, administration of a drug, use of a surgical instrument, a resection, a ligation, an implantation, suturing, stapling, or any other event.

In accordance with the present disclosure, decision-making nodes of the surgical procedure may be associated with a medical condition, and corresponding similar decision-making nodes may be selected from past surgical procedures associated with patients having similar medical conditions. The medical condition may include any condition of the patient related to the patient's health or well-being. In some embodiments, the medical condition may be the condition being treated by the surgical procedure. In other embodiments, the medical condition may be a separate medical condition. The medical condition may be determined in various ways. In some implementations, the medical condition may be determined based on data associated with the plurality of videos. For example, a video may be tagged with information including a medical condition. In other embodiments, the medical condition may be determined by analysis of the at least one video and may be based on the appearance of anatomical structures appearing in the at least one video. For example, the color of a tissue, the color of one tissue relative to another tissue, the size of an organ, the size of one organ relative to another organ, the appearance of the gallbladder or another organ, the presence of lacerations or other markings, or any other visual indicator associated with an anatomical structure may be analyzed to determine the medical condition. In one example, a machine learning model may be trained using training examples to determine medical conditions from videos, and the trained machine learning model may be used to analyze the at least one video and determine the medical condition. Examples of such training examples may include video footage of a surgical procedure, together with a label indicating one or more medical conditions.

In some aspects of the disclosure, information related to a distribution of past decisions made at the respective similar past decision-making nodes may be displayed in conjunction with the display of the alternative possible decisions. For example, as described above, a particular decision-making node may be associated with multiple possible decisions on the course of action. Past decisions may include decisions made by surgeons when faced with the same or similar decision-making nodes in previous surgical procedures. For example, each of the past decisions may correspond to one of the above-described alternative possible decisions. Thus, as used herein, a corresponding similar past decision-making node refers to a decision-making node that occurred when a past decision was made during a past surgical procedure. In some embodiments, the respective similar past decision-making nodes may be the same as the decision-making node identified by the marker. For example, if the decision-making node is an adverse event such as bleeding, the past decisions may correspond to how other surgeons have addressed the bleeding in previous surgical procedures. In other embodiments, the decision-making nodes may be different but similar. For example, the possible decisions a surgeon makes when encountering a dural tear may be similar to those for other forms of tears, and thus the distribution of past decisions associated with dural tears may be relevant to other forms of tears. Past decisions may be identified by analyzing video footage, for example, using the computer analysis techniques described above. In some embodiments, past decisions may be indexed using the video indexing techniques described above so that they can be easily accessed to display the distribution of past decisions. In one example, the distribution may comprise a conditional distribution, such as a distribution presenting past decisions made at corresponding similar past decision-making nodes having a common characteristic. In another example, the distribution may comprise an unconditional distribution, e.g., a distribution presenting past decisions made at all corresponding similar past decision-making nodes.

The displayed distribution may indicate how common each of the possible decisions is among other alternative possible decisions associated with respective similar past decision nodes. In some implementations, the displayed distribution can include the number of times each of the decisions is made. For example, a particular decision-making node may have three alternative possible decisions: decision a, decision B, and decision C. Based on past decisions made at similar decision-making nodes, a determination may be made as to the number of times each of these alternative possible decisions has been performed. For example, decision a may have been performed 167 times, decision B may have been performed 47 times, and decision C may have been performed 13 times. The distribution may be displayed as a list of individual ones of the alternative possible decisions, and the number of times they have been executed. The displayed distribution may also indicate the relative frequency of individual ones of the decisions, for example, by displaying a ratio, percentage, or other statistical information. For example, the distribution may indicate that decisions A, B and C were performed 73.6%, 20.7%, and 5.7%, respectively, in past decisions. In some implementations, the distribution can be displayed as a graphical representation of the distribution, such as a bar graph, a histogram, a pie chart, a distribution curve, or any other graphical representation that can be used to show the distribution.
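The counts and percentages in the example above can be computed directly, as in the short sketch below; the decision labels are the hypothetical decisions A, B, and C from the text.

```python
from collections import Counter

# Hypothetical past decisions at similar decision-making nodes.
past_decisions = ["A"] * 167 + ["B"] * 47 + ["C"] * 13
counts = Counter(past_decisions)
total = sum(counts.values())

for decision, count in counts.most_common():
    print(f"decision {decision}: {count} times ({100 * count / total:.1f}%)")
# decision A: 167 times (73.6%)
# decision B: 47 times (20.7%)
# decision C: 13 times (5.7%)
```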

In some embodiments, only a subset of the decisions may be displayed. For example, only the most common decisions may be displayed based on the number of times the decision is made (e.g., exceeding a threshold number of times, etc.). Various methods described above for identifying similar past decision-making nodes may be used, including identifying surgical procedures associated with similar medical conditions, patient characteristics, medical professional characteristics, and/or prior events.

In some embodiments, the one or more estimated outcomes may be a result of analyzing videos of a plurality of past surgical procedures that include respective similar decision-making nodes. For example, a repository of video footage may be analyzed using various computer analysis techniques (such as the object and/or motion detection algorithms described above) to identify videos that include decision-making nodes that are the same as, or share similar characteristics with, the decision-making node identified by the marker. This may include identifying other video footage having the same or similar surgical stages, intraoperative surgical events, and/or event characteristics as those used to identify the decision-making node in the video presented with the timeline. The outcomes of the alternative possible decisions may be estimated based on the outcomes of the past surgical procedures. For example, if a particular method of performing a suture consistently results in complete recovery of the patient, that outcome may be estimated for this possible decision and may be displayed on the timeline.

In some exemplary embodiments, the analysis may be implemented using computer vision algorithms. The computer vision algorithms may be the same as or similar to any of the computer vision algorithms described above. One example of such a computer vision algorithm may include the object detection and tracking algorithms described above. Another example of such a computer vision algorithm may include the use of a trained machine learning model. Other non-limiting examples of such computer vision algorithms are described above. For example, if a decision-making node marker is identified based on a particular adverse event occurring in the video, other video footage having the same or a similar adverse event may be identified. The video footage may be further analyzed to determine the outcomes of decisions made in the past surgical videos. This may involve the same or similar computer analysis techniques described above. In some implementations, this may include analyzing the video to determine the outcome of the decision. For example, if a decision-making node is associated with an adverse event involving an anatomical structure (such as a tear), the anatomical structure may be evaluated in frames following the decision to determine whether the adverse event was remedied, how quickly it was remedied, whether additional adverse events occurred, whether the patient survived, or other indicators of the outcome.

In some embodiments, additional information may also be used to determine the outcome. For example, the analysis may be based on one or more electronic medical records associated with the plurality of past surgical videos. For example, the determination may include referencing an electronic medical record associated with a video in which a particular decision was made to determine whether the patient recovered, how quickly the patient recovered, whether there were additional complications, and so forth. Such information may be useful for predicting outcomes that arise later, beyond the timeframe covered by the video footage. For example, the outcome may occur days, weeks, or months after the surgical procedure. In some implementations, the additional information can be used to inform which videos to include in the analysis. For example, using information collected from medical records, videos sharing a similar patient medical history, disease type, diagnosis type, treatment history (including past surgical procedures), healthcare professional identity, healthcare professional skill level, or any other relevant data may be identified. Videos sharing these or other features may provide a more accurate indication of what outcomes may be expected from the various alternative possible decisions.

Similar decision-making nodes may be identified based on how closely related they are to the current decision-making node. In some embodiments, the respective similar decision-making nodes may be similar to the decision-making node of the surgical procedure according to a similarity index. The index may be any value, classification, or other indicator of how closely the decision-making nodes are related. Such an index may be determined based on computer vision analysis to determine how closely the procedures or techniques match. The index may also be determined based on the number of features that the decision-making nodes have in common and the degree to which those features match. For example, two decision-making nodes for patients with similar medical conditions and physical characteristics may be assigned a higher similarity index than two decision-making nodes for more dissimilar patients. Various other features and/or considerations may also be used. Additionally or alternatively, the similarity index may be based on any similarity measure, such as the similarity measures described above. For example, the similarity index may be identical to the similarity measure, may be a function of the similarity measure, and so on.
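One simple way such an index could be formed from shared features is sketched below: a Jaccard score over categorical features blended with a closeness score over shared numeric features. The feature names, the numeric closeness function, and the 0.6/0.4 weights are illustrative assumptions, not values specified by the disclosure.

```python
from typing import Dict, Set

def similarity_index(
    features_a: Set[str],
    features_b: Set[str],
    numeric_a: Dict[str, float],
    numeric_b: Dict[str, float],
) -> float:
    """Blend a Jaccard score over categorical features (e.g. procedure type,
    medical condition) with a closeness score over shared numeric features
    (e.g. patient age, BMI). Weights are illustrative."""
    union = features_a | features_b
    jaccard = len(features_a & features_b) / len(union) if union else 1.0

    shared = numeric_a.keys() & numeric_b.keys()
    if shared:
        closeness = sum(
            1.0 / (1.0 + abs(numeric_a[k] - numeric_b[k])) for k in shared
        ) / len(shared)
    else:
        closeness = 1.0
    return 0.6 * jaccard + 0.4 * closeness

node_a = ({"laparoscopic", "bleeding", "hypertension"}, {"age": 62, "bmi": 31.0})
node_b = ({"laparoscopic", "bleeding"}, {"age": 58, "bmi": 29.5})
print(round(similarity_index(node_a[0], node_b[0], node_a[1], node_b[1]), 3))  # 0.52
```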

Various other marker types may be used in addition to or instead of decision-making node markers. In some implementations, the markers may include intraoperative surgical event markers, which may be associated with a location in the video associated with the occurrence of an intraoperative surgical event. Throughout this disclosure, examples are provided of various intraoperative surgical events that may be identified by markers, including with respect to the video indexing described above. In some embodiments, an intraoperative surgical event marker can be a universal marker indicating that an intraoperative surgical event occurred at the location. In other embodiments, the intraoperative surgical event marker may identify characteristics of the intraoperative surgical event, including the type of event, whether the event is an adverse event, or any other characteristic. FIG. 4 illustrates example markers. As an illustrative example, an icon shown as marker 434 may be used to represent a universal intraoperative surgical event marker. Marker 436, on the other hand, may represent a more specific intraoperative surgical event marker, such as one identifying an incision occurring at that location. The markers shown in fig. 4 are provided as examples, and various other forms of markers may be used.

As described above, these intraoperative surgical event markers may be automatically identified. Using the computer analysis methods described above, medical instruments, anatomical structures, surgeon features, patient features, event features, or other features may be identified in the video clips. For example, the interaction between the identified medical instrument and the anatomical structure may indicate that an incision, suture, or other intraoperative event is being performed. In some embodiments, the intraoperative surgical event markers may be identified based on information provided in a data structure (such as the data structure 600 described above with reference to fig. 6).

Consistent with the disclosed embodiments, selection of intraoperative surgical event markers may enable a surgeon to view alternative video clips from different surgeries. In some embodiments, the alternative video clips may present different ways of handling the selected intraoperative surgical event. For example, in current video, the surgeon may perform an incision or other action according to one technique. Selecting an intra-operative surgical event marker may allow the surgeon to view alternative techniques that may be used to perform an incision or other action. In another example, the intraoperative surgical event may be an adverse event such as bleeding, and an alternative video clip may depict other ways in which the surgeon has dealt with the adverse event. In some embodiments, where the markers relate to intraoperative surgical events, selection of the intraoperative surgical event markers may enable the surgeon to view alternative video clips from different surgical procedures. For example, the different surgical procedures may be of different types (such as laparoscopic surgery and thoracoscopic surgery), but may still include the same or similar intraoperative surgical events. The surgical procedure may also differ in other ways, including different medical conditions, different patient characteristics, different medical professionals, or other differences. Selecting an intra-operative surgical event marker may allow the surgeon to view alternative video clips from different surgical procedures.

As with other embodiments described herein, the alternative video clips may be displayed in various ways. For example, selection of an intraoperative surgical event marker may result in a menu being displayed from which the surgeon may select an alternative video clip. The menu may include: a description of the different ways in which the selected intraoperative surgical event is processed, a thumbnail of the video clip, a preview of the video clip, and/or other information associated with the video clip, such as the date the video clip was recorded, the type of surgery, the name or identity of the surgeon performing the surgery, or any other relevant information.

According to some embodiments of the present disclosure, the at least one video may include a compilation of video clips from a plurality of surgical procedures, arranged in the chronological order of the procedure. The chronological order of the procedure may refer to the order in which events occur relative to the surgical procedure. Thus, a compilation of clips arranged in the chronological order of the procedure may include different events from different patients arranged in the order in which they would occur if the procedure had been performed on a single patient. In other words, although the clips are compiled from various surgical procedures on different patients, playback of the compilation will display the clips in the order in which they appear within a surgical procedure. In some embodiments, the compilation may depict complications from the plurality of surgical procedures. In such embodiments, the one or more markers may be associated with the plurality of surgical procedures and may be displayed on a common timeline. Thus, while the viewer interacts with a single timeline, the video clips presented along the timeline may originate from different procedures and/or different patients. Example complications that may be displayed are described above with reference to video indexing.
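A minimal sketch of this arrangement is shown below: clips drawn from different procedures are ordered by where their phase falls within a single procedure, and each clip's marker is laid out on one common timeline. The phase order list, clip fields, and labels are hypothetical and only illustrate the ordering and offset bookkeeping.

```python
from dataclasses import dataclass
from typing import List

# Illustrative intra-procedure ordering of phases; clips from different
# surgeries are sorted by where their phase falls in this sequence.
PHASE_ORDER = ["preparation", "dissection", "clipping_and_cutting", "closure"]

@dataclass
class Clip:
    source_procedure: str
    phase: str
    duration_s: float
    marker_label: str

def build_compilation(clips: List[Clip]):
    """Order clips by intra-procedure phase and place their markers on one
    common timeline (marker time = cumulative start of its clip)."""
    ordered = sorted(clips, key=lambda c: PHASE_ORDER.index(c.phase))
    timeline, offset = [], 0.0
    for clip in ordered:
        timeline.append(
            {"time_s": offset, "label": clip.marker_label,
             "source": clip.source_procedure}
        )
        offset += clip.duration_s
    return ordered, timeline

clips = [
    Clip("procedure_B", "closure", 40.0, "wound dehiscence"),
    Clip("procedure_A", "dissection", 95.0, "bleeding"),
    Clip("procedure_C", "clipping_and_cutting", 60.0, "misplaced clip"),
]
ordered, markers = build_compilation(clips)
for m in markers:
    print(m)
```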

Fig. 5 is a flow chart illustrating an example process 500 of reviewing a surgical video consistent with the disclosed embodiments. Process 500 may be performed by at least one processor, such as one or more microprocessors. In some implementations, the process 500 is not necessarily limited to the illustrated steps, and any of the various implementations described herein may also be included in the process 500. At step 510, process 500 may include accessing at least one video of a surgical procedure, e.g., as described above. The at least one video may comprise a video clip from a single surgery or may be a compilation of clips from multiple surgeries, as previously discussed. At step 520, process 500 may include causing the at least one video to be output for display. As described above, causing the at least one video to be output for display may include: sending a signal for causing the at least one video to be displayed on a screen or other display device, storing the at least one video in a location accessible to another computing device, sending the at least one video, or any other process or step that may cause the video to be displayed.

At step 530, process 500 may include overlaying a surgical timeline on the at least one video output for display, wherein the surgical timeline includes markers identifying at least one of a surgical phase, an intra-operative surgical event, and a decision-making node. In some embodiments, the surgical timeline may be represented as a horizontal bar displayed with the video. The markers may be represented along the timeline as shapes, icons, or other graphical representations. Fig. 4 provides an example of such an implementation. In other implementations, the timeline may be a text-based list of chronological stages, events, and/or decision-making nodes. The tags may similarly be text based and may be included in a list.

Step 540 may include enabling the surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker. In some embodiments, the surgeon is able to view additional information about the event or occurrence associated with the marker, which may include information from past surgical procedures. For example, a marker may be associated with an intraoperative surgical event, and selection of the marker may enable the surgeon to view alternative video clips of past surgical procedures associated with the intraoperative surgical event. For example, the surgeon may be enabled to view clips from other surgical procedures in which similar intraoperative surgical events were handled differently, different techniques were used, or different outcomes resulted. In some embodiments, the marker may be a decision-making node marker representing a decision made during the surgical procedure. Selecting the decision-making node marker may enable the surgeon to view information about the decision (including alternative decisions). Such information may include videos of past surgical procedures including similar decision-making nodes, lists or distributions of alternative possible decisions, estimated outcomes of alternative possible decisions, or any other relevant information. Based on the steps described in process 500, a surgeon or other user may be able to more effectively and efficiently review surgical videos using a timeline interface.

In preparing a surgical procedure, it is often beneficial for a surgeon to review videos of similar surgical procedures that have already been performed. However, identifying relevant videos or video portions by a surgeon in preparation for a surgical procedure can be overly cumbersome and time consuming. Accordingly, there is a need for an unconventional method of efficiently and effectively indexing surgical video clips based on their content so that a surgeon or other medical professional can easily access and review the video.

Aspects of the present disclosure may relate to video indexing, including methods, systems, apparatuses, and computer-readable media. For example, a surgical event within a surgical stage may be automatically detected in surgical video footage. A viewer may be enabled to jump directly to the event, to view only events with specified characteristics, and so on. In some embodiments, a user may specify an event (e.g., an unintentional injury to an organ) having a characteristic (e.g., a particular complication) within a surgical stage (e.g., dissection) such that video footage of one or more events sharing that characteristic may be presented to the user.

For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method is provided that may involve accessing a video clip to be indexed that includes a clip of a particular surgical procedure. As used herein, video may include any form of recorded visual media, including recorded images and/or sound. For example, the video may include a sequence of one or more images captured by an image capture device (such as cameras 115, 121, 123, and/or 125, as described above in connection with fig. 1). The images may be stored as separate files or may be stored in a combined format, such as a video file, which may include corresponding audio data. In some embodiments, the video may be stored as raw data and/or images output from an image capture device. In other embodiments, the video may be processed. For example, a video file may include: audio Video Interleave (AVI), Flash Video Format (FLV), QuickTime File Format (MOV), MPEG (MPG, MP4, M4P, etc.), Windows Media Video (WMV), Material Exchange Format (MXF), uncompressed format, lossy compressed format, lossless compressed format, or any other suitable Video file format.

Video footage may refer to video that has been captured by an image capture device. In some implementations, video footage may refer to video that includes a sequence of images in the order in which the sequence of images was originally captured. For example, video footage may include video that has not been edited to form a video compilation. In other embodiments, the video footage may be edited in one or more ways, for example to remove frames associated with inactivity or to otherwise compile frames out of the order in which they were originally captured. Accessing the video footage may include retrieving the footage from a storage location, such as a memory device. The video footage may be accessed from local storage (such as a local hard drive) or may be accessed from a remote source (e.g., over a network connection). Consistent with this disclosure, indexing may refer to a process for storing data in a manner that makes retrieving the data more efficient and/or effective. Indexing video footage may include associating one or more properties or indicators with the footage such that the footage can be identified based on those properties or indicators.
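As a toy illustration of associating properties with footage so that it can be retrieved by those properties, the sketch below keeps an in-memory mapping from (property, value) pairs to footage locations and supports a conjunctive lookup. The class, property names, and video identifiers are hypothetical; an actual system would more likely use a database or the data structure described elsewhere in this disclosure.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

class FootageIndex:
    """A minimal in-memory index: each (property, value) pair maps to the
    footage locations (video id, start frame, end frame) it describes."""

    def __init__(self) -> None:
        self._index: Dict[Tuple[str, str], List[Tuple[str, int, int]]] = defaultdict(list)

    def add(self, video_id: str, start: int, end: int, **properties: str) -> None:
        for key, value in properties.items():
            self._index[(key, value)].append((video_id, start, end))

    def lookup(self, **properties: str) -> List[Tuple[str, int, int]]:
        """Return locations matching every requested property (AND query)."""
        result_sets = [
            set(self._index.get((k, v), [])) for k, v in properties.items()
        ]
        if not result_sets:
            return []
        return sorted(set.intersection(*result_sets))

index = FootageIndex()
index.add("vid_001", 1800, 5400, phase="dissection", event="bleeding")
index.add("vid_002", 900, 2400, phase="dissection", event="adhesion")
print(index.lookup(phase="dissection", event="bleeding"))  # [('vid_001', 1800, 5400)]
```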

The surgical procedure may include any medical procedure associated with or involving a manual or operative procedure on the body of a patient. Surgical procedures may include cutting, abrading, suturing, or other techniques involving the physical modification of body tissues and organs. Some examples of such surgical procedures may include: laparoscopic surgery, thoracoscopic surgery, bronchoscopic surgery, microscopic surgery, open surgery, robotic surgery, appendectomy, carotid endarterectomy, carpal tunnel release, cataract surgery, cesarean section, cholecystectomy, colectomy (such as partial colectomy, total colectomy, etc.), coronary angioplasty, coronary artery bypass surgery, debridement (e.g., of wounds, burns, infections, etc.), free skin grafting, hemorrhoidectomy, hip replacement, hysterectomy, hysteroscopy, inguinal hernia repair, knee arthroscopy, knee replacement, mastectomy (such as partial mastectomy, total mastectomy, modified radical mastectomy, etc.), prostatectomy, prostate removal, shoulder arthroscopy, spinal surgery (such as spinal fusion, laminectomy, foraminotomy, discectomy, disc replacement, an intervertebral implant, etc.), tonsillectomy, cochlear implant surgery, resection of a brain tumor (e.g., a meningioma, etc.), interventional procedures such as percutaneous transluminal coronary angioplasty, transcatheter aortic valve replacement, minimally invasive surgery to evacuate a cerebral hemorrhage, or any other medical procedure involving some form of incision. Although the present disclosure is described with reference to surgical procedures, it is to be understood that it may also be applicable to other forms of medical procedures, or to procedures generally.

In some exemplary embodiments, the accessed video footage may include footage captured via at least one image sensor located in at least one of a position above an operating table, within a surgical cavity of a patient, within an organ of a patient, or within a vasculature of a patient. The image sensor may be any sensor capable of recording video. An image sensor located in a position above the operating table may include any image sensor positioned outside the patient's body that is configured to capture images from above the patient. For example, the image sensor may include camera 115 and/or 121 as shown in FIG. 1. In other embodiments, the image sensor may be placed within the patient, for example within a cavity. As used herein, a cavity may include any relatively empty space within an object. Thus, a surgical cavity may refer to a space within a patient's body in which a surgical procedure or operation is being performed or in which surgical tools are present and/or used. It should be understood that the surgical cavity may not be completely empty, but may include tissues, organs, blood, or other fluids present within the body. An organ may refer to any individual region or part of an organism. Some examples of human patient organs may include the heart or the liver. A vasculature may refer to a system or group of blood vessels within an organism. An image sensor located within the surgical cavity, organ, and/or vasculature may include a camera included on a surgical tool inserted into the patient.

Aspects of the present disclosure may include analyzing the video footage to identify a video footage location associated with a surgical stage of the particular surgical procedure. As used herein with reference to video footage, a location may refer to any particular location or range within the footage. In some implementations, the location may include a particular frame or range of frames of the video. Thus, a video footage location may be represented as one or more frame numbers or other identifiers within the video file. In other embodiments, the location may refer to a particular time associated with the video footage. For example, a video footage location may refer to a time index or timestamp within the footage, a time range, a particular start and/or end time, or any other location indicator. In other embodiments, a location may refer to at least one particular location within at least one frame. Thus, a video footage location may also be represented as one or more pixels, voxels, bounding boxes, bounding polygons, bounding shapes, coordinates, and the like.
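The different ways of representing a footage location described above can be modeled as a small set of interchangeable types, as in the sketch below. The type names, fields, and the 30 fps assumption are illustrative only.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class FrameRange:
    start_frame: int
    end_frame: int

@dataclass
class TimeRange:
    start_s: float
    end_s: float

@dataclass
class RegionInFrame:
    frame: int
    x_min: float
    y_min: float
    x_max: float
    y_max: float

FootageLocation = Union[FrameRange, TimeRange, RegionInFrame]

def to_seconds(location: FootageLocation, fps: float = 30.0) -> float:
    """Resolve any of the location forms to a representative start time."""
    if isinstance(location, FrameRange):
        return location.start_frame / fps
    if isinstance(location, TimeRange):
        return location.start_s
    return location.frame / fps

print(to_seconds(FrameRange(1800, 5400)))                   # 60.0
print(to_seconds(TimeRange(60.0, 180.0)))                   # 60.0
print(to_seconds(RegionInFrame(1800, 0.2, 0.2, 0.5, 0.5)))  # 60.0
```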

For purposes of this disclosure, a phase may refer to a particular period or stage of a process or series of events. Thus, a surgical stage may refer to a particular period or stage of a surgical procedure, as described above. For example, the surgical stages of a laparoscopic cholecystectomy may include: trocar placement, preparation, Calot triangle dissection, clipping and cutting of the cystic duct and artery, gallbladder dissection, gallbladder packaging, cleaning and coagulation of the liver bed, gallbladder retraction, and the like. In another example, the surgical stages of cataract surgery may include: preparation, povidone iodine injection, corneal incision, capsulorhexis, phacoemulsification, cortical aspiration, intraocular lens implantation, intraocular lens adjustment, wound closure, and the like. In yet another example, the surgical stages of pituitary surgery may include: preparation, nasal incision, nasal retractor installation, access to the tumor, tumor removal, replacement of the nasal columella, suturing, nasal compression, and the like. Some other examples of surgical stages may include: preparation, incision, laparoscopic positioning, suturing, and the like.

In some implementations, identifying the video filmlet location can be based on user input. The user input may include any information provided by the user. As used with reference to video indexing, the user input may include information related to identifying the video filmlet location. For example, the user may enter a particular frame number, timestamp, time range, start time, and/or stop time, or any other information that may identify the location of a video filmlet. Alternatively, the user input may include an input or selection of a stage, event, procedure, or device used, which may be associated with a particular video clip (e.g., via a lookup table or other data structure). User input may be received through a user interface of a user device, such as a desktop computer, a laptop, a tablet, a mobile handset, a wearable device, an internet of things (IoT) device, or any other device for receiving input from a user. For example, the interface may include: one or more drop down menus having one or more phase name selection lists; a data entry field that allows a user to enter a phase name and/or suggest a phase name once a few letters are entered; a selection list from which the phase names can be selected; a set of selectable icons, each icon associated with a different phase; or any other mechanism that allows a user to identify or select a phase. For example, the user may enter a phase name through a user interface similar to user interface 700, as described in more detail below with reference to FIG. 7. In another example, user input may be received through voice commands and/or voice input, and the user input may be processed using a voice recognition algorithm. In yet another example, user input may be received through gestures (such as hand gestures), and the user input may be processed using gesture recognition algorithms.

In some implementations, identifying the video footage location includes analyzing frames of the video footage using computer analysis. Computer analysis may include any form of electronic analysis using a computing device. In some implementations, the computer analysis can include identifying features of one or more frames of the video footage using one or more image recognition algorithms. Computer analysis may be performed on a single frame, or may be performed across multiple frames, e.g., to detect motion or other changes between frames. In some implementations, the computer analysis may include an object detection algorithm, such as Viola-Jones object detection, Scale Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG) features, a Convolutional Neural Network (CNN), or any other form of object detection algorithm. Other example algorithms may include: a video tracking algorithm, a motion detection algorithm, a feature detection algorithm, a color-based detection algorithm, a texture-based detection algorithm, a shape-based detection algorithm, a boosting-based detection algorithm, a face detection algorithm, or any other suitable algorithm for analyzing video frames. In one example, a machine learning model may be trained using training examples to identify particular locations within a video, and the trained machine learning model may be used to analyze the video footage and identify footage locations. Examples of such training examples may include video footage with a marker indicating a location within the footage, or with a marker indicating that no corresponding location is included within the footage.
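
As a purely illustrative sketch of the machine-learning approach described above, the following fragment trains a generic classifier on synthetic per-frame feature vectors (standing in for real image features) and then reports contiguous runs of positive frames as candidate footage locations; the data, model choice, and feature dimensions are all assumptions, not the disclosed implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training examples: feature vectors labeled 1 inside the target location, else 0.
X_train = rng.normal(size=(2000, 64))
y_train = (X_train[:, 0] > 0.5).astype(int)          # stand-in for real annotations

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Inference: classify each frame of a new clip, then report contiguous positive runs
# as candidate footage locations (frame ranges).
X_clip = rng.normal(size=(500, 64))
frame_flags = clf.predict(X_clip)

locations, start = [], None
for i, flag in enumerate(frame_flags):
    if flag and start is None:
        start = i
    elif not flag and start is not None:
        locations.append((start, i - 1))
        start = None
if start is not None:
    locations.append((start, len(frame_flags) - 1))

print(locations[:5])   # candidate (start_frame, end_frame) ranges
```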

In some embodiments, the computer image analysis may include a neural network model trained using example video frames that include previously identified surgical stages, thereby identifying at least one of a video footage location or a stage label. In other words, frames of one or more videos known to be associated with a particular surgical stage may be used to train a neural network model, e.g., using machine learning algorithms, backpropagation, gradient descent optimization, and so forth. The trained neural network model may then be used to identify whether one or more video frames are also associated with a surgical stage. Some non-limiting examples of such artificial neural networks may include: shallow artificial neural networks, deep artificial neural networks, feedback artificial neural networks, feedforward artificial neural networks, autoencoder artificial neural networks, probabilistic artificial neural networks, time-delay artificial neural networks, convolutional artificial neural networks, recurrent artificial neural networks, long short-term memory artificial neural networks, and the like. In some embodiments, the disclosed method may further include updating the trained neural network model based on at least one of the analyzed frames.

In some aspects of the present disclosure, analyzing the video footage to identify a footage location associated with at least one of a surgical event or a surgical stage may include: performing computer image analysis on the video footage to identify at least one of a start position of the surgical phase for playback or an initiation of the surgical event for playback. In other words, using the computer analysis techniques discussed above, the disclosed method may include identifying the location within the video footage at which a surgical phase or event begins. For example, the start of a surgical event (such as an incision) may be detected using the object detection and/or motion detection algorithms described above. In other embodiments, the start of the incision may be detected based on machine learning techniques. For example, a machine learning model may be trained using video footage and corresponding markers indicating known starting points of incisions or other surgical events and/or procedures. The trained model can then be used to identify similar procedure and/or event start locations within other surgical video footage.

Some aspects of the present disclosure may include generating a stage label associated with a surgical stage. As used herein, a "tag" may refer to any process or indicia by which information is associated or linked to a data set. In some implementations, the tag can be a characteristic of a data file (such as a video file). Thus, generating the tag may include: the properties are written or overwritten within the video file. In some implementations, generating the tag can include: information is written to other files than the video file itself, for example, by associating the video file with a tag in a separate database. The tag may be expressed as a textual message, a numerical identifier, or any other suitable tagging means. As described above, the stage tag may be a tag that identifies a stage of the surgical stage. In one embodiment, the stage tag may be a marker indicating a location in the video where the surgical stage begins, a marker indicating a location in the video where the surgical stage ends, a marker indicating a location in the video in the middle of the surgical stage, or a marker indicating a range of video that encompasses the surgical stage. The tag may be a pointer in the video data itself or may be located in a data structure to allow for the phase location to be found. The stage label may include computer readable information for causing display of the stage, and may also include human readable information for identifying the stage to a user. For example, generating a stage label associated with a surgical stage may include: a tag is generated that includes text, such as "laparoscopic localization," to indicate that the tagged data is associated with the stage of the surgical procedure. In another example, generating a stage tag associated with a surgical stage may include: a binary-coded tag including a surgical stage identifier is generated. In some embodiments, generating the stage label may be based on computer analysis of a video clip depicting the surgical stage. For example, the disclosed method may include the steps of: the subject and motion detection analysis methods described above are used to analyze a short slice of the surgical stage to determine a stage label. For example, if it is known that a stage using a particular type of medical device or other tool used in a unique manner or in a unique sequence begins or ends, image recognition may be performed on the video filmlet to identify the particular stage by the image recognition performed, thereby identifying the unique use of the tool to identify the particular stage. Generating the phase tag may further include: using trained machine learning models or neural network models (such as deep neural networks, convolutional neural networks, etc.), they may be trained to associate one or more video frames with one or more stage labels. For example, the training examples can be fed to a machine learning algorithm to develop a model configured to associate other video filmed data with one or more stage labels. Examples of such training examples may include a video filmlet and a flag indicating a desired label or lack of a desired label corresponding to the video filmlet. Such markers may include: an indication of one or more locations within the video clip corresponding to the surgical stage, an indication of the type of surgical stage, an indication of the characteristics of the surgical stage, etc.
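
One possible way to realize the tag-generation step described above is to write the stage tag to a sidecar file rather than into the video file itself, as sketched below; the file layout, function name, and field names are hypothetical and shown only for illustration.

```python
import json
from pathlib import Path

def write_stage_tag(video_path: str, stage_name: str, start_frame: int, end_frame: int) -> Path:
    """Append a stage tag record to a sidecar JSON file next to the video (illustrative only)."""
    sidecar = Path(video_path).with_suffix(".tags.json")
    tags = json.loads(sidecar.read_text()) if sidecar.exists() else []
    tags.append({
        "type": "stage",
        "label": stage_name,                 # human-readable, e.g. "laparoscopic positioning"
        "start_frame": start_frame,          # machine-readable location
        "end_frame": end_frame,
    })
    sidecar.write_text(json.dumps(tags, indent=2))
    return sidecar

# Example usage with invented identifiers and frame numbers.
write_stage_tag("lap_chole_0017.mp4", "Calot triangle dissection", 4500, 9800)
```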

The method according to the present disclosure may comprise the steps of: the phase tag is associated with the video filmlet location. Any suitable means may be used to associate the stage labels with the video filmlet locations. Such tags may include: an indication of one or more locations within the video clip corresponding to the surgical stage, an indication of the type of surgical stage, an indication of the characteristics of the surgical stage, etc. In some implementations, video filmed locations can be included in the label. For example, the tags may include: a timestamp, a time range, a frame number, or other means for associating a stage tag with a video filmlet location. In other embodiments, the tags may be associated with video filmlet locations in a database. For example, the database may include information linking stage tags to video clips and specific video clip locations. The database may include data structures, as described in further detail.
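
As a minimal illustration of associating a stage tag with a footage location in a database, the following sketch uses an in-memory SQLite table; the schema and identifiers are assumptions rather than the disclosed design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # illustrative; a real system would persist this
conn.execute("""
    CREATE TABLE stage_tags (
        video_id    TEXT,
        stage_label TEXT,
        start_frame INTEGER,
        end_frame   INTEGER
    )
""")
conn.execute(
    "INSERT INTO stage_tags VALUES (?, ?, ?, ?)",
    ("lap_chole_0017", "Calot triangle dissection", 4500, 9800),
)

# Look up every footage location associated with a given stage label.
rows = conn.execute(
    "SELECT video_id, start_frame, end_frame FROM stage_tags WHERE stage_label = ?",
    ("Calot triangle dissection",),
).fetchall()
print(rows)   # [('lap_chole_0017', 4500, 9800)]
```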

Embodiments of the present disclosure may further include: analyzing the video footage to identify event locations of particular intraoperative surgical events within the surgical session. An intraoperative surgical event can be any event or action that occurs during a surgical procedure or stage. In some embodiments, the intraoperative surgical event may include an action performed as part of a surgical procedure, such as an action performed by a surgeon, surgical technician, nurse, physician's assistant, anesthesiologist, physician, or any other medical professional. The intraoperative surgical event can be a planned event, such as an incision, administration of a drug, use of a surgical instrument, resection, ligation, implantation, suturing, stapling, or any other planned event associated with a surgical procedure or stage. In some embodiments, the intraoperative surgical event may include an adverse event or complication. Some examples of intraoperative adverse events may include: bleeding, mesenteric emphysema, injury, conversion to unplanned open surgery (e.g., abdominal wall incision), an incision significantly larger than planned, and the like. Some examples of intraoperative complications may include: hypertension, hypotension, bradycardia, hypoxemia, adhesions, hernia, atypical dissection, dural tears, periodor injury, arterial infarction, and the like. Intraoperative events may include other errors, including: technical errors, communication errors, management errors, judgment errors, decision-making errors, errors related to medical device usage, miscommunication, and the like.

The event location may be a location or range within the video clip associated with the intraoperative surgical event. Similar to the phase positions described above, the event position may be expressed in terms of a particular frame of the video filmlet (e.g., a frame number or a frame number range), or based on temporal information (e.g., a timestamp, a time range, or start and end times), or any other manner for identifying a position within the video filmlet. In some implementations, analyzing the video filmlet to identify the event location can include: the frames of the video filmlet are analyzed using computer analysis. The computer analysis may include any of the techniques or algorithms described above. As with phase recognition, event recognition may be based on the detection of actions and tools used in a manner that uniquely identifies the event. For example, image recognition may identify when a particular organ is incised to enable marking of the incisional event. In another example, image recognition may be used to record the severing of a vessel or nerve to enable the marking of the adverse event. Image recognition may also be used to label events by detecting bleeding or other fluid loss. In some implementations, analyzing the video filmlet to identify the event location can include: neural network models (such as deep neural networks, convolutional neural networks, etc.) are used, which are trained using example video frames that include previously identified surgical events, to identify event locations. In one example, a machine learning model may be trained using training examples to identify locations of intraoperative surgical events in a video portion, and the trained machine learning model may be used to analyze a video clip (or a portion of the video clip corresponding to a surgical phase) and identify event locations within the surgical phase for a particular intraoperative surgical event. Examples of such training examples may include a video filmlet and a marker indicating the location of a particular event within the video filmlet or the absence of such an event.
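
As one hedged illustration of event detection based on fluid loss, the fragment below applies a simple red-dominance heuristic to synthetic frames in order to flag candidate bleeding frames; this is a toy proxy for the computer analysis described above, not the actual detection method, and the threshold values are arbitrary.

```python
import numpy as np

def red_dominance(frame_rgb: np.ndarray) -> float:
    """Fraction of pixels where the red channel strongly dominates (illustrative proxy for bleeding)."""
    r = frame_rgb[..., 0].astype(float)
    g = frame_rgb[..., 1].astype(float)
    b = frame_rgb[..., 2].astype(float)
    mask = (r > 100) & (r > 1.5 * g) & (r > 1.5 * b)
    return float(mask.mean())

def candidate_bleeding_frames(frames: list, threshold: float = 0.15) -> list:
    """Return indices of frames whose red dominance exceeds a (tunable) threshold."""
    return [i for i, f in enumerate(frames) if red_dominance(f) > threshold]

# Synthetic example: 10 gray frames with a strongly red region injected into frame 6.
frames = [np.full((120, 160, 3), 90, dtype=np.uint8) for _ in range(10)]
frames[6][40:100, 40:140, 0] = 200    # boost the red channel in a large patch
print(candidate_bleeding_frames(frames))   # [6]
```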

Some aspects of the present disclosure may relate to associating an event tag with an event location of a particular intraoperative surgical event. As discussed above, tags may include any manner for associating information with data or a portion of data. Event tags can be used to associate data or portions of data with events, such as intraoperative surgical events. Similar to the phase tags, associating the event tags with the event locations may include: data is written to the video file, e.g., to a property of the video file. In other embodiments, associating the event tag with the event location may include: the data is written to a file or database that associates event tags with video clips and/or event locations. Alternatively, associating the event tag with the event location may include: the mark is recorded in a data structure, wherein the data structure associates a tag with a specific location or range of locations in the video filmlet. In some implementations, the same file or database may be used to associate the phase tags with the video clips as event tags. In other embodiments, a separate file or database may be used.

Consistent with the present disclosure, the disclosed method may include the steps of: event features associated with a particular intraoperative surgical event are stored. An event feature may be any salient feature or characteristic of an event. For example, the event characteristics may include characteristics of the patient or surgeon, characteristics or features of the surgical event or surgical stage, or various other salient features. Examples of features may include: excess adipose tissue, enlarged organs, tissue decay, broken bones, disc displacement, or any other physical characteristic associated with the event. Some features may be discerned visually by a computer, while other features may be discerned by manual input. In the latter example, the age or age range of the patient may be stored as an event characteristic. Similarly, aspects of a patient's prior medical history may be stored as event signatures (e.g., a patient with diabetes). In some embodiments, the stored event characteristics may be used to distinguish intraoperative surgical events from other similar events. For example, a physician may be allowed to search a video clip to identify one or more coronary bypass surgeries performed on men over the age of 70 with arrhythmia. Various other examples of stored event characteristics that may be used are provided below.
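
The following sketch illustrates how stored event features might support the kind of filtered search mentioned above (e.g., coronary bypass procedures performed on men over 70 with arrhythmia); the records and field names are invented for illustration.

```python
# Illustrative event-feature records; identifiers, field names, and values are assumptions.
events = [
    {"video_id": "cabg_0042", "event": "coronary bypass", "patient_sex": "male",
     "patient_age": 74, "history": ["arrhythmia", "diabetes"]},
    {"video_id": "cabg_0051", "event": "coronary bypass", "patient_sex": "female",
     "patient_age": 68, "history": ["hypertension"]},
    {"video_id": "lap_chole_0017", "event": "incision", "patient_sex": "male",
     "patient_age": 71, "history": ["arrhythmia"]},
]

def matches(rec, event, sex, min_age, condition):
    """True when a record satisfies all of the selected event features."""
    return (rec["event"] == event and rec["patient_sex"] == sex
            and rec["patient_age"] > min_age and condition in rec["history"])

hits = [r["video_id"] for r in events
        if matches(r, "coronary bypass", "male", 70, "arrhythmia")]
print(hits)   # ['cabg_0042']
```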

The stored event characteristics may be determined in various ways. Some aspects of the disclosed methods may involve determining stored event characteristics based on user input. For example, the user may enter the event characteristics to be stored via a user interface similar to that described above in connection with the selection of a phase or event. In another example, a user may enter the event characteristics to be stored via voice commands. Various examples of such uses are provided below. Other aspects of the methods of the present disclosure may involve determining stored event characteristics based on computer analysis of video clips depicting particular intraoperative surgical events. For example, the disclosed method may include the steps of: various image and/or video analysis techniques as described above are used to identify event features based on video snippets. As an illustrative example, the video filmlet may include a representation of one or more anatomical structures of the patient, and identifying event features of the anatomical structures may be determined based on detecting the anatomical structures in the video filmlet or based on detecting interactions between the medical instrument and the anatomical structures. In another example, a machine learning model may be trained using training examples to determine event features from a video, and the trained machine learning model may be used to analyze video clips and determine stored event features. An example of such a training example may include a video clip depicting an intraoperative surgical event and markers indicating features of the event.

Some aspects of the disclosure may include: associating, in a data structure containing additional video footage of other surgical procedures, at least a portion of the video footage of a particular surgical procedure with the stage tag, the event tag, and the event features, wherein the data structure further includes respective stage tags, respective event tags, and respective event features associated with one or more of the other surgical procedures. A data structure consistent with the present disclosure may include any collection of data values and the relationships among them. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, one-dimensionally, multi-dimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, in a searchable repository, in a sorted repository, in an indexed repository, or in any manner that enables data access. As non-limiting examples, the data structure may include: arrays, associative arrays, linked lists, binary trees, balanced trees, heaps, stacks, queues, collections, hash tables, records, tagged unions, ER models, and graphs. For example, the data structure may include: an XML database, an RDBMS database, an SQL database, or a NoSQL alternative for data storage/search, such as, for example, MongoDB, Redis, Couchbase, DataStax Enterprise Graph, Elasticsearch, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4j. The data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). The data in the data structure may be stored in contiguous or non-contiguous memory. Furthermore, as used herein, a data structure does not require that information be co-located. For example, the data structure may be distributed over multiple servers, which may be owned or operated by the same or different entities. Thus, for example, the data structure may include any data format that may be used to associate video footage with a stage tag, an event tag, and/or an event feature.

FIG. 6 illustrates an example data structure 600 consistent with the disclosed embodiments. As shown in FIG. 6, data structure 600 may include a table that includes video clips 610 and video clips 620 for different surgical procedures. For example, video clip 610 may comprise a laparoscopic cholecystectomy clip, while video clip 620 may comprise a cataract surgery clip. Video clip 620 can be associated with a clip location 621, which can correspond to a particular surgical stage of the cataract surgery. Stage label 622 can identify the stage (in this case, the corneal incision) associated with clip location 621, as discussed above. Video clip 620 can also be associated with an event tag 624, which can identify an intraoperative surgical event (in this case, an incision) that occurs within the surgical stage at event location 623. Video clip 620 may also be associated with event features 625, which may describe one or more features of the intraoperative surgical event (such as the surgeon's skill level), as described in detail above. Each video clip identified in the data structure may be associated with more than one clip location, stage label, event location, event tag, and/or event feature. For example, video clip 610 may be associated with stage labels corresponding to more than one surgical stage (e.g., "Calot triangle dissection" and "cystic duct cutting"). Further, each surgical stage of a particular video clip can be associated with more than one event, and thus can be associated with more than one event location, event tag, and/or event feature. However, it should be understood that in some embodiments, a particular video clip may be associated with a single surgical stage and/or event. It should also be understood that in some embodiments, an event may be associated with any number of event features, including: no event feature, a single event feature, two event features, more than two event features, and so forth. Some non-limiting examples of such event features may include: a skill level associated with the event (such as a minimum skill level required, a skill level exhibited, a skill level of a healthcare provider involved in the event, etc.), a time associated with the event (such as a start time, an end time, etc.), a type of event, information related to a medical instrument involved in the event, information related to an anatomical structure involved in the event, information related to a medical outcome associated with the event, one or more quantities (such as an amount of leakage, an amount of a drug, an amount of a fluid, etc.), one or more dimensions (such as a size of an anatomical structure, a size of an incision, etc.), and the like. Further, it is to be understood that data structure 600 is provided as an example, and that various other data structures may be used.
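
For illustration only, the nested structure below mirrors the kind of associations sketched in FIG. 6, linking clip identifiers to stage tags, clip locations, event tags, event locations, and event features; the identifiers and values are hypothetical and do not reproduce the actual contents of data structure 600.

```python
# One possible in-memory rendering of the kind of associations shown in FIG. 6
# (illustrative only; identifiers and values are invented).
data_structure_600 = [
    {
        "clip_id": "video_610",             # laparoscopic cholecystectomy clip
        "stages": [
            {"stage_tag": "Calot triangle dissection", "clip_location": (4500, 9800),
             "events": [{"event_tag": "bleeding", "event_location": (7210, 7390),
                         "event_features": {"surgeon_skill_level": 8}}]},
            {"stage_tag": "cystic duct cutting", "clip_location": (9801, 12400), "events": []},
        ],
    },
    {
        "clip_id": "video_620",             # cataract surgery clip
        "stages": [
            {"stage_tag": "corneal incision", "clip_location": (300, 900),
             "events": [{"event_tag": "incision", "event_location": (310, 430),
                         "event_features": {"surgeon_skill_level": 9}}]},
        ],
    },
]

# Example traversal: list every (clip, stage, event) triple held by the structure.
for clip in data_structure_600:
    for stage in clip["stages"]:
        for event in stage["events"]:
            print(clip["clip_id"], stage["stage_tag"], event["event_tag"])
```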

Embodiments of the present disclosure may also include enabling a user to access the data structure by selecting a selected phase tab, a selected event tab, and a selected event feature of the video filmlet for display. A user may be any person or entity that may be provided access to data stored in a data structure. In some embodiments, the user may be a surgeon or other medical professional. For example, the surgeon may access the data structure and/or a video filmlet associated with the data structure for review or training purposes. In some embodiments, the user may be an administrator (e.g., a hospital administrator), a manager, a lead surgeon, or other individual who may need to access the video clips. In some embodiments, the user may be a patient who may access a video clip of his or her surgery. Similarly, the user may be a relative, a guardian, an attending physician, an insurance agent, or another representative of the patient. The user may include various other entities that may include, but are not limited to, insurance companies, regulatory agencies, police or research agencies, medical associations, or any other entity that may be provided access to the video clips. The user's selection may include any manner for identifying a particular phase tag, event tag, and/or event feature. In some implementations, the user's selection can be made through a graphical user interface, such as on a display of the computing device. In another example, the user's selection may be made through a touch screen. In additional examples, the user's selection may be made through a speech input, and the speech input may be processed using a speech recognition algorithm. In yet another example, the user's selection may be made by a gesture (e.g., a hand gesture), and the gesture may be analyzed using a gesture recognition algorithm. In some embodiments, the user may not select all of the following three: a selected phase tag, a selected event tag, or a selected event feature, but a subset of them may be selected. For example, a user may select only event features, and may be allowed to access information associated with the data structure based on the selected event features.

FIG. 7 is an illustration of an exemplary user interface 700 for selecting indexed video clips for display consistent with the disclosed embodiments. The user interface 700 may include one or more search boxes 710, 720, and 730 for selecting video clips. The search box 710 may allow the user to select one or more surgical stages to be displayed. In some embodiments, the user interface 700 may provide the proposed surgical stage based on a stage tag included in the data structure 600. For example, as the user begins typing in the search box 710, the user interface 700 may suggest phase tag descriptions to search based on the characters the user has entered. In other embodiments, the user may select the phase tab using a radio button, a check box, a drop down list, a touch interface, or any other suitable user interface function. Similar to the phase tags, the user may select video clips based on the event tags and event characteristics using search boxes 720 and 730, respectively. User interface 700 may also include drop down buttons 722 and 732 to access drop down lists and further filter results. As shown in fig. 7, selecting the drop down button 732 may allow a user to select an event feature based on a sub-category of the event feature. For example, the user may select "surgeon skill level" in a drop down list associated with the drop down button 732, which may allow the user to search in the search box 730 based on the surgeon's skill level. While "surgeon skill level" and various other event feature sub-categories are provided as examples, it should be understood that a user may select any feature or characteristic of a surgical procedure. For example, the user may refine the surgeon skill level based on the surgeon, qualifications, age of experience, and/or any indication of the surgical skill level, as discussed in more detail below. The user may access the data structure by clicking, tapping, or otherwise selecting search button 740.

The display of the video filmlet may include any process of presenting one or more frames of the video filmlet or a portion of the video filmlet to a user. In some implementations, displaying can include electronically transmitting at least a portion of the video filmlet for viewing by a user. For example, displaying the video filmlet may include transmitting at least a portion of the video filmlet over a network. In other embodiments, displaying the video clips may include making the video clips available to the user by storing the video clips in a location accessible to the user or in a device being used by the user. In some implementations, displaying the video clips can include: causing the video filmlet to be played on a visual display device such as a computer or video screen. For example, displaying may include sequentially presenting frames associated with a video filmlet and may also include presenting audio associated with the video filmlet.

Some aspects of the disclosure may include: a lookup is performed in a data structure of the surgical video filmlet matching the at least one selected stage tag, selected event tag, and selected event features to identify a matching subset of the stored video filmlets. Performing a lookup may include any process for retrieving data from a data structure. For example, based on the at least one selected phase tag, the event tag, and the selected event characteristic, a corresponding video filmlet or portion of a video filmlet may be identified from the data structure. The subset of stored video clips may include: a single identified video filmlet or a plurality of identified video filmlets associated with the user's selection. For example, the subset of stored video clips may include surgical video clips having at least one of: a phase tag that is identical to the selected phase tag, an event tag that is identical to the selected event tag, and an event feature that is identical to the selected event feature. In another example, the subset of stored video clips may include surgical video clips having at least one of: a stage label that is similar to the selected stage label (e.g., according to a selected similarity measure), an event label that is similar to the selected event label (e.g., according to a selected similarity measure), and/or an event feature that is similar to the selected event feature (e.g., according to a selected similarity measure). In some embodiments, performing a search may be triggered by selecting search button 740, as shown in FIG. 7.
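
A minimal sketch of the lookup step, assuming indexed records shaped like those illustrated earlier, is shown below; the matching here is exact, and the record layout and function name are assumptions.

```python
def lookup(records, stage_tag=None, event_tag=None, event_features=None):
    """Return (clip_id, event_location) pairs matching the optional selections (illustrative)."""
    event_features = event_features or {}
    hits = []
    for clip in records:
        for stage in clip["stages"]:
            if stage_tag is not None and stage["stage_tag"] != stage_tag:
                continue
            for event in stage["events"]:
                if event_tag is not None and event["event_tag"] != event_tag:
                    continue
                feats = event.get("event_features", {})
                if all(feats.get(k) == v for k, v in event_features.items()):
                    hits.append((clip["clip_id"], event["event_location"]))
    return hits

records = [  # minimal stand-in for the indexed data structure
    {"clip_id": "video_620",
     "stages": [{"stage_tag": "corneal incision",
                 "events": [{"event_tag": "incision", "event_location": (310, 430),
                             "event_features": {"surgeon_skill_level": 9}}]}]},
]
print(lookup(records, stage_tag="corneal incision", event_tag="incision",
             event_features={"surgeon_skill_level": 9}))   # [('video_620', (310, 430))]
```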

In some exemplary embodiments, identifying a matching subset of the stored video snippets comprises: computer analysis is used to determine the degree of similarity between the matching subset of stored videos and the selected event features. Thus, "match" may refer to an exact match or may refer to an approximate or closest match. In one example, the event features may include numerical values (such as amounts, sizes, lengths, areas, volumes, etc., e.g., as described above), and the degree of similarity may be based on a comparison of the numerical values included in the selected event features with corresponding numerical values of the stored video. In one example, any similarity function (including, but not limited to, affinity function, correlation function, polynomial similarity function, exponential similarity function, distance-based similarity function, linear function, non-linear function, etc.) may be used to calculate the degree of similarity. In one example, a graph matching algorithm or a hypergraph matching algorithm (such as exact matching algorithm, inexact matching algorithm) may be used to determine the degree of similarity. As another illustrative example, a video filmlet associated with the "prepare" stage tag may also be retrieved to obtain a stage tag that includes the terms "prepare (prep)," prepare (prepare), "pre-procedure," "pre-procedure," or other terms that may refer to the "prepare" stage tag, which may be similar but not exactly matching. The degree of similarity may refer to any measure of how closely a subset of the stored videos match the selected event characteristics. The degree of similarity may be expressed as a similarity ranking (e.g., in the range of 1 to 10, 1 to 100, etc.), a percentage of matches, or by any other means of expressing how closely the degree of matches is. Using computer analysis may include using computer algorithms to determine a degree of similarity between selected event features and event features of one or more surgical procedures included in the data structure. In one example, a k-nearest neighbor algorithm may be used to identify the most similar entry in the data structure. In one example, entries of the data structure and user-entered event features may be embedded in a mathematical space (e.g., using any dimension reduction or data embedding algorithm), and the distance between the embedded entries and the user-entered features may be used to calculate a degree of similarity between the two. Further, in some examples, the entry in the embedded mathematical space that is closest to the feature of the user input may be selected as the entry in the data structure that is most similar to the data of the user input.
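
As a hedged illustration of the embedding-and-nearest-neighbor approach to similarity mentioned above, the fragment below uses scikit-learn's NearestNeighbors over synthetic embedding vectors and converts distances into a bounded similarity score; the embeddings, dimensionality, and scoring formula are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Assume each stored event has already been embedded into a shared feature space
# (random vectors stand in here for real embeddings of stored event features).
stored_embeddings = rng.normal(size=(200, 16))

# Embed the user-selected event features into the same space (also synthetic here).
query = rng.normal(size=(1, 16))

nn = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(stored_embeddings)
distances, indices = nn.kneighbors(query)

# Convert distances into a similarity score in (0, 1]; smaller distance = higher similarity.
similarities = 1.0 / (1.0 + distances[0])
for idx, sim in zip(indices[0], similarities):
    print(f"stored event #{idx}: similarity {sim:.3f}")
```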

Some aspects of the present invention may involve causing the display of the matching subset of stored video clips to the user, thereby enabling the user to view surgical clips of the at least one intraoperative surgical event that share selected event characteristics while skipping playback of video clips lacking the selected event characteristics. A surgical clip may refer to any video or video clip that captures a surgical procedure, as described in more detail above. In some implementations, causing the matching subset of the stored video snippets to be displayed can include executing instructions for playing the video. For example, a processing device executing the methods described herein may access a matching subset of video clips, and the processing device may be configured to present the stored video clips to a user on a screen or other display. For example, the stored video clips may be displayed in a video player user interface, such as video playback area 410, as discussed in further detail below with reference to fig. 4. In some implementations, causing the matching subset of stored video clips to be displayed to the user can include sending the stored video clips for display, as described above. For example, the matching subset of video clips may be transmitted over a network to a computing device associated with the user, such as a desktop computer, a laptop computer, a mobile handset, a tablet computer, smart glasses, a heads-up display, a training device, or any other device capable of displaying video clips.

Skipping playback may include any process that results in video clips lacking the selected event characteristics not being presented to the user. For example, skipping playback may include designating a portion of a clip as not to be displayed and not displaying that portion. In embodiments where a matching subset of video clips is transmitted, skipping playback may include refraining from transmitting video clips lacking the selected event characteristics. This may be accomplished by: selectively transmitting only those portions of the clips that are associated with the matching subset; selectively transmitting a flag associated with the portions of the clips that are associated with the matching subset; and/or skipping the portions of the clips that are not associated with the matching subset. In other implementations, a video clip lacking the selected event characteristic may be transmitted, but may be associated with one or more instructions not to present the video clip lacking the selected event characteristic.
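
One simple way to realize skipping of non-matching footage is to build an explicit playback plan that merges the matching frame ranges and omits everything else, as in the illustrative sketch below (the merging rule and frame numbers are assumptions).

```python
def playback_plan(total_frames, matching_segments):
    """Merge matching (start, end) frame ranges and return the ordered segments to play,
    implicitly skipping everything else (illustrative)."""
    merged = []
    for start, end in sorted(matching_segments):
        start, end = max(0, start), min(total_frames - 1, end)
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))   # extend previous segment
        else:
            merged.append((start, end))
    return merged

# Footage of 10,000 frames where only a few segments share the selected feature.
print(playback_plan(10_000, [(7210, 7390), (3100, 3250), (3200, 3400)]))
# [(3100, 3400), (7210, 7390)]
```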

According to various exemplary embodiments of the present disclosure, enabling a user to view surgical clips of at least one intraoperative surgical event having a selected event feature, while skipping playback of portions of the selected surgical event lacking the selected event feature, may include: sequentially presenting to the user portions of surgical clips of a plurality of intraoperative surgical events sharing the selected event feature, while skipping playback of portions of the selected surgical events lacking the selected event feature. In other words, one or more portions of the video footage associated with the selected event feature may be identified, for example, by a lookup function in the data structure. Enabling the user to view surgical clips of at least one intraoperative surgical event having the selected event feature may then include sequentially presenting one or more of the identified portions to the user. Any portion of the video footage that is not identified may not be presented. In some implementations, the video footage may be selected based on the selected event tag and the selected stage tag. Thus, in embodiments consistent with the present disclosure, enabling a user to view surgical clips having a selected event feature of at least one intraoperative surgical event, while skipping playback of portions of the selected surgical event lacking the selected event feature, may comprise: sequentially presenting to the user portions of surgical clips of a plurality of intraoperative surgical events that share the selected event feature and are associated with the selected event tag and the selected stage tag, while skipping playback of portions of the selected surgical events that lack the selected event feature or are not associated with at least one of the selected event tag and the selected stage tag.

As mentioned above, the stored event signatures may include various kinds of signatures related to the surgical procedure. In some example embodiments, the stored event characteristics may include adverse outcomes of surgical events. For example, the stored event characteristics may identify whether the event is an adverse event or is associated with a complication (including the examples described in more detail above). Thus, causing the matching subset to be displayed may include: enabling the user to view the surgical clip of the selected adverse outcome while skipping playback of the surgical event lacking the selected adverse outcome. For example, in response to a user wishing to know how a surgeon is dealing with a vascular injury during a laparoscopic procedure, rather than displaying the entire procedure to the user, the user may select a vascular injury event, after which the system may display only a portion of a video clip where the event occurred. The stored event characteristics may similarly identify outcomes, including desired and/or expected outcomes. Examples of such results may include: complete recovery of the patient, whether a leak occurred, the amount of leak that occurred, whether the amount of leak is within a selected range, whether the patient is readmitted after discharge, the length of stay after surgery, or any other outcome that may be associated with surgery. In this way, the user can ascertain the long-term effects of a particular technology while viewing. Thus, in some embodiments, the stored event characteristics may include these or other results, and causing the matching subset to be displayed may include: enabling the user to view the surgical short of the selected result while skipping playback of the surgical event lacking the selected result.

In some embodiments, the stored event characteristics may include surgical techniques. Thus, the stored event signatures may identify whether a particular technique was performed. For example, multiple techniques may be applied at a particular stage of a surgical procedure, and the event features may identify which technique is being applied. In this way, a user interested in learning a particular technique can filter the video results so that only procedures using the specified technique are displayed. Causing the matching subset to be displayed may include: enabling the user to view the surgical clip of the selected surgical technique while skipping playback of the surgical clip not associated with the selected surgical technique. For example, a user may be enabled to sequentially view non-contiguous portions of video taken from the same surgical procedure or different surgical procedures. In some embodiments, the stored event characteristics may include the identity of a particular surgeon. For example, the event characteristics may include the identity of the particular surgeon performing the surgical procedure. The surgeon may identify based on his or her name, identification number (e.g., employee number, medical registration number, etc.), or any other form of identity. In some embodiments, the surgeon may be identified based on identifying a representation of the surgeon in the captured video. For example, various facial and/or speech recognition techniques may be used, as discussed above. In this way, if a user wishes to learn the skill of a particular surgeon, the user may be caused to do so. For example, causing the matching subset to be displayed may include: enabling the user to view the short piece showing the activity of the selected surgeon while skipping playback of the short piece lacking the activity of the surgeon. Thus, for example, if multiple surgeons are participating in the same surgical procedure, the user may choose to view the activities of only a subset of the teams.

In some embodiments, the event characteristics may also be associated with other healthcare providers or healthcare professionals that may be involved in the surgical procedure. In some examples, the features associated with the healthcare provider may include any feature of the healthcare provider involved in the surgical procedure. Some non-limiting examples of such healthcare providers may include the title of any member of a surgical team, such as a surgeon, anesthesiologist, nurse, registered nurse anesthesiologist (CRNA), surgical technician, resident doctor, medical student, doctor's assistant, and the like. Further non-limiting examples of such features may include: authentication, experience level (such as years of experience, experience of similar past surgery, success rate of similar past surgery, etc.), demographic characteristics (such as age), etc.

In other embodiments, the stored event characteristics may include a time associated with a particular surgical procedure, surgical stage, or portion thereof. For example, the stored event characteristics may include a duration of the event. Causing the matching subset to be displayed may include: enabling the user to view clips of events exhibiting the selected duration while skipping playback of clips of events of a different duration. In this way, for example, a user who may wish to view a particular procedure that was completed faster than normal may set a time threshold to view only specified procedures completed within that threshold. In another example, a user who may wish to view more complex events may set a time threshold to view procedures that include events of longer duration than a selected threshold, or procedures that include the longest-duration events within a selected group of events. In other embodiments, the stored event characteristics may include a start time of the event, an end time of the event, or any other time indicator. Causing the matching subset to be displayed may include: enabling the user to view clips showing the event according to the selected time, within a particular surgical procedure, within a stage associated with the event, or within a selected portion of the particular surgical procedure, while skipping playback of event clips associated with different times.

In another example, the stored event characteristics may include patient characteristics. The term "patient characteristic" refers to any physical, social, economic, demographic, or behavioral characteristic of a patient, as well as characteristics of the patient's medical history. Some non-limiting examples of such patient characteristics may include: age, gender, weight, height, Body Mass Index (BMI), menopausal status, typical blood pressure, characteristics of the patient's genome, educational status, educational level, socioeconomic status, income level, occupation, insurance type, health status, self-assessed health, functional status, dysfunction, duration of disease, disease severity, number of diseases, disease characteristics (such as disease type, tumor size, histological grade, number of infiltrated lymph nodes, etc.), utilization of medical care, number of medical visits, intervals between medical visits, regular sources of medical care, family status, marital status, number of children, family support, race, ethnicity, culture, religion, mother tongue, characteristics of medical tests performed on the patient in the past (such as test type, test time, test results, etc.), characteristics of medical treatments performed on the patient in the past (such as treatment type, treatment time, treatment outcome, etc.), and the like. Some non-limiting examples of such medical tests may include: blood tests, urine tests, stool tests, medical imaging (e.g., ultrasound, angiography, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), X-ray, electromyography, Positron Emission Tomography (PET), etc.), physical examination, electrocardiography, amniocentesis, pap test, skin allergy test, endoscopy, biopsy, pathology, blood pressure measurement, oxygen saturation test, pulmonary function test, and the like. Some non-limiting examples of such medical treatments may include: medication, diet therapy, surgery, radiotherapy, chemotherapy, physical therapy, psychotherapy, blood transfusion, fluid infusion, and the like. Thus, causing the matching subset to be displayed may include: enabling the user to view clips of patients exhibiting the selected patient characteristic while skipping playback of clips of patients lacking the selected patient characteristic.

In some embodiments, the selected patient physical characteristic may include a type of anatomical structure. As used herein, an anatomical structure may be any particular portion of a living organism. For example, the anatomical structure may include any particular organ, tissue, cell, or other structure of the patient. In this way, for example, if the user wishes to view video relating to a pneumothorax procedure, the relevant portions of the footage may be presented while other unrelated portions may be skipped. The stored event characteristics may include various other patient characteristics, such as patient demographics, medical conditions, medical history, previous treatments, or any other relevant descriptive information about the patient. This may enable the viewer to view surgeries of patients matching very specific features (e.g., Caucasian patients 70 to 75 years old with coronary heart disease who have previously undergone bypass surgery). In this way, videos of one or more patients matching those particular criteria may be selectively presented to the user.

In yet another example, the stored event characteristics may include physiological responses. As used herein, the term "physiologic response" refers to any physiologic change that can occur in response to an event within a surgical procedure. Some non-limiting examples of such physiological changes may include: changes in blood pressure, changes in oxygen saturation, changes in lung function, changes in respiratory rate, changes in blood composition (chemical counts, etc.), bleeding, leaks, changes in blood flow to tissue, changes in tissue condition (such as changes in color, shape, structural condition, functional condition, etc.), changes in body temperature, changes in brain activity, changes in perspiration, or any other physical change in response to a surgical procedure. In this way, the user can prepare for events that may occur during a surgical procedure by selectively viewing those events (and skipping playback of unmatched actionable events).

In some examples, the event characteristics may include surgeon skill level. The skill level may include any indication of the relative abilities of the surgeon. In some embodiments, the skill level may include a score reflecting the surgeon's experience or proficiency in performing the surgical procedure or a particular technique within the surgical procedure. In this way, the user can compare how differently experienced surgeons handle the same surgical procedure by selecting different skill levels. In some embodiments, the skill level may be determined based on the surgeon's identity (determined via data entry (manual entry of the surgeon's ID) or by machine vision). For example, the disclosed method may include the steps of: the video clips are analyzed to determine the identity of the surgeon through biometric analysis (e.g., facial, voice, etc.) and to identify a predetermined skill level associated with the surgeon. The predetermined skill level may be obtained by accessing a database that stores skill levels associated with particular surgeons. The skill level may be based on past performance of the surgeon, the type and/or level of training or education of the surgeon, the number of surgical procedures the surgeon has performed, the type of surgical procedures the surgeon has performed, the qualification of the surgeon, the experience level of the surgeon, patient or other healthcare professional's evaluations of the surgeon, past surgical results and complications, or any other information relevant to assessing the healthcare professional's skill level. In some implementations, the skill level can be automatically determined based on computer analysis of the video filmlet. For example, disclosed embodiments may include: analysis is performed on video clips that capture the performance of the procedure, the performance of a particular technique, the decisions made by the surgeon, or similar events. The skill level of the surgeon may then be determined based on how well the surgeon performed during the event, which may be based on timeliness, effectiveness, adherence to preferred techniques, lack of damage or adverse effects, or any other skill indicator that may be gleaned from analyzing the short patch.

In some embodiments, the skill level may be a global skill level assigned to an individual surgeon or may be specific to a particular event. For example, a surgeon may have a first skill level with respect to a first technique or procedure and a second skill level with respect to a different technique or procedure. The skill level of the surgeon may also vary throughout an event, technique, and/or procedure. For example, a surgeon may perform at a first skill level within a first portion of the footage, but may perform at a second skill level within a second portion of the footage. Thus, the skill level may be a skill level associated with a particular location in the footage. The skill level may also be a plurality of skill levels during the event, or may be an aggregation of a plurality of skill levels during the event, such as an average, a rolling average, or another form of aggregation. In some embodiments, the skill level may be a general skill level required to perform a surgical procedure, a surgical stage, and/or an intraoperative surgical event, and may not depend on a particular surgeon or other healthcare professional. The skill level may be expressed in different ways, including as a numerical scale (e.g., 1 to 10, 1 to 100, etc.), as a percentage, as a scale of text-based indicators (e.g., "highly skilled," "moderately skilled," "unskilled," etc.), or in any other suitable format for expressing the surgeon's skill. Although the skill level is described herein as a surgeon's skill level, in some embodiments, the skill level may be associated with another healthcare professional, such as a surgical technician, a nurse, a physician assistant, an anesthesiologist, a doctor, or any other healthcare professional.
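
The aggregation of multiple skill levels mentioned above (such as an average or rolling average) could be computed as in the following illustrative sketch; the per-segment estimates are invented values.

```python
from statistics import mean

def rolling_average(values, window=3):
    """Simple rolling mean over per-segment skill estimates (illustrative)."""
    return [mean(values[max(0, i - window + 1): i + 1]) for i in range(len(values))]

# Hypothetical skill estimates (scale 1-10) for consecutive portions of one surgeon's footage.
per_segment = [6.0, 7.0, 9.0, 8.0, 5.0]
print(rolling_average(per_segment))   # [6.0, 6.5, 7.33..., 8.0, 7.33...]
print(mean(per_segment))              # overall aggregate: 7.0
```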

Embodiments of the present disclosure may also include accessing summary data related to a plurality of surgical procedures that are similar to a particular surgical procedure. Summary data may refer to data collected and/or combined from multiple sources. The summary data may be compiled from a plurality of surgical procedures having a relationship to a particular surgical procedure. For example, a surgical procedure may be considered similar to a particular surgical procedure if the surgical procedure includes the same or similar surgical procedure stages, includes the same or similar intra-operative events, or is associated with the same or similar tags or characteristics (e.g., event tags, stage tags, event features, or other tags).

The present disclosure may also include presenting statistical information associated with the selected event characteristics to the user. Statistical information may refer to any information that may be useful for analyzing multiple surgical procedures together. Statistical information may include, but is not limited to, mean, data trend, standard deviation, variance, correlation, causal relationships, test statistics (including t-statistics, chi-squared statistics, f-statistics, or other forms of test statistics), order statistics (including sample maxima and minima), graphical representations (e.g., charts, graphs, plots, or other visual or graphical representations), or the like. As an illustrative example, in embodiments where the user selects an event feature that includes a particular surgeon identity, the statistical information may include: the average duration of time the surgeon performed the surgery (or the stage or event of the surgery), the rate of adverse or other outcomes of the surgeon, the average skill level of the surgeon performing the intraoperative events, or similar statistical information. One of ordinary skill in the art will appreciate other forms of statistical information that may be presented in accordance with the disclosed embodiments.
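
As an illustrative sketch of the statistical information described above, the fragment below computes a few summary statistics (count, mean and standard deviation of procedure duration, and adverse-event rate) over invented records for procedures matching a selected event feature.

```python
from statistics import mean, stdev

# Hypothetical aggregated records for procedures matching a selected event feature
# (e.g., a particular surgeon); all values are invented for illustration.
durations_min = [92, 105, 88, 110, 97]          # duration of each matching procedure
adverse_event = [False, True, False, False, False]

stats = {
    "procedures": len(durations_min),
    "mean_duration_min": mean(durations_min),
    "stdev_duration_min": round(stdev(durations_min), 1),
    "adverse_event_rate": sum(adverse_event) / len(adverse_event),
}
print(stats)
# {'procedures': 5, 'mean_duration_min': 98.4, 'stdev_duration_min': 9.1, 'adverse_event_rate': 0.2}
```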

Fig. 8A and 8B are flow diagrams illustrating an example process 800 for video indexing consistent with the disclosed embodiments. Process 800 may be performed by a processing device, such as at least one processor. For example, the at least one processor may include one or more Integrated Circuits (ICs) including: an Application Specific Integrated Circuit (ASIC), a microchip, a microcontroller, a microprocessor, all or part of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a server, a virtual server, or other circuitry suitable for executing instructions or performing logical operations. The instructions executed by the at least one processor may be preloaded into a memory integrated with or embedded in the controller, for example, or may be stored in a separate memory. The memory may include: random Access Memory (RAM), Read Only Memory (ROM), hard disk, optical disk, magnetic media, flash memory, other permanent, fixed, or volatile memory, or any other mechanism capable of storing instructions. In some embodiments, the at least one processor may comprise more than one processor. Each processor may have a similar configuration, or the processors may have different configurations that are electrically connected or disconnected from each other. For example, the processor may be a separate circuit or integrated in a single circuit. When more than one processor is used, the processors may be configured to operate independently or in concert. The processor may be electrically coupled, magnetically coupled, optically coupled, acoustically coupled, mechanically coupled, or coupled by other means that allow them to interact.

In some implementations, a non-transitory computer-readable medium may contain instructions that, when executed by a processor, cause the processor to perform process 800. At step 802, process 800 may include accessing a video clip to be indexed, the video clip including footage of a particular surgical procedure. The video clip may be accessed from local storage (such as a local hard drive) or from a remote source (e.g., over a network connection). In another example, the video clip may be captured using one or more image sensors or generated by another process. At step 804, process 800 may include analyzing the video clip to identify a video clip location associated with a surgical stage of the particular surgical procedure. As discussed above, the location may be associated with a particular frame, frame range, time index, time range, or any other location identifier.

Process 800 may include generating a stage tag associated with a surgical stage, as shown at step 806. This may be done, for example, by Video Content Analysis (VCA) using techniques such as one or more of the following: video motion detection, video tracking, shape recognition, object detection, fluid flow detection, device identification, behavioral analysis, or other forms of computer-aided situational awareness. When a learned feature associated with a stage is identified in the video, a tag demarcating the stage can be generated. For example, the tag may include a predetermined name for the stage. At step 808, process 800 may include associating the stage tag with the video clip location. For example, the stage tag may indicate that the identified video clip location is associated with a surgical stage of the particular surgical procedure. At step 810, process 800 may include analyzing the video clip, using one or more of the VCA techniques described above, to identify an event location for a particular intraoperative surgical event within the surgical stage. The process can include associating an event tag with the event location of the particular intraoperative surgical event, as shown at step 812. For example, the event tag may indicate that the video clip is associated with a surgical event at the event location. As with the stage tag, the event tag may include a predetermined name for the event. At step 814, in fig. 8B, process 800 may include storing event characteristics associated with the particular intraoperative surgical event. As discussed in more detail below, the event characteristics may include an adverse outcome of the surgical event or surgical technique, surgeon skill level, patient characteristics, the identity of a particular surgeon, physiological response, duration of the event, or any other characteristic or feature associated with the event. The event characteristics may be determined manually (e.g., input by a viewer) or may be determined automatically, for example, by artificial intelligence applied to machine vision, as described above. In one example, the event features may include skill levels (such as a required minimum skill level, a skill level exhibited during the event, etc.), a machine learning model may be trained using training examples to determine such skill levels from video, and the trained machine learning model may be used to analyze video clips to determine skill levels. Examples of such training examples may include video clips depicting events and markers indicating corresponding skill levels. In another example, the event characteristics may include time-related characteristics of the event (such as a start time, an end time, a duration, etc.), and such time-related characteristics may be calculated by analyzing the interval in the video clip that corresponds to the event. In yet another example, the event features may include an event type, a machine learning model may be trained using training examples to determine the event type from video, and the trained machine learning model may be used to analyze video clips and determine the event type. Examples of such training examples may include a video clip depicting an event and a marker indicating the event type.
In additional examples, the event features may include: information related to the medical instrument involved in the event (such as the type of medical instrument, the use of the medical instrument, etc.), a machine learning model may be trained using the training examples to identify such information related to the medical instrument from the video, and the trained machine learning model may be used to analyze the video snippets and determine the information related to the medical instrument involved in the event. Examples of such training examples may include: a video clip depicting an event including use of the medical instrument, and a marker indicating information related to the medical instrument. In yet another example, the event features may include information related to the anatomical structure involved in the event (such as a type of anatomical structure, a condition of the anatomical structure, a change in the anatomical structure related to the event, etc.), the machine learning model may be trained using the training examples to identify such information related to the anatomical structure from the video, and the trained machine learning model may be used to analyze the video snippets and determine the information related to the anatomical structure involved in the event. Examples of such training examples may include video clips depicting events involving the anatomical structure, and markers indicating information related to the anatomical structure. In additional examples, the event features may include information related to medical outcomes associated with the events, the machine learning model may be trained using training examples to identify such information related to the medical outcomes from the videos, and the trained machine learning model may be used to analyze the video snippets and determine the information related to the medical outcomes associated with the events. Examples of such training examples may include video clips depicting medical results, and markers indicating medical results.
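
As a minimal sketch of the event-type case described above, the following example trains a generic classifier on clip-level descriptors, assuming scikit-learn is available and that each clip has already been reduced to a fixed-length feature vector by some upstream analysis; the feature dimensionality, the random stand-in data, and the label meanings are all assumptions for illustration, not the disclosed training procedure.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training examples: each row is a feature vector summarizing a video
    # clip of an event (e.g., pooled frame descriptors), paired with a marker (label).
    rng = np.random.default_rng(0)
    clip_features = rng.normal(size=(200, 32))        # stand-in for real clip descriptors
    event_type_labels = rng.integers(0, 3, size=200)  # e.g., 0=incision, 1=suturing, 2=irrigation

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(clip_features, event_type_labels)

    # At indexing time, the trained model assigns an event-type feature to a new clip.
    new_clip = rng.normal(size=(1, 32))
    predicted_event_type = int(model.predict(new_clip)[0])
    print(predicted_event_type)

Analogous models could be trained for the other event features mentioned above (medical instrument information, anatomical structure information, or medical outcomes) by swapping in the corresponding markers.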

At step 816, process 800 may include: associating at least a portion of the video clip of the particular surgical procedure with at least one of the stage tag, the event tag, and the event feature in a data structure. In this step, the various tags are associated with the video clip to allow the clip to be accessed using the tags. As previously described, various data structures may be used to store related data in an associated manner.

At step 818, process 800 may include: enabling a user to access the data structure by selecting at least one of a selected stage tag, a selected event tag, and a selected event feature of a video clip for display. In some implementations, the user can select the stage tag, the event tag, and the event feature through a user interface of a computing device (such as user interface 700 shown in fig. 7). For example, data entry fields, drop-down menus, icons, or other selectable items may be provided to enable the user to select a surgical procedure, a stage of the procedure, an intraoperative event, and features of the procedure and patient. At step 820, process 800 may include: performing a lookup in the data structure for surgical video clips matching the at least one selected stage tag, selected event tag, and selected event feature to identify a matching subset of the stored video clips. At step 822, process 800 may include: causing the matching subset of the stored video clips to be displayed to the user, thereby enabling the user to view surgical clips of at least one intraoperative surgical event sharing the selected event feature while skipping playback of video clips lacking the selected event feature. Through this filtering, the user can quickly view only those video segments of interest while skipping playback of a large amount of video data that is not relevant to that interest.
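
One possible in-memory representation of such a data structure and lookup is sketched below in Python; the field names, tag values, and clip identifiers are illustrative assumptions rather than the disclosed schema, and a production system would more likely use a database index.

    from dataclasses import dataclass, field

    @dataclass
    class IndexedClip:
        video_id: str
        start_frame: int
        end_frame: int
        stage_tags: set = field(default_factory=set)
        event_tags: set = field(default_factory=set)
        event_features: dict = field(default_factory=dict)

    index = [
        IndexedClip("proc_001", 0, 1500, {"dissection"}, {"bleeding"}, {"surgeon": "a", "skill": 4}),
        IndexedClip("proc_002", 300, 900, {"dissection"}, {"clip_application"}, {"surgeon": "b", "skill": 5}),
    ]

    def lookup(index, stage_tag=None, event_tag=None, **features):
        """Return clips matching every selected tag and event feature; others are skipped."""
        matches = []
        for clip in index:
            if stage_tag and stage_tag not in clip.stage_tags:
                continue
            if event_tag and event_tag not in clip.event_tags:
                continue
            if any(clip.event_features.get(key) != value for key, value in features.items()):
                continue
            matches.append(clip)
        return matches

    matching_subset = lookup(index, stage_tag="dissection", event_tag="bleeding", surgeon="a")
    print(matching_subset)

Only the matching subset would then be queued for playback, which is what allows playback of non-matching clips to be skipped.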

In preparing a surgical procedure, it may be beneficial for a surgeon to review video clips of a surgical procedure having similar surgical events. However, it may be too time consuming for the surgeon to view the entire video or jump around to find the relevant portion of the surgical clip. Accordingly, there is a need for an unconventional method that efficiently and effectively enables a surgeon to view a surgical video summary that summarizes snippets of related surgical events while skipping other unrelated snippets.

Aspects of the present disclosure may relate to generating surgical summary snippets, including methods, systems, apparatuses, and computer-readable media. For example, a surgical clip can be compared to previously analyzed surgical clips to identify and tag relevant intraoperative surgical events. The surgeon may be enabled to look at a summary of the surgery summarizing the intraoperative surgical events while skipping most other extraneous snippets. For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method may involve accessing a particular surgical clip that contains a first set of frames associated with at least one intraoperative surgical event. A surgical clip may refer to any video, group of video frames, or video footage that includes a representation of a surgical procedure. For example, a surgical clip may include one or more video frames taken during a surgical procedure. Accessing the surgical clip can include retrieving the video from a storage location, such as a storage device. The surgical clip may be accessed from local memory, such as a local hard drive, or may be accessed from a remote source (e.g., over a network connection). As described in more detail above, video may include any form of recorded visual media, including recorded images and/or sound. The video may be stored as a video file, such as an Audio Video Interleave (AVI) file, a Flash Video Format (FLV) file, a QuickTime File Format (MOV) file, an MPEG file (MPG, MP4, M4P, etc.), a Windows Media Video (WMV) file, a Material Exchange Format (MXF) file, or any other suitable video file format. Additionally or alternatively, in some examples, accessing the particular surgical clip may include capturing the particular surgical clip using one or more image sensors.

As described above, an intraoperative surgical event can be any event or action associated with a surgical procedure or stage. A frame may refer to one of a plurality of still images constituting a video. The first set of frames may include frames taken during an intraoperative surgical event. For example, a particular surgical clip may depict a surgical procedure performed on a patient and captured by at least one image sensor in an operating room. The image sensor may include, for example, cameras 115, 121, 123, and/or 125 located in the operating room 101. In some embodiments, the at least one image sensor may be at least one of above an operating table in the operating room or within the patient. For example, the image sensor may be located above the patient, or may be located within a surgical cavity, organ, or vessel of the patient, as described above. The first set of frames may include a representation of an intraoperative surgical event, including: anatomical structures, surgical tools, healthcare professionals performing the intraoperative surgical event, or other visual representations of the intraoperative surgical event. However, in some embodiments, some or all of the frames may not contain a representation of the intraoperative surgical event, but may be otherwise associated with the event (e.g., frames taken while the event is being performed, etc.).

Consistent with the present disclosure, a particular surgical clip may contain a second set of frames that are not associated with surgical activity. For example, a surgical procedure may involve a significant number of periods of downtime during which no significant surgical activity is performed and there is no substantial reason to review the footage. Surgical activity may refer to any activity performed in connection with a surgical procedure. In some embodiments, surgical activity may refer broadly to any activity associated with a surgical procedure, including: preoperative activity, perioperative activity, intraoperative activity, and/or postoperative activity. Thus, the second set of frames may include frames that are not associated with any such activity. In other embodiments, surgical activity may refer more narrowly to a set of actions, such as a surgeon's physical manipulation of an organ or tissue of the patient. Thus, the second set of frames may include frames depicting activities associated with preparation, administration of anesthesia, monitoring of vital signs, collection or preparation of surgical tools, discussions between healthcare professionals, or other activities that may not be considered surgical activity.

According to the present disclosure, the method may comprise the steps of: accessing historical data based on historical surgical clips of prior surgeries. Historical data may refer to data in any format that was previously recorded and/or stored. In some embodiments, the historical data may be one or more video files that include historical surgical clips. For example, the historical data may include a series of frames taken during a previous surgical procedure. However, the historical data is not limited to video files. For example, the historical data may include information stored as text representing at least one aspect of the historical surgical clips. For example, the historical data may include a database of information that summarizes or otherwise references the historical surgical clips. In another example, the historical data may include information stored as numerical values representing at least one aspect of the historical surgical clips. In additional examples, the historical data may include statistical information and/or statistical models based on analysis of historical surgical clips. In yet another example, the historical data may include a machine learning model trained using training examples, and the training examples may be based on historical surgical clips. Accessing historical data may include: receiving historical data via electronic transmission, retrieving historical data from storage (e.g., a memory device), or any other process for accessing data. In some embodiments, the historical data may be accessed from the same source as the particular surgical clip discussed above. In other embodiments, the historical data may be accessed from a separate source. Additionally or alternatively, accessing historical data may include generating the historical data, such as by analyzing historical surgical clips of prior surgeries or by analyzing data based on historical surgical clips of prior surgeries.

According to an embodiment of the present disclosure, the historical data may include information that distinguishes portions of the surgical clips into frames associated with intraoperative surgical events and frames not associated with surgical activity. This information may distinguish portions of the surgical clips in various ways. For example, in conjunction with the historical surgical clips, frames associated with surgical and non-surgical activities may already have been distinguished. This may have been done previously, for example, by manually tagging surgical activities or by training an artificial intelligence engine to distinguish surgical from non-surgical activities. The historical information may, for example, identify a set of frames of a historical surgical clip (e.g., using a start frame number, an end frame number, a frame count, etc.). The information may also include time information such as a start timestamp, an end timestamp, a duration, a range of timestamps, or other information related to the timing of the surgical clip. In one example, the historical data may include various indicators and/or rules that distinguish between surgical and non-surgical activity. Some non-limiting examples of such indicators and/or rules are discussed below. In another example, the historical data may include a machine learning model trained to identify portions of the video corresponding to surgical activity and/or portions of the video corresponding to non-surgical activity, e.g., based on the historical surgical clips.

Various indicators may be used to distinguish between surgical and non-surgical activities (manually, semi-manually, or automatically, e.g., via machine learning). For example, in some embodiments, the information that distinguishes portions of the historical surgical clips into frames associated with intraoperative surgical events may include an indicator of at least one of the presence or movement of a surgical tool. The surgical tool may be any instrument or device that may be used during a surgical procedure, which may include, but is not limited to, cutting instruments (such as scalpels, scissors, saws, etc.), grasping and/or clamping instruments (such as Billroth clamps, "mosquito" hemostats, atraumatic hemostats, Deschamps needles, Hopfner hemostats, etc.), retractors (such as Farabeuf retractors, blunt-toothed hooks, sharp-toothed hooks, grooved probes, packing forceps, etc.), tissue unifying instruments and/or materials (such as needle holders, surgical needles, staplers, clips, tapes, meshes, etc.), protective equipment (such as facial and/or respiratory protective equipment, headgear, shoe covers, gloves, etc.), laparoscopes, endoscopes, patient monitoring devices, and so forth. Video or image analysis algorithms, such as those described above with reference to video indexing, may be used to detect the presence and/or motion of surgical tools within the clip. In some examples, a motion metric of the surgical tool may be calculated, and the calculated motion metric may be compared to a selected threshold to distinguish between surgical activity and non-surgical activity. For example, the threshold may be selected based on the type of surgery, based on the time within the surgical procedure, based on the stage of the surgical procedure, based on parameters determined by analyzing video clips of the surgical procedure, based on parameters determined by analyzing historical data, and so forth. In some examples, signal processing algorithms may be used to analyze the calculated motion metrics at different times within the video clips of the surgical procedure to distinguish between surgical and non-surgical activity. Some non-limiting examples of such signal processing algorithms may include: machine learning-based signal processing algorithms trained using training examples to distinguish surgical activities from non-surgical activities, artificial neural networks (such as recurrent neural networks, long short-term memory neural networks, deep neural networks, etc.) configured to distinguish surgical activities from non-surgical activities, Markov models, Viterbi models, and so forth.
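
As a rough illustration of the motion-metric approach, the following sketch computes a frame-difference motion signal, smooths it, and thresholds it; the threshold and smoothing window are arbitrary assumptions, the frames are synthetic stand-ins, and a real implementation would more likely compute motion over detected tool regions rather than whole frames.

    import numpy as np

    def motion_metric(prev_frame, frame):
        """Mean absolute grayscale difference between consecutive frames."""
        return float(np.mean(np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))))

    def label_activity(frames, threshold=6.0, window=5):
        """Label each frame True (likely surgical activity) or False, using a smoothed motion signal."""
        diffs = [0.0] + [motion_metric(a, b) for a, b in zip(frames[:-1], frames[1:])]
        smoothed = np.convolve(diffs, np.ones(window) / window, mode="same")
        return smoothed > threshold

    # Synthetic stand-ins for grayscale video frames: 30 static frames, then 30 busy frames.
    rng = np.random.default_rng(1)
    still_frames = [np.full((120, 160), 90, dtype=np.uint8) for _ in range(30)]
    busy_frames = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(30)]
    labels = label_activity(still_frames + busy_frames)
    print(int(labels.sum()), "of", len(labels), "frames labeled as activity")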

In some exemplary embodiments, the information distinguishing portions of the historical surgical clips into frames associated with intraoperative surgical events may include tools and anatomical features detected in the associated frames. For example, the disclosed method may include using image and/or video analysis algorithms to detect tools and anatomical features. The tool may comprise a surgical tool (as described above) or another, non-surgical tool. The anatomical feature may comprise an anatomical structure (as defined in more detail above) or another portion of a living organism. The detection of the presence of both a surgical tool and an anatomical structure in one or more associated frames may serve as an indicator of surgical activity, as surgical activity typically involves a surgical tool interacting with an anatomical structure. For example, in response to detecting a first tool in a set of frames, the set of frames may be determined to be associated with an intraoperative surgical event, and in response to not detecting the first tool in the set of frames, the set of frames may be identified as not associated with the intraoperative surgical event. In another example, in response to detecting a first anatomical feature in a set of frames, the set of frames may be determined to be associated with an intraoperative surgical event, and in response to not detecting the first anatomical feature in the set of frames, the set of frames may be identified as not associated with the intraoperative surgical event. In some examples, the video clips may be further analyzed to detect interactions between the detected tools and anatomical features, and distinguishing between surgical and non-surgical activities may be based on the detected interactions. For example, in response to detecting a first interaction in a set of frames, the set of frames may be determined to be associated with an intraoperative surgical event, and in response to not detecting the first interaction in the set of frames, the set of frames may be identified as not associated with the intraoperative surgical event. In some examples, the video clips may be further analyzed to detect actions performed by the detected tools, and distinguishing between surgical and non-surgical activities may be based on the detected actions. For example, in response to detecting a first action in a set of frames, the set of frames may be determined to be associated with an intraoperative surgical event, while in response to not detecting the first action in the set of frames, the set of frames may be identified as not associated with an intraoperative surgical event. In some examples, the video clips may be further analyzed to detect changes in the condition of the anatomical feature, and distinguishing between surgical and non-surgical activity may be based on the detected changes. For example, in response to detecting a first change in a set of frames, the set of frames may be determined to be associated with an intraoperative surgical event, while in response to not detecting the first change in the set of frames, the set of frames may be identified as not associated with an intraoperative surgical event.
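
A simple rule of the kind described above could be expressed as follows, assuming a hypothetical per-frame detector has already produced lists of detected tools and anatomical structures (and, optionally, an interaction flag); this is a sketch of the decision logic only, not a detection model, and the field names are assumptions.

    def frames_associated_with_event(detections, require_interaction=False):
        """Mark frames as event-associated when both a tool and an anatomical structure are
        detected in the frame (and, optionally, an interaction between them)."""
        event_frames = set()
        for frame_index, detection in detections.items():
            has_tool = bool(detection.get("tools"))
            has_anatomy = bool(detection.get("anatomy"))
            has_interaction = bool(detection.get("interaction", False))
            if has_tool and has_anatomy and (has_interaction or not require_interaction):
                event_frames.add(frame_index)
        return event_frames

    # Hypothetical per-frame detector output.
    detections = {
        0: {"tools": ["grasper"], "anatomy": ["gallbladder"], "interaction": True},
        1: {"tools": [], "anatomy": ["gallbladder"]},
        2: {"tools": ["grasper"], "anatomy": ["gallbladder"], "interaction": False},
    }
    print(frames_associated_with_event(detections))                            # {0, 2}
    print(frames_associated_with_event(detections, require_interaction=True))  # {0}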

Some aspects of the invention may involve distinguishing a first set of frames from a second set of frames in a particular surgical clip based on information from the historical data. For example, this information may provide context useful in determining which frames of a particular surgical clip are associated with intraoperative events and/or surgical activities. In some embodiments, distinguishing the first set of frames from the second set of frames in a particular surgical clip may involve the use of a machine learning algorithm. For example, a machine learning model may be trained, using training examples based on information from the historical data, to identify intraoperative events and/or surgical activities.

According to the present disclosure, the first and second sets of frames may be distinguished by analyzing the surgical clip to identify information similar to that of the historical data. Fig. 9 is a flow diagram illustrating an example process 900 for distinguishing a first set of frames from a second set of frames. It is to be understood that process 900 is provided as an example. One of ordinary skill will appreciate that various other processes for distinguishing the first set of frames from the second set of frames are consistent with the present disclosure. At step 910, the process 900 may include analyzing the particular surgical clip to detect a medical instrument. A medical instrument may refer to any tool or device used to treat a patient, including a surgical tool as described above. In addition to the surgical tools listed above, the medical instruments may include, but are not limited to, stethoscopes, gauze sponges, catheters, cannulas, defibrillators, needles, trays, lights, thermometers, pipettes or drip chambers, oxygen masks and tubes, or any other medical implement. For example, a machine learning model may be trained using training examples to detect medical instruments in images and/or videos, and the trained machine learning model may be used to analyze the particular surgical clip and detect medical instruments. Examples of such training examples may include videos and/or images of a surgical procedure, together with markers indicating the presence of one or more particular medical instruments in the videos and/or images, or with markers indicating that no particular medical instrument appears in the videos and/or images.

At step 920, process 900 may include analyzing the particular surgical clip to detect anatomical structures. The anatomical structure may be any organ, portion of an organ, or other portion of a living organism, as discussed above. One or more video and/or image recognition algorithms as described above may be used to detect medical instruments and/or anatomical structures. For example, a machine learning model may be trained using training examples to detect anatomical structures in images and/or videos, and the trained machine learning model may be used to analyze the particular surgical clip and detect anatomical structures. Examples of such training examples may include videos and/or images of a surgical procedure, together with markers indicating one or more specific anatomical structures in the videos and/or images, or with markers indicating that the videos and/or images do not contain specific anatomical structures.

At step 930, process 900 may include analyzing the video to detect relative movement between the detected medical instrument and the detected anatomical structure. The relative movement may be detected using a motion detection algorithm, for example, based on pixel changes between frames, optical flow, or other forms of motion detection algorithms. For example, a motion detection algorithm may be used to estimate the motion of the medical instrument in the video, as well as the motion of the anatomical structure in the video, and the estimated motion of the medical instrument may be compared to the estimated motion of the anatomical structure to determine the relative movement. At step 940, process 900 may include distinguishing a first set of frames from a second set of frames based on the relative movement, wherein the first set of frames includes surgical activity frames and the second set of frames includes non-surgical activity frames. For example, in response to a first relative movement pattern in a set of frames, the set of frames may be determined to include surgical activity, while in response to a second relative movement pattern being detected in the set of frames, the set of frames may be identified as including non-surgical activity frames. Thus, presenting a summary of the first set of frames may enable a surgeon preparing for surgery to skip non-surgical activity frames during review of the condensed video presentation. In some embodiments, skipping the non-surgical activity frames may include skipping a majority of the frames capturing the non-surgical activity. For example, not all frames capturing non-surgical activity need be skipped, such as frames just before or after an intraoperative surgical event, frames capturing non-surgical activity that provides context for the intraoperative surgical event, or any other frames that may be relevant to the user.
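
The sketch below illustrates one way relative movement could be approximated from per-frame bounding boxes of the detected instrument and anatomical structure (hypothetical detector output), by comparing their centroid displacements between frames; the threshold value and the example boxes are assumptions for illustration.

    import numpy as np

    def centroid(box):
        x1, y1, x2, y2 = box
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    def relative_movement(instrument_boxes, anatomy_boxes):
        """Per-frame magnitude of instrument motion relative to anatomy motion."""
        rel = []
        for i in range(1, len(instrument_boxes)):
            instrument_motion = centroid(instrument_boxes[i]) - centroid(instrument_boxes[i - 1])
            anatomy_motion = centroid(anatomy_boxes[i]) - centroid(anatomy_boxes[i - 1])
            rel.append(float(np.linalg.norm(instrument_motion - anatomy_motion)))
        return rel

    def is_surgical_activity(rel_motion, threshold=2.0):
        """Classify a group of frames as surgical activity if mean relative motion exceeds threshold."""
        return float(np.mean(rel_motion)) > threshold

    # Hypothetical bounding boxes (x1, y1, x2, y2) for three consecutive frames.
    instrument = [(10, 10, 30, 30), (14, 12, 34, 32), (20, 15, 40, 35)]
    anatomy = [(50, 50, 120, 120)] * 3
    print(is_surgical_activity(relative_movement(instrument, anatomy)))  # True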

In some exemplary embodiments of the present disclosure, distinguishing the first set of frames from the second set of frames may also be based on a detected relative position between the medical instrument and the anatomical structure. The relative position may refer to a distance between the medical instrument and the anatomical structure, an orientation of the medical instrument relative to the anatomical structure, or a positioning of the medical instrument relative to the anatomical structure. For example, the relative position may be estimated based on the relative position between the detected medical instrument and the anatomical structure within one or more frames of the surgical clip. For example, the relative position may include: a distance (e.g., in pixels, real-world measurements, etc.), a direction, a vector, and so forth. In one example, an object detection algorithm may be used to determine the position of the medical instrument and the position of the anatomical structure, and the two determined positions may be compared to determine the relative position. In one example, a set of frames may be determined to include surgical activity in response to a first relative position in the set of frames, and the set of frames may be identified as non-surgical activity frames in response to a second relative position being detected in the set of frames. In another example, the distance between the medical instrument and the anatomical structure may be compared to a selected threshold, and distinguishing the first set of frames from the second set of frames may also be based on the result of the comparison. For example, the threshold may be selected based on the type of medical instrument, the type of anatomical structure, the type of surgery, and so forth. In other embodiments, distinguishing the first set of frames from the second set of frames may also be based on a detected interaction between the medical instrument and the anatomical structure. The interaction may include any action of the medical instrument that may affect the anatomical structure, and vice versa. For example, the interaction may include contact between the medical instrument and the anatomical structure, an action of the medical instrument on the anatomical structure (such as cutting, clamping, applying pressure, scraping, etc.), a reaction of the anatomical structure (such as a reflex action), or any other form of interaction. For example, a machine learning model may be trained using training examples to detect interactions of a medical instrument with an anatomical structure from video, and the trained machine learning model may be used to analyze video clips and detect interactions between the medical instrument and the anatomical structure. Examples of such training examples may include a video clip of a surgical procedure, together with a marker indicating the presence of a particular interaction between a medical instrument and an anatomical structure in the video clip, or with a marker indicating that no particular interaction between a medical instrument and an anatomical structure appears in the video clip.
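
To illustrate the relative-position criterion, the sketch below tests whether the detected instrument lies within a per-tool-type distance of the anatomical structure; the pixel thresholds, tool types, and bounding boxes are assumptions for illustration, and proximity is only an indicator (not proof) of interaction.

    import numpy as np

    # Assumed per-tool proximity thresholds, in pixels; a real system might learn these
    # or select them based on the type of surgery and anatomy.
    PROXIMITY_THRESHOLDS_PX = {"scalpel": 15.0, "grasper": 30.0, "default": 20.0}

    def centroid(box):
        x1, y1, x2, y2 = box
        return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    def instrument_near_anatomy(instrument_box, anatomy_box, tool_type):
        """Return True if the instrument centroid is within the tool-specific threshold
        of the anatomy centroid."""
        threshold = PROXIMITY_THRESHOLDS_PX.get(tool_type, PROXIMITY_THRESHOLDS_PX["default"])
        distance = float(np.linalg.norm(centroid(instrument_box) - centroid(anatomy_box)))
        return distance <= threshold

    print(instrument_near_anatomy((100, 100, 120, 120), (100, 100, 130, 130), "scalpel"))  # True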

Some aspects of the present disclosure may involve presenting a summary of a first set of frames of a particular surgical clip to a user upon request by the user, while skipping presenting a second set of frames to the user. The summary of the first set of frames may be presented in various forms. In some implementations, the summary of the first set of frames can include a video file. The video file may be a compilation of video clips that include the first set of frames. In some implementations, individual ones of the video clips can be presented individually to the user, or a single compiled video can be presented. In some implementations, a separate video file may be generated for the summary of the first set of frames. In other implementations, the summary of the first set of frames may include instructions to identify frames to be included for presentation and instructions to identify frames to be skipped. To the user, the execution of the instructions may appear as if a continuous video has been generated. Various other forms may also be used, including presenting the first set of frames as still images.
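
One way to represent the "instructions" form of the summary is as an ordered list of play and skip ranges, as sketched below; the frame ranges are illustrative assumptions, the segments are assumed to be non-overlapping, and a real player would resolve the ranges against the source video so that playback appears continuous.

    def summary_instructions(event_segments, total_frames):
        """Build an ordered list of ("play"/"skip", start, end) instructions in which
        event-associated segments are played and everything else is skipped."""
        instructions = []
        cursor = 0
        for start, end in sorted(event_segments):
            if cursor < start:
                instructions.append(("skip", cursor, start))
            instructions.append(("play", start, end))
            cursor = end
        if cursor < total_frames:
            instructions.append(("skip", cursor, total_frames))
        return instructions

    # Two event-associated segments in a 10,000-frame clip (illustrative values).
    print(summary_instructions([(1200, 2400), (7000, 7600)], total_frames=10_000))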

The presentation may include any process for delivering the summary to the user. In some implementations, this may include causing the summary to be displayed on a display, such as a computer screen or monitor, a projector, a mobile phone display, a tablet, a smart device, or any device capable of displaying images and/or audio. Presenting may also include sending the summary of the first set of frames to the user or otherwise making it accessible to the user. For example, the summary of the first set of frames may be sent to a computing device of the user over a network. As another example, the location of the summary of the first set of frames may be shared with the user. The second set of frames is skipped by not including it in the summary. For example, if the summary is presented as a video, video footage containing the second set of frames may not be included in the video file. The first set of frames may be presented in any order, including chronological order. In some cases, at least some of the frames in the first set may be presented in a logical rather than chronological order. In some embodiments, the summary of the first set of frames may be associated with more than one intraoperative surgical event. For example, a user may request to view multiple intraoperative surgical events in a particular surgical clip. Presenting the summary of the first set of frames to the user may include: displaying the first set of frames in chronological order while skipping the chronologically interspersed frames of the second set.

The user may be any individual or entity that may need to access the surgical summary snippet. In some embodiments, the user may be a surgeon or other medical professional. For example, a surgeon may request a short surgical summary for review or training purposes. In some embodiments, the user may be a manager, chief surgeon, insurance company personnel, regulatory agency, police or research authority, or any other entity that may need access to a surgical clip. Various other examples of users are provided above with reference to video indexing techniques. The user may submit the request through a computing device, such as a laptop, desktop, mobile phone, tablet, smart glasses, or any other form of computing device capable of submitting the request. In some implementations, the request can be received electronically over a network and the summary can be presented based on the received request.

In some example embodiments, the user's request may include an indication of at least one type of intraoperative surgical event of interest, and the first set of frames may depict at least one intraoperative surgical event of the at least one type of intraoperative surgical event of interest. The type of intraoperative surgical event can be any category into which intraoperative surgical events can be classified. For example, the type may include the type of procedure being performed, the stage of the procedure, whether the surgical event is adverse, whether the surgical event is part of a planned procedure, the identity of the surgeon performing the surgical event, the purpose of the surgical event, the medical condition associated with the surgical event, or any other category or classification.

Embodiments of the present disclosure may also include exporting the first set of frames for storage in a medical record of the patient. As described above, a particular surgical clip may depict a surgical procedure performed on a patient. Using the disclosed methods, a first set of frames associated with at least one intraoperative surgical event can be associated with a medical record of the patient. As used herein, a medical record may include any form of documentation of information related to the health of a patient, including diagnosis, treatment, and/or care. The medical records may be stored in a digital format, such as an Electronic Medical Record (EMR). Exporting the first set of frames may include transmitting the first set of frames, otherwise enabling the first set of frames to be stored in the medical record, or otherwise associating the first set of frames with the medical record. This may include, for example, transmitting the first set of frames (or a copy of the first set of frames) to an external device, such as a database. In some embodiments, the disclosed methods may include associating the first set of frames with a unique patient identifier and updating the medical record that includes the unique patient identifier. The unique patient identifier may be any indicator, such as an alphanumeric string, that uniquely identifies the patient. The alphanumeric string may make the patient anonymous, which may be desirable for privacy purposes. In cases where privacy may not be an issue, the unique patient identifier may include the patient's name and/or social security number.
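
A minimal sketch of associating an exported set of frames with an anonymized unique patient identifier might look like the following; the hashing scheme, salt, and record layout are assumptions for illustration, and a production system would use its own identifier scheme and EMR integration.

    import hashlib

    def anonymized_patient_id(patient_reference, salt="assumed-institution-salt"):
        """Derive a stable, non-identifying alphanumeric patient identifier (illustrative only)."""
        return hashlib.sha256((salt + patient_reference).encode("utf-8")).hexdigest()[:16]

    def export_first_set(medical_records, patient_reference, frames_reference):
        """Attach a reference to the exported first set of frames to the patient's record."""
        pid = anonymized_patient_id(patient_reference)
        record = medical_records.setdefault(pid, {"clips": []})
        record["clips"].append(frames_reference)
        return pid

    records = {}
    pid = export_first_set(records, "patient-001", {"video_id": "proc_042", "frames": (350, 1800)})
    print(pid, records[pid])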

In some exemplary embodiments, the disclosed method may further comprise generating an index of at least one intraoperative surgical event. As described above, the index may refer to a data storage form that enables retrieval of the associated video frames. Indexing may make retrieval faster, more efficient, and/or more effective than unindexed storage. The index may include a list or other detailed listing of intraoperative surgical events depicted in or otherwise associated with the first set of frames. Exporting the first set of frames may include generating a compilation of the first set of frames, the compilation including an index and being configured to enable viewing of at least one intraoperative surgical event based on selection of one or more index entries. For example, by selecting "incision" in the index, the user may be presented with a compilation of surgical clips depicting incisions. Various other intraoperative surgical events may be included in the index. In some implementations, the compilation may contain a series of frames of different intraoperative events stored as a continuous video. For example, a user may select multiple intraoperative events through the index, and frames associated with the selected intraoperative events may be assembled into a single video.
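
The index-driven compilation could be represented as simply as the following sketch, in which index entries map event names to frame ranges and the selected entries are concatenated into one playlist; the entry names, clip identifiers, and ranges are illustrative assumptions.

    # Hypothetical index: event name -> list of (video_id, start_frame, end_frame) ranges.
    event_index = {
        "incision": [("proc_001", 120, 300), ("proc_007", 40, 210)],
        "bleeding control": [("proc_002", 500, 640)],
    }

    def build_compilation(index, selected_entries):
        """Concatenate the frame ranges of the selected index entries into a single playlist,
        which a player can render as one continuous video."""
        playlist = []
        for entry in selected_entries:
            playlist.extend(index.get(entry, []))
        return playlist

    print(build_compilation(event_index, ["incision", "bleeding control"]))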

Embodiments of the present disclosure may also include generating a cause and effect summary. The cause and effect summary may allow a user to view the clips or images associated with the cause phase of the surgical procedure and the clips or images associated with the result phase, without having to view intermediate clips or images. As used herein, "cause" refers to a trigger or action that leads to a particular result, phenomenon, or condition. "Result" refers to a phenomenon or condition that can be attributed to a cause. In some embodiments, the result is an adverse result. For example, the results may include: bleeding, mesenteric emphysema, injury, conversion to unplanned open surgery (e.g., an abdominal wall incision), an incision significantly larger than the planned incision, and the like. The cause may be an action that causes or may cause an adverse result, such as a surgeon's error. For example, the errors may include: technical errors, communication errors, administrative errors, judgment errors, decision-making errors, errors related to medical device utilization, or other forms of errors that may occur. The result may also include a positive or expected result, such as a successful procedure or stage.

In embodiments where a cause and effect summary is generated, the historical data may also include historical surgical outcome data and corresponding historical cause data. The historical surgical outcome data may indicate portions of the historical surgical clips associated with outcomes, and the historical cause data may indicate portions of the historical surgical clips associated with corresponding causes of those outcomes. In such embodiments, the first set of frames may include a set of cause frames and a set of result frames, and the second set of frames may include a set of intermediate frames.

FIG. 10 is a flow chart illustrating an exemplary process 1000 of generating a cause and effect summary consistent with the disclosed embodiments. The process 1000 is provided as an example, and one of ordinary skill will appreciate that various other processes for generating a cause and effect summary are consistent with the present disclosure. At step 1010, process 1000 may include: analyzing the particular surgical clip to identify a surgical outcome and a corresponding cause of the surgical outcome, the identification being based on the historical outcome data and the corresponding historical cause data. The analysis may be performed using image and/or video processing algorithms, as discussed above. In some embodiments, step 1010 may include analyzing the particular surgical clip using a machine learning model trained on historical data to identify surgical outcomes and corresponding causes of the surgical outcomes. For example, the machine learning model may be trained based on historical data with known or predetermined surgical outcomes and corresponding causes. The trained model can then be used to identify surgical outcomes and corresponding causes in other clips, such as the particular surgical clip. Examples of training examples used to train such machine learning models may include a video clip of a surgical procedure, markers indicating the surgical outcome corresponding to the video clip, and possibly the corresponding cause of the surgical outcome. Such training examples may be based on historical data, e.g., including video clips from the historical data, including outcomes determined based on the historical data, and so forth.

At step 1020, process 1000 may include: detecting, based on the analysis, a set of result frames in the particular surgical clip, the set of result frames being within a result phase of the surgical procedure. The result phase may be a time interval or portion of the surgical procedure associated with the result, as described above. At step 1030, process 1000 may include: detecting, based on the analysis, a set of cause frames in the particular surgical clip, the set of cause frames being within a cause phase of the surgical procedure that is temporally distant from the result phase. In some embodiments, the result phase may include a surgical phase in which the result may be observed, and the set of result frames may be a subset of the frames in the result phase. The cause phase may be a time interval or portion of the surgical procedure associated with the cause of the result in the result phase. In some embodiments, the cause phase may include a surgical phase in which the cause occurs, and the set of cause frames may be a subset of the frames in the cause phase. The set of intermediate frames lies in an intermediate stage between the set of cause frames and the set of result frames. At step 1040, process 1000 may include: generating a cause and effect summary of the surgical clip, wherein the cause and effect summary includes the set of cause frames and the set of result frames and skips the set of intermediate frames. In some embodiments, the cause and effect summary may be similar to the summary of the first set of frames, as described above. Thus, the cause and effect summary may include a compilation of video clips associated with the set of cause frames and the set of result frames. Presenting the summary of the first set of frames to the user (as described above) may include presenting the cause and effect summary.
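
Assembling the cause and effect summary from already-detected frame sets could look like the sketch below; the per-frame labels are assumed to come from analysis such as steps 1010-1030, and the label names themselves are placeholders.

    def causal_summary(frame_labels):
        """Given one label per frame ('cause', 'result', 'intermediate', or 'other'),
        return the ordered frame indices to include: cause and result frames only,
        skipping the intermediate frames between them."""
        return [i for i, label in enumerate(frame_labels) if label in ("cause", "result")]

    labels = ["other"] * 5 + ["cause"] * 3 + ["intermediate"] * 10 + ["result"] * 4
    print(causal_summary(labels))  # frames 5-7 (cause) and 18-21 (result)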

Fig. 11 is a flow diagram illustrating an example process 1100 of generating a surgical summary snippet consistent with the disclosed embodiments. Process 1100 may be performed by a processing device. In some implementations, a non-transitory computer-readable medium is provided that may contain instructions that, when executed by a processor, cause the processor to perform the process 1100. At step 1110, process 1100 may include: a particular surgical clip is accessed that includes a first set of frames associated with at least one intraoperative surgical event and a second set of frames not associated with surgical activity. As discussed in more detail above, the first set of frames may be associated with a plurality of intraoperative surgical events and may not necessarily be consecutive frames. Further, in some embodiments, the first set of frames may include a cause frame set and a result frame set, and the second set of frames may include an intermediate frame set, as discussed above with reference to process 1000.

At step 1120, process 1100 may include: accessing historical data based on historical surgical clips of prior surgeries, wherein the historical data includes information that distinguishes portions of the surgical clips into frames associated with intraoperative surgical events and frames not associated with surgical activity. In some embodiments, the information that distinguishes portions of the historical surgical clips into frames associated with intraoperative surgical events may include an indicator of at least one of the presence or movement of a surgical tool and/or anatomical feature. At step 1130, process 1100 may include: based on the information of the historical data, distinguishing the first set of frames from the second set of frames in the particular surgical clip.

At step 1140, process 1100 may comprise: upon request by the user, the user is presented with a summary of the first set of frames of the particular surgical clip while skipping the presentation of the second set of frames to the user. The user's request may be received from a computing device, which may include a user interface that enables the user to make the request. In some implementations, the user may also request frames associated with a particular type or category of intraoperative event. Based on the steps described in process 1100, a summary including frames associated with the intra-operative event and skipping frames not associated with the surgical activity may be presented to the user. This summary may be used, for example, by the surgeon as a training video summarizing the intraoperative surgical events while skipping most other extraneous snippets.

In preparing for a surgical procedure, it may be beneficial for a surgeon to review video clips of multiple surgical procedures with similar surgical events. Conventional methods may not allow a surgeon to easily access video clips of surgical procedures having similar surgical events. Furthermore, even if the clips are accessed, it may be too time consuming to view the entire video or to find the relevant portions of the video. Accordingly, there is a need for an unconventional method that efficiently and effectively enables a surgeon to view videos compiled from clips of surgical events drawn from surgical procedures performed on different patients.

Aspects of the present disclosure may relate to surgical preparation, including methods, systems, devices, and computer-readable media. In particular, a compiled video of different events in surgical procedures performed on different patients may be presented to a surgeon or other user. The compilation may include a summary of surgical video from different intraoperative events of similar surgeries, which may be automatically aggregated into a single compilation. The surgeon may be enabled to enter case-specific information to retrieve a compilation of video segments selected from similar surgeries performed on different patients. The compilation may include one intraoperative event from one surgical procedure and other, different intraoperative events from one or more second surgical procedures. For example, different complications that occurred while performing surgery on different patients may all be included in one compiled video. In cases where videos of multiple surgeries contain the same event with shared features (e.g., where similar techniques are employed), the system may skip one or more intraoperative clips to avoid redundancy.

For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method is provided that may involve accessing a repository of multiple sets of surgical video clips. As used herein, a repository may refer to any storage location or collection of storage locations where video clips may be electronically stored. For example, the repository may include a memory device, such as a hard disk drive and/or a flash drive. In some implementations, the repository may be a network location, such as a server, a cloud storage location, a shared network drive, or any other form of storage accessible over a network. The repository may include a database of surgical video clips taken at various times and/or locations. In some embodiments, the repository may store additional data other than surgical video clips.

As described above, a surgical video clip may refer to any video, group of video frames, or video footage that includes a representation of a surgical procedure. For example, a surgical clip may include one or more video frames taken during a surgical procedure. A set of surgical video clips may refer to one or more surgical videos or groupings of surgical video clips. The video clips may be stored in the same location or may be selected from different storage locations. Although not necessarily so, the videos within a set may be correlated in some manner. For example, the video clips within a set may include videos that were recorded by the same camera, recorded at the same facility, recorded at the same time or within the same time frame, that depict surgical procedures performed on the same patient or group of patients, that depict the same or similar surgical procedures, that depict surgical procedures sharing common characteristics (such as similar complexity, similar duration, use of similar techniques, use of similar medical instruments, etc.), or that share any other feature or characteristic.

The multiple sets of surgical video clips may reflect multiple surgical procedures performed on different patients. For example, a plurality of different individuals undergoing the same or similar surgical procedures or undergoing surgical procedures employing similar techniques may be included in a common set or multiple sets. Alternatively or additionally, one or more sets may include surgical clips taken from a single patient but at different times. The multiple surgical procedures may be of the same type (e.g., all including appendectomy), or may be of different types. In some embodiments, the multiple surgical procedures may share a common characteristic, such as the same or similar stage or intraoperative event.

The plurality of sets of surgical video clips may further include: intraoperative surgical events, surgical outcomes, patient characteristics, surgeon characteristics, and intraoperative surgical event characteristics. Examples of such events, outcomes, and features are described throughout this disclosure. The surgical outcomes may include: the outcome of the surgical procedure as a whole (e.g., whether the patient recovered, whether the patient recovered fully, whether the patient was discharged and then readmitted, whether the surgery was successful), or the outcome of an individual stage or event within the surgical procedure (e.g., whether a complication occurred or whether a technique was successful).

Some aspects of the present disclosure may relate to enabling a surgeon to prepare a planned surgery to enter case specific information corresponding to the planned surgery. Contemplated surgical procedures may include any surgical procedure that has not yet been performed. In some embodiments, the surgical procedure may be a planned surgical procedure that the surgeon intends to perform on the patient. In other embodiments, the envisaged surgical procedure may be a hypothetical procedure and may not necessarily be associated with a particular patient. In some embodiments, the contemplated surgical procedure may be experimental and may not be in widespread practice. Case specific information may include any characteristic or feature of the envisaged surgical procedure or patient envisaged or assumed. For example, case-specific information may include, but is not limited to, characteristics of the patient on which the procedure is to be performed, characteristics of the surgeon performing the procedure, characteristics of other healthcare professionals involved in the procedure, the type of procedure being performed, unique details or aspects of the procedure, the type of equipment or tools involved, the type of technology involved, complications of the procedure, the location of the procedure, the type of medical condition being treated or some aspect thereof, the surgical outcome, intra-operative event results, or any other information that may define or describe the surgical procedure contemplated. For example, case-specific information may include the patient's age, weight, medical condition, vital signs, other physical characteristics, past medical history, family medical history, or any other type of patient-related information that may have some direct or indirect effect on the underlying outcome. Case specific information may also include: an indicator of the skill level of the performing surgeon, the surgical technique employed, complications encountered, or any other information about the surgeon, the procedure, the tools or facilities used.

Case-specific information can be entered in various ways. In some embodiments, the surgeon may enter case-specific information through a graphical user interface. The user interface may include: one or more text fields, prompts, drop-down lists, check boxes, or other fields or mechanisms for entering information. In some implementations, a graphical user interface can be associated with a computing device or processor that performs the disclosed methods. In other embodiments, the graphical user interface may be associated with an external computing device, such as a mobile phone, a tablet, a laptop, a desktop computer, a computer terminal, a wearable device (including a smart wristwatch, smart glasses, smart jewelry, a heads-up display, etc.), or any other electronic device capable of receiving user input. In some embodiments, case-specific information may be entered at an earlier time or over a period of time (e.g., days, months, years, or longer). Some or all of the case-specific information may be extracted from a hospital or other medical facility database, an electronic medical record, or any other location where patient data and/or other medical data may be stored. In some embodiments, case-specific information corresponding to the contemplated surgical procedure may be received from an external device. For example, case-specific information may be retrieved or otherwise received from an external computing device, server, cloud computing service, network device, or any other device external to the system performing the disclosed methods. In one example, at least a portion of the case-specific information corresponding to the contemplated surgical procedure can be received from an electronic medical record (EMR) or from a system that processes EMRs (e.g., an EMR of a particular patient on which the procedure is to be performed, an EMR associated with the contemplated surgical procedure, etc.), from a scheduling system, from an electronic record corresponding to a medical professional associated with the contemplated surgical procedure, or from a system that processes such electronic records, etc.

In some exemplary embodiments, the case-specific information may include characteristics of the patient associated with the contemplated procedure. For example, as mentioned earlier, case-specific information may include characteristics of the patient contemplated for the procedure. Patient characteristics may include, but are not limited to, the patient's sex, age, weight, height, fitness, heart rate, blood pressure, body temperature, medical condition or disease, medical history, previous treatment, or any other relevant characteristic. Other exemplary patient features are described throughout the present disclosure. In some embodiments, the characteristics of the patient may be directly input by the surgeon. For example, patient characteristics may be entered through a graphical user interface, as described above. In other embodiments, the patient's characteristics may be retrieved from a database or other electronic storage location. In some embodiments, the characteristics of the patient may be received from a medical record of the patient. For example, patient characteristics may be retrieved from a medical record or other information source based on an identifier or other information entered by the surgeon. For example, the surgeon may enter a patient identifier, and the patient identifier may be used to retrieve the patient's medical record and/or patient characteristics. As described herein, the patient identifier may be anonymous (e.g., an alphanumeric code or a machine-readable code), or it may identify the patient in a discernible manner (e.g., a patient name or social security number). In some examples, the case-specific information may include characteristics of two or more patients associated with the contemplated procedure (e.g., a contemplated surgical procedure involving two or more patients, such as a transplant).

Case-specific information may include information related to a surgical tool in accordance with the present disclosure. The surgical tool may be any device or instrument used as part of a surgical procedure. Some exemplary surgical tools are described throughout this disclosure. In some embodiments, the information related to the surgical tool may include at least one of a tool type or a tool model. The tool type may refer to any classification of the tool. For example, the tool type may refer to the type of instrument being used (e.g., "scalpel," "scissors," "forceps," "tweezers," or other type of instrument). The tool type may include various other classifications, such as whether the tool is electronic, whether the tool is used for minimally invasive surgery, the material from which the tool is constructed, the dimensions of the tool, or any other distinguishing characteristic. The tool model may refer to a particular brand and/or manufacturer of the instrument (e.g., "15921 mosquito hemostat").

Embodiments of the present disclosure may further include: the case-specific information is compared to data associated with multiple sets of surgical video clips to identify a set of intra-operative events that are likely to be encountered during the contemplated surgical procedure. The data associated with the multiple sets of surgical videos may include any stored information about the surgical video clips. The data may include information identifying an intraoperative surgical event, surgical stage, or surgical event characteristic depicted in or associated with the surgical video clip. The data may include other information such as patient or surgeon characteristics, characteristics of the video (e.g., date of capture, file size, information about the capture device, location of capture, etc.), or any other information about the surgical video clip. This data may be stored as a tag or other data within the video file. In other embodiments, the data may be stored in a separate file. In some implementations, the surgical video clips can be indexed to associate the data with the video clips. Accordingly, the data may be stored in a data structure, such as data structure 600 described above. In one example, comparing case-specific information to data associated with one or more surgical video clips (e.g., with multiple sets of surgical video clips) may include: one or more similarity measures between the case-specific information and data associated with the one or more surgical video clips are calculated, for example, using one or more similarity functions. Further, in one example, the calculated similarity measures may be compared to a selected threshold to determine whether an event occurring in the one or more surgical video clips is likely to occur in the contemplated surgical procedure; for example, a K-nearest neighbor algorithm may be used to predict that an event that typically occurs in the K most similar surgical video clips is likely to be encountered during the contemplated surgical procedure. In some examples, a machine learning model may be trained using training examples to identify intraoperative events likely to be encountered during a particular surgical procedure from information related to the particular surgical procedure, and the trained machine learning model may be used to analyze case-specific information for the contemplated surgical procedure and identify a set of intraoperative events likely to be encountered during the contemplated surgical procedure. Examples of such training examples may include information related to a particular surgical procedure, as well as markers indicating intra-operative events likely to be encountered during the particular surgical procedure.
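
By way of non-limiting illustration, the following Python sketch shows one possible form of the comparison described above, using a Jaccard similarity over simple feature sets and a K-nearest neighbor style selection. The field names, thresholds, and sample repository data are assumptions introduced for illustration only and are not part of the disclosed embodiments.

```python
# Sketch: identify intraoperative events likely to be encountered by comparing
# case-specific features to per-clip metadata with a similarity measure.
from collections import Counter

def jaccard(a: set, b: set) -> float:
    """Similarity between two feature sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def likely_events(case_features: set, clip_records: list, k: int = 5,
                  min_fraction: float = 0.5) -> set:
    # Rank stored clips by similarity to the contemplated procedure.
    ranked = sorted(clip_records,
                    key=lambda rec: jaccard(case_features, rec["features"]),
                    reverse=True)
    neighbors = ranked[:k]
    # Count how often each intraoperative event occurs among the K neighbors.
    counts = Counter(event for rec in neighbors for event in rec["events"])
    return {event for event, n in counts.items() if n / len(neighbors) >= min_fraction}

# Hypothetical usage with illustrative data.
case = {"laparoscopic_cholecystectomy", "male", "age_70s", "diabetes"}
repository = [
    {"features": {"laparoscopic_cholecystectomy", "male", "age_70s"},
     "events": {"bleeding", "adhesions"}},
    {"features": {"laparoscopic_cholecystectomy", "female", "age_40s"},
     "events": {"bile_leak"}},
]
print(likely_events(case, repository, k=2))
```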

The set of intra-operative events likely to be encountered during the contemplated surgical procedure may be determined based on the data. For example, case specific information may be compared to data associated with multiple sets of surgical video clips. This may include comparing features of the contemplated surgical procedures (as represented in case-specific information) to identify surgical video clips associated with surgical procedures having the same or similar features. For example, if the case-specific information includes a medical condition of a patient associated with a contemplated procedure, a set of surgical video clips associated with a surgical procedure on a patient having the same or similar medical condition may be identified. As another example, a surgeon preparing to perform catheterization on a 73 year old male with a family history of diabetes, high cholesterol, high blood pressure, and heart disease may enter the case specific information in order to extract a video clip for review of patients sharing similar features (or patients predicted to present similarly to the particular patient). The set of intra-operative events likely to be encountered during the envisioned surgical procedure may include intra-operative surgical events encountered during the surgical procedure associated with the identified surgical video clip. In some embodiments, a variety of factors may be considered in identifying a surgical video clip and/or a set of intraoperative events likely to be encountered.

Whether an intraoperative event is considered likely to be encountered during a contemplated surgical procedure may depend on how frequently the intraoperative event occurs in surgical procedures similar to the contemplated surgical procedure. For example, intraoperative events can be identified based on the number of times they occur in similar procedures, a ratio of the number of times they occur in similar procedures, or other statistical information based on multiple sets of surgical video clips. In some implementations, an intraoperative event can be identified based on comparing the likelihood to a threshold. For example, an intraoperative event may be identified if it occurs in more than 50% or any other percentage of similar surgical procedures. In some embodiments, the set of intraoperative events can include a hierarchy of intraoperative event tiers based on their likelihood of occurrence. For example, a set may include a tier of intraoperative events with a high likelihood of occurrence and one or more tiers of intraoperative events with a lower likelihood of occurrence.
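
The following sketch illustrates one possible way to implement the frequency thresholding and tiering just described. The threshold values (50% and 10%) and the tier names are illustrative assumptions only.

```python
# Sketch: place each intraoperative event into a likelihood tier based on how
# often it occurred among similar prior procedures.
def tier_events(event_occurrences: dict, total_similar_procedures: int,
                high_threshold: float = 0.5, low_threshold: float = 0.1) -> dict:
    tiers = {"high_likelihood": [], "low_likelihood": []}
    for event, count in event_occurrences.items():
        rate = count / total_similar_procedures
        if rate >= high_threshold:
            tiers["high_likelihood"].append((event, rate))
        elif rate >= low_threshold:
            tiers["low_likelihood"].append((event, rate))
        # Events below low_threshold are not considered likely and are dropped.
    return tiers

# Hypothetical usage: counts of events observed across 100 similar procedures.
print(tier_events({"bleeding": 60, "bile_leak": 12, "conversion_to_open": 3}, 100))
```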

According to some embodiments of the present disclosure, machine learning or other artificial intelligence techniques may be used to identify the set of intraoperative events. Thus, comparing the case-specific information to data associated with the plurality of sets of surgical video clips may include: an artificial neural network is used to identify a set of intraoperative events that are likely to be encountered during a contemplated surgical procedure. In one example, the artificial neural network may be manually configured, may be generated from a combination of two or more other artificial neural networks, and the like. In one example, an artificial neural network may be fed training data that relates various case-specific information to likely intraoperative events encountered. In some embodiments, the training data may include one or more sets of surgical video clips and data associated with the surgical clips included in the repository. The training data may also include non-video related data, such as patient characteristics or past medical history. Using an artificial neural network, a trained model may be generated based on training data. Thus, using the artificial neural network may include providing case-specific information to the artificial neural network as input. As an output of the model, the set of intraoperative events that are likely to be encountered during the envisioned surgical procedure may be identified. Various other machine learning algorithms may be used, including: logistic regression, linear regression, random forest, K Nearest Neighbor (KNN) model (e.g., as described above), K-means model, decision tree, cox proportional hazards regression model, naive bayes model, Support Vector Machine (SVM) model, gradient boosting algorithm, or any other form of machine learning model or algorithm.
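
As one non-limiting illustration of the neural-network approach described above, the sketch below trains a small multi-layer perceptron to map encoded case-specific information to a set of likely intraoperative events. The feature encoding, event names, and the tiny in-line training set are assumptions introduced for illustration; an actual system would derive its training data from the video repository.

```python
# Sketch: multi-label neural network mapping case features to likely events.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row encodes case-specific information (e.g., age bucket, BMI bucket,
# comorbidity flag); each label column corresponds to one intraoperative event.
X_train = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]])
Y_train = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])  # columns: bleeding, bile_leak

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, Y_train)  # multi-label training: one output per event type

new_case = np.array([[1, 0, 1]])  # case-specific information as model input
predicted = model.predict(new_case)[0]
events = [name for name, flag in zip(["bleeding", "bile_leak"], predicted) if flag]
print("likely intraoperative events:", events)
```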

Some aspects of the disclosure may further include: using the case-specific information and the identified set of intraoperative events that are likely to be encountered, a particular frame in a particular set of the plurality of sets of surgical video clips that corresponds to the identified set of intraoperative events is identified. A particular frame in a particular set of the plurality of sets of surgical video clips may be a location in the video clip where an intraoperative event occurred. For example, if a set of intraoperative events includes a complication, the particular frame may include a video clip depicting or otherwise associated with the complication. In some embodiments, the particular frames may include some frames before or after the occurrence of the intraoperative event, for example, to provide context for the intraoperative event. Furthermore, the particular frames may not necessarily be consecutive. For example, if the intraoperative event is an adverse event or adverse outcome, the particular frames may include frames corresponding to the adverse outcome and to the cause of the adverse outcome, which may not be consecutive. The particular frames may be identified based on frame numbers (e.g., a frame number, start and end frame numbers, a start frame number and a number of subsequent frames, etc.), based on temporal information (e.g., start and stop times, duration, etc.), or in any other manner for identifying particular frames of a video clip.

In some embodiments, the particular frames may be identified based on an indexing of a plurality of surgical video clips. For example, as described above, video clips may be indexed to associate clip locations with phase tags, event tags, and/or event characteristics. Thus, identifying a particular frame in a particular set of the plurality of sets of surgical video clips may include: performing a lookup or search for an intra-operative event using a data structure, such as data structure 600 described with respect to Fig. 6.
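
The sketch below illustrates one possible lookup of this kind. The layout of the index (a mapping from event tags to clip identifiers and frame ranges) is an illustrative assumption in the spirit of the data structure discussed with respect to Fig. 6, not its actual definition.

```python
# Sketch: look up the particular frames corresponding to identified events.
event_index = {
    "bleeding": [
        {"clip_id": "proc_017", "start_frame": 1200, "end_frame": 1830},
        {"clip_id": "proc_042", "start_frame": 300, "end_frame": 510},
    ],
    "bile_leak": [
        {"clip_id": "proc_009", "start_frame": 2400, "end_frame": 2650},
    ],
}

def frames_for_events(index: dict, events: set) -> list:
    """Return every (clip, frame-range) entry whose event tag is in the requested set."""
    return [entry for event in events for entry in index.get(event, [])]

# Hypothetical usage for the identified set of intraoperative events.
print(frames_for_events(event_index, {"bleeding", "bile_leak"}))
```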

In accordance with the present disclosure, the identified particular frames may include frames from the plurality of surgical procedures performed on different patients. Thus, the identified particular frames may form a compilation of snippets associated with intraoperative events from surgical procedures performed on different patients, which may be used for surgical preparation. For example, the best video clip examples (in terms of video quality, clarity, representativeness, compatibility with the contemplated surgery, etc.) may be selected from different surgeries performed on different patients, and associated with each other, so that the preparing surgeon may view the best of a set of video clips, e.g., without having to individually review the videos of individual cases one by one.

Embodiments of the present disclosure may also include skipping portions of the identified particular frames, for example, to avoid redundancy, shorten the resulting compilation, remove portions that are less relevant or provide little information, and the like. Thus, some embodiments may include: determining that the first and second sets of video clips from different patients contain frames associated with intraoperative events sharing a common characteristic. The first and second sets of video clips may include frames of the identified particular frames corresponding to the identified set of intra-operative events. The common characteristic may be any characteristic of the intra-operative event that is relevant to determining whether frames from both the first set and the second set should be included. A common characteristic may be used to determine whether the first set and the second set are redundant. For example, the intraoperative event may be a complication that occurs during a surgical procedure, and the common characteristic may be the type of complication. If the complications in the first and second sets of frames are of the same type, it may not be efficient or beneficial for a surgeon preparing for surgery to view both the first and second sets of frames. Thus, only one set may be selected for presentation to the surgeon, and the other set may be skipped. In some embodiments of the present disclosure, the common characteristics may include characteristics of the different patients. For example, common characteristics may include the age, weight, height, or other demographics of the patient, may include the patient's condition, and the like. Various other patient characteristics described throughout this disclosure may also be shared. In other embodiments, the common characteristic may include an intra-operative surgical event characteristic of the contemplated surgical procedure. The intraoperative surgical event characteristics may include any significant characteristic or feature of an intraoperative event, for example, an adverse consequence of a surgical event, a surgical technique, a surgeon skill level, an identity of a particular surgeon, a physiological response, a duration of an event, or any other characteristic or feature associated with an event.
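
The following sketch illustrates one possible form of this redundancy check: candidate frame sets are grouped by a shared characteristic (here, complication type), and only one representative set per characteristic is kept for the compilation. The record fields and the "first one wins" selection rule are illustrative assumptions; a real system might instead keep the clearest or most representative set.

```python
# Sketch: skip later frame sets that share a common characteristic with an
# already-included set, so the compilation is not redundant.
def deduplicate_by_characteristic(frame_sets: list, key: str = "complication_type") -> list:
    kept, seen = [], set()
    for frame_set in frame_sets:
        characteristic = frame_set[key]
        if characteristic in seen:
            continue  # a set with this characteristic is already in the compilation
        seen.add(characteristic)
        kept.append(frame_set)
    return kept

# Hypothetical usage with illustrative candidate sets from different patients.
candidates = [
    {"clip_id": "proc_017", "patient": "A", "complication_type": "bleeding"},
    {"clip_id": "proc_042", "patient": "B", "complication_type": "bleeding"},   # skipped
    {"clip_id": "proc_009", "patient": "C", "complication_type": "bile_leak"},
]
print(deduplicate_by_characteristic(candidates))
```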

According to various exemplary embodiments of the present disclosure, determining that the first and second sets of video clips from different patients contain frames associated with intraoperative events sharing common characteristics may include: an implementation of a machine learning model is used to identify the common characteristics. In one example, a machine learning model may be trained using training examples to identify frames of video clips having particular features, and the trained machine learning model may be used to analyze first and second sets of video clips from different patients to identify frames associated with intra-operative events that share common characteristics. Examples of such training examples may include a video clip and a label indicating a particular feature of a particular frame of the video clip. Various machine learning models are described above, and may include: a logistic regression model, a linear regression model, a random forest model, a K-nearest neighbor (KNN) model, a K-means model, a decision tree, a Cox proportional hazards regression model, a naive Bayes model, a Support Vector Machine (SVM) model, a gradient boosting algorithm, a deep learning model, or any other form of machine learning model or algorithm. Some embodiments of the present disclosure may further include: training a machine learning model using example video clips to determine whether two sets of video clips share a common characteristic, and wherein implementing the machine learning model comprises: implementing the trained machine learning model. In one example, the example video clips may be training clips, which may include paired sets of video clips known to share a common characteristic. The trained machine learning model may be configured to determine whether two sets of video clips share a common characteristic.

The disclosed embodiments may further include: the second set is skipped from the compilation to be presented to the surgeon and the first set is included in the compilation to be presented to the surgeon. As used herein, a compilation may include a series of frames that may be presented for continuous and/or coherent playback. In some implementations, the compilation may be stored as a separate video file. In other embodiments, the compilation may be stored as instructions for presenting the series of frames from their respective surgical video clips, which may, for example, be stored in a repository. The compilation may include additional frames beyond those included in the first set, including other frames from the identified particular frames.

Some aspects of the disclosure may further include: enabling the surgeon to view a presentation that includes a compilation containing frames from different surgical procedures performed on different patients. The presentation may be any form of visual display including a compilation of frames. In some implementations, the presentation may be the compilation video itself. The presentation may include other elements in addition to the compilation, such as menus, controls, indexes, timelines, or other content. In some embodiments, enabling the surgeon to view the presentation may include: outputting data for displaying the presentation, with or without audio, using a display device such as a screen (e.g., OLED, QLED, LCD, plasma, CRT, DLP, electronic paper, or similar display technologies), a projector (e.g., a film projector or slide projector), a 3D display, smart glasses, or any other visual presentation mechanism. In other embodiments, enabling the surgeon to view the presentation may include: the presentation is stored in a location accessible to one or more other computing devices. Such storage locations may include local storage (such as flash memory or hard drives), network locations (such as servers or databases), cloud computing platforms, or any other accessible storage location. Thus, the presentation may be accessed from an external device for display on the external device. In some implementations, outputting the video can include sending the video to an external device. For example, enabling the surgeon to view the presentation may include: the presentation is sent over a network to a user device or other external device for playback on the external device.

The presentation may stitch together different clips from different procedures, presenting the clips to the surgeon in the chronological order in which they might occur during the surgical procedure. The clips may be presented for continuous playback, or may be presented such that the surgeon must take an affirmative action before each subsequent clip in the sequence is played. In some cases, where it may be beneficial for the surgeon to view multiple alternative techniques or to view different responses to adverse events, multiple alternative clips from different surgical procedures may be presented sequentially.

Some embodiments of the present disclosure may further include: enabling a common surgical timeline to be displayed along with the presentation, the common surgical timeline including one or more chronological markers corresponding to one or more of the identified particular frames. For example, the common surgical timeline may be overlaid on the presentation, as discussed above. An example surgical timeline 420 including chronological markers is shown in Fig. 4. The chronological markers may correspond to markers 432, 434, and/or 436. Thus, the chronological markers may correspond to a surgical phase, an intraoperative surgical event, a decision-making node, or another notable occurrence within the identified particular frames being presented. The markers may be represented along the timeline as shapes, icons, or other graphical representations, as described in more detail above. The timeline may be presented along with frames associated with surgeries performed on a single patient, or may be presented along with a compilation of video clips from surgeries performed on multiple patients.

According to some embodiments of the present disclosure, enabling the surgeon to view the presentation may include: sequentially displaying discrete sets of video clips of different surgical procedures performed on different patients. Each discrete set of video clips may correspond to a different surgical procedure performed on a different patient. In some embodiments, the sequential display of the discrete sets of video clips may appear to the surgeon or another user as a continuous video. In other implementations, playback may be stopped or paused between discrete sets of video clips. The surgeon or other user may manually start the next set of video clips in the sequence.

According to some implementations of the disclosure, the presentation may include: displaying a simulated surgical procedure based on the identified set of intraoperative events that are likely to be encountered and/or the particular frames in the particular set of the identified plurality of sets of surgical video clips that correspond to the identified set of intraoperative events. For example, a machine learning algorithm, such as a Generative Adversarial Network (GAN), may be used to train a machine learning model, such as an artificial neural network, a deep learning model, a convolutional neural network, etc., using training examples to generate a simulation of a surgical procedure based on sets of intraoperative events and/or frames of surgical video clips. The trained machine learning model may then be used to analyze the identified set of intraoperative events that are likely to be encountered, and/or the particular frames of the particular sets of surgical video clips that correspond to the identified set of intraoperative events, and to generate a simulated surgical procedure.

In some implementations, sequentially displaying the discrete sets of video clips can include: displaying an index of the discrete sets of video clips such that a surgeon or other user can select one or more of the discrete sets of video clips. The index may be a text-based index, such as a listing of intraoperative events, surgical stages, or other indicators of the different discrete sets of video clips. In other embodiments, the index may be a graphical display (such as the timeline described above) or a combination of graphical and textual information. For example, the index may include a timeline that divides the discrete sets into corresponding surgical stages, as well as textual stage indicators. In such embodiments, the discrete sets may correspond to different surgical stages of a surgical procedure. The discrete sets may be displayed using different colors, with different shading, with a bounding box or separator, or with another visual indicator for distinguishing the discrete sets. The textual stage indicators may describe or otherwise identify the corresponding surgical stage. Textual stage indicators may be displayed within the timeline, above the timeline, below the timeline, or anywhere such that they identify the discrete sets. In some implementations, the timeline may be displayed in a list format, and the textual stage indicators may be included within the list.

In accordance with the present disclosure, the timeline may include intraoperative surgical event markers corresponding to intraoperative surgical events. The intraoperative surgical event marker may correspond to an intraoperative surgical event associated with a location in a surgical video clip. The surgeon may be enabled to click on the intraoperative surgical event marker to display at least one frame depicting the corresponding intraoperative surgical event. For example, clicking on an intraoperative surgical event marker may cause the display of the compiled video to jump to a location associated with the selected marker. In some embodiments, the surgeon may be able to view additional information about the event or occurrence associated with the marker, which may include information summarizing aspects of the procedure or information derived from past surgical procedures, as described in more detail above. Any of the features or functions described above with reference to the timeline overlaid on a surgical video may also be applied to the compiled video described herein.
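
The sketch below illustrates one possible implementation of this marker-driven navigation: each timeline marker stores the frame at which its intraoperative event begins in the compiled video, and selecting a marker moves the playback position to that frame. The marker data and the player abstraction are illustrative assumptions.

```python
# Sketch: jump compiled-video playback to the location of a selected marker.
class CompilationPlayer:
    def __init__(self, markers: dict):
        self.markers = markers          # marker label -> frame index in the compilation
        self.current_frame = 0

    def select_marker(self, label: str) -> int:
        """Jump playback to the frame associated with the selected marker."""
        self.current_frame = self.markers[label]
        return self.current_frame

# Hypothetical usage with illustrative markers.
player = CompilationPlayer({"incision": 0, "bleeding": 1450, "closure": 5200})
print(player.select_marker("bleeding"))  # playback jumps to frame 1450
```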

Embodiments of the present disclosure may also include training a machine learning model to generate an index of the repository based on the intra-operative surgical event, the surgical outcome, the patient characteristics, the surgeon characteristics, and the intra-operative surgical event characteristics, and generating the index of the repository. Comparing the case-specific information to the data associated with the plurality of sets may include searching the index. Various machine learning models described above may be used, including: a logistic regression model, a linear regression model, a random forest model, a K-nearest neighbor (KNN) model, a K-means model, a decision tree, a Cox proportional hazards regression model, a naive Bayes model, a Support Vector Machine (SVM) model, a gradient boosting algorithm, a deep learning model, or any other form of machine learning model or algorithm. A training data set of surgical video clips with known intra-operative surgical events, surgical results, patient characteristics, surgeon characteristics, and intra-operative surgical event characteristics may be used to train the model. The trained model may be configured to determine an intra-operative surgical event, a surgical result, a patient characteristic, a surgeon characteristic, and an intra-operative surgical event characteristic based on additional surgical video clips not included in the training set. When applied to the surgical video clips in the repository, the video clips may be tagged based on the identified characteristics. For example, a video clip may be associated with a clip location, a phase tag, an event location, and/or an event tag, as described above with reference to video indexing. Accordingly, the repository may be stored as a data structure, such as data structure 600 described above.
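
As a non-limiting illustration, the sketch below builds a searchable index by passing each clip through a tagging step and then answers case-specific queries with an index lookup. The `tag_clip` function stands in for trained-model inference and returns canned tags here; all field names and values are assumptions for illustration.

```python
# Sketch: generate a searchable index of the repository from per-clip tags.
def tag_clip(clip_id: str) -> dict:
    # Placeholder for trained-model inference over the clip's frames.
    return {"events": ["bleeding"], "phase": "dissection",
            "outcome": "resolved", "patient": {"age_bucket": "70s"},
            "surgeon": {"skill_level": "senior"}}

def build_index(clip_ids: list) -> dict:
    index = {}
    for clip_id in clip_ids:
        tags = tag_clip(clip_id)
        for event in tags["events"]:
            index.setdefault(event, []).append({"clip_id": clip_id, **tags})
    return index

def search(index: dict, event: str, **filters) -> list:
    # Comparing case-specific information to the repository becomes an index lookup.
    return [rec for rec in index.get(event, [])
            if all(rec.get(k) == v or rec.get("patient", {}).get(k) == v
                   for k, v in filters.items())]

# Hypothetical usage.
idx = build_index(["proc_009", "proc_017"])
print(search(idx, "bleeding", age_bucket="70s"))
```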

Fig. 12 is a flow chart illustrating an example process 1200 of performing surgical preparation consistent with the disclosed embodiments. Process 1200 may be performed by a processing device, such as one or more collocated or decentralized processors described herein. In some implementations, a non-transitory computer-readable medium is provided that may contain instructions that, when executed by a processor, cause the processor to perform the process 1200. The process 1200 is not necessarily limited to the steps shown in Fig. 12, and any steps or processes of the various embodiments described throughout this disclosure may also be included in the process 1200. At step 1210, process 1200 may include accessing a repository of multiple sets of surgical video clips that reflect multiple surgical procedures performed on different patients. The plurality of sets of surgical video clips may include: an intraoperative surgical event, a surgical outcome, a patient characteristic, a surgeon characteristic, and an intraoperative surgical event characteristic. In some implementations, the repository can be indexed, for example, using process 800 to facilitate retrieval and identification of multiple sets of surgical video clips.

At step 1220, process 1200 may include: enabling a surgeon preparing for the contemplated surgical procedure to enter case-specific information corresponding to the contemplated surgical procedure. As noted above, the contemplated surgical procedure may be a planned procedure, a hypothetical procedure, a trial procedure, or another procedure that has not yet occurred. Case-specific information may be entered manually by the surgeon, for example, through a user interface. In some embodiments, some or all of the case-specific information may be received from a medical record of the patient. The case-specific information may include characteristics of the patient associated with the contemplated procedure, information related to the surgical tool (e.g., tool type, tool model, tool manufacturer, etc.), or any other information that may be used to identify the relevant surgical video clips.

At step 1230, process 1200 may include: the case-specific information is compared to data associated with multiple sets of surgical video clips to identify a set of intra-operative events that are likely to be encountered during the contemplated surgical procedure. A set of intra-operative events likely to be encountered may be determined, for example, based on machine learning analysis performed on historical video clips, other historical data in addition to video data, or any other form of data from which predictions may be derived. At step 1240, process 1200 may include: using the case-specific information and the identified set of intraoperative events that are likely to be encountered, a particular frame in a particular set of the plurality of sets of surgical video clips that corresponds to the identified set of intraoperative events is identified. The particular frames identified may include frames from multiple surgical procedures performed on different patients, as described earlier.

At step 1250, process 1200 may include: it is determined that the first and second sets of video clips from different patients contain frames associated with intraoperative events sharing common characteristics, as described earlier. At step 1260, process 1200 may include: the inclusion of the second set is skipped from the compilation to be presented to the surgeon and the first set is included in the compilation to be presented to the surgeon, as described earlier.

At step 1270, process 1200 may include enabling the surgeon to view a presentation that includes a compilation containing frames from different surgical procedures performed on different patients. As described above, enabling the surgeon to view the presentation may include: outputting data to enable display of the presentation on a screen or other display device, storing the presentation in a location accessible to another computing device, transmitting the presentation, or any other process or method that may enable viewing of the presentation and/or assembly.
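
As a non-limiting illustration of process 1200 as a whole, the following sketch ties the steps together: access a repository, accept case-specific input, identify likely intraoperative events, locate matching frame sets, drop redundant sets, and return the compilation for presentation. Every data field and selection rule is an illustrative assumption standing in for the analyses described above.

```python
# Sketch: end-to-end flow corresponding to steps 1210-1270 of process 1200.
REPOSITORY = [  # step 1210: repository of clip metadata from different patients
    {"clip_id": "proc_017", "patient": "A", "features": {"cholecystectomy", "age_70s"},
     "events": {"bleeding"}, "frames": (1200, 1830)},
    {"clip_id": "proc_042", "patient": "B", "features": {"cholecystectomy", "age_70s"},
     "events": {"bleeding"}, "frames": (300, 510)},
    {"clip_id": "proc_009", "patient": "C", "features": {"cholecystectomy", "diabetes"},
     "events": {"bile_leak"}, "frames": (2400, 2650)},
]

def prepare_compilation(case_features: set) -> list:
    # Step 1230: identify intraoperative events likely to be encountered.
    similar = [rec for rec in REPOSITORY if case_features & rec["features"]]
    likely = {event for rec in similar for event in rec["events"]}
    # Step 1240: identify the particular frames corresponding to those events.
    frame_sets = [rec for rec in similar if rec["events"] & likely]
    # Steps 1250-1260: skip later sets sharing a common characteristic (event type).
    compilation, seen = [], set()
    for rec in frame_sets:
        key = frozenset(rec["events"])
        if key not in seen:
            seen.add(key)
            compilation.append(rec)
    return compilation  # step 1270: this compilation is presented to the surgeon

# Step 1220: case-specific information entered by the surgeon (illustrative).
print(prepare_compilation({"cholecystectomy", "age_70s", "diabetes"}))
```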

In preparing for a surgical procedure, it may be beneficial for a surgeon to review video clips of past surgical procedures. However, in some cases, only a particularly complex portion of the surgery may be relevant to the surgeon. For surgeons, identifying portions of a surgical video based on the complexity of the procedure using conventional methods may be too difficult and time consuming. Accordingly, there is a need for an unconventional approach for efficiently and effectively analyzing the complexity of surgical clips and enabling surgeons to quickly review relevant portions of surgical video.

Aspects of the present disclosure may relate to surgical preparation, including methods, systems, devices, and computer-readable media. In particular, in preparation for surgery, a surgeon may need to view portions of the surgical video that have a particular level of complexity. For example, within a typical conventional surgical video, a highly skilled surgeon may want to view only a single event of exceptional complexity. However, it can be time consuming for the surgeon to find the proper video and the proper position in the video. Accordingly, in some embodiments, methods and systems for analyzing the complexity of a surgical clip are provided. For example, the process of viewing a surgical video clip based on complexity may be accelerated by automatically tagging portions of the surgical video with complexity scores, allowing the surgeon to quickly find frames of interest based on complexity.

For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method may involve analyzing frames of a surgical clip to identify anatomical structures in a first set of frames. As described above, a surgical clip may refer to any video, group of video frames, or video clip that includes a representation of a surgical procedure. For example, a surgical clip may include one or more video frames taken during a surgical procedure. The first set of frames may be a grouping of one or more frames included within the surgical clip. In some embodiments, the first set of frames may be consecutive frames; however, this need not be the case. For example, the first set of frames may comprise a plurality of consecutive groups of frames.

As discussed above, an anatomical structure may be any particular portion of a living organism, including, for example, an organ, tissue, tube, artery, cell, or other anatomical portion. The first set of frames may be analyzed to identify anatomical structures using various techniques, such as those described above. In some embodiments, the frames of the surgical clip may be analyzed using an object detection algorithm, as described above. For example, an object detection algorithm may detect objects based on appearance, image features, templates, and the like. In some embodiments, identifying the anatomical structure in the first set of frames comprises: using a machine learning model trained to detect anatomical structures, for example as described above. For example, images and/or videos may be input into a machine learning model as training data along with identifications of anatomical structures known to be depicted in the images and/or videos. As a result, the trained model may be used to analyze the surgical clip to identify anatomical structures in the first set of frames. For example, an artificial neural network configured to identify anatomical structures in images and/or video may be used to analyze the surgical clip to identify anatomical structures in the first set of frames. Various other machine learning algorithms may be used, including: logistic regression, linear regression, random forest, K-nearest neighbor (KNN) model, K-means model, decision tree, Cox proportional hazards regression model, naive Bayes model, Support Vector Machine (SVM) model, gradient boosting algorithm, deep learning model, or any other form of machine learning model or algorithm.
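
The sketch below shows, in simplified form, how per-frame detections could be collected into a first set of frames depicting a structure of interest. The `detect_structures` function stands in for the object-detection or machine-learning step described above, and its output format and the frame data are illustrative assumptions.

```python
# Sketch: identify the frames of a surgical clip in which a given anatomical
# structure is detected.
def detect_structures(frame) -> set:
    # Placeholder for a trained detector; here the frame record itself lists
    # which structures it shows.
    return set(frame["visible_structures"])

def frames_depicting(frames: list, structure: str) -> list:
    """Return the indices of frames in which the requested structure is detected."""
    return [i for i, frame in enumerate(frames) if structure in detect_structures(frame)]

# Hypothetical usage with illustrative frame records.
surgical_clip = [
    {"visible_structures": ["abdominal_wall"]},
    {"visible_structures": ["gallbladder", "liver"]},
    {"visible_structures": ["gallbladder"]},
]
print(frames_depicting(surgical_clip, "gallbladder"))  # -> [1, 2]
```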

Some aspects of the present disclosure may also include accessing first historical data based on analysis of first frame data taken from a first set of prior surgeries. In general, the frame data may include any image or video data describing a surgical procedure, as described herein. The first historical data and/or the first frame data may be stored in one or more storage locations. Thus, accessing the first historical data may include retrieving the historical data from a storage location. In other embodiments, accessing the first historical data may include: the first history data and/or the first frame data are received, for example, from an image capture device or a computing device. Consistent with embodiments of the present disclosure, accessing historical data may include: first frame data is retrieved or received and analyzed to identify first historical data.

The historical data may be any information about prior surgical procedures. Some non-limiting examples of such historical data are described above. In some embodiments, the first historical data may include complexity information associated with a first set of prior surgical procedures. The complexity information may include any data indicative of the complexity of the surgical procedure, as discussed further below. The first historical data may include any other information related to the first set of surgical procedures that may be collected from the first frame of data. For example, the first frame of data may include or indicate information associated with a prior surgical procedure, including: the anatomy involved, the medical tool used, the type of surgical procedure performed, the intraoperative events (including adverse events) that occurred during the procedure, the medical condition exhibited by the patient, patient characteristics, surgeon characteristics, the skill level of the surgeon or other healthcare professional involved, timing information (e.g., the duration of the interaction between the medical tool and the anatomy, the duration of the surgical phase or intraoperative event, the time between the appearance of the medical tool and the first interaction between the medical tool and the anatomy, or other relevant duration or timing information), the condition of the anatomy, the number of surgeons or other healthcare professionals involved, or any other information associated with a prior surgical procedure.

In embodiments where the first historical data includes complexity information, such information may indicate or be associated with the complexity of the surgical procedure or a portion thereof. For example, the first historical data may include an indication of a statistical relationship between a particular anatomical structure and a particular surgical complexity level. A statistical relationship may be any information that may indicate some correlation between a particular surgical complexity level and a particular anatomical structure. For example, events such as exposure of a specific blood vessel during a surgical operation, scratching of a specific portion of an organ, or detection of a specific amount of blood may be statistically correlated with surgical complexity. Similarly, detection of large amounts of fat or a poor condition of an organ may also be correlated with complexity. These are merely examples, and any condition or event associated with the complexity of a surgical procedure may be used as an indication of the complexity of the surgical procedure.

In some implementations, the first historical data can be identified from the first frame data using one or more image or video analysis algorithms (including object detection algorithms and/or motion detection algorithms). In other embodiments, the first historical data may be identified from the first frame data using a machine learning model trained to identify historical data based on the frame data. For example, the machine learning model may be trained using the training examples to identify historical data from the frame data (as described above), and the trained machine learning model may be used to analyze the first frame data to determine the first historical data. Examples of such training examples may include images and/or videos depicting a surgical procedure or portion of a surgical procedure, and markers indicating the complexity of the surgical procedure or portion of the surgical procedure. For example, such indicia may be generated manually, may be generated by a different process, may be read from memory, and the like.

Embodiments of the present disclosure may involve analyzing a first set of frames using first historical data and using identified anatomical structures to determine a first surgical complexity level associated with the first set of frames. As used herein, complexity may be a value or other classifier that indicates the relative complexity of a surgical procedure or a portion of a surgical procedure. For example, the complexity may be based on the difficulty of the surgical procedure relative to other surgical procedures. The difficulty may be based on the level of surgeon skill required to perform one or more techniques involved in the surgical procedure, the likelihood of an adverse event (such as a tear, bleeding, injury, or other adverse event) occurring, the success rate of the surgical procedure, or any other indicator of the difficulty of the procedure. Surgical procedures with higher relative difficulty levels may be associated with higher levels of complexity.

As another illustrative example, the complexity level may be based on the duration or time requirements to complete the surgical procedure or portion thereof. For example, procedures or techniques that require a longer execution time may be considered more complex and may be associated with a higher degree of complexity. As another example, the complexity level may be based on the number of steps required to perform the surgical procedure or portion thereof. For example, procedures or techniques that require more steps may be considered more complex and may be associated with a higher degree of complexity. In some embodiments, the level of complexity may be based on the surgical technique or type of procedure being performed. Certain techniques or procedures may have a predetermined complexity, and the complexity level may be based on the complexity of the technique or procedure involved. For example, a cholecystectomy may be considered more complex than a reticulectomy, and thus, a surgical procedure involving a cholecystectomy may be assigned a higher degree of complexity. Other factors that may be related to complexity may include: information relating to the severity of the disease, complication factors, the anatomy involved, the type of medical tool used, the type of surgical procedure performed, intraoperative events (including adverse events) that occurred during the procedure, the patient's physiological response, the medical condition exhibited by the patient, patient characteristics, surgeon characteristics, skill levels of the surgeon or other healthcare provider involved, timing information (e.g., duration of interaction between a medical tool and an anatomical structure, duration of a surgical phase or intraoperative event, time between presentation of a medical tool and a first interaction between the medical tool and an anatomical structure, or other relevant duration or timing information), condition of an anatomical structure, number of surgeons or other healthcare professionals involved, or any other information associated with a prior surgical procedure. The surgical complexity may not be limited to any of the examples described above, and may be based on a combination of factors, including the examples provided above.

The complexity of the surgical procedure may be expressed in various ways. In some implementations, the complexity level may be expressed as a value. For example, the surgical complexity level may be a value within a range of values corresponding to a complexity scale (e.g., on a scale of 0-5, 0-100, or any other suitable scale). Percentages or other fractions may also be used. In general, a higher value may indicate a higher degree of complexity; however, in some embodiments, the scale may be inverted. For example, complexity level 1 may indicate a higher complexity than complexity level 7. In other implementations, the complexity level may be expressed as a text-based indicator of complexity. For example, the first set of frames may be assigned a complexity level of "high complexity," "medium complexity," "low complexity," or various other classifiers. In some embodiments, the surgical complexity level may correspond to a standardized scale or index used to represent surgical complexity. The surgical complexity level may be specific to a particular type of surgery (or a subset of surgery types), or may be a general complexity level applicable to any surgery.

As mentioned above, the first surgical complexity level may be determined by analyzing the first set of frames using historical data. Analyzing the first set of frames may include any process for determining a complexity level based on information included in the first set of frames. Examples of analyses for determining the complexity of a surgical procedure are provided in more detail below.

Further, the first surgical complexity may be determined using the identified anatomical structure. In some embodiments, the type of anatomical structure involved in the procedure may be indicative, at least in part, of the complexity of the surgical procedure. For example, procedures performed on certain anatomical structures (e.g., anatomical structures associated with the brain or heart of a patient) may be considered more complex. In some embodiments, the condition of the anatomical structure may also be correlated to a determined complexity level, as discussed in more detail below.

Some aspects of the present disclosure may involve analyzing frames of the surgical clip to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames. For example, the second set of frames may depict a portion of the surgical clip in which a surgical procedure is being performed on the anatomical structure. The medical tool may include any equipment or device used as part of a medical procedure. In some embodiments, the medical tool may be a surgical tool, as discussed above. For example, the medical tool may include, but is not limited to, a cutting instrument, a grasping and/or clamping instrument, a retractor, a tissue unifying instrument and/or material, a protective device, a laparoscope, an endoscope, a patient detection device, a patient imaging device, or the like. As discussed above, the interaction may include any action of the medical instrument that may affect the anatomy, and vice versa. For example, the interaction may include contact between the medical instrument and the anatomical structure, an action of the medical instrument on the anatomical structure (such as cutting, clamping, grasping, applying pressure, scraping, etc.), a physiological response of the anatomical structure, or any other form of interaction.

As with the first set of frames, the second set of frames may be a grouping of one or more frames included within the surgical clip. The second set of frames may be consecutive frames or may comprise groups of consecutive frames. In some embodiments, the first set of frames and the second set of frames may be completely distinct. In other embodiments, the first and second sets of frames may include at least one common frame occurring in both the first and second sets of frames. As with the first set of frames, the second set of frames may be analyzed using various techniques to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure. In some embodiments, the frames of the surgical clip may be analyzed using object detection algorithms (e.g., appearance-based detection algorithms, image feature-based detection algorithms, template-based detection algorithms, etc.) and/or motion detection algorithms. In some implementations, identifying the medical tool, the anatomical structure, and the interaction between the medical tool and the anatomical structure in the second set of frames can include using a machine learning model trained to detect the medical tool, the anatomical structure, and the interaction between the medical tool and the anatomical structure. For example, the machine learning model may be trained using training examples to detect the medical tool and/or the anatomical structure and/or the interaction between the medical tool and the anatomical structure from images and/or videos, and the trained machine learning model may be used to analyze the second set of frames to detect the medical tool and/or the anatomical structure and/or the interaction between the medical tool and the anatomical structure. Examples of such training examples may include images and/or video clips of a surgical procedure, and markers indicating at least one of: a medical tool depicted in the image and/or video clip, an anatomical structure depicted in the image and/or video clip, and an interaction between the medical tool and the anatomical structure depicted in the image and/or video clip.
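
The following sketch illustrates one simplified way to flag frames in which a detected tool interacts with a detected anatomical structure. The per-frame bounding boxes and the box-overlap rule used as a stand-in for "interaction" are illustrative assumptions; in practice a trained model could label interactions directly.

```python
# Sketch: flag frames where a medical tool and an anatomical structure interact.
def boxes_overlap(a, b) -> bool:
    """Axis-aligned overlap test on (x1, y1, x2, y2) bounding boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def find_interactions(frames: list) -> list:
    interactions = []
    for i, frame in enumerate(frames):
        tool, structure = frame.get("tool_box"), frame.get("structure_box")
        if tool and structure and boxes_overlap(tool, structure):
            interactions.append(i)  # the tool touches or acts on the structure
    return interactions

# Hypothetical usage: detections for a second set of frames.
second_set = [
    {"tool_box": (10, 10, 40, 40), "structure_box": (100, 100, 200, 200)},
    {"tool_box": (90, 95, 130, 140), "structure_box": (100, 100, 200, 200)},
]
print(find_interactions(second_set))  # -> [1]
```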

In some exemplary embodiments, identifying the anatomical structure in the first set of frames may be based on the identification of the medical tool and a first interaction between the medical tool and the anatomical structure. In some embodiments, the medical tool identified in the first set of frames may be the same tool as the medical tool identified in the second set of frames. Thus, the interaction between the medical tool and the anatomical structure in the second set of frames may be a later interaction between the medical tool and the anatomical structure. This may, for example, facilitate determining a time between a first interaction and a later interaction, which may be indicative, at least in part, of a surgical complexity.

Embodiments of the present disclosure may also include accessing second historical data based on analysis of second frame data taken from a second set of prior surgeries. In some embodiments, the first set of prior surgical procedures and the second set of prior surgical procedures may be of the same type. For example, the first and second historical data may relate to a first and second set of appendectomies, respectively. The first and second sets may differ according to characteristics. As one non-limiting example, the first group may relate to patients exhibiting peritonitis and the second group may include patients not exhibiting peritonitis.

In some embodiments, the first frame data and the second frame data may be the same (i.e., the first history data and the second history data may be based on the same frame data). For example, the first historical data and the second historical data may be based on different analyses of the same frame data. As an illustrative example, consistent with the present embodiment, the first frame data may include an estimate of surgical contact force not included in the second frame data. In some embodiments, the first and second historical data may be based on different subsets of the same frame data (e.g., different surgical stages and/or different surgical procedures).

In some embodiments, the first frame data and the second frame data may be different (i.e., accessed or stored in different data structures). For example, different frames of the same surgical procedure may be analyzed to generate first and second historical data.

In other embodiments, the first set of prior surgeries and the second set of prior surgeries may differ in at least one respect. For example, the first and second sets may include appendectomies but may be different because the first set includes appendectomies that detect abnormal fluid leakage events, while abnormal fluid leakage events are not detected in the second set. In some embodiments, the first set of prior surgeries and the second set of prior surgeries may have at least one surgical procedure in common (e.g., both sets may include an incision). However, in other embodiments, the first set of prior surgical procedures and the second set of prior surgical procedures may not have a common surgical procedure.

In some embodiments, a method may include: tagging the first set of frames with the first complexity level, tagging the second set of frames with the second complexity level, and storing the first set of frames with the first tag and the second set of frames with the second tag in a data structure. This may enable the surgeon to select the second complexity level and thereby cause the second set of frames to be displayed while skipping the display of the first set of frames. In some implementations, a method may include receiving a selection of a complexity level (e.g., receiving a selection based on a user input to an interface). Further, a method may include accessing the data structure to retrieve the selected frames. A method may include displaying frames tagged with the selected complexity level while skipping frames not tagged with the selected complexity level.
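
The sketch below illustrates one possible form of this tagging and selective playback: each set of frames is stored with its complexity tag, and a viewer's selection of a complexity level returns only the matching frame ranges, skipping the rest. The storage layout and complexity values are illustrative assumptions.

```python
# Sketch: tag frame sets with complexity levels and filter playback by selection.
def tag_frame_set(data_structure: list, frame_set: tuple, complexity: int) -> None:
    data_structure.append({"frames": frame_set, "complexity": complexity})

def frames_for_complexity(data_structure: list, selected: int) -> list:
    """Return frame ranges tagged with the selected complexity; others are skipped."""
    return [entry["frames"] for entry in data_structure if entry["complexity"] == selected]

# Hypothetical usage.
store = []
tag_frame_set(store, (0, 1500), complexity=2)      # first set of frames
tag_frame_set(store, (1500, 2600), complexity=4)   # second set of frames
print(frames_for_complexity(store, selected=4))    # -> [(1500, 2600)], first set skipped
```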

Similarly to the first historical data and frame data, the second historical data and frame data may be stored in one or more storage locations. In some embodiments, the second historical data may be stored in the same storage location as the first historical data. In other embodiments, the first and second historical data may be stored at separate locations. Consistent with other embodiments, accessing the second historical data may include: the second historical data and/or the second frame data are received, for example, from an image capture device or a computing device. Further, as with the first historical data, accessing the second historical data may include: second frame data is retrieved or received and analyzed to identify the second historical data. In some implementations, the first historical data and the second historical data can be the same. In other embodiments, the first and second historical data may be different. The second historical data may include information about the second frame data, similar to the first historical data discussed above. The second historical data may include any of the information described above with reference to the first historical data, such as medical tool information, anatomical structure information, and/or associated complexity information. In embodiments where the second historical data includes complexity information, such information may indicate or be associated with the complexity of the surgical procedure or a portion thereof. For example, the second historical data may include an indication of a statistical relationship between a particular anatomical structure and a particular surgical complexity level.

Some aspects of the present disclosure may involve analyzing the second set of frames using the second historical data and using the identified interactions to determine a second surgical complexity level associated with the second set of frames. The second surgical complexity level may be similar to the first surgical complexity level, and thus may be based on one or more of the example factors provided above with reference to the first surgical complexity level. In some embodiments, the second surgical complexity level may be identified in the same format as the first surgical complexity level (e.g., as a value within the same scale, etc.); however, in some embodiments, a different form of representation may be used.

Consistent with embodiments of the present disclosure, the first and second surgical complexity levels may be determined according to various methods. In some embodiments, disclosed embodiments may include: at least one of the first surgical complexity level or the second surgical complexity level is determined using a machine learning model trained to identify the surgical complexity level using frame data captured from a prior surgical procedure. For example, a machine learning algorithm may be used to develop a machine learning model. Training data (which may include frame data captured from a prior surgical procedure and markers indicating the complexity of the surgical procedure known to correspond to the frame data) may be provided to a machine learning algorithm to develop a trained model. The machine learning algorithm may include logistic regression, linear regression, random forest, K-nearest neighbor (KNN) model, K-means model, decision tree, cox proportional hazards regression model, naive Bayes model, Support Vector Machine (SVM) model, artificial neural network, gradient boosting algorithm, or any other form of machine learning model or algorithm. Thus, the first historical data may include a machine learning model trained using first frame data captured from a first set of prior surgeries. Similarly, the second historical data may include a machine learning model trained using second frame data captured from a second set of prior surgeries. As a result, when the trained model is provided with the first set of frames and the second set of frames, the trained model may be configured to determine the first surgical complexity and the second surgical complexity, respectively.
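
As one non-limiting illustration of this approach, the sketch below trains a random forest (one of the model types listed above) to output a surgical complexity level from features derived from frame data. The chosen features (duration, tool count, adverse-event flag), the 1-5 labels, and the tiny in-line dataset are assumptions for illustration only.

```python
# Sketch: model that maps frame-derived features to a surgical complexity level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [phase duration in minutes, number of distinct tools, adverse event seen].
X_train = np.array([[12, 2, 0], [45, 5, 1], [30, 3, 0], [60, 6, 1]])
y_train = np.array([1, 4, 2, 5])  # complexity levels on an assumed 1-5 scale

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Features derived from the first and second sets of frames (illustrative values).
first_set_features = np.array([[15, 2, 0]])
second_set_features = np.array([[50, 5, 1]])
print("first surgical complexity level:", model.predict(first_set_features)[0])
print("second surgical complexity level:", model.predict(second_set_features)[0])
```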

In some exemplary embodiments, determining at least one of the first complexity level or the second complexity level may be based on a physiological response. As discussed above, the physiological response may include any physical or anatomical condition or response of the patient that results, directly or indirectly, from a surgical procedure. For example, the physiological response may include: heart rate changes, physical movement, failure or reduced function of one or more organs, temperature changes, a patient's verbal response, changes in brain activity, respiratory rate changes, perspiration changes, changes in blood oxygen levels, changes in heart function, activation of the sympathetic nervous system, endocrine responses, cytokine production, acute phase responses, neutrophilia, lymphocyte proliferation, or any other physical change in response to surgery. In some embodiments, the physiological response may indicate the complexity of the surgical procedure. For example, a surgical procedure that triggers certain physiological responses may be considered more complex and thus may have a higher level of complexity. For example, a machine learning model may be trained using training examples to identify physiological responses from images and/or videos, and the trained machine learning model may be used to analyze the first set of frames to identify a first physiological response, and/or the second set of frames to identify a second physiological response, and a first surgical complexity level may be determined based on the identified first physiological response, and/or a second surgical complexity level may be determined based on the identified second physiological response. Examples of such training examples may include images and/or video clips of a surgical procedure, and markers indicating physiological responses depicted in the images and/or video clips.

In some exemplary embodiments, determining at least one of the first surgical complexity level or the second surgical complexity level may be based on a condition of the anatomical structure, as mentioned above. For example, the condition may relate to a detected deterioration, tearing, bleeding, swelling, discoloration, or deformation of the anatomical structure, or any characteristic of the anatomical structure reflecting its current state. In some embodiments, the condition of the anatomical structure may include a medical condition affecting the anatomical structure. The medical condition may indicate the purpose or type of surgical procedure being performed and, thus, the associated complexity. For example, if the gallbladder exhibits a large polyp, this may indicate that the surgical procedure involves a cholecystectomy, which may be useful for determining complexity. In other embodiments, the medical condition may indicate one or more complicating factors associated with the surgical procedure. For example, a large hemorrhage occurring at the anatomical structure may indicate a complication that has occurred during the surgical procedure, which may affect the surgical complexity. Alternatively or additionally, the medical condition itself may be associated with a certain degree of complexity. In some embodiments, the condition of the anatomical structure may be a state of the anatomical structure based on a current time period or stage of the surgical procedure. For example, an incision made in an anatomical structure may affect the condition of the anatomical structure and thus change the complexity compared to the complexity before the incision. For example, a machine learning model may be trained using training examples to identify a condition of an anatomical structure from images and/or videos, the trained machine learning model may be used to analyze the first set of frames to identify a first condition of a first anatomical structure and/or the second set of frames to identify a second condition of a second anatomical structure (which may be the same anatomical structure as the first anatomical structure or may be a different anatomical structure), and the first surgical complexity level may be determined based on the identified first condition and/or the second surgical complexity level may be determined based on the identified second condition. Examples of such training examples may include images and/or video clips of anatomical structures, together with labels indicating the condition of the anatomical structure.

In some embodiments of the present disclosure, determining at least one of the first surgical complexity level or the second surgical complexity level may be based on patient characteristics. Patient characteristics may include, but are not limited to: age, gender, weight, height, Body Mass Index (BMI), menopausal status, typical blood pressure, characteristics of the patient's genome, educational status, educational level, economic status, income level, occupational level, insurance type, health status, self-assessed health, functional status, dysfunction, disease duration, disease severity, number of diseases, disease characteristics (such as disease type, tumor size, histological grade, number of infiltrated lymph nodes, etc.), utilization of medical care, number of medical visits, intervals between medical visits, regular sources of medical care, family status, marital status, number of children, family support, race, ethnic group, culture, religion, religious type, native language, characteristics of medical tests performed on the patient in the past (such as test type, test time, test results, etc.), characteristics of medical treatments performed on the patient in the past (such as treatment type, therapy type, time of treatment, treatment outcome, etc.), or any other relevant characteristic. Other example patient characteristics are described throughout this disclosure. These characteristics may be associated with certain surgical complications. For example, a surgical procedure performed on an elderly and/or overweight patient may have a higher complexity level than the same procedure performed on a younger or healthier patient.

According to some embodiments, determining at least one of the first surgical complexity level or the second surgical complexity level may be based on a skill level of a surgeon associated with the surgical footage. For example, if the surgeon depicted in the surgical footage has a low skill level, a surgical procedure that might generally be considered to have a low complexity may become more complex as a result of the reduced skill with which it is performed. Thus, as discussed above, the skill level may be an indication of the surgeon's ability to perform a surgical procedure or a particular technique within a surgical procedure. In some embodiments, the skill level may relate to past performance of the surgeon, the type and/or level of training or education the surgeon has received, the number of surgeries the surgeon has performed, the types of surgeries the surgeon has performed, the qualifications of the surgeon, the years of experience of the surgeon, evaluations of the surgeon by patients or other medical professionals, past surgical results, past surgical complications, or any other information relevant to assessing the skill level of the surgeon. Alternatively or additionally, the skill level of the surgeon may be determined by computer analysis of the video clips. For example, artificial intelligence may be used to classify the skill level of the surgeon, as discussed in more detail below. Although the skill level is described herein as a surgeon's skill level, in some embodiments, the skill level may be associated with another healthcare professional, such as an anesthesiologist, nurse, certified registered nurse anesthetist (CRNA), surgical technician, resident, medical student, physician assistant, or any other healthcare professional. Accordingly, references to a surgeon throughout this disclosure are shorthand for any relevant medical professional.

Some embodiments of the present disclosure may also include determining a level of skill exhibited by the healthcare provider in the surgical footage. Determining at least one of the first complexity level or the second complexity level may be based on the determined skill level exhibited by the healthcare provider. The healthcare provider's skill level may be based on an analysis of the first set of frames or the second set of frames using image and/or video analysis algorithms, such as object and/or motion detection algorithms. For example, a healthcare provider may perform one or more techniques in a manner that demonstrates a certain level of skill. In one example, a machine learning model may be trained using training examples to determine a healthcare provider's skill level from images and/or videos, and the trained machine learning model may be used to analyze the surgical footage and determine the skill level exhibited by the healthcare provider in the surgical footage. Examples of such training examples may include video clips depicting portions of surgical procedures, together with labels indicating the level of skill exhibited in each video clip. In other embodiments, the skill level may be determined based on the identity of the healthcare provider in the surgical footage. For example, based on the identity of the surgeon, the associated skill level may be determined from an external source, such as a database that includes skill level information for various surgeons. Thus, one or more facial recognition algorithms may be used to identify a healthcare provider, and the identity of the healthcare provider may be used to determine the skill level of the healthcare provider.
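A minimal sketch of the identity-based and footage-based determination of skill level described above; the provider identifiers, the skill-level table, and the 1-10 scale are assumptions, and the footage-analysis score stands in for the output of a trained model.

```python
# Illustrative sketch only: determining a healthcare provider's skill level
# either from a hypothetical external skill-level table keyed by identity,
# or from a placeholder video-analysis score when the identity is unknown.
from typing import Optional

# Hypothetical external source mapping provider identity to skill level (1-10).
SKILL_DATABASE = {"surgeon_001": 9, "surgeon_002": 6}

def skill_from_identity(provider_id: Optional[str]) -> Optional[int]:
    """Look up a known provider's skill level; None if not found."""
    return SKILL_DATABASE.get(provider_id) if provider_id else None

def skill_from_footage(video_analysis_score: float) -> int:
    """Map a hypothetical 0-1 technique score to a 1-10 skill level."""
    return max(1, min(10, round(video_analysis_score * 10)))

def determine_skill(provider_id: Optional[str], video_analysis_score: float) -> int:
    # Prefer the identity-based lookup; fall back to the footage-based estimate.
    return skill_from_identity(provider_id) or skill_from_footage(video_analysis_score)

print(determine_skill("surgeon_001", 0.4))  # 9, from the identity lookup
print(determine_skill(None, 0.72))          # 7, from the footage-based estimate
```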

In some exemplary embodiments, determining at least one of the first surgical complexity level or the second surgical complexity level may be based on an analysis of electronic medical records. In some embodiments, information related to the medical history of the patient (which may be included in the electronic medical record) may be related to the complexity of the surgical procedure performed on the patient. For example, the electronic medical records may include surgical history (such as a list of surgical procedures performed on the patient, surgical reports, etc.), obstetrical history (such as a list of pregnancies, and possibly details associated with the pregnancies, such as complications, outcomes, etc.), allergies, past and present medications, immunization history, growth charts and/or developmental history, annotations from past medical encounters (e.g., such annotations may include details about complaints, physical examinations, medical assessments, diagnoses, etc.), test results, medical images (such as X-ray images, computed tomography images, magnetic resonance imaging images, positron emission tomography images, single photon emission computed tomography images, ultrasound images, electrocardiogram images, electroencephalography images, electromyography images, magnetoencephalography images, etc.) and/or information based on the medical images, medical videos and/or information based on the medical videos, orders, prescriptions, family medical history of the patient, etc.

According to an embodiment of the present disclosure, determining the first surgical complexity level may further comprise: a medical tool in the first set of frames is identified. In some embodiments, the medical tool identified in the first set of frames may correspond to the medical tool identified in the second set of frames. For example, it is possible to identify the same tool in both sets of frames. In other embodiments, the medical tool identified in the first set of frames may be different from the medical tool identified in the second set of frames. Determining the first surgical complexity level may be based on a type of medical tool. The type of tool present in the first set of frames may indicate the type and/or complexity of the procedure being performed. For example, if the medical tool is a specialized tool that is used only for certain procedures or types of procedures, the degree of complexity may be determined based at least in part on the complexity associated with those procedures or types of procedures.

In some example embodiments, determining the first surgical complexity level may be based on events occurring after the first set of frames. For example, a surgical event such as a leak occurring in frames subsequent to a first set of frames depicting suturing may affect the complexity level associated with the first set of frames. (For example, a suturing procedure that, based solely on the first set of frames, might otherwise be associated with a low complexity level may rise to a higher complexity level once it is determined from later footage that a leak is likely to occur due to inadequate suturing.) Later events may include any event related to the surgical procedure that affects the surgical complexity of the footage, including the various examples of intraoperative surgical events described throughout this disclosure. As another example, an event occurring after the first set of frames may be an adverse event occurring after the first set of frames, such as bleeding. The occurrence of the event may provide context for determining the first surgical complexity level. In some implementations, events occurring after the first set of frames can be identified based on analysis of additional frames. For example, the event may occur before the second set of frames and may be identified based on analyzing frames between the first set of frames and the second set of frames. In other implementations, the occurrence of an event between the first set of frames and the second set of frames may be inferred based on the second set of frames without analyzing additional frames. Further, in some embodiments, the event may occur after the second set of frames.
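As a minimal sketch of this kind of context-dependent adjustment (not a definitive implementation), the following example raises a base complexity level when later events such as a leak are detected; the phase names, event names, escalation amounts, and the 1-5 scale are assumptions.

```python
# Illustrative sketch only: raising the complexity level assigned to an
# earlier set of frames when a later event (e.g., a leak after suturing)
# provides additional context. The event names and offsets are assumptions.
BASE_COMPLEXITY = {"suturing": 2, "incision": 1, "resection": 3}
EVENT_ESCALATION = {"leak": 2, "bleeding": 2, "additional_surgeon_called": 1}

def contextual_complexity(phase: str, later_events: list[str]) -> int:
    level = BASE_COMPLEXITY.get(phase, 1)
    for event in later_events:
        level += EVENT_ESCALATION.get(event, 0)
    return min(level, 5)  # cap at an assumed maximum of 5

# A suturing segment that would otherwise be low complexity is promoted
# once a leak is detected in subsequent frames.
print(contextual_complexity("suturing", []))        # 2
print(contextual_complexity("suturing", ["leak"]))  # 4
```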

Similarly, in some embodiments, determining the second surgical complexity level may be based on events occurring between the first set of frames and the second set of frames. The event may occur at other times, including at the first set of frames, before the first set of frames, or after the second set of frames. In some embodiments, the first surgical complexity level and/or the second surgical complexity level may be determined based on the occurrence of events using a machine learning model trained to relate events and/or event timings to various complexity levels. As an illustrative example, determining the second surgical complexity level may be based on an indication that an additional surgeon was called after the first set of frames. The indication that an additional surgeon was called may include, for example, a surgeon present in the second set of frames but not present in the first set of frames. Calling an additional surgeon may indicate that the surgeon performing the surgical procedure needs assistance and/or guidance, which may be relevant to determining the complexity of the surgical procedure. In another example, determining the second surgical complexity level may be based on an indication that a particular medication was administered after the first set of frames. For example, the drug may include anesthesia (e.g., local, regional, and/or general anesthesia), barbiturates, benzodiazepines, sedatives, coagulants, or various other drugs that may be administered during a surgical procedure. Administration of the drug may be relevant to determining the complexity of the surgical procedure. In some embodiments, administration of the drug may indicate one or more complications that may have occurred, which may also be relevant to determining the complexity of the surgical procedure.

In accordance with an embodiment of the present disclosure, determining the second surgical complexity level may be based on an elapsed time from the first set of frames to the second set of frames. For example, the elapsed time from the first set of frames to the second set of frames may represent the time between the first appearance of the anatomical structure in the surgical footage and the first interaction of the medical tool with the anatomical structure. As another example, the elapsed time may indicate a time between two surgical stages and/or intraoperative surgical events. For example, in embodiments where determining the first surgical complexity level further includes identifying medical tools in the first set of frames, the first set of frames may indicate one surgical stage (such as an incision) and the second set of frames may indicate a second surgical stage (such as suturing). The time elapsed between these two stages or events may be indicative, at least in part, of the surgical complexity. (For example, an elapsed time greater than the normal time for a particular procedure may indicate that the procedure is more complex than normal.) Other durations within the surgical procedure may also indicate the complexity of the surgical procedure, such as the duration of an action, the duration of an event, the duration of a surgical stage, the duration between an action and the corresponding physiological response, and so forth. The surgical footage may be analyzed to measure such durations, and the determination of the complexity of the surgical procedure may be based on the determined durations.
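A minimal sketch of a duration-based contribution to complexity, assuming hypothetical typical durations, frame indices, and a fixed frame rate; none of the thresholds below are drawn from the disclosure.

```python
# Illustrative sketch only: using the elapsed time between two frame sets
# (e.g., incision to suturing) as one input to a complexity estimate.
# The typical durations and thresholds are assumptions, not clinical values.
TYPICAL_DURATION_SECONDS = {("incision", "suturing"): 45 * 60}

def duration_based_complexity(start_frame_idx: int, end_frame_idx: int,
                              fps: float, phase_pair: tuple[str, str]) -> int:
    elapsed = (end_frame_idx - start_frame_idx) / fps
    typical = TYPICAL_DURATION_SECONDS.get(phase_pair)
    if typical is None:
        return 1
    ratio = elapsed / typical
    if ratio > 1.5:
        return 4   # much longer than typical: likely more complex
    if ratio > 1.0:
        return 3
    return 2

# 90 minutes between the incision frames and the suturing frames at 30 fps.
print(duration_based_complexity(0, 90 * 60 * 30, 30.0, ("incision", "suturing")))
```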

Embodiments of the present disclosure may also include comparing the first surgical complexity level and/or the second surgical complexity level to a selected threshold. In some embodiments, the selected threshold may be used to select which frames should be selected for display and/or inclusion in a data structure. For example, the disclosed method may include determining that the first surgical complexity level is less than the selected threshold and that the second surgical complexity level exceeds the selected threshold. This may indicate that the second set of frames is associated with a complexity level that satisfies a minimum complexity requirement, whereas the first set of frames is not. Accordingly, the disclosed method may further include, in response to determining that the first surgical complexity level is less than the selected threshold and determining that the second surgical complexity level exceeds the selected threshold, storing the second set of frames in the data structure while omitting the first set of frames from the data structure. A surgeon or other user may use the data structure to select for display video that meets the minimum complexity requirement.
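The threshold-based filtering described above might be sketched as follows; the threshold value, the frame-set representation, and the complexity values are assumptions for illustration only.

```python
# Illustrative sketch only: storing only frame sets whose complexity level
# exceeds a selected threshold, and omitting the others from the data
# structure used for later review. Names and the threshold are assumptions.
selected_threshold = 3

frame_sets = [
    {"name": "first set",  "frames": list(range(0, 300)),   "complexity": 2},
    {"name": "second set", "frames": list(range(300, 900)), "complexity": 4},
]

# Only frame sets above the threshold are retained for display.
data_structure = [fs for fs in frame_sets if fs["complexity"] > selected_threshold]

print([fs["name"] for fs in data_structure])  # ['second set']
```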

Some embodiments of the present disclosure may also include tagging the first set of frames with the first surgical complexity level; tagging the second set of frames with the second surgical complexity level; and generating a data structure including the first set of frames with the first tag and the second set of frames with the second tag. The data structure may associate the first and second sets of frames, and other frames of the surgical footage, with corresponding complexity levels such that the footage is indexed for retrieval. Such indexing may correspond to the video indexing discussed in detail above. For example, the surgical complexity level may be an event characteristic as described above and illustrated in the data structure 600 shown in fig. 6. Thus, generating the data structure may enable a surgeon to select the second surgical complexity level and thereby cause the second set of frames to be displayed while skipping display of the first set of frames. For example, a video may be selected for playback based on the process 800 described above with reference to figs. 8A and 8B.
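A minimal sketch of tagging and indexing frame sets by complexity level so that a selected level retrieves only matching frames; the index layout and example values are assumptions.

```python
# Illustrative sketch only: tagging frame sets with complexity levels and
# indexing them so a viewer can jump directly to a chosen complexity level.
from collections import defaultdict

def build_complexity_index(tagged_sets):
    index = defaultdict(list)
    for frames, complexity in tagged_sets:
        index[complexity].append(frames)
    return index

index = build_complexity_index([
    (range(0, 300), 2),    # first set, tagged with the first complexity level
    (range(300, 900), 4),  # second set, tagged with the second complexity level
])

# Selecting complexity level 4 returns the second set and skips the first.
for frames in index[4]:
    print("display frames", frames.start, "to", frames.stop - 1)
```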

Fig. 13 is a flow diagram illustrating an example process 1300 for analyzing the complexity of surgical footage, consistent with the disclosed embodiments. Process 1300 may be performed by at least one processing device, such as a processor described herein. As one example, the processor may include processor 1412 as illustrated in fig. 14. Throughout this disclosure, the term "processor" is used as shorthand for "at least one processor". In other words, a processor may include one or more structures that perform logical operations, whether such structures are collocated, connected, or distributed. In some implementations, a non-transitory computer-readable medium may contain instructions that, when executed by a processor, cause the processor to perform process 1300. Process 1300 is not necessarily limited to the steps shown in fig. 13, and any steps or processes of the various embodiments described throughout this disclosure may also be included in process 1300. At step 1310, process 1300 may include analyzing frames of the surgical footage to identify an anatomical structure in a first set of frames, as previously discussed. In some embodiments, the anatomical structure may be identified using image and/or video analysis algorithms (such as object or motion detection algorithms), as previously discussed. In other embodiments, the anatomical structure may be identified using a machine learning model trained to detect anatomical structures, as described earlier.

At step 1320, process 1300 may include accessing first historical data based on an analysis of first frame data captured from a first set of prior surgical procedures. In some embodiments, the first historical data may include a machine learning model trained using the first frame data captured from the first set of prior surgical procedures, as previously described. At step 1330, process 1300 may include analyzing the first set of frames using the first historical data and the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames. For example, a machine learning model may be trained using training data (e.g., training data based on historical data derived from an analysis of frame data captured from prior surgical procedures) to identify a surgical complexity level associated with a set of frames, and the trained machine learning model may be used to analyze the first set of frames to determine the first surgical complexity level associated with the first set of frames.

At step 1340, process 1300 may include analyzing frames of the surgical footage to identify a medical tool, an anatomical structure, and an interaction between the medical tool and the anatomical structure in a second set of frames, as described in more detail previously. For example, an object detection algorithm and/or a motion detection algorithm may be used to analyze the second set of frames to detect medical tools and/or anatomical structures and/or interactions between medical tools and anatomical structures. In another example, a machine learning model trained using training examples to detect medical tools and/or anatomical structures and/or interactions between medical tools and anatomical structures in images and/or videos may be used. At step 1350, process 1300 may include accessing second historical data based on an analysis of second frame data captured from a second set of prior surgical procedures. In some implementations, the first historical data and the second historical data may be the same. In other embodiments, the first and second historical data may be different. At step 1360, process 1300 may include analyzing the second set of frames using the second historical data and the identified interaction to determine a second surgical complexity level associated with the second set of frames, as previously described.
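The overall flow of process 1300 might be sketched as follows, with each analysis step reduced to a placeholder; the function names and returned values are assumptions and are not part of the disclosed process.

```python
# Illustrative sketch only: the overall shape of process 1300, with the frame
# analysis reduced to placeholder functions. Function names are assumptions.

def identify_anatomy(frames):                 # step 1310
    return "gallbladder"                      # placeholder detection result

def identify_interaction(frames):             # step 1340
    return ("grasper", "gallbladder", "retraction")  # placeholder result

def complexity_from_history(history, evidence):      # steps 1330 and 1360
    return 2 if evidence == "gallbladder" else 4     # placeholder model

def process_1300(first_frames, second_frames, first_history, second_history):
    anatomy = identify_anatomy(first_frames)                        # step 1310
    first_level = complexity_from_history(first_history, anatomy)   # steps 1320-1330
    interaction = identify_interaction(second_frames)               # step 1340
    second_level = complexity_from_history(second_history, interaction)  # steps 1350-1360
    return first_level, second_level

print(process_1300([], [], "history_1", "history_2"))  # (2, 4)
```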

An operating room schedule may need to be adjusted based on delays associated with a surgical procedure. Conversely, if the surgical procedure is completed before the scheduled time, the schedule may also need to be adjusted. The disclosed systems and methods may involve analyzing surgical footage to identify characteristics of the surgery, the patient's condition, and other features in order to determine adjustments to an operating room schedule. Accordingly, there is a need to use information obtained from surgical footage during surgery to adjust operating room schedules in an efficient and effective manner.

Aspects of the present disclosure may relate to adjusting an operating room schedule, including methods, systems, devices, and computer-readable media. The schedule may include a scheduled time associated with completion of an ongoing surgical procedure, as well as scheduled times for beginning and completing future surgical procedures.

Both methods and systems for enabling adjustment of an operating room schedule are described below, with the understanding that aspects of the method or system may be performed electronically over a network, wired, wireless, or both. Other aspects of such a method or system may be performed using non-electronic means. In its broadest sense, the method or system is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools. For ease of discussion, a method is first described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media.

The disclosed embodiments may relate to receiving, from an image sensor located in a surgical operating room, visual data tracking an ongoing surgical procedure. As used herein, visual data may include any form of recorded visual media, including recorded images, one or more frames, images, or clips, and/or data derived directly or indirectly from the foregoing. In addition, the visual data may include sound. For example, the visual data may include a sequence of one or more images captured by an image sensor (such as cameras 115, 121, 123, and/or 125, as described above in connection with fig. 1). Some of the cameras (e.g., cameras 115, 121, and 125) may capture video/image data of the surgical table 141, and the camera 121 may capture video/image data of the surgeon 131 performing the surgery. In some cases, a camera may capture video/image data associated with a surgical team member, such as an anesthesiologist, nurse, surgical technician, or other healthcare professional located in the operating room 101.

In various embodiments, the image sensor may be configured to capture visual data by converting visible light, X-ray light (e.g., via fluoroscopy), infrared light, or ultraviolet light into an image, a sequence of images, video, or any other form of representation. The image/video data may be stored as computer files using any suitable format, such as JPEG, PNG, TIFF, Audio Video Interleave (AVI), Flash Video Format (FLV), QuickTime File Format (MOV), MPEG (MPG, MP4, M4P, etc.), Windows Media Video (WMV), Material Exchange Format (MXF), an uncompressed format, a lossless compression format, a lossy compression format, or other audio or video formats.

The image sensor may be any sensor capable of capturing image or video data. A single sensor may be used, or multiple image sensors may be positioned in the surgical operating room (e.g., the sensors may be positioned throughout the operating room). In an illustrative embodiment, an example image sensor may be positioned over a patient. The example image sensor may be above the operating table, next to a device located in the operating room, or anywhere else capable of detecting information about the surgical procedure. As shown in fig. 1, the image sensor may include cameras 115 to 125. In some cases, the image sensor may be a wearable device (e.g., a head-mounted camera, a body-mounted camera, or any sensor capable of being associated with a person). Additionally or alternatively, the example image sensor may be positioned on the surgical tool (i.e., as part of the surgical instrument). For example, the image sensor may be part of a bronchoscope tube, laparoscope, endoscope, or any other medical instrument configured for use inside or outside a patient (e.g., for use in procedures such as gastroscopy, colonoscopy, hysteroscopy, cystoscopy, flexible enteroscopy, wireless capsule endoscopy, etc.).

The image sensor (particularly when part of a surgical instrument) may include one or more light emitting sources for emitting light of a suitable wavelength, such as visible light, infrared light and/or ultraviolet light. The light-emitting source may include any suitable source (e.g., a light-emitting diode (LED) that emits visible light, a fluorescent light source, an incandescent light source, an infrared LED, an ultraviolet LED, and/or other types of light sources). The image sensor may not be limited to capturing light, but may be configured to process other signals for generating visual data related to the captured signals. For example, the image sensor may be configured to capture ultrasound, changes in electromagnetic fields, or any other suitable signals (e.g., distribution of forces over a surface), etc., to generate visual data related to the captured information.

The surgical procedure may include any medical procedure associated with or involving manual or surgical treatment of a patient's body. Surgical procedures may include cutting, abrading, suturing, and/or other techniques involving measuring, treating, or physically altering body tissues and/or organs. Some non-limiting examples of such surgical procedures may include: laparoscopic surgery, thoracoscopic surgery, bronchoscopic surgery, microscopic surgery, open surgery, robotic surgery, appendectomy, carotid endarterectomy, carpal tunnel release, cataract surgery, cesarean section, cholecystectomy, colectomy (such as partial colectomy, total colectomy, etc.), coronary angioplasty, coronary artery bypass surgery, debridement (e.g., of wounds, burns, infections, etc.), free skin graft, hemorrhoidectomy, hip replacement, hysterectomy, hysteroscopy, inguinal hernia repair, knee arthroscopy, knee replacement, mastectomy (such as partial mastectomy, total mastectomy, modified radical mastectomy, etc.), prostatectomy, prostate removal, shoulder arthroscopy, spinal surgery (such as spinal fusion, laminectomy, foraminotomy, discectomy, disc replacement, an intervertebral implant, etc.), tonsillectomy, cochlear implant surgery, resection of a brain tumor (e.g., meningioma, etc.), interventional procedures such as percutaneous transluminal coronary angioplasty, transcatheter aortic valve replacement, minimally invasive surgery to clear cerebral hemorrhage, or any other medical procedure involving some form of incision. Although the present disclosure is described with reference to surgical procedures, it is to be understood that it may be applicable to other forms of medical procedures or general procedures.

The operating room may be any suitable facility (e.g., a room within a hospital) in which surgical procedures may be performed in a sterile environment. The operating room may be configured to be brightly lit and to have overhead surgical lights. Operating rooms may feature controlled temperature and humidity, and may be windowless. In an example embodiment, the operating room may include an air handler that filters the air and maintains a slightly higher pressure within the operating room to prevent contamination. The operating room may include a power backup system in the event of a power outage, and may include a supply of oxygen and anesthetic gases. The room may include storage space for common surgical supplies, containers for disposables, an anesthesia cart, an operating table, cameras, monitors, and/or other items used for surgery. A dedicated scrub area used by surgeons, anesthesiologists, operating department practitioners (ODPs), and nurses prior to surgery may be part of the operating room. In addition, a map of the operating room may enable a terminal cleaner to rearrange the operating table and equipment into the desired layout during cleaning. In various embodiments, one or more operating rooms may be part of an operating suite, which may form a distinct section within a healthcare facility. The operating suite may include one or more washrooms, preparation and recovery rooms, storage and cleaning facilities, offices, dedicated corridors, and possibly other support units. In various embodiments, the operating suite may be climate- and air-controlled and separated from other departments.

In various embodiments, visual data captured by an image sensor may track an ongoing surgical procedure. In some cases, the visual data may be used to track a region of interest (ROI), such as a region of the patient's body where surgery is performed (e.g., region 127 as shown in fig. 1). In example embodiments, the cameras 115-125 may track the ROI via camera motion, camera rotation, or by zooming towards the ROI, thereby capturing visual data. For example, the camera 115 may be mobile and pointed at an ROI that requires video/image data to be captured during, before, or after a surgical procedure. For example, as shown in fig. 1, the camera 115 may be rotated as indicated by arrow 135A showing a pitch direction and arrow 135B showing a yaw direction of the camera 115. In various embodiments, the pitch and yaw angles of the camera (e.g., camera 115) may be controlled to track the ROI.

In an example embodiment, the camera 115 may be configured to track surgical instruments (also referred to as surgical tools, medical instruments, etc.), anatomical structures, hands of the surgeon 131, incisions, movements of anatomical structures, and/or any other objects within the location 127. In various embodiments, the camera 115 may be equipped with a laser 137 (e.g., an infrared laser) for precise tracking. In some cases, the camera 115 may be automatically tracked via a computer-based control application that uses image recognition algorithms to position the camera to capture video/image data of the ROI. For example, the control application may identify the anatomy, identify the surgical tool at a particular location within the anatomy, the surgeon's hand, bleeding, motion, etc., and track the location with the camera 115 by rotating the camera 115 at the appropriate yaw and pitch angles. In some embodiments, the control application may control the position (i.e., yaw and pitch) of the various cameras 115-125 to capture video/image data from more than one ROI during a surgical procedure. Additionally or alternatively, a human operator may control the position of the various cameras 115-125, and/or the human operator may supervise the control application while controlling the position of the cameras.

As used herein, the term "anatomical structure" may include any particular portion of a living organism, including, for example, one or more organs, tissues, tubes, arteries, cells, or any other anatomical portion. In some cases, a prosthesis, an artificial organ, or the like may be considered an anatomical structure.

Cameras 115-125 may also include zoom lenses for zooming in on one or more ROIs. In an example embodiment, the camera 115 may include a zoom lens 138 for magnifying an ROI (e.g., a surgical tool near the anatomical structure). The camera 121 may include a zoom lens 139 for capturing video/image data from a larger area around the ROI. For example, camera 121 may capture video/image data of the entire location 127. In some embodiments, the video/image data obtained from the camera 121 may be analyzed to identify a ROI during the surgical procedure, and the control application may be configured to zoom the camera 115 toward the ROI identified by the camera 121.

In various embodiments, the control application may be configured to coordinate the position of the various cameras and the zoom during the surgical procedure. For example, the control application may direct the camera 115 to visually track the anatomy and may direct the cameras 121 and 125 to track the surgical instrument. Cameras 121 and 125 may track the same ROI (e.g., surgical instrument) from different perspectives. For example, video/image data obtained from different perspectives may be used to determine the position of the surgical instrument relative to the surface of the anatomical structure.

In various embodiments, the control of the position and zoom of the cameras 115-125 may be rule-based and follow an algorithm developed for a given surgical procedure. For example, the control application may be configured to direct the camera 115 to track the surgical instrument, direct the camera 121 to the location 127, direct the camera 123 to track the movement of the surgeon's hands, and direct the camera 125 to the anatomical structure. The algorithm may include any suitable logic statements determining the position and zoom of the cameras 115 to 125 based on various events during the surgical procedure. For example, the algorithm may direct at least one camera to a region of the anatomical structure in which bleeding occurs during the surgery.
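A minimal sketch of such rule-based camera assignment, assuming hypothetical camera names, ROI labels, and a single illustrative rule for detected bleeding.

```python
# Illustrative sketch only: a rule-based assignment of cameras to regions of
# interest, with one rule that redirects a camera when bleeding is detected.
# Camera names and ROI labels follow the figure; the rules are assumptions.
def assign_cameras(detected_events):
    assignments = {
        "camera_115": "surgical_instrument",
        "camera_121": "location_127",
        "camera_123": "surgeon_hands",
        "camera_125": "anatomical_structure",
    }
    # Example rule: if bleeding is detected, point at least one camera at it.
    if "bleeding" in detected_events:
        assignments["camera_125"] = "bleeding_region"
    return assignments

print(assign_cameras(set()))
print(assign_cameras({"bleeding"}))
```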

In various instances, when a camera (e.g., camera 115) tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument or a moving/pulsating anatomical structure), the control application may determine a maximum allowable zoom of camera 115 such that the moving or deforming object does not fall out of the field of view of the camera. In an example embodiment, the control application may initially select a first zoom for the camera 115, evaluate whether the moving or deforming object is out of the field of view of the camera, and adjust the zoom of the camera as needed to prevent the moving or deforming object from falling out of the field of view of the camera. In various embodiments, the camera zoom may be readjusted based on the direction and speed of the moving or deforming object. In some cases, the control application may be configured to predict future positions and orientations of the cameras 115-125 based on movements of the surgeon's hands, movements of the surgical instrument, movements of the surgeon's body, historical data likely to reflect the next step, or any other data from which future movements may be derived.
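A minimal sketch of the zoom-adjustment idea under a simplifying assumption that the field of view shrinks linearly with zoom; the step size, margin, and example values are assumptions.

```python
# Illustrative sketch only: reducing a camera's zoom until a tracked object's
# bounding extent fits inside the field of view. The geometry is simplified to
# one scale factor and assumes the field of view shrinks linearly with zoom.
def fit_zoom(object_extent: float, base_fov: float,
             initial_zoom: float, step: float = 0.1, margin: float = 0.9) -> float:
    zoom = initial_zoom
    # Field of view at a given zoom, under the simplifying linear assumption.
    while zoom > 1.0 and object_extent > margin * (base_fov / zoom):
        zoom -= step  # zoom out until the object fits with some margin
    return max(zoom, 1.0)

# A pulsating structure spanning 0.30 units with a base field of view of 1.0:
print(round(fit_zoom(object_extent=0.30, base_fov=1.0, initial_zoom=4.0), 2))
```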

Visual data captured by the image sensor may be transmitted via a network to a computer system for further analysis and storage. For example, fig. 14 shows an example system 1401 that may include a computer system 1410, a network 1418, and image sensors 1421 (e.g., cameras located within an operating room) and 1423 (e.g., image sensors that are part of a surgical instrument) connected to the computer system 1410 via the network 1418. The system 1401 may include a database 1411 for storing various types of data related to previously performed surgical procedures (i.e., historical surgical procedure data, which may include historical image, video, or audio data, text data, physician notes, data obtained by analyzing historical surgical procedure data, and other data related to historical surgical procedures). In various embodiments, the historical surgical data may be any surgical data related to a previously performed surgical procedure. Additionally, system 1401 may include one or more audio sensors 1425, lighting devices 1427, and a schedule 1430.

Computer system 1410 may include: one or more processors 1412 for analyzing visual data collected by the image sensors, data storage 1413 for storing visual data and/or other types of information, an input module 1414 for inputting any suitable inputs to computer system 1410, and software instructions 1416 for controlling various aspects of the operation of computer system 1410.

One or more processors 1412 of system 1410 may include a multi-core processor for concurrently processing multiple operations and/or streams. For example, the processor 1412 may be a parallel processing unit to concurrently process visual data from different image sensors 1421 and 1423. In some implementations, the processor 1412 may include one or more processing devices, such as, but not limited to, any of the Pentium or Xeon family of microprocessors manufactured by Intel, the Turion family manufactured by AMD, or various processors from other manufacturers. The processor 1412 may include multiple coprocessors, each configured to perform specific operations such as floating point arithmetic, graphics, signal processing, string processing, or I/O interfacing. In some embodiments, the processor may include a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), etc.

Database 1411 may include one or more computing devices configured with appropriate software to perform operations to provide content to system 1410. The database 1411 may include, for example, an Oracle database, a Sybase database, and/or other relational or non-relational databases, such as Hadoop sequence files, HBase, or Cassandra. In an illustrative embodiment, the database 1411 may include a computing component (e.g., a database management system, a database server, etc.) configured to receive and process requests for data stored in a memory device of the database and to provide the data from the database. As previously discussed, the database 1411 may be configured to collect and/or maintain data associated with a surgical procedure. Database 1411 may collect data from a variety of sources, including, for example, online resources.

Network 1418 may include any type of connection between the various computing components. For example, the network 1418 may facilitate the exchange of information via a network connection that may include an internet connection, a local area network connection, Near Field Communication (NFC), and/or other suitable connection capable of sending and receiving information between components of the system 1401. In some embodiments, one or more components of system 1401 may communicate directly over one or more dedicated communication links.

Various example embodiments of system 1401 may include a computer-implemented method, a tangible, non-transitory computer-readable medium, and a system. For example, the computer-implemented method may be performed by at least one processor that receives instructions from a non-transitory computer-readable storage medium, such as medium 1413, as shown in fig. 14. Similarly, systems and apparatuses consistent with the present disclosure may include at least one processor and a memory, and the memory may be a non-transitory computer-readable storage medium. As used herein, a non-transitory computer-readable storage medium refers to any type of physical memory that can store information or data readable by at least one processor. Examples may include: Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage medium (whether some or all of them are physically located in or near an operating room, in another room of the same facility, at a remote location, or in a cloud-based server farm). Singular terms (such as "memory" and "computer-readable storage medium") may additionally refer to multiple structures, such as a plurality of memories or computer-readable storage media. As referred to herein, unless otherwise specified, a "memory" may include any type of computer-readable storage medium. A computer-readable storage medium may store instructions for execution by at least one processor, including instructions for causing the processor to perform steps or stages consistent with embodiments herein. Additionally, one or more computer-readable storage media may be utilized in implementing a computer-implemented method. The term "computer-readable storage medium" should be understood to include tangible articles and to exclude carrier waves and transitory signals.

The input module 1414 may be any suitable input interface that provides input to the one or more processors 1412. In an example embodiment, the input interface may be a keyboard to input alphanumeric characters, a mouse, a joystick, a touch screen, an on-screen keyboard, a smartphone, an audio capture device (e.g., a microphone), a gesture capture device (e.g., a camera), and other devices for inputting data. When a user enters information, the information may be displayed on a monitor to ensure the correctness of the input. In various embodiments, the input may be analyzed, validated, or altered before being submitted to the system 1410.

The software instructions 1416 may be configured to control various aspects of the operation of the system 1410, which may include: receiving and analyzing visual data from the image sensor, controlling various aspects of the image sensor (e.g., moving the image sensor, rotating the image sensor, operating a zoom lens of the image sensor to zoom toward an example ROI, and/or other movements), and controlling various aspects of other devices in the operating room (e.g., controlling operation of the audio sensor, chemical sensor, light emitting device, and/or other devices).

As previously described, the image sensor 1421 may be any suitable sensor capable of capturing image or video data. Such sensors may be, for example, cameras 115 to 125.

Audio sensor 1425 may be any suitable sensor that captures audio data. The audio sensor 1425 may be configured to capture audio by converting sound into digital information. Some examples of audio sensor 1425 may include: microphones, unidirectional microphones, bidirectional microphones, cardioid directional microphones, omni-directional microphones, on-board microphones, wired microphones, wireless microphones, any combination of the above, and any other sound capture device.

The light emitting device 1427 may be configured to emit light, for example, so that better image capturing can be achieved by the image sensor 1421. In some embodiments, the emission of light may be coordinated with the photographing operation of the image sensor 1421. Additionally or alternatively, the emission of light may be continuous. In some cases, the emission of light may be performed at selected times. The emitted light may be visible light, infrared light, ultraviolet light, deep ultraviolet light, X-rays, gamma rays, and/or in any other portion of the spectrum.

As described below, the schedule 1430 may include an interface that displays the scheduled times associated with the completion of the ongoing surgical procedure, as well as the scheduled times for starting and completing future surgical procedures. The schedule 1430 may be implemented using any suitable approach (e.g., as a stand-alone software application, as a website, as a spreadsheet, or any other suitable computer-based application or paper-based document). Example schedule 1430 may include a list of procedures and a list of start times and completion times associated with particular procedures. Additionally or alternatively, the schedule 1430 may include a data structure configured to represent information related to the schedule of at least one operating room and/or information related to the schedule of at least one surgical procedure (such as a scheduled time associated with completion of an ongoing surgical procedure), and scheduled times for starting and completing future surgical procedures.

Fig. 15 shows a schedule 1430, which may include listed procedures, such as procedures A-C (e.g., surgical procedures, or any other suitable medical procedures that may be performed in an operating room using the schedule 1430). For each procedure A to C, a corresponding start time and end time may be determined. For example, for past procedure A, start time 1521A and completion time 1521B may be actual start and completion times. (Since procedure A is complete, the schedule 1430 may be automatically updated to reflect the actual times.) Fig. 15 shows that for current procedure B, start time 1523A may be an actual time, while completion time 1523B may be an estimated time (and recorded as an estimated time). Additionally, for procedure C scheduled to be performed in the future, a start time 1525A and a completion time 1525B may be estimated and recorded. It should be noted that the schedule 1430 is not limited to displaying and/or maintaining the listed procedures and the start/completion times of the procedures, but may include various other data associated with the example surgical procedures. For example, the schedule 1430 may be configured to allow users of the schedule 1430 to interact with various elements of the schedule 1430 (for the case when the schedule 1430 is represented by a computer-based interface, such as a web page, a software application, and/or another interface). For example, the user may be allowed to click on or otherwise select zones 1513, 1515, or 1517 to obtain details of procedure A, B, or C, respectively. Such details may include: patient information (e.g., the patient's name, age, medical history, etc.), surgical information (e.g., the type of surgery, the type of tools used for the surgery, the type of anesthesia used for the surgery, and/or other characteristics of the surgery), and healthcare provider information (e.g., the name of the surgeon, the name of the anesthesiologist, the experience of the surgeon, the success rate of the surgeon, a surgeon rating based on the surgeon's surgical outcomes, and/or other data related to the surgeon). Some or all of the foregoing information may already be present in areas 1513, 1515, and 1517 without further drilling down.
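One possible in-memory representation of schedule 1430, sketched with assumed fields and example times; the dataclass layout is illustrative and not the disclosed schedule format.

```python
# Illustrative sketch only: a minimal representation of schedule 1430, with
# actual times for completed procedures and estimated times for current and
# future procedures. The dataclass fields and example times are assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ScheduledProcedure:
    name: str
    start: datetime
    completion: datetime
    start_is_estimate: bool
    completion_is_estimate: bool
    details: Optional[dict] = None  # patient, surgical, and provider information

schedule_1430 = [
    ScheduledProcedure("Procedure A", datetime(2020, 2, 20, 8, 0),
                       datetime(2020, 2, 20, 10, 30), False, False),
    ScheduledProcedure("Procedure B", datetime(2020, 2, 20, 11, 0),
                       datetime(2020, 2, 20, 13, 0), False, True),
    ScheduledProcedure("Procedure C", datetime(2020, 2, 20, 14, 0),
                       datetime(2020, 2, 20, 16, 0), True, True),
]

for p in schedule_1430:
    flag = "estimated" if p.completion_is_estimate else "actual"
    print(p.name, "completes at", p.completion.time(), f"({flag})")
```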

In various embodiments, surgical information may be entered by a healthcare provider (e.g., a nurse, surgical assistant, surgeon, and/or other healthcare professional) via an example form 1601 as shown in fig. 16. For example, form 1601 may have: an "urgency" field in which a healthcare provider may specify the urgency of a scheduled surgical procedure; a "surgical type" field in which the healthcare provider can specify the type of surgery (or the name of the surgery); a "complications" field in which a healthcare provider may specify aspects of the patient's medical history that may lead to complications during the surgical procedure; and "patient profile" fields (such as "name," "address," "date of birth," "contact," and "emergency contact") in which a healthcare provider can specify corresponding information about the patient. Further, form 1601 may include a "medical history" field that may be used to describe the patient's medical history (e.g., the "medical history" field may be a drop-down list, a space where a healthcare provider may enter text describing the patient's medical history, or any other suitable graphical user interface element that may be used for a description of the patient's medical history). Additionally, form 1601 may include "surgical team" related fields that may specify the names and responsibilities of the medical personnel scheduled to perform the surgical procedure for the patient. Information about multiple healthcare providers may be added via the "add next member" button, as shown in fig. 16. Form 1601 is merely one illustrative example of a form having a few exemplary fields that may be used to enter information about a surgical procedure into the schedule 1430, and any other suitable form that allows relevant information for the schedule 1430 to be entered may be used. The number of information fields on the form and the type of information captured may be a matter of administrator preference. Additionally or alternatively, information for the surgical procedure may be received from other sources, such as a Hospital Information System (HIS), an Electronic Medical Record (EMR), a scheduled operating room schedule, a digital calendar, an external system, and so forth.

Aspects of embodiments for enabling adjustment of an operating room schedule may include: accessing a data structure containing information based on historical surgical data; and analyzing the visual data of the ongoing surgical procedure and the historical surgical procedure data to determine an estimated time of completion of the ongoing surgical procedure. In various embodiments, any of the steps of the method may be performed by one or more processors of system 1410 executing software instructions 1416.

The data structures may be stored in the database 1411 and may be accessed via the network 1418, or may be stored locally in a memory of the system 1410. The data structures containing historical surgical data may include any suitable data (e.g., image data, video data, text data, numerical data, spreadsheets, formulas, software code, computer models, and/or other data objects), and any suitable relationships (or combinations of data values) between various data values. The data may be stored linearly, horizontally, hierarchically, relationally, non-relationally, one-dimensionally, multi-dimensionally, operationally, in an ordered manner, in an unordered manner, in an object-oriented manner, in a centralized manner, in a decentralized manner, in a distributed manner, in a custom manner, or in any manner enabling data access. As non-limiting examples, the data structure may include: arrays, associative arrays, linked lists, binary trees, balanced trees, heaps, stacks, queues, collections, hash tables, records, tagged unions, ER models, and graphs. For example, the data structure may include: XML code, an XML database, an RDBMS database, an SQL database, or NoSQL alternatives for data storage/search, such as, for example, MongoDB, Redis, Couchbase, Datastax Enterprise Graph, Elasticsearch, Splunk, Solr, Cassandra, Amazon DynamoDB, Scylla, HBase, and Neo4J. The data structure may be a component of the disclosed system or a remote computing component (e.g., a cloud-based data structure). The data in the data structure may be stored in contiguous or non-contiguous memory. Furthermore, as used herein, a data structure does not require that information be co-located. For example, the data structure may be distributed over multiple servers, which may be owned or operated by the same or different entities. Thus, the term "data structure" as used herein in the singular includes a plurality of data structures.

In an example embodiment, the data structure may include: the type of procedure (e.g., bypass surgery, bronchoscopy, or any other surgical procedure as described above), one or more characteristics of the patient (e.g., age, gender, medical considerations that may affect the procedure, past medical history, and/or other patient information), the name and/or characteristics of the operating surgeon and/or anesthesiologist, and the time required to complete the procedure. In some cases, the time to complete the procedure may include: the time to prepare the operating room, the time to prepare the patient for the surgery, the time required for medical personnel (i.e., nurses, surgeons, anesthesiologists, etc.), the time required for the patient to be anesthetized or to fall asleep, the time required to clean the operating room, or any other surgery related time required to place the operating room in the condition for the next surgery.

In an example embodiment, the data structure may be a relational database having one or more database tables. For example, fig. 17A illustrates an example of a data structure 1701, which may include data tables 1711 and 1713. In an example embodiment, the data structure 1701 may be part of a relational database, may be stored in memory, or the like. Tables 1711 and 1713 may include a plurality of records (e.g., records 1 and 2 as shown in fig. 17A), and may have various fields, such as the fields "record number", "procedure", "age", "sex", "medical attention", "time", and "other data". For example, the field "record number" may include a record identifier that may be an integer, the field "procedure" may include the name of the surgical procedure, the field "age" may include the age of the patient, the field "sex" may include the sex of the patient, the field "medical attention" may include information about the medical history of the patient (which may be related to the surgical procedure named in the field "procedure"), the field "time" may include the time required for the surgical procedure, and the field "other data" may include a link to any other suitable data related to the surgical procedure. For example, as shown in fig. 17A, table 1711 may include links to: data 1712A, which may correspond to image data; data 1712B, which may correspond to video data; data 1712C, which may correspond to textual data (e.g., annotations recorded during or after surgery, patient records, post-operative reports, etc.); and data 1712D, which may correspond to audio data. In various embodiments, image, video, or audio data may be captured during a surgical procedure. In some cases, the video data may also include audio data. The image, video, text, or audio data 1712A-1712D are just some examples of the data that may be collected during a surgical procedure. Other data may include vital sign data of the patient, such as heart rate data, blood pressure data, blood test data, blood oxygen levels, or any other patient-related data recorded during the surgical procedure. Some additional examples of data may include: room temperature, the type of surgical instruments used, or any other data related to the surgical procedure and recorded before, during, or after the surgical procedure.

As shown in fig. 17A, tables 1711 and 1713 may include records of surgical procedures. For example, record 1 of table 1711 indicates that bypass surgery was performed on a 65-year-old male with kidney disease and was completed within 4 hours. Record 2 of table 1711 indicates that bypass surgery was performed on a 78-year-old female with no background medical condition that could complicate surgery and was completed within 3 hours. Table 1713 shows that the 65-year-old male underwent bypass surgery performed by Dr. Mac and the 78-year-old female underwent bypass surgery performed by Dr. Doe. The patient characteristics listed in table 1711, such as age, sex, and medical considerations, are just some exemplary patient characteristics, and any other suitable characteristics may be used to distinguish one surgery from another. For example, patient characteristics may also include: patient allergies, patient anesthesia tolerance, various details of the patient's case (e.g., how many arteries need to be treated during a bypass surgery), the weight of the patient, the size of the patient, anatomical details of the patient, or any other patient-related feature that may have an effect on the duration (and success) of the surgery.

The data structure 1701 may have any other number of suitable tables that may characterize any suitable aspect of the surgical procedure. For example, data structure 1701 may include a table indicating the identity of the associated anesthesiologist, the time of day of the surgery, whether the surgery was the first, second, or third procedure performed by the surgeon (e.g., during the surgeon's lifetime, during a particular day, etc.), the associated anesthesia nurse assistant, whether there were any complications during the surgery, and any other information related to the surgery.
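A minimal sketch of data structure 1701 as two relational tables, populated with records analogous to those of fig. 17A; the column names, SQLite backend, and link placeholders are assumptions introduced only for illustration.

```python
# Illustrative sketch only: data structure 1701 represented as two relational
# tables, populated with records analogous to those described for fig. 17A.
# Column names and values are assumptions based on the description above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table_1711 (
    record_number INTEGER PRIMARY KEY, procedure TEXT, age INTEGER,
    sex TEXT, medical_attention TEXT, time_hours REAL, other_data TEXT)""")
conn.execute("""CREATE TABLE table_1713 (
    record_number INTEGER PRIMARY KEY, procedure TEXT, surgeon TEXT)""")

conn.executemany("INSERT INTO table_1711 VALUES (?, ?, ?, ?, ?, ?, ?)", [
    (1, "bypass surgery", 65, "male", "kidney disease", 4.0, "link_to_1712B"),
    (2, "bypass surgery", 78, "female", "none", 3.0, "link_to_1712B"),
])
conn.executemany("INSERT INTO table_1713 VALUES (?, ?, ?)", [
    (1, "bypass surgery", "Dr. Mac"),
    (2, "bypass surgery", "Dr. Doe"),
])

# Join the two tables to relate completion times to the operating surgeon.
rows = conn.execute("""SELECT a.procedure, a.age, a.time_hours, b.surgeon
                       FROM table_1711 a JOIN table_1713 b
                       ON a.record_number = b.record_number""").fetchall()
print(rows)
```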

Accessing the data structure may include reading information from the data structure and/or writing information to the data structure. For example, reading from and/or writing to the data structure may include reading and/or writing any suitable historical surgical data, such as historical visual data, historical audio data, historical textual data (e.g., notes taken during an example historical surgical procedure), and/or other historical data formats. In an example embodiment, accessing the data structure may include reading data from and/or writing data to database 111 or any other suitable electronic storage repository. In some cases, the written data may include printed data (e.g., a printed report on paper that includes historical data).

The disclosed embodiments may further include: analyzing the visual data of the ongoing surgical procedure using the data structure to determine an estimated time to completion of the ongoing surgical procedure. The estimated completion time may be any suitable indicator of the estimated completion of the surgical procedure, including, for example: a time of day at which the surgical procedure is expected to be completed, a time remaining before completion, an estimated total duration of the surgical procedure, a probability distribution over completion time values for the surgical procedure, and the like. Moreover, the completion time may include additional statistical information indicating a likelihood of completion based on historical surgical data (e.g., a standard deviation associated with the historical completion times, an average historical completion time, a mean of the historical completion times, and/or other statistical indicators of the completion time). In some examples, a machine learning model may be trained using training examples to estimate a completion time of a surgical procedure from images and/or videos, and the trained machine learning model may be used to analyze visual data and determine an estimated completion time of the surgical procedure in progress. Examples of such training examples may include images and/or video of a surgical procedure, together with labels indicating an estimated completion time of the surgical procedure. For example, the labels of the training examples may be based on at least one of a data structure containing information based on historical surgical data, historical data, user input, and the like. For example, the training examples may include images and/or video from at least one of a data structure containing information based on historical surgical data, historical data, and the like.

In one example, prior to starting a surgical procedure, historical surgical procedure data may be analyzed to determine an initial estimated time to completion (also referred to herein as a completion time) of the surgical procedure in progress, or the initial estimated time to completion of the surgical procedure in progress may be received by other means (e.g., from a user, from a scheduling system, from an external system, etc.).

In some implementations, the average historical completion time may be used to determine the estimated completion time. For example, the average historical completion time may be calculated for historical surgical procedures of the same type as the surgical procedure in progress, and the average historical completion time may be used as the estimated completion time. In another example, similar historical surgeries may be selected (e.g., using a K-nearest neighbor algorithm, using similarity measures between surgeries, etc.), and an average historical completion time may be calculated for the selected similar historical surgeries.
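
As one possible illustration of selecting similar historical surgeries with a K-nearest neighbor algorithm and averaging their completion times, the sketch below assumes that each historical procedure has already been encoded as a numeric feature vector (here, patient age and weight); the feature choice, the data values, and the use of scikit-learn are assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical historical data: each row is a feature vector for one past
# surgery (patient age, patient weight), with its completion time in hours.
features = np.array([[65, 80], [78, 62], [55, 90], [70, 75], [60, 85]], dtype=float)
completion_hours = np.array([4.0, 3.0, 3.5, 4.2, 3.8])

def estimate_completion_time(query_features, k=3):
    """Average the completion times of the k most similar historical surgeries."""
    nn = NearestNeighbors(n_neighbors=k).fit(features)
    _, indices = nn.kneighbors([query_features])
    return completion_hours[indices[0]].mean()

# Estimate for an ongoing procedure on a 68-year-old, 78 kg patient (illustrative).
print(estimate_completion_time([68, 78]))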

The analysis of the historical data may involve any suitable statistical data analysis, such as determining expected completion time values based on a probability distribution function, using Bayesian inference to determine how the probability distribution function is affected by various patient/surgeon characteristics (e.g., the age of the patient), linear regression, and/or other methods of quantifying statistical relationships. For example, FIG. 17B shows an example graph 1703 of points 1715 representing a distribution of completion times for a particular surgical procedure (e.g., a bypass surgical procedure) for patients of different ages. For example, point 1715A shows that, in a certain case, a patient of age A0 required time T0 to complete the surgical procedure. The data of points 1715 may be used to construct a linear regression model 1717, and the regression model 1717 may be used to determine the expected completion time T1 for patients of age A1 from point 1718 on the linear regression model. While the example graph 1703 shows the dependence of completion time on one characteristic parameter of the patient (e.g., patient age), the completion time may depend on multiple characteristic parameters (e.g., the weight of the patient, characteristics of the medical professional performing the surgical procedure, characteristics of the anesthesiologist, and other data describing the patient or the procedure), as previously discussed; in that case, the points 1715 may be plotted in a multi-dimensional Cartesian coordinate system, and the regression model 1717 may comprise a multivariate regression model. In other examples, the regression model 1717 may include a non-linear regression model.
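
A minimal sketch of regression model 1717 in FIG. 17B, fitting completion time against a single patient characteristic (age) and predicting the expected completion time T1 for a patient of age A1; the data points are invented, and an ordinary least-squares fit stands in for whatever regression the system actually uses.

import numpy as np

# Invented data points 1715: (patient age, completion time in hours).
ages = np.array([45, 52, 60, 65, 71, 78], dtype=float)
times = np.array([2.8, 3.0, 3.3, 3.6, 3.9, 4.3])

# Fit a first-order (linear) regression model, analogous to model 1717.
slope, intercept = np.polyfit(ages, times, deg=1)

def expected_completion_time(age_a1):
    """Point 1718: expected completion time T1 for a patient of age A1."""
    return slope * age_a1 + intercept

print(expected_completion_time(68))  # T1 for A1 = 68 (illustrative)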

In an example embodiment, determining the estimated completion time may be based on one or more stored characteristics associated with a healthcare professional performing the ongoing surgical procedure. Such characteristics may include the healthcare professional's age, name, years of experience, location, past performance, and/or other information describing the healthcare professional, e.g., as described above. The characteristics may be stored using any suitable electronic (or, in some cases, paper) storage using any suitable data structure. In an example embodiment, the characteristics may be stored in a database (e.g., database 1411 as shown in FIG. 14). For example, an expected completion time may be estimated based on an analysis of historical data for a given healthcare professional for a given type of surgical procedure (e.g., the expected completion time may be an average completion time determined from historical data for the given healthcare professional for the given type of surgical procedure). Also, using historical data for a given healthcare professional for a given type of surgical procedure, other statistics may be determined (e.g., a standard deviation from the expected completion time, a correlation of the expected completion time with other characteristics of the surgical procedure, such as the age of the patient or the time of day the procedure was performed, and/or other statistics generated from historical completion times).

Fig. 18 illustrates an exemplary implementation of obtaining a completion time 1815 using a machine learning model 1813. The model 1813 may have as input parameters 1811 various characteristics of the patient, various characteristics of the medical personnel, and the type of surgical procedure being administered to the patient. For example, as shown in fig. 18, parameter P1 may indicate the type of surgery, parameter P2 may indicate the age of the patient, parameter PN may indicate the weight of the patient, and so forth. Various other parameters may be used, such as the type of surgical instrument being used, the size of the anatomy being operated on, and so forth.

In various embodiments, completion time 1815 may be calculated using a model 1813, which may include a machine learning model (such as a neural network, a decision tree, a model based on an ensemble learning method (such as random forest)), or any other machine learning model, e.g., as described above. In some cases, model 1813 may be configured to return a single number associated with the completion time, and in some embodiments, model 1813 may be configured to return a probability distribution of the completion time.

In various embodiments, the model 1813 may be trained using a data set that contains suitable parameters 1811 corresponding to historical surgical data, which may include historical completion times for various patients undergoing a given surgical procedure.
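
The sketch below illustrates one way model 1813 could be realized as a random forest regressor over parameters P1 through PN (here, procedure type, patient age, and patient weight); the numeric encoding of the parameters and the training values are assumptions made for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: each row encodes parameters P1..PN of FIG. 18
# as [procedure type code, patient age, patient weight].
X_train = np.array([[0, 65, 80], [0, 78, 62], [1, 40, 70], [1, 55, 95], [0, 59, 77]])
y_train = np.array([4.0, 3.0, 1.5, 2.0, 3.6])  # historical completion times (hours)

# Model 1813 realized as an ensemble-learning model (random forest).
model_1813 = RandomForestRegressor(n_estimators=100, random_state=0)
model_1813.fit(X_train, y_train)

# Completion time 1815 for an ongoing procedure with parameters 1811.
parameters_1811 = np.array([[0, 70, 82]])  # illustrative values
completion_time_1815 = model_1813.predict(parameters_1811)[0]
print(completion_time_1815)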

Embodiments of the present disclosure may further include: analyzing visual data of the ongoing surgical procedure along with historical surgical procedure data to determine an estimated time to completion of the ongoing surgical procedure. Such analysis may be performed by machine learning and/or other techniques described herein to determine an estimated completion time. In one example embodiment, to determine the completion time of a surgical procedure, the method may utilize a machine learning model that takes as input information such as the type of surgery, one or more items of visual data of the surgical procedure in progress (such as an image of the surgery or video data of the surgery), and patient and/or medical personnel characteristics, and returns as output an estimated time to completion. In some examples, the historical surgical data as well as the visual data of the surgical procedure in progress may be analyzed, e.g., using a visual similarity function, using an inexact graph matching algorithm on graphs representing the visual data, using a K-nearest neighbor algorithm, etc., to identify records in the historical surgical data that are similar to the surgical procedure in progress. Further, in some examples, the identified records may be used to determine an estimated time to completion of the surgical procedure in progress. For example, a function of the completion times (such as a mean, median, mode, statistical function, linear function, non-linear function, etc.) may be calculated from the identified records, and the estimated completion time for the surgical procedure in progress may be based on the calculated function. In an example embodiment, visual data may be collected for an ongoing surgical procedure at times separated by predetermined time intervals (e.g., visual data may be collected every second, every few seconds, every few tens of seconds, every minute, or at any other suitable interval). Additionally or alternatively, the visual data may be collected at a time requested by medical personnel (e.g., the visual data may be collected at a time requested by a surgeon and/or anesthesiologist and/or nurse or any other designated individual). For example, the surgeon may generate a visual or audio signal (e.g., a gesture, a body posture, a visual signal generated by a light source produced by a medical instrument, spoken words, or any other trigger) that may be captured by one or more image/audio sensors and identified as a trigger for collecting visual data. Additionally or alternatively, visual data may be collected based on characteristic events detected during a surgical procedure, as described further below.

In various embodiments, adjusting the operating room schedule may include training a machine learning model using historical visual data to estimate a completion time, and calculating the estimated completion time may include implementing the trained machine learning model. An example of input data for the machine learning model may include a plurality of visual data records and parameters. A visual data record may be a plurality of frames of an image set and/or video taken by an image sensor during a particular time interval of a surgical procedure. For example, a first visual data record may be video data for the first few minutes of the surgical procedure, a second visual data record may be video data for the next few minutes of the surgical procedure, and a third visual data record may be video data for the subsequent few minutes of the surgical procedure. In some examples, the machine learning model may be trained and/or used as described above.
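
As an illustration of forming visual data records at predetermined time intervals, the sketch below reads frames from a surgical video with OpenCV and groups them into consecutive fixed-length records; the file path, the one-minute interval, and the use of OpenCV are assumptions.

import cv2  # opencv-python, assumed to be available

def split_into_records(video_path, record_seconds=60):
    """Group the frames of a surgical video into fixed-length visual data records."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unreported
    frames_per_record = int(fps * record_seconds)
    records, current = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        current.append(frame)
        if len(current) == frames_per_record:
            records.append(current)
            current = []
    if current:
        records.append(current)                    # last, possibly shorter, record
    cap.release()
    return records

# records = split_into_records("ongoing_procedure.mp4")  # hypothetical file path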

Aspects of the disclosed embodiments may include accessing a schedule of a surgical operating room, including scheduled times associated with completion of an ongoing surgical procedure. In an example embodiment, accessing may include reading information from and/or writing information to the schedule. An example of such a schedule may include schedule 1430, or a data structure containing information similar to that described with respect to schedule 1430. For example, reading from and/or writing to schedule 1430 may include reading and/or writing any suitable data relating to past, present, or future surgical procedures, that is, procedures previously performed in the operating room, procedures in progress, or procedures scheduled to be performed, respectively. Such data may include: the name of the procedure, the surgeon performing the procedure, the name of the patient, any characteristic parameters associated with the patient and/or medical personnel, the start time (or estimated start time) of the procedure, and the completion time (or estimated completion time) of the procedure. In various embodiments, system 1410 may be used to read and/or write schedule 1430.

Various embodiments may also include: calculating, based on the estimated completion time of the surgical procedure in progress, whether the expected completion time is likely to result in a discrepancy relative to a scheduled time associated with completion; and outputting a notification when such a discrepancy is calculated, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly. For example, an estimated (also referred to as an expected) completion time for the surgical procedure in progress may be obtained using any of the methods discussed above (e.g., using the machine learning model and/or the linear regression model of historical surgical data described above). The expected completion time may be compared to an estimated completion time for the example medical procedure (e.g., estimated completion time 1523B as shown in fig. 15), and if the expected completion time does not substantially match time 1523B (e.g., the expected completion time is later or earlier than time 1523B), the method may be configured to calculate the difference between the expected completion time and time 1523B. If the difference is less than a predetermined threshold (e.g., the threshold may be one minute, several minutes, five minutes, ten minutes, fifteen minutes, and/or another time value), the method may determine that the expected completion time is substantially the same as time 1523B. Alternatively, if the difference is large enough (i.e., greater than the predetermined threshold), the method may determine, based on the estimated completion time of the surgical procedure in progress, that the expected completion time is likely to result in a discrepancy relative to the scheduled time associated with completion. In various embodiments, the estimated completion time may be a duration of time to complete the surgical procedure, and the expected completion time may be an expected time at which the surgical procedure will be completed.
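
A minimal sketch of the comparison just described: compute the difference between the expected completion time and the scheduled completion time (time 1523B) and report a discrepancy only when it exceeds a predetermined threshold; the date, time, and threshold values are illustrative.

from datetime import datetime, timedelta

def completion_discrepancy(expected_completion, scheduled_completion,
                           threshold=timedelta(minutes=5)):
    """Return the signed difference if it exceeds the threshold, otherwise None."""
    difference = expected_completion - scheduled_completion
    if abs(difference) <= threshold:
        return None        # expected completion substantially matches time 1523B
    return difference      # positive: running late; negative: finishing early

# Illustrative values for an ongoing procedure.
scheduled = datetime(2021, 3, 1, 12, 0)    # scheduled completion (time 1523B)
expected = datetime(2021, 3, 1, 12, 40)    # expected completion estimated from video
delta = completion_discrepancy(expected, scheduled)
if delta is not None:
    print(f"Notify subsequent users: schedule shift of {delta}.")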

In various implementations, if a discrepancy is detected, a notification may be output when the discrepancy is determined (e.g., by calculating the difference between the expected completion time and time 1523B). In an example embodiment, the notification may include an updated operating room schedule. For example, updates to schedule 1430 may include: a text update, a graphics update, or any other suitable update (e.g., video data, animation, or audio data). Additionally or alternatively, the notification may be implemented as a warning signal (e.g., a light signal, an audio signal, and/or another type of transmitted signal). In some cases, the notification may be an SMS message, an email, and/or another type of communication delivered to any suitable device (e.g., smartphones, laptops, pagers, desktops, televisions, and others previously discussed) owned by various users (e.g., various medical personnel, administrators, patients, relatives or friends of patients, and other individuals of interest). For example, the notification may be an electronic message sent to a device (as described earlier) associated with a subsequently scheduled user of the surgical operating room (e.g., a surgeon, anesthesiologist, and/or other healthcare professional). Such notifications may enable various users (e.g., users of the operating room) to adjust their schedules according to updates to the schedule. In various embodiments, the updated operating room schedule may enable queued healthcare professionals to prepare for subsequent surgical procedures. For example, if the expected completion time of the surgical procedure is later than the estimated completion time (e.g., time 1523B), a queued healthcare professional (e.g., surgeon, anesthesiologist, nurse, etc.) may delay preparing for the subsequent surgical procedure. Alternatively, if the expected completion time of the surgical procedure is earlier than time 1523B, a queued healthcare professional (e.g., surgeon, anesthesiologist, nurse, etc.) may begin preparing for the subsequent surgical procedure at an earlier time than previously scheduled.

Aspects of the disclosed embodiments may further include: determining a degree of the difference relative to the scheduled time associated with completion; in response to a first determined degree, outputting a notification; and in response to a second determined degree, forgoing outputting the notification. For example, if the first determined degree exceeds a predetermined threshold (e.g., exceeds five minutes, tens of minutes, and/or another time measure), some embodiments may determine that a difference of that degree may affect the scheduled times of other surgical procedures. In such cases, a notification of the difference may be sent to any suitable recipient (e.g., a healthcare provider managing subsequent surgical procedures). Alternatively, if the second determined degree is sufficiently small (e.g., less than the predetermined threshold), the embodiment may be configured not to send a notification.

Aspects of the disclosed embodiments may further include: it is determined whether the expected completion time is likely to result in a delay of at least a selected threshold amount of time relative to a scheduled time associated with completion. In some embodiments, such a determination may be made using a suitable machine learning model (such as model 1813 described above). The selected threshold amount may be any suitable predetermined amount (e.g., minutes, tens of minutes, half-hours, and/or other time measurement). For example, the selected threshold amount may be based on the operation of a surgical operating room. Additionally or alternatively, the selected threshold amount may be based on future events in a schedule of the surgical operating room. For example, if there are thirty minutes scheduled for the second surgery after the first surgery is completed, the threshold amount selected for the first surgery cannot exceed thirty minutes. Additionally or alternatively, the selected threshold amount of time may be selected based on a subsequent user of the surgical operating room. For example, if a subsequent user's surgery may require a large amount of pre-preparation, the selected threshold amount may be small enough (e.g., a few minutes). Alternatively, if the subsequent user's surgical procedure may not require a significant amount of priming and may be easily postponed or rescheduled, the selected threshold amount may be sufficiently large (e.g., thirty minutes, one hour, and/or other time measure). In some cases, the urgency or importance of the subsequent user's surgery may determine the selected threshold amount. For example, for an urgent follow-up surgery, an early notification may be required, thereby requiring a short selected threshold amount.

In response to determining that the expected completion time is likely to result in a delay of at least the selected threshold amount of time, the disclosed embodiments may include outputting a notification. As previously described, the notification may be any type of electronic or paper output produced by the system analyzing the completion time (such as system 1410 shown in FIG. 14). In an example embodiment, consistent with the disclosed embodiments, system 1410 may be configured to output the notification to a healthcare provider's device as an electronic message. In response to determining that the expected completion time is unlikely to result in a delay of at least the selected threshold amount of time, the method may be configured to forgo outputting the notification.

In some cases, the disclosed embodiments may also include determining whether the surgical procedure is likely to end prematurely (i.e., the expected completion time of the surgical procedure is shorter than the planned time for the surgical procedure). In response to determining that the expected completion time is likely to be less than the planned time for the surgical procedure by at least a selected threshold amount of time, an embodiment may be configured to output the notification and/or forgo outputting the notification.

Fig. 19 illustrates an example process 1901 of adjusting an operating room schedule consistent with the disclosed embodiments. At step 1911, the process may include receiving visual data from the image sensor. The visual data may include image/video data that tracks the surgical procedure in progress. In an example embodiment, the visual data may be collected by various image sensors. In some cases, two or more image sensors (e.g., cameras) may capture visual data of the same region of the surgery (e.g., a region of interest, or ROI) from different viewpoints. Additionally or alternatively, two or more image sensors may capture visual data of the ROI using different magnifications. For example, a first image sensor may capture an overview of the ROI, and a second image sensor may capture a close-up of the region near the surgical tool within the ROI.

At step 1913, the process 1901 may include accessing a data structure containing historical surgical data, as described above. At step 1915, the process 1901 may include: analyzing the visual data of the ongoing surgical procedure and the historical surgical data to determine an estimated time of completion of the ongoing surgical procedure. As previously described, the analysis may use statistical methods applied to the historical surgical data (e.g., calculating an average completion time for historical surgical procedures of the same type as the surgical procedure in progress and having characteristics similar to those of the surgical procedure in progress). Additionally or alternatively, the analysis may involve training and using machine learning methods to determine an estimated completion time of the surgical procedure in progress. In some cases, a plurality of different analysis methods may be used, and the estimated completion time may be determined as an average of the completion times obtained using the different analysis methods.

At step 1917, the process 1901 may include accessing a surgical operating room schedule using any suitable means. For example, accessing may include accessing via a wired or wireless network, via an input device (e.g., keyboard, mouse, etc.), or via any other device that allows data to be read from/written to a schedule.

At step 1919, the process 1901 may include: calculating whether the expected completion time is likely to result in a difference relative to the scheduled time associated with completion of the surgical procedure, as described above. If the difference is expected (YES in step 1921), the process 1901 may include outputting a notification at step 1923, as described above. After step 1923, process 1901 may be complete. If the difference is not expected (NO in step 1921), then process 1901 may be complete.
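
Tying the steps of process 1901 together, the sketch below is a hypothetical control loop that receives visual data, estimates the completion time, compares it against the schedule, and outputs a notification when a discrepancy is expected. Every callable passed into it is a placeholder for whichever concrete method from the preceding discussion is used; none of the names are defined by the disclosure.

from datetime import datetime, timedelta

def process_1901(capture_visual_data, estimate_completion_time,
                 scheduled_completion, threshold, send_notification):
    """Hypothetical end-to-end sketch of FIG. 19 (steps 1911 through 1923)."""
    visual_data = capture_visual_data()                    # step 1911: image sensor
    expected = estimate_completion_time(visual_data)       # steps 1913-1915
    discrepancy = expected - scheduled_completion          # steps 1917-1919
    if abs(discrepancy) > threshold:                       # step 1921
        send_notification(f"Expected schedule change: {discrepancy}")  # step 1923

# Illustrative stand-ins for the real components.
process_1901(
    capture_visual_data=lambda: [],                        # no frames in this toy run
    estimate_completion_time=lambda frames: datetime(2021, 3, 1, 12, 40),
    scheduled_completion=datetime(2021, 3, 1, 12, 0),
    threshold=timedelta(minutes=15),
    send_notification=print,
)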

Aspects of the disclosed embodiments that enable adjustment of an operating room schedule may include analyzing visual data, wherein the analysis may include: detecting a characteristic event in the received visual data, accessing information based on the historical surgical data to determine an expected completion time of the surgical procedure after occurrence of the characteristic event in the historical surgical data, and determining an estimated completion time based on the determined expected completion time. For example, a characteristic event may be detected in the received visual data, as described above. In some examples, the historical surgical data may include a data structure that relates characteristic events to expected completion times of the surgical procedure. For example, the historical surgical data may include a data structure that specifies a first time to complete the surgical procedure from a first event and a second time to complete the surgical procedure from a second event, which may be different from the first time. In addition, the detected characteristic event may be used to access the data structure to determine the time from the occurrence of the characteristic event to the completion of the surgical procedure.

In various embodiments, the characteristic events detected in the received visual data may refer to specific procedures or actions performed by a medical professional (e.g., by a surgeon, an anesthesiologist, a nurse, and/or another medical professional). For example, characteristic events of a laparoscopic cholecystectomy may include: trocar placement, Calot's triangle dissection, clipping and cutting of the cystic duct and artery, gallbladder dissection, gallbladder packaging, cleaning and coagulation of the liver bed, gallbladder retraction, and the like. In another example, characteristic events of cataract surgery may include: povidone-iodine injection, corneal incision, capsulorhexis, phacoemulsification, cortical aspiration, intraocular lens implantation, intraocular lens adjustment, wound closure, and the like. In yet another example, characteristic events of pituitary surgery may include: preparation, nasal incision, nasal retractor installation, tumor access, tumor removal, replacement of the nasal columella, suturing, nasal compression, and the like. Some other examples of characteristic events may include: incisions, laparoscope positioning, sutures, and the like. In this context, characteristic events may include: any event that occurs frequently during a particular stage of a surgical procedure, any event that often indicates a particular complication within a surgical procedure, or any event that occurs frequently in response to a particular complication within a surgical procedure. Some non-limiting examples of such characteristic events may include: use of a particular medical tool, performance of a particular action, injection of a particular substance, calling in a particular specialist, ordering a particular device, instrument, piece of equipment, medication, blood, blood product, or supply, a specific physiological response, and the like.

A characteristic event (also referred to as an intraoperative surgical event) may be any event or action that occurs during a surgical procedure or stage. In some embodiments, the intraoperative surgical event may include an action performed as part of a surgical procedure, such as an action performed by a surgeon, surgical technician, nurse, physician's assistant, anesthesiologist, physician, or any other medical professional. The intraoperative surgical event can be a planned event, such as an incision, administration of a drug, use of a surgical instrument, resection, ligation, implantation, suturing, stapling, or any other planned event associated with a surgical procedure or stage. In some embodiments, the intraoperative surgical event may include an adverse event or complication. Some examples of intraoperative adverse events may include: bleeding, mesenteric emphysema, injury, transition to unplanned open surgery (e.g., abdominal wall incision), an incision significantly larger than planned, and the like. Some examples of intraoperative complications may include: hypertension, hypotension, bradycardia, hypoxemia, adhesions, hernia, atypical dissection, dural tears, periodor injury, arterial infarction, and the like. Intraoperative events may include other errors, including: technical errors, communication errors, administrative errors, judgment errors, decision errors, errors related to medical device utilization, miscommunication, or any other errors.

In various embodiments, an event may be brief or may last for a duration of time. For example, a transient event (e.g., an incision) may be determined to occur at a particular time during a surgical procedure, while a prolonged event (e.g., bleeding) may be determined to occur over a certain time interval. In some cases, a prolonged event may include a well-defined start event and a well-defined end event (e.g., the start of suturing and the end of suturing), with the suturing as a whole being the prolonged event. In some cases, a prolonged event is also referred to as a phase of the surgical procedure.

The process of accessing information based on historical surgical data to determine an expected completion time of a surgical procedure after occurrence of a characteristic event in the historical surgical data may involve: analyzing, using suitable statistical methods, the completion times of historical surgical procedures that included the occurrence of the characteristic event. For example, the completion times may be analyzed to determine an average completion time for such procedures, and this average completion time may be used as the expected completion time for the surgical procedure. Some embodiments may include determining an estimated completion time (i.e., the time at which an example surgical procedure including the characteristic event will be completed) based on the determined expected completion time (i.e., the duration of time required to complete the surgical procedure).
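
As a concrete illustration of a data structure that relates characteristic events to the time remaining after their occurrence, the Python sketch below stores, for each event, the mean of the remaining durations observed in historical procedures and adds that expected remaining time to the time at which the event is detected. The event names and durations are invented for illustration; they do not come from the disclosure.

import statistics
from datetime import datetime, timedelta

# Hypothetical historical observations: minutes remaining after each
# characteristic event occurred in past procedures of the same type.
history = {
    "trocar placement": [95, 105, 110],
    "Calot's triangle dissection": [70, 75, 80],
    "gallbladder retraction": [20, 25, 22],
}

# Expected remaining duration after each event (mean of the historical values).
expected_remaining = {event: statistics.mean(minutes)
                      for event, minutes in history.items()}

def estimated_completion(event, event_time):
    """Estimated completion = time the event was detected + expected remaining time."""
    return event_time + timedelta(minutes=expected_remaining[event])

print(estimated_completion("Calot's triangle dissection", datetime(2021, 3, 1, 10, 30)))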

Embodiments of adjusting the operating room schedule may further comprise: training the machine learning model using historical visual data to detect feature events. In various embodiments, the machine learning model that identifies the feature (or features) may be trained via any suitable method, such as, for example, a supervised learning method. For example, historical visual data containing features corresponding to feature events may be provided as input data to the machine learning model, and the machine learning model may output the names of the feature events corresponding to the features within the historical visual data.

In various embodiments, detecting the feature events includes implementing a trained machine learning model. The trained machine learning model may be an image recognition model for recognizing a feature (or features) within the visual data that may be used as a trigger (or triggers) for a feature event. The machine learning model may identify features within one or more images or within a video. For example, features within a video may be identified to detect motion and/or other changes between frames of the video. In some embodiments, the image analysis may include an object detection algorithm, such as Viola-Jones object detection, Convolutional Neural Network (CNN), or any other form of object detection algorithm. Other example algorithms may include: a video tracking algorithm, a motion detection algorithm, a feature detection algorithm, a color-based detection algorithm, a texture-based detection algorithm, a shape-based detection algorithm, a boosting-based detection algorithm, a face detection algorithm, or any other suitable algorithm for analyzing video frames.
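
The sketch below shows, in hedged form, the general shape of applying a trained per-frame image recognition model to flag feature events in a stream of frames. The classify_frame callable is a hypothetical stand-in for whatever trained detector (e.g., a CNN or another detection algorithm listed above) is actually used, and the event labels are invented.

def detect_characteristic_events(frames, classify_frame, background_label="background"):
    """Run a trained per-frame classifier and report (frame index, event label) pairs.

    classify_frame stands in for a trained detector (e.g., a CNN) that maps a frame
    to an event label; it is assumed by this sketch, not provided by the disclosure.
    """
    events = []
    previous = background_label
    for index, frame in enumerate(frames):
        label = classify_frame(frame)
        if label != background_label and label != previous:
            events.append((index, label))   # report each event once, at its onset
        previous = label
    return events

# Toy run with integer "frames" and a fake classifier.
toy_frames = [0, 1, 2, 3, 4]
toy_classifier = lambda f: "incision" if f in (2, 3) else "background"
print(detect_characteristic_events(toy_frames, toy_classifier))  # [(2, 'incision')]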

In some cases, feature events may be classified as positive (i.e., events that result in positive results) and negative (i.e., events that result in negative results). Positive and negative results may have different effects on the estimated completion time.

In some cases, the image recognition model may be configured not only to recognize features within the visual data, but also to form conclusions about aspects of the surgical procedure in progress (or a historical surgical procedure) based on an analysis of the visual data (or historical visual data). For example, by analyzing visual data of an example surgical procedure, the image recognition model may be configured to determine a skill level of the surgeon, or to determine a measure of success of the surgical procedure. For example, if it is determined that no adverse events are present in the visual data, the image recognition model may assign a high level of success to the surgical procedure and update (e.g., increase) the surgeon's skill level. Alternatively, if an adverse event is detected in the visual data, the image recognition model may assign a low level of success to the surgical procedure and update (e.g., reduce) the surgeon's skill level. The algorithm that assigns a level of success to a surgical procedure and the process of updating the skill level of the surgeon may be based on a number of factors, such as the type of adverse event detected during an example surgical procedure, the likelihood of an adverse event during the surgical procedure given specific characteristics of the patient (e.g., the age of the patient), the average number of adverse events for historical surgical procedures of the same type performed on patients having similar patient characteristics, the standard deviation relative to that average number of adverse events, and/or other indicators of adverse events.

In some cases, the process of analyzing the visual data may include determining a skill level of the surgeon in the visual data, as discussed above. In some cases, calculating the estimated completion time may be based on the determined skill level. For example, an estimated completion time may be determined for each determined skill level of the surgical procedure. In an example embodiment, such estimated completion times may be based on historical completion times corresponding to historical surgical procedures performed by surgeons having the determined skill level. For example, the average historical completion time calculated for the historical completion times described above may be used to determine the estimated completion time. Such estimated completion times may be stored in a database and may be retrieved from the database based on the determined skill level.

Using a machine learning model to detect feature events is one possible approach. Additionally or alternatively, various other methods may be used to detect feature events in the visual data received from the image sensor. In one embodiment, the characteristic events may be identified by a medical professional (e.g., a surgeon) during the surgical procedure. For example, the surgeon may identify a characteristic event using a visual or audio signal (e.g., a gesture, a body posture, a visual signal generated by a light source produced by a medical instrument, spoken words, or any other signal) that may be captured by one or more image/audio sensors and identified as a trigger for the characteristic event.

In various embodiments, enabling adjustment of the operating room schedule may include: analyzing historical completion times of the surgical procedure after occurrence of the characteristic event in the historical visual data. For example, embodiments may include: calculating an average historical completion time for the surgical procedure after the occurrence of the characteristic event in the historical visual data and using it as the estimated completion time for the surgical procedure in progress. However, in some cases, the estimated completion time may be calculated using other methods discussed above (e.g., using machine learning methods), and the average historical completion time may be updated based on the determined actual completion time of the ongoing surgical procedure (as determined after completion of the ongoing surgical procedure). In various embodiments, the average historical completion time may first be updated using the estimated completion time, and the update may then be finalized after the surgical procedure is completed.
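
As a minimal sketch of updating the average historical completion time once the actual completion time of the ongoing procedure becomes known, the following keeps a running count and mean; the starting values are illustrative assumptions.

class CompletionTimeAverage:
    """Running mean of historical completion times after a given characteristic event."""

    def __init__(self, mean_minutes, count):
        self.mean = mean_minutes
        self.count = count

    def update(self, actual_minutes):
        """Fold in the actual completion time observed for the procedure just finished."""
        self.count += 1
        self.mean += (actual_minutes - self.mean) / self.count
        return self.mean

# Illustrative: 40 past procedures averaged 75 minutes after the event; the
# ongoing procedure actually took 90 minutes after the same event.
average = CompletionTimeAverage(mean_minutes=75.0, count=40)
print(average.update(90.0))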

Additionally or alternatively, analyzing the historical completion times after the occurrence of the characteristic event to estimate the completion time may include using a machine learning model. The machine learning model may be trained using training examples to estimate a completion time after an event occurs, and the trained machine learning model may be used to estimate the completion time based on the occurrence of feature events. Examples of such training examples may include indications of feature events together with labels indicating the desired estimated completion time. In one example, the training examples may be based on historical surgical data, e.g., representing the actual completion times of historical surgical procedures after characteristic events occurred in those procedures. In another example, the training examples may be based on user input, may be received from an external system, and so on. The machine learning model may also be trained such that the estimation of the completion time is based on other input parameters, such as various features of the patient, various features of the medical personnel, the type of surgical procedure being administered to the patient (e.g., parameters 1811 as shown in fig. 18), and one or more feature events occurring during the surgical procedure. Further, such input parameters may be provided to the trained machine learning model to estimate the completion time.

As previously mentioned, embodiments of the present disclosure may include a system, process, or computer-readable medium for analyzing visual data and historical surgical data of an ongoing surgical procedure to determine an estimated time of completion of the ongoing surgical procedure. In an example embodiment, the analysis may include determining the estimated completion time based on an analysis of historical completion times. The estimate of the completion time may be determined using any suitable method, such as a machine learning method (as described above), or by calculating an average historical completion time for the surgical procedure and using that average historical completion time as the estimated completion time.

Aspects of embodiments for enabling adjustment of an operating room schedule may further include: detecting medical tools in the visual data, and calculating the estimated completion time based on the detected medical tools. The medical tool (also referred to as a surgical tool) may be one of the characteristic parameters of the surgical procedure (such as the parameters P1 through PN shown in fig. 18) that may have an effect on the calculated estimated completion time of the surgical procedure. As discussed above, in an example embodiment, a machine learning method may be used to calculate the estimated completion time based on various parameters P1 through PN (e.g., the type of medical tool used during the surgical procedure). Moreover, detection of the medical instrument in the visual data tracking the surgical procedure in progress may be accomplished using any suitable method (e.g., using a suitable image recognition algorithm as described above). In one example, a first completion time may be estimated in response to detecting a first medical tool, and a second completion time may be estimated in response to detecting a second medical tool, and the second completion time may be different from the first completion time. In another example, a first completion time may be estimated in response to detecting a first medical tool, and a second completion time may be estimated in response to not detecting a medical tool, and the second completion time may be different from the first completion time.

In some cases, embodiments for analyzing video data may further include: detecting anatomical structures in the visual data, and calculating the estimated completion time based on the detected anatomical structures. The anatomical structure may be detected and identified in the visual data using an image recognition algorithm. Additionally or alternatively, the anatomical structure may be identified by a healthcare professional during the ongoing surgical procedure (e.g., the healthcare professional may use gestures, sounds, words, and/or other signals to identify the anatomical structure). Visual data depicting the anatomical structure during the surgical procedure in progress may be used to calculate the estimated completion time. Such visual data may be used as input to a machine learning method, for example, to obtain an estimated completion time. In one example, a first completion time may be estimated in response to detecting a first anatomical structure, and a second completion time may be estimated in response to detecting a second anatomical structure, and the second completion time may be different from the first completion time. In another example, a first completion time may be estimated in response to detecting the first anatomical structure, and a second completion time may be estimated in response to not detecting an anatomical structure, and the second completion time may be different from the first completion time.

Aspects of embodiments for analyzing video data may include: detecting, in the visual data, an interaction between the anatomical structure and the medical tool, and calculating the estimated completion time based on the detected interaction. For example, as described above, an interaction between the anatomical structure and the medical tool may be detected. The interaction may include any action of the medical tool that may affect the anatomical structure, and vice versa. For example, the interaction may include contact between the medical tool and the anatomical structure, an action of the medical tool on the anatomical structure (such as cutting, clamping, grasping, applying pressure, scraping, etc.), a physiological response of the anatomical structure, light emitted by the medical tool toward the anatomical structure (e.g., the medical tool may be a laser emitting light toward the anatomical structure), sound emitted toward the anatomical structure, an electromagnetic field generated near the anatomical structure, a current induced in the anatomical structure, or any other suitable form of interaction. In one example, a first completion time may be estimated in response to detecting a first interaction between the anatomical structure and the medical tool, and a second completion time may be estimated in response to detecting a second interaction between the anatomical structure and the medical tool, and the second completion time may be different from the first completion time. In another example, a first completion time may be estimated in response to detecting a first interaction between the anatomical structure and the medical tool, and a second completion time may be estimated in response to not detecting an interaction between the anatomical structure and the medical tool, and the second completion time may be different from the first completion time.

Visual data depicting the anatomy and medical tool of the surgical procedure in progress may be used to calculate an estimated completion time. For example, such visual data may be used as input to a machine learning method to obtain an estimated completion time, e.g., as described above.

As previously discussed, the present disclosure relates to methods and systems for enabling adjustment of an operating room schedule, as well as to a non-transitory computer-readable medium that may contain instructions that, when executed by at least one processor, cause the at least one processor to perform operations enabling adjustment of an operating room schedule; such operations may include the various steps of the methods for enabling adjustment of an operating room schedule described above.

The disclosed systems and methods may involve analyzing surgical clips to identify characteristics of the surgical procedure, the patient's condition, and other characteristics relevant to determining insurance claims. Insurance claims may need to be determined for various steps of the surgical procedure. Steps of a surgical procedure may need to be identified, and insurance claim criteria may need to be associated with the identified steps. Thus, there is a need to identify steps of a surgical procedure using information obtained from surgical clips, and to associate insurance claims with those steps.

Aspects of the present disclosure may relate to methods, systems, apparatuses, and computer-readable media for analyzing surgical images to determine insurance claims. For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method of analyzing a surgical image to determine an insurance claim may include: accessing video frames taken during a surgical procedure performed on a patient. Embodiments for analyzing the surgical image may include determining, using any suitable method (e.g., a machine learning method), a stage of the surgical procedure, an event during the surgical procedure, the anatomical structure being operated on, a surgical instrument used during the surgical procedure, an interaction of the surgical instrument with the anatomical structure, a motion of the surgical instrument, a movement of the anatomical structure, a deformation of the anatomical structure, a color change of the anatomical structure, a leak (e.g., bleeding) from the anatomical structure, an incision within the anatomical structure, or any other change of the anatomical structure (e.g., rupture of the anatomical structure) during the example surgical procedure.

In various embodiments, an insurance claim may include information about how much an insurance company and/or insurance plan (such as a government health insurance plan) may pay for a given surgical procedure or a segment (portion) thereof. For example, insurance claims may cover the costs associated with all or some of the segments of the surgical procedure. A surgical segment may correspond to a portion of a surgical clip of the surgical procedure. In some cases, an insurance claim may cover the entire cost associated with a certain segment of the surgical procedure, and in other cases, the insurance claim may partially cover the cost associated with a certain segment of the surgical procedure. Depending on the type of surgery (e.g., if the surgery is elective for the patient), the insurance claim may not cover the costs associated with a certain segment (or the entirety) of the surgery. In other examples, different patients and/or different surgeries (or different actions associated with a surgery) may have different means of claim settlement (e.g., different criteria for claim settlement) based on the condition of the patient and/or the characteristics of the surgery.

In some embodiments, accessing video frames taken during a surgical procedure may include: accessing a database (e.g., database 1411 as shown in fig. 14) through a suitable computer-based software application. For example, the database may be configured to store video frames taken during various surgical procedures, and may be configured to store any other information related to the surgical procedures (e.g., comments from the surgeon performing the surgical procedure, or vital signs collected during the surgical procedure). As described herein, a surgical procedure may include any medical procedure associated with or involving a manual activity or a surgical activity performed on a patient's body.

Consistent with the disclosed embodiments, video frames taken during a surgical procedure are analyzed to identify, in the video frames, at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure, e.g., as described above. In various embodiments, analyzing video frames taken during a surgical procedure may include using image recognition, as discussed herein. When analyzing a surgical clip, at least some frames may capture anatomical structures (also referred to herein as biological structures). Such portions of the surgical clip may include one or more medical instruments (as described herein) interacting with one or more anatomical structures.

Medical instruments and anatomical structures are identifiable in a surgical clip using image recognition, as described in this disclosure and consistent with various disclosed embodiments. The interaction between the medical instrument and the anatomical structure may comprise any action of the medical instrument that may affect the anatomical structure, and vice versa. For example, the interaction may include contact between the medical instrument and the anatomical structure, an action of the medical instrument on the anatomical structure (such as cutting, clamping, grasping, applying pressure, scraping, etc.), a physiological response of the anatomical structure, emission of light by the medical instrument toward the anatomical structure (e.g., the surgical tool may be a light-emitting laser), sound emitted toward the anatomical structure, an electromagnetic field near the anatomical structure, a current induced in the anatomical structure, or any other form of interaction.

In some cases, detecting the interaction may include identifying a proximity of the medical instrument to the anatomical structure. For example, by analyzing a surgical video clip, the distance between the medical instrument and a point (or set of points) of the anatomy may be determined by image recognition techniques as described herein.

Aspects of the disclosed embodiments may further include: accessing a database of claim criteria that is associated with the medical instrument, the anatomical structure, and an interaction between the medical instrument and the anatomical structure. For example, the association of claim criteria with one or more medical instruments, one or more anatomical structures, and one or more interactions between the medical instruments and anatomical structures may be represented in a data structure (such as one or more tables, linked lists, XML data, and/or other forms of formatted and/or stored data). In some implementations, the associations can be established by criteria-generating machine learning models. In various instances, the claim criteria can be stored in a data structure along with information about how the claim criteria relate to the medical instrument, the anatomical structure, and the interaction between the medical instrument and the anatomical structure.

Fig. 20 shows an example of a data structure 2001 for providing information on how claim criteria relate to medical instruments, anatomical structures, and interactions between medical instruments and anatomical structures. For example, data structure 2001 may include a plurality of tables, such as tables 2011, 2013, and 2015. In various embodiments, an example table may include records (e.g., rows) and fields (e.g., columns). For example, table 2011 may have a field named "record" that contains a record label (e.g., "1" as shown in fig. 20). For each record, a field named "criteria" may contain the claim criterion (e.g., criterion "1.20:11.30.50"), a field named "surgical section" may contain the number, and possibly the name, of a certain section of the surgical procedure (e.g., "1, incision, bypass"), a field named "first instrument" may contain the number, and possibly the name, of a first medical instrument used during that section of the surgical procedure (e.g., "20, scalpel"), a field named "second instrument" may contain the number, and possibly the name, of a second medical instrument used during that section (if such an instrument is present) (e.g., "11, forceps"), and a field named "other data" may contain any other relevant data that may also be used to characterize the surgical procedure or a section thereof (e.g., such data may include the duration of the section of the surgery, a sequence of events during the section, a sequence of instruments used during the surgery (e.g., "scalpel -> forceps" may indicate that a scalpel was used before forceps), and/or other features of the section). The example table 2013 may contain other relevant fields, such as a field named "first anatomy", which may contain the number, and possibly the name, of the anatomical structure associated with record "1" (e.g., "30, internal mammary artery"), as labeled by the field named "record" in table 2013. Further, the example table 2015 may include a field named "record" for identifying records, and a field named "interaction", which may contain a description of an interaction between the medical instrument and the anatomical structure, represented by a number and possibly a name (e.g., "50, left internal mammary artery incision"). In addition, table 2015 may include a field named "interaction data" that may include links to image data 2012A, video data 2012B, text data 2012C, and/or audio data 2012D, as shown in table 2015.

In various embodiments, the claim criteria can have an internal data structure, as shown by structure 2020. For example, a first number of the claim criterion can be a number associated with a section of a surgical procedure (e.g., number "1"), a second set of numbers can be associated with surgical instruments used during that section of the surgical procedure (e.g., numbers "20:11" can be associated with a first instrument labeled "20" and a second instrument labeled "11"), a third set of numbers can be associated with the anatomical structure being operated on (e.g., "30"), and a fourth set of numbers can be associated with the interaction of the instruments and the anatomical structure (e.g., "50"). In different examples, the claim criteria may be set by an insurance plan or regulatory body. In some examples, a single claim criterion may be associated with an entire surgical procedure.
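
To make internal structure 2020 concrete, the sketch below composes and parses a claim criterion of the form "1.20:11.30.50" into its four parts (surgical section, instruments, anatomical structure, interaction). The delimiters follow the example shown in structure 2020; the helper functions themselves are illustrative assumptions, not part of the disclosure.

def parse_claim_criterion(code):
    """Split a criterion like '1.20:11.30.50' into its four parts per structure 2020."""
    section, instruments, anatomy, interaction = code.split(".")
    return {
        "surgical_section": int(section),
        "instruments": [int(i) for i in instruments.split(":")],
        "anatomical_structure": int(anatomy),
        "interaction": int(interaction),
    }

def compose_claim_criterion(section, instruments, anatomy, interaction):
    """Inverse of parse_claim_criterion."""
    return f"{section}.{':'.join(str(i) for i in instruments)}.{anatomy}.{interaction}"

print(parse_claim_criterion("1.20:11.30.50"))        # section 1, instruments [20, 11], ...
print(compose_claim_criterion(1, [20, 11], 30, 50))  # '1.20:11.30.50'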

Determining claim criteria based on the medical instrument, the anatomical structure, and the interaction of the medical instrument with the anatomical structure using the data structure may be one possible approach. Additionally, a criteria-generating machine learning method can be used to determine claim criteria for a surgical procedure or a segment thereof. For example, the criteria-generating machine learning method may take a section of a surgical clip as input and output the claim criteria for the section of the surgical procedure represented by that section of the surgical clip. In various implementations, the criteria-generating machine learning method may be a collection of various machine learning methods configured for various tasks. For example, the criteria-generating machine learning method may include a first image recognition algorithm for identifying a medical instrument in a section of the surgical clip and a second image recognition algorithm for identifying an anatomical structure in the section of the surgical clip. In various embodiments, the image recognition algorithm may be any suitable algorithm (e.g., a neural network) as described herein and consistent with various disclosed embodiments.

The disclosed embodiments may also include comparing the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure to information in a claim criteria database to determine at least one claim criterion associated with the surgical procedure. For example, embodiments may include comparing the identified interactions with various details about interactions stored in a database. Thus, as an example, a machine learning model (e.g., an image recognition algorithm) may be configured to recognize and classify interactions within a surgical clip (e.g., interactions may be classified by assigning names to the interactions or determining types of interactions). For example, the name or type of interaction may be "incision of left internal mammary artery". In some embodiments, the machine learning model may be configured to analyze the surgical clip and select the most appropriate interaction from a list of possible interactions. Once the interaction is identified, its name (or other identification of the interaction) may be compared to the identifications of interactions stored in the database, and the database may be used to find claim criteria corresponding to the identified interaction, or to a surgical procedure that includes the identified interaction.

Identifying interactions using machine learning algorithms is one possible approach. Additionally or alternatively, the interaction may be identified by the surgeon performing the surgical procedure, a nurse practitioner present during the surgical procedure, and/or other healthcare professional. For example, an interaction may be identified by selecting a segment of the surgical clip that corresponds to the interaction and assigning a name that may tag the segment. In various embodiments, a computer-based software application may be used to perform various manipulations of a section of a surgical clip (such as assigning a name tag to a different section, selecting a different section, and/or other data operations). The computer-based software application may be configured to store relevant data (e.g., name tags for different sections of the surgical short, and start and finish times for the sections of the surgical short) in a database.

Various embodiments may also include: outputting at least one claim criterion for use in obtaining insurance claims for the surgical procedure. For example, the criteria-generation machine learning model can be used to output at least one claim criterion, as described above. Alternatively, the claim criteria may be output by querying a database that contains claim criteria corresponding to interactions of medical instruments with anatomical structures.

In some cases, outputting the claim criteria may include: the claim criteria is transmitted to the insurance provider using any suitable transmission method consistent with the disclosed embodiments and discussed herein.

In some cases, the at least one output claim criterion includes a plurality of output claim criteria. For example, the plurality of claim criteria can correspond to one or more segments of a surgical procedure. In one embodiment, the first claim criterion may correspond to incision related sections and the second claim criterion may correspond to suture related sections, for example. In some cases, the plurality of claim criteria may correspond to a plurality of medical instruments for performing one or more surgical actions during a segment of a surgical procedure. When there is more than one surgeon (or any other healthcare professional) during a surgical procedure, multiple claim criteria may be determined for the procedure performed by each surgeon. When more than one claimable procedure is performed in a single segment, more than one claim criterion can be output for that single segment.

In an example implementation, at least two of the plurality of output claim criteria may be based on different interactions with a common anatomical structure. For example, the first interaction may include a first medical instrument interacting with the anatomical structure, and the second interaction may include a second medical instrument interacting with the anatomical structure. In some cases, the same instrument may be used for different types of interaction with the anatomical structure (e.g., forceps may be used to interact with the anatomical structure in different ways).

In some embodiments, the at least two output claim criteria are determined based in part on the detection of two different medical instruments. For example, the first and second medical instruments may be detected in the surgical footage using any suitable method (e.g., using a suitable machine learning method or using information from a healthcare provider). The first and second medical instruments may be used simultaneously, and in some cases, the second medical instrument may be used after the first medical instrument is used. The use of the first medical instrument may partially overlap (in time) with the use of the second medical instrument. In such cases, two or more claim criteria may be output regardless of whether the medical instruments triggering the criteria are used at the same time or at different times.

In various embodiments, determining at least two claim criteria may be based on analysis of post-operative surgical reports. For example, to determine the claim criteria for a particular segment of a surgical procedure, a post-operative surgical report may be consulted to obtain information about that segment of the surgical procedure. Any information related to the segment of the surgical procedure and/or information obtained from post-operative reports may be used to determine claim criteria (e.g., events occurring during the segment, surgical instruments used, anatomical structures undergoing surgery, interactions of surgical instruments with anatomical structures, imaging performed, various measurements performed, the number of surgeons involved, and/or other surgical actions).

In various embodiments, video frames of a surgical short can be taken from an image sensor located above a patient, as described herein and consistent with various described embodiments. For example, image sensors 115, 121, 123, and/or 125 as described above in connection with fig. 1 may be used to capture video frames of a surgical short. Additionally or alternatively, video frames may be taken from an image sensor associated with a medical device, as described herein and consistent with various described embodiments. Fig. 3 illustrates one example of a medical device having an associated image sensor, as described herein.

Embodiments of analyzing the surgical image to determine insurance claims may include: updating the database by associating at least one claim criterion with the surgical procedure. The database may be updated in any suitable manner (e.g., using a machine learning model, by sending the appropriate data to the database, by SQL commands, by writing information to memory, etc.). For example, as described above, surgical footage of a surgical procedure may be analyzed to determine various sections of the surgical procedure that may be associated with claim criteria. Once the claim criteria are determined, the criteria can be associated with the surgical procedure and configured to be stored in a data structure. The data structure may take any form or structure so long as it is capable of storing and retaining data. As one example, the data structure may be a relational database and include a table having table fields that store information about the surgical procedure (e.g., example table fields may include the name of the surgical procedure) and that store the claim criteria associated with the surgical procedure.
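
A brief sketch of this update step under the same illustrative SQLite setup used earlier; the `procedures` table is a hypothetical addition, not part of data structure 2001.

```python
def associate_claim_criterion(conn, procedure_name: str, claim_criterion: str) -> None:
    """Store a (procedure, claim criterion) association in a relational data structure."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS procedures ("
        "  procedure_name TEXT,"
        "  claim_criterion TEXT)"
    )
    conn.execute(
        "INSERT INTO procedures (procedure_name, claim_criterion) VALUES (?, ?)",
        (procedure_name, claim_criterion),
    )
    conn.commit()

# e.g. associate_claim_criterion(conn, "coronary artery bypass", "1.20:11.30.50")
```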

Various embodiments may include: generating a correlation between the processed claim criteria and at least one of a plurality of medical instruments in the historical video footage, a plurality of anatomical structures in the historical video footage, or a plurality of interactions between medical instruments and anatomical structures in the historical video footage; and updating the database based on the generated correlations. In an exemplary embodiment, the correlations may be generated using any suitable means, such as using machine learning methods and/or using input by healthcare professionals, healthcare administrators, and/or other users. The correlations may be represented by tables as described above (e.g., tables 2011 to 2015 shown in fig. 20). In some cases, a correlation for processed claim criteria may be generated (e.g., claim criteria related to a portion of a historical surgical procedure for which the patient's health insurance company has previously reimbursed a healthcare provider). For example, historical surgical data (e.g., historical surgical footage) may be analyzed (e.g., using machine learning methods) to determine one or more medical instruments in the historical video footage, one or more anatomical structures in the historical video footage, or one or more interactions between medical instruments and anatomical structures in the historical video footage. Provided that a segment of the historical surgery has associated processed claim criteria (e.g., the processed claim criteria were assigned to the segment of the historical surgery using any suitable method available in the past, such as input from a healthcare provider), the processed claim criteria can be correlated with information obtained from the historical surgical data (e.g., information about medical instruments, anatomical structures, and interactions between medical instruments and anatomical structures identified in the historical surgical data).

In various embodiments, a machine learning method for generating correlations may be trained, as discussed in this disclosure. Historical surgical data may be used as part of the training process. For example, historical surgical footage for a given segment of a surgical procedure may be provided as input to a machine learning model, which then determines claim criteria. The determined claim criteria can be compared to the processed claim criteria for the given segment of the surgical procedure to determine whether the machine learning model outputs a correct prediction. Various parameters of the machine learning model may be modified, for example, using a back-propagation training process.
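
The following is a minimal training sketch of this comparison-and-back-propagation loop, assuming each historical segment has already been reduced to a fixed-length feature vector and that claim criteria are drawn from a finite list; the network architecture and dimensions are arbitrary placeholders.

```python
import torch
import torch.nn as nn

NUM_FEATURES, NUM_CRITERIA = 128, 500  # placeholder dimensions
model = nn.Sequential(nn.Linear(NUM_FEATURES, 256), nn.ReLU(), nn.Linear(256, NUM_CRITERIA))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(features: torch.Tensor, processed_criteria: torch.Tensor) -> float:
    """One back-propagation step comparing predictions with the processed claim criteria."""
    optimizer.zero_grad()
    logits = model(features)                    # predicted claim-criteria scores
    loss = loss_fn(logits, processed_criteria)  # compare with processed (ground-truth) criteria
    loss.backward()                             # back-propagate the error
    optimizer.step()                            # update model parameters
    return loss.item()

# e.g. train_step(torch.randn(32, NUM_FEATURES), torch.randint(0, NUM_CRITERIA, (32,)))
```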

In various embodiments, as discussed herein, historical video frames may be used to train any suitable machine learning model for various tasks based on information contained within the video frames (i.e., any suitable image-based information). As previously discussed, the machine learning model may detect at least one of a medical tool, an anatomical structure, or an interaction between the medical tool and the anatomical structure. Once the model identifies the correlations, the correlations may be extrapolated to the current video undergoing analysis.

In some cases, generating the correlation may include implementing a statistical model. For example, processed historical claim criteria for similar segments of historical surgeries may be analyzed to determine correlations. Correlations may be between claim criteria and various aspects of a segment of a surgical procedure. The surgical section may be characterized by a medical instrument, an anatomical structure, and an interaction between the medical instrument and the anatomical structure. If different processed claim criteria are used for such similar sections, a correlation can be generated by evaluating the most likely claim criterion that should be used. For example, if for a certain segment of a given type of historical surgery, the processed claim criterion C1 is used 100 times, the processed claim criterion C2 is used 20 times, and the processed claim criterion C3 is used 10 times, the claim criterion C1 can be selected as the most likely claim criterion that should be used.
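
A small sketch of the frequency-based selection described above; it simply picks the processed claim criterion used most often for similar historical segments.

```python
from collections import Counter
from typing import List

def most_likely_claim_criterion(processed_history: List[str]) -> str:
    """Pick the claim criterion used most often for similar historical segments.

    e.g. 100 uses of 'C1', 20 of 'C2' and 10 of 'C3' -> 'C1'.
    """
    return Counter(processed_history).most_common(1)[0][0]

history = ["C1"] * 100 + ["C2"] * 20 + ["C3"] * 10
assert most_likely_claim_criterion(history) == "C1"
```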

In some cases, when the processed claim criteria are different for the same (or similar) segments of the historical surgical procedure, features of the segments can be analyzed to determine whether some difference in the features of the segments can be responsible for the difference in the processed claim criteria. In various embodiments, the difference in characteristics of the segments of the historical surgery may be correlated with the difference in the processed claim criteria (as measured using any suitable statistical method).

In various embodiments, after generating the correlations as described above, the database may be updated based on the generated correlations. For example, for a given medical instrument interacting with a given anatomical structure, the expected claim criteria (or, in some cases, a set of possible claim criteria) may be associated with and stored in a database. A particular claim criterion of a set of possible claim criteria can be further narrowed down based on characteristics associated with the surgical section identified in the surgical short.

Additionally or alternatively, the disclosed embodiments can include receiving processed claim criteria associated with the surgical procedure, and updating the database based on the processed claim criteria. The processed claim criteria may be provided by a healthcare provider, a healthcare administrator, and/or other user. Alternatively, as discussed herein, the processed claim criteria can be provided via a machine learning method for analyzing historical surgical procedures and identifying the processed claim criteria used for the historical surgical procedures. In various implementations, the processed claim criteria can be different from at least one of the output claim criteria. This may occur after the correct code is manually identified by a healthcare professional, or after further machine learning analysis to determine more accurate claim criteria candidates.

As previously described, some embodiments may include: detecting the at least one of a plurality of medical instruments, a plurality of anatomical structures, or a plurality of interactions between medical instruments and anatomical structures in the historical video snippets using a machine learning model. As described herein, the machine learning method may be any suitable image recognition method trained to recognize one or more medical instruments, anatomical structures, and interactions between instruments and structures. In an example embodiment, the machine learning method may employ a plurality of image recognition algorithms, and each algorithm is trained to recognize a specific medical instrument or a specific anatomical structure.

Aspects of the disclosed embodiments may further include: video frames taken during a surgical procedure are analyzed to determine a condition of an anatomical structure of a patient, and at least one claim criterion associated with the surgical procedure is determined based on the determined condition of the anatomical structure. For example, a procedure performed on an anatomical structure in poor condition may justify a higher claim than a procedure performed on an anatomical structure in better condition. In an example embodiment, a machine learning approach may be used to determine the condition of the patient's anatomy based on information obtained from various sensors. The condition of the anatomical structure may be determined based on observed visual features of the anatomical structure, such as size, color, shape, translucency, surface reflectivity, fluorescence, and/or other image features. The condition may also be based on one or more of the following: temporal features of the anatomical structure (motion, shape changes, etc.), acoustic features (e.g., sounds transmitted through or produced by the anatomical structure), imaging of the anatomical structure (e.g., using X-rays, magnetic resonance, and/or otherwise), or electromagnetic measurements of the structure (e.g., electrical conductivity of the anatomical structure, and/or other properties of the structure). Image recognition may be used to determine the condition of the anatomical structure. Additionally or alternatively, other specialized sensors (e.g., magnetic field sensors, resistive sensors, acoustic sensors, or other detectors) may be used for condition determination.

In various embodiments, when determining the condition of the anatomical structure, the claim criteria may be identified, for example, using a suitable machine learning model. For example, the machine learning model may take the condition of the anatomical structure as one possible parameter for determining one or more claim criteria. Fig. 21 illustrates an example system 2101 for determining one or more claim criteria (e.g., criteria 2137 as schematically illustrated in fig. 21). In an example embodiment, the surgical footage 2111 may be processed by a machine learning method 213, and the method 213 may identify the medical instrument 2116, the anatomical structure 2118, interactions 2120 of the medical instrument with the anatomical structure, and various parameters 2122 (also referred to herein as characteristics or features), such as parameters C1-CN describing the instrument 2116, the anatomical structure 2118, the interactions 2120, and any other information that may affect claim criteria. The example parameter C1 may be the size of an incision, the parameter C2 may be the condition of the anatomy (e.g., a size, color, shape, and/or other image characteristic of the anatomy), and the parameter CN may be the location of the interaction between the example medical instrument and the example anatomical structure. Information about the medical instrument 2116, the anatomy 2118, the interactions 2120, and the parameters 2122 may be used as input 2110 to a computer-based software application, such as a machine learning model 2135. The model 2135 can process the input 2110 and output one or more claim criteria associated with a section of the surgical procedure having the information described by the input 2110.
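
A hedged sketch of assembling input 2110 for a model such as 2135; the numeric encodings of the instrument, anatomy, interaction, and parameters C1-CN are assumptions, and `model_2135` is a hypothetical trained model rather than a component defined by the disclosure.

```python
from typing import Dict, List

def build_model_input(instrument_id: int, anatomy_id: int, interaction_id: int,
                      parameters: Dict[str, float]) -> List[float]:
    """Flatten the detected instrument, anatomy, interaction and parameters C1..CN
    into a single feature vector (input 2110) for the claim-criteria model."""
    # Hypothetical encodings: numeric IDs plus ordered parameter values such as
    # C1 = incision size (mm), C2 = anatomy condition score, ..., CN = interaction location.
    return [instrument_id, anatomy_id, interaction_id] + [
        parameters[key] for key in sorted(parameters)
    ]

features = build_model_input(
    instrument_id=20, anatomy_id=30, interaction_id=50,
    parameters={"C1": 12.0, "C2": 0.7, "CN": 3.0},
)
# claim_criteria = model_2135.predict([features])   # model_2135 is an assumed trained model
```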

In some of these embodiments, analyzing the surgical image to determine an insurance claim may include: analyzing video frames taken during a surgical procedure to determine changes in a condition of a patient's anatomy during the surgical procedure; and determining the at least one claim criterion associated with the surgical procedure based on the determined change in condition of the anatomical structure. The process of analyzing the video frames to determine changes in the condition of the patient's anatomy may be performed using any suitable machine learning method. For example, a change in the condition of the anatomical structure may include a change in the shape, color, size, location, and/or other image characteristics of the anatomical structure. Such variations may be determined by image recognition algorithms as described herein and consistent with various described embodiments. The image recognition algorithm may identify an anatomical structure in a first set of frames of a surgical procedure, identify an anatomical structure in a second set of frames of the surgical procedure, and evaluate whether the anatomical structure changes from the first set of frames to the second set of frames. If a change is observed, the image recognition algorithm may define the change by assigning a change correlation identifier. As a few examples, the change-related identifier can be a string "remove tumor," "remove appendix," "remove blocked carotid artery," and/or other data describing the change. The change-related identifier may be selected from a preconfigured list of identifiers and may include one of the parameters of the surgical procedure (such as parameters C1 through CN as shown in fig. 21) that is used as an input to a machine learning model (e.g., model 2135) to output a claim criterion (e.g., criterion 2137). In this manner, claim criteria may be associated with a surgical procedure based on the determined change in condition of the anatomical structure.

The disclosed embodiments may further include: analyzing video frames taken during a surgical procedure to determine usage of a particular medical device; and determining at least one claim criterion associated with the surgical procedure based on the determined usage of the particular medical device. The use of certain medical instruments may affect claim criteria. For example, the detection of certain disposable medical devices may trigger a claim for those devices. Or the use of expensive imaging machines (MRI, CT, etc.) may trigger a claim for using the device. Moreover, the use of certain devices, regardless of their cost, can be associated with complexity and, thus, cost of the procedure.

Some embodiments may further include: analyzing video frames taken during a surgical procedure to determine the type of particular medical device used; and in response to the first determined type of use, determining at least a first claim criterion associated with the surgical procedure; and in response to the second determined type of use, determining at least a second claim criterion associated with the surgical procedure, the at least first claim criterion being different from the at least second claim criterion. The type of use may be any technique or manipulation of the medical device, such as incision making, imaging, suturing, surface treatment, radiation therapy, chemotherapy, cutting, and/or other treatment modalities. In various embodiments, the usage type may be analyzed by analyzing video frames (i.e., surgical clips) taken during a surgical procedure.

Consistent with various embodiments described herein, detection of the type of use may be made by image recognition, as previously discussed. In some cases, the position of the device relative to the anatomy may be used to determine the interaction of the medical device with the anatomy. In various embodiments, for each type of treatment using a medical device, corresponding claim criteria may be used. In some cases, the same medical device may be used for different treatment types, which may have different associated claim criteria. For example, forceps may be used first to clamp the anatomical structure and then used to extract the anatomical structure. In some examples, the type of use of a particular medical device may be determined by analyzing video frames taken during a surgical procedure. For example, a machine learning model may be trained using training examples to determine a type of use of a medical device from images and/or videos of a surgical procedure, and the trained machine learning model may be used to analyze frames of video taken during the surgical procedure and determine the type of use of a particular medical device. Examples of such training examples may include images and/or videos of at least a portion of a surgical procedure, and markers indicating the applicable type of a particular medical device in the surgical procedure.

In some examples, a machine learning model may be trained using training examples to determine claim criteria for a surgical procedure based on information related to the surgical procedure. Examples of such training examples may include information related to a particular surgical procedure, as well as indicia indicating desired claim criteria for the particular surgical procedure. Some non-limiting examples of such information relating to a surgical procedure may include: images and/or video of a surgical procedure, information based on analysis of images and/or video of a surgical procedure (some non-limiting examples of such analysis and information are described herein), anatomical structures associated with a surgical procedure, conditions of anatomical structures associated with a surgical procedure, medical instruments used in a surgical procedure, interactions between medical instruments and anatomical structures in a surgical procedure, stages of a surgical procedure, events occurring in a surgical procedure, information based on analysis of post-operative reports of a surgical procedure, and so forth. Further, in some examples, the trained machine learning model may be used to analyze video frames taken during a surgical procedure to determine the at least one claim criterion associated with the surgical procedure. In other examples, the trained machine learning model may be used to determine the at least one claim criterion associated with a surgical procedure based on any information related to the surgical procedure, including: such as at least one interaction between at least one medical instrument and at least one anatomical structure in a surgical procedure (e.g., the at least one interaction between the at least one medical instrument and the at least one anatomical structure identified by analyzing video frames taken during the surgical procedure), an analysis of a post-operative surgical report of the surgical procedure, a condition of the anatomical structure of the patient (e.g., a condition of the anatomical structure of the patient determined by analyzing video frames taken during the surgical procedure), a change in a condition of the anatomical structure of the patient during the surgical procedure (e.g., a change in a condition of the anatomical structure of the patient during the surgical procedure determined by analyzing video frames taken during the surgical procedure), a use of a particular medical device (e.g., a use of a particular medical device determined by analyzing video frames taken during the surgical procedure), A type of use of a particular medical device (e.g., a type of use of a particular medical device determined by analyzing video frames taken during a surgical procedure), a particular type of medical supply volume used during a surgical procedure (e.g., a volume of a particular type of medical supply used during a surgical procedure and determined by analyzing video frames taken during a surgical procedure), and so forth.

Additionally, embodiments may include analyzing video frames taken during a surgical procedure to determine the amount of a particular type of medical supply used during the surgical procedure, and determining the at least one claim criterion associated with the surgical procedure based on the determined amount. In an example embodiment, the amount of a particular type of medical supply may be determined using an image recognition algorithm that observes video frames of the surgical procedure indicating the amount of medical supplies used during the surgical procedure. The medical supply may be any material used during surgery, such as a drug, a needle, a catheter, or any other disposable or consumable material. The amount of supplies used may be determined from video frames of the surgery. For example, the amount of medication used by a patient may be determined by observing intravenous (IV) equipment used to supply medication and fluid to the patient, and IV blood or fluid bags may be counted as they are replaced. In various embodiments, a suitable machine learning model may be used to identify the amount of a particular type of medical supply used during, before, and/or after a surgical procedure, and based on the determined amount, determine at least one claim criterion associated with the surgical procedure. The machine learning model may be trained using historical footage of historical surgeries and the medical supplies used during those historical surgeries. In some examples, the particular type of medical supply used in a surgical procedure may be determined by analyzing video frames taken during the surgical procedure. For example, a machine learning model may be trained using training examples to determine the amount of a particular type of medical supply used in a surgical procedure from images and/or videos of the surgical procedure, and the trained machine learning model may be used to analyze video frames taken during the surgical procedure and determine the amount of the particular type of medical supply used in the surgical procedure. Examples of such training examples may include images and/or videos of at least a portion of a particular surgical procedure, as well as markers indicating the amount of the particular type of medical supply used in the particular surgical procedure.
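
A minimal sketch of one way such counting could be implemented; `detect_fresh_iv_bag` is a hypothetical per-frame classifier, and the 1-liter bag size in the usage comment is an assumption.

```python
def count_bag_replacements(frames, detect_fresh_iv_bag) -> int:
    """Count how many times an IV bag is replaced during the footage.

    `detect_fresh_iv_bag(frame)` is a hypothetical per-frame classifier returning True
    when a newly hung (full) bag is visible; a replacement is counted on each
    False -> True transition.
    """
    replacements, previously_visible = 0, False
    for frame in frames:
        visible = detect_fresh_iv_bag(frame)
        if visible and not previously_visible:
            replacements += 1
        previously_visible = visible
    return replacements

# supply_volume_ml = count_bag_replacements(frames, detect_fresh_iv_bag) * 1000  # assuming 1 L bags
```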

Aspects of a method of analyzing surgical images to determine insurance claim criteria are illustrated by an example process 2201 shown in fig. 22. At step 2211 of process 2201, the method may include the steps of: video frames taken during a surgical procedure on a patient are accessed. The video frames may be taken using any suitable image sensor and may be accessed using machine learning methods and/or by a healthcare provider, as discussed above. At step 2213, the method may include the steps of: video frames taken during a surgical procedure are analyzed to identify at least one medical instrument, at least one anatomical structure, and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames, as described above. For example, the frame may be analyzed using a suitable machine learning method, such as an image recognition algorithm, as previously discussed. At step 2215, the method may include the steps of: a database of claim criteria is accessed that is associated with the medical instrument, the anatomical structure, and an interaction between the medical instrument and the anatomical structure. At step 2217, the method may include the steps of: comparing the identified at least one interaction between the at least one medical instrument and the at least one anatomical structure with information in a claim criteria database to determine at least one claim criteria associated with a surgical procedure, as previously described, and at step 2219, the method may include the steps of: outputting the at least one claim criterion for use in obtaining insurance claims for the surgical procedure.
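
A hedged end-to-end sketch of process 2201; the analysis function, database object, and output channel are stand-ins for the components described in steps 2211 through 2219, not a definitive implementation.

```python
def process_2201(video_frames, analyze_frames, claim_criteria_db, output_fn):
    """Illustrative end-to-end flow of process 2201 (steps 2211-2219)."""
    # Step 2211: access video frames taken during the surgical procedure (passed in here).
    # Step 2213: identify instruments, anatomical structures and their interactions.
    instruments, anatomy, interactions = analyze_frames(video_frames)
    # Steps 2215 and 2217: access the claim criteria database and compare the identified
    # interactions with the stored ones to determine associated claim criteria.
    criteria = [claim_criteria_db.find_criteria(interaction) for interaction in interactions]
    # Step 2219: output the claim criteria for use in obtaining insurance claims.
    output_fn(criteria)
    return criteria
```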

As previously discussed, the present disclosure relates to methods and systems for analyzing surgical images to determine insurance claims and non-transitory computer-readable media that may contain instructions that, when executed by at least one processor, cause the at least one processor to perform operations such that surgical images can be analyzed to determine insurance claims as described above.

The disclosed systems and methods may involve analyzing surgical clips to identify features of the surgery, patient condition, and intra-surgical events to obtain information for populating post-surgical reports. Post-operative reports may be filled in by: analyzing surgical data obtained from the surgical procedure to obtain characteristics of the surgical procedure, a patient condition, and an intra-surgical event; and extracting information from the analyzed data for use in populating the post-operative report. Therefore, there is a need to analyze surgical data and extract information from the surgical data that can be used to populate post-operative reports.

Aspects of the present disclosure may relate to populating a postoperative report of a surgical procedure, including methods, systems, devices, and computer-readable media. For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic tool, but can be accomplished using many different tools.

Consistent with the disclosed embodiments, a method for populating a postoperative report of a surgical procedure may include the steps of: receiving an input of an identifier of a patient. Further, the method may comprise the steps of: receiving an input of an identifier of a healthcare provider. The post-operative report may be any suitable computer-based or paper-based report documenting the surgical procedure. In various embodiments, the post-operative report may include: a plurality of frames of surgical footage, audio data, image data, text data (e.g., physician notes), etc. In an example embodiment, the post-operative report may be filled in, partially filled in, or unfilled. For example, a post-operative report may contain fields (e.g., areas of the report) for saving various details obtained during the surgical procedure. In an example embodiment, at least some fields may have associated features (also referred to as field names) that may determine what type of information may be entered in the field. For example, a field having an associated name of "patient name" may allow entry of the patient's name in the field. The field named "pulse plot" may be a field for displaying the patient's pulse during the surgical procedure plotted as a function of time. In various embodiments, when a report is not filled out, all fields in the report may be empty; when a report is partially filled out, some of the fields may contain information obtained from the surgical procedure; and when the report is completely (or mostly) filled out, most of the fields may contain information related to the associated surgical procedure. In some examples, at least a portion of the post-operative report may have a free-form format, allowing a user and/or automated process to enter data (such as free text) in various organizations and/or formats, which may include other elements, such as links, images, videos, audio recordings, digital files, etc., optionally embedded in or accompanying the free text in some examples. It should be understood that any of the details described herein as included in specific fields of the post-operative report may likewise be included in the post-operative report as part of such free-form information embedded in or accompanying the free text.

Fig. 23 shows an example post-operative report 2301. Report 2301 may contain a number of fields, segments, and sections. Different fields may contain different types of information. For example, field 2310 may contain the name of the surgical procedure, field 2312 may contain the name of the patient, and field 2314 may contain the name of the healthcare provider. The field 2316 may include the name of the stage of the surgical procedure and the field 2318 may include the sequential number of the stages (e.g., the first stage of the surgical procedure). Multiple instances of fields 2314 and/or 2316 may be included in post-operative report 2301 to describe multiple stages of a surgical procedure. Report 2301 may include segments 2315 that may describe particular events during a surgical procedure. There may be multiple segments in the report 2301 that describe multiple events. One or more of these events may be linked to a particular surgical stage, while other events may not be linked to any surgical stage. In an example embodiment, segment 2315 may include: a field 2320 containing the name of the event, a field 2321A containing the start time of the event, a field 2321B containing the completion time of the event, and a field 2324 containing a description of the event (e.g., field 2324 may contain comments from a healthcare provider describing the event). Segment 2315 may include: a section 2326 for containing image fields, such as image field 1 through image field N, and a section 2328 for containing event-related surgical clips. For example, section 2328 may include fields V1 through Vn. In addition, segment 2315 may include a section 2329 that may contain links to various other data related to the surgical procedure. In various embodiments, the postoperative report may be divided into different sections indicated by tab pages 2331 and 2333, as shown in fig. 23. For example, when the user selects tab page 2331, information relating to a first portion of a surgical report may be displayed, while when the user selects tab page 2333, information relating to a second portion of the surgical report may be displayed. In various embodiments, the surgical report may include any suitable number of portions.

Fig. 23 also shows information that may be uploaded into report 2301 via upload entry form 2337. For example, a user may click on a field (e.g., field V1 as shown in fig. 23), and form 2337 may be presented to the user to upload data for field V1. In various embodiments, the fields, sections, and tab pages shown in fig. 23 are merely illustrative, and any other suitable fields, sections, and tab pages may be used.

In various embodiments, information that fills out at least part of the post-operative report may be obtained from a surgical short sheet of the surgical procedure. Such information may be referred to as image-based information. In addition, information about the surgical procedure may be derived from annotations by a healthcare provider or user, previously submitted forms of the patient (e.g., the patient's medical history), medical devices used during the surgical procedure, and so forth. Such information may be referred to as auxiliary information. In an example embodiment, the auxiliary information may include vital signs such as pulse, blood pressure, body temperature, respiratory rate, blood oxygen level, etc. reported by various medical devices used during the surgical procedure. The image-based information as well as the auxiliary information may be processed by a suitable computer-based software application and the processed information may be used to fill out post-operative reports. For example, fig. 24A shows an example of a process 2401 for processing information and populating a post-operative report 2301. In an example implementation, the image-based information 2411 and the auxiliary information 2413 may be used as input to a computer-based software application 2415, and the application 2415 may be configured to process the information 2411 and 2413, extract data for various fields present in a post-operative report (e.g., report 2301 as shown in fig. 24A), and fill in the various fields (as schematically indicated by arrows 2430A-2430D). Fig. 24B illustrates an example system 2402 for processing information and populating a post-operative report 2301. System 2402 may differ from system 2401 in that various data processed by applications 2415 may be stored in database 2440 prior to populating post-operative report 2301. By storing the data in database 2440, the data can be readily accessed for use in generating various other reports. Database 2440 may be configured to execute a software application to map data from database 2440 to fields of report 2301, as schematically illustrated by arrows 2431A-2431D.
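
A minimal sketch of how application 2415 might merge image-based information 2411 with auxiliary information 2413 and write fields of report 2301, optionally staging the processed data in a store standing in for database 2440; the dictionary-based report model and all names are illustrative assumptions.

```python
from typing import Optional

def populate_postoperative_report(image_based_info: dict, auxiliary_info: dict,
                                  report: dict, staging_db: Optional[dict] = None) -> dict:
    """Merge image-based (2411) and auxiliary (2413) information and write report fields.

    `report` models report 2301 as a mapping from field names to values; `staging_db`
    optionally models database 2440, where processed data is stored before field mapping.
    """
    processed = {**auxiliary_info, **image_based_info}   # image-based values take precedence
    if staging_db is not None:                           # fig. 24B variant: stage in database 2440
        staging_db.update(processed)
        processed = staging_db
    for field_name in report:                            # map data onto existing report fields
        if field_name in processed:
            report[field_name] = processed[field_name]
    return report

report_2301 = {"patient name": None, "surgical procedure": None, "pulse plot": None}
populate_postoperative_report(
    {"surgical procedure": "laparoscopic cholecystectomy"},
    {"patient name": "Jane Doe", "pulse plot": [72, 75, 71]},
    report_2301,
)
```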

As described above, embodiments of populating post-operative reports may include receiving input of identifiers for patients and healthcare providers. The identifier of the patient may be any suitable data or physical indicator (e.g., the patient's name, date of birth, social security number or other government identifier, patient number or other unique code, patient image, DNA sequence, voice ID, or any other indicator that uniquely identifies the patient).

In various embodiments, a patient identifier may be received as input. This may be done using any suitable transmission process (e.g., a process of transmitting data over a wired or wireless network, a process of transmitting data using a suitable input device such as a keyboard, mouse, joystick, etc.). In some cases, "receiving input" may include receiving by mail or courier (e.g., a paper document delivered in person).

Similar to the patient identifier, the healthcare provider's identifier may be any suitable indication of identity, such as a name, code, affiliation, address, employee number, physician license number, or any other mechanism that identifies the healthcare provider. In an example embodiment, the identifier may be an alphanumeric string that uniquely identifies the healthcare provider.

The disclosed embodiments may further include: receiving input of surgical footage of a surgical procedure performed on a patient by a healthcare provider. The surgical footage may be received as input by a computer-based software application for analyzing the input (e.g., application 2415 shown in fig. 24A), and/or, in some cases, receiving the input may include receiving the input by a healthcare professional or user. This may occur, for example, when a healthcare professional or user uploads video footage from a storage location and/or directly from a sensor that captures the video footage.

The surgical short sheet of surgery may include any form of recorded visual data, including recorded image and/or video data, which may also include sound data. The visual data may include a sequence of one or more images captured by an image sensor, such as cameras 115, 121, 123, and/or 125, as described above in connection with fig. 1. Some of the cameras (e.g., cameras 115, 121, and 125) may capture video/image data of the surgical table 141, and the camera 121 may capture video/image data of the surgeon 131 performing the surgery. In some cases, the camera may capture video/image data associated with a surgical team member, such as an anesthesiologist, nurse, surgical technician, etc. located in the operating room 101.

In various embodiments, the image sensor may be configured to capture surgical footage by converting visible light, X-ray light (e.g., via fluoroscopy), infrared light, or ultraviolet light into an image, a sequence of images, video, or the like. The image/video data may be stored as computer files in any suitable format, such as JPEG, PNG, TIFF, Audio Video Interleave (AVI), Flash Video Format (FLV), QuickTime File Format (MOV), MPEG (MPG, MP4, M4P, etc.), Windows Media Video (WMV), Material Exchange Format (MXF), and the like.

The surgical procedure may include any medical procedure associated with or involving a manual or operative procedure on the body of a patient. The surgical procedure may include cutting, abrading, suturing, or other techniques involving physical alteration of body tissues and/or organs. The surgical procedure may also include diagnosing the patient or administering a drug to the patient. Some examples of such surgical procedures may include: laparoscopic surgery, thoracoscopic surgery, bronchoscopic surgery, microscopic surgery, open surgery, robotic surgery, appendectomy, carotid endarterectomy, carpal tunnel release, cataract surgery, cesarean section, cholecystectomy, colectomy (such as partial colectomy, total colectomy, etc.), coronary angioplasty, coronary artery bypass surgery, debridement (e.g., of wounds, burns, infections, etc.), free skin graft, hemorrhoidectomy, hip replacement, hysterectomy, hysteroscopy, groin hernia repair, knee arthroscopy, knee replacement, mastectomy (such as partial mastectomy, total mastectomy, modified radical mastectomy, etc.), prostatectomy, prostate removal, shoulder arthroscopy, spinal surgery (such as spinal fusion, laminectomy, foraminotomy, discectomy, disc replacement, an intervertebral implant, etc.), tonsillectomy, cochlear implant surgery, resection of a brain tumor (e.g., meningioma, etc.), interventional procedures such as percutaneous transluminal coronary angioplasty, transcatheter aortic valve replacement, minimally invasive surgery to clear cerebral hemorrhage, or any other medical procedure involving some form of incision. Although the present disclosure is described with reference to surgical procedures, it is to be understood that it may be applicable to other forms of medical procedures or general procedures.

In various embodiments, the surgical procedure may be performed on the patient by a healthcare provider, and the patient is identified by an identifier, as described above. A healthcare provider may be a person, a group of people, an organization, or any entity authorized to provide health services to a patient. For example, the healthcare provider may be a surgeon, an anesthesiologist, a nurse practitioner, a pediatrician in general, or any other person or group of persons who may be authorized and/or capable of performing a surgical procedure. In various embodiments, the healthcare provider may be a surgical team performing the surgical procedure and may include a head surgeon, an assistant surgeon, an anesthesiologist, a nurse, a technician, and the like. The healthcare provider may manage the surgery, assist the patient in the surgery, etc. Consistent with the disclosed embodiments, a hospital, clinic, or other organization or facility may also be characterized as a healthcare provider. Likewise, the patient may be the person (or any living thing) on which the surgical procedure is performed.

Aspects of the disclosed embodiments may include analyzing a plurality of frames of a surgical short to derive image-based information for populating a postoperative report of a surgical procedure. In various embodiments, the image-based information may include: information about events occurring during the surgical procedure, information about the stage of the surgical procedure, information about surgical tools used during the surgical procedure, information about the anatomy on which the surgical procedure was performed, data from various devices (e.g., vital signs such as pulse, blood pressure, body temperature, respiration rate, blood oxygen level, etc.), or any other suitable information that can be obtained from images and that can be adapted for documentation in post-operative reports. Some other non-limiting examples of information based on analysis of a surgical short and/or algorithms for analyzing a surgical short and determining the information are described in this disclosure.

In various embodiments, the image-based information may be derived from the surgical footage using any suitable trained machine learning model (or other image recognition algorithm) that is used to identify events within the surgical footage, stages of the surgical procedure, surgical tools, anatomical structures, and the like, e.g., as described above. In some cases, machine learning methods may identify various characteristics of events, phases, surgical tools, anatomical structures, and the like. For example, a characteristic of an event (such as an incision) may include the length of the incision, and a characteristic of an anatomical structure may include the size or shape of the structure. In various embodiments, any suitable characteristics may be identified using machine learning methods (e.g., as described above) and, once identified, may be used to populate a surgical report.

In various embodiments, the derived image-based information may be used to populate a post-operative report of the surgical procedure. The process of filling out the post-operative report may include filling out the fields of the report with information specific to the fields. In an example embodiment, populating the post-operative report may be accomplished through a computer-based application (e.g., application 2415 as shown in fig. 24A). For example, the computer-based application may be configured to retrieve a field from a post-operative report, determine a name associated with the field, determine what type of information needs to be entered in the field based on the determined name (e.g., image-based information or any other suitable information), retrieve such information from a surgical short sheet or from ancillary information (e.g., ancillary information 2413 as shown in fig. 24A). In an example embodiment, retrieving information may include deriving image-based information from the surgical short. For example, if the field name is "surgical tool used," then retrieving information may include: an image recognition algorithm for identifying (in a surgical short sheet) surgical tools used during surgery is used, and a surgical report is filled in with the names of the identified tools. Thus, the derived image-based information may be used to fill out a postoperative report of the surgical procedure. Other examples of image-based information that may be used to fill out a report may include: the start time and end time of the procedure or portions thereof, the complications encountered, the condition of the organ, and other information that may be derived through analysis of the video data. These may also include characteristics of the patient, characteristics of one or more healthcare providers, information about the operating room (e.g., the type of devices present in the operating room, the type of image sensors available in the operating room, etc.), or any other relevant data.
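
A short sketch of the field-driven population loop described above; the extractor functions are hypothetical stand-ins for the image recognition and auxiliary-information retrieval steps.

```python
from typing import Callable, Dict

def fill_report_fields(report: Dict[str, object],
                       extractors: Dict[str, Callable[[], object]]) -> Dict[str, object]:
    """For each empty report field, look up an extractor by field name and fill the field.

    `extractors` maps field names (e.g. 'surgical tool used') to functions that derive the
    corresponding value from the surgical footage or from auxiliary information.
    """
    for field_name, value in report.items():
        if value is None and field_name in extractors:
            report[field_name] = extractors[field_name]()
    return report

# e.g. fill_report_fields(
#     {"surgical tool used": None},
#     {"surgical tool used": lambda: identify_tools_in_footage(footage)},  # hypothetical helper
# )
```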

Aspects of a method of populating a post-operative report of a surgical procedure are illustrated by an example process 2501 as shown in fig. 25. At step 2511 of process 2501, the method may include the steps of: an input of an identifier of the patient is received, and at step 2513, the method may include the steps of: an input of an identifier of a healthcare provider is received, as described above. At step 2515, the method may include the steps of: input is received of a surgical short sheet of a surgical procedure performed on a patient by a healthcare provider. Receiving input of a surgical short sheet may include: an appropriate computer-based software application or healthcare professional's input is received, as discussed above. At step 2517, the method may include the steps of: the plurality of frames of the surgical short sheet are analyzed to derive image-based information for filling out a post-operative report of the surgical procedure (as described herein), and at step 2519, the method may include the steps of: the derived image-based information is caused to fill out a post-operative report of the surgical procedure, as previously described.

Aspects of a method of populating a postoperative report of a surgical procedure may include: the surgical clips are analyzed to identify one or more stages of the surgical procedure. The stages may be automatically distinguished from one another based on a training model that is trained to distinguish one part of a surgical procedure from another, e.g., as described herein.

For purposes of this disclosure, a phase may refer to a particular period or stage of a process or series of events. Thus, a surgical stage may refer to a sub-portion of a surgical procedure. For example, the surgical stages of laparoscopic cholecystectomy may include: trocar placement, preparation, Calot's triangle dissection, clipping and cutting of the cystic duct and artery, gallbladder dissection, gallbladder packaging, cleaning and coagulation of the liver bed, gallbladder retraction, and the like. In another example, the surgical stages of cataract surgery may include: preparation, povidone-iodine injection, corneal incision, capsulorhexis, phacoemulsification, cortical aspiration, intraocular lens implantation, intraocular lens adjustment, wound closure, and the like. In yet another example, the surgical stages of pituitary surgery may include: preparation, nasal incision, nasal retractor installation, tumor access, tumor removal, nasal columella replacement, suturing, nasal compression, and the like. Some other examples of surgical stages may include: preparation, incision, laparoscopic positioning, suturing, and the like.

In some examples, the user may identify the stage by tagging a segment of the surgical clip with a word/sentence/string that identifies the name or type of the stage. The user may also identify an event, procedure, or device used, and the input may be associated with a particular video clip (e.g., via a look-up table or other data structure). User input may be received through a user interface of a user device, such as a desktop computer, laptop, tablet, mobile handset, wearable device, internet of things (IoT) device, or any other device for receiving input from a user. For example, the interface may provide: one or more drop down menus having one or more phase name selection lists; a data entry field that allows a user to enter a phase name and/or suggest a phase name once a few letters are entered; a selection list of phase names may be selected; a set of selectable icons, each icon associated with a different phase; or any other mechanism that allows a user to identify or select a stage.

In some embodiments, analyzing the surgical footage to identify one or more stages of the surgical procedure may involve analyzing frames of the video footage using computer analysis (e.g., a machine learning model), for example, as described above. Computer analysis may include any form of electronic analysis using a computing device. In some implementations, the computer analysis can include identifying features of one or more frames of the video footage using one or more image recognition algorithms. Computer analysis may be performed on a single frame, or may be performed across multiple frames, e.g., to detect motion or other changes between frames.

In some embodiments, analyzing the surgical procedure to identify at least one stage of the surgical procedure may involve associating a name with the at least one stage. For example, if the identified stage includes cholecystectomy, the name "cholecystectomy" may be associated with the stage. In various embodiments, the derived image-based information (derived from the surgical short of the surgery by identifying the stage) may include an associated stage name as described above.

Further, aspects of the method of populating a post-operative report of a surgical procedure may include: characteristics of at least one of the identified phases are identified. The characteristic of the stage may be any characteristic of the stage, such as the duration of the stage, the location of the stage in a sequence of stages during the surgical procedure, the complexity of the stage, identification of the technique used, information related to the medical instrument used in the stage, information related to the action performed in the stage, a change in the condition of the anatomy during the stage, or any other information that may characterize the stage. The phase characteristics may be expressed in the form of an alphanumeric string. For example, "first stage" may identify the stage as the first stage in a sequence of stages during a surgical procedure, "one hour" may describe the duration of the stage as one hour, "bronchoscopy" may identify the stage as a bronchoscopy, and so on. Additionally or alternatively, the characteristics of the stage may be non-textual data (e.g., image, audio, numerical, and/or video data) collected during the surgical procedure. For example, a representative image of an anatomical structure (or a surgical instrument, or an interaction of a surgical instrument with an example anatomical structure) performed during a stage of a surgical procedure may be used as a characteristic of the stage. In one example, a machine learning model may be trained using training examples to identify characteristics of a surgical stage from images and/or videos. Examples of such training examples may include images and/or videos of at least a portion of a surgical stage of a surgical procedure, and markers indicating one or more characteristics of the surgical stage. Some non-limiting examples of such characteristics may include: the name of the surgical stage, a textual description of the surgical stage, or any other characteristic of the surgical stage described above. Further, in some examples, the trained machine learning model may be used to analyze the surgical short to identify characteristics of the at least one of the identified stages. In various embodiments, the derived image-based information (used to populate the surgical record) may be based on the identified at least one stage and the identified characteristics of the at least one stage. For example, combining both the phase and the characteristic may enable the phase to be recorded in a more meaningful way. For example, during the sewing phase of the valve, if an intraoperative leak (characteristic of the phase) is detected, the phase/characteristic combination may be recorded in a surgical record. In some cases, the derived image-based information may include a section of video taken during a stage of the surgical procedure.

Aspects of a method of populating a postoperative report of a surgical procedure may include: determining at least a start of the at least one phase; and wherein the derived image-based information is based on the determined start. The start of at least one stage may be determined by performing computer image analysis on the surgical short, for example as described above. For example, using a trained machine learning model (such as via a recurrent convolutional neural network), the beginning of a particular stage can be distinguished from the end of a previous stage, and the location can be identified and stored in a surgical record. In another example, a phase may begin when a particular medical instrument first appears in a video clip, and an object detection algorithm may be used to identify a first appearance of the particular medical instrument in the surgical clip.

In some cases, a time stamp may be associated with at least one stage, and the derived image-based information may include the time stamp associated with the at least one stage. The time stamp may be recorded in a number of ways, including the time elapsed from the start of the surgical procedure, the time measured as the time of day, or a time relative to some other intraoperatively recorded time. In various embodiments, a time stamp may be associated with the start of each identified stage (e.g., a time stamp may be associated with the start location of a surgical stage within the surgical footage). The time stamp may be any suitable alphanumeric identifier or any other data identifier (e.g., an audio signal or image) and may include information about the time (and/or possibly a time range) associated with the start of the identified phase.

An example surgical event (such as an incision) may be detected using a motion detection algorithm, for example, as described above. Such identified surgical events may identify the beginning of a surgical phase. In an example embodiment, an event to begin a surgical phase may be detected based on machine learning techniques. For example, a machine learning model may be trained using a historical surgical short that includes known events to begin a surgical phase.

Further, the disclosed embodiments may include determining at least an end of the at least one stage, and the derived image-based information may be based on the determined end. The end of a surgical phase may be determined by detecting the location of the end of the surgical phase within the surgical footage. In various embodiments, a time stamp may be associated with the identified end of each stage (e.g., a time stamp may be associated with the location of the end of a surgical stage within the surgical footage). As discussed above, the end time stamp may be recorded in the same manner as the start time stamp and may be characterized by any suitable alphanumeric identifier or any other data identifier. For example, the surgical footage may be analyzed to identify the beginnings of successive surgical stages, and the end of one stage may coincide with the beginning of the next stage. In another example, a stage may end when a particular medical instrument last appears in a video clip, and an object detection algorithm may be used to identify the last appearance of the particular medical instrument in the surgical footage.
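
As a non-limiting illustration combining the start and end determinations described above, the following sketch computes a start and end timestamp for each phase from hypothetical per-frame phase labels (e.g., the output of a phase-recognition model).

```python
# Illustrative sketch: given per-frame phase labels (hypothetical output of a
# phase-recognition model), compute the start and end timestamp of each phase.
def phase_boundaries(frame_phases, fps=30.0):
    boundaries = {}
    for frame_index, phase in enumerate(frame_phases):
        start, _ = boundaries.get(phase, (frame_index / fps, None))
        boundaries[phase] = (start, (frame_index + 1) / fps)
    return boundaries  # {phase: (start_seconds, end_seconds)}

labels = ["dissection"] * 90 + ["suturing"] * 60
print(phase_boundaries(labels))
# {'dissection': (0.0, 3.0), 'suturing': (3.0, 5.0)}
```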

Embodiments of automatically populating a post-operative report of a surgical procedure may further include: data is transmitted to the healthcare provider, the transmitted data including the patient identifier and the derived image-based information. During or after the surgical procedure, the video taken during the surgical procedure may be sent to a healthcare provider to fill out the patient's associated surgical record. To ensure that the video fills in the proper records, a patient identifier may accompany the video in the transmission. In some embodiments, this may enable the surgical record to be updated automatically with the video without human intervention. In other embodiments, at the transmitting end and/or at the receiving end, a human may select a video to transmit or accept for incorporation into a patient's medical record. In some cases, sending data may involve mailing (or personally delivering) a physical copy of a document (e.g., paper copy, CD-ROM, hard disk, DVD, USB drive, etc.) that describes the data. Additionally or alternatively, transmitting the data may include transmitting the data to at least one of a health insurance provider or a medical accident underwriter.
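
By way of a non-limiting illustration of electronic transmission, the following sketch packages a patient identifier together with derived image-based information and sends it to a healthcare provider; the endpoint URL, field names, and values are hypothetical placeholders.

```python
# Illustrative sketch: package the patient identifier together with the derived
# image-based information and send it to a (hypothetical) healthcare-provider
# endpoint so the corresponding surgical record can be populated.
import json
import urllib.request

payload = {
    "patient_identifier": "patient-12345",  # hypothetical identifier
    "derived_image_based_information": {
        "phase": "suturing",
        "start_seconds": 1820.0,
        "events": [{"name": "intraoperative leak", "time_seconds": 1900.5}],
    },
}

request = urllib.request.Request(
    "https://ehr.example.org/api/surgical-records",  # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually transmit
```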

Aspects of the present disclosure may include: analyzing the surgical footage to identify at least one recommendation for post-operative treatment; and providing the identified at least one recommendation. As described earlier, the surgical footage may be analyzed in various ways (e.g., using machine learning methods, by a healthcare provider, etc.). In various embodiments, the machine learning method may be configured not only to identify events within the video frames, but also to form conclusions about various aspects of the surgical procedure based on analysis of the surgical footage. For example, post-operative wound care may vary depending on the nature of the surgical wound. Video analysis may determine this characteristic and may also provide recommendations for post-operative treatment of the wound site. Such information may be sent to the surgical record and stored. In some cases, the machine learning method may identify intraoperative events (e.g., adverse events) and may provide an indication that these events require specific post-operative treatment. Such events may be analyzed by machine learning, and recommendations for post-operative treatment may be provided automatically. In one example, a first recommendation for post-operative treatment may be identified in response to a first surgical event identified in the surgical footage, and a second recommendation for post-operative treatment may be identified in response to a second event identified in the surgical footage; the second recommendation may be different from the first recommendation. Similarly, a first recommendation for post-operative treatment may be identified in response to a first condition of the anatomical structure identified in the surgical footage, and a second recommendation for post-operative treatment may be identified in response to a second condition of the anatomical structure identified in the surgical footage; again, the second recommendation may be different from the first recommendation. In some examples, a machine learning model may be trained using training examples to generate recommendations for post-operative treatment from surgical images and/or surgical videos, and the trained machine learning model may be used to analyze the surgical footage and identify the at least one recommendation for post-operative treatment. Examples of such training examples may include images or videos of at least a portion of a surgical procedure, and markers indicating a desired recommendation for post-operative treatment corresponding to the surgical procedure.
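
As a non-limiting illustration of associating different identified events with different post-operative treatment recommendations, the following sketch uses a simple lookup table; the event names and recommendations are hypothetical placeholders and are not clinical guidance.

```python
# Illustrative sketch: map intraoperative events identified in the footage to
# post-operative treatment recommendations. Event names and recommendations are
# hypothetical placeholders, not clinical guidance.
EVENT_TO_RECOMMENDATION = {
    "intraoperative bleeding": "schedule early follow-up and monitor hemoglobin",
    "energy device near bowel": "observe for delayed bowel injury; review footage",
    "large incision": "extended wound care and physical-activity restrictions",
}

def recommend(identified_events):
    # Return one recommendation per recognized event; unrecognized events yield none.
    return [EVENT_TO_RECOMMENDATION[e] for e in identified_events if e in EVENT_TO_RECOMMENDATION]

print(recommend(["intraoperative bleeding", "routine suturing"]))
```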

Such suggestions may include: proposing physical therapy, medication, further physical examination, continuing surgery, etc. In some cases, the advice may not be directly related to the medical activity, but may include dietary advice, sleep advice, physical activity advice, or stress management advice. In various embodiments, the identified recommendations may be provided to a healthcare professional responsible for post-operative treatment of the patient. Additionally or alternatively, the recommendation may be provided to a third party, which may be a patient, family member, friend, or the like.

In one embodiment, the analysis of the surgical footage may include identifying that, during a given period of the surgical procedure, the surgeon was working in very close proximity to the patient's bowel (e.g., using an energy device). When such an event is identified (e.g., using an object detection algorithm, using a trained machine learning model, etc.), a notification (e.g., a push notification) may be sent to alert the surgeon (or any other healthcare professional supervising the post-operative treatment of the patient) to further analyze the surgical footage and plan specialized procedures to avoid catastrophic post-operative events (e.g., bleeding, cardiac arrest, etc.).

In various embodiments, populating a postoperative report of a surgical procedure may include: enabling the healthcare provider to alter at least a portion of the derived image-based information in the post-operative report. For example, a healthcare provider (also referred to as a healthcare professional) may access post-operative reports via a software application configured to display information in the post-operative reports. In various embodiments, a healthcare professional may be enabled to alter some or all of the fields within the post-operative report. In some embodiments, a particular field may be locked as unchangeable without administrative rights. Examples of modifiable fields may be those that contain text-based data (e.g., which may be altered by entering new data via a keyboard, mouse, microphone, etc.), image data (e.g., by uploading one or more images related to the surgical procedure, or information superimposed on the one or more images, etc.), video data (e.g., by uploading one or more videos related to the surgical procedure, or information superimposed on one or more frames of the one or more videos, etc.), or audio data (e.g., audio data captured during the surgical procedure, etc.).

In various embodiments, a version tracking system may be used to track updates to post-operative reports. In an example embodiment, the version tracking system may maintain all of the data that was previously used to fill out post-operative reports. The version tracking system may be configured to track differences between different versions of a post-operative report, and may be configured to track information about the party making the changes to the report (e.g., name of healthcare professional, time of update, etc.).

In some embodiments, populating the post-operative report of the surgical procedure may be configured such that at least part of the derived image-based information is identified in the post-operative report as automatically generated data. In various embodiments, since the derived image-based information is used to populate the post-operative report, populating the report may include identifying how the derived image-based information was generated. For example, if an elevated heart rate is determined using computer vision analysis of detected blood vessel pulses, the source of the determination may be annotated as a video-based determination. Similarly, video analysis may automatically estimate the amount of blood loss resulting from a rupture, and along with the estimated loss, the surgical report may note that the amount of loss is based on an estimate from the video analysis. Indeed, any indication derived from video analysis may be so annotated in the post-operative report using any text, graphic, or icon-based information to reflect the source of the data. For example, a movie icon may be displayed next to data derived from a video. Alternatively, if a healthcare professional identifies an event within the surgical footage and provides a segment of the surgical footage corresponding to the identified event as derived image-based information, such information may be considered to be generated by the healthcare professional and may not be classified as automatically generated data.
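
By way of a non-limiting illustration, the following sketch tags each report field with the source that produced it, so that automatically generated (video-derived) data can be identified as such; field names and values are hypothetical.

```python
# Illustrative sketch: tag each field of the post-operative report with the source
# that generated it, so automatically generated data is identified as such.
report_fields = [
    {"name": "estimated_blood_loss_ml", "value": 150, "source": "video_analysis"},
    {"name": "wound_closure_method", "value": "staples", "source": "video_analysis"},
    {"name": "surgeon_notes", "value": "uneventful procedure", "source": "healthcare_professional"},
]

for field in report_fields:
    marker = "[auto]" if field["source"] == "video_analysis" else "[manual]"
    print(f'{marker} {field["name"]}: {field["value"]}')
```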

Disclosed embodiments may include: the surgical clips are analyzed to identify surgical events within the surgical clips, e.g., as described above. As previously discussed, this analysis may be performed using a machine learning model. The identification may be derived from historical data of surgical events that have been identified and the name of the event. Thus, when a similar event is detected through machine learning, the previously identified name of the event may be similarly applied to the current event identification.

Further, consistent with the disclosed embodiments, not only may events be identified, but also characteristics of the surgical event. The characteristic of the surgical event may be the type of event or any other information characterizing the event. For example, if the event is a cut, the machine learning model may be configured to return the name "cut" as the type of event, and the length and depth of the cut as the characteristics of the event. In some cases, a predetermined list of possible types of various events may be provided to the machine learning model, and the machine learning model may be configured to select a type from the list of event types to accurately characterize the event. The number of characteristics may vary depending on the type of event identified. Some rather simple events may have a relatively short list of associated characteristics, while other events may have more associated alternative characteristics.

As discussed, machine learning models are one way to identify events, where example trained models are used to identify (or determine) events. The training may involve any suitable method, such as a supervised learning method, for example. For example, a historical surgical short that contains features corresponding to an event may be represented as input data to a machine learning model, and the machine learning model may output the name of the event corresponding to the features within the short. Various parameters of the machine learning model may be adjusted to train the machine learning model to correctly identify events corresponding to features in the historical visual data. For example, if the machine learning model is a neural network, parameters of such neural network (e.g., weights of the network, number of neurons, activation functions, bias of the network (bias), number of layers within the network, etc.) may be adjusted using any suitable method (e.g., back propagation processes may be used to adjust weights of the neural network).
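
As a non-limiting illustration of adjusting neural network weights via backpropagation, the following sketch runs a minimal supervised training loop on hypothetical feature vectors standing in for features extracted from historical surgical footage.

```python
# Illustrative sketch: a minimal supervised training loop in which network weights
# are adjusted by backpropagation to classify events from (hypothetical) feature
# vectors extracted from historical surgical footage.
import torch
from torch import nn, optim

features = torch.randn(64, 128)       # 64 historical clips, 128-dim features
labels = torch.randint(0, 3, (64,))   # 3 hypothetical event classes

model = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 3))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()    # backpropagation computes gradients of the loss
    optimizer.step()   # weights are adjusted from the gradients
```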

In one embodiment, the event may be identified by a medical professional (e.g., a surgeon), and the event may be tagged as it occurs. If the machine learning model identifies a surgical activity as potentially interesting but lacks an associated name for that activity, the associated footage may be saved and the user may be prompted later to provide the associated name.

In some cases, the surgeon may mark events during the surgical procedure for subsequent identification. For example, the surgeon may mark the event using a visual or audio signal (e.g., a gesture, a body posture, a visual signal generated by a light source produced by the medical instrument, spoken words, etc.) that may be captured by one or more image/audio sensors and recognized as a trigger for the event.

In various embodiments, the derived image-based information may be based on the identified surgical event and the identified characteristics of the event. After an event and one or more characteristics of the event are identified as discussed earlier, the combination may be analyzed to determine image-based information that may not be possible to derive from the event or characteristics alone. For example, if a particular characteristic of a particular event is associated with a known risk of a post-operative complication, the risk may be determined and included in the image-based information. Alternatively, the derived image-based information may include one or more of the following, as an example: the name of the event, the section of the surgical clip corresponding to the event, the name and/or image of the surgical instrument used during the event, the name and/or image of the anatomy being operated on during the event, the image of the surgical instrument's interaction with the anatomy, the duration of the event, and/or any other information derived from the video.

As mentioned, the surgical clips may be analyzed to determine event names for the identified surgical events. As described above, the event name may be determined using a suitable machine learning model. Alternatively, the name of the event may be identified by a healthcare professional. In various implementations, the derived image-based information may include the determined event name.

Aspects of the disclosed embodiments may also include associating a time stamp with the identified surgical event. The process of associating a time stamp with the identified surgical event may be similar to the process of associating a time stamp with a stage of the surgical procedure. For example, a time stamp may be associated with the start of an event of a surgical procedure (e.g., the start of a surgical event within the surgical footage, or some other intermediate location or range of locations). The time stamp may be any suitable alphanumeric identifier or any other graphical or data identifier. For example, the time stamp may be an icon or other graphic that appears on an active or static timeline of some or all of the surgical procedure. If active, the time stamp may be clickable (or otherwise selectable) to present a short segment of the associated event. The time stamp may be made to appear in the footage through text or graphic overlays, or through embedded identifying audio indicators presented during playback. Such indicators may include one or more pieces of information, such as time data (the time or time range of the occurrence), location data (where the event occurs), or characterization data (describing characteristics of the occurrence). In some cases, a time stamp may be associated with the end of the event (e.g., a time stamp may be associated with the end location of the event within the surgical footage). The derived image-based information may include multiple such time stamps for multiple events and/or multiple locations within an event.

In some embodiments, providing the derived image-based information may be done in a form that enables updating of the electronic medical record. For example, the derived image-based information may include text data, image data, video data, audio data, etc., which may be in the form of a software application that may be uploaded to store and display electronic medical records (e.g., a standalone application for storing and displaying medical records, a Web interface for displaying medical records using information stored in a database, etc.). In various embodiments, the software application for storing and displaying medical records may include an interface for updating electronic medical records using the derived image-based information. The interface may include graphical user elements for uploading image, video, and audio data, for uploading text data, for typing text data into an electronic medical record, for updating an electronic medical record using a computer mouse, and so forth.

In various implementations, the derived image-based information may be based in part on user input. For example, a user, such as a healthcare professional, may provide input when taking a surgical short, e.g., as described above, and the derived image-based information may be based in part on such input. For example, such input may indicate a particular point in time within the surgical slice.

In various embodiments, the derived image-based information may include a first portion associated with a first portion of a surgical procedure and a second portion associated with a second portion of the surgical procedure. Splitting the image-based information into multiple parts may facilitate its classification. For example, if a first portion of the surgery involves making multiple incisions and a second portion of the surgery involves suturing, such portions may be used to categorize those parts of the surgery. In some cases, a first set of sensors may be used to collect image-based information during the first portion of the surgical procedure, and a different set of sensors may be used to collect image-based information during the second portion of the surgical procedure. For example, during the first portion, an image sensor located on the surgical instrument may be used to capture the surgical footage, and during the second portion of the surgical procedure, an overhead image sensor (i.e., an image sensor located above the operating table) may be used to capture the surgical footage.

In various embodiments, the post-operative report may include a first portion corresponding to a first portion of the surgical procedure and a second portion corresponding to a second portion of the surgical procedure. The beginning of the first portion of the post-operative report may be indicated by a first location (e.g., the first location may be a pointer in a data file, a location of a cursor in a text file, a data record in a database, etc.). The beginning of the second portion of the post-operative report may be indicated by a second location, which may be any suitable indication of the location in the file that is the starting point of the second portion of the post-operative report (e.g., the second location may likewise be a pointer in a data file, a location of a cursor in a text file, a data record in a database, etc.). In various embodiments, the post-operative report may be divided into multiple parts based on the corresponding parts of the surgical procedure. In an example embodiment, a machine learning method (or a healthcare provider) may identify various portions of a surgical procedure and configure the post-operative report to have such identified portions. The post-operative report is not limited to two portions, and may include more or fewer than two portions.

Aspects of the disclosed embodiments may include receiving a preliminary post-operative report. The post-operative report may be received by any entity, whether an organization, a person, or a computer (e.g., an insurance company or healthcare organization, a healthcare professional, or a computer-based program for populating the post-operative report, such as application 2415 shown in fig. 24A). In various embodiments, analyzing the preliminary post-operative report may involve selecting a first location and a second location within the preliminary post-operative report, the first location associated with a first portion of the surgical procedure and the second location associated with a second portion of the surgical procedure. Such a selection may enable someone (or a machine) analyzing the report to jump directly to a region of interest in the report. Accordingly, analyzing the preliminary post-operative report may include identifying an indicator of one or more of the first location and the second location. The indicator may be any suitable alphanumeric or graphical indicator. For example, the indicator of the first location may be a text string "this is the start of the first portion of the post-operative report" or a graphical start icon. In one example, a Natural Language Processing (NLP) algorithm may be used to analyze the textual information included in the preliminary post-operative report to identify portions of the textual information that discuss different aspects of the surgical procedure (such as different surgical stages, different surgical events, different uses of medical instruments, etc.), and to associate the identified portions of the textual information with different portions of the surgical procedure (e.g., with a corresponding surgical stage, with a corresponding surgical event, with a corresponding use of a medical instrument, etc.). Further, in some examples, the first and second locations (and additional locations) within the preliminary post-operative report may be based on and/or linked with the identified portions of the textual information.

Further, embodiments may include: a first portion of the derived image-based information is inserted at the selected first location and a second portion of the derived image-based information is inserted at the selected second location. For example, a first portion of the post-operative report may include a first set of fields that may be filled in by derived image-based information captured during the first portion of the surgical procedure, and a second portion of the post-operative report may include a second set of fields that may be filled in by derived image-based information captured during the second portion of the surgical procedure. In another example, the first portion of the derived image-based information may correspond to a first portion of a surgical procedure and the second portion of the derived image-based information may correspond to a second portion of the surgical procedure, the first location within the preliminary post-operative report may be identified as corresponding to the first portion of the surgical procedure (as described above), the second location within the preliminary post-operative report may be identified as corresponding to the second portion of the surgical procedure (as described above), and in response, the first portion of the derived image-based information may be inserted at the first location and the second portion of the derived image-based information may be inserted at the second location. Some non-limiting examples of the first and second portions of the surgical procedure may include: different surgical stages, different surgical events, different medical instrument uses, different actions, etc.
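
By way of a non-limiting illustration, the following sketch locates the sections of a preliminary post-operative report corresponding to two portions of the procedure (using simple keyword matching as a stand-in for the NLP step described above) and inserts the corresponding portions of the derived image-based information at those locations. The report text and inserted strings are hypothetical.

```python
# Illustrative sketch: find the first and second locations in a preliminary report
# (here identified by their section headings) and insert the corresponding portions
# of the derived image-based information at those locations.
preliminary_report = (
    "Dissection phase: performed as planned.\n"
    "Closure phase: wound closed.\n"
)
derived = {
    "Dissection phase": "[video] adhesiolysis observed at 00:12:40.",
    "Closure phase": "[video] closure with staples observed at 01:05:10.",
}

populated = []
for line in preliminary_report.splitlines():
    populated.append(line)
    for section, insert_text in derived.items():
        if line.startswith(section):  # location corresponding to this portion of the procedure
            populated.append(insert_text)
print("\n".join(populated))
```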

Aspects of the present disclosure may further include: analyzing the surgical short to select at least a portion of at least one frame of the surgical short; and causing the selected at least part of the at least one frame of the surgical short to be included in a post-operative report of the surgical procedure. For example, if the post-operative report includes fields configured to hold one or more images of a surgical instrument used during the surgical procedure, the example machine learning model may be configured to identify one or more frames of the surgical short and select a portion of the identified frames that contain the surgical instrument. In addition, the selected portion (or portions) of the at least one frame may be inserted (e.g., filled) into the post-operative report. The machine learning model may also be configured to extract other relevant frames of the surgical short. For example, a frame of the surgical short that depicts the anatomy as the surgical focus, or a frame that depicts the interaction between the surgical instrument and the anatomy, may be extracted. These related frames may also fill out post-operative reports.
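
As a non-limiting illustration of selecting a portion of a frame, the following sketch crops a detected instrument's bounding box from a frame so that only that region would be included in the post-operative report; the frame size and box coordinates are hypothetical.

```python
# Illustrative sketch: select the part of a frame that contains a detected surgical
# instrument by cropping its bounding box, so only that region is included in the
# post-operative report. Frame and box values are hypothetical.
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # one video frame (H x W x RGB)
x_min, y_min, x_max, y_max = 800, 400, 1100, 700     # detected instrument bounding box

instrument_crop = frame[y_min:y_max, x_min:x_max]    # portion of the frame to include in the report
print(instrument_crop.shape)                          # (300, 300, 3)
```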

The disclosed embodiments may further include: receiving a preliminary post-operative report, and analyzing the preliminary post-operative report and the surgical short to select the at least a portion of the at least one frame of the surgical short. For example, the machine learning model may be configured to analyze post-operative reports and identify discussions of adverse events (e.g., bleeding). Adverse events may be identified, for example, by indications stored in post-operative reports, using NLP algorithms, and the like. The indication may be, for example, an indication of the name of the adverse event. It may include the time when an adverse event occurred during the surgical procedure. The adverse event may be determined using a machine learning model configured to retrieve a surgical short of the surgical procedure and identify a portion of the frame showing visual data representing the adverse event (e.g., a portion of the frame showing bleeding). Further, in some examples, the identified portion of the frame may be inserted into a post-operative report in conjunction with, or otherwise associated with, a discussion of the adverse event.

Additional aspects of the disclosed embodiments may include analyzing the preliminary post-operative report and the surgical footage to identify at least one inconsistency between the preliminary post-operative report and the surgical footage. In various implementations, inconsistencies may be determined by comparing information stored in the report with information derived by a machine learning model in order to detect errors. For purposes of illustration, one of countless potential inconsistencies may occur when a medical professional indicates in a report that the surgical site was closed with sutures while the video reveals that the site was closed with staples. This vetting may be performed, for example, using a computer-based software application (e.g., application 2415 as shown in fig. 24A) in which the post-operative report is compared to a video clip of the associated procedure. If a discrepancy is identified, the computer-based software application may determine the source of the error, may annotate the error, may send a notification of the error, and/or may automatically correct the error. For example, the application may analyze various versions of the preliminary post-operative report (e.g., using a version tracking system as described above) to identify at which step of generating the preliminary post-operative report the discrepancy first occurred.
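
By way of a non-limiting illustration, the following sketch compares a finding extracted from the preliminary report with the corresponding finding derived from the surgical footage and flags an inconsistency; the field names and values are hypothetical.

```python
# Illustrative sketch: compare report-derived findings with video-derived findings
# and flag any inconsistency between them.
report_findings = {"wound_closure_method": "suture"}
video_findings = {"wound_closure_method": "staples"}

inconsistencies = {
    key: (report_findings[key], video_findings[key])
    for key in report_findings
    if key in video_findings and report_findings[key] != video_findings[key]
}
if inconsistencies:
    print("Inconsistencies detected:", inconsistencies)  # could trigger a notification
```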

As previously mentioned, embodiments of the present disclosure may include providing an indication of the identified at least one inconsistency. The indication may be provided by sending a notification to the healthcare professional using any suitable means, as discussed above.

As previously described, various embodiments may include receiving input of a patient identifier and input of an identifier of a healthcare provider. Further, the method may comprise receiving input of surgical footage of a surgical procedure performed on the patient by the healthcare provider, as previously described. The method may further comprise analyzing the plurality of frames of the surgical footage to identify phases of the surgical procedure based on detected interactions between the medical instrument and the biological structure, and, based on the interactions, associating names with the respective identified phases. For example, at least some frames of the surgical footage may depict a portion of the surgical procedure performed on a biological structure (also referred to herein as an anatomical structure). As discussed above, the interaction may include any action of the medical instrument that may affect the biological structure, and vice versa. For example, the interaction may include contact between the medical instrument and the biological structure, action of the medical instrument on the biological structure (such as cutting, clamping, grasping, applying pressure, scraping, etc.), a physiological response of the biological structure, emission of light by the medical instrument toward the biological structure (e.g., the surgical tool may be a laser emitting light toward the biological structure), sound emitted toward the anatomical structure, an electromagnetic field generated near the biological structure, a current induced in the biological structure, or any other suitable form of interaction.

In some cases, detecting the interaction may include identifying proximity of the medical instrument to the biological structure. For example, by analyzing a surgical video clip, the image recognition model may be configured to determine the distance between the medical instrument and a point (or a set of points) on the biological structure.
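
As a non-limiting illustration of a proximity check, the following sketch estimates the distance between a medical instrument and a biological structure from their detected bounding boxes; the box coordinates are hypothetical detector outputs.

```python
# Illustrative sketch: estimate proximity between a medical instrument and a
# biological structure from their detected bounding boxes, given as
# (x_min, y_min, x_max, y_max) in pixels (hypothetical detector output).
def box_center(box):
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def center_distance(instrument_box, structure_box):
    (x1, y1), (x2, y2) = box_center(instrument_box), box_center(structure_box)
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

instrument = (100, 120, 140, 180)
structure = (150, 150, 260, 240)
print(center_distance(instrument, structure))  # pixel distance; a threshold could flag an interaction
```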

Aspects of the present disclosure may involve associating names with the identified various stages based on detected interactions between the medical instrument and the biological structure. The name may be associated with the identified respective phase using any suitable means. For example, the name may be provided by a user, as described above, or may be automatically determined using a suitable machine learning method, as described above. In particular, the process of identifying stages of a surgical procedure involves associating a name with each identified stage. In various embodiments, the name associated with a stage may include a name of a biological structure and a name of a surgical instrument interacting with the structure.

In various embodiments, the name associated with the identified stage may be updated, modified, qualified, or otherwise altered during an ongoing surgical stage or after completion of the surgical stage. For example, the machine learning model may initially determine the name of the surgical stage as "incision" and later, based on the detected interaction between the medical instrument and the biological structure, may update the name of the surgical stage to an exemplary name of "a Lanz incision made via laparoscopic surgery using laparoscopic scissors extending centrally towards the rectus abdominis". Additionally or alternatively, a separate record (also referred to herein as an annotation) may be added to the name identifying the surgical stage, with the annotation containing various details and/or features of the surgical stage. These details may include: an instrument used during the surgical stage, a light used during the surgical stage, a pressure value of pressure exerted on an example biological structure, an area where the pressure is exerted, one or more images of the biological structure and/or medical instrument during the surgical stage, an identification of an event (e.g., an adverse event such as bleeding), or any other relevant information characterizing the surgical stage.

Aspects of the present disclosure may also involve transmitting data to a healthcare provider, the transmitted data including a patient identifier, a name of an identified stage of a surgical procedure, and a time stamp associated with the identified stage.

Embodiments may include: at least the beginning of each identified phase is determined and a time stamp is associated with the beginning of each identified phase, as discussed above. Additionally or alternatively, the time stamp may identify the end of the identified phase, as discussed above. The transmitted data may include: text, graphics, video data, animation, audio data, and the like. In some cases, the data sent may be SMS messages, emails, etc. delivered to any suitable device (smartphone, laptop, desktop, TV, etc.) owned by various healthcare providers (e.g., various medical personnel, administrators, and other interested individuals or systems). In some cases, the transmitted data may also be provided to the patient, the patient's relatives, or friends.

Further, aspects of the present disclosure may include populating the post-operative report with the transmitted data in a manner that enables the healthcare provider to change the phase names in the post-operative report. Such changes may be made through an interface that enables changes to the post-operative report. For example, the interface may allow a healthcare provider to update a phase name by typing a new phase name using a keyboard. In various embodiments, the interface may also be configured to alter the names of various events identified in the surgical footage and recorded in the post-operative report.

The disclosed systems and methods may involve analyzing a surgical short to identify events during a surgical procedure, comparing the events to a proposed sequence of events, and determining whether any events from the proposed sequence of events were not performed during the surgical procedure. It may be desirable to identify skipped surgical events during or after surgery. The events may be compared to a sequence of suggested events, and when some events are not performed during the surgical procedure, as determined by the comparison to the sequence of suggested events, a notification may be provided to indicate which event has been skipped. Thus, there is a need to analyze surgical clips and identify events skipped during surgery.

Aspects of the present disclosure may be directed to enabling determination and notification of skipped events in surgical procedures, including related methods, systems, devices, and computer-readable media.

For ease of discussion, a method is described below, wherein an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic instrument, but can be accomplished using many different instruments.

Disclosed embodiments enabling determination and notification of skipped events may involve accessing video frames taken during a particular surgical procedure. As used herein, frames of video may include continuous or discontinuous images captured by an image capture device. Such images may be captured, for example, by cameras 115, 121, 123, and/or 125, as described above in connection with fig. 1. In some cases, frames of video may have a corresponding audio signal forming a soundtrack of the video, where the audio signal is captured by an audio capture device (e.g., microphone D111 as shown in fig. 1). The video frames may be stored as separate files or may be stored in a combined format, such as a video file, which may include corresponding audio data. In some embodiments, the video may be stored as raw data and/or images output from an image capture device. In other embodiments, the video frames may be processed. For example, a video file may be in Audio Video Interleave (AVI), Flash Video Format (FLV), QuickTime File Format (MOV), MPEG (MPG, MP4, M4P, etc.), Windows Media Video (WMV), or Material Exchange Format (MXF) format, or may be an uncompressed video file, a losslessly compressed video file, a lossy compressed video file, or any other suitable video file format.

As used herein, a particular surgical procedure may include any medical action, operation, diagnosis, or other medically-related treatment or action. Such procedures may include cutting, resecting, suturing, or other techniques involving physical alteration of body tissues and organs. Some examples of such surgical procedures may include: laparoscopic surgery, thoracoscopic surgery, bronchoscopic surgery, microscopic surgery, open surgery, robotic surgery, appendectomy, carotid endarterectomy, carpal tunnel release, cataract surgery, cesarean section, cholecystectomy, colectomy (such as partial colectomy, total colectomy, etc.), coronary artery bypass, debridement (e.g., of a wound, burn, infection, etc.), free skin graft, hemorrhoidectomy, hip replacement, hysteroscopy, inguinal hernia repair, sleeve gastrectomy, abdominal hernia repair, knee arthroscopy, knee replacement, mastectomy (such as partial mastectomy, total mastectomy, modified radical mastectomy, etc.), prostatectomy, prostate removal, shoulder arthroscopy, spinal surgery (such as spinal fusion, laminectomy, foraminotomy, discectomy, disc replacement, interbody implant, etc.), tonsillectomy, cochlear implant surgery, resection of a brain tumor (e.g., meningioma, etc.), interventional procedures such as percutaneous transluminal coronary angioplasty, transcatheter aortic valve replacement, minimally invasive surgery to clear cerebral hemorrhage, thoracoscopy, bronchoscopy, hernia repair, hysterectomy (e.g., simple hysterectomy or radical hysterectomy), radical prostatectomy, partial nephrectomy, thyroidectomy, partial colectomy, or any other medical procedure involving some form of incision, diagnosis, treatment, or tissue alteration, or relating, for example, to treatment, diagnosis, drug administration, resection, repair, transplantation, reconstruction, or improvement.

The deviation between the particular surgical procedure and the proposed sequence of events may be specific to the surgical procedure, as each type of surgical procedure may involve one or more events in its own proposed sequence of events. When one such suggested sequence is not followed, a deviation can be said to have occurred, and a notification can be provided (e.g., as described below). In some gallbladder surgeries (such as laparoscopic or robotic cholecystectomy), for example, the deviation may include neglecting to clear the adipose and fibrous tissue from the hepatocystic triangle, neglecting to separate the lower part of the gallbladder from the liver to expose the cystic plate, or failing to identify the cystic duct and cystic artery entering the gallbladder. As another example, in some appendix surgical procedures (such as laparoscopic or robotic appendectomy), the deviation may include neglecting to dissect the appendix free from surrounding adhesions, or failing to circumferentially identify the base of the appendix. In some hernia surgeries (such as laparoscopic hernia repair), the deviation may include neglecting to reduce the hernia contents, neglecting to encircle the fascia before anchoring the mesh, neglecting to isolate the fascia surrounding the hernia, or neglecting to identify and/or isolate components of the inguinal canal, etc. Examples of such inguinal canal components may include the testicular artery, the pampiniform venous plexus, nerves, blood vessels, etc. In some uterine surgical procedures, such as laparoscopic simple hysterectomy, the deviation may include neglecting to identify and/or ligate the uterine artery, neglecting to identify the ureter, and the like. In some other uterine surgical procedures, such as robotic radical hysterectomy, the deviation may include neglecting to identify the iliac vessels, neglecting to identify the obturator nerve, and the like. In some prostate surgical procedures (such as robotic radical prostatectomy), the deviation may include neglecting to identify the bladder neck at the anterior bladder wall, neglecting to identify the bladder neck at the posterior bladder wall, neglecting to identify the ureteral orifices, and/or neglecting to identify other anatomical structures. In procedures involving the kidney, such as laparoscopic or robotic partial nephrectomy, the deviation may include neglecting to identify the renal hilum and, within the hilum, neglecting to identify at least one of an artery, a vein, and the collecting system including the ureter. In thyroid surgery (such as open or robotic thyroidectomy), the deviation may include neglecting to identify the recurrent laryngeal nerve. In colonic surgery (such as colectomy or segmental colectomy, whether open, laparoscopic, or robotic), the deviation may include neglecting to dissect the colon from the retroperitoneum, neglecting to dissect the colon from the liver, neglecting to dissect the colon from the splenic flexure, neglecting to visualize a non-adherent and/or tension-free colon, neglecting to perform an anastomosis, or neglecting to visualize a tension-free and/or well-perfused and/or technically well-sealed anastomosis. The foregoing are just a few examples. More broadly, any divergence from an expected or recognized course of action can be considered a deviation.

The surgical procedure may be performed in an operating room or any other suitable location. The operating room may be a facility (e.g., a room within a hospital) in which the surgical procedure may be performed in a sterile environment. The operating room may be configured to be brightly lit and have overhead surgical lights. Operating rooms may feature controlled temperature and humidity, and may be windowless. In an exemplary embodiment, the operating room may include an air handler that filters the air and maintains a slightly higher pressure within the operating room to prevent contamination. The operating room may include a power backup system in the event of a power outage, and may include a supply of oxygen and anesthetic gases. The room may include storage space for common surgical supplies, containers for disposables, anesthesia carts, operating tables, cameras, monitors, and other items used for surgery. Dedicated scrub areas used by surgeons, anesthesiologists, operation department doctors (ODPs), and nurses prior to surgery may be part of the operating room. In addition, a map included in the operating room may enable the end cleaning personnel to rearrange the operating tables and equipment into a desired layout during cleaning. In various embodiments, one or more operating rooms may be part of a surgical suite, which may form different sections within a healthcare facility. The surgical suite may include one or more washrooms, preparation and recovery rooms, storage and cleaning facilities, offices, dedicated corridors, and possibly other support units. In various embodiments, the surgical suite may be climate and/or air controlled and separated from other departments.

Accessing video frames of video taken during a particular surgical procedure may include: the frame is received from an image sensor (or multiple image sensors) located in the operating room. The image sensor may be any detector capable of capturing image or video data. A video frame may comprise at least a portion of one of a number of still images that make up a movie (a short film of any duration). Video capture may occur when one or more still images, or portions thereof, are received from the image sensor. Alternatively or additionally, the capturing may be performed when retrieving one or more still images or portions thereof from a memory in the storage location. For example, the video frames may be accessed from a local storage (such as a local hard drive) or may be accessed from a remote source (e.g., over a network connection). In an example embodiment, the video frames may be retrieved from a database 1411 as shown in fig. 14. For example, the processor 1412 of the system 1410 may be configured to execute instructions (e.g., instructions implemented as software 1416) to retrieve video frames from the database 1411. Video frames of a particular surgery may be retrieved.
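
By way of a non-limiting illustration, the following sketch accesses video frames from a local file using OpenCV; the file path is a hypothetical placeholder, and frames could equally be retrieved from a remote source or database as described above.

```python
# Illustrative sketch: access frames of video taken during a surgical procedure
# from a local file using OpenCV.
import cv2

capture = cv2.VideoCapture("surgical_procedure_0001.mp4")  # hypothetical file path
frames = []
while True:
    success, frame = capture.read()
    if not success:
        break
    frames.append(frame)  # each frame is a NumPy array (H x W x BGR)
capture.release()
print(f"accessed {len(frames)} frames")
```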

Aspects of embodiments that enable determination and notification of skipped events may further include: stored data identifying a proposed sequence of events for a surgical procedure is accessed. As used herein, an event of a surgical procedure (also referred to as a surgical event) may refer to an action performed as part of a surgical procedure (e.g., an intraoperative surgical event), such as an action performed by a surgeon, surgical technician, nurse, physician assistant, anesthesiologist, physician, or any other health care professional. The intraoperative surgical event can be a planned event, such as an incision, administration of a drug, use of a surgical instrument, resection, ligation, implantation, suturing, stapling, or any other planned event associated with a surgical procedure or stage.

Examples of surgical events for laparoscopic cholecystectomy may include: trocar placement, Calot's triangle dissection, clipping and cutting of the cystic duct and artery, gallbladder dissection, gallbladder packaging, cleaning and coagulation of the liver bed, gallbladder retraction, and the like. In another example, surgical events of cataract surgery may include: povidone-iodine injection, corneal incision, capsulorhexis, phacoemulsification, cortical aspiration, intraocular lens implantation, intraocular lens adjustment, wound closure, and the like. In yet another example, characteristic surgical events of pituitary surgery may include: preparation, nasal incision, nasal retractor installation, tumor access, tumor removal, nasal columella replacement, suturing, nasal compression, and the like. The foregoing are merely a few illustrative examples of intraoperative surgical events and are not intended to limit the embodiments described herein. Some other examples of common surgical events may include: incisions, laparoscope positioning, suturing, and the like.

In some embodiments, the surgical event may include: unplanned events, adverse events, or complications. Some examples of adverse surgical events may include: bleeding, mesenteric emphysema, injury, transition to unplanned open surgery (e.g., abdominal wall incision), incision significantly larger than planned, etc. Some examples of intraoperative complications may include: hypertension, hypotension, bradycardia, hypoxemia, adhesions, hernia, atypical dissection, dural tears, periorgan injury, arterial infarction, and the like. In some cases, the surgical event may include other errors, including: technical errors, communication errors, management errors, judgment errors, situational awareness errors, decision errors, errors relating to medical equipment utilization, and the like. In various embodiments, the event may be brief or may last for a duration of time. For example, a transient event (e.g., an incision) may be determined to occur at a particular time during a surgical procedure, and a prolonged event (e.g., bleeding) may be determined to occur within a certain time interval. In some cases, a prolonged event may include an explicit start event and an explicit end event (e.g., the start of suturing and the end of suturing), with the suturing itself being the prolonged event. In some cases, a prolonged event is also referred to as a stage of the surgical procedure.

In various embodiments, a suggested event may be an event required during a surgical procedure. Alternatively, a suggested event may be an event that is proposed, but not necessarily required, to occur during a surgical procedure. For example, a recommended event during a bronchoscopy may include inserting a bronchoscope through the patient's nose or mouth, down the patient's throat, and into the patient's lungs. A suggested sequence of events may include a sequence of such suggested events. In some cases, a surgical event may comprise a set of sub-events (i.e., more than one sub-event or step). For example, an event of administering general anesthesia to a patient may include several steps, such as a first step of providing a drug to the patient via an IV line to induce unconsciousness, and a second step of administering a suitable gas (e.g., isoflurane or desflurane) to maintain general anesthesia.

In an example embodiment, the suggested event may include administration of a pain relief medication to the patient, placement of the patient in a preferred position, obtaining a biopsy sample from the patient, or any other proposed event that is not strictly required.

The suggested sequence of events may include any suitable established sequence of events used during a surgical procedure. The suggested sequence of events may be established by a healthcare professional (e.g., a surgeon, anesthesiologist, or other healthcare professional) analyzing historical surgical procedures and determining surgical guidelines. An example of a proposed event in such a sequence is examining the base of the appendix circumferentially. In some cases, the suggested sequence of events may be based on the critical view of safety (CVS), as is known in the art. For example, during laparoscopic cholecystectomy, the critical view of safety may be used to identify the cystic duct and the cystic artery in order to minimize damage to the bile duct. In other embodiments, mandatory and suggested event sequences may be determined automatically by applying artificial intelligence to historical surgical video footage.

By way of illustration, in some embodiments the CVS may be used to avoid bile duct injury. The CVS can be used to identify the two tubular structures, i.e., the cystic duct and the cystic artery, that are divided in a cholecystectomy. The CVS mirrors the approach used in open cholecystectomy, in which the two cystic structures are putatively identified, after which the gallbladder is taken off the cystic plate so that it hangs freely, attached only by the two cystic structures. In laparoscopic surgery, complete separation of the gallbladder body from the cystic plate makes clipping of the cystic structures difficult. Thus, for laparoscopy, it may be sufficient to separate the lower portion (about one third) of the gallbladder from the cystic plate. The two other requirements may be clearance of fat and fibrous tissue from the hepatocystic triangle, and confirmation that two and only two structures are attached to the gallbladder. The cystic structures should not be clipped and divided until all three elements of the CVS have been obtained. Intraoperatively, the CVS should be confirmed in a "time-out" in which the three elements of the CVS are demonstrated. It should be noted that the CVS is not a method of dissection, but a conceptual method of target identification similar to that used in safe hunting practices.

The proposed sequence of events may include conditional terms. As an illustrative example, a proposed sequence of events for bypass surgery may include: (1) administering general anesthesia to the patient, (2) preparing an artery to be used as a bypass graft, (3) making an incision in the center of the patient's chest, through the breastbone (sternum), to reach the patient's heart and coronary arteries, (4) connecting a heart-lung extracorporeal circulation machine, (5) while the patient's heart is beating, suturing a section of the graft artery around an opening made below the blockage in the affected coronary artery, (6) checking whether the patient's heart is continuously pumping blood, (7) if the patient's heart stops beating, activating the heart-lung extracorporeal circulation machine, (8) attaching the other end of the graft to an opening made in the aorta, and so on. As described above, the event of activating the heart-lung extracorporeal circulation machine may be part of a suggested sequence of events, and may be triggered by any suitable surgical event (or lack thereof), such as a cardiac arrest. In some cases, the suggested sequence of events may include a decision tree for determining the next event in the sequence of events. In some examples, the suggested sequence of events may include events that need to occur within particular time intervals specified in the suggested sequence of events. For example, it may be desirable for an event to occur within a particular time interval of a surgical procedure, within a particular time interval after a surgical procedure is initiated, within a particular time interval before a surgical procedure is completed, within a particular time interval after a second event of a surgical procedure occurs (e.g., after the second event is completed, after the second event is initiated, etc.), within a particular time interval before a second event of a surgical procedure occurs, and so forth.

Accessing stored data identifying a proposed sequence of events for a surgical procedure may include retrieving the stored data from a suitable storage location (e.g., a data storage device such as a memory, hard disk, database, server, etc.). In an example embodiment, the stored data may be retrieved from a database 1411 as shown in FIG. 14. For example, the processor 1412 of the system 1410 may be configured to execute instructions (e.g., instructions implemented as software 1416) to retrieve stored data from the database 1411. The stored data for a particular surgical procedure may be retrieved. In some examples, identifying the suggested sequence of events may include selecting a suggested sequence of events from a plurality of alternative sequences. For example, the suggested sequence of events may be selected based on the type of surgical procedure, the medical instruments used or planned for the surgical procedure, the condition of the anatomy associated with the surgical procedure, features of the patient associated with the surgical procedure (some examples of such features are described above), features of the surgeon or healthcare professional associated with the surgical procedure (some examples of such features are described above), features of the operating room associated with the surgical procedure, and so forth. In some examples, the suggested sequence of events may be selected (or modified) during the surgical procedure according to one or more events that have occurred. For example, the occurrence of a particular event in a surgical procedure may indicate the type of surgical procedure (e.g., the location and/or length of an incision may indicate whether the surgical procedure is open or laparoscopic) or the technique selected by the surgeon for the particular surgical procedure (e.g., the use of a particular medical instrument may indicate a particular technique), and a corresponding suggested sequence of events may be selected. Thus, in response to a first event occurring in a particular ongoing surgical procedure, a first suggested sequence of events may be selected for the remainder of the particular ongoing surgical procedure, and in response to a second event occurring in the particular ongoing surgical procedure, a second suggested sequence of events, which may be different from the first suggested sequence of events, may be selected for the remainder of the particular ongoing surgical procedure. In some examples, image data taken from a particular ongoing surgical procedure may be analyzed to select a suggested sequence of events for the remainder of the particular ongoing surgical procedure. For example, the image data may be analyzed to detect events and/or conditions in the particular ongoing surgical procedure (e.g., as described above), and the suggested sequence of events may be selected based on the detected events and/or conditions. In another example, a machine learning model may be trained using training examples to select a suggested sequence of events based on images and/or video, and the trained machine learning model may be used to analyze the image data and select the suggested sequence of events for the remainder of a particular ongoing surgical procedure. Examples of such training examples may include images and/or videos depicting a first portion of a surgical procedure, and markers indicating a desired selection of a suggested sequence of events for the remaining portion of the surgical procedure.

Fig. 26 schematically illustrates an example suggested sequence of events 2601. For example, event E1 (e.g., connecting the heart-lung extracorporeal circulation machine) may be the first event in the suggested sequence. Event E1 may need to occur during time interval T1A to T1B of the surgical procedure. Event E2 (e.g., suturing) may be a second event and may need to occur during time interval T2A to T2B of the surgical procedure (or, in other examples, during time interval T2A to T2B after event E1 is completed, during time interval T2A to T2B after event E1 is initiated, etc.). After event E2 is complete, the conditional statement C1 may be evaluated (e.g., determining the pulse of the patient's heart). If the conditional statement C1 evaluates to a value V1 (e.g., the patient has no pulse), an event E3 (e.g., activation of the heart-lung extracorporeal circulation machine) may be required during time interval T3A to T3B. If statement C1 evaluates to a value V2 (e.g., a pulse of ten beats per minute), event E4 (e.g., administration of a first drug to the patient) may be required during time interval T4A to T4B, while if statement C1 evaluates to a value V3 (e.g., a pulse of one hundred beats per minute), event E5 (e.g., administration of a second drug to the patient) may be required during time interval T5A to T5B.
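
As a non-limiting illustration, the following sketch shows one possible way to encode a suggested sequence of events with required time windows and a conditional branch, loosely following the example of fig. 26; the event names, windows, and pulse thresholds are hypothetical placeholders.

```python
# Illustrative sketch: encode a suggested sequence of events with required time
# windows and a conditional branch evaluated after event E2.
suggested_sequence = [
    {"event": "E1: connect heart-lung bypass machine", "window_minutes": (5, 20)},
    {"event": "E2: suturing", "window_minutes": (25, 60)},
    {
        "condition": "C1: pulse rate (beats per minute)",
        "branches": [
            {"when": lambda bpm: bpm == 0,  "event": "E3: activate bypass machine", "window_minutes": (60, 65)},
            {"when": lambda bpm: bpm < 40,  "event": "E4: administer first drug",   "window_minutes": (60, 70)},
            {"when": lambda bpm: bpm >= 40, "event": "E5: administer second drug",  "window_minutes": (60, 70)},
        ],
    },
]

def next_required_event(measured_bpm):
    # Evaluate the conditional statement C1 and return the required event and window.
    for branch in suggested_sequence[2]["branches"]:
        if branch["when"](measured_bpm):
            return branch["event"], branch["window_minutes"]

print(next_required_event(10))  # ('E4: administer first drug', (60, 70))
```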

Aspects of the method of enabling determination and notification of skipped events may further include: the accessed video frames are compared to the proposed sequence of events to identify an indication of deviation between a particular surgical procedure and the proposed sequence of events for that surgical procedure. In some examples, a machine learning model may be trained using training examples to identify from images and/or videos an indication of a deviation between a surgical procedure and a proposed sequence of events for the surgical procedure, and the trained machine learning model may be used to analyze frames of the video and identify an indication of a deviation between a particular surgical procedure and the proposed sequence of events for the surgical procedure. Examples of such training examples may include a sequence of events and images and/or video depicting a surgical procedure, as well as a flag indicating whether the surgical procedure is deviating from the sequence of events.

In some examples, comparing the accessed video frames to the suggested sequence of events may include analyzing the video frames and identifying events within the video frames, for example, as described above. For example, recognition of events within a video frame may be accomplished using a trained machine learning model, e.g., as described above. In one example, identifying an event may include at least one of: identifying the type of the event, identifying the name of the event, identifying characteristics of the event (some examples of such characteristics are described above), identifying the time of occurrence (or time interval) of the event, and so forth. Further, in some examples, the identified events may be compared to the suggested sequence of events to identify an indication of a deviation between a particular surgical procedure and the suggested sequence of events for that surgical procedure. In some examples, the analysis of the video frames and the identification of events within the video frames may occur while a particular surgical procedure is in progress, and deviations between the particular surgical procedure and the proposed sequence of events for the surgical procedure may be identified while the particular surgical procedure is in progress. In other examples, the analysis of the video frames and the identification of events within the video frames may occur after completion of a particular surgical procedure, and/or deviations between a particular surgical procedure and a suggested sequence of events for that surgical procedure may be identified after completion of the particular surgical procedure.
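As a rough sketch of the comparison described above, the snippet below assumes a per-frame event detector (here a stand-in callable) and shows how detections might be collapsed into an ordered list of identified events and compared, position by position, against the suggested sequence. The function names and the simple first-mismatch rule are assumptions for illustration, not the disclosed method.

```python
from typing import Callable, List, Optional

def identify_events(frames, detector: Callable[[object], Optional[str]]) -> List[str]:
    """Run a (trained) per-frame detector and collapse repeated detections
    into an ordered list of identified events."""
    events: List[str] = []
    for frame in frames:
        name = detector(frame)
        if name and (not events or events[-1] != name):
            events.append(name)
    return events

def first_deviation(identified: List[str], suggested: List[str]) -> Optional[int]:
    """Index of the first position where the identified events diverge from the
    suggested sequence, or None if no divergence has been observed yet."""
    for i, event in enumerate(identified):
        if i >= len(suggested) or event != suggested[i]:
            return i
    return None

if __name__ == "__main__":
    fake_detector = lambda frame: frame            # stand-in for a trained model
    frames = ["incision", "incision", "dissection", "suturing"]
    identified = identify_events(frames, fake_detector)
    print(first_deviation(identified, ["incision", "dissection", "clipping"]))  # -> 2
```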

Detecting characteristic events using machine learning may be one possible approach. Additionally or alternatively, various other methods may be used to detect characteristic events in video frames received from an image sensor. In one embodiment, the characteristic events may be identified by a medical professional (e.g., a surgeon) during the surgical procedure. For example, a characteristic event may be identified using a visual or audio signal from the surgeon (e.g., a gesture, a body posture, a visual signal generated by a light source of a medical instrument, spoken words, etc.) that may be captured by one or more image/audio sensors and identified as a trigger for the characteristic event.

Further, comparing the accessed video frames to the suggested sequence of events may include comparing the sequence of events identified within the video frames to the proposed sequence of events for the surgical procedure. For example, fig. 27 shows a suggested (or mandatory) event sequence 2701 and an event sequence 2702 identified within the video frames. When comparing sequence 2702 to sequence 2701, the deviation of sequence 2702 from sequence 2701 can be determined. Sequence 2702 can deviate from sequence 2701 in a number of ways. In some cases, sequence 2702 may have different events than sequence 2701. For example, as shown in fig. 27, the sequence 2701 may have events E1 through E4, and the sequence 2702 may have events S1 through S5. The sequence 2701 and the sequence 2702 can be compared for each of the intervals I1 to I4 shown in fig. 27. For example, event E1 in sequence 2701 may be compared to event S1 in sequence 2702 for interval I1 of the sequence. In an example embodiment, event S1 may deviate from event E1. Alternatively, event E1 may be substantially the same as event S1. In some cases, event E1 may be substantially different from event S1.

In various embodiments, to quantify the difference between event E1 and event S1, a suitable metric function F(E1, S1) may be defined, which may have a range of values. In an example embodiment, the metric function F may return a single number that quantifies the difference between events E1 and S1. For example, if F(E1, S1) < F0(E1), events E1 and S1 may be determined to be substantially the same, whereas if F(E1, S1) > F1(E1), events E1 and S1 may be determined to be substantially different. Here, the values F0 and F1 may be any suitable predetermined thresholds, which may be selected for each event type (i.e., the thresholds F0(E1) and F1(E1) for event E1 may be different from the thresholds F0(E2) and F1(E2) for event E2). In various instances, events E1 and S1 may be characterized by a set of parameters (also referred to as an event signature). For example, event E1 may be represented by parameters P1(E1) to PN(E1), as shown in fig. 27. Parameters P1(E1) to PN(E1) may include text, numbers, or data that may be represented by an array of numbers (e.g., an image). For example, parameter P1(E1) may be the type of event E1 characterized by a text string (e.g., "cut"), parameter P2(E1) may be a number characterizing the length of the incision (e.g., one centimeter), parameter P3(E1) may be the depth of the incision (e.g., three millimeters), and parameter P4(E1) may be an incision location that may be characterized by two numbers (e.g., {10, 20}). The incision location may be specified by identifying the incision in one or more of the video frames taken during the surgical procedure, and parameter PN(E1) may indicate the type of surgical tool used for the incision (e.g., a "CO2 laser"). Event E1 may have as many parameters as necessary to adequately characterize the event. In addition, event E1 may be characterized by a start time TS(E1) and a completion time TF(E1), where the start time TS(E1) and completion time TF(E1) may be defined to any suitable accuracy (e.g., to milliseconds). TS(E1) and TF(E1) may be represented using any suitable time format (e.g., the format may be hours:minutes:seconds:milliseconds). Similarly, event S1 may be represented by parameters P1(S1) to PN(S1), a start time TS(S1), and a completion time TF(S1), as shown in fig. 27. As an illustrative example, the parameters {P1(E1), P2(E1), P3(E1), P4(E1), PN(E1), TS(E1), TF(E1)} may be represented by any suitable data structure (e.g., {P1(E1), P2(E1), P3(E1), P4(E1), PN(E1), TS(E1), TF(E1)} = {"cut", 1 [cm], 3 [mm], {10, 20}, "CO2 laser", 13:20:54:80, 13:20:59:76}).

In various embodiments, the metric function F(E1, S1) may be defined in any suitable manner. As an example embodiment, when event E1 and event S1 are of the same type (e.g., both events have the type "cut"), the metric function may combine differences between corresponding parameters of E1 and S1: for corresponding numerical parameters (e.g., P2(E1) and P2(S1)), the contribution may be the difference between the values, and for corresponding parameters that are text strings (or data, such as images, that may be represented by arrays of numbers), a function M may be used, where M returns zero if the two parameters have the same meaning and returns one if they have different meanings. For parameters that are images, the function M may return zero when the images are substantially the same, and one when the images are different. In various embodiments, the images may be compared using any suitable image recognition algorithm, described further below. Alternatively, the function M may be configured to perform any suitable comparison algorithm, depending on the type of data represented by the pair of corresponding parameters, where the data may include text strings, numeric arrays, images, video, audio signals, and the like.
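One hedged way to realize the event signatures, the metric F, and the per-event-type thresholds F0 and F1 described above is sketched below. The parameter names, weights, and threshold values are assumptions chosen for illustration; a real implementation could weight parameters differently or use a learned metric.

```python
from dataclasses import dataclass, field
from typing import Dict, Union

Param = Union[str, float, tuple]

@dataclass
class EventSignature:
    event_type: str                      # e.g., "cut"
    params: Dict[str, Param] = field(default_factory=dict)
    start: str = ""                      # e.g., "13:20:54:80"
    finish: str = ""

def m(a: Param, b: Param) -> float:
    """M-function for non-numeric parameters: 0 if same meaning, 1 otherwise."""
    return 0.0 if a == b else 1.0

def metric_f(e: EventSignature, s: EventSignature, large: float = 1e6) -> float:
    """Illustrative metric F(E, S): a large value when the event types differ,
    otherwise a sum of per-parameter differences (numeric -> absolute
    difference, non-numeric -> M-function)."""
    if e.event_type != s.event_type:
        return large
    total = 0.0
    for key in e.params.keys() & s.params.keys():
        a, b = e.params[key], s.params[key]
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            total += abs(a - b)
        else:
            total += m(a, b)
    return total

# Per-event-type thresholds F0 (substantially the same) and F1 (substantially
# different); the values are illustrative assumptions only.
THRESHOLDS = {"cut": (0.5, 2.0)}

e1 = EventSignature("cut", {"length_cm": 1.0, "depth_mm": 3.0, "tool": "CO2 laser"})
s1 = EventSignature("cut", {"length_cm": 1.2, "depth_mm": 3.0, "tool": "CO2 laser"})
f0, f1 = THRESHOLDS["cut"]
print(metric_f(e1, s1) < f0)   # True -> substantially the same
```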

For the cases when events E1 and S1 are not of the same type (e.g., event E1 may correspond to "cut" while event S1 may correspond to "administer drug"), and when sequence 2702 does not contain an event of the same type as event E1, the metric function F(E1, S1) may be evaluated as a large predetermined number (or string) indicating that events E1 and S1 are substantially different.

As described above, the deviation between the event sequences 2701 and 2702 may be determined by evaluating a suitable metric function F(Ei, Si) for each of the intervals I1 through I4 of the surgical procedure. The complete deviation can be calculated as the sum of the metric function over the intervals, Σi F(Ei, Si), where i ranges over {I1 ... I4}. However, in various embodiments, it may not be important and/or necessary to calculate all deviations of all events S1 to S4 from the corresponding events E1 to E4. In various cases, only large deviations (i.e., deviations for which F(Ei, Si) > F1(Ei)) may be important. For such deviations, events Ei and Si may be identified and stored for further analysis. In addition, the metric function value F(Ei, Si) may also be stored for further analysis. In various embodiments, the data related to events Ei and Si and the metric function F(Ei, Si) may be stored using any suitable means (e.g., hard disk, database 111, etc.).
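Continuing the hypothetical sketch above, the total deviation could be accumulated over the intervals and only the large deviations (those exceeding F1 for the event type) retained for further analysis. The function below assumes the metric function and threshold table from the previous sketch are passed in.

```python
def total_deviation(pairs, metric, thresholds):
    """pairs: list of (E_i, S_i) per interval.
    Returns the summed metric and the deviations that exceed F1 for later review."""
    total = 0.0
    large_deviations = []
    for e, s in pairs:
        f = metric(e, s)
        total += f
        _, f1 = thresholds.get(e.event_type, (0.0, float("inf")))
        if f > f1:
            large_deviations.append((e, s, f))   # store for further analysis
    return total, large_deviations
```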

Using a metric function is one possible method of identifying an indication of deviation between a particular surgical procedure and a proposed sequence of events for that surgical procedure. For example, any algorithm for comparing lists and/or graphs may be used to compare the actual sequence of events to the suggested sequence of events and identify an indication of deviation between a particular surgical procedure and the suggested sequence of events for that surgical procedure. Alternatively or additionally, identifying the indication of deviation may be performed using a machine learning model that is trained using training examples to identify an indication of deviation between a sequence of events and surgical footage, for example as described above. In an example embodiment, an illustrative training example may include surgical footage (such as frames of video taken during a particular type of surgery (e.g., cholecystectomy)) and a suggested sequence of events for that type of surgery. The training examples may be used as input to a machine learning training algorithm, and the resulting machine learning model may output a suitable measure of the deviation between a particular surgical procedure and the proposed sequence of events for that surgical procedure. The measure of deviation can be any suitable measure. In an example embodiment, the measure may list or classify events during the surgical procedure that are substantially different from the suggested events. For example, if a suggested event requires suturing, but surgical glue is used instead during surgery, such an event may be listed or classified as substantially different from the suggested event. Additionally or alternatively, the measure may list suggested events that were not performed during the surgical procedure (e.g., if suturing is required but not performed, such an event may be listed as not performed). Also, the measure may list events that were performed during the surgical procedure but were not suggested. For example, administering pain medication to a patient during surgery may be an event that was performed but not suggested. Additionally, the machine learning model may output deviations between features of events performed during the surgical procedure and features of corresponding proposed events, as described above. For example, if, during an incision event in a surgical procedure, the incision length is shorter than the incision described by the proposed event, such a deviation may be identified and recorded (e.g., stored) by machine learning methods for further analysis.

In various embodiments, identifying the indication of the deviation includes comparing the frames to reference frames depicting the proposed sequence of events. The reference frames may be historical frames taken during a historical surgical procedure. In an example embodiment, the video frames and the reference frames depicting the suggested sequence of events may be synchronized according to an event (also referred to herein as a start event), which may be the same as (or substantially similar to) a corresponding start event in the suggested (or mandatory) sequence of events. In some cases, the frame depicting the beginning of the start event may be synchronized with a reference frame depicting the start event in the suggested sequence of events. In some cases, the surgical events may first be matched to corresponding reference events in the proposed sequence using any suitable method described above (e.g., using an image recognition algorithm for identifying the events). After linking an example surgical event with the corresponding reference event in the suggested sequence, the frame depicting the beginning of the surgical event may be synchronized with the reference frame depicting the beginning of the corresponding suggested event.

Additionally or alternatively, the indication of the deviation may be identified based on an elapsed time associated with the intraoperative surgical procedure. For example, if the elapsed time associated with the surgical procedure is significantly longer (or shorter) than the average elapsed time associated with surgical procedures following the suggested sequence of events, the method may be configured to determine that a deviation from the suggested sequence of events has occurred.
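A trivial sketch of the elapsed-time check might look like the following; the 25% relative tolerance is an assumption, since the disclosure only requires that the elapsed time be significantly longer or shorter than the average.

```python
def elapsed_time_deviation(elapsed_minutes: float,
                           average_minutes: float,
                           tolerance: float = 0.25) -> bool:
    """Flag a deviation when the elapsed time differs from the historical
    average by more than a relative tolerance (25% here, an assumption)."""
    return abs(elapsed_minutes - average_minutes) > tolerance * average_minutes

print(elapsed_time_deviation(95, 60))   # True: significantly longer than average
```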

Aspects of the method may further include identifying a set of frames associated with a deviation from the surgical procedure and providing a notification that the deviation has occurred. The notification may include displaying the identified set of frames associated with the deviation. For example, the set of frames associated with the deviation may depict a particular event during the surgical procedure that differs (e.g., has different characteristics) from the corresponding proposed reference event. Alternatively, the set of frames associated with the deviation may include frames of an event that is not present in the suggested sequence of events. In various embodiments, the notification may include displaying the frames as still images or as video data. The frames may be displayed on any suitable screen of an electronic device, or (in some cases) may be printed. In some implementations, some of the frames may be selected from the set of frames and displayed using any suitable means (e.g., using a display screen of an electronic device).

Aspects of the method of enabling determination and notification of skipped events may further include training the machine learning model using training examples to identify deviations between a sequence of events and surgical footage, e.g., as described above. For example, training examples may be used as input to a machine learning model, and the measure of deviation returned by the model may be analyzed (e.g., the measure of deviation may be analyzed by a model training expert, such as a healthcare professional). If the measure of deviation returned by the model does not coincide with the desired measure of deviation, various parameters of the machine learning model may be adjusted so that the model correctly predicts the measure of deviation. For example, if the machine learning model is a neural network, parameters of the neural network (e.g., weights of the network, number of neurons, activation functions, biases of the network, number of layers within the network, etc.) may be adjusted using any suitable method (e.g., a backpropagation process may be used to adjust the weights of the neural network). In various embodiments, such adjustments may be made automatically (e.g., using a backpropagation process), or in some cases, adjustments may be made by a training expert.

In various embodiments, any suitable mathematical metric function G may be used to quantify how well the measure of deviation is consistent with the desired measure of deviation. For example, if the measure of deviation for an event is a number (e.g., d) and the desired measure of deviation is another number (e.g., d0), then, for a given event Ei, a per-event function Gi(d, d0) may be defined, for example, as Gi(d, d0) = d − d0, and the overall metric function G may be, for example, a single number aggregating the per-event values (e.g., a sum of their absolute values). Alternatively, in another example embodiment, G may be a vector of the per-event values.

To further illustrate the process of determining deviation of sequence 2702 from sequence 2701, fig. 27 shows intervals I1 to I4 at which events E1 to E4 of sequence 2701 may be compared to events S1-S5 of sequence 2702. For example, during interval I1, event S1 may be substantially the same as event E1, while during interval I2, event S2 may deviate from event E2, but may be substantially similar to event E2. For example, event S2 may correspond to a "cut" having a cut length of three centimeters, and event E2 may correspond to a "cut" having a cut length of two centimeters. In an example embodiment, event E3 may be substantially different from event S3 during surgical interval I3 (e.g., event E3 may be identified as "incision" and event S3 may be identified as "suture"). During interval I4, event E4 may be substantially different from event S4, but may be substantially the same as event S5 identified during interval I5 (as indicated by arrow 2711 shown in fig. 27). When calculating the deviation of sequence 2702 from sequence 2701, event S4 of sequence 2702 may be identified as an "inserted" event, which has no corresponding event in sequence 2701. Such a representation of the event S4 may be recorded (e.g., stored on a hard disk, the database 111, or some other location) for further analysis.

Aspects of the disclosed embodiments may further include identifying an indication of a deviation between a particular surgical procedure and a proposed sequence of events for the surgical procedure. In some cases, identifying the indication of the deviation may include identifying the indication of the deviation during an ongoing surgical procedure (e.g., in real-time during the surgical procedure). In various embodiments, due to the processing related to identifying the indication of the deviation, the deviation may be identified with a small delay measured from the corresponding event of the ongoing surgical procedure. The delay may be one millisecond, one second, a few seconds, a few tens of seconds, one minute, a few minutes, or the like. Once a deviation is identified, the disclosed embodiments may include providing a notification during the ongoing surgical procedure (e.g., providing the notification as soon as the deviation is identified). For example, providing the notification may occur in real-time during the surgical procedure.

Aspects of the disclosed embodiments may include: an indication that a particular action is to occur in a particular surgical procedure is received. The indication that a particular action is about to occur may be based on an analysis of the frame of the surgical procedure. In an exemplary embodiment, the indication may be received from a computer-based software application, such as a machine learning model for analyzing surgical clips of an ongoing surgical procedure. For example, the machine learning model may be an image recognition algorithm consistent with the disclosed embodiments described herein.

In some embodiments, the image recognition algorithm may identify a surgical tool proximate to the anatomical structure and determine a particular action to occur in the surgical procedure based on the identified surgical tool. In some embodiments, the presence of a surgical tool, an anatomical structure, and/or an interaction between a surgical tool and an anatomical structure may be used as an indicator that a particular action is to occur. As disclosed herein, the image recognition algorithm may analyze the frames of the surgical procedure to identify any of the foregoing. For example, the image recognition algorithm may determine the type of interaction between the instrument and the anatomical structure, the name of the interaction, the name of the anatomical structure involved in the interaction, or any other identifiable aspect of the interaction.

Additionally or alternatively, the location of healthcare professionals in the operating room, movement of any of the healthcare professionals, hand movements of any of the healthcare professionals, the position and/or location of the patient, placement of a medical device, and other spatial features of the healthcare professionals, patient, or instruments may further indicate that a particular action is about to occur. In some cases, the indication that a particular action is about to occur may be based on input from a surgeon performing the particular surgical procedure. For example, audio sounds, gestures from any of the healthcare professionals, or any other signal recognizable within surgical footage, audio data, image data, or device-based data (e.g., data related to a vital sign of the patient) may be used as an indication that a particular action is about to occur.

Disclosed embodiments may include identifying a preliminary action for the particular action using the suggested sequence of events. For example, for a particular action, such as suturing, the preliminary action may be clamping a portion of the anatomy with forceps, administering a drug to the patient, repositioning an image sensor within the operating room, measuring a vital sign, connecting a medical device to the patient (e.g., connecting an ECMO machine to the patient), or any other operation that needs to be performed before the particular action is performed.

The disclosed embodiments may further include: determining, based on an analysis of the accessed frames, that the identified preliminary action has not occurred; and in response, identifying an indication of the deviation. In one example, determining that the identified preliminary action has not occurred may be accomplished using image recognition, as previously discussed. For example, image recognition may recognize that a preliminary action has not occurred by: determining that the surgical instrument has not appeared in the surgical footage, or that there is no interaction between the surgical instrument and the anatomical structure (as identified by analyzing the surgical footage); or determining that there is no change in the anatomical structure (e.g., determining that there is no change in the shape, color, size, or position of the anatomical structure). Additionally or alternatively, the image recognition may determine that the preliminary action is absent in other ways (e.g., by determining that a healthcare professional is not yet in proximity to the patient, or by determining that the ECMO machine is not yet connected to the patient), or by using any other indication that may be recognized in the surgical footage. In an example embodiment, the indication of a deviation between the particular surgical procedure and the suggested sequence of events may be the absence of the preliminary action. Alternatively, if a preliminary action is identified, one or more characteristics of the preliminary action may provide an indication of a deviation. For example, when the preliminary action is an incision, the length of the incision may be a characteristic of the preliminary action. For example, if the expected incision length is in the range of 10 cm to 20 cm, and the length is identified as 3 cm, such a characteristic of the preliminary action may indicate a deviation.

Aspects of the disclosed embodiments may include providing notification of a deviation between the particular surgical procedure and the suggested sequence of events prior to performing the particular action. The notification may be any suitable electronic notification as described herein and consistent with the disclosed embodiments. Alternatively, the notification may be any suitable audible, visual, or other signal (e.g., a tactile signal such as a vibration) that may be sent to a healthcare professional (e.g., a surgeon performing the surgical procedure).

Aspects of the disclosed embodiments may include providing a notification post-operatively (i.e., after completion of the surgical procedure). For example, the deviation may be identified during or after the surgical procedure, and a notification may be provided after evaluating the deviation using any of the methods described above (or any combination thereof). Additionally or alternatively, deviations between a particular surgical procedure and a proposed sequence of events for that surgical procedure may be analyzed and/or evaluated by a healthcare professional.

Aspects of the disclosed embodiments may include determining the name of the intraoperative surgical event associated with the deviation. For example, when a deviation between a particular surgical procedure and a proposed sequence of events is identified, the name and/or type of the event responsible for the deviation may be identified. For example, when a deviation is identified between the events of the sequence 2702 and the events of the suggested sequence 2701 (e.g., when event E3 is substantially different from event S3), the name and/or type of event S3 may be determined (e.g., the name may be "suture"). Additionally, the name and/or type of event E3 may be determined. In an example embodiment, as described above, the name of the event S3 may be identified using a machine learning image recognition model.

In various embodiments, the name of the intraoperative surgical event associated with the deviation may be the name of the preliminary action prior to the particular action identified in the surgical event. Alternatively, the name of the intraoperative surgical event associated with the deviation may be the name of the particular action. In some cases, the name of the intraoperative surgical event may be a text string containing multiple names of events or actions that result in the deviation. In some cases, punctuation (or any other suitable means, such as characters, paragraph marks, or new lines) may be used to separate the different names within the text string. For example, the name of the intraoperative surgical event associated with a deviation may be "clamp artery with forceps; apply laser beam; suture artery".

In some implementations, determining the name includes accessing a data structure that associates names with video clip features. The data structure may be any suitable data structure, such as structure 1701 shown in FIG. 17A. For example, determining a name may include accessing surgical footage (referred to herein as a video clip) and determining a video clip feature (such as an event, an action, or an event characteristic), as described in and consistent with various embodiments of the present disclosure.

In various embodiments, when the name of the intraoperative surgical event associated with the deviation is determined, a notification of the deviation may be provided, the notification including the name of the intraoperative surgical event associated with the deviation. In an example embodiment, notifications may be provided to various users (e.g., medical personnel, administrators, etc.). In some cases, a notification may be provided to the patient, the patient's relatives or friends, and the like. The notification may include text data, graphics data, or any other suitable data (e.g., video data, animation, or audio data). Additionally or alternatively, the notification may be implemented as a warning signal (e.g., a light signal, an audio signal, etc.). In some cases, the notification may be an SMS message, an email, or the like, delivered to any suitable device (e.g., smartphone, laptop, desktop, monitor, pager, TV, etc.) owned by a user authorized to receive the notification (e.g., medical personnel, administrators, the patient, or relatives or friends of the patient).

Aspects of the disclosed embodiments may include receiving input indicating an action to be performed by the healthcare professional. Such input may enable a notification of the deviation (e.g., of steps that were skipped but are required according to the suggested sequence of events) to be provided before the surgeon takes the action. In some cases, such input from the surgeon or from another healthcare professional may include a press of a button, an audible input, a gesture, or any other suitable input indicating a particular action to be performed by the surgeon, as discussed above.

The action (to be performed by the medical professional) may be any surgery-related action. For example, the action may include suturing, incision, dissection, aspiration, placement of a camera adjacent to or within the patient's body, or any other action that may occur during surgery. In some cases, the action may include administering a medication to the patient or measuring patient vital signs such as pulse, blood pressure, oxygen levels, and the like.

In various instances, receiving input may include receiving input from a healthcare professional. For example, the surgeon may provide input via visual or audio signals (e.g., using gestures, body gestures, visual signals generated by light sources produced by the medical instrument, spoken words, etc.) that may be captured by one or more image/audio sensors and recognized as input indicating that the healthcare professional is to perform an action. In some cases, a healthcare professional may press a button or use any other device (e.g., smartphone, laptop, etc.) to provide input.

In some cases, the input may indicate what type of action is to be performed. For example, the surgeon may announce the name of the action to be performed and may capture an audio signal from the surgeon using a microphone. In an example embodiment, a speech recognition model may be used to recognize one or more words announced by the surgeon.

In some cases, receiving input indicating that a healthcare professional is to perform an action may include receiving input from a user that is not a healthcare professional. For example, input may be received from a person observing the surgical procedure.

Additionally or alternatively, input may be received from a machine learning algorithm trained to identify various surgical events that result in possible future actions during a surgical procedure. For example, a machine learning algorithm may be configured to identify that an incision is to be performed based on a particular surgical event (such as a surgeon holding and/or moving a scalpel near an anatomical structure).

In various embodiments, the indication that a particular action is about to occur may be the entry of a particular medical instrument into a selected region of interest (ROI). Such an indication may be determined, for example, using an object detection algorithm to detect the presence of the particular medical instrument in the selected ROI. In various embodiments, the presence of a surgical tool in the vicinity of a given ROI at a given time (or time interval) of the surgical procedure may be used (e.g., by a machine learning model) to identify that a particular action is about to be taken. The presence of a surgical tool near the ROI may indicate different actions to be taken at different times during the surgical procedure. In some cases, the method may include the steps of: providing a notification when a given surgical tool is present near the ROI, and forgoing providing the notification when the surgical tool is not present in the ROI. As described above, the notification may be any suitable notification provided to a healthcare professional, a healthcare administrator, or any other person authorized to receive such information.

In various embodiments, identifying the entry of a particular medical instrument into a selected region of interest (ROI) may be accomplished using any suitable method, such as image recognition applied to frames of the surgical procedure, as described herein and consistent with the disclosed embodiments. In some cases, the ROI may be selected based on the location of an anatomical structure. Alternatively, if a second medical instrument is used during the surgical procedure, the ROI may be selected based on the position of the second medical instrument. Additionally or alternatively, the ROI may be selected based on the field of view of an image sensor. For example, the field of view of a particular image sensor (e.g., a sensor displaying a magnified portion of an anatomical structure) may be used to select the ROI.
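A minimal geometric sketch of detecting entry of an instrument into a selected ROI is shown below, assuming an upstream object detector has already produced a bounding box for the instrument. The coordinates, box format, and function names are illustrative assumptions, not the disclosed method.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in pixels

def boxes_overlap(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box overlap test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def instrument_entered_roi(tool_box: Box, roi: Box) -> bool:
    """True when a detected medical instrument's bounding box overlaps the
    selected region of interest (ROI)."""
    return boxes_overlap(tool_box, roi)

# ROI selected, e.g., around a detected anatomical structure or from the field
# of view of a particular image sensor (coordinates are illustrative).
roi = (200.0, 150.0, 400.0, 350.0)
detected_scalpel = (380.0, 300.0, 420.0, 340.0)
if instrument_entered_roi(detected_scalpel, roi):
    print("notification: instrument entered ROI")   # otherwise forgo the notification
```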

In various embodiments, based on input indicating that a healthcare professional is to perform an action, the method may include the step of accessing a stored data structure identifying a suggested sequence of events. The stored data structure may be any suitable data structure, such as an array, associative array, linked list, binary tree, balanced tree, heap, stack, queue, collection, hash table, record, tagged union, XML code, XML database, RDBMS database, SQL database, and the like. The data structure may include a suggested sequence of events. For example, the data structure may list the names of events in a table on an event-by-event basis. Alternatively, events may be organized and linked via a linked list. In various embodiments, the data structure may be any suitable data structure configured to identify suggested events and to order the events to form a sequence.

Aspects of the disclosed embodiments may further include: the presence of a surgical tool in a predetermined anatomical region is detected. As used herein, a surgical tool may be any instrument or device that may be used during a surgical procedure, which may include, but is not limited to, cutting instruments (such as scalpels, scissors, saws, etc.), grasping and/or clamping instruments (such as Billroth clamps, "mosquito" hemostats, non-invasive hemostats, Deschamp needles, hopner hemostats, etc.), retractors (such as Farabef C laminar flow hooks, blunt tooth hooks, tine hooks, grooved probes, compacting forceps, etc.), tissue unifying instruments and/or materials (such as needle holders, surgical needles, staplers, clips, tapes, meshes, etc.), protective devices (such as facial and/or respiratory protection devices, headgear, shoe covers, gloves, etc.), laparoscopes, endoscopes, patient monitoring devices, etc. A surgical tool (also referred to as a medical tool or medical instrument) may include any equipment or device used as part of a medical procedure.

The anatomical region may be any region of the anatomy comprising a living organism. For example, an anatomical region may include a cavity (e.g., a surgical cavity), an organ, tissue, a tube, an artery, a cell, or any other anatomical portion. In some cases, a prosthesis, artificial organ, etc. may be considered an anatomical structure and appear within an anatomical region. In one example, a machine learning model may be trained using training examples to identify anatomical regions in images and/or videos, and may be used to analyze various captured frames of a surgical procedure and detect anatomical regions. Examples of such training examples may include images and/or videos, and markers indicating anatomical regions within the images and/or within the videos.

Any suitable means may be used to detect the presence of the surgical tool in the predetermined anatomical region. In an example embodiment, the trained machine learning model may be used to analyze various captured frames of a surgical procedure to detect the presence of a surgical tool in a predetermined anatomical region. The trained machine learning model may be an image recognition model for recognizing image features, such as surgical tools in a predetermined anatomical region. In various embodiments, based on the presence of a surgical tool in the predetermined anatomical region, the method may include the steps of: a stored data structure identifying the suggested sequence of events is accessed, as discussed above.

Aspects of the preferred embodiments may further include: an indication of a deviation between a particular surgical procedure and a proposed sequence of events for the surgical procedure is identified by determining that a surgical tool is present in a particular anatomical region. For example, some embodiments may determine that a deviation has occurred if it is determined (e.g., using a machine learning method, or using instructions from a healthcare professional) that a surgical tool is present in a particular anatomical region. In some cases, some embodiments may determine that a deviation has occurred if a surgical tool is present in a particular anatomical region during a time (or time interval) of the surgical procedure at which the surgical tool should not be present. Alternatively, in some cases, identifying an indication of deviation may include: determining that the surgical tool is not in the particular anatomical region. For example, some embodiments may be configured to determine that a deviation has occurred if a surgical tool is not present in a particular anatomical region during a certain time (or time interval) of a surgical procedure.

Additionally or alternatively, identifying an indication of deviation may include: interactions between the surgical tool and the anatomical structure are identified. The process of identifying interactions between the surgical tool and the anatomical structure may involve analyzing frames of the surgical procedure to identify interactions, e.g., as described above. For example, at least some of the frames of a surgical procedure may indicate a portion of the surgical procedure that is performing the surgical procedure on the anatomical structure. As discussed above, the interaction may include any action of the surgical tool that may affect the anatomy, and vice versa. For example, the interaction may include contact between the surgical tool and the anatomical structure, action of the surgical tool on the anatomical structure (such as cutting, clamping, grasping, applying pressure, scraping, etc.), a physiological response of the anatomical structure, light emitted by the surgical tool toward the anatomical structure (e.g., the surgical tool may be a laser emitting light toward the anatomical structure), sound emitted toward the anatomical structure, an electromagnetic field generated near the anatomical structure, a current induced in the anatomical structure, or any other identifiable form of interaction.

In some cases, identifying the interaction may include identifying that the surgical tool is approaching the anatomical structure. For example, by analyzing a surgical video clip of the surgical procedure, the image recognition model may be configured to determine the distance between the surgical tool and a point (or set of points) on the surface of, or within, the anatomical structure.

In various embodiments, if interactions between the surgical tool and the anatomical structure during the surgical procedure are identified, and such interactions are not expected for the reference surgical procedure (i.e., the surgical procedure following the suggested sequence of events), embodiments may be configured to determine that a deviation has occurred. Alternatively, if no interaction between the surgical tool and the anatomical structure is identified (e.g., if no interaction exists during the surgical procedure), and that interaction is expected for the reference surgical procedure, the embodiments may be configured to determine that a deviation has occurred. Some embodiments may be configured to determine that there is no substantial deviation between the surgical procedure and the reference surgical procedure if there is (or is not) an interaction between the surgical tool and the anatomical structure in both the surgical procedure and the reference procedure.

FIG. 28 illustrates, via process 2801, aspects of an embodiment for enabling determination and notification of a skipped event in a surgical procedure. At step 2811, process 2801 may include accessing video frames taken during a particular surgical procedure using any suitable means. For example, access may include access via a wired or wireless network, via an input device (e.g., keyboard, mouse, etc.), or via any other device that allows data to be read/written.

At step 2813, the process 2801 may include accessing stored data identifying a proposed sequence of events for the surgical procedure, as described above. At step 2815, the process 2801 may include comparing the accessed frames to the proposed sequence of events to identify an indication of a deviation between the particular surgical procedure and the proposed sequence of events for that surgical procedure. The deviation between a particular surgical procedure and the proposed sequence of events for that surgical procedure may be determined using any suitable method described above (e.g., by calculating the difference between events using a suitable metric function, by using a machine learning model, etc.). At step 2817, the process 2801 may include determining the name of the intraoperative surgical event associated with the deviation using any suitable method described above (e.g., using a machine learning model to identify the intraoperative surgical event). Process 2801 may end at step 2819, at which a notification of the deviation is provided, including the name of the intraoperative surgical event associated with the deviation. As described above, the notification may be any suitable notification (e.g., SMS text, video, images, etc.) and may be communicated to a healthcare professional, administrator, or any other authorized individual.
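For illustration only, steps 2811 through 2819 of process 2801 might be strung together as in the sketch below. The detector and notify callables are stand-ins (assumptions), and the simple position-by-position comparison is only one of the suitable methods described above.

```python
def process_2801(frames, suggested_sequence, detector, notify):
    """Hypothetical end-to-end sketch of steps 2811-2819: access frames,
    access the suggested sequence, identify a deviation, name the associated
    intraoperative event, and provide a notification."""
    identified = []
    for frame in frames:                       # step 2811: accessed video frames
        name = detector(frame)
        if name and (not identified or identified[-1] != name):
            identified.append(name)
    for i, event in enumerate(identified):     # steps 2813/2815: compare to sequence
        expected = suggested_sequence[i] if i < len(suggested_sequence) else None
        if event != expected:
            # step 2817: name the intraoperative surgical event tied to the deviation
            # step 2819: provide the notification (e.g., SMS, on-screen alert)
            notify(f"Deviation detected: expected {expected!r}, observed {event!r}")
            return

# Example usage with a trivial stand-in detector and print as the notifier:
process_2801(["incision", "suturing"], ["incision", "dissection"], lambda f: f, print)
```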

As previously discussed, the present disclosure relates to methods and systems for enabling determination and notification of skipped events in a surgical procedure, and to non-transitory computer-readable media that may contain instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable determination and notification of skipped events in a surgical procedure. The operations may include the various steps of the method for enabling determination and notification of skipped events in a surgical procedure, as described above.

The disclosed systems and methods may involve analyzing current and/or historical surgical clips to identify features of the surgical procedure, patient condition, and other features to predict and improve surgical outcome. Conventional methods of providing decision support for a surgical procedure may not be performed in real-time or may not be able to determine decision-making nodes in a surgical video and develop recommendations to perform specific actions that improve the outcome of the surgical procedure. In such cases, the surgeon may miss critical decision points and/or fail to perform specific actions that may improve the outcome, and the procedure may lead to sub-optimal outcomes for the patient. In contrast, some embodiments of the present disclosure provide unconventional methods of providing decision support for surgical procedures efficiently, and in real-time.

In accordance with the present disclosure, a method of providing decision support for a surgical procedure is disclosed. The surgical procedure may include a procedure performed by one or more surgeons. A surgeon may include any person performing a surgical procedure, including a doctor or other medical professional, any person assisting in the surgical procedure, and/or a surgical robot. The patient may include any person undergoing a surgical procedure. Non-limiting examples of surgical procedures may include: inserting an implant into a patient, cutting, suturing, removing tissue, grafting, cauterizing, removing an organ, inserting an organ, removing a limb or other body part, adding a prosthesis, removing a tumor, performing a biopsy, performing debridement, bridging, and/or any other action for treating or diagnosing a patient. The implant or implant unit may comprise: stents, monitoring units, and/or any other material used within the body to replace missing biological structures, support damaged biological structures, or augment existing biological structures. During surgery, surgical tools, such as laparoscopes, cameras, cutters, needles, drills, and/or any other devices or implants may be used. In addition, during surgery, medications (such as narcotics, intravenous fluids, therapeutic drugs, and/or any other compound or formulation) may be used.

Decision support may include providing recommendations that may guide the surgeon in making decisions. Decision support may include: analyzing video clips of prior similar surgeries, identifying the course of action most likely to lead to a positive outcome, and providing corresponding recommendations to the operating surgeon. More generally, decision support for a surgical procedure may include providing information to a medical professional during the surgical procedure, such as suggestions to take or avoid actions (in the form of information that illuminates the decision). In some embodiments, decision support may include providing a computerized interface for alerting a medical professional to a situation. The interface may include, for example: a display, a speaker, a light, a haptic feedback assembly, and/or any other input and/or feedback structure. In some embodiments, providing surgical decision support may include providing real-time recommendations to the surgeon (i.e., the method of providing decision support for the surgical procedure may be performed in real-time during the surgical procedure). Real-time recommendations may include recommendations provided via an interface in an operating room (e.g., the operating room depicted in fig. 1). The real-time recommendations may be updated during the surgical procedure.

In some embodiments, a method may include the step of receiving a video clip of a surgical procedure performed on a patient by a surgeon in an operating room. The video clips may include video captured by one or more cameras and/or sensors. The video clips may include: continuous video, video clips, video frames, intra-cavity video, and/or any other video footage. The video clips may depict any aspect of the surgical procedure, and may depict any aspect of the patient (internal or external), medical professionals, robots, medical tools, actions, and/or the surgical procedure. In some embodiments, the video clip may include images from at least one of an endoscope or an intra-body camera (e.g., images of intra-cavity video). The endoscope may include: rigid or flexible tubing, lights, optical fibers, lenses, oculars, cameras, communication components (e.g., wired or wireless connections), and/or any other components that facilitate the collection and transmission of images from within a patient. The intrabody camera may include any image sensor used to collect images from within the patient before, during, or after a surgical procedure.

The video clips may be received via a sensor (e.g., an image sensor above the patient, within the patient, or located elsewhere in the operating room), a surgical robot, a camera, a mobile device, an external device using a communication device, a shared memory, and/or any other connected hardware and/or software component capable of capturing and/or transmitting images. The video clips may be received directly from the device via a network and/or via a wired and/or wireless connection. Receiving the video clips may include reading, retrieving, and/or otherwise accessing the video clips from a data storage device, such as a database, disk, memory, remote system, online data storage device, and/or any location or medium that may retain information.

Consistent with the disclosed embodiments, an operating room may include any room configured to perform a procedure, including a room in a hospital, in a clinic, in a temporary clinic (e.g., a room or tent configured for a surgical procedure during a disaster relief or war event), and/or in any other location where a surgical procedure may be performed. Fig. 1 depicts an exemplary operating room.

Consistent with the disclosed embodiments, a method for providing decision support for a surgical procedure may include the steps of: at least one data structure including image-related data characterizing a surgical procedure is accessed. Accessing the data structure may include: the data in the data structure is received directly from the device via a network and/or via a wired and/or wireless connection. Accessing the data structure may include retrieving data in the data structure from a data store, consistent with some disclosed embodiments.

Consistent with this embodiment, the data structure may include: primitive types (such as Boolean, character, floating point, integer, reference, and enumeration types); composite types (such as containers, lists, tuples, multi-graphs, associative arrays, sets, multi-sets, stacks, queues, graphs, trees, heaps); or any form of hash-based structure or graph. Further examples may include relational databases, tabular data, and/or other forms of information organized for retrieval. The data within the data structure may be organized following a data schema that includes data types, key-value pairs, tags, metadata, fields, indexes, and/or other indexing features.

Video and/or image-related data characterizing the surgical procedure may be included within the data structure. Such image-related data may include some or all of the information characterizing the video, the video clip itself, the images, and/or a pre-processed version of the video and/or image data. In another example, such video and/or image-related data may include information based on analysis of the video and/or images. In yet another example, such video and/or image-related data may include information and/or one or more rules for analyzing the image data. Fig. 17A illustrates an example of a data structure.

Consistent with the disclosed embodiments, image-related data characterizing a surgical procedure may include: data related to event characteristics, event locations, outcomes, deviations between the surgical procedure and mandatory event sequences, skill levels, event locations, intraoperative surgical events, intraoperative surgical event characteristics, characteristic events, leakage situations, events within the surgical phase, tags, mandatory event sequences, skipped events, suggested event sequences, anatomical structures, conditions, contact between anatomical structures and medical instruments, interactions, and/or any other information describing or defining aspects of the surgical procedure.

In some embodiments, a method for providing decision support for a surgical procedure may comprise the steps of: the received video clips are analyzed using image-related data to determine the presence of a surgical decision-making node. The surgical decision-making node may include a time (e.g., a point in time or period) in the surgical video. For example, it may relate to an event or situation that creates an opportunity to pursue an alternative course of action. For example, the decision-making node may reflect the time at which the surgeon may take one or more actions to change the outcome of the procedure, follow the procedure, change to a different procedure, deviate from the procedure, and/or change any other method.

Analyzing the received video clips may include performing an image analysis method on one or more frames of a received video clip, consistent with the disclosed embodiments. Analyzing the received video clips may include, for example: object recognition methods, image classification methods, homography methods, pose estimation methods, motion detection methods, and/or other video analysis methods, for example, as described above. Analyzing the received video clips may include using a trained machine learning model and/or training and/or implementing a machine learning model, consistent with the disclosed embodiments. For example, the received video clips may be analyzed using a machine learning model that is trained using training examples to detect and/or identify surgical decision-making nodes from images and/or videos. For example, the received video clips may be analyzed using an artificial neural network configured to detect and/or identify surgical decision-making nodes from the images and/or video. In some embodiments, the received video clips may be compared to image-related data to determine the presence of a surgical decision-making node. This may occur, for example, through video analysis, or may occur in real-time. In one example, the image-related data may include one or more rules for analyzing the image data (such as a trained machine learning model, an artificial neural network, etc.), and the one or more rules may be used to analyze the received video clips to determine the presence of a surgical decision-making node. In one example, a Markov model may be utilized, based on analysis of frames from a received video clip, to determine the presence of a surgical decision-making node. In other examples, an artificial neural network (such as a recurrent neural network or a long short-term memory neural network) may be used to analyze the received video clips and determine the presence of surgical decision-making nodes.
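One hedged way to operationalize this analysis is to slide a window over the received video and apply a trained scoring model to each window, as sketched below. The scoring callable, window size, and threshold are assumptions for illustration; the disclosure leaves the choice of model (e.g., recurrent network, LSTM, Markov model) open.

```python
from typing import Callable, List, Sequence

def find_decision_nodes(frames: Sequence,
                        score_window: Callable[[Sequence], float],
                        window: int = 30,
                        threshold: float = 0.8) -> List[int]:
    """Slide a window over the video and record frame indices where a (trained)
    scoring model judges that a decision-making node is present. The scoring
    model, window size, and threshold are illustrative assumptions."""
    nodes = []
    for start in range(0, max(len(frames) - window + 1, 1)):
        if score_window(frames[start:start + window]) >= threshold:
            nodes.append(start)
    return nodes
```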

For example, a decision-making node may appear when: when improper access or exposure is detected, retraction of anatomical structures, misunderstanding of anatomical structures or fluid leaks, and/or any other surgical event that creates an opportunity to pursue an alternative course of action. Improper access or exposure may include opening and/or cutting the wrong tissue, organ, and/or other anatomical feature. Retraction may involve movement, distraction, and/or counter-distraction of the tissue to expose the tissue, or organs, and/or other anatomical structures for viewing by the surgeon. The misinterpretation of an anatomical structure or fluid leak may include a misclassification (e.g., a classification of the wrong structure or fluid type) and/or an incorrect estimation of the source and/or severity of the fluid leak. More generally, the misunderstanding may include any incorrect conclusions drawn by the system or a person during the surgical procedure.

In some embodiments, the decision-making node may be determined by analysis of a plurality of different historical procedures, wherein different courses of action occur after a common surgical situation. For example, a plurality of different historical procedures may be included in the historical video clips and/or the received video clips. The historical procedures may depict one or more surgical procedures, one or more patients, one or more conditions, one or more outcomes, and/or one or more surgeons. In some embodiments, the different courses of action may include different actions during a surgical procedure, as described herein. The different courses of action may include different actions (e.g., the action of suturing a tear and the action of stapling a tear may be considered different actions). Different courses of action may include different methods of performing the same action (e.g., applying one contact force and applying another contact force may be different methods of performing the same action). Different courses of action may include the use of different medical tools. A common surgical situation may refer to a situation that includes a class of surgical procedures (such as cholecystectomy), a surgical event (e.g., an incision, a fluid leak event, etc.), and/or any other aspect of a surgical procedure that may be common to a plurality of historical surgical procedures.

In some embodiments, determining the presence of a decision-making node may be based on a detected physiological response of the anatomical structure and/or motion associated with the surgical tool. The physiological response may include movement of anatomical structures, leakage, and/or any other physiological activity. The physiological response may include changes in heart rate, respiration rate, blood pressure, body temperature, blood flow, and/or any other biological parameter or state of health. Other non-limiting examples of possible physiological responses are described above. The motion associated with the surgical tool includes any movement (e.g., translation and/or rotation) of the surgical tool. The surgical tool may comprise any surgical tool, as disclosed herein. Detecting the physiological response and/or the motion associated with the surgical tool may include performing an image analysis method, as also described herein.

In some embodiments, a method of providing decision support for a surgical procedure may comprise the steps of: accessing, in at least one data structure, correlations between results and specific actions taken at the decision-making node. Accessing the correlations may include: determining the existence of a correlation, reading a correlation from memory, and/or determining in any other way that a correlation exists between a particular action and a result. In some implementations, the correlations can be accessed in a data structure based on an index that includes at least one of a tag, label, name, or other identifier of a particular action, decision-making node, and/or result. In some implementations, accessing the correlations can include determining (e.g., generating, looking up, or identifying) the correlations using algorithms such as models, formulas, and/or any other logical methods. Consistent with the disclosed embodiments, a correlation may indicate a probability (e.g., likelihood) of a desired outcome (e.g., a positive outcome) and/or an undesired outcome (e.g., a negative outcome) associated with a particular action. The correlations may include: correlation coefficients, goodness-of-fit measures, regression coefficients, odds ratios, probabilities, and/or any other statistical or logical interrelationships. In one example, one correlation may be used for all decision-making nodes of a particular type, while in another example, multiple correlations may be used for different subsets of the group of all decision-making nodes of a particular type. For example, such subsets may correspond to particular patient groups, particular groups of surgeons (and/or other healthcare professionals), particular groups of surgical procedures, particular groups of operating rooms, particular prior events in a surgical procedure, any union or intersection of such groups, and so forth.
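
As an illustration only, the correlations described above could be held in a structure keyed by decision-making node type and candidate action; the field names and the example values below are assumptions for the sketch, not the disclosed data structure.

```python
from dataclasses import dataclass
from typing import Dict, Tuple, Optional

@dataclass
class OutcomeCorrelation:
    action: str
    positive_outcome_probability: float   # e.g., derived from historical cases
    odds_ratio: float
    sample_size: int

# Index keyed by (decision-node type, candidate action).
CorrelationIndex = Dict[Tuple[str, str], OutcomeCorrelation]

def access_correlation(index: CorrelationIndex,
                       node_type: str,
                       action: str) -> Optional[OutcomeCorrelation]:
    """Read a stored correlation for (decision node, action), if present."""
    return index.get((node_type, action))

# Example usage with hypothetical values:
index: CorrelationIndex = {
    ("bile_leak_node", "place_drain"): OutcomeCorrelation(
        action="place_drain",
        positive_outcome_probability=0.87,
        odds_ratio=2.4,
        sample_size=312),
}
hit = access_correlation(index, "bile_leak_node", "place_drain")
```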

The particular action may include any action performed by a surgeon (e.g., a human or robotic surgeon) during a surgical procedure or by a human or robot assisting the surgical procedure. Examples of specific actions may include: remedial action, diagnostic action, post-surgical action, deviating surgical action, and/or any other activity that may occur during a surgical procedure. Such actions may include: engaging a medical instrument with a biological structure, administering a drug, cutting, suturing, modifying surgical contact, performing a medical examination, cleaning an anatomical structure, removing excess fluid, and/or any other action that may occur during a surgical procedure.

The particular action may include a single step or multiple steps (e.g., multiple actions performed during a surgical procedure). A step may include any action or subset of actions as described herein. Non-limiting examples of specific actions may include one or more of making an incision, inserting an implant, attaching an implant, and sealing the incision.

In some embodiments, the specific action may include introducing an additional surgeon into the operating room. For example, additional surgeons may have more experience, a higher level of skill, special expertise (e.g., technical expertise, expertise to solve a particular problem, and/or other expertise) than surgeons already in the operating room. Bringing the surgeon to the operating room may include sending a request or a notification instructing the surgeon to come to the operating room. In some embodiments, the additional surgeon may be a surgical robot, and bringing the additional surgeon to the operating room may include enabling the robot and/or providing instructions to the robot to perform and/or assist in the surgical procedure. Providing instructions to the robot may include instructions to perform one or more actions.

In some embodiments, a method of providing decision support for a surgical procedure may comprise the steps of: outputting a recommendation to the user to take and/or avoid a particular action. Such recommendations may include any guidance, regardless of its form (e.g., audio, video, text, control commands for a surgical robot, or any other data transmission providing recommendations and/or directions). In some cases, the guidance may take the form of instructions; in other cases, the guidance may take the form of suggestions. The trigger for such guidance may be the determined presence of the decision-making node and the accessed correlation. Outputting the recommendation may include: sending the recommendation to a device, displaying it on an interface, and/or using any other mechanism for providing information to a decision maker. Outputting the recommendation to the user may include: outputting the recommendation to a person in the operating room, a surgeon (e.g., a human surgeon and/or a surgical robot), a person assisting the surgery (e.g., a nurse), and/or any other user. For example, outputting the recommendation may include: sending the recommendation to a computer, a mobile device, an external device, smart glasses, a projector, a surgical robot, and/or any other device capable of communicating information to a user. In some embodiments, the surgeon may be a surgical robot and the advice (e.g., instructions for taking a particular action and/or avoiding a particular action) may be provided in the form of instructions for the surgical robot.

The output of the suggestion may be made via a network and/or via a direct connection. In some embodiments, outputting the recommendation may include providing an output at an interface in the operating room. For example, outputting the suggestion may include causing the suggestion to be presented via an interface (e.g., a visual and/or audio interface in an operating room). In some embodiments, outputting the suggestion may include: playing a sound, altering lighting (e.g., turning a light on or off, pulsing a light), providing a haptic feedback signal, and/or any other method of alerting a person or providing information to a person or surgical robot.

For example, the recommendation includes a recommendation to perform a medical examination. In some embodiments, the medical examination may include: blood analysis, medical imaging of the patient, urine analysis, data collected by the sensor, and/or any other analysis. The medical imaging may include: intraoperative medical imaging (i.e., imaging that occurs during surgery), such as X-ray imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), other techniques involving contrast agents, ultrasound, or other techniques to create body part images for diagnostic and/or therapeutic purposes.

In some embodiments, a method of providing decision support for a surgical procedure may comprise the steps of: based on the determined presence of the decision-making node, the accessed correlations, and the received results of a medical examination, outputting suggestions (e.g., a first suggestion, a second suggestion, and/or additional suggestions) to the user to take or avoid a particular action. Accordingly, a method of providing decision support for a surgical procedure may comprise the steps of: receiving results of the medical examination. The results of the medical examination may include: medical data, sensor data, instrument data, and/or any other information reflecting a biological condition. The results of the medical examination may include an indication of the health state and/or condition of the patient. The results may include, for example: the presence or absence of a biomarker, the presence or absence of a tumor, a location of an anatomical feature, an indicator of metabolic activity (e.g., glucose uptake), an enzyme level, a cardiac state (e.g., heart rate), a body temperature, a respiratory indicator, and/or any other health or condition indicator. The results may be received via a network and/or from a connected device. Receiving the results may include receiving data and/or accessing a data store, consistent with the disclosed embodiments. For example, a suggestion to take (or avoid) a first action may be output in response to a first value of the received result of the medical examination, and the system may refrain from outputting the suggestion to take (or avoid) the first action in response to a second value of the received result of the medical examination.
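
A hedged sketch of the conditional output described above follows; the cutoff values and the notify callback are assumptions introduced for illustration.

```python
def maybe_output_suggestion(node_detected: bool,
                            correlation_strength: float,
                            test_value: float,
                            notify,                      # callable(str) -> None
                            correlation_cutoff: float = 0.7,
                            test_cutoff: float = 2.0) -> bool:
    """Return True if a suggestion was output to the user."""
    if not node_detected or correlation_strength < correlation_cutoff:
        return False
    if test_value >= test_cutoff:          # first value: output the suggestion
        notify("Consider taking the recommended action at this node.")
        return True
    return False                           # second value: refrain from output
```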

In some embodiments, the recommendation may include the name of the additional surgeon and/or another identifier (e.g., an employee ID). In some embodiments, outputting the recommendation may include providing an indication to the additional surgeon. The indication may include: a notification, an alert, a request to come to the operating room, results of a medical examination, an indication that assistance may be needed during the surgical procedure, and/or any other indication. In one example, the additional surgeon may be selected (e.g., from a plurality of alternative additional surgeons) based on one or more of: characteristics of the patient undergoing the surgical procedure, the surgeon currently performing the surgical procedure, the operating room, tools used in the surgical procedure, a condition of an anatomical structure associated with the surgical procedure, interactions between medical instruments and the anatomical structure in the surgical procedure, physiological responses associated with the surgical procedure, characteristics of the additional surgeon, and the like.

Consistent with this embodiment, the suggestion may include: a description of the current surgical situation, guidance, an indication of preemptive or corrective measures, an indication of alternative methods, a danger zone map, and/or any other information that may inform the surgeon about the surgical procedure. The description of the current surgical situation may include a health state and/or a condition of the patient (e.g., a condition reflected in sensor data, such as heart rate monitoring data, brain activity data, body temperature data, leakage data, and/or any other health data). The description of the current surgical situation may also or alternatively include an assessment of current or likely future outcomes. Preemptive and/or corrective measures may include following and/or altering the actions of a surgical procedure. Preemptive and/or corrective measures may include any action by the surgeon and/or a person assisting the surgery, as well as actions that may result in avoiding negative outcomes. Corrective measures may include actions that may improve the outcome. In some implementations, the danger zone map can include identifying one or more particular actions and likely outcomes (e.g., a particular set of actions associated with negative outcomes such as death, disability, or other undesirable eventualities). The danger zone map may include, for example, identification of anatomical areas which, if not approached properly, may adversely affect patient safety and the surgical outcome. For example, in an inguinal hernia repair, the danger zones may include the "danger triangle," located between the vas deferens in a male or the round ligament of the uterus in a female (medial) and the testicular vessels (lateral), within which preserving important structures such as the iliac vessels, the femoral nerve, and the genital branch of the genitofemoral nerve is crucial, and/or the "pain triangle," located between the testicular vessels (medial), the psoas major (lateral), and the iliopubic tract (superior), within which preserving important structures such as the genital branch of the genitofemoral nerve and the lateral femoral cutaneous nerve is crucial. A machine learning model may be trained using training examples to identify and/or map danger zones, and the trained machine learning model can be used to analyze video clips and identify and/or map danger zones. Examples of such training examples may include images and/or videos, and markers indicating danger zones depicted in the images and/or videos. In one example, the description of the danger zone map may include a textual description of the relevant identified danger zones. In another example, the description of the danger zone map can include a visual marker associated with an identified danger zone, for example, superimposed on at least one frame of a video clip, in an augmented reality system, etc.

For example, the recommendation may include a recommended placement of a surgical drain (drain), such as to drain inflammatory fluids, blood, bile, and/or other fluids from the patient.

The suggestion may include: a confidence level that a desired surgical effect will occur if a particular action is taken, and/or a confidence level that a desired result will not occur if a particular action is not taken. The confidence level may be based on an analysis of historical surgery, consistent with the disclosed embodiments, and may include a probability (i.e., likelihood) that an outcome will occur. The desired outcome may be a positive outcome, such as an improved health status, successful placement of the medical implant, and/or any other beneficial eventuality. In some embodiments, the desired outcome may include avoiding a possible undesired situation after the decision-making node (e.g., avoiding side effects, post-operative complications, fluid leakage events, negative changes in patient health status, and/or any other undesired situation).

In some embodiments, the output recommendation may be based on an elapsed time since a particular point in the surgical procedure. For example, the recommendations may be based on elapsed time since the surgical event, consistent with the disclosed embodiments. The recommendation may be based on a surgical event occurring at least a specified number of minutes prior to the decision-making node. In some embodiments, the surgical event may include past actions performed by the surgeon prior to the decision-making node. The suggestions may also include alternative courses of action. The course of action may include a set of actions, a sequence of actions, and/or a pattern of actions. The alternative course of action may be different from the action associated with the ongoing surgical procedure being performed by the surgeon.

In some embodiments, the recommendation may include an indication that an undesirable surgical outcome is likely to occur if no particular action is taken. Such indications may include: a confidence level, a description of an undesirable surgical outcome (e.g., a name of the outcome), and/or any other indication.

In some embodiments, the recommendations may be based on the skill level of the surgeon. For example, a surgeon with a high skill level may receive different recommendations than a surgeon with a lower skill level. In some embodiments, the suggestion may include a particular action selected from a plurality of alternative actions, and the selection of the particular action may be based on a skill level of the surgeon and a complexity level associated with the plurality of alternative actions. The skill level may be based on the historical performance score, the number of surgical procedures performed, the total time spent as the surgeon (e.g., number of years, number of hours spent in surgery), an indication of the training level, the skill classification of the surgeon, and/or any other assessment of the skill of the surgeon (whether derived from manual input, data analysis, or video image analysis).
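
One possible way to realize the skill- and complexity-based selection described above is sketched below; the numeric skill and complexity scales and the expected-outcome field are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Alternative:
    name: str
    complexity: int            # e.g., 1 (simple) .. 5 (very complex)
    expected_outcome: float    # e.g., historical positive-outcome rate

def select_action(alternatives: List[Alternative],
                  surgeon_skill: int) -> Optional[Alternative]:
    """Pick the best-outcome alternative whose complexity matches the surgeon's skill."""
    feasible = [a for a in alternatives if a.complexity <= surgeon_skill]
    if not feasible:
        return None            # e.g., recommend summoning a more senior surgeon
    return max(feasible, key=lambda a: a.expected_outcome)
```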

In some embodiments, the recommendation may be based on a surgical event occurring in the surgical procedure prior to the decision-making node (i.e., a prior surgical event). The prior surgical event may comprise any surgical event described herein that occurs prior to the decision-making node. The prior surgical event may be associated with a positive or negative outcome after the decision-making node, and the recommendation may include a recommendation to perform a particular action that increases the likelihood of achieving a later positive outcome or avoiding a later negative outcome. Accordingly, such a method may comprise the steps of: determining that the prior surgical event is associated with a later outcome. Such a correlation may be time-based, in that the correlation may be determined based on the elapsed time between the surgical event and the decision-making node.

In some embodiments, outputting the suggestion may include: presenting a first instruction to perform a first step, receiving an indication that the first step was performed successfully, and presenting a second instruction to perform a second step in response to receiving the indication that the first step was performed successfully. In some embodiments, outputting the suggestion may include: presenting a first instruction to perform a first step, and receiving an indication that the first step was not successfully performed. In some embodiments, outputting the suggestion may include: forgoing presentation of the second instruction in response to the received indication that the first step was not successfully performed. In some embodiments, in response to receiving an indication that the first step was not successfully performed, outputting the suggestion may include: presenting an alternative instruction to perform an alternative step, the alternative step being different from the second step.
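
The staged-instruction flow above might be organized as in the following sketch, where the instruction strings are hypothetical and step_succeeded would, in practice, be derived from analysis of the video captured after each instruction is presented.

```python
def run_staged_instructions(present,          # callable(str) -> None
                            step_succeeded):  # callable(str) -> bool
    """Present staged instructions, branching on whether each step succeeded."""
    present("Step 1: expose the target structure.")
    if step_succeeded("step_1"):
        present("Step 2: proceed with the planned dissection.")
    else:
        # Forgo the second instruction and offer an alternative step instead.
        present("Alternative: convert to the fallback exposure technique.")
```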

The indication that the first step was or was not successfully performed may be based on an analysis of the video clip, consistent with the disclosed embodiments. Receiving the indication may include receiving a video clip after presenting the instruction to perform the first step, and generating the indication based on an analysis of that video clip.

In some embodiments, a method of providing decision support for a surgical procedure may comprise the steps of: a vital sign of the patient is received, and the recommendation can be based on the accessed correlations and vital signs. The vital signs may be received from a medical instrument, device, external device, data storage device, sensor, and/or any other computing component, and may include any indicator or condition of the patient's health state (e.g., heart rate, respiration rate, brain activity, and/or other vital signs). In some embodiments, the vital signs may be received from a connected device via a network and may be detected via conventional sensors or by analysis of a video clip.

In some embodiments, the recommendation may be based on a condition of a tissue of the patient and/or a condition of an organ of the patient. In general, the condition of a tissue or organ may refer to any information indicative of the state or characteristics of the tissue or organ. For example, the condition can be based on an assessment such as whether the tissue or organ is normal, abnormal, damaged, leaking, hydrated, oxygenated, dehydrated, retracted, enlarged, shrunken, present, absent, and/or any other appearance or state. Consistent with the disclosed embodiments, the condition of a patient's tissue and/or organ may be determined based on an analysis of the video clips. For example, such analysis may determine the color of tissue, texture of the anatomy, heart rate, lung volume, presence of bumps or other irregularities, and/or any other characteristic of the anatomy. In some embodiments, the recommendation may be based on conditions reflected in the sensor data, such as heart rate monitoring data, brain activity data, body temperature data, leakage data, and/or any other health data.

As another example, the suggestion of a particular action may include a proposal or direction to form a stoma or a particular type of stoma (e.g., a loop stoma, a terminal stoma, a loop colostomy, a terminal colostomy, a loop ileostomy, a terminal ileostomy, a urinary diversion, and/or any other type of stoma). The recommendation may suggest a stoma creation technique, an indication of the part of the colon and/or ileum from which to create the stoma, and/or the location of the stoma on the patient's skin. Alternatively, the proposal may suggest that the stoma not be created, for example, when its creation is linked to an undesirable outcome.

The recommendations to create or avoid the stoma (or take any other course of action) may be based on the physiological impact on the patient, as well as a metric threshold for possible improvement of the result. The threshold may be selected based on patient characteristics (e.g., age, prior health status, family history, vital signs, and/or other characteristics). For example, a lower threshold may be selected for a patient who previously had a stoma associated with a desired outcome. The threshold may also be based on whether the patient is aware of the likelihood of the stoma prior to the surgical procedure.

One example of a decision-making node may include: for example, in the preparatory phase of an appendectomy, a decision whether to mobilize the ileum and/or cecum, and the recommendation may include a proposal to mobilize the ileum and/or cecum or a proposal not to mobilize the ileum and/or cecum. Some non-limiting examples of factors that may influence this decision may include: the complexity of the surgery, the age of the patient, the sex of the patient, previous inflammation, and previous surgery. The suggestion may be based on at least one of these factors. The decision made at this node may affect the ability to excise the diseased appendix. Another example of a decision-making node may include: for example, during the dissection and skeletonization stage of an appendectomy, a decision as to whether the appendix can be safely divided, and the recommendation may include a proposal to divide or not to divide the appendix. Some non-limiting examples of factors that may influence this decision may include: the complexity of the procedure, whether the appendix could be reached without tethering, and whether the ileum/cecum was properly mobilized. The suggestion may be based on at least one of these factors. The decision made at this node may indicate whether appendicitis will recur ("stump appendicitis"). Another example of a decision-making node may include: for example, in the division stage of an appendectomy, a decision as to what instrument to use to divide the appendix, and the recommendation may include: a proposal of an instrument for the division. Some non-limiting examples of factors that may influence this decision may include: the complexity of the procedure, whether a circumferential view of the appendix was achieved, and the patient's body mass index. The suggestion may be based on at least one of these factors. The decision made at this node may affect the duration (length) and cost of treatment. Another example of a decision-making node may include: for example, in the division stage of an appendectomy, a determination whether or not to treat the appendiceal stump. Some options for treating the appendiceal stump may include cauterization or suturing (oversewing). The suggestion may include: a proposal as to whether to treat the appendiceal stump, and/or a proposal as to the specific action to be taken to treat the appendiceal stump. Some non-limiting examples of factors that may influence this decision may include: the complexity of the procedure and the instrument used to divide the appendix. The suggestion may be based on at least one of these factors. The decision made at this node may affect post-operative infection and fistula rates. Another example of a decision-making node may include: for example, during the specimen bagging stage of an appendectomy, a decision as to how to remove the excised specimen (e.g., in an endoscopic specimen bag (endobag) or through a trocar), and the recommendation may include a suggestion as to how to remove the excised specimen. For example, the decision may be based on the complexity of the procedure. The decision made at this node may affect the surgical site infection rate. Another example of a decision-making node may include: for example, in the final inspection stage of an appendectomy, a determination as to whether to perform irrigation, and the recommendation may include a proposal to perform irrigation or a proposal not to perform irrigation. 
Some non-limiting examples of factors that may influence this decision may include: the complexity of the surgery, pre-existing complications of the patient, and the patient's gender. The suggestion may be based on at least one of these factors. The decision made at this node may affect the infection rate. Another example of a decision-making node may include: for example, in the final inspection stage of an appendectomy, a determination as to whether to place a drain, and the recommendation may include a recommendation to place a drain or a recommendation not to place a drain. Some non-limiting examples of factors that may influence this decision may include: the complexity of the surgery, the age of the patient, and pre-existing complications of the patient. The suggestion may be based on at least one of these factors. The decision made at this node may affect the infection rate, the complication rate, and the post-operative length of stay.

One example of a decision-making node at the entry stage of a laparoscopic cholecystectomy may include: selecting an insertion method (such as a Veress needle, the Hasson technique, or OptiView) and/or selecting a port arrangement (such as "conventional" or "alternative"), and the recommendation may include a proposal for an insertion method and/or a proposal for a port arrangement. One example of a decision-making node during the adhesion release (adhesiolysis) phase of a laparoscopic cholecystectomy may include: selecting whether to decompress the gallbladder, and the recommendation may include a proposal on whether to decompress the gallbladder. For example, when the gallbladder is enlarged and/or taut, or when other signs of acute cholecystitis are present, the advice may include a proposal to decompress the gallbladder. One example of a decision-making node in a laparoscopic cholecystectomy may include: selecting a gallbladder dissection method (such as traditional, dome-down dissection, subtotal, etc.), and the recommendation may include a proposal for a gallbladder dissection method. For example, in a case of severe cholecystitis, a dome-down dissection suggestion may be provided. In another example, for situations where adequate exposure cannot be obtained, a recommendation of abandonment may be provided, for example due to an increased risk of large side branches (collaterals) in the liver bed. One example of a decision-making node in a laparoscopic cholecystectomy may include: selecting whether to place a drain, and the recommendation may include a proposal to place a drain or a proposal not to place a drain.

In some examples, the output of a suggestion to the user to take and/or avoid a particular action may be determined using a trained machine learning model. For example, a machine learning model may be trained using training examples to determine recommendations based on information related to surgical decision-making nodes, and the trained machine learning model may be used to determine, based on information related to a particular occurrence of a surgical decision-making node, the recommendation to be output to the user to take and/or avoid a particular action for that occurrence. Some non-limiting examples of such information related to an occurrence of a surgical decision-making node are described above. For example, the information may include: a type of the surgical decision-making node, a characteristic of the surgical decision-making node, a time of the surgical decision-making node (e.g., within the surgical procedure), a characteristic of the patient undergoing the surgical procedure, a characteristic of the surgeon (or another healthcare professional) performing at least part of the surgical procedure, a characteristic of the operating room associated with the surgical procedure, an anatomical structure associated with the surgical decision-making node, a condition of the anatomical structure associated with the surgical decision-making node, a medical instrument used in the surgical procedure, an interaction between the medical instrument and the anatomical structure in the surgical procedure, a physiological response associated with the surgical decision-making node, one or more surgical events occurring prior to the surgical decision-making node in the surgical procedure, a duration of the surgical decision-making node, a duration of the one or more surgical events occurring prior to the surgical decision-making node, a duration of at least one surgical phase in the surgical procedure, one or more correlations between results and possible actions that may be taken at the surgical decision-making node, past responses of the user to previously provided recommendations, and so forth. Examples of such training examples may include information related to a surgical decision-making node, together with markers indicating a desired suggestion. For example, the marker may include the textual and/or graphical content of the desired suggestion. In another example, the marker may be based on a correlation between results and specific actions taken at such surgical decision-making nodes.
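
As a rough illustration of the training and inference steps described above, assuming the decision-node information has already been encoded as fixed-length feature vectors and each training example carries a marker identifying the desired suggestion, a sketch might look as follows; the choice of classifier is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_suggestion_model(node_features: np.ndarray,     # shape (n_examples, n_features)
                           suggestion_labels: np.ndarray  # shape (n_examples,)
                           ) -> RandomForestClassifier:
    """Fit a classifier mapping decision-node feature vectors to suggestion labels."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(node_features, suggestion_labels)
    return model

def suggest(model: RandomForestClassifier,
            current_node_features: np.ndarray) -> str:
    """Map a particular occurrence of a decision-making node to a suggestion label."""
    return str(model.predict(current_node_features.reshape(1, -1))[0])
```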

Fig. 29 is a flow chart illustrating an example process 2900 for decision support for performing a surgical procedure consistent with the disclosed embodiments. Process 2900 may be performed using at least one processor, such as one or more microprocessors. In some embodiments, process 2900 is not necessarily limited to the illustrated steps, and any of the various embodiments described herein may also be included in process 2900. As will be appreciated by one skilled in the art, the steps of process 2900 may be performed by, for example, a system including system components 1401. In some embodiments, a non-transitory computer-readable medium includes instructions that, when executed by at least one processor, cause the at least one processor to perform operations to provide decision support for a surgical procedure in accordance with process 2900. In some embodiments, process 2900 may be performed in real-time during a surgical procedure. Based on the steps described in process 2900, a surgeon or other user can more efficiently and effectively perform a surgical procedure with positive results and/or avoid negative results.

At step 2902, the process may include: a video clip of a surgical procedure performed on a patient by a surgeon in an operating room is received, consistent with disclosed embodiments and as previously described by way of example. Fig. 1 provides an example of an operating room, a surgeon, a patient, and a camera configured to capture video clips of a surgical procedure. The video clip may include images from at least one of an endoscope or an intra-body camera (e.g., images of intra-luminal video).

At step 2904, the process may include: at least one data structure including image-related data characterizing a surgical procedure is accessed, consistent with disclosed embodiments and as previously described by way of example. In some embodiments, accessing the data structure may include: the data in the data structure is received from the device via a network and/or via a connection. Accessing the data structure may include: data is retrieved from the data storage device, consistent with the disclosed embodiments.

At step 2906, the process may include: the received video clips are analyzed using image-related data to determine the presence of surgical decision-making nodes, consistent with the disclosed embodiments and as previously described by way of example. Analyzing the received video clips may include: an image analysis method is performed on one or more frames of a received video clip, consistent with disclosed embodiments. Analyzing the received video clips may include: a model trained to determine the presence of surgical decision-making nodes is implemented. The decision-making node may comprise: improper access or exposure, retraction of anatomical structures, misinterpretation of anatomical structures or fluid leaks, and/or any other surgical event, as previously described. In some embodiments, the decision-making node may be determined by analysis of a plurality of different historical procedures in which different courses of action occurred after a common surgical situation. In some embodiments, determining the presence of a decision-making node may be based on a detected physiological response of the anatomical structure and/or motion associated with the surgical tool.

At step 2908, the process may include: a correlation between a result and a specific action taken at the decision-making node is accessed in the at least one data structure, as previously described by way of example. As discussed, particular actions may be associated with positive or negative results, consistent with the disclosed embodiments. Accessing the correlation may include: generating a correlation, reading a correlation from memory, and/or any other method of accessing a correlation in a data structure. The particular action may include a single step or multiple steps (e.g., multiple actions performed by the surgeon). The specific action may include summoning an additional surgeon to the operating room.

At step 2910, the process may include: a suggestion to take a specific action is output to the user, consistent with the disclosed embodiments and as previously described by way of example. The output suggestion may be based on the determined presence of the decision-making node and the accessed correlations, consistent with the present embodiments. In some embodiments, outputting the recommendation may include providing the output via an interface in the operating room. In some embodiments, the surgeon is a surgical robot, and the advice (e.g., instructions for taking a particular action and/or avoiding a particular action) may be provided in the form of instructions for the surgical robot. For example, the recommendation may include a recommendation to perform a medical examination. The suggestion (e.g., a first suggestion, a second suggestion, and/or additional suggestions) to take or avoid a particular action may be output to the user based on the determined presence of the decision-making node, the accessed correlations, and the received results of the medical examination. The recommendation may include the name of the additional surgeon and/or another identifier (e.g., an employee ID). The suggestion may include: a description of a current surgical situation, an indication of a preemptive or corrective measure, and/or a danger zone map. In one example, as previously mentioned, the recommendation may include a recommended placement of a surgical drain to remove inflammatory fluids, blood, bile, and/or other fluids from the patient. A confidence level that a desired surgical outcome will or will not occur with or without a particular action may be part of the recommendation. The recommendation may be based on the surgeon's skill level, the accessed correlations, the patient's vital signs, and/or surgical events that occurred in the surgical procedure prior to the decision-making node (i.e., prior surgical events). In some embodiments, the recommendation may be based on a condition of a tissue of the patient and/or a condition of an organ of the patient. As another example, the suggestion of the particular action may include creating a stoma, as previously discussed by way of example.

The disclosed systems and methods may involve analyzing current and/or historical surgical video clips to identify features of the surgical procedure, patient conditions, and other characteristics in order to estimate surgical contact force. Excessive contact force applied during surgery can have adverse health consequences for the patient. Conversely, insufficient contact force may result in a suboptimal outcome for certain procedures. Evaluating the proper level of force to apply in any given surgical situation can be difficult, leading to sub-optimal results for the patient. Thus, there is a need for an unconventional approach that determines surgical contact force efficiently, effectively, and either in real time or post-operatively.

In accordance with the present disclosure, a method of estimating contact force on an anatomical structure during a surgical procedure is disclosed. The contact force may include any force exerted by a surgeon or by a surgical tool on one or more anatomical structures (e.g., tissues, limbs, organs, or other anatomical structures of a patient) during a surgical procedure. As used herein, the term "contact force" refers to any force that may be applied to an anatomical structure, whether the force is characterized in units of weight (e.g., kilograms or pounds applied), units of force (e.g., newtons), pressure applied to an area (e.g., pounds applied per square inch), tension (e.g., pulling force), or pressure (e.g., pushing force).

The contact force may be applied directly or indirectly in a number of ways. For example, the contact force may be applied by direct contact of the surgeon with the anatomical structure (e.g., by the surgeon's hand), or may be applied by a surgical instrument, tool, or other structure in the surgeon's hand. Where the surgeon is a surgical robot, the robot may apply the contact force via a robotic structure (robotic arm, finger, gripper) either directly or through a tool, instrument, or other structure manipulated by the robot.

The contact force may include a normal (i.e., orthogonal) force, a shear force, and/or a combination of a normal force and a shear force. More generally, the contact force may include any force or pressure applied to any portion of the patient's body during a surgical procedure.

Consistent with the present embodiment, estimating the contact force may include: analyzing images and/or surgical video to generate an estimate of the magnitude of the actual contact force according to a scale. Force estimation by image analysis may involve examining the tissue/instrument interface to observe the effect on the tissue. For example, if the instrument is a medical device such as forceps pressing against an organ such as the gallbladder, applying machine vision techniques to the location of force application may reveal movement and/or changes in the organ in response to the applied force. An estimate of the magnitude of the applied force may be made for the current video based on historical video clips from previous surgeries in which the application of force was previously observed. The force magnitude estimate may include units of measure (e.g., pounds per square inch, newtons, kilograms, or other physical units), or may be based on a relative scale. The relative scale may include a classification scale, a numerical scale, and/or any other measure. The classification scale may reflect the force level (e.g., a scale that includes multiple levels, such as high force, moderate force, low force, or any other number of levels). The contact force may be estimated on a numerical scale, such as a scale of 1 to 10. Also, the force may be estimated at discrete time points, or may be estimated continuously. In some embodiments, the estimation of the contact force may include: an estimate of the contact location, an estimate of the contact angle, and/or any other characteristic of the contact force.
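
For illustration, a relative-scale estimate might be derived from an observed tissue-deformation measurement as in the sketch below; the deformation metric and the cut-off values are assumptions and carry no clinical meaning.

```python
def force_level_from_deformation(deformation_mm: float) -> str:
    """Map an observed deformation to a coarse classification scale."""
    if deformation_mm < 1.0:
        return "low force"
    if deformation_mm < 4.0:
        return "moderate force"
    return "high force"

def force_score_1_to_10(deformation_mm: float,
                        max_deformation_mm: float = 10.0) -> int:
    """Map a continuous deformation onto a 1-to-10 numerical scale."""
    ratio = min(max(deformation_mm / max_deformation_mm, 0.0), 1.0)
    return 1 + round(ratio * 9)
```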

In some embodiments, a method of estimating contact force on an anatomical structure is provided, which may include the steps of: image data of a surgical procedure is received from at least one image sensor in an operating room. The image sensor may include a camera and/or any other image capture device. The image sensor may be configured to collect image data and/or video data and may be positioned anywhere in any operating room, such as, for example, above or within a patient (e.g., in a body lumen). The image data may include: surgical video, video clips, image frames, continuous video, and/or any other information derived from video. For example, the image data may include: pixel data, color data, saturation data, and/or any other data representing an image, regardless of the storage format. The image data may include: time data (e.g., the time at which the image was captured by the sensor), location data, information related to the surgical procedure (e.g., a patient identifier, the name of the surgical procedure), and/or any other metadata. In some embodiments, the image data of the surgery may be collected by an image sensor in the operating room and stored in a data structure (e.g., the data structure of fig. 17A) in, near, or even remote from the operating room. While the force estimation may be done in real time, the estimation may also be done in non-real time, such as when the data is retrieved from the data structure.

In some embodiments, a method of estimating contact force on an anatomical structure is provided, which may include the steps of: the received image data is analyzed to determine the identity of the anatomical structure reflected in the image data. Analyzing the received image data may include any image analysis method, consistent with the present embodiments. Some non-limiting examples of algorithms for identifying anatomical structures in images and/or videos are described above. Analyzing the received image data may include, for example: object recognition methods, image classification, homography, pose estimation, motion detection, and/or other image analysis methods. Analyzing the received image data may include an artificial intelligence method including implementing a machine learning model trained using training examples, consistent with the disclosed embodiments. For example, the received image data may be analyzed using a machine learning model that is trained using training examples to detect and/or identify anatomical structures, e.g., as described above. For example, the received image data may be analyzed using an artificial neural network configured to detect and/or identify anatomical structures from images and/or video. The training examples may include image data labeled or otherwise classified as depicting an anatomical structure (e.g., an image classified as depicting a pancreas).

In some embodiments, a method of estimating contact force on an anatomical structure is provided, which may include the steps of: the received image data is analyzed to determine a condition of the anatomical structure. In general, the condition of an anatomical structure may refer to any information indicative of a state or characteristic of the anatomical structure. For example, a condition may refer to whether an anatomical structure is normal, abnormal, damaged, leaking, hydrated, dehydrated, oxygenated, retracted, enlarged, collapsed, present, absent, and/or any other assessment. The conditions may include: a measure of viability of the anatomical structure, a measure of oxygenation level, hydration level, pain level, and/or a measure of any other state of the anatomical structure. In one example, the condition of the anatomical structure may be represented as a numerical vector corresponding to a point in a mathematical space. In some examples, a machine learning model may be trained using training examples to identify a condition of an anatomical structure from images and/or videos, and the trained machine learning model may be used to analyze received image data and determine the condition of the anatomical structure. Examples of such training examples may include images and/or videos of an anatomical structure, and markers indicating a condition of the anatomical structure.

In some implementations, the analysis may determine the condition based on a feature of the anatomical structure indicative of the condition. As non-limiting examples, the analysis may determine the color of the tissue, the texture of the anatomy, the heart rate, the lung volume, and/or any other characteristic of the anatomy. In some embodiments, the condition may be determined based on characteristics reflected in sensor data, such as heart rate monitoring data, brain activity data, body temperature data, blood pressure data, blood flow data, leakage data, and/or any other health data. Such features of the anatomical structure may be indicative of a condition of the anatomical structure and may be linked to a known condition. For example, decreased brain activity may indicate vascular occlusion, or increased cranial pressure may indicate cerebral hemorrhage. Such correlations may be stored in a data structure, such as the data structure of FIG. 17A.

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: a contact force threshold associated with the anatomical structure is selected. The contact force threshold may comprise a minimum or maximum contact force. In some implementations, selecting a contact force threshold may be based on information indicative of a likely outcome associated with applying a force above or below the threshold. Selecting the contact force threshold may be based on data indicative of a suggested contact force (e.g., a maximum safe force or a minimum effective force). For example, selecting a contact force threshold may be based on an anatomical structure table that includes corresponding contact force thresholds. The table may include an indication of a condition of the anatomical structure. In some embodiments, the selected contact force threshold may be based on a determined condition of the anatomical structure. For example, the selected contact force threshold may be increased or decreased based on information indicating that the anatomical structure is leaking, has a particular color, has a particular level of retraction, and/or any other condition. In another example, a first contact force threshold may be selected in response to a first determined condition of the anatomical structure, and a second contact force threshold may be selected in response to a second determined condition of the anatomical structure, the second contact force threshold may be different from the first contact force threshold. In yet another example, the determined condition of the anatomical structure may be represented as a vector (as described above), and the contact force threshold may be calculated using a function of the vector representation of the determined condition. In some examples, the selected contact force threshold may be a function of a type of contact force (such as tension, compression, etc.). For example, in response to a first type of contact force, the selected contact force threshold may have a first value, and in response to a second type of contact force, the selected contact force threshold may have a second value, which may be different from the first value.
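
A minimal sketch of condition-dependent threshold selection follows; the structure names, base thresholds, and condition multipliers are invented for illustration and are not clinical guidance.

```python
BASE_THRESHOLD_N = {          # hypothetical maximum safe contact force, in newtons
    "gallbladder": 3.0,
    "liver": 5.0,
    "bowel": 2.5,
}

CONDITION_FACTOR = {          # scale the base threshold by the detected condition
    "normal": 1.0,
    "inflamed": 0.6,
    "leaking": 0.4,
}

def select_contact_force_threshold(structure: str, condition: str) -> float:
    """Look up a base threshold per structure and adjust it for the condition."""
    base = BASE_THRESHOLD_N.get(structure, 2.0)       # conservative default
    return base * CONDITION_FACTOR.get(condition, 0.8)
```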

In some embodiments, the contact force threshold may be associated with a tension level (i.e., a level of force pulling on the anatomical structure) or a retraction level. Retraction may involve movement, traction, and/or counter-traction of tissue to expose tissue, organs, and/or other anatomical structures for viewing by the surgeon. In some embodiments, the contact force threshold may be associated with a pressure level (i.e., an amount of contact force pushing against the anatomy) or a compression level. The level of compression may include a degree or amount of compression of the anatomical structure (e.g., a reduction in size of the anatomical structure due to contact forces).

Consistent with the present embodiment, the selected contact force may be based on data relating to the manner of contact between the anatomical structure and the medical instrument. For example, in some embodiments, selecting a contact force threshold may be based on a location of contact between the anatomical structure and the medical instrument, as some regions of the anatomical structure may have greater force sensitivity than other regions. The location may be determined by analyzing the received image data, consistent with the disclosed embodiments. Thus, the selected contact force threshold may be higher at one contact location between the anatomical structure and the medical instrument than at another contact location. The selection of the contact force threshold may also be based on an angle of contact between the anatomical structure and the medical instrument. The contact angle may be determined by analyzing the image data to identify an angle of attack (angle) between the anatomical structure and the medical instrument. For example, a pose estimation algorithm may be used to analyze the image data and determine a pose of the anatomy and/or a pose of the medical instrument, and an angle between the anatomy and the medical instrument may be determined based on the determined pose. In another example, a machine learning algorithm may be trained using a training example to determine an angle between the anatomical structure and the medical instrument, and a trained machine learning model may be used to analyze the image data and determine the angle between the anatomical structure and the medical instrument. Examples of such training examples may include images depicting an anatomical structure and a medical instrument, and markers indicating an angle between the anatomical structure and the medical instrument. In some examples, the selected contact force threshold may be a function of the contact angle associated with the contact force. For example, in response to a first contact angle, the selected contact force threshold may have a first value, and in response to a second contact angle, the selected contact force threshold may have a second value, which may be different from the first value.

In some implementations, selecting the contact force threshold can include implementing and/or using a model (e.g., a statistical model and/or a machine learning model). For example, selecting the contact force threshold may include providing the condition of the anatomical structure to a regression model as an input and selecting the contact force threshold based on an output of the regression model. In some embodiments, the regression model may be fitted to historical data including contact forces applied to anatomical structures having corresponding conditions and surgical results.
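
A hedged sketch of such a regression-based selection is shown below, assuming historical records have been reduced to per-case condition features and a per-case maximum force associated with a positive outcome; the linear model is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_threshold_regressor(condition_features: np.ndarray,  # (n_cases, n_features)
                            safe_forces: np.ndarray          # (n_cases,)
                            ) -> LinearRegression:
    """safe_forces: per-case maximum force historically linked to a good outcome."""
    model = LinearRegression()
    model.fit(condition_features, safe_forces)
    return model

def threshold_for(model: LinearRegression,
                  current_condition: np.ndarray) -> float:
    """Predict a contact force threshold from the current condition features."""
    return float(model.predict(current_condition.reshape(1, -1))[0])
```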

In some implementations, selecting the contact force threshold can include selecting the contact force threshold using a machine learning model trained with training examples. For example, a machine learning model may be trained using training examples to select a contact force threshold based on input data. Such input data may include: image data of a surgical procedure, image data depicting an anatomical structure, a type of surgical procedure, a stage of a surgical procedure, a type of action, a type of anatomical structure, a condition of the anatomical structure, a skill level of a surgeon, a condition of a patient, and the like. Examples of such training examples may include sample input data, as well as markers indicating a desired contact force threshold. In one example, the desired contact force threshold may be selected based on known medical guidelines. In another example, a desired contact force threshold may be manually selected. In yet another example, a desired contact force threshold may be selected based on an analysis of the correlation of applied contact force and results in historical cases or in a defined subset of a set of historical cases, such as selecting a contact force threshold that is highly correlated with positive results (e.g., ensuring positive results from historical data, ensuring positive results for a selected ratio of cases from historical data, etc.). Further, in some examples, a trained machine learning model may be used to analyze such input data corresponding to a particular case (such as a particular surgical procedure, a particular stage of a surgical procedure, a particular action of a surgical procedure, a particular surgeon, a particular patient, a particular anatomical structure, etc.) and select a contact force threshold. For example, the trained machine learning model may be used to analyze the image data of the surgical procedure, the determined identity of the anatomical structure, the determined condition of the anatomical structure, and/or the current state of the surgical procedure to select the contact force threshold.

In some implementations, the machine learning model can be trained using training examples to determine contact characteristics (such as contact position, contact angle, contact force) from images and/or videos, and can be used to analyze video clips and determine characteristics of actual contacts occurring in the surgical procedure, such as actual contact position, actual contact angle, actual contact force, and the like. Examples of training examples may include image data depicting a particular contact, and markers indicating characteristics of the particular contact, such as contact position, contact angle, contact force, and the like. For example, the training examples may include measurements of contact force collected using a sensor (e.g., a sensor embedded in the medical instrument). In another example, the training examples may include an estimate of the contact force included in the medical record (e.g., an estimate of the contact force stored in the record, an estimate based on sensor data, or a surgeon's opinion).

In some embodiments, selecting the contact force threshold may be based on one or more actions performed by the surgeon. For example, a method may include the steps of: the image data is analyzed to identify a motion performed by a surgeon (e.g., a human or a surgical robot), for example, using a motion recognition algorithm. In one example, the selected contact force threshold may be based on historical data relating one or more actions performed by the surgeon, the contact force, and the outcome. For example, a contact force threshold may be selected that is highly correlated with positive outcome (e.g., that ensures positive outcome from historical data, ensures positive outcome for a selected rate case from historical data, etc.). In one example, the data structure may specify contact force thresholds for different actions. In one example, the contact force threshold may be based on a skill level of the surgeon, consistent with the disclosed embodiments.

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: an indication of an actual contact force on the anatomical structure is received. The indication of the actual contact force may be directly or indirectly associated with contact between a surgeon (e.g., a human or robotic surgeon) and the anatomical structure. For example, the actual contact force may be associated with contact between the medical instrument and the anatomical structure (e.g., between the anatomical structure and a retractor, a scalpel, a surgical clip, a drill, a bone cutter, a saw, scissors, forceps, and/or any other medical instrument). In some embodiments, the actual force may be associated with a level of tension, a level of retraction, a level of pressure, and/or a level of compression. The indication may include an estimate of contact force (including contact level), consistent with the disclosed embodiments. More generally, the indication of actual force may include any indication of any contact force (as described herein) applied during the surgical event. In one example, the indication of actual contact force may include at least one of an indication of an angle of contact, an indication of a magnitude or level of contact force, and an indication of a type of contact force, among others.

In some embodiments, the indication of actual contact force may be estimated based on image analysis of the image data. Image analysis of the image data to estimate the indication of contact force may include any image analysis method as disclosed herein. In some embodiments, the indication of the contact force may be based on an image analysis method that correlates the contact force to a change in the anatomy (e.g., a deformation of the anatomy), a position of the surgeon or surgical instrument, a motion of the surgeon and/or surgical instrument, and/or any other characteristic of the surgical event. In some embodiments, the indication of actual contact force may be estimated using a regression model that fits historical data relating contact force to characteristics of the surgical event. Also, an indication of actual contact force may be estimated using a machine learning model, for example as described above.

In some embodiments, the indication of the actual contact force may be based on sensor data that directly or indirectly measures the force. For example, the actual force may be based on a force sensor that measures the force at the contact location between the medical instrument or surgical robot and the anatomical structure (e.g., a force sensor embedded in the medical instrument or robot). In an exemplary embodiment, the indication of the actual contact force may be received from a surgical tool or other medical instrument. Similarly, an indication of the actual contact force may be received from the surgical robot.

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: the indicated actual contact force is compared to the selected contact force threshold, which may include determining whether the actual contact force exceeds or fails to exceed the selected contact force threshold. Comparing the indicated actual contact force to the selected contact force threshold may comprise: the difference, ratio, logarithm, and/or any other function of the actual contact force from the selected contact force threshold is calculated.
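
A minimal sketch of this comparison step is shown below; the returned fields simply compute the functions named above, and the function and field names are illustrative rather than required.

```python
# Sketch of the comparison step: compute simple functions (difference, ratio,
# logarithm of the ratio) of the indicated actual force and the selected
# threshold, and decide whether the threshold is exceeded.
import math

def compare_contact_force(actual_force: float, threshold: float) -> dict:
    return {
        "difference": actual_force - threshold,
        "ratio": actual_force / threshold if threshold else float("inf"),
        "log_ratio": (math.log(actual_force / threshold)
                      if actual_force > 0 and threshold > 0 else None),
        "exceeds": actual_force > threshold,
    }
```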

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: outputting a notification based on a determination that the indicated actual contact force exceeds the selected contact force threshold. Outputting the notification may include sending a suggestion to a device, displaying the notification at an interface, playing a sound, providing haptic feedback, and/or any other method of notifying the individual that excessive force was applied. The notification may be output to a device in the operating room, a device associated with a surgeon (e.g., a human surgeon and/or a surgical robot), and/or any other system. For example, outputting the notification may include: sending the notification to a computer, a mobile device, an external device, a surgical robot, and/or any other computing device. In another example, outputting the notification may include: the notification is recorded in a file.

In some embodiments, the notification may include information indicating that the contact force has exceeded or failed to exceed the selected contact force threshold. In some embodiments, the notification may include information associated with the selected contact force and/or the estimate of the actual contact force, including an indication of the contact angle, the magnitude of the contact force, the contact location, and/or other information related to the contact force.

In some examples, notifications of different intensities (i.e., severity or magnitude) may be provided as a function of the actual force indication. For example, the output notification may be based on a difference between the indicated actual force and a selected force threshold or a comparison of the indicated actual force to multiple thresholds. The notification may be based on the actual force intensity level or the intensity of the difference between the actual force and the selected force threshold. In some implementations, the notification can include information specifying the intensity level.
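
One simple way to grade notification intensity against multiple thresholds is sketched below; the threshold values and level names are arbitrary examples chosen for illustration.

```python
# Illustrative sketch: grade notification intensity by comparing the indicated
# actual force against several thresholds (lowest to highest).
def notification_intensity(actual_force: float,
                           thresholds=(5.0, 10.0, 20.0)) -> str:
    levels = ["info", "warning", "critical"]
    level = "none"
    for t, name in zip(thresholds, levels):
        if actual_force > t:
            level = name  # keep the highest level whose threshold is exceeded
    return level
```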

Consistent with the present embodiment, notifications may be output in real time during a surgical procedure to provide a warning to the surgeon performing the surgical procedure. In some embodiments, the notification may include instructions to the surgical robot to change the force application. As an illustrative example, the notification may include instructions to alter the magnitude, angle, and/or position of the contact force.

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: it is determined from the received image data that the surgical procedure is in a combat mode, where unusual measures may be required. In such a case, a typical contact force threshold may be suspended. Determining from the received image data that the surgical procedure may be in a combat mode may include using an image analysis method, as disclosed herein. For example, certain physiological responses and/or surgical activities depicted in the video may indicate that the surgery is in a combat mode. The combat mode determination may include using statistical models (e.g., regression models) and/or machine learning models, such as models trained to identify combat modes using historical examples of surgical videos classified as depicting portions of the surgical procedure in and out of combat mode. In some implementations, the notification may be suspended during the combat mode. For example, the output notification may be delayed indefinitely, or at least until it is determined that the surgical procedure may not be in a combat mode. In some implementations, the output notification may be delayed for a predetermined period of time (e.g., a few minutes or any other period of time). In other examples, the type of notification output may be determined based on whether the patient undergoing the surgical procedure is in a combat mode. In some examples, the contact force threshold may be selected based on whether a patient undergoing a surgical procedure is in a combat mode.

In some embodiments, a method of estimating contact force on an anatomical structure may comprise the steps of: it is determined from the received image data that the surgeon may be operating in a mode that ignores the contact force notification. The contact force notification may include a notification that includes information related to the contact force (e.g., the actual contact force and/or the selected contact force threshold). In some embodiments, determining that the surgeon is likely to be operating in a mode that ignores contact force notifications may include: analyzing one or more indications of actual contact force after the one or more contact force notifications. For example, embodiments may include: after outputting the one or more contact force notifications, it is determined whether the one or more actual contact force indications exceed or fail to exceed the selected contact force threshold. Determining from the received image data that the surgeon is likely to be performing the procedure while ignoring contact force notifications may include using image analysis methods, and may include using statistical models (e.g., regression models) and/or machine learning models. Such a machine learning model may be trained, using historical examples of surgical videos classified as depicting surgeons who ignore or do not ignore contact force notifications, to determine that a surgeon is likely to be operating in a mode that ignores contact force notifications.

Embodiments may include: further contact force notifications are suspended (delayed) at least temporarily based on determining that the surgeon is likely to be operating in a mode that ignores contact force notifications. In some embodiments, the contact force notification may resume after a predetermined period of time (e.g., a few minutes or any other period of time).

Fig. 30 is a flow chart illustrating an exemplary process 3000 of estimating contact force on an anatomical structure consistent with the disclosed embodiments. Process 3000 may be performed by at least one processor, such as one or more microprocessors. In some embodiments, process 3000 is not necessarily limited to the illustrated steps, and any of the various embodiments described herein may also be included in process 3000. As will be appreciated by one skilled in the art, the steps of process 3000 may be performed by a system, for example, including the components of system 1401. In some embodiments, a non-transitory computer-readable medium is provided that contains instructions that, when executed by at least one processor, cause the at least one processor to perform operations for estimating a contact force on an anatomical structure according to process 3000. In some embodiments, process 3000 may be performed in real-time during a surgical procedure.

At step 3002, the process may include: image data of a surgical procedure is received from at least one image sensor in an operating room, as previously described by various examples. The image sensor may be placed anywhere in any operating room and the image data may include any video data, data representing images, and/or metadata.

At step 3004, the process may include: the received image data is analyzed to determine the identity of the anatomical structure and to determine a condition of the anatomical structure reflected in the image data, consistent with the disclosed embodiments and as previously described by way of example. Analyzing the received image data may include any image analysis method, as previously described, and the condition of the anatomical structure may refer to any information indicative of a state or feature of the anatomical structure. As previously discussed, analyzing the received image data may include: a condition of an anatomical structure in the image data is determined using a machine learning model trained using the training examples.

At step 3006, the process may include: a contact force threshold associated with the anatomical structure is selected, the selected contact force threshold being based on the determined condition of the anatomical structure. As discussed in more detail previously, selecting a contact force threshold may be based on data indicative of a suggested contact force (e.g., a maximum safe force or a minimum effective force). Selecting the contact force threshold may be based on the location and/or angle of the contact force, and may include implementing a model (e.g., a statistical model such as a regression model and/or a machine learning model). Further, an anatomical table including corresponding contact force thresholds may be used as part of selecting the contact force threshold. The contact force threshold may be associated with a tension level or a compression level. In some examples, selecting the contact force threshold may include selecting the contact force threshold using a machine learning model trained with the training examples. Further, selecting the contact force threshold may be based on one or more actions performed by the surgeon. Other non-limiting examples of selecting a contact force threshold are described above.
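
One possible form of the anatomical table mentioned above is sketched below; the structures, conditions, and threshold values are hypothetical examples used only to show the lookup, not clinical guidance.

```python
# Sketch of one possible data structure for step 3006: a table mapping
# (anatomical structure, determined condition) to a contact force threshold.
CONTACT_FORCE_TABLE = {
    ("liver", "healthy"): 8.0,
    ("liver", "fragile"): 4.0,
    ("artery", "healthy"): 3.0,
    ("artery", "inflamed"): 1.5,
}

def select_contact_force_threshold(structure: str, condition: str,
                                   default: float = 2.0) -> float:
    """Return the threshold for the identified structure and condition,
    falling back to a default when no entry exists."""
    return CONTACT_FORCE_TABLE.get((structure, condition), default)
```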

At step 3008, the process may include: an indication of an actual contact force on the anatomical structure, such as a force associated with contact between the medical instrument and the anatomical structure, is received. The actual force may be associated with a tension level, a retraction level, a pressure level, and/or a compression level. The indication of the actual contact force may be estimated based on an image analysis of the image data. The indication of the actual contact force may be based on sensor data that directly or indirectly measures the force. In some embodiments, the indication of the actual contact force may be estimated based on image analysis of the image data and/or may be an indication of the actual contact force received from a surgical tool, surgical robot, or other medical instrument.

At step 3010, the process may include: the indicated actual contact force is compared to the selected contact force threshold, as previously discussed. Comparing the indicated actual contact force to the selected contact force threshold may comprise: the difference, ratio, logarithm, and/or any other function of the actual contact force from the selected contact force threshold is calculated.

At step 3012, the process may include: a notification is output based on a determination that the indicated actual contact force exceeds the selected contact force threshold, as previously described. The output notification may be performed in real-time during the ongoing surgical procedure. For example, outputting the notification may include: providing real-time warnings to the surgeon performing the surgical procedure or instructions to the surgical robot.
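
Tying steps 3002 through 3012 together, a compact sketch of one possible monitoring loop is shown below. It reuses the illustrative helpers sketched earlier; the structure/condition detector and the notification callback are passed in as hypothetical callables, since their implementations are not specified here.

```python
# Compact sketch of one way to chain steps 3002-3012. "detect" is assumed to
# return (structure, condition) for a frame; "notify" delivers the warning.
def run_contact_force_monitor(frame_stream, force_model, detect, notify):
    for frame in frame_stream:                                   # step 3002: receive image data
        structure, condition = detect(frame)                     # step 3004: identify structure/condition
        threshold = select_contact_force_threshold(structure, condition)  # step 3006
        actual = estimate_contact_force(force_model, frame)      # step 3008: indication of actual force
        result = compare_contact_force(actual, threshold)        # step 3010: compare to threshold
        if result["exceeds"]:                                    # step 3012: output notification
            notify(f"Contact force {actual:.1f} exceeds threshold "
                   f"{threshold:.1f} on {structure}")
```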

The disclosed systems and methods may involve analyzing current and/or historical surgical clips to identify characteristics of the surgical procedure, patient condition, and other characteristics for updating a predicted surgical outcome. During the course of a surgical procedure, conditions may change, or events may reveal changes in the predicted outcome of the surgical procedure. Conventional methods of performing surgery may lack a decision support system that updates predicted outcomes in real time as surgical events occur, based on those surgical events. As a result, the surgeon may not be aware of the likely surgical outcome, and thus may not perform actions that could improve the outcome or avoid worsening it. Accordingly, aspects of the present disclosure relate to unconventional methods of updating predicted surgical outcomes efficiently, and in real-time.

In accordance with the present disclosure, systems, methods, and computer-readable media may be provided for updating a predicted outcome during a surgical procedure. For example, the image data may be analyzed to detect a change in the prediction results, and a remedial action may be communicated to the surgeon. The predicted outcome may include an outcome that may occur with an associated confidence or probability (e.g., likelihood). For example, the predicted outcome may include: complications, health status, recovery period, death, disability, internal bleeding, hospital readmission after surgery, and/or any other surgical event. In some embodiments, the predicted outcome comprises a score, such as a lower urinary tract symptom (LUT) outcome score. More generally, the predicted outcome may include any health indicators associated with the surgical procedure.

In some embodiments, the predicted outcome may include a likelihood of readmission, such as a likelihood of a patient undergoing a surgical procedure being readmitted within a particular time interval after the patient is discharged from the hospital after the surgical procedure. The readmission may be based on health conditions associated with the surgical procedure, or may be based on other factors. For example, the likelihood of readmission may be based on patient characteristics (e.g., age, previous health status, family history, vital signs, and/or other patient-related data). The readmission may be defined for different time intervals (e.g., within 24 hours, within a week, within a month, or within another period of time).

In some implementations, the predicted outcome may be based on at least one model, such as a statistical model and/or a machine learning model. For example, the predicted outcome may be based on a statistical correlation between information associated with the surgical procedure (e.g., patient characteristics and/or surgical events) and historical outcomes. The predicted outcome may be generated by a machine learning model that is trained using training examples (e.g., using historical data-based training examples) to associate the outcome with information associated with the surgical procedure (e.g., patient characteristics and/or surgical events).

The disclosed embodiments may include receiving image data associated with a first event during a surgical procedure from at least one image sensor configured to capture an image of the surgical procedure, consistent with the disclosed embodiments. The image data associated with the first event may include: still images, image frames, clips, and/or video related data associated with a surgical procedure. The first event may include any surgical event, consistent with the disclosed embodiments. In an illustrative embodiment, the first event may comprise an action performed by a surgeon (e.g., a human or robotic surgeon). In another example, the first event may include a physiological response to the action. In yet another example, the first event may include a change in a condition of the anatomical structure. Some other non-limiting examples of such surgical events are described above. The image data associated with the first event may be received in memory and/or a data structure, as described herein by way of example.

The image sensor may include any image sensor (e.g., a camera or other detector) as also described herein. In some embodiments, the image sensor may be positioned in an operating room. For example, the image sensor may be positioned above or within a patient undergoing surgery (e.g., an intracavity camera).

The disclosed embodiments may include determining a predicted outcome associated with the surgical procedure based on the received image data associated with the first event, consistent with the disclosed embodiments. The predicted outcome may include any health outcome associated with the surgical procedure, as described above. For example, the predicted outcome may include a likely occurrence that is somehow related to the first event. The predictions may be binary (e.g., likely to result in rupture versus unlikely to result in rupture), or the predictions may provide relative confidence or probability (e.g., a chance-of-rupture percentage; a chance of rupture on a scale of 1 to 5, etc.). The determined predictive outcome may include a score (e.g., a LUT outcome score) that reflects a characteristic of the outcome, such as post-operative health status. The predicted outcome may be associated with a confidence or probability.

As mentioned in the previous paragraph, the first event may include any intraoperative occurrence. For example, the first event may include: actions performed by the surgeon, changes in patient characteristics, changes in the condition of the anatomy, and/or any other details. In some implementations, at least one point in time associated with a first event may be received such that an indicator of a time at which the event occurred is received in addition to an indicator of the event itself. The point in time may coincide with a counter on the video timeline, or may include any other marker or indicator that reflects the absolute or relative time at which the event occurred.

Some embodiments may involve identifying an event, such as a first event. For example, such identification may be based on detection of the medical instrument, the anatomical structure, and/or interaction between the medical instrument and the anatomical structure, which may be detected using video analysis techniques described throughout this disclosure. For example, events may be identified by analyzing image data using a machine learning model as described above.

In some embodiments, determining the prediction result may include: interactions between the surgical tool and the anatomical structure are identified, and a predicted outcome is determined based on the identified interactions. For example, interactions between the surgical tool and the anatomical structure may be identified by analyzing the image data, e.g., as described above. Further, in one example, a first outcome may be predicted in response to a first identified interaction, and a second outcome may be predicted in response to a second identified interaction, the second outcome may be different from the first outcome. In another example, a machine learning model may be trained using a training example to predict an outcome of a surgical procedure based on interactions between a surgical tool and an anatomical structure, and the trained machine learning model may be used to predict the outcome based on the identified interactions. Examples of such training examples may include indications of interactions between the surgical tool and the anatomical structure, as well as markers indicating desired predicted outcomes. The desired prediction result may be based on analysis of historical data, based on user input (such as expert opinions), and the like.
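
As a hedged illustration of this idea, the sketch below maps an identified tool-anatomy interaction (plus a simple context feature) to a predicted-outcome probability. The one-hot encoding, the tool and structure vocabularies, and the use of logistic regression are assumptions made for the example.

```python
# Sketch: a classifier that maps an identified interaction to the probability
# of an adverse outcome. Encoding and label source are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

TOOLS = ["scalpel", "retractor", "forceps"]
STRUCTURES = ["liver", "gallbladder", "artery"]

def encode_interaction(tool: str, structure: str, duration_s: float) -> np.ndarray:
    """One-hot encode the tool and structure, append the interaction duration."""
    return np.array([float(tool == t) for t in TOOLS]
                    + [float(structure == s) for s in STRUCTURES]
                    + [duration_s])

def train_outcome_model(interactions, outcomes):
    """interactions: (tool, structure, duration) tuples; outcomes: 1 = adverse, 0 = not."""
    X = np.stack([encode_interaction(*i) for i in interactions])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, np.asarray(outcomes))
    return model

def predicted_adverse_probability(model, interaction) -> float:
    return float(model.predict_proba(encode_interaction(*interaction)[None, :])[0, 1])
```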

In some embodiments, determining the prediction result may be based on a skill level of the surgeon depicted in the image data (such as data previously stored in a data structure). The skill level of the surgeon may be determined based on an analysis of the image data, for example as described above. For example, a facial recognition algorithm may be applied to the image data to identify a known surgeon, and the corresponding skill level may be retrieved from a data structure (such as a database). In some embodiments, the skill level of the surgeon may be determined based on the sequence of events identified in the image data (e.g., based on the length of time that one or more actions are to be performed, based on patient reactions during the surgical procedure detected in the image data, and/or other information indicative of the skill level of the surgeon). In one example, a first outcome may be predicted in response to a first determined skill level, and a second outcome may be predicted in response to a second determined skill level, the second outcome may be different from the first outcome. In another example, a machine learning model may be trained using training examples to predict an outcome of a surgical procedure based on a skill level of a surgeon, and the trained machine learning model may be used to predict the outcome based on the determined skill level. Examples of such training examples may include an indication of the skill level of the surgeon, and a flag indicating the desired predicted outcome. The desired prediction result may be based on analysis of historical data, based on user input (such as expert opinions), and the like.

In some instances, determining the prediction result may also be based on a condition of an anatomical structure depicted in the image data. For example, the predicted outcome may be based on historical outcomes associated with organ conditions. The likelihood of complications for organs in poor condition may be greater than, for example, for organs in better condition. In some embodiments, the condition of the anatomical structure may be based on an analysis of the image data as described throughout the present disclosure. The condition of the anatomical structure may be transient or chronic, and/or include a medical condition, such as a condition being treated by surgery or a separate medical condition. The condition of the anatomical structure may be indicated by color, texture, size, hydration level, and/or any other observable feature. In one example, a first outcome may be predicted in response to a first determined condition of the anatomical structure and a second outcome may be predicted in response to a second determined condition of the anatomical structure, the second outcome may be different from the first outcome. In another example, a machine learning model may be trained using a training example to predict an outcome of a surgical procedure based on a condition of an anatomical structure, and the trained machine learning model may be used to predict the outcome based on the determined condition of the anatomical structure. Examples of such training examples may include an indication of a condition of the anatomical structure, and a label indicating a desired prediction result. The desired prediction result may be based on analysis of historical data, based on user input (such as expert opinions), and the like.

Additionally or alternatively, the prediction result may be determined based on an estimated contact force on the anatomical structure. For example, applying too much force to the anatomy may lead to less likely favorable results. For example, the contact force may be estimated by analyzing the image data, e.g. as described above. In another example, the contact force may be received from a sensor, for example, as described above. In one example, a first outcome may be predicted in response to a first estimated contact force, and a second outcome may be predicted in response to a second estimated contact force, the second outcome may be different from the first outcome. In another example, a machine learning model may be trained using a training example to predict an outcome of a surgical procedure based on a contact force on an anatomical structure, and the trained machine learning model may be used to predict the outcome based on the estimated contact force. Examples of such training examples may include an indication of contact force, and a flag indicating a desired prediction result. The desired prediction result may be based on analysis of historical data, based on user input (such as expert opinions), and the like.

Determining the prediction result may be performed in various ways. Determining the predicted outcome may include using a machine learning model trained to determine the predicted outcome based on the historical surgical video and information indicative of surgical outcomes corresponding to the historical surgical video. For example, the received image data of the first event may be analyzed using an artificial neural network configured to predict the outcome of the surgical procedure from the images and/or video. As another example, determining the prediction result may include: a first event is identified based on the received image data, and a model (e.g., a statistical model or a machine learning model) is applied to information related to the first event to predict an outcome. Such a model may receive input including information related to the first event (e.g., an identifier of the first event, a duration of the first event, and/or other characteristics of the first event, such as surgical contact force) and/or information related to the surgical procedure (e.g., patient characteristics, skill level of the surgeon, or other information). Based on inputs such as the examples provided above, the system may return the prediction results as output.

The disclosed embodiments may include receiving image data associated with a second event during the surgical procedure from at least one image sensor configured to capture an image of the surgical procedure, consistent with the disclosed embodiments. The second event may occur after the first event and may be different from the first event. At least one point in time associated with a second event may be received. The image sensor used to capture data associated with the second event may be the same as or different from the image sensor used to capture data associated with the first event.

The disclosed embodiments may include determining a change in the prediction result that reduces the prediction result below a threshold based on the received image data associated with the second event. For example, using any of the above-described methods of determining a prediction result, a new prediction result may be determined and compared to a previously determined prediction result (such as a prediction result determined based on received image data associated with the first event), thereby determining a change in the prediction result. In another example, the new prediction result may be determined based on an analysis of a previously determined prediction result (such as a prediction result determined based on received image data associated with the first event) and received image data associated with the second event. For example, the machine learning model may be trained using the training examples to determine a new prediction result based on previous prediction results and images and/or video, and the trained machine learning model may be used to analyze the previously determined prediction results and received image data associated with the second event to determine the new prediction result. Examples of such training examples may include previously determined prediction results and image data depicting events, as well as markers indicating the new prediction results. In another example, a Markov model may be used to update a previously determined prediction and obtain a new prediction, where the transitions in the Markov model may be based on a value determined by analyzing received image data associated with the second event. As discussed, the predicted outcome may include a probability, a confidence, and/or a score (e.g., a LUT outcome score) that reflects a characteristic of the outcome, such as post-operative health status. Determining a change in the predicted outcome may involve a change in such confidence, probability, or score. In some examples, a change in the prediction may be determined without calculating a new prediction. For example, a machine learning model may be trained using training examples to determine a change in the prediction results based on previous prediction results and images and/or video, and the trained machine learning model may be used to analyze the previously determined prediction results and received image data associated with the second event to determine an occurrence of the change in the prediction results. Examples of such training examples may include previously determined prediction results and image data depicting an event, and a flag indicating whether the prediction result has changed in response to the second event.
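
One simple form of the "compare new prediction to previous prediction and check a threshold" path is sketched below. The scoring function is passed in as a hypothetical callable (it could be any of the models described above), and the default threshold value is an arbitrary example.

```python
# Sketch: update a previously determined prediction when a second event is
# detected, and flag when the updated prediction falls below a threshold.
def update_predicted_outcome(score_positive_outcome, previous_prob,
                             second_event_features, threshold: float = 0.5):
    """score_positive_outcome: callable returning the probability of a positive
    outcome given event features (an assumption for this sketch)."""
    new_prob = score_positive_outcome(second_event_features)
    change = new_prob - previous_prob
    fell_below_threshold = previous_prob >= threshold > new_prob
    return new_prob, change, fell_below_threshold
```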

In some implementations, a change in confidence, probability, and/or score may result in a predicted outcome below a threshold (e.g., threshold confidence, threshold probability, threshold score). Such thresholds may be automatically generated using artificial intelligence methods, may be determined based on user input, and so forth. The threshold may correspond to a negative outcome (such as readmission, complications, death, or any other undesirable eventuality) or to a positive outcome.

In some illustrative embodiments, determining a change in prediction results may be based on an elapsed time between two markers. For example, a duration between the incision and the suture that exceeds a threshold may be used as an indicator of increased likelihood of infection. For example, a change in the prediction may be determined in response to a first elapsed time, and no change in the prediction may be determined in response to a second elapsed time.

In some examples, two or more variables may be associated with a positive outcome or a negative outcome, e.g., using statistical methods, using machine learning methods, etc. The list of possible variables is endless. Such variables may relate to the condition of the patient, the surgeon, the complexity of the procedure, complications, tools used, time elapsed between two or more events, or any other variable or combination of variables that may have some direct or indirect effect on the predicted outcome. One such variable may be a fluid leak (e.g., magnitude, duration, or determined source). For example, determining a change in the prediction outcome may be based on a magnitude of bleeding. Characteristics of the fluid leak (e.g., amount of bleeding, source of bleeding) may be determined based on analysis of the image data.

Disclosed embodiments may include: determining a skill level of a surgeon depicted in the image data, and determining a change in the predicted outcome may be based on the skill level. For example, determining a change in the prediction outcome may be based on an updated estimate of the skill level of the surgeon (e.g., the image analysis may determine that the surgeon has made one or more errors, resulting in a decline in the estimate of the skill level). As another example, the previously determined prediction result may be based on a skill level of a first surgeon, and the change in the prediction result may be based on a skill level of a second surgeon who steps in to assist. The skill level may be determined in various ways, as described herein (e.g., via image analysis as described above and/or by retrieving the skill level from a data structure).

As an additional example, determining a change in the prediction may be based on one or more changes in color, texture, size, condition, or other appearance or feature of at least a portion of the anatomical structure. Examples of conditions of the anatomical structure that may be used for outcome prediction may include: vitality, oxygenation level, hydration level, crisis level, and/or any other indicator of the state of the anatomical structure.

The condition of the anatomical structure may be determined in a variety of ways, such as by a machine learning model trained with examples of known conditions. In some embodiments, the object recognition model and/or the image classification model may be trained using historical examples and implemented to determine a condition of the anatomical structure. The training may be supervised and/or unsupervised. Some other non-limiting examples of methods for determining the condition of an anatomical structure are described above.

Embodiments may include a variety of methods of determining a predicted outcome based on a condition of an anatomical structure and/or any other input data. For example, a regression model may be fit to historical data including conditions of the anatomical structure and corresponding results. More generally, using historical data, a regression model may be fit to predict an outcome based on one or more of a variety of input data, including the condition of the anatomical structure, patient characteristics, skill level of the surgeon, estimated contact forces, source of fluid leakage, degree of fluid leakage, and/or any other input data related to the surgical procedure. The outcome may be predicted based on other known statistical analyses, including, for example, based on a correlation between input data and outcome data associated with the surgical procedure.

Disclosed embodiments may include: a data structure based on image-related data of a prior surgical procedure is accessed, consistent with the disclosed embodiments. Accessing may include reading data from and/or writing data to the data structure. In some embodiments, this may be done using a data structure such as that presented in FIG. 17 or a data structure such as that presented in FIG. 6. The image-related data may comprise any data derived directly or indirectly from the image. This data may include, for example: patient characteristics, surgeon characteristics (e.g., skill level), and/or surgical characteristics (e.g., identifier of the surgical procedure, expected duration of the surgical procedure). The image-related data may include correlations or other data describing statistical relationships between historical intraoperative surgical events and historical outcomes. In some embodiments, the data structure may include: data related to the suggested action, alternative course of action, and/or other actions that may change the probability, likelihood, or confidence of the surgical outcome. For example, the data structure may include information that relates the interruption of the surgical procedure to an improved outcome. Depending on the implementation, the data structure may include information relating the skill level of the surgeon, requesting assistance from another surgeon, and the results. Similarly, the data structure may store relationships between surgical events, actions (e.g., remedial actions), and results. Although a large number of correlation models may be used for prediction as discussed throughout this disclosure, exemplary prediction models may include: fitting a statistical model of historical image-related data (e.g., information related to remedial action) and results; and a machine learning model trained using training data based on the historical examples to predict a result based on the image-related data.
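
A minimal sketch of one possible in-memory layout for such a data structure is shown below. The field names, event labels, and outcome rates are assumptions introduced for illustration; they are not a schema prescribed by this disclosure.

```python
# Sketch: records relating an intraoperative event, a candidate remedial
# action, and a historically observed positive-outcome rate.
from dataclasses import dataclass

@dataclass
class RemedialRecord:
    event: str                     # e.g., "bleeding_above_threshold"
    action: str                    # e.g., "summon_senior_surgeon"
    positive_outcome_rate: float   # fraction of historical cases with a positive outcome

HISTORICAL_RECORDS = [
    RemedialRecord("bleeding_above_threshold", "apply_pressure", 0.72),
    RemedialRecord("bleeding_above_threshold", "summon_senior_surgeon", 0.81),
    RemedialRecord("prolonged_dissection", "switch_instrument", 0.66),
]
```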

The disclosed embodiments may include identifying a suggested remedial action based on the accessed image-related data. For example, the suggested remedial action may include suggesting that the surgeon use a different tool or procedure, administer a drug, request assistance from another surgeon, make corrections to the surgery, pause the surgery to rest (e.g., to regain alertness), and/or take any other action that may affect the outcome. When the suggested remedial action includes a suggestion to request assistance, the suggestion may be to summon a surgeon with a higher or different level of experience than the operating surgeon. A remedial action that modifies the surgery may include suggesting additional actions to be performed instead of a previously planned portion of the surgery, or avoiding certain intended actions.

Identifying the remedial action may be based on an indication derived at least in part from the image-related data that the remedial action is likely to raise the prediction above a threshold. For example, the data structure may contain a correlation between historical remedial actions and predicted outcomes, and the remedial actions may be identified based on the correlation. In some implementations, identifying the remedial action may include using a machine learning model that is trained using historical examples of the remedial action and the surgical outcome to identify the remedial action. The training may be supervised or unsupervised. For example, the machine learning model may be trained using training examples to identify the remedial action, and the training examples may be based on an analysis of the remedial action and historical examples of the surgical outcome.
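
The following sketch shows one simple way to pick a suggested remedial action from the illustrative records above: among the actions associated with the current event, return the one whose historical positive-outcome rate clears the threshold and is highest. The threshold value is an arbitrary example.

```python
# Sketch: identify a remedial action likely to raise the prediction above a
# threshold, using the illustrative HISTORICAL_RECORDS structure above.
def identify_remedial_action(event: str, records, threshold: float = 0.75):
    candidates = [r for r in records
                  if r.event == event and r.positive_outcome_rate > threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r.positive_outcome_rate).action

# Example usage (returns "summon_senior_surgeon" with the records above):
# identify_remedial_action("bleeding_above_threshold", HISTORICAL_RECORDS)
```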

The disclosed embodiments may include outputting the suggested remedial action. Outputting the suggested remedial action may include sending the suggestion to a device, causing a notification to be displayed on an interface, playing a sound, providing haptic feedback, and/or any other method of communicating a desired message, whether sent to an operating room, a device associated with a surgeon (e.g., a human surgeon and/or a surgical robot), and/or any other system. For example, outputting the suggested remedial action may include: sending the notification to a computer, a mobile device, an external device, a surgical robot, and/or any other computing device.

Further, in some embodiments, a method may include the steps of: in response to the prediction decreasing below the threshold, a scheduling record associated with the operating room associated with the surgical procedure is updated. For example, a change in the expected duration of a surgical procedure may trigger an automated change in the scheduling record so that the surgical procedure for the next patient is pushed back in time to account for the delay of the current procedure. More generally, any change in predicted outcome may be associated with an increase or decrease in expected duration. In some embodiments, a data structure (e.g., the data structure of fig. 17) may relate the predicted outcome to an expected duration of the corresponding surgical procedure. A model (e.g., a regression model or a trained machine learning model) may be used to generate the expected duration based on the prediction results, consistent with the present embodiments. Thus, if a change in the prediction results affects the duration of the surgical procedure, the surgical schedule may be automatically updated to notify subsequent medical personnel of the change in the operating room schedule. The update may be automatically displayed on the electronic operating room dispatch board. Alternatively or additionally, the update may be broadcast via email or other messaging application (app) to accounts associated with the affected medical professionals. The scheduling may be linked to the prediction result in question, but may also be linked to other factors. For example, even if the predicted outcome changes, machine vision analysis performed on a video clip of a surgical procedure may reveal that the surgical procedure is behind (or ahead) of schedule, and updates to the schedule may be automatically pushed, as previously discussed.
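
A hedged sketch of the scheduling update is shown below: when the expected duration of the current procedure grows, subsequent cases in the same operating room are pushed back. The schedule layout and field names are assumptions for the example.

```python
# Sketch: push back subsequent cases in an operating room by a delay.
# "schedule" is assumed to be a list of dicts such as
# {"room": "OR-3", "start": datetime(...), "procedure": "..."}.
from datetime import datetime, timedelta

def push_back_schedule(schedule, room: str, after: datetime, delay: timedelta):
    """Delay every entry in the given room that starts at or after `after`."""
    for entry in schedule:
        if entry["room"] == room and entry["start"] >= after:
            entry["start"] = entry["start"] + delay
    return schedule
```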

Fig. 31 is a flowchart illustrating an example process 3100 for updating a prediction outcome during a surgical procedure consistent with the disclosed embodiments. Process 3100 can be performed by at least one processor, such as one or more microprocessors. In some embodiments, process 3100 is not necessarily limited to the illustrated steps, and any of the various embodiments described herein may also be included in process 3100. As will be appreciated by one skilled in the art, the steps of process 3100 may be performed by, for example, a system including the components of system 1401. In some implementations, a non-transitory computer-readable medium contains instructions that, when executed by at least one processor, cause the at least one processor to perform operations to update a prediction result according to process 3100. In some embodiments, the process 3100 may be performed in real-time during a surgical procedure.

At step 3102, the process may include: image data associated with a first event during a surgical procedure is received from at least one image sensor configured to capture an image of the surgical procedure, consistent with disclosed embodiments. The image sensor may be positioned anywhere in the operating room (e.g., above the patient, within the patient), as previously discussed.

At step 3104, the process may include: based on the received image data associated with the first event, a predicted outcome associated with the surgical procedure is determined, as previously discussed and illustrated with examples. As discussed, for example, determining the prediction result may include: interactions between the surgical tool and the anatomical structure are identified, and a predicted outcome is determined based on the identified interactions. Determining the prediction result may be based on a skill level of the surgeon depicted in the image data. In some embodiments, determining the prediction outcome may be based on a condition of an anatomical structure depicted in the image data, and may include using a machine learning model trained to determine the prediction outcome based on the historical surgical video and information indicative of surgical outcomes corresponding to the historical surgical video. One example of a predictive outcome may include the likelihood of readmission. Other examples were previously provided.

At step 3106, the process may include: image data associated with a second event during the surgical procedure is received from at least one image sensor configured to capture an image of the surgical procedure, as previously discussed and illustrated with examples.

At step 3108, the process may include: based on the received image data associated with the second event, a change in the prediction result that reduces the prediction result below a threshold is determined, also as discussed previously. For example, determining a change in the predicted outcome may be based on the time elapsed between a particular point in the surgical procedure and a second event. In other examples, determining the change in the prediction result may be based on a magnitude of the bleeding, a color change of at least a portion of the anatomical structure, and/or an appearance change of at least a portion of the anatomical structure. Determining the condition of the anatomical structure may include: a condition of the anatomical structure is determined using a machine learning model trained using the training examples.

At step 3110, the process may include: a data structure based on image-related data of a prior surgical procedure is accessed, as previously discussed and illustrated with an example. As mentioned, a data structure such as that illustrated in fig. 17 may be accessed. This is merely an example, and many other types and forms of data structures may be employed consistent with the disclosed embodiments.

At step 3112, the process may include: a suggested remedial action is identified based on the accessed image-related data, as previously described. For example, the suggested remedial action may include: suggesting an alteration to the surgical procedure, using a different surgical tool, summoning another surgeon, revising the surgical procedure, taking a break, and/or any other action that may affect the outcome of the surgical procedure. Identifying the remedial action may include using a machine learning model that is trained using historical examples of the remedial action and the surgical outcome to identify the remedial action.

At step 3114, the process may include outputting the suggested remedial action, as previously described.

The disclosed systems and methods may involve analyzing current and/or historical surgical clips to identify characteristics of the surgery, patient condition, and other characteristics for detecting fluid leaks. During surgery, fluids may leak. For example, blood, bile or other fluids may leak from the anatomy. In general, the source and extent of fluid leakage may be unknown. Fluid leaks, if left unchecked, can lead to negative health consequences. Accordingly, aspects of the present disclosure relate to unconventional methods of automatically and efficiently determining the source and/or extent of fluid leakage during a surgical procedure.

In accordance with the present disclosure, systems, methods, and computer-readable media may be provided for analyzing fluid leaks during a surgical procedure. The analysis may be performed in real time during an ongoing surgical procedure. Embodiments may include providing information related to fluid leaks to a surgeon in real time. For example, analysis of fluid leaks may enable a surgeon to identify the magnitude and/or source of a fluid leak, which may enable the surgeon to perform remedial actions to mitigate the fluid leak. Fluid leaks may include fluid leaks from an organ or an interior of a tissue to a tissue or a space outside an organ (e.g., from inside a blood vessel to outside, from inside a gallbladder to outside, etc.). The leaked fluid may include blood, bile, chyme, urine, and/or any other type of bodily fluid.

Analysis of fluid leakage during a surgical procedure may include receiving intra-cavity video of the surgical procedure in real time, consistent with the disclosed embodiments. The intra-cavity video may be captured by an image sensor located within the patient, consistent with the disclosed embodiments. Alternatively, an image sensor located outside the patient's body may collect intra-cavity video (e.g., when the cavity is opened during surgery). Receiving the intra-cavity video in real-time may include receiving the video via a network or directly from an image sensor.

Consistent with the present embodiments, the intra-cavity video may depict various aspects of the surgical procedure. For example, the intra-cavity video may depict a surgical robot and/or a human surgeon performing some or all of the surgery. The intra-cavity video may depict: medical instruments, anatomy, fluid leakage situations, surgical events, and/or any other aspect of a surgical procedure, consistent with the disclosed embodiments.

Analysis of fluid leakage during surgery may involve analyzing frames of an intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video, consistent with the disclosed embodiments. Analyzing the frame may include determining an abnormal fluid leak using any image analysis method. For example, analyzing the image may include: a difference image (e.g., an image generated by subtracting pixel data of a previous image from pixel data of a subsequent image) is analyzed using a homography method, applying image registration techniques, and/or other image processing methods. The analysis may employ an object recognition model, a machine learning model, a regression model, and/or any other model. Such a model may be trained using training data including historical examples to determine abnormal fluid leakage situations. For example, the machine learning model may be trained using training examples to detect and/or determine characteristics of abnormal fluid leakage conditions from images and/or videos, and may be used to analyze the intra-cavity video and determine abnormal fluid leakage conditions and/or characteristics of abnormal fluid leakage conditions. Some non-limiting examples of such characteristics may include: the type of fluid, the magnitude of the fluid leak, the location of the fluid leak, the anatomy associated with the fluid leak, etc. Examples of such training examples may include the intraluminal image and/or the intraluminal video, and a flag indicating whether an abnormal fluid leak condition is depicted in the intraluminal image and/or the intraluminal video, and/or a flag indicating a characteristic of the abnormal fluid leak condition depicted in the intraluminal image and/or the intraluminal video.
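
As one small, hedged illustration of the difference-image path mentioned above, the sketch below flags a frame pair as showing an abnormal leak when the changed area between consecutive frames exceeds a threshold. The change threshold and pixel threshold are arbitrary example values, and a real system would combine this with color, motion, and learned models as described.

```python
# Sketch: flag an abnormal fluid leak from a pair of consecutive frames using
# a simple difference image (subtracting pixel data of the previous frame).
import cv2
import numpy as np

def abnormal_leak_in_pair(prev_frame: np.ndarray, next_frame: np.ndarray,
                          min_changed_fraction: float = 0.05) -> bool:
    diff = cv2.absdiff(next_frame, prev_frame)            # per-pixel difference image
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, changed = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
    changed_fraction = float(np.count_nonzero(changed)) / changed.size
    return changed_fraction > min_changed_fraction
```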

Determining an abnormal fluid leak condition (i.e., an abnormal fluid leak event) may include determining aspects of a fluid leak, including the presence of fluid in or on an anatomical structure, a fluid leak value above a threshold (e.g., above a predetermined amount, above a number of standard deviations), a fluid type (e.g., blood, bile, urine, chyme, and/or other types), a source and/or location of a fluid leak, and/or any other characteristic of a fluid leak condition. Some fluid leaks may be normal (e.g., below a threshold magnitude, in a location associated with a normal fluid leak, a fluid type normal for a particular surgical event, etc.), while other fluid leaks are abnormal (e.g., above a threshold magnitude, in an undesirable location, unconnected to a surgical event associated with a normal fluid leak, and/or an abnormal fluid type).

The disclosed techniques of determining the source of a leak may include identifying a ruptured anatomical organ, blood vessel, and/or other anatomical structure. The ruptured anatomical structures may be identified based on an analysis of fluid leak characteristics (e.g., magnitude, flow direction, color, or other fluid leak characteristics). The ruptured anatomical structure may include: any organ, blood vessel (e.g., artery), passageway (e.g., trachea), tissue (e.g., lining), and/or any other anatomical structure. As used herein, the term rupture may refer to any fracture, tear, perforation or other damage to an anatomical structure.

In some embodiments, the identified ruptured anatomical structure may be visible in image frames of intra-luminal video taken by an image sensor in an operating room (e.g., the room depicted in fig. 1). Alternatively or additionally, the ruptured anatomical structure may not be visible in the frame of the intra-cavity video (e.g., it may be occluded by other anatomical structures) and it may be identified based on information reflected in the frames (e.g., information about the fluid leak situation associated with the blood vessel). Identifying a ruptured structure may involve comparing a previous frame with a subsequent frame of the intra-cavity video by using a regression model, by using a machine learning model, by performing an object recognition method, and/or by any other image analysis method. For example, the machine learning model may be trained using training examples to identify a ruptured anatomical organ, blood vessel, and/or other anatomical structure from the intracavity images and/or intracavity videos, and the trained machine learning model may be used to analyze the intracavity videos to identify a ruptured anatomical organ, blood vessel, and/or other anatomical structure. Examples of such training examples may include an intra-luminal image and/or an intra-luminal video, and a marker indicating whether a ruptured anatomical organ, vessel, and/or other anatomical structure should be identified for the intra-luminal image and/or intra-luminal video.

Embodiments may include: the frames of the intra-cavity video are analyzed to identify blood splatter and at least one characteristic of the blood splatter. Blood splatter may refer to the presence of blood and/or leakage of blood. Identifying blood splatter may be based on color data of the intra-cavity video. In some embodiments, the characteristics of the blood splash can be correlated to the source of the blood splash, the intensity (rate) of the blood splash, the color of the blood splash, the viscosity of the blood splash, and/or the volume (magnitude) of the blood splash. More generally, the characteristics of blood splatter may include any characteristic of blood splatter. For example, the training examples may be used to train a machine learning model to identify and/or determine characteristics of blood splatter from images and/or videos, and the trained machine learning model may be used to analyze the intra-cavity video to identify blood splatter and/or characteristics of blood splatter. Some non-limiting examples of such characteristics of blood splash may include: the source of the blood splash, the intensity of the blood splash, the rate of blood splash, the color of the blood splash, the viscosity of the blood splash, the volume of the blood splash, the amount of the blood splash, and the like. Examples of such training examples may include the intraluminal image and/or the intraluminal video, and a marker indicating whether or not blood splash is depicted in the intraluminal image and/or the intraluminal video, and/or a marker indicating characteristics of the blood splash depicted in the intraluminal image and/or the intraluminal video.
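
A hedged sketch of a purely color-based starting point is shown below: candidate blood pixels are selected by an HSV range and summarized by covered area (a magnitude proxy) and centroid (a rough location). The HSV ranges are illustrative values, not validated clinical parameters, and a learned model as described above would typically replace or refine this.

```python
# Sketch: identify candidate blood splatter by color and report simple
# characteristics (area fraction and centroid).
import cv2
import numpy as np

def blood_splatter_characteristics(frame_bgr: np.ndarray):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0 in HSV, so combine two ranges
    mask = (cv2.inRange(hsv, (0, 120, 60), (10, 255, 255))
            | cv2.inRange(hsv, (170, 120, 60), (180, 255, 255)))
    area_fraction = float(np.count_nonzero(mask)) / mask.size
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean())) if xs.size else None
    return {"area_fraction": area_fraction, "centroid_xy": centroid}
```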

Embodiments may include: the frames of the intra-cavity video are analyzed to identify the ejection of blood and/or to identify at least one characteristic of the ejection of blood. In some examples, identifying blood ejection may be based on color data of the intra-cavity video, based on motion within the intra-cavity video, and/or the like. In some embodiments, the characteristics of the blood jet can be correlated to the source of the blood jet, the intensity (rate) of the blood jet, the color of the blood jet, the motion (such as speed, direction, etc.) of the blood jet, and/or the volume (magnitude) of the blood that is jetted. More generally, the characteristics of the blood jet may include any characteristic of the blood jetted and/or any characteristic of the jet. For example, the training examples may be used to train a machine learning model to identify and/or determine characteristics of blood ejection from images and/or videos, and the trained machine learning model may be used to analyze the intra-cavity video to identify blood ejection and/or characteristics of blood ejection. Examples of such training examples may include an intraluminal image and/or an intraluminal video, and a marker indicating whether or not a jet of blood is depicted in the intraluminal image and/or the intraluminal video, and/or a marker indicating a characteristic of the jet of blood depicted in the intraluminal image and/or the intraluminal video.

Further, analyzing the frames of the intra-cavity video may include: characteristics of the abnormal fluid leakage condition are determined. For example, the characteristic may be associated with a volume of the fluid leak, a color of the fluid leak, a type of fluid associated with the fluid leak, a rate of fluid leak, a viscosity of the fluid, a reflectivity of the fluid, and/or any other observable characteristic of the fluid. Further, analyzing the frame may include: the method may include determining a flow rate associated with a fluid leak condition, determining a fluid loss associated with a fluid leak condition, and/or determining any other characteristic of a fluid leak condition. The characteristics of the fluid or fluid leak condition may be determined based on hue, saturation, pixel values, and/or other image data. More generally, determining a characteristic of a fluid or fluid leak condition may include any image analysis method, as disclosed herein. For example, determining characteristics of a fluid or fluid leak condition may include using a trained machine learning model, as described above.

Consistent with the present embodiments, fluid leak analysis may include storing intra-cavity video, and upon determining an abnormal leak condition in the current video, analyzing previous historical frames of the stored intra-cavity video to determine a source of the leak, e.g., via comparison, consistent with the disclosed embodiments. The intra-cavity video may be stored in memory, in a data structure (e.g., the data structure of fig. 17A), and so forth. For example, an abnormal leak condition may be determined when the amount of fluid leaked is above a selected amount (e.g., above a selected threshold used to distinguish the abnormal leak condition from the normal leak condition), and at that point, the leak source may not be visible in the current video (e.g., the leak source may be covered by the leaked fluid, may be outside the current field of view of the current video, etc.). However, the source of the leak may be visible in a previous historical frame of the stored intra-cavity video, and this previous historical frame of the stored intra-cavity video may be analyzed to identify the source of the leak, e.g., as described above. In another example, an abnormal leakage situation may be determined by analyzing the current video using the first algorithm, and at that point, the source of the leakage may not be visible in the current video. In response to such a determination, a second algorithm (which may be more computationally intensive than the first algorithm or otherwise different) may be used to analyze the previous historical frames of the stored intra-cavity video to identify sources of leaks that may be visible in the previous historical frames of the stored intra-cavity video. In yet another example, a trigger (such as a user input, detection of an event in the current video, input from a sensor connected to the patient undergoing the surgical procedure, etc.) may result in an analysis of the current video to determine an abnormal leak condition. Further, in some examples, in response to such a determination, the prior historical frame of the stored intra-cavity video may be analyzed to identify a source of the leak, e.g., as described above.
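
The "store, then look back" behaviour described above can be sketched as a bounded frame buffer that is scanned backwards when an abnormal leak is flagged, since the source is often visible before it is obscured by leaked fluid. The source-locating routine is passed in as a hypothetical callable, and the buffer length is an arbitrary example.

```python
# Sketch: keep recent frames and, on detection of an abnormal leak, re-analyze
# earlier frames with a (possibly heavier) routine to locate the leak source.
from collections import deque

class LeakSourceFinder:
    def __init__(self, locate_source_in, max_frames: int = 600):
        self.buffer = deque(maxlen=max_frames)   # e.g., ~20 s of video at 30 fps
        self.locate_source_in = locate_source_in  # hypothetical: frame -> source or None

    def process(self, frame, abnormal_leak_detected: bool):
        self.buffer.append(frame)
        if abnormal_leak_detected:
            # scan backwards through stored historical frames
            for earlier_frame in reversed(self.buffer):
                source = self.locate_source_in(earlier_frame)
                if source is not None:
                    return source
        return None
```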

Analyzing the previous frame to determine the source of the leak may include: the frames at different points in time (e.g., at two or more points in time) are compared. For example, embodiments may include: a difference image is generated (e.g., by subtracting pixel data of frames at two different points in time) and the generated difference image is analyzed. In another example, analyzing a frame may involve determining a characteristic of a fluid leakage condition at different points in time and determining a change in the characteristic.
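A minimal sketch of this look-back analysis, assuming frames are buffered in memory and that a simple difference image is enough to localize where leaked fluid first appeared; the class name, buffer length, and change threshold are hypothetical.

```python
from collections import deque
import numpy as np

class LeakSourceLocator:
    """Keep a short rolling buffer of frames; when an abnormal leak is flagged,
    look back and localize where the leaked fluid first appeared."""

    def __init__(self, max_frames=300):
        self.buffer = deque(maxlen=max_frames)  # stores (timestamp, grayscale frame)

    def add_frame(self, timestamp, gray_frame):
        self.buffer.append((timestamp, gray_frame.astype(np.int16)))

    def locate_source(self, change_threshold=40):
        """Compare the oldest and newest buffered frames; return the centroid of the
        region with the largest pixel change as a candidate leak source."""
        if len(self.buffer) < 2:
            return None
        (_, first), (_, last) = self.buffer[0], self.buffer[-1]
        diff = np.abs(last - first)        # difference image of the two points in time
        changed = diff > change_threshold  # pixels that changed appreciably
        if not changed.any():
            return None
        ys, xs = np.nonzero(changed)
        return int(xs.mean()), int(ys.mean())  # (x, y) centroid of the changed region
```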

Embodiments may include enacting a remedial action upon determining an abnormal fluid leakage situation. The remedial action may include any notification, proposed response, or countermeasure associated with the abnormal fluid leak condition. The remedial action may be the same regardless of the determined characteristics of the fluid leak, or may vary based on the determined characteristics of the fluid leak. In the latter case, formulating the remedial action may include selecting the remedial action from a variety of options. Thus, in the latter case, the selection of the remedial action may depend on the determined characteristic or feature of the fluid leakage situation. For example, if the determined extent of bleeding is below a certain threshold and the source of bleeding is identified, the associated remedial action may be a recommendation or instruction to apply pressure to the source of bleeding. If a more significant rupture is detected, the remedial action may involve a recommendation or instruction to suture the source of the bleeding. Many different potential remedial actions are possible depending on the type of fluid associated with the leak, the extent of the leak, and the characteristics of the leak situation. To assist in selecting the appropriate remedial action, a data structure may store relationships between fluid leak conditions, remedial actions, and results. Further, a statistical model may be fitted based on historical fluid leak conditions, remedial actions, and results, and the remedial action may be selected based on the model output. Alternatively or additionally, the selection may be based on an output of a machine learning model trained to select a remedial action based on historical fluid leak conditions, remedial actions, and results. In other examples, a data structure may store a relationship between the fluid leak condition and a suggested remedial action, and the remedial action may be selected from the data structure based on characteristics and/or features of the fluid leak condition. Such a data structure may be based on user input. For example, a first remedial action may be selected (such as sealing the leakage source using a surgical robot) in response to a fluid leakage situation with an identified leakage source, while a second remedial action may be selected (such as providing a notification to the user) in response to a fluid leakage situation without an identified leakage source.
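The following sketch illustrates one way a data structure relating leak characteristics to suggested remedial actions might be queried. The rule table, threshold values, and action strings are hypothetical placeholders, not the disclosed mappings.

```python
# Illustrative rule table relating leak characteristics to remedial actions.
# Fluid names, thresholds, and action strings are invented for this example.
REMEDIAL_RULES = [
    {"fluid": "blood", "max_flow": 50,   "source_known": True,  "action": "apply pressure to the identified source"},
    {"fluid": "blood", "max_flow": None, "source_known": True,  "action": "suture the bleeding source"},
    {"fluid": "blood", "max_flow": None, "source_known": False, "action": "notify surgeon: locate bleeding source"},
    {"fluid": "bile",  "max_flow": None, "source_known": True,  "action": "repair the identified bile duct injury"},
]

def select_remedial_action(fluid_type, flow_ml_per_min, source_known):
    """Return the first rule that matches the determined leak characteristics."""
    for rule in REMEDIAL_RULES:
        if rule["fluid"] != fluid_type or rule["source_known"] != source_known:
            continue
        if rule["max_flow"] is None or flow_ml_per_min <= rule["max_flow"]:
            return rule["action"]
    return "notify surgeon: unrecognized leak condition"

# Example: modest bleeding with an identified source suggests applying pressure.
print(select_remedial_action("blood", 30, source_known=True))
```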

Consistent with the present embodiment, formulating the remedial action may include providing notification of the source of the leak. The notification may include a message identifying the source of the leak, such as a ruptured blood vessel, a ruptured organ, and/or any other ruptured anatomical structure. For example, the notification may include: the identified leaking anatomy, fluid leakage characteristics (e.g., volume, flow, type of fluid, duration of fluid leakage), and/or any other information related to the fluid leakage situation. Further, the notification may include a proposed course of action that may be taken to remedy or otherwise respond to the leak. In another example, the notification may include a visual indicator of the source of the leak, e.g., as an overlay over images and/or video taken from the surgery, as an indicator in an augmented reality device, and so forth. Providing the notification may involve sending the notification to a device, causing the notification to be displayed at an interface, playing a sound, providing haptic feedback, and/or any other method of outputting information, such as those disclosed above. The notification may be provided to a device in an operating room (e.g., as depicted in fig. 1), a device associated with a surgeon (e.g., a human surgeon and/or a surgical robot), and/or any other system. For example, outputting the notification may include: sending the notification to a computer, a mobile device, an external device, a surgical robot, and/or any other computing device.

In some embodiments, the remedial action may include sending an instruction to the robot. Such instructions may direct the robot to take action to remedy or assist in the remediation of the leak. Alternatively or additionally, the instructions may instruct the robot to stop the current course of action and/or to move aside to allow human intervention.

The remedial action may be based on a variety of inputs. For example, formulating a remedial action may be based on the flow rate, the amount of fluid lost, and/or any characteristic of the fluid or the fluid leak condition, such as described above. The remedial action may be based on a statistical analysis of the characteristics of the fluid or fluid leak condition. For example, the remedial action may be selected based on a known (or determined) correlation between the characteristics of the fluid leakage situation and the outcome. In addition, a data structure (such as that of FIG. 17A) may relate the characteristics of the fluid leak condition to the results. The statistical analysis may include: a regression model is used to identify a remedial action (e.g., by fitting the regression model to historical data including fluid leak data, remedial action data, and outcome data).

Consistent with the present embodiments, analyzing frames of the intra-cavity video to determine abnormal fluid leakage conditions in the intra-cavity video may include: it is determined whether the fluid leak condition is abnormal. Some fluid leak situations may be normal (e.g., below a threshold magnitude, in a location associated with a normal fluid leak, a normal fluid type with a particular surgical event, etc.). Some fluid leak situations may be abnormal (e.g., above a threshold magnitude, in a location associated with an abnormal fluid leak, an abnormal fluid type with a particular surgical event, etc.). Characteristics of fluid leakage situations classified as normal and/or abnormal may be stored in a data structure (such as the data structure depicted in fig. 17A). Determining whether the fluid leakage condition is abnormal may include: a regression model (e.g., a model fitted to historical examples) and/or a trained machine learning model (e.g., a model trained using historical examples to determine whether the determined fluid leak condition is abnormal) is used. For example, the machine learning model may be trained using training examples to determine whether a fluid leak condition is normal or abnormal based on information related to the fluid leak condition (such as images and/or video of the fluid leak condition, characteristics of the fluid leak condition determined by analyzing the video depicting the fluid leak condition as described above, the source of the fluid leak condition, the amount of the leak, the type of the fluid leak condition, the type of fluid, features of the patient, the surgical stage in which the leak condition occurred, etc.), and may be used to analyze the information related to the fluid leak condition and determine whether the fluid leak condition is abnormal. Examples of such training examples may include information related to a particular fluid leak condition, and a flag indicating whether the particular fluid leak condition is abnormal.
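As an illustration of the kind of classifier described above, the sketch below fits a logistic regression on a handful of hypothetical historical examples and uses it to label a new leak condition as normal or abnormal. The feature set (which includes patient blood pressure, anticipating the discussion below) and all of the numbers are invented for the example; they are not disclosed training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical examples: [leak volume (ml), leak rate (ml/min),
# patient systolic blood pressure (mmHg)], labeled 1 = abnormal, 0 = normal.
X_hist = np.array([
    [5, 2, 120], [10, 5, 118], [8, 3, 125],       # normal leak conditions
    [120, 60, 95], [200, 90, 88], [150, 70, 90],  # abnormal leak conditions
])
y_hist = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_hist, y_hist)

def is_abnormal(volume_ml, rate_ml_per_min, systolic_bp):
    """Classify a detected fluid leak condition as abnormal (True) or normal (False)."""
    return bool(model.predict([[volume_ml, rate_ml_per_min, systolic_bp]])[0])

# The same leak volume may be judged differently depending on the patient's vital signs.
print(is_abnormal(90, 40, 100))
```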

Some embodiments may include: the frames of the intra-cavity video are analyzed to determine characteristics of the detected fluid leak condition, e.g., as described above. The determined characteristic may be associated with a fluid leak volume, a fluid leak type, a fluid leak rate, a source of the fluid leak, and/or any other observable characteristic of the surgical procedure. The determined characteristic can then be used, for example, to ascertain whether the fluid leak is normal or abnormal. Analyzing the frame to determine such characteristics may include any image analysis method as described herein.

In some embodiments, determining whether the fluid leak condition is an abnormal fluid leak condition may be based on a measurement of blood pressure and/or any other vital signs of the patient undergoing the surgical procedure. The vital signs may be derived from surgical video through image analysis techniques described herein, and/or from sensors configured to measure vital signs. Additionally, to use vital signs as a possible indicator of an abnormality, the abnormality may be based on any characteristic of the surgical procedure, such as the surgical event, the type of surgical procedure, and/or any other aspect of the surgical procedure. For example, a particular fluid leak condition may be determined to be normal in response to a first measurement of blood pressure of a patient undergoing a surgical procedure, and may be determined to be abnormal in response to a second measurement of blood pressure of the patient undergoing the surgical procedure.

Disclosed embodiments may include: a remedial action is enacted in response to determining that the detected fluid leak condition is an abnormal fluid leak condition. Additionally, some embodiments may include: in response to determining that the fluid leak is normal, forgoing the enactment of the remedial action. Forgoing the remedial action may include delaying the remedial action for a certain period of time or indefinitely. For example, if the analysis of the leak results in a determination that no remedial action is required, the remedial action may be forgone. Alternatively, if the remedial action has already begun and further analysis reveals that the remedial action is unnecessary, forgoing the remedial action may include providing an updated notification (e.g., the notification may change the suggested remedial action or otherwise present different information than the previous notification).

Fig. 32 is a flow chart illustrating an example process 3200 for enabling fluid leak detection during a surgical procedure consistent with the disclosed embodiments. Process 3200 may be performed by at least one processor, such as one or more microprocessors. In some embodiments, process 3200 is not necessarily limited to the illustrated steps, and any of the various embodiments described herein may also be included in process 3200. The steps of process 3200 may be performed by, for example, a system that includes the components of system 1401. In some embodiments, process 3200 may be embodied in a non-transitory computer-readable medium containing instructions that, when executed by at least one processor, cause the at least one processor to perform operations of analyzing fluid leaks during a surgical procedure. In some embodiments, process 3200 can be performed in real-time during a surgical procedure.

At step 3202, the process may include: intra-cavity video of a surgical procedure is received in real time, consistent with the disclosed embodiments. Receiving the intra-cavity video in real-time may include receiving the video via a network or directly from an image sensor. In some embodiments, the intra-cavity video may depict a surgical robot performing some or all of the surgical procedure, as previously discussed.

At step 3204, the process may include: frames of the intra-cavity video are analyzed to determine abnormal fluid leakage conditions in the intra-cavity video, as previously discussed and illustrated with examples. As discussed, the fluid may include, for example, blood, bile, urine, and/or any other type of bodily fluid. Determining the source of the leak may include: identifying a ruptured anatomical organ, identifying a ruptured blood vessel, and/or identifying any other ruptured anatomical structure. In some embodiments, step 3204 may include: the frames of the intra-cavity video are analyzed to identify blood splash and at least one characteristic of the blood splash. The characteristics of the blood splash can be correlated to the source of the blood splash, the intensity (rate) of the blood splash, the color of the blood splash, the viscosity of the blood splash, and/or the volume (magnitude) of the blood splash. Analyzing the frames of the intra-cavity video may include: characteristics of the abnormal fluid leakage condition are determined. For example, the characteristic may be associated with a volume of the fluid leak, a color of the fluid leak, a type of fluid associated with the fluid leak, a fluid leak rate, a viscosity of the fluid, a reflectivity of the fluid, and/or any other characteristic of the fluid. Further, analyzing the frames may include: determining a flow rate associated with the fluid leak condition, determining a fluid loss associated with the fluid leak condition, and/or determining any other characteristic of the fluid leak condition. The method may further include: the intra-cavity video is stored and, upon determining an abnormal leak condition, previous frames of the stored intra-cavity video are analyzed to determine a source of the leak.

At step 3206, the process may include enacting remedial action when an abnormal fluid leakage condition is determined. The selection of the remedial action may depend, for example, on at least one characteristic of the identified blood splash. In some embodiments, the selection of the remedial action may depend on the determined characteristics of the fluid leakage situation.
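A compact sketch of how steps 3202 through 3206 might be wired together in code, assuming caller-supplied functions for frame acquisition, frame analysis, and enacting the remedial action; the function names are illustrative and not part of the disclosed system.

```python
def run_fluid_leak_monitor(frame_source, analyze_frame, enact_remedial_action):
    """Minimal real-time loop mirroring steps 3202-3206 of process 3200.

    frame_source: iterable yielding intra-cavity video frames in real time
                  (e.g., from a network stream or directly from an image sensor)
    analyze_frame: callable returning (is_abnormal, leak_characteristics) for a frame
    enact_remedial_action: callable taking the determined leak characteristics
    """
    for frame in frame_source:                            # step 3202: receive video in real time
        abnormal, characteristics = analyze_frame(frame)  # step 3204: analyze frames for leaks
        if abnormal:
            enact_remedial_action(characteristics)        # step 3206: enact a remedial action
```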

The disclosed systems and methods may involve analyzing surgical clips to identify events during surgery that may affect a patient's post-discharge risk. The post-discharge risk of the patient may need to be identified after the surgical procedure, based on intra-operative events during the surgical procedure, and based on patient characteristics. Post-discharge risk may be determined by identifying events during the surgical procedure and using historical data to determine how the identified events may affect the post-discharge risk of the patient. Thus, there is a need to analyze surgical clips and identify events during surgery that may affect the patient's risk after discharge.

Aspects of the present disclosure may relate to predicting post-discharge risk after surgery, including methods, systems, devices, and computer-readable media.

For ease of discussion, a method is described below, and an understanding of aspects of the method applies equally to systems, apparatuses, and computer-readable media. For example, some aspects of such methods may occur electronically over a network, wired, wireless, or both. Other aspects of this method may occur using non-electronic means. In the broadest sense, the method is not limited to a particular physical and/or electronic instrument, but can be accomplished using many different instruments.

Consistent with the disclosed embodiments, a method of predicting risk after discharge is provided that may involve accessing video frames taken during a particular surgical procedure on a patient. As used herein, video may include any form of recorded visual media, including recorded images and/or sound. For example, the video may include a sequence of one or more images captured by an image capture device (such as cameras 115, 121, 123, and/or 125, as described above in connection with fig. 1). The images may be stored as separate files or may be stored in a combined format, such as a video file, which may include corresponding audio data. In some embodiments, the video may be stored as raw data and/or images output from an image capture device. In other embodiments, the video may be processed. For example, a video file may include: Audio Video Interleave (AVI), Flash Video Format (FLV), QuickTime File Format (MOV), MPEG (MPG, MP4, M4P, or any other format), Windows Media Video (WMV), Material Exchange Format (MXF), or any other suitable video file format.

The video clips may refer to videos that have been captured by the image capture device. In some embodiments, a video clip may refer to a video that includes a sequence of images in the order in which they were originally captured. For example, a video clip may include video that has not been edited to form a video compilation. In other embodiments, the video clips may be edited in one or more ways to remove frames associated with inactivity during the surgical procedure, or to otherwise assemble frames that were originally taken out of order. Accessing the video clips may include retrieving the video clips from a storage location, such as a memory device. The video clips may be accessed from local storage (such as a local hard drive) or may be accessed from a remote source (e.g., over a network connection). Consistent with this disclosure, indexing may refer to the process of storing data in a manner such that the data may be retrieved more efficiently and/or more effectively. The process of indexing a video clip may include associating one or more characteristics or indicators with the video clip such that the video clip may be identified based on the characteristics or indicators.

The surgical procedure may include any medical procedure associated with or involving a manual or operative procedure on the body of a patient. The surgical procedure may include cutting, abrading, suturing, or other techniques involving physical alteration of body tissues and/or organs. The surgical procedure may also include diagnosing the patient or administering a drug to the patient. Some examples of such surgical procedures may include: laparoscopic surgery, thoracoscopic surgery, bronchoscopic surgery, microscopic surgery, open surgery, robotic surgery, appendectomy, carotid endarterectomy, carpal tunnel release, cataract surgery, cesarean section, cholecystectomy, colectomy (such as partial or total colectomy), coronary angioplasty, coronary artery bypass surgery, debridement (e.g., of a wound, burn, or infection), free skin graft, hemorrhoidectomy, hip replacement, hysterectomy, hysteroscopy, groin hernia repair, knee arthroscopy, knee replacement, mastectomy (such as partial mastectomy, total mastectomy, or modified radical mastectomy), prostatectomy, prostate removal, shoulder arthroscopy, spinal surgery (such as spinal fusion, laminectomy, foraminotomy, discectomy, disc replacement, or an interbody implant), tonsillectomy, cochlear implant surgery, resection of a brain tumor (e.g., meningioma), interventional procedures such as percutaneous transluminal coronary angioplasty, transcatheter aortic valve replacement, minimally invasive surgery to clear cerebral hemorrhage, or any other medical procedure involving some form of incision. Although the present disclosure is described with reference to surgical procedures, it is to be understood that it may be applicable to other forms of medical procedures or general procedures.

In some exemplary embodiments, the accessed video clips may include video clips captured via at least one image sensor located in at least one of a position above an operating table, a surgical cavity of a patient, within an organ of a patient, or within a vasculature of a patient. The image sensor may be any sensor capable of recording video. The image sensor located in a position above the surgical table may include an image sensor positioned outside the patient's body and configured to capture images from above the patient. For example, the image sensor may include a camera 115 and/or 121 as shown in FIG. 1. In other embodiments, the image sensor may be placed within the patient, such as within a cavity, for example. As used herein, a cavity may include any relatively empty space within an object. Thus, a surgical cavity may refer to a space within a patient's body in which a surgical procedure or procedure is being performed. It should be understood that the surgical cavity may not be completely empty, but may include tissues, organs, blood or other fluids present within the body. An organ may refer to any individual region or portion of an organism. Some examples of human patient organs may include the heart or liver. The vasculature may refer to a system or group of vessels within an organism. The image sensor located within the surgical cavity, organ, and/or vasculature may include a camera included on a surgical tool inserted into the patient.

Accessing frames of video taken during a particular surgical procedure may include: accessing at least one of the frame, metadata referenced by the frame, pixel values of pixels of the frame, information based on an analysis of the frame, and the like. For example, frames of video taken during a particular surgical procedure may be accessed by a computerized device that reads information from memory, e.g., for processing by at least one processing device. For example, the processing device may analyze the accessed frames using a machine learning method configured to analyze aspects of the video data, e.g., as described above. For example, the machine learning method may be configured to identify events within the video frames, or identify surgical instruments, anatomical structures, and interactions between surgical instruments and anatomical structures by analyzing the video frames, or the like. In some cases, accessing the video frame may include: the frames are accessed by a healthcare professional, such as a surgeon, anesthesiologist, or any other healthcare professional. In some cases, the video frames may be accessible by the patient, family members of the patient, or any other authorized party.

Aspects of the disclosure may include: stored historical data identifying intraoperative events and associated results is accessed. As used herein, an intraoperative event of a surgical procedure (also referred to as a surgical event) may refer to an action performed as part of a surgical procedure, such as an action performed by a surgeon, surgical technician, nurse, physician's assistant, anesthesiologist, doctor, any other medical professional, surgical robot, or the like. The intraoperative surgical event can be a planned event, such as an incision, administration of a drug, use of a surgical instrument, resection, ligation, implantation, suturing, stapling, or any other planned event associated with a surgical procedure or stage. Additionally or alternatively, an intra-operative event may also refer to an event that occurs with an anatomical structure and/or a medical instrument associated with a surgical procedure, regardless of whether the event includes an action performed by a healthcare professional. One example of such an intraoperative event may include a change in the condition of an anatomical structure. Another example of such an intra-operative event may include a change in the state of the medical device (e.g., from "partially filled" to "filled").

Examples of intraoperative events for laparoscopic cholecystectomy may include: trocar placement, Calot's triangle dissection, clipping and cutting of the cystic duct and artery, gallbladder dissection, gallbladder packaging, cleaning and coagulation of the liver bed, gallbladder retraction, and the like. In another example, the surgical events of cataract surgery may include: povidone-iodine injection, corneal incision, capsulorhexis, phacoemulsification, cortical aspiration, intraocular lens implantation, intraocular lens adjustment, wound closure, and the like. In yet another example, the characteristic surgical events of pituitary surgery may include: preparation, nasal incision, nasal retractor installation, tumor access, tumor removal, nasal columella replacement, suturing, nasal compression, and the like. Some other examples of characteristic surgical events may include: incisions, laparoscopic positioning, sutures, and the like.

In some embodiments, the intraoperative event may include an adverse event or complication. Some examples of adverse surgical events may include: bleeding, mesenteric emphysema, injury, transition to unplanned open surgery (e.g., abdominal wall incision), incision significantly larger than planned, etc. Some examples of intraoperative complications may include: hypertension, hypotension, bradycardia, hypoxemia, adhesions, hernia, atypical dissection, dural tears, periodor injury, arterial infarction, and the like. In some cases, the surgical event may include other errors, including: technical errors, communication errors, management errors, judgment errors, decision errors, errors related to medical device usage, miscommunication, and the like. In various embodiments, the event may be brief or may last for a duration of time. For example, a transient event (e.g., an incision) may be determined to occur at a particular time during a surgical procedure, and a prolonged event (e.g., bleeding) may be determined to occur within a certain time interval. In some cases, the extended event may include an explicit start event and an explicit end event (e.g., the start of suturing and the end of suturing), with the suturing itself being the extended event. In some cases, the prolonged event is also referred to as a stage during surgery.

In some cases, a surgical event may include a set of sub-events (i.e., more than one sub-event or step). For example, an event of administering general anesthesia to a patient may include several steps, such as a first step of providing a drug to the patient via an IV line to induce unconsciousness, and a second step of administering a suitable gas (e.g., isoflurane or desflurane) to maintain general anesthesia.

The historical data may include information related to or based on historical (i.e., previously performed) surgical procedures. For example, the historical data may include: historical surgical clips, information based on analysis of historical surgical clips, historical annotations from one or more healthcare providers, historical data from medical devices (e.g., historical vital sign signals collected during historical surgical procedures), historical audio data, historical data collected from various sensors (e.g., image sensors, chemical sensors, temperature sensors, electrical sensors), or any other historical data that may be relevant to one or more historical surgical procedures.

Accessing stored historical data identifying intraoperative events and associated results may include: a database containing information about intraoperative events and associated results is accessed. For example, the database may include data structures such as tables, databases, or other data organizations that maintain historical intra-operative events and historical results associated with intra-operative events. For example, the intraoperative event may be "bleeding" and the historical association result may be "anemia". In some cases, an intra-operative event may include associated results having different characteristics. For example, an intraoperative event "hemorrhage" may include a first correlation result "loss of hemoglobin" with a first characteristic "drop from 16g/dL to 13 g/dL" and a second correlation result "loss of hemoglobin" with a second characteristic "drop from 15g/dL to 12 g/dL". In some cases, an intraoperative event such as "hemorrhage" may have a first result of "hemoglobin loss" and a second result different from the first result (e.g., "cardiac arrest").

Additionally or alternatively, accessing stored historical data identifying intraoperative events and associated results may include: the historical data is accessed by a computerized device that reads at least a portion of the historical data from the memory, e.g., for processing by at least one processing device. The historical data may include: historical surgical clips, information about historical intraoperative events, and the like, and associated historical results. In some cases, processing the accessed historical data may be performed by a machine learning model configured to analyze aspects of the historical surgical clips (and any other historical surgical data). For example, the machine learning method may be configured to identify intraoperative events within video frames of a historical surgical clip by identifying surgical instruments, anatomical structures, and interactions between surgical instruments and anatomical structures in the historical surgical clip.

In some cases, accessing stored historical data identifying intraoperative events and associated results may include: the historical data is accessed by a surgeon, anesthesiologist, nurse or any other medical professional. In some cases, the historical data may be accessed by the patient, family members of the patient, or any other party authorized to access the historical data.

In various embodiments, a machine learning model may be used to identify at least one particular intra-operative event in an accessed video frame of a surgical procedure, e.g., as described above. The trained machine learning model may be an image recognition model for recognizing events. For example, a machine learning model may analyze a plurality of video frames to detect motion or other changes across the images that make up the video. In some embodiments, the image analysis may include an object detection algorithm, such as Viola-Jones object detection, a Convolutional Neural Network (CNN), or any other form of object detection algorithm. The machine learning model may be configured to return the name of the event, the type of the event, and the characteristics of the event. For example, if the event is a cut, the machine learning model may be configured to return the name "cut" to characterize the event, as well as the length and depth of the cut as features of the event. In some cases, a predetermined list of possible names for various events may be provided to the machine learning model, and the machine learning model may be configured to select the name from the list that most accurately characterizes the event.
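For illustration, the sketch below defines a small, untrained CNN that scores a frame against a predetermined list of event names, in the spirit of the image recognition model described above. The architecture, the event list, and the use of PyTorch are assumptions made for the example; a real system would be trained on labeled surgical video before its outputs were meaningful.

```python
import torch
import torch.nn as nn

# Illustrative predetermined list of event names; not the disclosure's actual vocabulary.
EVENT_NAMES = ["incision", "suturing", "bleeding", "no_event"]

class EventClassifier(nn.Module):
    """Small CNN that maps a video frame to scores over a predetermined event list."""
    def __init__(self, num_events=len(EVENT_NAMES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_events)

    def forward(self, frames):                  # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.classifier(x)

model = EventClassifier()
frame_batch = torch.rand(1, 3, 224, 224)        # placeholder frame tensor
event_idx = model(frame_batch).argmax(dim=1).item()
print("detected event:", EVENT_NAMES[event_idx])  # untrained weights, so arbitrary here
```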

Some aspects of the disclosed embodiments may include: the occurrence of at least one particular intraoperative event is identified using a machine learning model. For example, the machine learning model may identify a particular intraoperative event occurrence by identifying the beginning of the event. In some cases, the machine learning method may identify characteristics of the event and/or the end of a particular intraoperative event.

The machine learning model may be trained using example training data to identify intraoperative events. Example training input data may include historical snippets of the surgical procedure that include intraoperative events. In various cases, multiple samples of training data may be generated and used to train the machine learning model. During the training process, training data (e.g., first training data) may be used as input data for the model, and the model may perform calculations and output an event identification string for the intraoperative event (e.g., the name of the event). In various embodiments, the event identification string may be compared to the known historical names of corresponding intraoperative events to evaluate the associated model error. If the error is below a predetermined threshold, the model may be trained using other input data. Alternatively, if the error is above the threshold, the model parameters may be modified and the training step may be repeated using the first training data.
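The threshold-driven training loop described above might look roughly like the following, here using scikit-learn's partial_fit as a stand-in for the model update step. The batch construction, error threshold, and repeat limit are hypothetical; the synthetic data exists only so the sketch runs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_until_threshold(batches, classes, error_threshold=0.2, max_repeats=50):
    """Train on each batch in turn; while the batch error stays above the threshold,
    keep adjusting the model parameters on that same batch before moving on.

    batches: list of (features, event_labels) derived from historical surgical clips
    """
    model = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    first = True
    for X, y in batches:
        for _ in range(max_repeats):
            model.partial_fit(X, y, classes=classes if first else None)
            first = False
            error = 1.0 - model.score(X, y)  # fraction of misclassified events
            if error < error_threshold:      # good enough: move on to other input data
                break
    return model

# Hypothetical feature vectors extracted from historical clips, labeled with event ids.
rng = np.random.default_rng(0)
batches = [(rng.random((20, 8)), rng.integers(0, 3, 20)) for _ in range(3)]
trained = train_until_threshold(batches, classes=np.array([0, 1, 2]))
```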

In addition to or as an alternative to detection with a machine learning model, various other methods may be used to detect characteristic events in video frames received from an image sensor. In one embodiment, the characteristic events may be identified by a medical professional (e.g., surgeon) during the surgical procedure. For example, the surgeon may identify the characteristic event using a visual or audio signal (e.g., a gesture, a body posture, a visual signal generated by a light source of a medical instrument, speech, or any other suitable signal) from the surgeon that may be captured by one or more image/audio sensors and identified as a trigger for the characteristic event.

Aspects of the disclosure may also include: identifying at least one particular intra-operative event based on at least one of: a detected surgical tool in the accessed frame, a detected anatomical structure in the accessed frame, an interaction between a surgical tool and an anatomical structure in the accessed frame, or a detected abnormal fluid leak condition in the accessed frame.

The surgical tool may be any instrument or device that may be used during a surgical procedure, which may include, but is not limited to, a cutting instrument (such as a scalpel, scissors, or saw), a grasping and/or clamping instrument (such as a Billroth clamp, a "mosquito" hemostat, an atraumatic hemostat, a Deschamps needle, or a Hopfner hemostat), a retractor (such as a Farabeuf retractor, a blunt tooth hook, a sharp tooth hook, a slotted probe, or a compaction clamp), a tissue unifying instrument and/or material (such as a needle holder, a surgical needle, a stapler, a clamp, a tape or mesh), a protective device (such as a facial and/or respiratory protection device, a headgear, a shoe cover, or a glove), a laparoscope, an endoscope, a patient monitoring device, and the like. A surgical tool (also referred to as a medical tool or medical instrument) may include any equipment or device used as part of a medical procedure.

The surgical tools may be detected in the surgical video clips using any suitable means, for example as described above.

Similar to the detection of surgical tools, a machine learning model may be used to detect anatomical structures in a surgical video clip. The anatomical structure may be any particular portion of a living organism, including, for example, an organ, tissue, a duct, an artery, or any other anatomical portion. In some cases, a prosthesis, implant, or artificial organ may be considered an anatomical structure.

Detecting a surgical tool and/or an anatomical structure using a machine learning method may be one possible method, for example as described above. Additionally or alternatively, various other methods may be used to detect the surgical tool (or anatomical structure) in the surgical video clip received from the image sensor. In one embodiment, the surgical tool (or anatomical structure) may be identified by a medical professional (e.g., surgeon) during the surgical procedure. For example, a surgeon may identify a surgical tool (or anatomical structure) by speaking the name of the surgical tool and/or anatomical structure, such that audio sounds from the surgeon may be captured by one or more audio sensors and identified by a computer-based speech recognition model (or by a human operator used to record information during surgery).

Some aspects of the present disclosure may involve analyzing frames of a surgical clip to identify interactions between a medical tool and an anatomical structure, e.g., as described above. For example, at least some of the frames of the surgical clip may depict a portion of the surgical clip in which a surgical procedure is performed on the anatomical structure. As discussed above, the interaction may include any action of the medical instrument that may affect the anatomy, and vice versa. For example, the interaction may include contact between the medical instrument and the anatomical structure, an action of the medical instrument on the anatomical structure (such as cutting, clamping, grasping, applying pressure, or scraping), a physiological response of the anatomical structure, light emitted by the surgical tool toward the anatomical structure (e.g., the surgical tool may be a laser emitting light toward the anatomical structure), sound emitted toward the anatomical structure, an electromagnetic field generated near the anatomical structure, a current induced in the anatomical structure, or any other suitable form of interaction.

In some cases, identifying the interaction may include identifying that the surgical tool is approaching the anatomical structure. For example, by analyzing a surgical video clip of an example surgical procedure, the image recognition model may be configured to determine the distance between the surgical tool and a point (or set of points) of the anatomical structure.

Aspects of the present disclosure may also relate to detecting abnormal fluid leakage conditions in accessed frames, e.g., as described above. The abnormal fluid leakage may include: bleeding, urine leakage, bile leakage, lymph leakage, or any other leakage. Abnormal fluid leaks may be detected by a corresponding machine-learned model trained to detect abnormal fluid leak events within a surgical video clip taken during a surgical procedure. It should be noted that the machine learning model used to detect the surgical instrument (e.g., the first machine learning model) may be configured and/or trained differently than the machine learning model used to detect the anatomical structure (e.g., the second machine learning model), and may be configured and/or trained differently than the machine learning model used to detect the abnormal leakage (e.g., the third machine learning model). Further, the second machine learning model may be configured and/or trained differently than the third machine learning model. In various embodiments, configuring the machine learning model may include: any suitable parameters of the machine learning model are configured (e.g., selected). For example, if the machine learning model is a neural network, configuring the neural network may include: the number of layers of the neural network, the number of nodes of each layer, the weights of the neural network, or any other suitable parameter of the neural network is selected.

Aspects of the disclosed embodiments may include: analyzing the accessed frame; and identifying at least one particular intra-operative event in the accessed frame based on information obtained from the historical data. As previously described, and consistent with various embodiments, the process of analyzing the accessed frames may be performed by a suitable machine learning model (such as an image recognition algorithm) as described above, consistent with the disclosed embodiments. In various embodiments, information obtained from historical data may be used to train an image recognition algorithm to identify specific intraoperative events based on the accessed frames of the surgical short, as previously described. In one example, the historical data may include a statistical model and/or a machine learning model (e.g., as described above) based on analysis of information and/or video clips from the historical surgical procedure, and the statistical model and/or the machine learning model may be used to analyze the accessed frames and identify the at least one particular intraoperative event in the accessed frames.

Aspects of the disclosure may include: a predicted outcome associated with the particular surgical procedure is determined based on information obtained from the historical data and the identified at least one intraoperative event. For example, the data structure may include historical data identifying relationships between intraoperative events and predicted outcomes. Such data structures may be used to obtain predicted outcomes associated with a particular surgical procedure. For example, fig. 32A shows an example graph 3200 of intraoperative events E1 through E3 connected to possible outcomes C1 through C3 using connections n11 through n 32. In an example embodiment, the connection n11 may include information indicating the probability of the result C1 (i.e., information indicating how often the result C1 occurred in a surgical procedure that included event E1). For example, connection n11 may indicate that, assuming the occurrence of intraoperative event E1, result C1 may occur 30% of the time, connection n12 may indicate that result C2 may occur 50% of the time, and connection n13 may indicate that result C3 may occur 20% of the time. Similarly, assuming the occurrence of intra-operative event E2, connection n22 may indicate the probability of outcome C2, and assuming the occurrence of intra-operative event E2, connection n23 may indicate the probability of outcome C3. Assuming the occurrence of the intra-operative event E3, the connection n32 may indicate the probability of the result C2. Thus, once an intra-operative event is known using information obtained from historical data (e.g., using information from graph C100), the most likely outcome (e.g., outcome C2) may be determined based on the probabilities assigned to n11 through n 13. In another example, the historical information may include a hypergraph, the hyper-edge of which may connect a plurality of intra-operative events with the outcome and may indicate a particular probability of the outcome of the surgical procedure including the plurality of events. Thus, once a number of intraoperative events are known using information obtained from historical data (e.g., from a hypergraph), the most likely outcome may be determined based on the probability assigned to the hyperedge. In some examples, the probabilities assigned to the edges of graph C100 or the hyper-edges of the hyper-graph may be based on an analysis of historical surgeries, e.g., by calculating statistical probabilities of the outcomes of a set of historical surgeries (which include a particular set of intra-operative events corresponding to a particular edge or a particular hyper-edge). In some other examples, the historical information may include a trained machine learning model to predict an outcome based on the intraoperative events, and the trained machine learning model may be used to predict an outcome associated with the particular surgical procedure based on the identified at least one intraoperative event. In one example, the trained machine learning model may be obtained by training a machine learning algorithm using training examples, and the training examples may be based on historical surgery. An example of such a training example may include a list of intraoperative surgical events, and a flag indicating a result corresponding to the list of intraoperative surgical events. In one example, two training examples may have the same list of intraoperative surgical events, but different labels indicating different results.
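As a toy illustration of using edge probabilities such as n11 through n13 to pick the most likely outcome, the sketch below combines per-event outcome probabilities with a naive additive score. The E1 values mirror the example percentages above, the E2 and E3 values are invented, and the additive combination is a deliberate simplification of the hypergraph or machine learning approaches just described.

```python
# Illustrative edge probabilities mirroring connections n11-n32 in the description above.
OUTCOME_PROBS = {
    "E1": {"C1": 0.30, "C2": 0.50, "C3": 0.20},  # n11, n12, n13 (from the example)
    "E2": {"C2": 0.60, "C3": 0.40},              # n22, n23 (hypothetical values)
    "E3": {"C2": 1.00},                          # n32 (hypothetical value)
}

def most_likely_outcome(intraoperative_events):
    """Combine per-event outcome probabilities (naively, by summing scores)
    and return the outcome with the highest combined score."""
    scores = {}
    for event in intraoperative_events:
        for outcome, p in OUTCOME_PROBS.get(event, {}).items():
            scores[outcome] = scores.get(outcome, 0.0) + p
    if not scores:
        return None
    return max(scores, key=scores.get)

print(most_likely_outcome(["E1", "E2"]))  # -> "C2" with these illustrative numbers
```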

In some cases, the predicted outcome may be a particular predicted event following the surgical procedure. For example, the predicted outcome may be a risk estimate of a post-discharge accident, a post-discharge adverse event, a post-discharge complication, or a readmission. In some cases, the predicted outcome may be a set of events. For example, such a set of events may include events that assess the "physical and mental well-being" of a patient. These events may occur at specific points in time when assessing a patient's "wellness" (e.g., specific hours of the day after surgery, specific days after surgery, specific weeks, months, years after surgery, or other time intervals). "wellness" can be assessed using any suitable objective measure, for example, using imaging such as CAT scans, ultrasound imaging, visual inspection, the presence of complications that can be determined during a physical examination, or any other suitable means or test for assessing the patient's wellness (e.g., via blood tests). Physical and mental health may also be determined based on objective measures such as by asking the patient to describe his/her general condition.

Determining a predicted outcome associated with the surgical procedure based on the determined intra-operative event may be accomplished using statistical analysis. For example, historical surgical data of past surgical procedures (also referred to as historical surgical procedures) containing intra-operative events may be analyzed to determine historical results of such past surgical procedures. For example, for a given type of historical surgery, surgical outcome statistics may be collected, as shown in fig. 32B. For example, when there is no intraoperative event (e.g., when there is no adverse intraoperative event such as bleeding, cardiac arrest, or any other adverse event), probability distribution 3201A, represented by bars 3211A-3217A (hereinafter also referred to as probability bars), may determine the probability of the corresponding outcomes C1-C4. Similarly, when there is an intra-operative event (e.g., an adverse intra-operative event), the probability distribution 3201B, represented by probability bars 3211B-3217B, may determine the probability of the corresponding results C1-C4. In an example embodiment, result C1 may correspond to a particular post-discharge event (e.g., a gauze foreign body left in the patient's body), result C2 may correspond to a particular post-discharge adverse event (e.g., bleeding, pain, nausea, disorder, or any other adverse event), result C3 may correspond to a post-discharge complication (e.g., paralysis, pain, bleeding, or any other complication), and result C4 may correspond to an increased risk of readmission. It should be noted that any other suitable results may be used to evaluate the surgical procedure (e.g., results of evaluating an objective measure of "wellness" of the patient several days after the surgical procedure). In an example embodiment, the heights of the probability bars 3211A-3217A and 3211B-3217B may be related to the probability of occurrence of the corresponding results C1-C4.

In an example embodiment, an intra-operative event may affect the probability of occurrence of results C1-C4, as shown by bars 3211B-3217B, which have different heights than the corresponding bars 3211A-3217A. In the illustrative example, if the intra-operative event corresponds to cardiac arrest during surgery, the bar 3213B corresponding to the probability of outcome C2 (e.g., a disorder) may be higher than the bar 3213A corresponding to the probability of outcome C2 when no intra-operative event is detected during surgery.

In some cases, statistical analysis may be used to determine a predicted outcome associated with a surgical procedure based on determining several intraoperative events that may occur during the surgical procedure. For example, fig. 33 shows a probability distribution 3201A with probability bars 3211A to 3217A (as described above) corresponding to the probabilities of results C1 to C4 when there are no adverse intra-operative events. Fig. 33 also shows a probability distribution 3201B with probability bars 3211B-3217B corresponding to the probabilities of outcomes C1-C4 when there is a first adverse intraoperative event labeled "B" during the surgical procedure. Likewise, fig. 33 also shows a probability distribution 3201C with probability bars 3211C-3217C corresponding to the probabilities of outcomes C1-C4 when there is a second adverse intra-operative event labeled "C" during the surgical procedure. Further, using the statistics of the surgery (including event "B" and event "C", with event "B" beginning before the beginning of event "C"), a probability distribution 3201BC may be determined, as shown by bars 3211BC to 3217BC corresponding to the probabilities of outcomes C1 to C4.

Additionally or alternatively, using the statistics of the surgery (including event "B" and event "C," with event "B" beginning after the beginning of event "C"), a probability distribution 3201CB may be determined, as shown by bars 3211CB to 3217CB corresponding to the probabilities of results C1 to C4. It should be noted that other probability distributions (in addition to distributions 3201B, 3201C, 3201BC, and 3201CB) may be determined using suitable statistical data from various characteristics of events "B" and/or "C" and/or combinations thereof. For example, the event characteristic may include a duration of the event, a start time of the event, a completion time of the event, or any other suitable characteristic (e.g., if the event is an incision, the event characteristic may be a length of the incision; if the event is cardiac arrest, the event characteristic may be a blood pressure value during cardiac arrest; or any other suitable characteristic). Fig. 34 illustrates an example embodiment of how the probability distribution is affected by the event characteristics by plotting the heights of the bars 3411 to 3417 corresponding to the probabilities of the results C1 to C4 in a three-dimensional Cartesian coordinate system. As shown in fig. 34, one axis is the probability of the results C1 to C4, the other axis represents the results (e.g., results C1 to C4), and the third axis represents the event characteristic of the intra-operative event and is represented by a numerical value (hereinafter referred to as an event characteristic value), such as, for example, the length of the incision for an intra-operative event that is an incision. Fig. 34 shows that the heights of bars 3411 to 3417 may change continuously as the event characteristic value changes; in other examples, the event characteristic values may be discrete. For a given event characteristic value (e.g., V1 shown in fig. 34), the height value (e.g., H1) of an example bar (e.g., 3415) corresponding to the probability of result C3 at event characteristic value V1 may be interpolated (when the height H1 is not known for value V1) using nearby height values of bar 3415.
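The interpolation of an unknown bar height H1 at feature value V1 from nearby known heights might be expressed as below; the feature values and probabilities are invented for the example.

```python
import numpy as np

# Hypothetical probabilities of outcome C3 observed at a few discrete event-feature
# values (e.g., incision lengths in mm), estimated from historical procedures.
feature_values = np.array([5.0, 10.0, 20.0, 40.0])
prob_c3 = np.array([0.05, 0.10, 0.25, 0.45])

def interpolated_probability(v1):
    """Estimate the C3 probability at an unobserved feature value V1 by linear
    interpolation between the nearby known bar heights, as in FIG. 34."""
    return float(np.interp(v1, feature_values, prob_c3))

print(interpolated_probability(15.0))  # height H1 estimated between the 10 mm and 20 mm bars
```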

Aspects of the disclosure may include: the predicted outcome is determined based on at least one of a characteristic of the patient, an electronic medical record, or a post-operative surgical report. Patient characteristics may include the patient's age, sex, weight, height, and/or any information that directly or indirectly characterizes the patient (e.g., whether the patient has relatives who may take care of the patient may be a characteristic that indirectly affects the predicted outcome of the surgical procedure), to the extent that such characteristics may affect the predicted outcome of the surgical procedure. Some other non-limiting examples of patient characteristics are described above. In one example, similarity metrics may be used (e.g., in historical data) to identify surgical procedures similar to a particular surgical procedure (e.g., using a k-nearest neighbor algorithm, using an exhaustive search algorithm, etc.), and the identified similar surgical procedures may be used to determine a predicted outcome, e.g., by computing a statistical function (such as a mean, median, mode, etc.) of the outcomes of the identified similar surgical procedures. The similarity measure may be based on at least one of: patient characteristics, electronic medical records, post-operative surgical reports, intra-operative events that occur during surgery, duration of a stage of surgery, etc.
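A brief sketch of the k-nearest-neighbor approach mentioned above: find the historical procedures most similar to the current one and take a statistical function (here the mode) of their outcomes. The feature columns, historical records, and outcome labels are hypothetical and exist only to make the example run.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from statistics import mode

# Hypothetical historical procedures: [patient age, BMI, number of adverse
# intraoperative events, duration of a key surgical stage (min)], with outcomes.
historical_features = np.array([
    [45, 24, 0, 30], [67, 31, 2, 55], [52, 27, 1, 40],
    [71, 29, 3, 70], [38, 22, 0, 25], [60, 33, 2, 60],
])
historical_outcomes = ["C1", "C3", "C2", "C4", "C1", "C3"]

knn = NearestNeighbors(n_neighbors=3).fit(historical_features)

def predict_outcome(current_procedure_features):
    """Find the k most similar historical procedures and return the mode of their outcomes."""
    _, idx = knn.kneighbors([current_procedure_features])
    neighbor_outcomes = [historical_outcomes[i] for i in idx[0]]
    return mode(neighbor_outcomes)

print(predict_outcome([65, 30, 2, 58]))
```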

The electronic medical record (or any other suitable medical record, such as a paper medical record) may contain medical information of the patient (e.g., previous surgery of the patient, previous or current illness of the patient, allergies of the patient, illness of parents of the patient, illness of siblings of the patient, mental health of the patient, or any other medical information about the patient). The medical information may be organized in any suitable manner (e.g., a table, linked list, or any other suitable data structure may be used to organize the information). In some cases, other (non-medical) information related to the patient (e.g., the patient's location, diet, religion, ethnicity, occupation, fitness records, marital status, history of alcohol or smoking, or previous medication history) may be included (recorded) in the electronic medical record. In various embodiments, such information in the electronic medical record may be used to determine a predicted outcome of the surgical procedure.

In various embodiments, post-operative surgical reports may also be used to determine the outcome of the surgical procedure. The post-operative surgical report may include any suitable information related to the surgical procedure. For example, the report may include: the name of the surgical procedure, patient characteristics (as discussed above); the patient's medical history (including the patient's medical reports and other information related to the patient, as discussed above); information about surgical events occurring during the surgical procedure (including information about intraoperative surgical events, such as actions taken during the surgical procedure); and information about adverse or positive surgical events. In an example embodiment, the surgical event may include an action performed by the surgeon such as an incision, a suture, or any other activity. Adverse surgical events may include any event that may negatively impact surgery and the predicted outcome of the surgery, such as bleeding, tissue rupture, thrombosis, cardiac arrest, or any other adverse surgical event. A positive surgical event may include a determination that at least some steps of the surgical procedure are not necessary. For example, if it is determined during surgery that the patient does not have a tumor, removal of the tumor may not be necessary. In various embodiments, the information in the post-operative surgical report may include a surgical video clip depicting the surgical procedure, audio data, textual data, or any other suitable data recorded before, during, or after the surgical procedure.

In various embodiments, determining a predicted outcome based on at least one of a patient's characteristics, an electronic medical record, or a post-operative surgical report may be accomplished by analyzing historical surgical outcomes based on a plurality of parameters, such as patient characteristics, medical history data found in the electronic medical record or in various events, and event characteristics described in the post-operative surgical report.

In various embodiments, determining the prediction may include using a machine learning model (also referred to herein as an event-based machine learning model) that is trained based on intra-operative events to determine a prediction associated with a particular surgical procedure. Additionally, the event-based machine learning model may be used to predict the surgical outcome based on a variety of other parameters (in addition to intra-operative events), such as patient characteristics, the medical history of the patient, characteristics of one or more medical professionals performing the surgical procedure, or any other suitable parameters.

Fig. 35A illustrates an example event-based machine learning model 3513 that takes input 3510 and outputs predicted results 3515 of a surgical procedure. The input 3510 may include parameters 3523 as shown in fig. 35B, such as patient characteristics and information from medical records as previously discussed. Further, input 3510 can include information from post-operative surgical reports, which can include event data 3521, as shown in fig. 35B. In an example embodiment, the event data 3521 may include a list of events (e.g., events E1-EN), and surgical clips V1-VN corresponding to events E1-EN. In addition, the data 3521 in fig. 35B may include event start times T1A through TNA and completion times T1B through TNB. A surgical clip (e.g., V1) may be a set of frames of a surgical procedure corresponding to an event (e.g., E1). In an example embodiment, for an example surgical procedure, event E1 may be a transient event (e.g., an incision), for which T1A and T1B may be about the same time; event E2 may be an extended event (e.g., a stitch), for which T2A is the time when the stitch starts and T2B is the time when the stitch ends; and event EN may be the process of administering a drug to reverse anesthesia, with a corresponding start time TNA and finish time TNB.
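One plausible way to represent input 3510, with event data 3521 (events, start and completion times, clip references) alongside patient parameters 3523, is sketched below; the field names are illustrative rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IntraoperativeEvent:
    name: str                      # e.g., "incision", "suturing"
    start_time_s: float            # T*A: seconds from the start of the procedure
    end_time_s: float              # T*B: equals start_time_s for transient events
    clip_id: Optional[str] = None  # reference to the surgical clip V* depicting the event

@dataclass
class PredictionInput:
    """Mirrors input 3510: event data plus patient/medical-record parameters."""
    events: List[IntraoperativeEvent] = field(default_factory=list)
    patient_parameters: dict = field(default_factory=dict)

# Example input for an event-based outcome model (all values are illustrative).
example = PredictionInput(
    events=[
        IntraoperativeEvent("incision", 120.0, 120.0, clip_id="V1"),    # transient event
        IntraoperativeEvent("suturing", 3300.0, 3900.0, clip_id="V2"),  # extended event
    ],
    patient_parameters={"age": 65, "sex": "F", "history": ["hypertension"]},
)
```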

In various implementations, the event-based machine learning method may be trained using training examples, e.g., as described above. For example, the training examples may be based on historical data. In another example, the training examples may include information related to the surgical procedure (e.g., as described above), and markers indicating the results.

Aspects of the disclosed embodiments may include: a predicted outcome is determined using a trained machine learning model to predict a surgical outcome based on the identified intraoperative event and the identified features of the patient. For example, the intraoperative event may be a "cut", and one of the identified features of the patient may be "65 year old female". In an example embodiment, the identified intraoperative event and one or more identified features of the patient may be used as input data to a trained machine learning model. For example, the input data may be input 3510, as shown in FIG. 35B. In some cases, the additional input data to the trained machine learning model may include, in addition to the identified intraoperative events and the identified features of the patient, healthcare professional data, patient medical data, and any other suitable data that affects the outcome of the surgical procedure. For example, as described above, the event-based model may be trained using various training data that may include or be based on historical events. As described above, the disclosed embodiments may include using a trained machine learning model to predict surgical outcomes. In various embodiments, the predicted surgical outcome may be a probability distribution of a different outcome set, as shown, for example, in plot 3201A shown in fig. 33.

Aspects of embodiments for predicting risk after discharge may further include: features of a patient are identified and a prediction associated with a surgical procedure is determined based on the identified features of the patient. The prediction associated with the surgical procedure based on the identified patient characteristics may be determined using a suitable machine learning model, such as model 3513 shown in fig. 35A, for example.

In some embodiments, the identified patient characteristic may be based on pre-operative patient data (e.g., pre-operative blood test values, pre-operative vital sign signals, or any other pre-operative characteristic). Additionally or alternatively, the identified patient characteristics may be based on post-operative patient data (e.g., post-operative blood test values, post-operative vital sign signals, post-operative weight, or any other post-operative characteristics).

In various embodiments, identifying patient characteristics may include: the frames of the accessed surgical clips are analyzed using a machine learning model. The example machine learning model may be an image recognition algorithm, as previously described, for recognizing features within frames of a surgical video clip taken during a surgical procedure. For example, an image recognition algorithm may be used to identify features such as: the size of the anatomical structure being operated on, the size of the patient, the estimated age of the patient, the sex of the patient, the race of the patient, or any other feature associated with the patient. For example, as described above, a machine learning model for identifying patient features may be trained using training examples of historical surgical procedures (including associated historical surgical video clips) and corresponding historical patient features. Training of the machine learning model for identifying patient features may use any suitable method, as described above. In various embodiments, training the machine learning model may use training examples based on historical surgical clips, with labels indicating one or more patient features corresponding to each historical surgical clip.
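
The sketch below illustrates, under stated assumptions, how per-frame outputs of an image-recognition model could be aggregated into patient-level features. The stand-in frame_model is a hypothetical placeholder for whatever trained image-recognition algorithm is actually used; it returns fixed values here purely so the example runs.

import numpy as np

def frame_model(frame: np.ndarray) -> dict:
    """Placeholder for a trained image-recognition model.
    A real model would estimate these values from the pixel data of the frame."""
    return {"estimated_age": 64.0, "p_female": 0.8}

def identify_patient_features(frames: list) -> dict:
    """Aggregate per-frame estimates into patient-level features."""
    ages = [frame_model(f)["estimated_age"] for f in frames]
    p_female = [frame_model(f)["p_female"] for f in frames]
    return {
        "estimated_age": float(np.median(ages)),
        "sex": "female" if np.mean(p_female) > 0.5 else "male",
    }

# Example: ten dummy frames standing in for accessed surgical video frames.
dummy_frames = [np.zeros((224, 224, 3)) for _ in range(10)]
print(identify_patient_features(dummy_frames))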

Additionally or alternatively, identifying patient characteristics may be derived from an electronic medical record. For example, electronic medical records may be read (or analyzed) using a suitable computer-based software application, and patient characteristics may be identified from the read (analyzed) data. For example, if the electronic record includes "James is a 65 year old white male with lung disease," the computer-based software application may identify patient characteristics represented by the example record, such as "age: 65", "name: James", "gender: male", "medical condition: lung disease", and/or "ethnicity: white".
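
A minimal sketch of how a computer-based software application might extract such characteristics from a free-text record entry is shown below; the regular expressions and output field names are illustrative assumptions rather than a prescribed parsing scheme.

import re

def extract_patient_characteristics(record_text: str) -> dict:
    """Extract simple patient characteristics from a free-text medical record."""
    features = {}
    age = re.search(r"(\d{1,3})\s*year[- ]old", record_text, re.IGNORECASE)
    if age:
        features["age"] = int(age.group(1))
    name = re.match(r"\s*([A-Z][a-z]+)\s+is", record_text)
    if name:
        features["name"] = name.group(1)
    if re.search(r"\bfemale\b", record_text, re.IGNORECASE):
        features["gender"] = "female"
    elif re.search(r"\bmale\b", record_text, re.IGNORECASE):
        features["gender"] = "male"
    condition = re.search(r"with\s+([a-z ]+disease)", record_text, re.IGNORECASE)
    if condition:
        features["medical condition"] = condition.group(1).strip()
    return features

print(extract_patient_characteristics(
    "James is a 65 year old white male with lung disease"))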

Aspects of embodiments for predicting risk after discharge may further include: information identifying the achieved surgical outcome after the surgical procedure (also referred to herein as post-operative information) is received, and the machine learning model is updated by training the machine learning model using the received information. For example, machine learning algorithms and/or reinforcement learning algorithms may be used to update the machine learning model based on the received information. In an example embodiment, receiving the post-operative information may include: receiving visual or audio data during a physical examination following a surgical procedure, receiving laboratory results (e.g., blood test results, urine test results, medical imaging data, or any other suitable test) following a surgical procedure, receiving data related to a vital sign of a patient (e.g., a pulse of the patient, a blood pressure of the patient, or any other vital sign), and/or receiving an annotation from a healthcare provider (e.g., a physician performing a physical examination on the patient). In some cases, the received post-operative information may be used to determine the surgical outcome achieved. For example, the received information may be analyzed by a healthcare provider (e.g., a physician), and the physician may identify the achieved surgical result (e.g., the achieved surgical result may include a determination by the physician that the patient does not require any further medical intervention). Alternatively, the received post-operative information may be used by a suitable outcome-determination machine learning model to determine the achieved surgical outcome. In various embodiments, the machine learning model that determines the achieved outcome using post-operative information as input may be different from the machine learning model used to predict the outcome of the surgical procedure based on information obtained during the surgical procedure and other relevant information, such as patient characteristics, healthcare provider characteristics, or the medical history of the patient, as described above.

In various embodiments, the received information may be used to determine an achieved surgical outcome, and a machine learning model used to predict a surgical outcome based on the identified intraoperative events and the identified patient characteristics may be updated by training the machine learning method using the received information identifying the achieved surgical outcome. Training of the machine learning method may use any suitable method, as described above.

In some embodiments, the output of the machine learning model used to predict the surgical outcome may be a probability of the predicted outcome. In some cases, the model may be trained by comparing the probabilities output by the model to corresponding probabilities of predicted outcomes inferred from historical surgical data (e.g., historical surgical clip data) of historical surgical procedures. For example, using various historical surgical data, historical probabilities for a given outcome of a surgical procedure for a given type of surgical procedure may be obtained. In some cases, historical deviations (i.e., deviations between a historical surgery and a proposed sequence of events for the surgery) may be used to determine how the historical deviations affect the change in historical probability for a given outcome.

The historical probability values may be compared to the probability values returned by the machine learning method to determine the error of the method. In various embodiments, if the prediction returned by the machine learning method is a probability or a probability vector, the appropriate metric may be the difference between the probability and the historical probability or the difference between the probability vector and the historical probability vector. For example, a probability vector may be used to represent the predicted probabilities for a set of possible outcomes. For instance, if a set of predicted outcomes includes example outcomes C1 through C4 (such as "C1: paralyzed," "C2: died within three months," "C3: transfusion required within two weeks," "C4: no medical intervention required"), the probability vector for the outcome vector {C1, C2, C3, C4} may be {p1, p2, p3, p4}, where p1 through p4 indicate the probabilities of the outcomes C1 through C4, respectively.
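
For instance, the error between a predicted probability vector and a historical probability vector can be measured with a simple distance metric. The sketch below uses an L1 distance and a cross-entropy-style measure; the particular metrics and the numeric values are illustrative assumptions.

import numpy as np

outcomes = ["C1: paralyzed", "C2: died within three months",
            "C3: transfusion required within two weeks",
            "C4: no medical intervention required"]

predicted = np.array([0.05, 0.02, 0.13, 0.80])   # p1 .. p4 returned by the model
historical = np.array([0.03, 0.02, 0.15, 0.80])  # inferred from historical surgical data

for name, p, h in zip(outcomes, predicted, historical):
    print(f"{name}: predicted={p:.2f}, historical={h:.2f}")

l1_error = np.abs(predicted - historical).sum()
cross_entropy = -(historical * np.log(predicted + 1e-12)).sum()
print(f"L1 error: {l1_error:.3f}, cross-entropy: {cross_entropy:.3f}")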

Aspects of embodiments for predicting post-discharge risk may also include outputting the prediction in a manner that associates the prediction with the patient. In some embodiments, the process of outputting may include sending the prediction to a recipient. The recipient may include one or more healthcare professionals, patients, family members, or any other person, organization, or data storage device. The process of sending may include sending the information to any suitable electronic device using any suitable electronic method (e.g., using wireless or wired communication, as described above). For example, sending the prediction results may include: the prediction results are sent to a data receiving device (e.g., a laptop, smartphone, or any other suitable electronic device) associated with the healthcare provider. In some cases, sending the information may involve mailing (or personally delivering) a physical copy of a document detailing the prediction (e.g., a paper copy) or a copy stored on a CD-ROM, hard disk, DVD, USB drive, or any other electronic storage device. Additionally or alternatively, sending the prediction result may include sending the prediction result to at least one of a health insurance provider or a medical malpractice underwriter.

In various embodiments, outputting the prediction may be performed by associating the prediction with the patient. For example, the patient's name and/or any other suitable information related to the patient may be listed in a document describing the predicted outcome.

In some embodiments, sending the prediction result may include: an electronic medical record associated with the patient is updated. The process of updating the electronic medical record may include replacing or modifying any appropriate data in the electronic medical record. For example, updating the medical record may include changing the predicted outcome from "expected to move hands and feet two weeks after physical therapy" to "expected to remain paralyzed for the remainder of the patient's life."

Aspects of the disclosure may include: a data structure containing the proposed sequence of surgical events is accessed, and at least one particular intraoperative event is identified based on identification of a deviation between the proposed sequence of events of the surgical procedure identified in the data structure and the actual sequence of events detected in the accessed frame. The process of accessing the data structure may be performed by any suitable algorithm and/or machine learning model configured to identify deviations, as discussed in this disclosure. For example, a machine learning model may be used to access the data structure and output a deviation between the proposed sequence of surgical events and the actual events performed during the surgical procedure. For example, if during an actual incision event, the incision length is shorter than the incision described by the corresponding proposed event, such deviations may be identified by machine learning methods. The data structure containing the proposed sequence of surgical events may be any suitable data structure described herein, consistent with the disclosed embodiments. For example, the data structure may be a relational database having one or more database tables. The data structure may contain a proposed sequence of surgical events and may include: the name of the event, an image corresponding to the event, video data related to the event, or any other suitable data that may describe the event. The data structure may define the suggested sequence of events by assigning to each event a number associated with the order of the events in the sequence of events.
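
As one hedged illustration of such a data structure, the sketch below stores a proposed sequence of surgical events in a relational table (using Python's built-in sqlite3 module) and flags a simple deviation when the actual detected sequence departs from the proposed order. The table schema, event names, and deviation rule are assumptions for illustration only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE proposed_events (
    seq_order INTEGER PRIMARY KEY,  -- position in the proposed sequence
    name TEXT NOT NULL)""")
conn.executemany(
    "INSERT INTO proposed_events (seq_order, name) VALUES (?, ?)",
    [(1, "incision"), (2, "dissection"), (3, "resection"), (4, "suturing")])

proposed = [row[0] for row in conn.execute(
    "SELECT name FROM proposed_events ORDER BY seq_order")]

# Actual sequence of events detected in the accessed frames (illustrative).
actual = ["incision", "resection", "suturing"]

# Report proposed events that are missing or out of order in the actual sequence.
deviations = []
ai = 0
for name in proposed:
    if ai < len(actual) and actual[ai] == name:
        ai += 1
    else:
        deviations.append(f"proposed event not matched in order: {name}")
print(deviations)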

In various embodiments, identifying a deviation between a proposed sequence of events and an actual sequence of events for a surgical procedure may include using various methods discussed in this disclosure (e.g., as described in connection with figs. 26-28 and the related descriptions of those figures). In an example embodiment, the events of the surgical procedure may be identified by analyzing a surgical clip of the surgical procedure, as previously described. Identifying a deviation between the surgical procedure and the proposed sequence of events may include utilizing a machine learning method, as described in this disclosure. Identifying at least one particular intraoperative event based on the identification of the deviation may include: at least one actual event detected in the accessed frames that deviates from the proposed sequence of events for the surgical procedure is identified, and such identification may be performed by a machine learning method for identifying the deviation.

The deviation-based model may be trained using various training data including historical deviations. In an example embodiment, the historical deviation may be determined by evaluating the deviation of the historical sequence of events for an example historical surgical procedure of a given type (e.g., bronchoscopy) from a corresponding suggested sequence of events for the same type of surgical procedure. The deviation-based model may be trained using any suitable training process, such as described above.

In various embodiments, identifying the deviation comprises: a machine learning model is used that is trained to identify deviations from the suggested sequence of events based on the historical surgical video clips, the historical suggested sequence of events, and information identifying deviations from the historical suggested sequence of events in the historical video clips. Machine learning methods for identifying deviations using historical surgical video clips and historical suggested event sequences are described herein and are not repeated for the sake of brevity.

In various embodiments, identifying the deviation comprises: the frames of the surgical procedure (e.g., frames accessed by any suitable computer-based software application or healthcare professional for analyzing information within the frames, as discussed above) are compared to a reference frame depicting a proposed sequence of events. As previously described, the reference frame may be a historical frame taken during a historical surgical procedure. In an example embodiment, the video frames and the reference frames depicting the mandatory sequence of events may be synchronized according to an event (also referred to herein as a start event), which may be the same as (or substantially similar to) a corresponding start event in the mandatory (or suggested) sequence of events. In some cases, the frame depicting the beginning of the start event may be synchronized with a reference frame depicting the start event in the mandatory (suggested) event sequence. In some cases, the surgical events may first be associated with corresponding reference events in the mandatory sequence using any suitable method described above (e.g., using an image recognition algorithm for identifying the events). After the surgical event is linked to the corresponding reference event in the mandatory sequence, the frame depicting the beginning of the surgical event may be synchronized with the reference frame depicting the beginning of the corresponding mandatory event.
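
The sketch below illustrates one assumed way of synchronizing frames of the surgical video with reference frames: each detected event is matched to the reference event of the same name, and the frame indices are offset so that the start frames coincide. The pairing-by-name scheme and the sample frame indices are illustrative assumptions.

def synchronize(actual_events, reference_events):
    """Pair each actual event with the reference event of the same name and
    compute the frame offset that aligns their start frames.

    Both inputs are lists of (name, start_frame) tuples."""
    ref_start = {name: frame for name, frame in reference_events}
    alignment = {}
    for name, start_frame in actual_events:
        if name in ref_start:
            alignment[name] = start_frame - ref_start[name]  # offset in frames
    return alignment

actual = [("incision", 1500), ("suturing", 52000)]
reference = [("incision", 1200), ("dissection", 9000), ("suturing", 48000)]
print(synchronize(actual, reference))  # {'incision': 300, 'suturing': 4000}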

In various embodiments, identifying a deviation between a particular surgical procedure and a proposed sequence of events for that surgical procedure may be based on at least one of: a surgical tool detected in the accessed frame of the surgical short, a detected anatomical structure in the accessed frame, or an interaction between the detected surgical tool and the detected anatomical structure. In some cases, identifying the deviation may be based on abnormal fluid leakage conditions detected in the surgical video clips.

For example, if it is determined (e.g., using a machine learning method, using a visual object detector, using an indication from a healthcare professional, etc.) that a surgical tool is present in a particular anatomical region, the method may determine that a deviation has occurred. In some cases, the method may determine that a deviation has occurred if the surgical tool is present in a particular anatomical region (as identified in the surgical short) during a time (or time interval) of the surgical operation at which the surgical tool should not be present. Alternatively, in some cases, identifying the deviation may include: determining that the surgical tool is not in the particular anatomical region. For example, if a surgical tool is not present in a particular anatomical region during a certain time (or time interval) of a surgical procedure, the method may be configured to determine that a deviation has occurred.

In some cases, when it is determined (e.g., using machine learning methods, using visual object detectors, using indications from a healthcare professional, etc.) that an anatomical structure is present in the surgical clip, it may be further determined that a deviation has occurred. For example, the method may determine that a deviation has occurred if an anatomical structure is identified in a surgical clip during a time (or time interval) of the surgical procedure at which the anatomical structure should not be present. Alternatively, in some cases, identifying the deviation may include: determining that the anatomical structure is not present in the surgical clip. For example, if an anatomical structure is not present in the surgical clip during a certain time (or time interval) of the surgical procedure, the method may be configured to determine that a deviation has occurred.

Additionally or alternatively, identifying the deviation may include: interactions between the surgical tool and the anatomical structure are identified. The process of identifying interactions between the surgical tool and the anatomical structure may involve analyzing frames of the surgical procedure to identify interactions, as described above.

In various embodiments, if interactions between the surgical tool and the anatomical structure during the surgical procedure are identified, and such interactions are not suggested (or expected) for the reference surgical procedure (i.e., a surgical procedure following a mandatory (or suggested) sequence of events), the method may be configured to determine that a deviation has occurred. Alternatively, if no interaction between the surgical tool and the anatomical structure is identified (e.g., if no interaction exists during the surgical procedure), and the interaction is suggested for the reference surgical procedure, the method may be configured to determine that a deviation has occurred. The method may be configured to determine that there is no substantial deviation between the surgical procedure and the reference surgical procedure if there is (or is not) an interaction between the surgical tool and the anatomical structure in both the surgical procedure and the reference procedure.

Aspects of the present disclosure may also relate to identifying deviations based on abnormal fluid leakage conditions detected in a surgical video clip. As described above, the abnormal fluid leakage may include: bleeding, urine leakage, bile leakage, lymph leakage, or any other leakage, and may be detected as described above (e.g., by a corresponding machine learning model). For example, the method may determine that a deviation has occurred if an abnormal fluid leak is present in a particular anatomical region (as identified in a surgical clip) during a time (or time interval) of the surgical procedure at which the abnormal fluid leak should not be present. Alternatively, in some cases, identifying the deviation may include: it is determined that there is no abnormal fluid leakage in the specific anatomical region. For example, if during a certain time (or time interval) of a surgical procedure, there is no abnormal fluid leak in a particular anatomical region, the method may be configured to determine that a deviation has occurred.

Aspects of the disclosed embodiments may include: determining at least one action likely to improve the prediction result based on the accessed frames (e.g., frames of a surgical short), and providing a recommendation based on the determined at least one action. In various embodiments, determining at least one action may include using a suitable machine learning method for accessing and analyzing frames of a surgical procedure. In some examples, the machine learning model may be trained using training examples to determine actions and/or improvements to the outcome of the surgical procedure that are likely to improve based on information related to the current state of the surgical procedure. Examples of such training examples may include information related to the status of a particular surgical procedure, as well as markers indicating actions that may improve the outcome of the particular surgical procedure, and/or markers of likely improvement to the predicted outcome. Such labeling may be based on analysis of historical data associated with historical surgical procedures, based on user input, and the like. Some non-limiting examples of information related to the current state of the surgical procedure may include: images and/or video of a surgical procedure, information based on analysis of images and/or video of a surgical procedure, characteristics of a patient undergoing a surgical procedure, characteristics of a healthcare professional performing at least part of a surgical procedure, characteristics of a medical instrument used in a surgical procedure, characteristics of an operating room associated with a surgical procedure, intra-operative events occurring in a surgical procedure, a current time, a duration of a surgical phase in a surgical procedure, and so forth. Further, in some examples, the trained machine learning model may be used to analyze information related to the current state of the surgical procedure and determine the at least one action likely to improve the predicted outcome and/or likely improvement to the predicted outcome.
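
One way to realize such a recommendation step, sketched below under clearly labeled assumptions, is to score a set of candidate actions with a trained outcome model and keep only those actions whose predicted improvement over the current state is positive. The candidate actions, feature encoding, and the stand-in outcome_model are hypothetical and stand in for whatever trained model is actually used.

def outcome_model(state: dict) -> float:
    """Hypothetical stand-in for a trained model returning the predicted
    probability of a favorable outcome for a given surgical state."""
    score = 0.6
    if state.get("irrigation_applied"):
        score += 0.15
    if state.get("additional_suture"):
        score += 0.05
    return min(score, 1.0)

def recommend_actions(current_state: dict, candidate_actions: list) -> list:
    """Return candidate actions ranked by their predicted improvement."""
    baseline = outcome_model(current_state)
    recommendations = []
    for action in candidate_actions:
        hypothetical_state = {**current_state, action: True}
        improvement = outcome_model(hypothetical_state) - baseline
        if improvement > 0:
            recommendations.append((action, improvement))
    return sorted(recommendations, key=lambda x: -x[1])

state = {"irrigation_applied": False, "additional_suture": False}
print(recommend_actions(state, ["irrigation_applied", "additional_suture"]))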

Aspects of the disclosed embodiments may include providing suggestions before performing a particular action. The suggestion may be any suitable electronic notification as described herein and consistent with the disclosed embodiments. Alternatively, the recommendation may be any suitable audio, visual, or any other signal (e.g., a tactile signal such as a vibration) that may be transmitted to a healthcare professional (e.g., a surgeon administering the surgical procedure).

Various disclosed embodiments may include forgoing providing a recommendation when a likely improvement in the predicted outcome due to the determined at least one action is below a selected threshold. For example, if the likelihood of improvement is below 50%, then no advice may be provided. In some cases, an improvement in a first predicted outcome may be offset by an unfavorable second predicted outcome, and a suggestion for improving the first predicted outcome may not be provided. For example, if a first prediction is identified as "eliminating a patient's rash" and a second prediction is identified as "cardiac arrest," then even if there is a sufficiently high likelihood of improvement to the first prediction (e.g., there is a 99% chance of eliminating the patient's rash), no advice is provided due to the likelihood of the second prediction being "cardiac arrest" (even if the likelihood of the second outcome is small, e.g., 1%). Thus, choosing to provide a suggestion or forgo providing a suggestion may be based on one or more predicted outcomes. For example, the selected threshold may be based on one or more selected outcomes. For instance, if the first outcome is that a person is likely to continue to live for twenty years, and the second adverse outcome is cardiac arrest, then advice may still be provided when the likelihood of cardiac arrest is sufficiently low (e.g., below 30%). In some cases, the threshold may be selected based on a characteristic of the patient. For example, if a patient is overweight, the selected threshold for forgoing a recommendation to provide bariatric surgery may be lowered compared to the same threshold for a less overweight person. In some cases, determining the at least one action likely to improve the predicted outcome may also be based on a characteristic of the patient. For example, if the patient is elderly, bypass surgery may not be advised, whereas such surgery may be advised for a younger patient.
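
The decision to provide or forgo a recommendation, including the case where an improvement in one predicted outcome is offset by an unfavorable second predicted outcome, can be expressed as simple threshold logic. The numeric thresholds in this sketch (a 50% improvement threshold and an adverse-risk ceiling) are illustrative assumptions and would in practice be selected as described above.

def should_provide_suggestion(improvement_likelihood: float,
                              adverse_outcome_likelihood: float = 0.0,
                              improvement_threshold: float = 0.5,
                              adverse_threshold: float = 0.05) -> bool:
    """Return True if a suggestion should be provided.

    A suggestion is forgone when the likely improvement falls below the
    selected threshold, or when an unfavorable second predicted outcome
    (e.g., cardiac arrest) exceeds an acceptable risk level."""
    if improvement_likelihood < improvement_threshold:
        return False
    if adverse_outcome_likelihood > adverse_threshold:
        return False
    return True

# 99% chance of eliminating the rash, but a 1% chance of cardiac arrest with a
# strict 0.5% adverse ceiling: the suggestion is forgone.
print(should_provide_suggestion(0.99, adverse_outcome_likelihood=0.01,
                                adverse_threshold=0.005))  # False
print(should_provide_suggestion(0.80, adverse_outcome_likelihood=0.0))  # True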

Fig. 36 illustrates, via process 3601, aspects of an embodiment for predicting post-discharge risk. At step 3611, the process 3601 may include: video frames taken during a particular surgical procedure on a patient are accessed using any suitable means. For example, access may be via a wired or wireless network, via a machine learning model, or via any other means that allows data to be read/written. In some cases, accessing the frames may include access by a healthcare professional. In such a case, the healthcare professional may use an input device (e.g., a keyboard, a mouse, or any other input device) to access the frames.

At step 3613, the process 3601 may include: stored historical data identifying intraoperative events and associated results is accessed, as described above. At step 3615, the process 3601 may include: the accessed frames (e.g., frames of a surgical short) are analyzed and at least one particular intraoperative event in the accessed frames is identified based on information obtained from historical data. The process of analyzing the accessed frames and identifying specific intra-operative events in the accessed frames may be performed by a suitable machine learning model as described above.

At step 3617, the process 3601 may include: based on the information obtained from the historical data and the identified at least one intraoperative event, a predicted outcome associated with the particular surgical procedure is determined, as described above. The process 3601 may end with step 3619, which outputs the prediction in a manner that associates the prediction with the patient, as previously described.

It should be noted that process 3601 is not limited to steps 3611-3619 and that new steps may be added or some of steps 3611-3619 may be replaced or omitted. For example, step 3613 may be omitted.
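
Steps 3611 through 3619 of process 3601 can be organized as a simple pipeline, sketched below with hypothetical helper functions (access_frames, access_historical_data, identify_events, predict_outcome, output_prediction) standing in for the components described above; their bodies are placeholders so the example runs.

def access_frames(video_path: str) -> list:
    """Step 3611: access video frames taken during the surgical procedure."""
    return []  # placeholder; a real implementation would decode the video

def access_historical_data() -> dict:
    """Step 3613: access stored historical data of intraoperative events and outcomes."""
    return {"incision complication": {"readmission": 0.3}}

def identify_events(frames: list, historical: dict) -> list:
    """Step 3615: analyze the accessed frames and identify intraoperative events."""
    return ["incision complication"]  # placeholder detection result

def predict_outcome(events: list, historical: dict) -> dict:
    """Step 3617: determine a predicted outcome from the identified events."""
    prediction = {}
    for event in events:
        for outcome, p in historical.get(event, {}).items():
            prediction[outcome] = max(prediction.get(outcome, 0.0), p)
    return prediction

def output_prediction(prediction: dict, patient_id: str) -> None:
    """Step 3619: output the prediction in a manner associating it with the patient."""
    print({"patient": patient_id, "predicted_outcome": prediction})

frames = access_frames("procedure.mp4")
historical = access_historical_data()
events = identify_events(frames, historical)
output_prediction(predict_outcome(events, historical), patient_id="patient-001")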

As previously discussed, the present disclosure relates to methods and systems for predicting post-discharge risk and non-transitory computer-readable media that may contain instructions that, when executed by at least one processor, cause the at least one processor to perform operations that enable predicting post-discharge risk.

The disclosed embodiments may include any of the following bulleted features, alone or in combination with one or more other bulleted features, whether implemented as a method, executed by at least one processor, and/or stored as executable instructions on a non-transitory computer readable medium:

Accessing at least one video of a surgical procedure

Outputting the at least one video for display

Superimposing a surgical timeline over the at least one video output for display, wherein the surgical timeline includes markers identifying at least one of a surgical phase, an intra-operative surgical event, and a decision-making node

Enabling a surgeon to select one or more markers on the surgical timeline while viewing the playback of the at least one video, and thereby cause the display of the video to jump to a location associated with the selected marker

Wherein the indicia is encoded by at least one of color or criticality level

Wherein the surgical timeline includes textual information identifying portions of the surgical procedure

Wherein the at least one video comprises a compilation of short slices from a plurality of surgical procedures arranged in chronological order,

wherein the short slice compilation depicts complications from the plurality of surgical procedures

Wherein the one or more markers are associated with the plurality of surgical procedures and displayed on a common timeline

Wherein the one or more markers comprise a decision-making node marker corresponding to a decision-making node of the surgical procedure

Wherein selection of the decision-making node indicia enables a surgeon to view two or more alternative video clips from two or more corresponding other surgeries

Wherein the two or more video clips exhibit different behavior

Wherein the one or more markers comprise a decision-making node marker corresponding to a decision-making node of the surgical procedure

Wherein selection of the decision-making node indicia causes display of one or more alternative possible decisions related to the selected decision-making node indicia

Wherein one or more of the evaluation results associated with the one or more alternative possible decisions are displayed in conjunction with the display of the one or more alternative possible decisions

Wherein the one or more estimation results are results of analyzing a plurality of past surgical videos including respective similar decision-making nodes

Wherein information relating to the distribution of past decisions made at respective similar past decision-making nodes is displayed in conjunction with the display of the alternative possible decisions

Wherein the decision-making node of the surgical procedure is associated with a first patient and the respective similar past decision-making node is selected from past surgical procedures associated with patients having similar characteristics to the first patient

Wherein the decision-making node of the surgical procedure is associated with a first medical professional and the respective similar past decision-making node is selected from past surgical procedures associated with medical professionals having similar characteristics to the first medical professional

Wherein the decision-making node of the surgical procedure is associated with a first prior event in the surgical procedure and a similar past decision-making node is selected from past surgical procedures comprising prior events similar to the first prior event

Wherein the indicia comprise intraoperative surgical event indicia

Wherein selection of the intra-operative surgical event marker enables the surgeon to view alternative video clips from different surgical procedures

Wherein the alternative video clips present different ways of handling a selected intraoperative surgical event

Wherein the overlay on the video output is displayed before the surgical procedure depicted in the displayed video ends

Wherein the analysis is based on one or more electronic medical records associated with the plurality of past surgical videos

Wherein the respective similar decision-making nodes are similar to the decision-making node of the surgical procedure according to a similarity index

Wherein the analysis comprises an implementation using computer vision algorithms

Wherein the indicia relate to intraoperative surgical events and selection of the intraoperative surgical event indicia enables the surgeon to view alternative video clips from different surgical procedures

Accessing video clips to be indexed, the video clips to be indexed comprising clips of a particular surgical procedure

Analyzing the video clips to identify video clip locations associated with the surgical stage of the particular surgical procedure

Generating a stage tag associated with the surgical stage

Associating the stage tag with the video filmlet location

Analyzing the video clips to identify event locations for particular intra-operative surgical events within the surgical stage

Associating an event tag with the event location of the particular intraoperative surgical event

Storing event features associated with the particular intra-operative surgical event

Associating at least a portion of the video clips of the particular surgery with the phase tags, the event tags, and the event features in a data structure containing additional video clips of other surgeries

Wherein the data structure further includes respective phase tags, respective event tags, and respective event features associated with one or more of the other surgical procedures

Enabling a user to access the data structure by selecting a selected phase tag, a selected event tag, and a selected event feature of a video filmlet for display

Performing a lookup in the data structure of a surgical video clip matching the at least one selected stage tag, selected event tag and selected event features to identify a matching subset of stored video clips

Causing the display of the matching subset of stored video clips to the user, thereby enabling the user to view surgical clips of at least one intraoperative surgical event sharing the selected event feature while skipping playback of video clips lacking the selected event feature

Wherein enabling the user to view a surgical short sheet of at least one intraoperative surgical event having the selected event feature while skipping playback of portions of the selected surgical event lacking the selected event feature comprises: sequentially presenting to the user portions of surgical clips of a plurality of intraoperative surgical events sharing the selected event feature while skipping playback of portions of selected surgical events lacking the selected event feature

Wherein the stored event characteristics may include adverse results of the surgical event

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a surgical short of a selected adverse outcome while skipping playback of a surgical event lacking the selected adverse outcome

Wherein the stored event characteristics include surgical technique

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a surgical clip of a selected surgical technique while skipping playback of a surgical clip not associated with the selected surgical technique

Wherein the stored event characteristics include surgeon skill level

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a clip exhibiting a selected surgeon skill level while skipping playback of a clip lacking the selected surgeon skill level

Wherein the stored event characteristics include physical patient characteristics

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a filmlet exhibiting a selected physical patient characteristic while skipping playback of a filmlet lacking the selected physical patient characteristic

Wherein the stored event characteristics include the identity of the particular surgeon

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a clip showing activity of a selected surgeon while skipping playback of a clip lacking activity of the selected surgeon

Wherein the stored event characteristics include physiological responses

Wherein the step of causing the matching subset to be displayed comprises: enabling the user to view a clip exhibiting a selected physiological response while skipping playback of the clip lacking the selected physiological response

Wherein the step of analyzing the video clip to identify the video clip location associated with at least one of the surgical event or the surgical stage comprises: performing computer image analysis on the video footage to identify at least one of a start location of the surgical stage for playback or a start of a surgical event for playback

Accessing summary data relating to a plurality of surgical procedures similar to the particular surgical procedure

Presenting statistical information associated with the selected event feature to the user

Wherein the accessed video clips include video clips captured via at least one image sensor located in at least one of a position above an operating table, a surgical cavity of a patient, an organ of a patient, or a vasculature of a patient

Wherein identifying the video filmlet location is based on user input

Wherein the step of identifying the video filmlet location comprises: analyzing frames of the video filmlet using computer analysis

Wherein the computer image analysis comprises a neural network model trained using example video frames including previously identified surgical stages, thereby identifying at least one of a video filmlet location or a stage label

Determining stored event characteristics based on user input

Determining stored event characteristics based on computer analysis of video clips depicting the particular intra-operative surgical event

Wherein generating the stage label is based on computer analysis of a video clip depicting the surgical stage

Wherein the step of identifying a matching subset of the stored video clips comprises: determining a degree of similarity between the matching subset of stored videos and the selected event features using computer analysis

Accessing a specific surgical slice comprising a first set of frames associated with at least one intra-operative surgical event and a second set of frames not associated with surgical activity

Accessing historical data based on historical surgical clips of prior surgery, wherein the historical data comprises the following information: the information distinguishes portions of the surgical clip into frames associated with intra-operative surgical events and frames not associated with surgical activity

Based on the information of the historical data, distinguishing the first group of frames from the second group of frames in the particular surgical short sheet

Upon a user request, presenting to the user a summary of the first set of frames of the particular surgical clip while skipping presenting to the user the second set of frames

Wherein the information that distinguishes portions of the historical surgical snippet into frames associated with intra-operative surgical events includes an indicator of at least one of surgical tool presence or movement

Wherein the information that distinguishes portions of the historical surgical short into frames associated with the intra-operative surgical event includes detected tools and anatomical features in the associated frames

Wherein the user's request includes at least one type of indication of an intraoperative surgical event of interest, and

wherein the first set of frames depicts at least one intra-operative surgical event of the at least one type of intra-operative surgical event of interest

Wherein the user's request comprises a request to view a plurality of intra-operative surgical events in the particular surgical short, and

wherein the step of presenting the summary of the first set of frames to the user comprises: displaying the first group of frames chronologically and skipping the chronologically sequential frames of the second group

Wherein the historical data further includes historical surgical outcome data and corresponding historical reason data

Wherein the first set of frames comprises a cause frame set and a result frame set

Wherein the second set of frames comprises a set of intermediate frames

Analyzing the specific surgical clip to identify a surgical result and a corresponding cause of the surgical result, the identification being based on the historical result data and the corresponding historical cause data

Detecting the set of result frames in the particular surgical slice based on the analysis, the set of result frames being within a result phase of the surgical procedure

Detecting, based on the analysis, a set of cause frames in the particular surgical slice that are within a cause phase of the surgical procedure that is temporally distant from the outcome phase

Wherein the set of intermediate frames is in an intermediate stage between the set of cause frames and the set of result frames

Generating a cause and effect summary of the surgical clip

Wherein the cause and effect summary comprises the set of cause frames and the set of result frames and skips the set of intermediate frames

Wherein presenting the summary of the first set of frames to the user comprises presenting the cause and effect summary

Wherein the cause phase comprises a surgical phase in which the cause occurs

Wherein the cause frame set is a subset of the frames in the cause phase

Wherein the outcome stage comprises a surgical stage in which the outcome can be observed

Wherein the result frame set is a subset of frames in the result phase

Using a machine learning model trained to identify surgical results and corresponding causes of the surgical results using the historical data to analyze the particular surgical short sheet

Wherein the specific surgical short sheet depicts a surgical operation performed on the patient and taken by at least one image sensor in the operating room

Deriving the first set of frames for storage in the patient's medical record

Generating an index of the at least one intra-operative surgical event, and deriving the first set of frames comprises: generating a compilation of the first set of frames, the compilation including the index and being configured to enable viewing of the at least one intra-operative surgical event based on selection of one or more index items

Wherein the compilation comprises a series of frames of different intra-operative events stored as a continuous video

Associating the first set of frames with a unique patient identifier and updating a medical record comprising the unique patient identifier

Wherein the position of the at least one image sensor is at least one of above an operating table in the operating room or within the patient's body

Wherein the step of distinguishing the first set of frames from the second set of frames in the particular surgical slice comprises: analyzing the specific surgical short sheet to detect a medical instrument

Analyzing the specific surgical clip to detect anatomical structures

Analyzing the video to detect relative movement between the detected medical instrument and the detected anatomical structure

Differentiating the first set of frames from the second set of frames based on the relative movement

Wherein the first set of frames comprises surgically active frames and the second set of frames comprises non-surgically active frames

Wherein presenting the summary thereby enables a surgeon preparing for a surgical procedure to skip the non-surgical activity frames during review of the summarized video presentation

Wherein distinguishing the first set of frames from the second set of frames is further based on the detected relative position between the medical instrument and the anatomical structure

Wherein distinguishing the first set of frames from the second set of frames is further based on the detected interaction between the medical instrument and the anatomical structure

Wherein skipping the non-surgical activity frames comprises: skipping a majority of the captured frames of non-surgical activity

Accessing a repository of multiple sets of surgical video clips that reflect multiple surgical procedures performed on different patients and include intra-operative surgical events, surgical results, patient characteristics, surgeon characteristics, and intra-operative surgical event characteristics

Enabling a surgeon preparing for an envisaged surgical procedure to enter case-specific information corresponding to the envisaged surgical procedure

Comparing the case-specific information with data associated with multiple sets of the surgical video clips to identify a set of intra-operative events likely to be encountered during the envisaged surgical procedure

Using the case-specific information and the identified set of likely-to-encounter intraoperative events to identify specific frames in a particular set of the plurality of sets of surgical video clips that correspond to the identified set of intraoperative events

Wherein the identified particular frames include frames from the plurality of surgical procedures performed on different patients

Determining that the first and second sets of video clips from different patients contain frames associated with intra-operative events sharing a common characteristic

Omitting the second set from the compilation to be presented to the surgeon and including the first set in the compilation to be presented to the surgeon

Enabling the surgeon to view a presentation comprising the compilation containing frames from the different surgical procedures performed on different patients

Enabling display of a common surgical timeline along the presentation, the common surgical timeline including one or more chronological markers corresponding to one or more of the identified particular frames

Wherein the step of enabling the surgeon to view the presentation comprises: sequentially displaying discrete sets of video clips of the different surgical procedures performed on different patients

Wherein the step of sequentially displaying the discrete set of video clips comprises: displaying an index of the discrete sets of video clips such that the surgeon can select one or more of the discrete sets of video clips

Wherein the index includes a timeline that parses the discrete sets into corresponding surgical stages and a textual stage indicator

Wherein the timeline includes intraoperative surgical event markers corresponding to intraoperative surgical events

Wherein the surgeon is enabled to click on the intraoperative surgical event marker to display at least one frame depicting the corresponding intraoperative surgical event

Wherein the case specific information corresponding to the envisaged surgical procedure is received from an external device

Wherein the step of comparing the case specific information to data associated with the plurality of sets of surgical video clips comprises: using an artificial neural network to identify the set of intraoperative events likely to be encountered during the envisioned surgical procedure

Wherein the step of using the artificial neural network comprises: providing the case-specific information to the artificial neural network as input

Wherein the case specific information comprises characteristics of the patient associated with the envisaged surgery

Wherein the patient's characteristics are received from the patient's medical record

Wherein the case specific information comprises information relating to a surgical tool

Wherein the information related to the surgical tool comprises at least one of a tool type or a tool model

Wherein the common characteristics comprise characteristics of the different patients

Wherein the common characteristic comprises an intra-operative surgical event characteristic of the envisaged surgical operation

Wherein the step of determining that the first and second sets of video clips from different patients contain frames associated with intra-operative events sharing common features comprises: identifying the common features using an implementation of a machine learning model

Training the machine learning model using example video clips to determine that two sets of video clips share the common feature

Wherein the step of implementing the machine learning model comprises: implementing the trained machine learning model

Training a machine learning model to generate an index of the repository based on: the intra-operative surgical event, the surgical outcome, the patient characteristic, the surgeon characteristic, and an intra-operative surgical event characteristic

Generating the index of the repository

Wherein the step of comparing the case specific information with the data associated with the plurality of sets comprises searching the index

Analyzing the frames of the surgical short to identify anatomical structures in the first set of frames

Accessing first historical data based on an analysis of first frame data taken from a first set of prior surgeries

Analyzing the first set of frames using the first historical data and using the identified anatomical structure to determine a first surgical complexity level associated with the first set of frames

Analyzing the frames of the surgical short sheet to identify a medical tool, the anatomical structure and an interaction between the medical tool and the anatomical structure in a second set of frames

Accessing second historical data based on analysis of second frame data taken from a second set of prior surgeries

Analyzing the second set of frames using the second historical data and using the identified interactions to determine a second surgical complexity level associated with the second set of frames

Wherein the step of determining the complexity of the first surgical procedure further comprises: identifying a medical tool in the first set of frames

Wherein determining the second surgical complexity level is based on an elapsed time from the first set of frames to the second set of frames

Wherein determining at least one of the first or second complexity levels is based on a physiological response

Determining a skill level exhibited by a healthcare provider in the surgical short sheet

Wherein determining at least one of the first complexity level or the second complexity level is based on the determined skill level exhibited by the healthcare provider

Determining that the first surgical complexity is less than a selected threshold, determining that the second surgical complexity exceeds the selected threshold, and in response to determining that the first surgical complexity is less than the selected threshold and determining that the second surgical complexity exceeds the selected threshold, storing the second set of frames in a data structure while bypassing the first set of frames from the data structure

Wherein identifying the anatomical structure in the first set of frames is based on identification of a medical tool and a first interaction between the medical tool and the anatomical structure

Tagging the first set of frames with the first surgical complexity level

Tagging the second set of frames with the second surgical complexity level

Generating a data structure comprising the first set of frames with the first label and the second set of frames with the second label to enable a surgeon to select the second surgical complexity level and thereby cause the second set of frames to be displayed while skipping displaying the first set of frames

Determining at least one of the first surgical complexity level or the second surgical complexity level using a machine learning model trained to identify surgical complexity levels using frame data taken from a previous surgical procedure

Wherein determining the second surgical complexity level is based on events occurring between the first set of frames and the second set of frames

Wherein determining at least one of the first surgical complexity or the second surgical complexity is based on a condition of the anatomical structure

Wherein determining at least one of the first surgical complexity or the second surgical complexity is based on analysis of electronic medical records

Wherein determining the first surgical complexity is based on an event occurring after the first set of frames, wherein determining at least one of the first surgical complexity or the second surgical complexity is based on a skill level of a surgeon associated with the surgical short slice

Wherein determining the second surgical complexity level is based on an indication to call an additional surgeon after the first set of frames

Wherein determining the second surgical complexity level is based on an indication to administer a particular drug after the first set of frames

Wherein the first historical data comprises a machine learning model trained using the first frame data taken from a first set of prior surgeries

Wherein the first historical data comprises an indication of a statistical relationship between a particular anatomical structure and a particular surgical complexity level

Receiving visual data tracking the surgical procedure in progress from an image sensor located in the surgical operating room

Accessing a data structure containing information based on historical surgical data

Analyzing the visual data of the ongoing surgery using the data structure to determine an estimated time of completion of the ongoing surgery

Accessing a schedule of the surgical operating room, the schedule including scheduled times associated with completion of the ongoing surgical procedure

Calculating, based on the estimated completion time of the surgical procedure in progress, whether an expected completion time is likely to result in a difference relative to the scheduled time associated with the completion

Outputting a notification regarding the calculated difference, thereby enabling subsequent users of the surgical operating room to adjust their schedules accordingly

Wherein the notification comprises an updated operating room schedule

Wherein the updated operating room schedule enables queued healthcare professionals to prepare for subsequent surgical procedures

Sending the notification electronically to a device associated with a subsequent scheduled user of the surgical operating room

Determining the extent of said discrepancy relative to said scheduled time associated with said completion

Outputting the notification in response to a first determined degree of the discrepancy

Forgoing outputting the notification in response to a second determined degree of the discrepancy

Determining whether the expected completion time is likely to result in a delay of at least a selected threshold amount of time relative to the scheduled time associated with the completion

Outputting the notification in response to determining that the expected completion time is likely to result in a delay of at least the selected threshold amount of time

Forgoing outputting the notification in response to determining that the expected completion time is unlikely to result in a delay of at least the selected threshold amount of time

Wherein determining the estimated completion time is based on one or more stored characteristics associated with a healthcare professional performing the ongoing surgical procedure

Updating a historical average completion time based on the determined actual completion time of the ongoing surgical procedure

Wherein the image sensor is positioned above the patient

Wherein the image sensor is positioned on a surgical tool

Wherein the analyzing step further comprises: detecting a characteristic event in the received visual data, accessing the information based on historical surgical data to determine an expected completion time of the surgical procedure after occurrence of the characteristic event in the historical surgical data, and determining the estimated completion time based on the determined expected completion time

Training a machine learning model using historical visual data to detect the feature events

Training the machine learning model using historical visual data to estimate completion time,

wherein the step of calculating the estimated completion time comprises: implementing the trained machine learning model

Determining the estimated completion time using an average historical completion time

Detecting medical tools in the visual data

Wherein calculating the estimated completion time is based on detected medical tools

Wherein the analyzing step further comprises: detecting anatomical structures in the visual data

Wherein calculating the estimated completion time is based on the detected anatomical structure

Wherein the analyzing step further comprises: detecting interaction between an anatomical structure and a medical tool in the visual data

Wherein calculating the estimated completion time is based on detected interactions

Wherein the analyzing step further comprises: determining a surgeon's skill level in the visual data

Wherein calculating the estimated completion time is based on the determined skill level

Accessing video frames taken during a surgical procedure on a patient

Analyzing the video frames taken during the surgical procedure to identify at least one medical instrument, at least one anatomical structure and at least one interaction between the at least one medical instrument and the at least one anatomical structure in the video frames

Accessing a database of claim criteria associated with medical instruments, anatomical structures, and interactions between medical instruments and anatomical structures

Comparing at least one interaction between the identified at least one medical instrument and the at least one anatomical structure with information in the claim criteria database to determine at least one claim criteria associated with the surgical procedure

Outputting the at least one claim criterion for use in obtaining insurance claims for the surgical procedure

Wherein the at least one output claim criterion comprises a plurality of output claim criteria

Wherein at least two of the plurality of output claim criteria are based on different interactions with a common anatomical structure

Wherein the at least two output claim criteria are determined based in part on the detection of two different medical instruments

Wherein determining the at least two claim criteria is further based on analysis of post-operative surgical reports

Wherein the video frames are taken from an image sensor positioned above the patient

Wherein the video frames are taken from an image sensor associated with the medical device

Updating the database by associating the at least one claim criterion with the surgical procedure

Generating a correlation between the processed claim criteria and at least one of a plurality of medical instruments in the historical video snippets, a plurality of anatomical structures in the historical video snippets, or a plurality of interactions between medical instruments and anatomical structures in the historical video snippets

Updating the database based on the generated correlations

Wherein the step of generating a correlation comprises: implementing statistical models

Using a machine learning model to detect the at least one of a plurality of medical instruments, a plurality of anatomical structures, or a plurality of interactions between medical instruments and anatomical structures in the historical video snippets
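The correlation-generation step above could be realized with simple co-occurrence statistics over historical video snippets. Below is one possible statistical model, assuming the detections (instruments, anatomical structures, interactions) have already been produced by an upstream machine learning model; the data shapes and labels are assumptions for illustration.

```python
from collections import Counter, defaultdict
from typing import Iterable

def generate_correlations(
    records: Iterable[tuple[str, list[str]]],
) -> dict[str, dict[str, float]]:
    """For each processed claim criterion, estimate P(entity | criterion)
    from historical (criterion, detected entities) pairs, where an entity
    may be an instrument, an anatomical structure, or an interaction label."""
    counts: dict[str, Counter] = defaultdict(Counter)
    totals: Counter = Counter()
    for criterion, entities in records:
        totals[criterion] += 1
        counts[criterion].update(set(entities))
    return {
        criterion: {e: c / totals[criterion] for e, c in entity_counts.items()}
        for criterion, entity_counts in counts.items()
    }

# Fabricated labels, for illustration only:
history = [
    ("criterion_A", ["grasper", "gallbladder", "grasper-gallbladder-retraction"]),
    ("criterion_A", ["clip_applier", "cystic_duct"]),
    ("criterion_B", ["stapler", "colon", "stapler-colon-anastomosis"]),
]
correlations = generate_correlations(history)
```

The resulting correlations could then be written back to the claim criteria database, as the surrounding limitations describe.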

Analyzing the video frames taken during the surgical procedure to determine a condition of an anatomical structure of the patient

Determining the at least one claim criterion associated with the surgical procedure based on the determined condition of the anatomical structure

Analyzing the video frames taken during the surgical procedure to determine a change in a condition of an anatomical structure of the patient during the surgical procedure

Determining the at least one claim criterion associated with the surgical procedure based on the determined change in condition of the anatomical structure

Analyzing the video frames taken during the surgical procedure to determine usage of a particular medical device

Determining the at least one claim criterion associated with the surgical procedure based on the determined use of the particular medical device

Analyzing the video frames taken during the surgical procedure to determine the type of use of the particular medical device

Determining at least a first claim criterion associated with the surgical procedure in response to a first determined type of use

Determining at least a second claim criterion associated with the surgical procedure in response to a second determined type of use, the at least first claim criterion being different from the at least second claim criterion

Receiving processed claim criteria associated with the surgical procedure, and updating the database based on the processed claim criteria

Wherein the processed claim criterion is different from the corresponding one of the at least one claim criterion

Analyzing the video frames taken during the surgical procedure to determine the amount of a particular type of medical supply used during the surgical procedure

Determining the at least one claim criterion associated with the surgical procedure based on the determined amount
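One way to realize the comparison of identified interactions against a claim-criteria database is a keyed lookup on (instrument, structure, interaction) signatures, optionally refined by device usage or supply amounts. The sketch below illustrates this under stated assumptions; the table contents, keys, and criterion identifiers are fabricated placeholders, not real claim codes.

```python
from typing import NamedTuple

class DetectedInteraction(NamedTuple):
    instrument: str
    structure: str
    interaction: str
    usage_amount: float = 0.0   # e.g., quantity of a supply type, if relevant

# Hypothetical claim-criteria database keyed by interaction signature.
CLAIM_CRITERIA_DB = {
    ("clip_applier", "cystic_duct", "clipping"): "criterion_001",
    ("stapler", "colon", "anastomosis"): "criterion_002",
}

def determine_claim_criteria(detections: list[DetectedInteraction]) -> list[str]:
    """Compare identified interactions with the database and return the
    associated claim criteria for output."""
    criteria = []
    for d in detections:
        key = (d.instrument, d.structure, d.interaction)
        if key in CLAIM_CRITERIA_DB:
            criteria.append(CLAIM_CRITERIA_DB[key])
    return criteria
```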

Receiving input of an identifier of the patient

Receiving input of an identifier of a healthcare provider

Receiving input of a surgical clip of a surgical procedure performed on the patient by the healthcare provider

Analyzing a plurality of frames of the surgical clip to derive image-based information for populating a post-operative report of the surgical procedure

Causing the post-operative report of the surgical procedure to be populated with the derived image-based information

Analyzing the surgical clip to identify one or more phases of the surgical procedure and to identify characteristics of at least one of the identified phases

Wherein the derived image-based information is based on the identified at least one phase and the identified characteristics of the at least one phase

Analyzing the surgical clip to associate a name with the at least one phase

Wherein the derived image-based information comprises the name associated with the at least one phase

Determining at least a start of the at least one phase

Wherein the derived image-based information is based on the determined start

Associating a time stamp with the at least one phase

Wherein the derived image-based information comprises the time stamp associated with the at least one phase

Sending data to the healthcare provider, the sent data comprising the patient identifier and the derived image-based information

Analyzing the surgical clip to identify at least one recommendation for post-operative treatment

Providing the identified at least one recommendation

Wherein populating the post-operative report of the surgical procedure is configured to enable a healthcare provider to alter at least part of the derived image-based information in the post-operative report

Wherein populating the post-operative report of the surgical procedure is configured to cause at least part of the derived image-based information to be identified in the post-operative report as automatically generated data

Analyzing the surgical clip to identify surgical events within the surgical clip and to identify characteristics of the identified surgical events

Wherein the derived image-based information is based on the identified surgical event and the identified characteristic

Analyzing the surgical clip to determine an event name of the identified surgical event

Wherein the derived image-based information includes the determined event name

Associating a time stamp with the identified surgical event

Wherein the derived image-based information includes the time stamp

Providing the derived image-based information in a form enabling updating of the electronic medical record

Wherein the derived image-based information is based in part on user input

Wherein the derived image-based information comprises a first portion associated with a first part of the surgical procedure and a second portion associated with a second part of the surgical procedure, and further comprising: receiving a preliminary post-operative report

Analyzing the preliminary post-operative report to select a first location and a second location within the preliminary post-operative report, the first location being associated with the first part of the surgical procedure and the second location being associated with the second part of the surgical procedure

Inserting the first portion of the derived image-based information into the selected first location, and inserting the second portion of the derived image-based information into the selected second location
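The preceding limitations describe analyzing a preliminary post-operative report to select locations and inserting portions of derived image-based information there. Below is a minimal sketch of one way to do so, assuming the locations are marked by section headings in a plain-text report; the heading names, the `[auto-generated]` flag, and the timestamps are hypothetical examples.

```python
import re

def insert_derived_information(preliminary_report: str,
                               derived_portions: dict[str, str]) -> str:
    """Insert derived image-based information at selected locations of a
    preliminary post-operative report. Locations are assumed here to be
    section headings such as 'Procedure:'; a production system would need
    a more robust way to select locations."""
    report = preliminary_report
    for heading, text in derived_portions.items():
        pattern = re.compile(rf"^{re.escape(heading)}[ \t]*$", re.MULTILINE)
        # Append the derived text after the matching heading, flagged so it
        # can be identified as automatically generated data.
        report = pattern.sub(
            lambda m, t=text: f"{m.group(0)}\n[auto-generated] {t}",
            report, count=1)
    return report

filled = insert_derived_information(
    "Procedure:\n\nFindings:\n",
    {"Procedure:": "Phase 'dissection' began at 00:12:31.",
     "Findings:": "Adhesions identified during phase 'exposure'."})
```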

Analyzing the surgical clip to select at least part of at least one frame of the surgical clip

Causing the selected at least part of the at least one frame of the surgical clip to be included in the post-operative report of the surgical procedure

Receiving a preliminary post-operative report

Analyzing the preliminary post-operative report and the surgical clip to select the at least part of the at least one frame of the surgical clip

Receiving a preliminary post-operative report

Analyzing the preliminary post-operative report and the surgical clip to identify at least one inconsistency between the preliminary post-operative report and the surgical clip

Providing an indication of the identified at least one inconsistency
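Identifying inconsistencies between the preliminary report and the surgical clip could be as simple as comparing the set of events mentioned in the report with the set of events detected in the footage. The sketch below assumes both sides have been mapped to a shared event vocabulary by upstream analysis; the event labels are fabricated examples.

```python
def find_inconsistencies(reported_events: set[str],
                         detected_events: set[str]) -> dict[str, set[str]]:
    """Identify inconsistencies between a preliminary post-operative report
    and events detected in the surgical clip."""
    return {
        "reported_but_not_detected": reported_events - detected_events,
        "detected_but_not_reported": detected_events - reported_events,
    }

indications = find_inconsistencies(
    reported_events={"cystic_duct_clipping", "gallbladder_removal"},
    detected_events={"gallbladder_removal", "bleeding_controlled"},
)
```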

Accessing video frames taken during a particular surgical procedure

Accessing stored data identifying a suggested sequence of events for the surgical procedure

Comparing the accessed frames to the suggested sequence of events to identify an indication of a deviation between the particular surgical procedure and the suggested sequence of events for the surgical procedure

Determining the name of the intraoperative surgical event associated with the deviation

Providing a notification of the deviation, the notification comprising the name of the intraoperative surgical event associated with the deviation

Wherein identifying the indication of the deviation and providing the notification occur in real time during the surgical procedure

Receiving an indication that a particular action is to occur in the particular surgical procedure

Using the suggested sequence of events to identify a preliminary action preceding the particular action

Determining, based on analysis of the accessed frames, that the identified preliminary action has not occurred

Identifying the indication of the deviation in response to determining that the identified preliminary action has not occurred
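The limitations above describe detecting a deviation when a preliminary action, placed before a particular action in the suggested sequence of events, has not occurred. A minimal sketch of that check follows, assuming detected actions and the suggested sequence share a common label set; the cholecystectomy-style labels are placeholders.

```python
def find_preliminary_action(suggested_sequence: list[str],
                            particular_action: str) -> str | None:
    """Return the action that the suggested sequence of events places
    immediately before the particular action, if any."""
    try:
        idx = suggested_sequence.index(particular_action)
    except ValueError:
        return None
    return suggested_sequence[idx - 1] if idx > 0 else None

def check_for_deviation(suggested_sequence: list[str],
                        detected_actions: set[str],
                        particular_action: str) -> str | None:
    """Identify an indication of a deviation when the preliminary action
    has not occurred before the particular action is about to occur."""
    preliminary = find_preliminary_action(suggested_sequence, particular_action)
    if preliminary is not None and preliminary not in detected_actions:
        # The notification would include the name of the intraoperative
        # surgical event associated with the deviation.
        return f"Deviation: '{preliminary}' not detected before '{particular_action}'."
    return None

notice = check_for_deviation(
    suggested_sequence=["expose_triangle", "establish_critical_view_of_safety", "clip_cystic_duct"],
    detected_actions={"expose_triangle"},
    particular_action="clip_cystic_duct",
)
```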

Wherein the specific surgical procedure is cholecystectomy

Wherein the suggested sequence of events is based on a critical view of safety

Wherein the specific surgical procedure is appendectomy

Wherein the specific surgical procedure is a hernia repair

Wherein the specific surgical procedure is a hysterectomy

Wherein the specific surgical procedure is a radical prostatectomy

Wherein the particular surgical procedure is a partial nephrectomy, and the deviation comprises skipping identification of the hilum

Wherein the particular surgical procedure is a thyroidectomy, and the deviation comprises skipping identification of the recurrent laryngeal nerve

Identifying a set of frames associated with the deviation

Wherein the step of providing the notification comprises: displaying the identified set of frames associated with the deviation

Wherein the indication that the particular action is to occur is based on input from a surgeon performing the particular surgical procedure

Wherein the indication that the particular action is to occur is entry of a particular medical instrument into a selected region of interest

Wherein the step of identifying the deviation comprises: determining that a surgical tool is in a particular anatomical region

Wherein the specific surgical procedure is a segmental colectomy

Wherein the deviation comprises skipping performance of an anastomosis

Wherein identifying the indication of the deviation is based on an elapsed time associated with an intraoperative surgical event

Receiving video clips of a surgical procedure performed on a patient by a surgeon in an operating room

Accessing at least one data structure comprising image-related data characterizing a surgical procedure

Analyzing the received video clips using the image-related data to determine a presence of a surgical decision-making node

Accessing, in the at least one data structure, a correlation between outcomes and specific actions taken at the decision-making node

Outputting, to a user, a suggestion to take the particular action based on the determined presence of the decision-making node and the accessed correlation
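The suggestion step could be realized by looking up, for the detected decision-making node, stored correlations between candidate actions and desired outcomes, and surfacing the best-correlated action along with its estimated likelihood of success. The sketch below is an illustration under that assumption; the node names, correlation values, and confidence cutoff are fabricated.

```python
def suggest_action(decision_node: str,
                   correlations: dict[str, dict[str, float]],
                   minimum_confidence: float = 0.6) -> str | None:
    """Given a detected decision-making node, look up the stored correlation
    between candidate actions and desired outcomes, and output a suggestion
    for the action with the highest estimated success rate."""
    candidates = correlations.get(decision_node, {})
    if not candidates:
        return None
    action, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence < minimum_confidence:
        return None
    return (f"Suggested action at '{decision_node}': {action} "
            f"(estimated likelihood of desired outcome: {confidence:.0%}).")

# Fabricated correlation data keyed by decision-making node:
correlations = {"fluid_leakage_node": {"convert_to_open": 0.58, "apply_hemostatic_agent": 0.82}}
print(suggest_action("fluid_leakage_node", correlations))
```

The confidence value returned alongside the action corresponds to the later limitations about including a confidence level with the recommendation.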

Wherein the instructions are configured to cause the at least one processor to perform the operations in real time during the surgical procedure

Wherein the user is a surgeon

Wherein the decision-making nodes are determined by analysis of a plurality of different historical procedures, wherein different courses of action occur after a common surgical situation

Wherein the video clips comprise images from at least one of an endoscope and an in-vivo camera

Wherein the recommendation comprises a recommendation to perform a medical examination

Receiving results of the medical examination

Outputting to the user a second suggestion to take a particular action based on the determined presence of the decision-making node, the accessed correlations, and the received results of the medical examination

Wherein the specific action comprises bringing an additional surgeon to the operating room

Wherein the decision-making node comprises at least one of: improper access or exposure, retraction of an anatomical structure, misjudgment of an anatomical structure, or fluid leakage

Wherein the recommendation includes a level of confidence that a desired surgical outcome will occur if the particular action is taken

Wherein the suggestion includes a confidence level that a desired result would not occur if the particular action were not taken

Wherein the recommendation is based on the time elapsed since a particular point in the surgical procedure

Wherein the recommendation includes an indication that an undesirable surgical outcome is likely to occur if the particular action is not taken

Wherein the recommendation is based on the skill level of the surgeon

Wherein the recommendation is based on a surgical event occurring in the surgical procedure prior to the decision-making node

Wherein the specific action comprises a plurality of steps

Wherein determining the presence of the surgical decision-making node is based on at least one of a detected physiological response of the anatomical structure and a motion associated with a surgical tool

Receiving a vital sign of the patient, and

Wherein the recommendation is based on the accessed correlations and the vital signs

Wherein the surgeon is a surgical robot, and the suggestion is provided in the form of instructions to the surgical robot

Wherein the recommendation is based on a condition of a tissue of the patient

Wherein the suggestion of the particular action comprises creating a stoma

Receiving image data of a surgical operation from at least one image sensor in an operating room

Analyzing the received image data to determine the identity of the anatomical structure and to determine the condition of the anatomical structure as reflected in the image data

Selecting a contact force threshold associated with the anatomical structure, the selected contact force threshold being based on the determined condition of the anatomical structure

Receiving an indication of an actual contact force on the anatomical structure

Comparing the indicated actual contact force with the selected contact force threshold

Outputting a notification based on determining that the indicated actual contact force exceeds the selected contact force threshold

Wherein the contact force threshold is associated with a tension level

Wherein the contact force threshold is associated with a level of compression

Wherein the actual contact force is associated with contact between a medical instrument and the anatomical structure

Wherein the indication of the actual contact force is estimated based on an image analysis of the image data

Wherein the step of outputting the notification comprises: providing real-time alerts to a surgeon performing the surgical procedure

Wherein the notification is an instruction of the surgical robot

Determining from the image data that the surgical procedure is in a combat mode

Wherein the notification is suspended during the combat mode

Determining from the image data that the surgeon is operating in a mode that ignores contact force notifications, and, based on the determination that the surgeon is operating in the mode that ignores contact force notifications, at least temporarily suspending further contact force notifications

Wherein selecting the contact force threshold is based on a contact position between the anatomical structure and a medical instrument

Wherein selecting the contact force threshold is based on a contact angle between the anatomical structure and a medical instrument

Wherein selecting the contact force threshold comprises: providing the condition of the anatomical structure as an input to a regression model, and selecting the contact force threshold based on an output of the regression model

Wherein selecting the contact force threshold is based on a table of anatomical structures and corresponding contact force thresholds

Wherein the selection of the contact force threshold is based on an action performed by a surgeon

Wherein the indication of the actual contact force is received from a surgical tool

Wherein the indication of actual contact force is received from a surgical robot

Determining the condition of the anatomical structure in the image data using a machine learning model trained with training examples

Selecting the contact force threshold using a machine learning model trained with training examples
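The contact force limitations combine (a) selecting a threshold from the identified anatomical structure and its determined condition, by table lookup or a regression model, with (b) comparing the indicated actual force and notifying when it is exceeded, subject to suspension modes. The sketch below assumes a simple table; the structures, conditions, and threshold values in newtons are fabricated for illustration.

```python
# Hypothetical contact force thresholds (newtons) per (structure, condition).
CONTACT_FORCE_TABLE = {
    ("liver", "healthy"): 4.0,
    ("liver", "cirrhotic"): 2.5,
    ("bowel", "inflamed"): 1.5,
}

def select_threshold(structure: str, condition: str, default: float = 3.0) -> float:
    """Select a contact force threshold based on the identified anatomical
    structure and its determined condition (table lookup; a regression
    model could be substituted)."""
    return CONTACT_FORCE_TABLE.get((structure, condition), default)

def contact_force_notification(structure: str,
                               condition: str,
                               actual_force: float,
                               notifications_suspended: bool = False) -> str | None:
    """Compare the indicated actual contact force with the selected threshold
    and output a notification when it is exceeded, unless notifications are
    currently suspended (e.g., during a combat mode or an ignore mode)."""
    if notifications_suspended:
        return None
    threshold = select_threshold(structure, condition)
    if actual_force > threshold:
        return (f"Contact force {actual_force:.1f} N on {structure} exceeds "
                f"threshold {threshold:.1f} N for condition '{condition}'.")
    return None
```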

Receiving image data associated with a first event during a surgical procedure from at least one image sensor arranged to take an image of the surgical procedure

Determining a predicted outcome associated with the surgical procedure based on the received image data associated with the first event

Receiving image data associated with a second event during a surgical procedure from at least one image sensor arranged to take an image of the surgical procedure

Determining, based on the received image data associated with the second event, a change in the prediction result that reduces the prediction result below a threshold value

Accessing data structures based on image-related data of prior surgical procedures

Identifying suggested remedial actions based on the accessed image-related data

Output the suggested remedial action
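These limitations describe updating a predicted outcome as events are observed and, when it drops below a threshold, identifying a remedial action indicated as likely to raise it back above that threshold. The toy sketch below illustrates the control flow only; the score adjustments, the action-effect estimates, and the fallback action are assumptions standing in for models trained on historical data.

```python
def update_prediction(current: float, event_adjustments: dict[str, float], event: str) -> float:
    """Adjust a predicted-outcome score in [0, 1] after an observed event.
    The adjustment values are fabricated for illustration."""
    return max(0.0, min(1.0, current + event_adjustments.get(event, 0.0)))

def choose_remedial_action(predicted: float,
                           threshold: float,
                           action_effects: dict[str, float]) -> str | None:
    """When the predicted outcome drops below the threshold, identify a
    remedial action estimated to raise it back above the threshold."""
    if predicted >= threshold:
        return None
    viable = {a: e for a, e in action_effects.items() if predicted + e >= threshold}
    return max(viable, key=viable.get) if viable else "request assistance from another surgeon"

adjustments = {"first_event_ok": +0.05, "major_bleeding": -0.30}
score = update_prediction(0.85, adjustments, "first_event_ok")
score = update_prediction(score, adjustments, "major_bleeding")
action = choose_remedial_action(score, threshold=0.7,
                                action_effects={"revise_procedure": +0.15,
                                                "call_senior_surgeon": +0.25})
```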

Wherein the suggested remedial action includes suggesting that the surgeon leave the surgery to rest

Wherein the suggested remedial action includes a suggestion to request assistance from another surgeon

Wherein the suggested remedial action comprises a revision of the surgical procedure

Wherein the prediction outcome comprises a likelihood of readmission

Wherein determining the change in the prediction is based on the magnitude of bleeding

Wherein identifying the remedial action is based on an indication that the remedial action is likely to elevate the predicted result above the threshold

Wherein the step of identifying the remedial action comprises: using a machine learning model that is trained using historical examples of remedial actions and surgical outcomes to identify the remedial actions

Wherein the step of determining the prediction result comprises: determining a predicted outcome using a machine learning model trained based on historical surgical videos and information indicative of surgical outcomes corresponding to the historical surgical videos

Wherein the step of determining the prediction result comprises: identifying an interaction between a surgical tool and an anatomical structure, and determining the prediction result based on the identified interaction

Wherein determining the prediction result is based on a skill level of a surgeon depicted in the image data

Wherein determining a change in the predicted outcome is based on the skill level

Further comprising, in response to the prediction decreasing below a threshold, updating a scheduling record associated with a surgical operating room associated with the surgical procedure

Wherein determining a change in the predicted outcome is based on the time elapsed between a particular point in the surgical procedure and the second event

Wherein determining the prediction result is based on a condition of an anatomical structure depicted in the image data

Determining the condition of the anatomical structure

Wherein determining the change in the prediction result is based on a change in color of at least a portion of the anatomical structure

Wherein determining the change in the prediction result is based on a change in appearance of at least a portion of the anatomical structure

Receiving intra-cavity video of a surgical procedure in real time

Analyzing frames of the intra-cavity video to determine an abnormal fluid leakage situation in the intra-cavity video

Enacting remedial action when the abnormal fluid leakage situation is determined

Wherein the fluid comprises at least one of blood, bile or urine

Wherein the analyzing step comprises: analyzing the frames of the intra-cavity video to identify blood splatter and at least one characteristic of the blood splatter

Wherein the selection of the remedial action depends on the at least one characteristic of the identified blood splatter

Wherein the at least one characteristic is associated with a source of the blood splatter

Wherein the at least one characteristic is associated with an intensity of the blood splatter

Wherein the at least one characteristic is associated with the volume of blood splatter

Wherein the step of analyzing the frames of the intra-cavity video comprises: determining a characteristic of the abnormal fluid leakage situation

Wherein the selection of the remedial action depends on the determined characteristic

Wherein the characteristic is associated with the volume of fluid leakage

Wherein the characteristic is associated with a color of the fluid leak

Wherein the characteristic is associated with a fluid type of the fluid leak

Wherein the characteristic is related to the fluid leakage rate

Storing the intra-cavity video and, upon determining the abnormal fluid leakage situation, analyzing previous frames of the stored intra-cavity video to determine a source of the leakage

Wherein the step of formulating the remedial action comprises: providing a notification of the leak source

Wherein the step of determining the source of the leak comprises: identifying a ruptured anatomical organ

Determining a flow rate associated with the fluid leakage situation

Wherein formulating the remedial action is based on the flow rate

Determining a fluid loss associated with the fluid leakage situation

Wherein formulating the remedial action is based on the fluid loss

Wherein the step of analyzing the frames of the intra-cavity video to determine the abnormal fluid leakage situation in the intra-cavity video comprises: determining whether a determined fluid leakage situation is an abnormal fluid leakage situation, and

enacting the remedial action in response to determining that the determined fluid leakage situation is an abnormal fluid leakage situation

Forgoing formulating the remedial action in response to determining that the determined fluid leakage situation is a normal fluid leakage situation
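The fluid leakage limitations characterize a detected leak (fluid type, volume, flow rate, source), decide whether the situation is abnormal, and only then enact a remedial action. A minimal sketch of that decision structure follows; the normal-range limits, fluid types, and notification wording are fabricated assumptions rather than clinically validated values.

```python
from dataclasses import dataclass

@dataclass
class LeakCharacteristics:
    fluid_type: str        # e.g., "blood", "bile", "urine"
    volume_ml: float
    flow_rate_ml_s: float
    source: str | None     # e.g., an identified ruptured organ, if known

# Fabricated normal-range limits per fluid type: (volume in ml, rate in ml/s).
NORMAL_LIMITS = {"blood": (50.0, 0.5), "bile": (5.0, 0.1), "urine": (5.0, 0.1)}

def is_abnormal(leak: LeakCharacteristics) -> bool:
    """Decide whether the determined fluid leakage situation is abnormal,
    based on volume and flow rate relative to assumed normal limits."""
    volume_limit, rate_limit = NORMAL_LIMITS.get(leak.fluid_type, (0.0, 0.0))
    return leak.volume_ml > volume_limit or leak.flow_rate_ml_s > rate_limit

def remedial_action(leak: LeakCharacteristics) -> str | None:
    """Enact a remedial action for abnormal leakage; forgo it otherwise."""
    if not is_abnormal(leak):
        return None
    target = leak.source or "undetermined source (review stored prior frames)"
    if leak.fluid_type == "blood" and leak.flow_rate_ml_s > 2.0:
        return f"Notify surgeon: high-rate bleeding from {target}; prepare hemostasis."
    return f"Notify surgeon: abnormal {leak.fluid_type} leakage from {target}."
```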

Wherein the intra-cavity video depicts a surgical robot performing the surgical procedure, and the remedial action comprises sending instructions to the robot

Accessing video frames taken during a particular surgical procedure on a patient

Accessing stored historical data identifying intraoperative events and associated results

Analyzing the accessed frames and identifying at least one specific intra-operative event in the accessed frames based on information obtained from the historical data

Determining a predicted outcome associated with the particular surgical procedure based on information obtained from the historical data and the identified at least one intraoperative event

Outputting the prediction result in a manner that associates the prediction result with the patient

Wherein identifying the at least one particular intraoperative event is based on at least one of: a detected surgical tool in the accessed frame, a detected anatomical structure in the accessed frame, an interaction between a surgical tool and an anatomical structure in the accessed frame, or a detected abnormal fluid leak condition in the accessed frame

Wherein the at least one particular intra-operative event in the accessed frame is identified using a machine learning model trained using example training data

Wherein determining the prediction outcome is based on at least one of a characteristic of the patient, an electronic medical record, or a post-operative surgical report

Wherein the predicted outcome associated with the particular surgical procedure is determined based on intra-operative events using a machine learning model trained using training examples

Wherein the step of determining the prediction result comprises: predicting a surgical outcome based on identified intra-operative events and identified features of the patient using a trained machine learning model

Receiving information identifying surgical results achieved after the surgical procedure, and updating the machine learning model by training the machine learning model using the received information

Identifying characteristics of the patient

Wherein the prediction outcome is determined based also on the identified patient characteristics

Wherein the patient characteristics are derived from electronic medical records

Wherein the step of identifying the patient characteristics comprises: analyzing the accessed frames, using a machine learning model trained with training examples of historical surgical procedures and corresponding historical patient characteristics, to identify the patient characteristics

Wherein the prediction result comprises a risk assessment of at least one of: post-discharge accidents, post-discharge adverse events, post-discharge complications, or readmission
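The prediction limitations combine identified intraoperative events with patient characteristics to output a post-discharge risk. As a stand-in for the trained machine learning model referenced above, the sketch below uses a simple logistic form; the event labels, feature flags, and weights are fabricated for illustration only.

```python
import math

# Fabricated weights for intraoperative events and patient characteristics.
EVENT_WEIGHTS = {"major_bleeding": 1.2, "bile_leak": 0.8, "conversion_to_open": 0.6}
FEATURE_WEIGHTS = {"age_over_70": 0.5, "diabetes": 0.4}
BIAS = -2.0

def predict_readmission_risk(events: set[str], patient_features: set[str]) -> float:
    """Estimate a post-discharge readmission risk in [0, 1] from identified
    intraoperative events and patient characteristics (logistic form used
    purely as a stand-in for a trained model)."""
    score = BIAS
    score += sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)
    score += sum(FEATURE_WEIGHTS.get(f, 0.0) for f in patient_features)
    return 1.0 / (1.0 + math.exp(-score))

risk = predict_readmission_risk({"major_bleeding"}, {"age_over_70", "diabetes"})
# The output would then be associated with the patient, e.g., by updating an
# electronic medical record or transmitting it to the healthcare provider.
```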

Accessing a data structure containing a proposed sequence of surgical events

Wherein the identification of the at least one specific intra-operative event is based on the identification of a deviation between a proposed sequence of events of the surgical procedure identified in the data structure and an actual sequence of events detected in the accessed frame

Wherein the identification of the deviation is based on at least one of: a detected surgical tool in the accessed frame, a detected anatomical structure in the accessed frame, or an interaction between a surgical tool and an anatomical structure in the accessed frame

Wherein the identifying of the deviation comprises using a machine learning model trained to identify deviations from the suggested sequence of events based on the historical surgical video clips, the historical suggested sequence of events, and information identifying deviations from the historical suggested sequence of events in the historical video clips

Wherein identifying the deviation comprises: comparing the accessed frame with a reference frame depicting a suggested sequence of events

Wherein outputting the prediction results comprises: updating an electronic medical record associated with a patient

Wherein outputting the prediction results comprises: transmitting the prediction result to a data receiving device associated with the healthcare provider

Determining at least one action likely to improve the prediction result based on the accessed frames

Providing a suggestion based on the determined at least one action

The systems and methods disclosed herein relate to unconventional improvements over conventional methods. The description of the disclosed embodiments is not intended to be exhaustive or to be limited to the precise forms or embodiments disclosed. Modifications and variations to the disclosed embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. Additionally, the disclosed embodiments are not limited to the examples discussed herein.

The foregoing description has been presented for purposes of illustration. The description is not intended to be exhaustive or to be limited to the precise form or embodiments disclosed. Modifications and variations to the disclosed embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure may be implemented solely as hardware.

Computer programs based on the written description of the specification and the methods are within the skill of the software developer. Various programming techniques may be used to create the various functions, scripts, programs or modules. For example, programs, scripts, functions, program segments or program modules may be designed by or with the help of languages including: JAVASCRIPT, C++, JAVA, PHP, PYTHON, RUBY, PERL, BASH, or other programming or scripting language. One or more of such software segments or modules may be integrated into a computer system, non-transitory computer-readable medium, or existing communication software. The program, module, or code may also be embodied or copied as firmware or circuit logic.

Moreover, although illustrative embodiments have been described herein, the scope may include any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements of the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Further, the steps of the disclosed methods may be modified in any manner, including reordering steps or inserting or deleting steps. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
