Automated diagnostic report preparation

Document No.: 1246812 · Publication date: 2020-08-18

Reading note: This technology, "Automated diagnostic report preparation", was created on 2018-12-24 by A. Saalbach, M. Grass, T. Brosch, J. von Berg, and S. Young. Abstract: In conventional systems for preparing reports on findings in medical images, the actual formulation of the report is not, or only inadequately, supported by computer-based systems. Despite efforts to improve such systems by automatic reporting, in previous systems the error rate was too high and/or the operation of the system was too complex. The present application proposes to provide text predictions to a user on a display device. The text prediction is based on an a-priori analysis of the image content of the medical image displayed to the user, performed at least when the user activates a text field shown on the display device via an input unit. The displayed text predictions are selected from a predefined set of text modules associated with the analysis results.

1. A system (100) for computer-aided diagnosis report preparation based on medical image data (120) provided by medical imaging (130), comprising:

an image processing unit (140) adapted to read out at least image content from the provided image data (120),

an image analysis unit (150) adapted to analyze the read out image content based on predefined characteristics and to derive an analysis result,

a display device (160) having at least one text field (170, 171, 172, 173) for displaying written text to a user,

an input unit (180) for interacting with the user to activate at least the text field (170, 171, 172, 173), and

a text generation unit (190) adapted to provide a text prediction, e.g. a visual display of individually selectable text suggestions, to a user on the display device (160) at least when the user activates the text field (170, 171, 172, 173) via the input unit (180), wherein the text prediction is selected from a predefined set of text modules (200) associated with the analysis result.

2. The system (100) of claim 1,

wherein the system (100) or the text generation unit (190) is further adapted to display an initial text prediction when the text field (170, 171, 172, 173) is activated but there is still no written text content.

3. The system (100) of claim 1 or 2,

wherein the system (100) or the text generation unit (190) is further adapted to create or change the displayed text prediction in response to an input of a text module part by the user via the input unit (180).

4. The system (100) according to any one of the preceding claims,

wherein the system (100) or the text generation unit (190) is further adapted to automatically complete the displayed text prediction into written text content in the text field (170, 171, 172, 173) when the user confirms, by a corresponding input via the input unit (180), that the displayed text prediction is to be completed.

5. The system (100) of claim 4,

wherein the system (100) or the text generation unit (190) is further adapted to automatically complete the text prediction in at least one of the following cases:

i) the text field (170, 171, 172, 173) is still completely unfilled, or is completely unfilled after deletion of already written text content, wherein, as a reaction to activation of the text field (170, 171, 172, 173), an initial text prediction is displayed,

ii) the user enters at least an initial letter matching at least one of the predefined text modules into the text field (170, 171, 172, 173), wherein the matching text module (200) is automatically completed word by word or sentence by sentence as a reaction to the entry of the letter, and

iii) the user enters at least one identified initial word that matches at least one of the predefined text modules (200), wherein matching text modules are automatically completed on a paragraph-by-paragraph basis as a reaction to entering the word.

6. The system (100) according to any one of the preceding claims,

wherein the text prediction comprises at least one text module (200) tagged with a computation identifier associated with a class of images, and wherein the image content is computationally associated with a respective class of images.

7. The system (100) according to any one of the preceding claims,

wherein the text prediction comprises a display portion (175) indicating a probability that the suggested text module (200) matches the read image content.

8. The system (100) according to any one of the preceding claims,

wherein the text prediction comprises a text module (200) naming and/or describing the applied medical imaging method.

9. The system (100) according to any one of the preceding claims,

wherein the system (100) is further adapted to analyze the image quality of the provided image data and, in case of an image quality considered insufficient, the text prediction comprises a text module (200) containing at least disclaimers and/or suggestions for additional or alternative diagnostic methods or examinations to be applied.

10. The system (100) according to any one of the preceding claims,

wherein the system further comprises a learning unit (220) adapted to change and/or expand the predefined set of text modules (200) in response to a user input via the input unit (180).

11. The system (100) of claim 10,

wherein the learning unit (220) is further adapted to change and/or extend an association between at least one of the predefined text modules (200) and the read out image content.

12. A method for computer-aided preparation of a diagnostic report based on medical image data provided by medical imaging, comprising:

reading (S1) at least image content from the provided image data,

associating (S2) the read image content with a predefined class of images that is computationally associated with at least one particular finding, and

displaying (S3) a text prediction to a user preparing the report on the display device (160) when the user activates at least a text field (170, 171, 172, 173) shown on the display device (160), wherein the text prediction comprises at least a predefined text module (200) tagged with a computation identifier associated with the predefined class of images.

13. The method of claim 12,

wherein a displayed text module (200) of the text prediction is changed or automatically completed into written text as a reaction to the user's input via an input unit, which written text is then shown in the text field (170, 171, 172, 173) and/or stored in non-volatile form for preparing the diagnostic report.

14. A computer program element for controlling a system (100) according to any one of claims 1 to 11, which, when being executed by a processing unit, is adapted to perform the method steps of claim 12 or 13.

15. A computer readable medium having stored thereon the computer program element of claim 14.

Technical Field

The invention relates to computer-aided diagnosis report preparation or creation based on automated evaluation of medical images. In particular, systems for computer-aided diagnosis report preparation, and associated methods, computer program elements, and computer-readable media are discussed.

Background

Reporting findings from observations or abnormalities found in medical (particularly radiological) images represents a significant amount of effort in diagnostic radiology; it requires time and thus incurs costs. These reports must also be written and/or digitally recorded and typically become part of the patient's records. Recently, automatic analysis of such images by electronic image processing methods has advanced considerably. However, this has not, or has hardly, improved reporting itself, i.e. the workflow for creating the report. Consequently, dictation and speech recognition are still widely used by radiologists for reporting today. Various proposals have been made to reduce the time and/or cost required.

US patent application US 2009/0171871 A1 discloses computer-aided detection, review and diagnosis utilizing different learning methods. A fuzzy system is used to map the findings to a diagnostic report constructed using a formal language.

Furthermore, US patent application US 2006/0274928 A1 discusses a system for automatically analyzing medical images and computing a diagnosis. Upon selection of a diagnosis by a user, a diagnostic report is electronically generated.

Furthermore, in US patent application US 2016/0350919 A1, a deep learning model is selected for automated image recognition of a particular medical condition on image data and applied to the image data to identify characteristics of the particular medical condition. For reporting, the report content may be pre-selected in the graphical user interface.

Further, US patent US 9177110 B1 describes various systems and methods for improved report interaction and generation.

Furthermore, US patent application US 2006/0190256 A1 describes a digital processing device that receives inherently ambiguous user input, wherein the device interprets the received user input against a vocabulary to produce candidates, such as words.

However, such a method may be further improved.

Disclosure of Invention

It is an object of the present invention to provide an improved (in particular more efficient) way of providing computer-assisted diagnosis report preparation. The object of the invention is solved by the subject matter of the appended independent claims, wherein further embodiments are incorporated in the dependent claims.

According to a first aspect of the present invention, a system for computer-aided diagnosis report preparation or creation is provided, wherein the report preparation is based on an automatic evaluation of medical image data provided by medical or, in particular, radiological imaging (such as X-ray, computed tomography, magnetic resonance imaging, ultrasound imaging, etc.). Thus, the system is particularly adapted to electronically assist a radiologist or other physician in preparing or creating a report on an observation or abnormality that has been found, for example, by evaluating the image.

The system comprises:

- an image processing unit adapted to read out at least image content from the provided image data.

After one or more images have been created by one of the imaging methods listed above by way of example, and preferably provided to a healthcare management system, they are processed and/or analyzed by means of computer-assisted image processing, wherein handcrafted image analysis features or deep learning algorithms may be used.

In this description, a system unit such as the image processing unit may generally be understood as part of a controller, e.g. a software program, part of a software program or a software module, or a correspondingly configured electronic circuit, such as a graphics processor or graphics card. Although functional designations are used for the units in this description, this does not necessarily imply that they are physically separate units. Rather, several of the functions may be combined in one system unit.

- an image analysis unit adapted to analyze the read-out image content based on predefined or trained characteristics and to derive an analysis result from the analyzed image content.

In other words, the image analysis unit may be capable of detecting, for example, a medical abnormality or a characteristic of an observation (such as an indicator for a disease) in the read-out image content. It may thus determine at least one possible disease derived at least from the read-out image content. Furthermore, metadata can be read from the image and also analyzed.

During or after image processing, report-relevant characteristics (particularly optically detectable observations and/or anomalies) that may indicate a possible disease may be determined by image analysis, in particular by disease detection algorithms using handcrafted image features, computer vision techniques, or deep learning algorithms. For example, benign/malignant tumor characterization or pulmonary embolism detection may be performed.

- a display device having, or adapted to display, at least one text field for displaying written text, text prompts or the like to a user, such as a radiologist or other physician.

The text field may constitute or be part of a graphical user interface representing a report mask. Ideally, it is displayed adjacent to one or more of the processed and/or analyzed images so that the radiologist can view the images or sequence of images when writing the report.

- an input unit for interacting with the user to activate at least the text field.

For example, the input unit may be a keyboard, a touch panel, a voice input engine including a microphone, and the like. In the simplest case, the report is created by typing text via a conventional computer keyboard.

- a text generation unit adapted to provide a text prediction, e.g. a visual display of individually selectable text suggestions, to a user on the display device at least when the user activates the text field via the input unit, wherein the text prediction is selected from a predefined set of text modules associated with the analysis result.

In this description, a text prediction may be understood as a visual display of text suggestions that are individually selectable at run time or during text entry, respectively. Generally, the text suggestions depend on the situation and/or context. They may contain individual words, phrases, complete sentences, etc. in established medical terminology, such as descriptions of findings, recommended treatments, follow-up examinations, and so on. Preferably, the respective suggestions are derived directly from the read-out image content, and the user (i.e. the radiologist) may refer back to the corresponding image portions, e.g. via heat maps or the like, in order to enable the analysis algorithm to learn the text module by machine learning. For example, the text modules may be populated or learned, via appropriate data interfaces, from publicly available databases that use a limited set of standardized categories to describe findings within an image, such as the NIH MeSH vocabulary (see https://www.nlm.nih.gov/mesh/), or from commercially available dictionaries such as SNOMED CT (see http://www.snomed.org/snomed-ct). Of course, the text modules may also be predefined or learned via an offline interface. One conceivable data representation of such text modules is sketched after the next paragraph.

Depending on the image analysis, the current portion of the report, and/or the last user input, several text suggestions may be displayed simultaneously. In this case, the text prediction contains several suggestions that are displayed to the user at the same time (e.g., as a list of individual terms or phrases), where individual suggestions may be displayed in corresponding rows.
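Purely by way of illustration, such a predefined set of text modules, each tagged with the image classes it is associated with, might be represented as in the following Python sketch; the module texts and class tags are invented assumptions, not taken from this application.

from dataclasses import dataclass

@dataclass(frozen=True)
class TextModule:
    text: str                  # word, phrase, or complete sentence
    image_classes: frozenset   # computation identifiers (tags)

TEXT_MODULES = [
    TextModule("Cardiac silhouette is enlarged.",
               frozenset({"chest_xray_cardiomegaly"})),
    TextModule("No focal consolidation.",
               frozenset({"chest_xray_normal"})),
]

def modules_for(image_class: str) -> list:
    # Pre-select the modules associated with the analyzed image class.
    return [m for m in TEXT_MODULES if image_class in m.image_classes]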

The invention has the advantageous effect that the time required for reporting is reduced, especially compared to manual reporting such as dictation, speech recognition, or unsupported typing. There are also advantages compared to reports generated entirely in the background, since potential corrections of the text due to insufficient image recognition can still be taken into account during the run time of the text generation unit. Thus, despite a high degree of automation, the user retains a constant possibility of intervention. This also reduces the probability of errors in the textual content of the report. In addition, if the system is adapted for machine learning, it can be improved steadily by teaching, since text input is required for reporting anyway.

In an embodiment, the system or in particular the text generation unit is further adapted to display an initial text prediction when the text field is activated but there is still no written text content.

In this case, the text field may still be empty but activated; e.g., a term or phrase may be suggested and displayed that is appropriate for the determined possible disease and/or that, from experience, is a suitable start for a report describing the disease.

Optionally, the system or in particular the text generation unit may further be adapted to create or change the displayed text prediction in response to an input of a text module part by the user via the input unit.

Thus, the displayed text prediction continuously reacts to the text input. The text input and the text prediction may be visually distinguished from each other by visual highlighting.

In an embodiment, the system or in particular the text generation unit may further be adapted to, preferably automatically, complete the displayed text prediction into written text content in the text field when the user confirms, by a corresponding input via the input unit, that the displayed text prediction is to be completed.

Thus, the suggested text content does not have to be typed in completely, but can be completed automatically by means of a short input confirmation, such as a single keystroke.

Optionally, the system or in particular the text generation unit is further adapted to automatically complete the text prediction in at least one of the following cases:

i) in the first case, the text field is still completely unfilled, or is completely unfilled after deletion of already written text content. In response to activating the text field, an initial text prediction is displayed to the user.

The initial text prediction may be, for example, an introductory portion of a report that has been marked by the system as an appropriate introduction, where this may ideally be learned from previous reports made in the context of the determined disease.

ii) in the second case, the user enters at least an initial letter matching at least one of the predefined text modules into the text field. The matching text module is automatically completed word by word or sentence by sentence as a reaction to the entry of the letter and, preferably, a short confirmation.

Without the user having to formulate the text himself, entire sentence portions of the report may thus be completed automatically by selection from the suggestions and/or short confirmations.

iii) in the third case, the user enters at least one identified initial word that matches at least one of the predefined text modules, wherein the matching text module is automatically completed paragraph by paragraph as a reaction to entering the word.

This may also be done automatically by selection from suggestions and/or short confirmations; a minimal sketch of these three completion modes follows below.
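The following Python sketch illustrates the three auto-completion cases; the module texts and the deliberately simplified prefix matching are invented assumptions, not the claimed implementation.

MODULES = [
    "Cardiac silhouette is enlarged.",
    "Cardiothoracic ratio exceeds 0.5.",
    "No focal consolidation.",
]

def suggest(field_text: str) -> list:
    if not field_text:
        # Case i): empty (or emptied) but activated field -> initial prediction.
        return MODULES
    prefix = field_text.lower()
    # Case ii): initial letters entered -> word/sentence-wise completion.
    # Case iii): a whole initial word entered -> paragraph-wise completion,
    # which a real system might realize by chaining follow-up modules.
    return [m for m in MODULES if m.lower().startswith(prefix)]

# suggest("")     -> all three modules (initial prediction)
# suggest("Card") -> the two modules beginning with "Card"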

In an embodiment, the text prediction comprises at least one text module tagged with a computation identifier associated with at least one class of images, wherein the image content is computationally associated with a respective class of images.

The association between a given tag and a corresponding class of images may be learned or known from example images previously tagged in a previous or current reporting workflow. For example, the system may also use a convolutional neural network (CNN) for classification of the images, wherein such a classification may distinguish between, for example, normal chest X-ray images and chest X-ray images indicative of cardiac hypertrophy. Based on the predictions, respectively their probabilities, the system may derive suggestions from the terminology and/or text modules. In further operation, the system may learn from user input, such that one or more user corrections (i.e., the selection or input of a deviating suggested term and/or text module) change the selection and/or the displayed probability of suggestions that were initially displayed due to a high initially computed probability. Through this learning process, the text suggestions made for the same analyzed image may change over time. Thus, the association between a given label and a corresponding type of image may be learned from user input.
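By way of illustration only, such a CNN-based image classification could look roughly as follows; this is a minimal PyTorch sketch in which the architecture, input size, and class count are invented assumptions, not the network of this application.

import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    # Toy classifier distinguishing e.g. "normal" vs. "cardiac hypertrophy".
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))   # class logits

# Class probabilities for a (dummy) single-channel image:
# probs = torch.softmax(ChestXrayCNN()(torch.randn(1, 1, 224, 224)), dim=1)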

Optionally, the text prediction may include a display portion indicating a probability that the suggested text module matches the read image content and/or the determined likely illness.

For example, the text prediction may be displayed as a list in which individual suggestions can be selected line by line, where each suggestion is also given a probability in percent. Ideally, when displayed, the lines are sorted by this probability, which is computed in the background by the system. The probability indicator may be arranged before or after the displayed text suggestion. This may also show the user that the current automatic image processing considers several possible diagnoses, so that, in this case, a more accurate analysis by a human expert may be appropriate. If the system is capable of machine learning, individual or repeated preferences for less or more likely text suggestions may cause the system to assume different probability scores in the future. Of course, there may also be feedback to the image analysis unit, so that it too adapts its results to the user's selection.
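One conceivable way to rank and format such suggestions is sketched below; descending order is shown here merely as an alternative, whereas the embodiment of fig. 3 sorts in ascending order.

def ranked_suggestions(module_probs: dict) -> list:
    # Format each suggestion as one selectable line with a percentage.
    order = sorted(module_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{text}  ({prob:.0%})" for text, prob in order]

# ranked_suggestions({"Cardiomegaly.": 0.72, "Normal heart size.": 0.18})
# -> ['Cardiomegaly.  (72%)', 'Normal heart size.  (18%)']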

In an embodiment, the text prediction comprises a text module naming and/or describing the applied medical imaging method.

When one or more images are loaded into and/or processed in the system, information may also be transmitted about which imaging method was used and/or under which conditions the images were taken. This information may then be suggested to the user as a text prediction.

Optionally, the system (in particular the image analysis unit) may further be adapted to analyze the image quality of the provided image data; in case of an image quality considered ambiguous and/or insufficient, the text prediction comprises a text module containing at least disclaimers and/or suggestions for additional or alternative diagnostic methods or examinations to be applied.

Thus, the radiologist may indicate that no clear diagnosis was possible with the imaging method used, or note that additional examinations are recommended for validation or clarification.

In an embodiment, the system may further comprise a learning unit adapted to change and/or expand the predefined set of text modules in response to a user input via the input unit.

The learning unit may be particularly adapted to functionally adjust each of the system units described above by means of a suitable machine learning algorithm, particularly by feedback or by changing the associations between the contents of these system units. Thus, the system may continuously respond to the text input and adjust its text suggestions, taking into account at least the probability that the typed letters and/or the respective text module match the respective image content.

Furthermore, the learning unit may be further adapted to change and/or extend an association, i.e. a mapping, between at least one of the predefined text modules and the read out image content.

As explained above, the system may be capable of machine learning, such that single or repeated preferences for less or more likely text suggestions may cause the system to expand, revise, or remove associations between the respective predefined text modules and the respective read-out image content. Thus, the user may adjust or teach the system in terms of image content recognition and/or text prediction quality during text input, without complex user menu navigation.
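A deliberately simple update rule for such associations might look as follows; the weight store, neutral prior, and learning rate are illustrative assumptions only.

ASSOC: dict = {}   # (image_class, module_text) -> association weight

def feedback(image_class: str, module_text: str, accepted: bool,
             lr: float = 0.1) -> None:
    key = (image_class, module_text)
    w = ASSOC.get(key, 0.5)                   # neutral prior weight
    target = 1.0 if accepted else 0.0
    ASSOC[key] = w + lr * (target - w)        # move weight toward feedback

Repeatedly rejected suggestions thus drift toward a weight of zero and would be ranked lower, or eventually dropped, in future text predictions for that image class.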

A second aspect of the invention provides a method for computer-aided preparation of a diagnostic report based on medical image data provided by the medical imaging techniques mentioned above. The method may preferably be executed as (part of) a computer program, in particular by means of a processing or computing unit, such as a microprocessor.

The method comprises the following steps:

- reading out at least image content from the provided image data.

The method step may include electronically processing and/or analyzing the images by means of computer-assisted image processing, wherein handcrafted image analysis features or deep learning algorithms may be used.

- associating the read-out image content with a predefined class of images, which is computationally associated with at least one specific medical or report-related characteristic.

During or after reading out the image content, the system may analyze and classify it. In this context, the system may determine whether at least one characteristic derived from the image content matches one or more characteristics assigned to a class of images. Of course, several such characteristics may be assigned to one class, and likewise one characteristic may be assigned to several classes. The assignment may also be changed or adapted by the above-mentioned learning unit during run time, i.e. while the report is being entered.

- displaying a text prediction to the user preparing the report on a display when the user activates at least one text field shown on the display, wherein the text prediction contains at least a predefined text module tagged with a computation identifier associated with the predefined class of images.

Here, image classes are assigned not only to characteristics of the image content but also to labels and/or computation identifiers of text modules, whereby the text modules name or describe characteristics or findings derived from the image content, such as detected abnormalities or complete diagnoses.

In an embodiment, the displayed text module of the text prediction may be changed or automatically completed into written text in reaction to the user's input via the input unit; the written text is then shown in the text field and/or stored in non-volatile memory for preparing the diagnostic report.

For example, initial letters changing during typing may cause the suggested text modules to change. Thus, the text suggestions may continue to change as additional letters or spaces are entered, at least until the input is no longer consistent with any given text module.

According to a third aspect, a computer program element for controlling a report preparation system according to the first aspect is provided, which, when being executed by a processing unit, is adapted to perform at least the method steps of the second aspect.

According to a fourth aspect, a computer readable medium is provided, in which the computer program element of the third aspect is stored.

It shall be understood that the system, the method and the computer program have similar and/or identical preferred embodiments, in particular as defined in the dependent claims.

It shall be understood that preferred embodiments of the invention may also be any combination of the dependent claims or the above embodiments with the respective independent claims.

These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.

Drawings

In the following drawings:

Fig. 1 schematically and exemplarily shows an embodiment of a system for computer-aided diagnosis report preparation based on medical image data provided by medical imaging according to the first aspect.

Fig. 2 schematically and exemplarily shows an embodiment of the text generation unit according to the first aspect.

Fig. 3 shows schematically and exemplarily a display unit displaying a text prediction.

Fig. 4 shows schematically and exemplarily an embodiment of the learning unit according to the first aspect.

Fig. 5 shows a flow chart of a method according to the second aspect.

List of reference numerals

100 system

110 database

120 image

130 medical imaging device

140 image processing unit

150 image analysis unit

160 display device

161 graphic field

170 text field

171 text field

172 text field

173 text field

174 select field

175 text field portion

180 input unit

190 text generation unit

191 storage unit

200 text module

210 external database

220 learning unit

Method step S1

Method step S2

Method step S3

Method step S4

Detailed Description

Fig. 1 schematically and exemplarily shows an embodiment of a system 100 for computer-aided diagnosis report preparation based on medical image data provided by medical imaging. In particular, the system 100 is used by a user (not shown), such as a radiologist, to prepare or formulate a report of findings, i.e., the actual text-based report. In this embodiment, the system 100 mainly comprises software components, implemented on the hardware of a personal computer having a processing unit and possibly a server also having a processing unit (not shown). The personal computer may also be a portable computer, such as a tablet computer or the like. For example, the functions or software components of the system 100 are provided as computer program elements stored on a computer readable medium.

First, the system 100 itself or, for example, a healthcare or radiology management system (not shown) in which the system 100 is embedded (such as a Philips PACS) comprises an optional database 110, implemented, for example, on a server, in which previously created images 120 from medical imaging devices 130 (such as X-ray detectors) are stored with assignments to patients. The database 110 is connected to the system 100, or to the individual system units described below, for example via a data network. Although the system elements of the system 100 are described below as individual elements, they may share common hardware components; they are thus mainly distinguished from each other functionally.

Furthermore, the system 100 comprises an image processing unit 140 adapted to electronically read out at least the image content from the provided images 120. In this embodiment, the image processing unit 140 uses a graphics processor and includes image processing program code. As a result, image processing is possible at least to the extent that relevant image content is detected in the image material. In addition, metadata, for example, may also be read out. To process the respective images 120, the system 100 is configured to load the images 120 from the database 110 via a data network.

The system 100 further comprises an image analysis unit 150 adapted to analyze the read-out image content based on predefined or trained characteristics and to derive an analysis result. In this embodiment, the image analysis unit 150 uses the same hardware components (i.e., the graphics processor) as the image processing unit 140 above and is thus mainly distinguished from it in functional terms. A common technique for analyzing the images 120 is machine recognition of particular characteristics, features, and conditions in the image content. Since such analysis techniques are generally known from the prior art mentioned above (in particular from US 2016/0350919 A1), a more detailed description is omitted here.
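Purely by way of illustration, the analysis result derived by such a unit might be modeled as an image class plus finding probabilities, as in the following Python sketch; the dataclass fields, class name, and finding names are invented assumptions, not taken from this application or the cited prior art.

from dataclasses import dataclass, field

@dataclass
class AnalysisResult:
    image_class: str                              # detected image class
    finding_probs: dict[str, float] = field(default_factory=dict)

def analyze(image_content) -> AnalysisResult:
    # Placeholder for a real detector (handcrafted features, CNN, ...);
    # fixed values are returned purely for illustration.
    return AnalysisResult(
        image_class="chest_xray_cardiomegaly",
        finding_probs={"cardiomegaly": 0.87, "normal": 0.09, "effusion": 0.04},
    )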

Further, the system 100 includes a display device 160 having at least one text field 170 for displaying written text to the user. In this embodiment, the display device 160 comprises a computer monitor displaying a graphical user interface, wherein the text field 170 is an integral part of the graphical user interface, in addition to other displays and/or text fields (see fig. 3). In addition to the hardware computer monitor, the display device 160 includes a corresponding computer interface.

The system 100 further comprises an input unit 180 for interacting with a user to activate at least the text field 170. In this embodiment, the input unit 180 includes a computer keyboard and a computer mouse, in addition to the corresponding computer interface.

Furthermore, the system 100 comprises a text generation unit 190 adapted to provide a text prediction to the user on the display device 160 at least when the user activates the text field 170 via the input unit 180, wherein the text prediction is selected from a predefined set of text modules 200 associated with the analysis results of the image analysis unit 150.

To predefine the text modules 200, the system 100 is optionally connected to an (external) public database 210 containing a plurality of medical vocabularies with a limited set of standardized classifications to textually describe findings within an image, such as the image 120. Prominent examples of such a database 210 include the terms set forth by the NIH (see https://www.nlm.nih.gov/mesh/) and commercially available dictionaries such as SNOMED CT (see http://www.snomed.org/snomed-ct). It should be noted that the text modules 200 may also be learned or predefined offline, i.e., within the system 100, for example through a database of a healthcare or radiology management system, or individually by the user.

The system further comprises a learning unit 220 adapted to at least change and/or expand the predefined set of text modules 200 in response to a user input via the input unit 180. It should be noted that other ones of the system elements described above may also be taught by the learning unit 220.

In the following, the functionality of some of the above system units will now be described in more detail.

First, referring to figs. 2 and 3, the function of the text generation unit 190 is described. As indicated in fig. 2, the text generation unit 190 is generally configured to generate a text prediction in the form of a text output (i.e., word or sentence suggestions) based on a particular selection from the text modules 200, shown in the text field 170 of the display unit 160 in response to a user action on the input unit 180. In doing so, the text generation unit 190 also takes into account the current state of the text field 170. For this purpose, the text generation unit 190 has several direct or indirect software interfaces, e.g. via hardware drivers or software modules, to at least the display unit 160, the input unit 180, the image analysis unit 150, and a storage unit 191 in which the text modules 200 are stored. In essence, two different current states of the text field 170 can be distinguished: either the text field 170 is still completely empty but has been activated, e.g. by a mouse click via the input unit 180, or text has already been entered into the activated text field 170. The context-sensitive selection of one or more of the text modules 200 to be displayed in the text field 170 is based on a-priori associations between the analysis results of the image analysis unit 150 and the text modules 200, as described in more detail below. The selection of text modules 200 changes as a function of the image content identified during the image analysis. As a result, during operation of the text generation unit 190, the generated text prediction comprises an initial text prediction related to the image content when the text field 170 is active but empty, or the text prediction changes in response to the input of a text module part by the user via the input unit 180.

Furthermore, the text generation unit 190 is configured to automatically complete the displayed text prediction into written text content in the text field 170 when the user confirms, by a corresponding input via the input unit, that the displayed text prediction is to be completed. In the simplest case, the confirmation is a single command confirmation caused by pressing a corresponding key of the keyboard. For auto-completion, the following cases can again be distinguished:

1. The text field 170 is completely unfilled, either still or after deletion of already written text content. In this case, when the user has activated the text field 170, an initial text prediction is displayed, for example describing medical terms corresponding to findings obtained by analyzing the image content with the image analysis unit 150, as described above. The displayed text is then written into the text field 170 upon confirmation by the user.

2. The user enters at least an initial letter into the text field 170 that matches at least one of the predefined text modules 200. The matching text module 200 is automatically completed word by word or sentence by sentence as a reaction to the entered letters or by additional confirmation.

3. The user enters at least one identified initial word that matches at least one of the predefined text modules 200. The matching text module 200 is automatically completed paragraph by paragraph in response to entering words and/or additional confirmation.

Referring to fig. 3, the text prediction of the text generation unit 190 can be described in more detail. Fig. 3 shows an exemplary embodiment of the display unit 160 with an input mask for the report to be created. The display unit 160 comprises the text field 170, a graphic field 161, and further text fields 171, 172, 173, as explained above. It should be noted that the other text fields function in the same way as the text field 170. These text fields 170 to 173 are assigned to specific parts of the report to be created and are thus provided, for example, for information about the imaging method used, the view of the image 120, the image quality achieved, etc. Based on this assignment, the text generation unit 190 then makes a further pre-selection of the text modules 200 to display when the corresponding text field is activated and/or filled in.

The graphic field 161 shows one of the images 120, which has been analyzed a priori by the image analysis unit 150; based on this analysis, a selection from the text modules 200 is output for display whenever a text prediction is suggested. In this example, the text fields 171, 172, and 173 have already been filled with text, so the operation can now be illustrated using the still unfilled text field 170. The user has clicked on the text field 170 with the mouse to activate it and then typed the letter C. As a result, a list of (only) those text modules 200 with the initial letter C is now displayed in the pop-up selection field 174. As can be seen in fig. 3, in addition to the corresponding text module 200, the probability of a suitable association between the displayed text module 200 and the analyzed image 120 shown in the graphic field 161 is displayed in percent in the text field portion 175. Based on these probabilities, the displayed selectable text modules 200 are sorted in ascending order. The displayed text modules 200 are arranged individually line by line, whole sentences also being possible. A particular text module 200 in the list may be selected via the keyboard, for example by navigating to it with the arrow keys, and confirmed by pressing a key (such as the enter key). Alternatively, the text module may be selected with the mouse. Upon selection and confirmation of the text module 200, it is automatically completed into written and displayed text in the text field 170, as explained above.

Referring to fig. 4, the learning unit 220 is now described in more detail.

The learning unit 220 has several direct or indirect software interfaces, e.g. via hardware drivers or software modules, to at least the input unit 180, the image analysis unit 150, and the storage unit 191 in which the text modules 200 are stored. It is adapted to recognize text entered by the user via the input unit 180, at least when that text no longer matches any text module 200 displayed as a suggestion. The system then stores the new term or new sentence as a new text module among the stored text modules 200 and/or, in the case of an existing text module 200, stores a new assignment to the image content analyzed by the image analysis unit 150.
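Sketched in Python, and assuming a naive sentence split and a plain dictionary as module store (both invented simplifications), this learning behaviour might look as follows.

def learn_from_report(report_text: str, image_class: str,
                      modules: dict) -> None:
    # modules maps module text -> set of associated image classes.
    for sentence in report_text.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        sentence += "."
        if sentence not in modules:
            modules[sentence] = {image_class}    # store as a new text module
        else:
            modules[sentence].add(image_class)   # store a new assignment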

In the following, the method according to the second aspect will be described with reference to the flowchart shown in fig. 5. For example, the method may be used as a control method for the system 100 described above.

In step S1, at least the image content of one or more of the provided images 120 is read out. This is done via the image analysis unit 150, which optionally also determines the image quality, from which text modules 200 may then be selected, e.g. a text module including a disclaimer in the report indicating insufficient image quality. In step S2, the read-out image content is associated with, or assigned to, a predefined class of images computationally associated with at least one specific finding, which may be, for example, a specific disease. In step S3, when the user activates at least the text field 170 shown on the display, a text prediction is displayed on the display device 160 to the user preparing the report, wherein the text prediction contains at least one of the predefined text modules 200 tagged with a computation identifier associated with the predefined class of images. In the optional step S4, after selecting one or more of the text modules 200 and/or entering additional (free) text, the report is complete and may be stored, transmitted, printed, etc.
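The order of steps S1 to S4 can be summarized in the following stub pipeline; every function body is a placeholder standing in for the respective system unit described above, not a real implementation.

def read_image_content(image):                      # S1: image processing
    return {"pixels": image}

def classify(content) -> str:                       # S2: class association
    return "chest_xray_cardiomegaly"                # fixed for illustration

def predict_text(image_class: str) -> list:         # S3: text prediction
    return ["Cardiac silhouette is enlarged."]

def prepare_report(image) -> str:                   # S4: confirm and store
    suggestions = predict_text(classify(read_image_content(image)))
    return suggestions[0]                           # user accepts a suggestion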

It should be noted that embodiments of the present invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims, while other embodiments are described with reference to apparatus type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise noted, in addition to any combination of features belonging to one type of subject matter, any combination between features relating to different subject matters is also considered to be disclosed with this application.

All features can be combined to provide a synergistic effect beyond the simple addition of features.

While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments.

Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.

In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope.
