Pathological image processing method and device, electronic equipment and storage medium

Document No.: 1939985 · Publication date: 2021-12-07

Reading note: This technology, "Pathological image processing method and device, electronic equipment and storage medium", was designed and created by 陈培林, 邢艺释, 叶亦舟 and 李梦道 on 2021-07-27. Its main content is as follows: the invention provides a pathological image processing method and apparatus, an electronic device, and a storage medium. The processing method includes: acquiring K groups of label classification information of a pathological image, wherein the label classification information represents an annotated image recording at least one annotated region, together with the classification label added to each annotated region, the annotated regions being drawn on the pathological image; evaluating the consistency of the K groups of label classification information to obtain evaluation information representing that consistency; and, if the consistency represented by the evaluation information is higher than a preset standard, feeding back to a user with rechecking authority a label classification result determined from the K groups of label classification information.

1. A method for processing a pathology image, comprising:

acquiring K groups of label classification information of the pathological image, wherein the label classification information represents: an annotated image recording at least one annotated region, and a classification label added to each annotated region; the annotated region is drawn on the pathological image;

evaluating the consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

and if the consistency represented by the evaluation information is higher than a preset standard, feeding back a labeling classification result to a user with rechecking authority, wherein the labeling classification result is determined according to the K groups of labeling classification information.

2. The processing method according to claim 1, wherein evaluating consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency comprises:

forming a matrix for each labeling area in each group of labeling classification information; in the matrix, the distribution of matrix elements is matched with the distribution of pixel points in the labeled image, the matrix elements corresponding to the pixel points in the labeled area take a first numerical value, and the matrix elements corresponding to the pixel points outside the labeled area take a second numerical value;

calculating the evaluation information based on all matrices and the classification labels.

3. The processing method of claim 2, wherein the K sets of labeled classification information include a first set of labeled classification information and a second set of labeled classification information;

calculating the evaluation information based on all matrices and the classification labels, including:

determining each coincidence region between the matrices of the first group of label classification information and the matrices of the second group of label classification information, the area of each coincidence region, and the two classification labels corresponding to each coincidence region;

wherein a coincidence region is the set of matrix elements that occupy the same positions in the two matrices and both take the first value, and its area is the number of matrix elements it contains;

calculating the sum of the areas of the same-class coincidence regions to obtain a first area parameter, wherein a same-class coincidence region is a coincidence region whose two corresponding classification labels are the same;

calculating the sum of the areas of the different-class coincidence regions to obtain a second area parameter, wherein a different-class coincidence region is a coincidence region whose two corresponding classification labels are different;

and calculating the evaluation information according to the first area parameter and the second area parameter.

4. The processing method according to claim 3, wherein before calculating the evaluation information based on the first area parameter and the second area parameter, further comprising:

calculating the sum of the areas of the labeled areas in the first group of labeled classified information to obtain a third area parameter;

calculating the sum of the areas of the labeled areas in the second group of labeled classified information to obtain a fourth area parameter;

the sum of the areas of the annotated regions represents the total number of pixel points (or of the matrix elements corresponding to those pixel points) in all the annotated regions of one group of label classification information;

calculating the evaluation information according to the first area parameter and the second area parameter, specifically including:

and calculating the evaluation information according to the first area parameter, the second area parameter, the third area parameter and the fourth area parameter.

5. The processing method according to claim 4, wherein the evaluation information comprises first evaluation information and/or second evaluation information;

the first evaluation information is matched with a first ratio, wherein the first ratio is: S_s / (S_A + S_B - S_s - S_d);

the second evaluation information is matched with a second ratio, wherein the second ratio is: S_d / (S_A + S_B - S_s - S_d);

wherein S_s characterizes the first area parameter, S_d characterizes the second area parameter, S_A characterizes the third area parameter, and S_B characterizes the fourth area parameter.

6. The processing method according to claim 5,

if the evaluation information includes the first evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the first evaluation information is above a corresponding first specified threshold;

if the evaluation information includes the second evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the second evaluation information is below a corresponding second specified threshold.

7. A processing method according to claim 3, characterized in that in said matrix said first value is 1 and said second value is 0;

determining a coincidence region, an area of the coincidence region, and two classification labels corresponding to the coincidence region between each matrix of the first set of labeled classification information and each matrix of the second set of labeled classification information, including:

multiplying the two matrices element-wise to obtain a Hadamard product matrix, wherein the region formed by the matrix elements with the value of 1 in the Hadamard product matrix is the coincidence region;

calculating the number of matrix elements with the value of 1 in the Hadamard product matrix as the area of the overlapped area;

and taking the classification labels added to the labeling areas of the two matrixes as two classification labels corresponding to the overlapping area.

8. The processing method according to any one of claims 3 to 7, wherein the fed-back labeling classification result characterizes a position and an extent of at least a partially overlapped region in the pathological image and a label corresponding to the overlapped region.

9. The processing method according to any one of claims 1 to 7, wherein acquiring K sets of label classification information of the pathological image comprises:

after a first user with a first right marks out an annotation region for the pathological image, obtaining an annotation image for recording the annotation region;

and acquiring the classification labels added to the annotated region by N second users having a second authority, and forming a group of label classification information from the annotated image and the classification labels added by each second user, so as to obtain the K groups of label classification information.

10. The processing method of claim 9, further comprising:

in response to a modification operation of the second user, modifying the annotated region, so that the annotated region in the label classification information is the modified annotated region.

11. A pathological image processing apparatus, comprising:

the acquisition module is used for acquiring K groups of label classification information of the pathological image, wherein the label classification information represents: an annotated image recording at least one annotated region, and a classification label added to each annotated region; the annotated region is drawn on the pathological image; different groups of label classification information are determined by different users;

the evaluation module is used for evaluating the consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

and the feedback module is used for feeding back a labeling classification result to a user with rechecking authority if the consistency represented by the evaluation information is higher than a preset standard, wherein the labeling classification result is determined according to the K groups of labeling classification information.

12. An electronic device, comprising a processor and a memory,

the memory is used for storing codes;

the processor configured to execute the code in the memory to implement the method of any one of claims 1 to 10.

13. A storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 10.

Technical Field

The present invention relates to the field of pathological image processing, and in particular, to a method and an apparatus for processing a pathological image, an electronic device, and a storage medium.

Background

With the development of artificial intelligence, techniques that tackle complex medical image recognition have great clinical application value. Machine analysis of pathological images offers high repeatability and quantitative or semi-quantitative results; it can reduce the repetitive work of pathologists and, by assisting them, improve the accuracy and reliability of the results. If tissues and cells in pathological images are to be classified by deep learning, a large number of labeled images is required as a training set for training the deep learning model.

In the prior art, a labeling-and-classification model can be trained on a certain amount of labeling results, and the trained model is then used to automatically label and classify newly input images. However, the accuracy of such labeling is difficult to ensure, and professional personnel (such as doctors) are still required to carry out a complete and large amount of labeling and classification work, which results in a high threshold for participating in that work.

Disclosure of Invention

The invention provides a pathological image processing method and device, electronic equipment and a storage medium, which aim to overcome the defects in the prior art.

According to a first aspect of the present invention, there is provided a method of processing a pathology image, comprising:

acquiring K groups of label classification information of the pathological image, wherein the label classification information represents: an annotated image recording at least one annotated region, and a classification label added to each annotated region; the annotated region is drawn on the pathological image;

evaluating the consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

and if the consistency represented by the evaluation information is higher than a preset standard, feeding back a labeling classification result to an appointed user, wherein the labeling classification result is determined according to the K groups of labeling classification information.

Optionally, evaluating consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency, including:

forming a matrix for each labeling area in each group of labeling classification information; in the matrix, the distribution of matrix elements is matched with the distribution of pixel points in the labeled image, the matrix elements corresponding to the pixel points in the labeled area take a first numerical value, and the matrix elements corresponding to the pixel points outside the labeled area take a second numerical value;

calculating the evaluation information based on all matrices and the classification labels.
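As a minimal sketch of the matrix-forming step (the function name and the use of NumPy are assumptions for illustration, not part of the original disclosure), one binary matrix per annotated region could be built as follows, with elements inside the region taking the first value (1) and elements outside taking the second value (0):

```python
import numpy as np

def region_to_mask(image_shape, region_pixels):
    """Build a matrix whose element layout matches the annotated image's
    pixel grid: elements corresponding to pixel points inside the
    annotated region take the first value (1), elements corresponding
    to pixel points outside the region take the second value (0)."""
    mask = np.zeros(image_shape, dtype=np.int32)
    for row, col in region_pixels:
        mask[row, col] = 1
    return mask
```

The matrix then stands in for its annotated region in all later area computations.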

Optionally, the K groups of labeled classification information include a first group of labeled classification information and a second group of labeled classification information;

calculating the evaluation information based on all matrices and the classification labels, including:

determining each coincidence region between the matrices of the first group of label classification information and the matrices of the second group of label classification information, the area of each coincidence region, and the two classification labels corresponding to each coincidence region;

wherein a coincidence region is the set of matrix elements that occupy the same positions in the two matrices and both take the first value, and its area is the number of matrix elements it contains;

calculating the sum of the areas of the same-class coincidence regions to obtain a first area parameter, wherein a same-class coincidence region is a coincidence region whose two corresponding classification labels are the same;

calculating the sum of the areas of the different-class coincidence regions to obtain a second area parameter, wherein a different-class coincidence region is a coincidence region whose two corresponding classification labels are different;

and calculating the evaluation information according to the first area parameter and the second area parameter.

Optionally, before calculating the evaluation information according to the first area parameter and the second area parameter, the method further includes:

calculating the sum of the areas of the labeled areas in the first group of labeled classified information to obtain a third area parameter;

calculating the sum of the areas of the labeled areas in the second group of labeled classified information to obtain a fourth area parameter;

the sum of the areas of the annotated regions represents the total number of pixel points (or of the matrix elements corresponding to those pixel points) in all the annotated regions of one group of label classification information;

calculating the evaluation information according to the first area parameter and the second area parameter, specifically including:

and calculating the evaluation information according to the first area parameter, the second area parameter, the third area parameter and the fourth area parameter.
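A hedged sketch of how the four area parameters could be computed from two groups of region masks (function and variable names are illustrative assumptions; the masks are the 0/1 matrices described above):

```python
import numpy as np

def area_parameters(masks_a, labels_a, masks_b, labels_b):
    """Compute the four area parameters from two groups of annotations.
    masks_*: list of binary matrices, one per annotated region;
    labels_*: the classification label added to the matching region."""
    Ss = 0  # first area parameter: total same-class coincidence area
    Sd = 0  # second area parameter: total different-class coincidence area
    for mask_a, label_a in zip(masks_a, labels_a):
        for mask_b, label_b in zip(masks_b, labels_b):
            # element-wise product counts positions where both masks are 1
            overlap = int((mask_a * mask_b).sum())
            if label_a == label_b:
                Ss += overlap
            else:
                Sd += overlap
    SA = int(sum(m.sum() for m in masks_a))  # third area parameter
    SB = int(sum(m.sum() for m in masks_b))  # fourth area parameter
    return Ss, Sd, SA, SB
```

Each pairwise overlap contributes to either the same-class sum or the different-class sum, matching the first/second area parameters of the claims.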

Optionally, the evaluation information includes first evaluation information and/or second evaluation information;

the first evaluation information is matched with a first ratio, wherein the first ratio is: S_s / (S_A + S_B - S_s - S_d);

the second evaluation information is matched with a second ratio, wherein the second ratio is: S_d / (S_A + S_B - S_s - S_d);

wherein S_s characterizes the first area parameter, S_d characterizes the second area parameter, S_A characterizes the third area parameter, and S_B characterizes the fourth area parameter.
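The two ratios can be read as Jaccard-style agreement and disagreement scores: the shared denominator S_A + S_B - S_s - S_d plays the role of the union of the two groups' labeled areas. A minimal sketch (the guard against an empty denominator is an added assumption):

```python
def evaluation_ratios(Ss, Sd, SA, SB):
    """First ratio: Ss / (SA + SB - Ss - Sd) -> higher means more consistent.
    Second ratio: Sd / (SA + SB - Ss - Sd) -> higher means more disagreement."""
    denominator = SA + SB - Ss - Sd
    if denominator == 0:
        return 0.0, 0.0  # neither group labeled any area
    return Ss / denominator, Sd / denominator
```

With the thresholds of the following paragraph, the first ratio would be required to lie above a first specified threshold and the second below a second specified threshold before feedback.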

Optionally, if the evaluation information includes the first evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the first evaluation information is above a corresponding first specified threshold;

if the evaluation information includes the second evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the second evaluation information is below a corresponding second specified threshold.

Optionally, in the matrix, the first value is 1, and the second value is 0;

determining a coincidence region, an area of the coincidence region, and two classification labels corresponding to the coincidence region between each matrix of the first set of labeled classification information and each matrix of the second set of labeled classification information, including:

multiplying the two matrices element-wise to obtain a Hadamard product matrix, wherein the region formed by the matrix elements with the value of 1 in the Hadamard product matrix is the coincidence region;

calculating the number of matrix elements with the value of 1 in the Hadamard product matrix as the area of the overlapped area;

and taking the classification labels added to the labeling areas of the two matrixes as two classification labels corresponding to the overlapping area.
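A small worked example of the Hadamard-product step (the matrix values are illustrative and not taken from the actual figures; they loosely mimic the N1/M1 pairing of FIG. 7):

```python
import numpy as np

# Two region masks: first value = 1 inside the region, second value = 0 outside.
N1 = np.array([[1, 1, 0],
               [1, 1, 0],
               [0, 0, 0]])
M1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]])

H = N1 * M1          # element-wise (Hadamard) product
area = int(H.sum())  # number of 1-valued elements = area of the coincidence region
```

Because both masks are 0/1, the product is 1 exactly where both regions cover the same pixel, so summing the product matrix directly yields the coincidence area.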

Optionally, the fed-back labeling classification result characterizes a position and a range of at least a partial overlapping region in the pathological image, and a label corresponding to the overlapping region.

Optionally, the acquiring K groups of labeled classification information of the pathological image includes:

after a first user with a first right marks out an annotation region for the pathological image, obtaining an annotation image for recording the annotation region;

and acquiring the classification labels added to the annotated region by N second users having a second authority, and forming a group of label classification information from the annotated image and the classification labels added by each second user, so as to obtain the K groups of label classification information.

Optionally, the processing method further includes:

in response to a modification operation of the second user, modifying the annotated region, so that the annotated region in the label classification information is the modified annotated region.

Optionally, the designated user is a rechecking user with a rechecking authority; and the labeling classification result is a labeling classification result to be rechecked.

According to a second aspect of the present invention, there is provided a processing apparatus of a pathological image, including:

the acquisition module is used for acquiring K groups of label classification information of the pathological image, wherein the label classification information represents: an annotated image recording at least one annotated region, and a classification label added to each annotated region; the annotated region is drawn on the pathological image; different groups of label classification information are determined by different users;

the evaluation module is used for evaluating the consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

and the feedback module is used for feeding back a labeling classification result to the specified user if the consistency represented by the evaluation information is higher than a preset standard, wherein the labeling classification result is determined according to the K groups of labeling classification information.

According to a third aspect of the invention, there is provided an electronic device comprising a processor and a memory,

the memory is used for storing codes;

the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.

According to a fourth aspect of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of the first aspect and its alternatives.

In the pathological image processing method and apparatus, electronic device, and storage medium of the invention, label classification information can be determined by users, and the corresponding label classification result is then sent to a user with rechecking authority, thereby realizing a division of labor among the labeling process, the classification process, and the rechecking process. In a further alternative, the division of labor can be refined between marking out the annotated regions and adding (and modifying) the classification labels, so that users with different authorities can be matched to the corresponding tasks. On this basis, reasonable division and circulation of work avoids management confusion in the labeling work, and rechecking of the label classification results helps ensure the quality of label classification. Meanwhile, under this division-of-labor mechanism, each user can concentrate on his or her own part rather than on the whole process, so that when the scheme is applied, users with different professional abilities can be allocated reasonably; the invention provides a technical basis for such allocation, and the division-of-labor mechanism is also beneficial to improving the efficiency of the whole workflow and the quality of label classification.

In addition, due to the fact that division of labor is achieved, rechecking can be conducted on the basis of results of labeling classification of a plurality of users, and compared with a scheme of rechecking only on the basis of one group of labeling classification information of one user, accuracy of labeling classification results can be effectively improved.

On this basis, the invention creatively introduces an evaluation mechanism: the consistency of the K groups of label classification information determined by different users is evaluated, and label classification information with poor consistency is prevented from being used directly as the basis for rechecking. This saves the workload of the user with rechecking authority (such as a doctor), avoids unnecessary and tedious work, and helps ensure the accuracy of the label classification result.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.

FIG. 1 is a flowchart illustrating a method for processing a pathological image according to an embodiment of the present invention;

FIG. 2 is a flowchart illustrating step S11 according to an embodiment of the present invention;

FIG. 3 is a flowchart illustrating step S12 according to an embodiment of the present invention;

FIG. 4 is a first flowchart illustrating the step S122 according to an embodiment of the present invention;

FIG. 5 is a second flowchart illustrating the step S122 according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of annotated images and matrices in one embodiment of the invention;

FIG. 7 is a diagram illustrating multiplication of matrix N1 by matrix M1 in accordance with an embodiment of the present invention;

FIG. 8 is a diagram illustrating multiplication of matrix N1 by matrix M2 in accordance with an embodiment of the present invention;

FIG. 9 is a diagram illustrating multiplication of matrix N2 by matrix M1 in accordance with an embodiment of the present invention;

FIG. 10 is a diagram illustrating multiplication of matrix N2 by matrix M2 in accordance with an embodiment of the present invention;

FIG. 11 is a schematic diagram of a matrix with integrated overlap regions according to an embodiment of the invention;

FIG. 12 is a block diagram of a program of a pathological image processing device according to an embodiment of the present invention;

fig. 13 is a schematic structural diagram of an electronic device in an embodiment of the invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.

Referring to fig. 1, a method for processing a pathological image according to an embodiment of the present invention includes:

s11: acquiring K groups of label classification information of the pathological image;

wherein K is greater than or equal to 2;

s12: evaluating the consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

s13: if the consistency represented by the evaluation information is higher than a preset standard, feeding back a labeling classification result to an appointed user;

and determining the labeling classification result according to the K groups of labeling classification information.
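The S11–S13 control flow can be summarized in a short sketch (all names are illustrative assumptions; `evaluate` and `feed_back` stand in for the consistency evaluation and the feedback to the rechecking user):

```python
def process_annotations(annotation_sets, evaluate, preset_standard, feed_back):
    """S11-S13: given K groups of label classification information,
    evaluate their consistency and feed the result back to the
    rechecking user only when consistency exceeds the preset standard."""
    assert len(annotation_sets) >= 2           # K >= 2
    consistency = evaluate(annotation_sets)    # S12
    if consistency > preset_standard:          # S13
        feed_back(annotation_sets)             # result derived from the K groups
        return True
    return False
```

The gating step is the point of the scheme: low-consistency annotation sets never reach the rechecking user.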

Wherein the label classification information characterizes: an annotated image recording at least one annotated region, and a classification label added to each annotated region. The annotated region may be marked out on the pathological image; the marking may be performed in one pass, or first marked and then adjusted. The classification label may be added when the annotated region is first marked out, during the adjustment, or after the marking is finished.

When the number of labeled regions is 2, taking fig. 6 as an example, the image I1 and the image I2 can be regarded as the labeled images mentioned above, two labeled regions in the image I1 are labeled regions labeled with the classification label C1 and the classification label C2, respectively, and two labeled regions in the image I2 are labeled regions labeled with the classification label C1 and the classification label C3, respectively. In other examples, the number of the labeling areas may be greater than or equal to 3.

In one embodiment, different groups of label classification information may be determined by different users (for example, different users delineate the annotated regions and/or add labels). Here, "different users" can be understood as follows: if a single group of label classification information is determined by only one user (for example, one user both delineates a region and adds a label), then different groups can be determined by different individual users; if a single group needs to be determined by more than one user (for example, two users, or at least three users), then different groups can be determined by entirely different, or partially different, sets of users. In other embodiments, different groups of label classification information may be determined by the same user.

Wherein the designated user may be a rechecking user with rechecking authority; correspondingly, the label classification result is a label classification result to be rechecked. In other examples, the designated user may also be a user with other authorities or other purposes.

In the above scheme, users can determine the label classification information, and the corresponding label classification result is then sent to the user with rechecking authority, realizing a division of labor among the labeling, classification, and rechecking processes. In addition, since a division of labor is realized, the rechecking can be based on the labeling and classification results of a plurality of users; compared with a scheme of rechecking based on one group of label classification information from a single user, this effectively improves the accuracy of the label classification result.

On this basis, the embodiment of the invention creatively introduces an evaluation mechanism: the consistency of the K groups of label classification information determined by different users is evaluated, and label classification information with poor consistency is prevented from being used directly as the basis for rechecking. This saves the workload of the user with rechecking authority (such as a doctor), avoids unnecessary and tedious work, and helps ensure the accuracy of the label classification result.

Referring to step S11, in one embodiment, referring to fig. 2, step S11 may include:

s111: after a first user with a first right marks out an annotation region for the pathological image, obtaining an annotation image for recording the annotation region;

s112: and acquiring classification labels added by the second users with the N second authorities for the labeled area, and forming a group of labeled classification information based on the labeled image and the classification label added by each second user to obtain the K groups of labeled classification information.

The users with the first authority can be understood as users with authority to perform region division on the pathological image so as to draw out the labeled region, and specifically, the users with the first authority can be primary labeling personnel;

the user with the second authority can be understood as a user who can add labels to the annotated regions in the annotated image; in a further scheme, the authority of the user with the second authority may further include adjusting the size and/or shape of the annotated region; specifically, the user with the second authority may be senior labeling personnel;

the user with the first authority, the user with the second authority and the designated user can complete corresponding interaction on the same terminal, and can also complete corresponding interaction on different terminals.

The authority of each user may be embodied by information bound to the user, and the information may be represented as authority information, or may not be limited thereto, and may also be, for example, identity information, job information, or the like.

The pathological image can be any image that describes a pathology, regardless of the equipment and mechanism that produced it; that is, the embodiment of the invention is applicable to processing various kinds of pathological images.

The annotation image can be any image in which an annotation region is formed, and can be an image or a layer presented independently of the pathological image or presented integrally with the pathological image.

In another embodiment, step S11 may further include:

S113: in response to a modification operation of the second user, modify the annotation region, so that the annotation region in the label classification information is the modified annotation region.

In a specific example, the terminal can interact with a user through a labeling tool. For example, a pathological image may be displayed to the user, who annotates it through the labeling tool to form an annotation image; the labeling tool may be an online labeling tool, and networked terminals may be used to implement steps S111 to S113 above. Specifically:

in step S111, a user (e.g., a user with the first authority) may annotate the pathological image using the rectangle, circle, ellipse, or polygon shapes configured in the labeling tool to draw an annotation region. For example, a junior annotator may manually annotate the pathological tissue area on the pathological image, that is, delineate the target tissue using the labeling tool. The annotation image with the annotation region drawn in step S111 may be, for example, the image I0 shown in fig. 6;

in steps S112 and S113, a user (e.g., a user with the second authority) may add a label to the annotation region through the region labeling function of the labeling tool, thereby classifying the annotation region. For example, a senior annotator manually adds labels to the generated annotation regions for classification; the annotation images after adding and adjusting in steps S112 and S113 may be, for example, the image I1 and the image I2 shown in fig. 6.

Therefore, in a further alternative scheme, the division of labor between drawing annotation regions and classifying them (including modification) can be refined further, so that users with different authorities cooperate on the processing. On this basis, reasonable division and circulation of work avoids management confusion in the annotation work, and review of the labeling classification result safeguards the annotation quality. Meanwhile, under the division-of-labor mechanism each user can concentrate on their own part without participating in every process, and in practice people with different professional abilities and experience can be assigned to different parts, improving the rationality of human-resource allocation.

In one embodiment, referring to fig. 3, step S12 may include:

S121: form a matrix for each annotation region in each group of label classification information;

In the matrix, the distribution of matrix elements matches the distribution of pixel points in the annotation image: matrix elements corresponding to pixel points inside the annotation region take a first value (for example, 1), and matrix elements corresponding to pixel points outside the annotation region take a second value (for example, 0). In other examples, the first value may be 0 and the second value 1, and the two values may also take other values (for example, other numbers, or values represented by letters or symbols).

S122: calculating the evaluation information based on all matrices and the classification labels.

Forming the matrices quantifies the division of the annotation image into annotation regions and the result of adding labels, providing a basis for the quantitative calculation of the evaluation information. Moreover, because the distribution of matrix elements matches the distribution of pixel points, the quantified result accurately matches the annotation image.
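The matrix formation of step S121 can be sketched as follows; the function name and the toy region coordinates are illustrative assumptions, not part of the scheme.

```python
# Minimal sketch of step S121, assuming the annotation region is available
# as a set of (row, col) pixel coordinates: build a matrix matching the
# image's pixel grid, with the first value (1) inside the region and the
# second value (0) outside.
def region_to_matrix(region_pixels, height, width):
    matrix = [[0] * width for _ in range(height)]
    for row, col in region_pixels:
        matrix[row][col] = 1
    return matrix

m = region_to_matrix({(0, 0), (0, 1), (1, 1)}, height=3, width=3)
print(m)  # [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
```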

Each matrix corresponds to one annotation region, so the distribution of each annotation region within the whole annotation image is embodied as its own matrix, which helps reflect the consistency of the label classification information more finely and accurately.

In a further example, a csv file may be formed for each group of label classification information, and the corresponding matrices formed from that csv file.

In a further scheme, taking the labeled annotation images shown in fig. 6 (for example, the image I1 and the image I2): a matrix N_i (i = 1, 2, 3, …) may be generated separately for each class of annotation region in annotation file A (i.e., the csv file) of the image I1, with the matrix dimensions matching the image pixels. Each pixel point inside the annotation region is assigned 1, and the region outside the annotation region is filled with 0; the resulting matrices may be, for example, matrix N1 and matrix N2;

likewise, a matrix M_i (i = 1, 2, 3, …) may be generated for each class of annotation region in annotation file B (i.e., the csv file) of the image I2, with the matrix dimensions matching the image pixels. Each pixel point inside the annotation region is assigned 1, and the region outside the annotation region is filled with 0; the resulting matrices may be, for example, matrix M1 and matrix M2.

If the K groups of label classification information include a first group of label classification information and a second group of label classification information (the corresponding annotation images may be, for example, the image I1 and the image I2), then in one embodiment step S122 may include:

S1221: determine each overlap region between the matrices of the first group of label classification information and the matrices of the second group, the area of each overlap region, and the two classification labels corresponding to each overlap region;

S1222: calculate the sum of the areas of the same-kind overlap regions to obtain a first area parameter;

S1223: calculate the sum of the areas of the heterogeneous overlap regions to obtain a second area parameter;

S1224: calculate the evaluation information according to the first area parameter and the second area parameter.

Here, an overlap region is the set of matrix elements that occupy the same positions in two matrices and both take the first value (for example, 1), and its area is the number of matrix elements in the overlap region;

in the examples shown in figs. 6 and 7, the 12 matrix elements with value 1 in matrix N1 coincide in position with 12 matrix elements with value 1 in matrix M1, forming an overlap region whose area is 12. Meanwhile, the area of the overlap region of matrix N1 and matrix M2 is 0, that of matrix N2 and matrix M1 is 0, and that of matrix N2 and matrix M2 is 9. Thus, in step S1221, the overlap region and its area are obtained for each pair: N1 with M1, N1 with M2, N2 with M1, and N2 with M2.

A same-kind overlap region is an overlap region whose two corresponding classification labels are the same; for example, the overlap region between the matrix of the region labeled C1 in the image I1 and the matrix of the region labeled C1 in the image I2 is a same-kind overlap region.

A heterogeneous overlap region is an overlap region whose two corresponding classification labels differ; for example, the overlap region between the matrix of the region labeled C2 in the image I1 and the matrix of the region labeled C3 in the image I2 is a heterogeneous overlap region.
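Under these definitions, steps S1222 and S1223 amount to summing overlap areas according to whether the two labels agree. A minimal sketch, with illustrative toy matrices and labels:

```python
# Sketch of steps S1222-S1223: sum the overlap areas by whether the two
# classification labels agree (same-kind -> S_s) or differ (heterogeneous
# -> S_d). Overlap area is computed by element-wise multiplication of
# 0/1 matrices (cells that are 1 in both).
def overlap_area(a, b):
    return sum(x * y for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

def area_parameters(group_a, group_b):
    s_s = s_d = 0
    for label_a, mat_a in group_a:
        for label_b, mat_b in group_b:
            area = overlap_area(mat_a, mat_b)
            if label_a == label_b:
                s_s += area
            else:
                s_d += area
    return s_s, s_d

# Toy 2x2 matrices: identical C1 regions overlap on 2 cells (same-kind),
# while the C2 and C3 regions overlap on 1 cell (heterogeneous).
A = [("C1", [[1, 1], [0, 0]]), ("C2", [[0, 0], [1, 0]])]
B = [("C1", [[1, 1], [0, 0]]), ("C3", [[0, 0], [1, 1]])]
print(area_parameters(A, B))  # (2, 1)
```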

In the examples shown in fig. 6 and 7:

the area of the overlap region of matrix N1 and matrix M1 is 12, and the two labels are the same class (i.e., the same-kind overlap area of matrix N1 and matrix M1 is 12); the labels represented by matrix N1 and matrix M1 are both C1;

the area of the overlap region of matrix N1 and matrix M2 is 0; the area of the overlap region of matrix N2 and matrix M1 is 0; the area of the overlap region of matrix N2 and matrix M2 is 9, and the two labels differ (i.e., the heterogeneous overlap area of matrix N2 and matrix M2 is 9); the labels represented by matrix N2 and matrix M2 are C2 and C3, respectively.

In the illustrated example, the matrix of the region labeled C1 in the image I1 and the matrix of the region labeled C3 in the image I2 do not overlap, so they form neither a same-kind nor a heterogeneous overlap region; in another example, if they did overlap, a heterogeneous overlap region would be formed.

In one embodiment, when the first value is 1 and the second value is 0, multiplying 1 by 1 gives 1 while multiplying 1 by 0 or 0 by 0 gives 0, so the overlap region can be represented by element-wise multiplication of the matrices.

Therefore, referring to fig. 5, step S1221 may include:

S12211: multiply the two matrices element-wise to obtain a Hadamard product matrix; the matrix elements with value 1 in the Hadamard product matrix constitute the overlap region;

S12212: calculate the number of matrix elements with value 1 in the Hadamard product matrix as the area of the overlap region;

S12213: take the classification labels added to the annotation regions of the two matrices as the two classification labels corresponding to the overlap region.

In a specific scheme, each matrix N_i (i = 1, 2, 3, …) is multiplied element-wise with each matrix M_i (i = 1, 2, 3, …) to compute the Hadamard product; the number of non-zero elements in each Hadamard product matrix is recorded, giving the area of the overlapping part for each pair of class labels in N_i and M_i (the non-zero part is the overlapping part). The classification corresponding to each label pair's overlap region is also recorded.

Figs. 7, 8, 9 and 10 respectively illustrate the Hadamard products of matrices N1 and M1, N1 and M2, N2 and M1, and N2 and M2.
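A minimal sketch of the Hadamard-product computation of steps S12211 and S12212; the 3x3 matrices here are illustrative, not the matrices of fig. 6:

```python
# Sketch of steps S12211-S12212: the element-wise (Hadamard) product of two
# 0/1 matrices marks the overlap region, and counting its 1-entries gives
# the overlap area (since 1*1 = 1 and any product involving 0 is 0).
def hadamard(a, b):
    return [[x * y for x, y in zip(row_a, row_b)] for row_a, row_b in zip(a, b)]

N1 = [[1, 1, 0],
      [1, 1, 0],
      [0, 0, 0]]
M1 = [[0, 1, 0],
      [1, 1, 1],
      [0, 0, 0]]

H = hadamard(N1, M1)
area = sum(sum(row) for row in H)
print(H, area)  # [[0, 1, 0], [1, 1, 0], [0, 0, 0]] 3
```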

The first area parameter referred to above may be written S_s, and the second area parameter S_d. In some embodiments, the first area parameter and/or the second area parameter may itself serve as the evaluation information: S_s may be compared with a corresponding standard, and/or S_d with a corresponding standard, to judge whether the consistency is higher than the preset standard. Alternatively, S_s/(S_s+S_d) and/or S_d/(S_s+S_d) may serve as the evaluation information, in which case S_s/(S_s+S_d) is compared with a corresponding standard, and/or S_d/(S_s+S_d) with a corresponding standard, to judge whether the consistency is higher than the preset standard; for example, the consistency may be judged higher than the preset standard when S_s/(S_s+S_d) is greater than a certain value and/or S_d/(S_s+S_d) is less than a certain value.

In another embodiment, a third area parameter and a fourth area parameter may be further introduced to calculate the evaluation information, for example, please refer to fig. 5, and step S122 may further include:

S1225: calculate the sum of the areas of the annotation regions in the first group of label classification information to obtain a third area parameter;

S1226: calculate the sum of the areas of the annotation regions in the second group of label classification information to obtain a fourth area parameter;

Correspondingly, step S1224 may include:

S12241: calculate the evaluation information according to the first area parameter, the second area parameter, the third area parameter and the fourth area parameter.

The sum of the areas of the annotation regions represents the total number of pixel points (or of the matrix elements corresponding to them) in all annotation regions of one group of label classification information. Taking fig. 6 as an example:

the area of the region labeled C1 in the image I1 is the number of matrix elements in the annotation region of matrix N1, i.e., 12;

the area of the region labeled C2 in the image I1 is the number of matrix elements in the annotation region of matrix N2, i.e., 15;

the area of the region labeled C1 in the image I2 is the number of matrix elements in the annotation region of matrix M1, i.e., 12;

the area of the region labeled C3 in the image I2 is the number of matrix elements in the annotation region of matrix M2, i.e., 15.

Correspondingly, the third area parameter (for the image I1, i.e., the first group of label classification information) is 27, and the fourth area parameter (for the image I2, i.e., the second group) is 27.

In a specific example, the evaluation information includes first evaluation information and/or second evaluation information;

the first evaluation information is matched with a first ratio;

wherein the first ratio is: S_s / (S_A + S_B - S_s - S_d);

The second evaluation information is matched with a second ratio;

wherein the second ratio is: S_d / (S_A + S_B - S_s - S_d);

S_d characterizes the second area parameter;

S_s characterizes the first area parameter;

S_A characterizes the third area parameter;

S_B characterizes the fourth area parameter.

Correspondingly:

if the evaluation information includes the first evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the first evaluation information is above a corresponding first specified threshold;

if the evaluation information includes the second evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the second evaluation information is below a corresponding second specified threshold.

In one example, the first specified threshold may be, for example, 95%, and the second specified threshold 5%. In some embodiments, the evaluation rule may be configured so that if either of the two evaluation items fails, the consistency score fails; only when both items pass is the consistency represented by the evaluation information higher than the preset standard. In addition, the user can set the passing standard for the consistency score (for example, the first specified threshold and the second specified threshold can be set by the user).
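Putting the pieces together, the threshold check might look as follows; the function name and the default thresholds (95% and 5%, as in the example above) are illustrative:

```python
# Sketch of the consistency check: the first ratio R_s = S_s/(S_A+S_B-S_s-S_d)
# must reach the first specified threshold, and the second ratio
# R_d = S_d/(S_A+S_B-S_s-S_d) must stay below the second; if either item
# fails, the consistency score fails.
def consistency_ok(s_s, s_d, s_a, s_b, first_threshold=0.95, second_threshold=0.05):
    union = s_a + s_b - s_s - s_d   # total annotated area, overlaps counted once
    r_s = s_s / union
    r_d = s_d / union
    return r_s >= first_threshold and r_d <= second_threshold, r_s, r_d

# Values from the fig. 6/7 example: S_s = 12, S_d = 9, S_A = S_B = 27.
ok, r_s, r_d = consistency_ok(s_s=12, s_d=9, s_a=27, s_b=27)
print(ok, round(r_s, 3), round(r_d, 3))  # False 0.364 0.273
```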

Taking the matrices shown in figs. 6 and 7 as an example (S_s = 12, S_d = 9, S_A = 27, S_B = 27):

first ratio R_s = 12 / (27 + 27 - 12 - 9) = 12/33 ≈ 36.4%;

second ratio R_d = 9 / (27 + 27 - 12 - 9) = 9/33 ≈ 27.3%.

At this time, because R_s does not reach the first specified threshold and R_d exceeds the second specified threshold, the evaluation information does not reach the standard; the system may update the state of the pathological image to "fail", and the evaluation is carried out again after the corresponding label classification information is changed. In this embodiment, the first ratio and the second ratio may be used directly as the first evaluation information and the second evaluation information; in other examples, the first and second evaluation information may also be formed from the first and second ratios combined with other calculations, for example by correcting them, with further calculations (such as adding, subtracting, multiplying, or dividing by some values) performed on that basis.

Taking figs. 6 and 7 as an example, if the user had initially reset the preset consistency standard and the consistency score reached that reset standard, a matrix of the overlap region of the first and second groups of label classification information (as shown in fig. 11) could be generated, the annotation contour coordinates retained, and a csv file generated and submitted for review.

As can be seen, the fed-back labeling classification result represents the position and range of at least part of the overlapping region in the pathological image and the label corresponding to the overlapping region. For example:

the overlapping regions may be integrated together (the integrated matrix may be, for example, as shown in fig. 11), and a corresponding label may be combined to generate a csv file of the corresponding annotation image, and when the review is required, the csv file may be retrieved and fed back to the designated user.

If the review fails, in some schemes the flow may return to step S11 to re-determine and adjust the label classification information, then execute the subsequent steps and resubmit for review; in other schemes, the designated user may also re-determine the label classification information themselves to generate the labeling classification result.

Therefore, in the specific scheme of the embodiment of the invention, highly repetitive, high-volume tasks can be assigned to the region annotation submodule, so that the doctor review submodule performs less time-consuming, highly specialized work, making the allocation of human resources more reasonable. The region annotation submodule, the region classification submodule, and the doctor review submodule each have their own division of labor, effectively avoiding problems such as management confusion and uneven annotation quality caused by a heavy annotation workload. The layer-by-layer gating mode and the quality-control scoring algorithm guarantee the annotation quality.

In addition, if there are at least three groups of label classification information, then in one example the evaluation information (e.g., the first evaluation information, the second evaluation information) between each pair of groups can be calculated as above, and the pairwise results integrated statistically into final evaluation information for evaluating consistency, on which the subsequent step is then based. For example, the first evaluation information (or second evaluation information) between each pair of groups can be calculated to obtain several values to be counted, and their average taken as the final first evaluation information (or second evaluation information).
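A sketch of this statistical integration for K ≥ 3 groups, assuming a per-pair ratio function is already available (the group names and scores here are made-up placeholders):

```python
from itertools import combinations

# Sketch of the K >= 3 extension: evaluate every pair of groups with a
# pairwise ratio function, then average the pairwise values into the
# final evaluation information.
def final_first_evaluation(pairwise_ratio, groups):
    ratios = [pairwise_ratio(a, b) for a, b in combinations(groups, 2)]
    return sum(ratios) / len(ratios)

# Hypothetical per-pair first-ratio values (would come from the matrix
# computation described above).
scores = {("g1", "g2"): 0.96, ("g1", "g3"): 0.90, ("g2", "g3"): 0.93}
ratio = lambda a, b: scores[(a, b)]
print(round(final_first_evaluation(ratio, ["g1", "g2", "g3"]), 2))  # 0.93
```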

Referring to fig. 12, an embodiment of the present invention further provides a pathological image processing apparatus 2, including:

an obtaining module 201, configured to obtain K groups of label classification information of a pathological image, where the label classification information represents: recording an annotated image of at least one annotated region and a classification label added to each annotated region; the labeling area is drawn for the pathology image; different groups of labeled classification information are determined by different users;

the evaluation module 202 is configured to evaluate consistency of the K groups of labeled classification information to obtain evaluation information representing the consistency;

a feedback module 203, configured to feed back a labeling classification result to a user with a rechecking authority if the consistency represented by the evaluation information is higher than a preset standard, where the labeling classification result is determined according to the K groups of labeling classification information.

Optionally, the evaluation module 202 is specifically configured to:

forming a matrix for each labeling area in each group of labeling classification information; in the matrix, the distribution of matrix elements is matched with the distribution of pixel points in the labeled image, the matrix elements corresponding to the pixel points in the labeled area take a first numerical value, and the matrix elements corresponding to the pixel points outside the labeled area take a second numerical value;

calculating the evaluation information based on all matrices and the classification labels.

Optionally, the K groups of labeled classification information include a first group of labeled classification information and a second group of labeled classification information;

optionally, the evaluation module 202 is specifically configured to:

determining each overlap region between the matrices of the first group of label classification information and the matrices of the second group, the area of each overlap region, and the two classification labels corresponding to each overlap region;

the overlap region is the set of matrix elements that occupy the same positions in the two matrices and both take the first value, and its area is the number of matrix elements in the overlap region;

calculating the sum of the areas of the same-kind overlap regions to obtain a first area parameter; a same-kind overlap region is an overlap region whose two corresponding classification labels are the same;

calculating the sum of the areas of the heterogeneous overlap regions to obtain a second area parameter; a heterogeneous overlap region is an overlap region whose two corresponding classification labels differ;

and calculating the evaluation information according to the first area parameter and the second area parameter.

Optionally, the evaluation module 202 is specifically configured to:

calculating the sum of the areas of the labeled areas in the first group of labeled classified information to obtain a third area parameter;

calculating the sum of the areas of the labeled areas in the second group of labeled classified information to obtain a fourth area parameter;

the sum of the areas of the labeling areas represents the total number of pixel points or matrix elements corresponding to the pixel points in all the labeling areas corresponding to one group of labeling classification information;

calculating the evaluation information according to the first area parameter and the second area parameter, specifically including:

and calculating the evaluation information according to the first area parameter, the second area parameter, the third area parameter and the fourth area parameter.

Optionally, the evaluation information includes first evaluation information and/or second evaluation information;

the first evaluation information is matched with a first ratio;

wherein the first ratio is: S_s / (S_A + S_B - S_s - S_d);

The second evaluation information is matched with a second ratio;

wherein the second ratio is: S_d / (S_A + S_B - S_s - S_d);

S_d characterizes the second area parameter;

S_s characterizes the first area parameter;

S_A characterizes the third area parameter;

S_B characterizes the fourth area parameter.

Optionally, if the evaluation information includes the first evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the first evaluation information is above a corresponding first specified threshold;

if the evaluation information includes the second evaluation information, then: before the labeling classification result is fed back to the specified user, the method further comprises the following steps: determining that the second evaluation information is below a corresponding second specified threshold.

Optionally, in the matrix, the first value is 1, and the second value is 0;

the evaluation module is specifically configured to:

multiplying the two matrices element-wise to obtain a Hadamard product matrix; the matrix elements with value 1 in the Hadamard product matrix constitute the overlap region;

calculating the number of matrix elements with the value of 1 in the Hadamard product matrix as the area of the overlapped area;

and taking the classification labels added to the labeling areas of the two matrixes as two classification labels corresponding to the overlapping area.

Optionally, the fed-back labeling classification result characterizes a position and a range of at least a partial overlapping region in the pathological image, and a label corresponding to the overlapping region.

Optionally, the obtaining module 201 is specifically configured to:

after a first user with the first authority draws an annotation region on the pathological image, obtain an annotation image recording the annotation region;

obtain the classification labels added to the annotation region by N second users with the second authority, and form a group of label classification information from the annotation image and the classification label added by each second user, thereby obtaining the K groups of label classification information.

Optionally, the obtaining module 201 is further configured to:

in response to the modification operation of the second user, modifying the annotation area so that: and the labeling area in the labeling classification information is the modified labeling area.

Referring to fig. 13, an electronic device 30 is provided, which includes:

a processor 31; and the number of the first and second groups,

a memory 32 for storing executable instructions of the processor;

wherein the processor 31 is configured to perform the above-mentioned method via execution of the executable instructions.

The processor 31 is capable of communicating with the memory 32 via a bus 33.

Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method.

Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
