Breast cancer ultrasound image typing method, system, and storage medium fusing deep convolutional network and radiomics features

Document No.: 1273100  Publication date: 2020-08-25

Reading note: This technology, "Breast cancer ultrasound image typing method, system, and storage medium fusing deep convolutional network and radiomics features", was designed and created by Tian Jiawei, Zhang Lei, Wang Ying, Yu Weidong, Zhang Yunpeng, and Shi Jiaxin on 2020-03-17. Its main content is as follows: the application provides a breast cancer ultrasound image typing method and system fusing deep convolutional network and radiomics features, and a computer-readable storage medium. The method comprises: acquiring an ultrasound image whose content includes a breast region; processing the ultrasound image to obtain a target region that contains a breast lesion region image; extracting a first feature and a second feature from the ultrasound image with the identified target region; fusing the first feature and the second feature to obtain a first fused feature; performing feature screening on the first fused feature to obtain a second fused feature; and obtaining a typing result for the breast cancer ultrasound image based on the second fused feature. The invention extracts high-throughput ultrasound image features and deep semantic features, fuses them, and screens the fused features, achieving effective and accurate recognition of ultrasound images.

1. A breast cancer ultrasound image typing method fusing deep convolutional network and radiomics features, characterized by comprising the following steps:

S210: acquiring an ultrasound image, the content of which includes a breast region;

S220: processing the ultrasound image to obtain a target region in the ultrasound image, the target region containing a breast lesion region;

S230: performing feature extraction on the ultrasound image with the identified target region to obtain a first feature, the first feature being a depth feature; and performing feature extraction on the ultrasound image with the identified target region to obtain a second feature, the second feature being derived from a plurality of texture features and edge features produced by at least five different radiomics image-processing operators;

S240: fusing the first feature and the second feature to obtain a first fused feature;

S250: performing feature screening on the first fused feature to obtain a second fused feature;

S260: obtaining a breast cancer ultrasound image typing result based on the second fused feature.

2. The method according to claim 1, wherein in step S230 the texture features and the edge features comprise: a first texture feature extracted by a SIFT operator, a second texture feature extracted by an LBP operator, a third texture feature extracted by a GLSZM operator, a first edge feature extracted by a LoG operator, and a second edge feature extracted by a Gabor operator.

3. The method according to claim 1, wherein in S240 the first fused feature is obtained by clustering, the clustering being performed as:

$$V(j,k)=\sum_{i} a_k(x_i)\,\bigl(x_i(j)-c_k(j)\bigr) \tag{5}$$

where V(j, k) is the output of equation (5), a_k is the weight output by softmax, and x_i(j) and c_k(j) are the j-th value of the i-th local descriptor and of the k-th cluster center, respectively, where i, j, and k are positive integers.

4. The method according to claim 1, wherein S250 further comprises: screening the first fused feature according to a feature importance evaluation, the feature importance evaluation being implemented based on a LightGBM network.

5. The method according to claim 1, wherein in step S220 the target region in the ultrasound image is obtained by: acquiring sample ultrasound images and annotation information to form a training set, the annotation information marking the breast lesion region in each sample ultrasound image;

training a deep learning network based on the training set;

identifying the target region in a newly input ultrasound image based on the trained deep learning network.

6. A breast cancer ultrasound image typing system fusing deep convolutional network and radiomics features, the system comprising:

the acquisition module is used for acquiring ultrasound images or video data;

the processor module is used for processing the ultrasound images or video data acquired by the acquisition module and obtaining a typing result;

the display module is used for displaying the ultrasound images or video data and the typing result sent by the processor module;

wherein the processor module further comprises:

a target region identification unit, used for processing the ultrasound images or video data to obtain a target region therein, the target region containing a breast lesion region;

a feature extraction unit, used for performing feature extraction on the ultrasound images or video data with the identified target region to obtain a first feature and a second feature, the first feature being a depth feature and the second feature being derived from a plurality of texture features and edge features produced by at least five different radiomics image-processing operators;

a feature fusion unit, used for fusing the first feature and the second feature to obtain a first fused feature, and performing feature screening on the first fused feature to obtain a second fused feature; and

a typing unit, used for obtaining a typing result based on the second fused feature.

7. The system of claim 6, wherein in the feature extraction unit the texture features and the edge features comprise: a first texture feature extracted by a SIFT operator, a second texture feature extracted by an LBP operator, a third texture feature extracted by a GLSZM operator, a first edge feature extracted by a LoG operator, and a second edge feature extracted by a Gabor operator.

8. The system according to claim 6, wherein the feature fusion unit obtains the first fused feature by clustering, the clustering being performed as:

$$V(j,k)=\sum_{i} a_k(x_i)\,\bigl(x_i(j)-c_k(j)\bigr) \tag{5}$$

where V(j, k) is the output of equation (5), a_k is the weight output by softmax, and x_i(j) and c_k(j) are the j-th value of the i-th local descriptor and of the k-th cluster center, respectively, where i, j, and k are positive integers.

9. The system of claim 6, wherein the acquisition module acquires ultrasound images or video data in different modalities.

10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the breast cancer ultrasound image typing method fusing deep convolutional network and radiomics features of any one of claims 1 to 5.

Technical Field

The invention relates to the technical field of medical ultrasound, belongs to the field of ultrasound image recognition and processing, and particularly relates to a method for recognizing and typing breast cancer ultrasound images by fusing deep convolutional network and radiomics features, and to a corresponding system.

Background

With the continuous development of medical equipment, the ultrasound imaging instrument has become one of the most widely used medical tools in clinical practice owing to its non-invasiveness, real-time operation, convenience, and low cost. Commonly used functional modes of ultrasound imaging include the two-dimensional black-and-white (B) mode, the spectral Doppler mode (PW/CW), and the color flow mode (CF/PDI). The B mode images from the amplitude of the ultrasound echo signal and acquires two-dimensional structural and morphological information of the tissue: the stronger the echo signal, the larger the gray value of the corresponding image pixel, and vice versa. The PW/CW and CF/PDI modes are based on the Doppler effect; they image from the phase of the ultrasound echo signal and acquire blood flow information such as velocity, direction, and energy.
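As a toy illustration of the B-mode amplitude-to-gray mapping just described (not part of the patent), the sketch below log-compresses echo envelope amplitudes into 8-bit gray values; the 60 dB dynamic range is an assumed, typical display setting.

```python
import numpy as np

def envelope_to_gray(envelope: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    """Map echo envelope amplitudes to 0-255 gray values by log compression:
    a stronger echo yields a brighter pixel, as the B-mode description states."""
    env = np.maximum(envelope, 1e-12)                  # avoid log of zero
    db = 20.0 * np.log10(env / env.max())              # 0 dB at the strongest echo
    db = np.clip(db, -dynamic_range_db, 0.0)           # limit displayed dynamic range
    return ((db + dynamic_range_db) / dynamic_range_db * 255.0).astype(np.uint8)
```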

Breast cancer poses a growing threat to women's health worldwide. Ultrasound is widely recognized as a technology suited to breast cancer screening, and China's breast cancer screening guidelines list ultrasound examination among the main screening means. However, because the signal-to-noise ratio and resolution of ultrasound imaging are relatively low, traditional feature extraction methods struggle to express lesion features efficiently, so the accuracy of pathological typing of breast cancer from ultrasound images is relatively low. A method that accurately processes breast cancer ultrasound images and extracts and recognizes their features, so that they can be readily used by subsequent personnel, is therefore a technical problem that currently needs to be solved.

Disclosure of Invention

To overcome the defects in the related art, the invention provides a breast cancer ultrasound image typing method, system, and storage medium that can effectively improve the accuracy of recognizing and typing breast cancer ultrasound images.

To achieve the above purpose, the invention provides the following technical solutions:

In one aspect, the invention provides a breast cancer ultrasound image typing method fusing deep convolutional network and radiomics features, comprising the following steps:

S210: acquiring an ultrasound image, the content of which includes a breast region;

S220: processing the ultrasound image to obtain a target region in the ultrasound image, the target region containing a breast lesion region;

S230: performing feature extraction on the ultrasound image with the identified target region to obtain a first feature, the first feature being a depth feature; and performing feature extraction on the ultrasound image with the identified target region to obtain a second feature, the second feature being derived from a plurality of texture features and edge features produced by at least five different radiomics image-processing operators;

S240: fusing the first feature and the second feature to obtain a first fused feature;

S250: performing feature screening on the first fused feature to obtain a second fused feature;

S260: obtaining a breast cancer ultrasound image typing result based on the second fused feature. An end-to-end sketch of these steps is given below.
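As referenced above, the following is a minimal end-to-end sketch of S230-S260 under stated assumptions: the patent does not name its deep network, so torchvision's ResNet-18 stands in as the depth-feature extractor; an intensity histogram stands in for the handcrafted second feature (the operator-based version is sketched after the operator list below); fusion is shown as plain concatenation, whereas the patent's preferred clustering-based fusion of equation (5) is sketched separately; and classifier is any model already trained on importance-screened features.

```python
import numpy as np
import torch
import torchvision

backbone = torchvision.models.resnet18(num_classes=128)  # stand-in deep-feature extractor
backbone.eval()

def type_lesion(roi_gray: np.ndarray, classifier) -> int:
    """Run S230-S260 on one grayscale lesion ROI and return a typing label."""
    # S230 (first feature): deep semantic feature of the ROI.
    x = torch.from_numpy(roi_gray).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        deep_feat = backbone(x).numpy().ravel()                   # (128,)
    # S230 (second feature): handcrafted stand-in (intensity histogram).
    hand_feat, _ = np.histogram(roi_gray, bins=32, range=(0, 255), density=True)
    # S240: fuse the two features (concatenation shown for brevity).
    fused = np.concatenate([deep_feat, hand_feat])
    # S250/S260: classifier is assumed trained on importance-screened features.
    return int(classifier.predict(fused[None, :])[0])
```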

Preferably, in S230, the texture features and the edge features comprise: a first texture feature extracted by a SIFT operator, a second texture feature extracted by an LBP operator, a third texture feature extracted by a GLSZM operator, a first edge feature extracted by a LoG operator, and a second edge feature extracted by a Gabor operator. A sketch of such an extraction is given below.
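A hedged sketch of the operator-based second-feature extraction: the LBP, LoG, and Gabor responses are computed with scikit-image/SciPy calls; the SIFT and GLSZM features are omitted here because they would typically come from separate libraries (e.g., OpenCV and PyRadiomics); and all statistics and parameters below are illustrative assumptions, not the patent's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def handcrafted_features(roi: np.ndarray) -> np.ndarray:
    """Concatenate simple texture/edge statistics from a grayscale lesion ROI."""
    feats = []
    # Texture: histogram of uniform LBP codes (8 neighbors, radius 1).
    lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.extend(hist)
    # Edge: Laplacian-of-Gaussian response statistics at one scale.
    log_map = gaussian_laplace(roi.astype(float), sigma=2.0)
    feats.extend([log_map.mean(), log_map.std()])
    # Texture/edge: Gabor response at one frequency/orientation.
    real, _ = gabor(roi, frequency=0.2)
    feats.extend([real.mean(), real.std()])
    return np.asarray(feats, dtype=np.float32)
```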

Preferably, in S240, the first fused feature is obtained by clustering; the clustering is performed as:

$$V(j,k)=\sum_{i} a_k(x_i)\,\bigl(x_i(j)-c_k(j)\bigr) \tag{5}$$

where V(j, k) is the output of equation (5), a_k is the weight output by softmax, and x_i(j) and c_k(j) are the j-th value of the i-th local descriptor and of the k-th cluster center, respectively, where i, j, and k are positive integers.
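A minimal NumPy sketch of the soft-assignment aggregation in equation (5); the softmax weights a_k(x_i) are assumed here to come from distances to the cluster centers, which is one common parametrization and not necessarily the patent's.

```python
import numpy as np

def vlad_aggregate(X: np.ndarray, C: np.ndarray) -> np.ndarray:
    """X: (N, D) local descriptors, C: (K, D) cluster centers. Returns V with
    V[k, j] = sum_i a_k(x_i) * (x_i[j] - c_k[j]), matching equation (5)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)   # (N, K) squared distances
    a = np.exp(-(d2 - d2.min(axis=1, keepdims=True)))          # stable softmax numerator
    a /= a.sum(axis=1, keepdims=True)                          # soft-assignment weights a_k(x_i)
    resid = X[:, None, :] - C[None, :, :]                      # (N, K, D) residuals
    return (a[:, :, None] * resid).sum(axis=0)                 # (K, D) aggregated feature
```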

Preferably, S250 further comprises: screening the first fused feature according to a feature importance evaluation, the feature importance evaluation being implemented based on a LightGBM network, as sketched below.
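A sketch of the preferred feature screening: LightGBM's built-in importance ranks the first fused features, and the top-ranked ones are retained as the second fused feature. The estimator parameters and the top_k cutoff are illustrative assumptions; the patent does not specify them.

```python
import numpy as np
import lightgbm as lgb

def select_by_importance(X: np.ndarray, y: np.ndarray, top_k: int = 64):
    """Fit a LightGBM classifier on the first fused features and keep the
    top_k features ranked by its importance scores."""
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    clf.fit(X, y)
    keep = np.argsort(clf.feature_importances_)[::-1][:top_k]
    return X[:, keep], keep
```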

Preferably, in S220, the target region in the ultrasound image is obtained by: acquiring sample ultrasound images and annotation information to form a training set, the annotation information marking the breast lesion region in each sample ultrasound image;

training a deep learning network based on the training set;

identifying the target region in a newly input ultrasound image based on the trained deep learning network. An illustrative training skeleton follows.
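An illustrative PyTorch training skeleton for the S220 target-region network; the patent does not name an architecture, so torchvision's FCN-ResNet50 segmentation model is assumed here, with per-pixel lesion/background labels serving as the annotation masks.

```python
import torch
import torchvision
from torch.utils.data import DataLoader

model = torchvision.models.segmentation.fcn_resnet50(num_classes=2)  # lesion vs. background
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_epoch(loader: DataLoader) -> None:
    """One pass over (image, mask) pairs from the annotated training set."""
    model.train()
    for images, masks in loader:          # images: (B, 3, H, W); masks: (B, H, W)
        optimizer.zero_grad()
        logits = model(images)["out"]     # (B, 2, H, W) per-pixel class scores
        loss = criterion(logits, masks.long())
        loss.backward()
        optimizer.step()
```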

In another aspect, the invention further provides a breast cancer ultrasound image typing system fusing deep convolutional network and radiomics features, the system comprising:

the acquisition module is used for acquiring ultrasound images or video data;

the processor module is used for processing the ultrasound images or video data acquired by the acquisition module and obtaining a typing result; and

the display module is used for displaying the ultrasound images or video data and the typing result sent by the processor module.

Preferably, the processor module further comprises:

a target region identification unit, used for processing the ultrasound images or video data to obtain a target region therein, the target region containing a breast lesion region;

a feature extraction unit, used for performing feature extraction on the ultrasound images or video data with the identified target region to obtain a first feature and a second feature, the first feature being a depth feature and the second feature being derived from a plurality of texture features and edge features produced by at least five different radiomics image-processing operators;

a feature fusion unit, used for fusing the first feature and the second feature to obtain a first fused feature, and performing feature screening on the first fused feature to obtain a second fused feature; and

a typing unit, used for obtaining a typing result based on the second fused feature.

Preferably, in the feature extraction unit, the texture features and the edge features comprise: a first texture feature extracted by a SIFT operator, a second texture feature extracted by an LBP operator, a third texture feature extracted by a GLSZM operator, a first edge feature extracted by a LoG operator, and a second edge feature extracted by a Gabor operator.

Preferably, the feature fusion unit obtains the first fused feature by clustering; the clustering is performed as:

$$V(j,k)=\sum_{i} a_k(x_i)\,\bigl(x_i(j)-c_k(j)\bigr) \tag{5}$$

where V(j, k) is the output of equation (5), a_k is the weight output by softmax, and x_i(j) and c_k(j) are the j-th value of the i-th local descriptor and of the k-th cluster center, respectively, where i, j, and k are positive integers.

Preferably, the acquisition module acquires ultrasound images or video data in different modalities.

In yet another aspect, the invention also provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the breast cancer ultrasound image typing method fusing deep convolutional network and radiomics features as described above.

The invention also provides a device comprising at least a processor and a storage device, the storage device storing instructions readable and executable by the processor for implementing and executing the breast cancer ultrasound image typing method fusing deep convolutional network and radiomics features.

In summary, compared with the prior art, the technical solution provided by the invention has the following advantages: the invention uses radiomics operators to extract high-throughput ultrasound image features and a deep convolutional network to extract deep semantic features of the ultrasound image, combines the two to obtain fused features, and screens the fused features to obtain the features most expressive of the lesion region image, thereby achieving effective and accurate recognition of ultrasound images and improving the accuracy of ultrasound image recognition.

Drawings

To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic structural diagram of an auxiliary diagnosis system 100 for ultrasound pathological typing of breast cancer according to an embodiment of the present invention.

Fig. 2 is a flowchart of an auxiliary diagnosis method 200 for ultrasound pathological typing of breast cancer according to an embodiment of the present invention.

FIG. 3 is a flow chart of a method 300 for training a neural network model according to an embodiment of the present invention.

FIG. 4 is a flowchart of a method 400 for training a first feature extraction model according to an embodiment of the invention.

Fig. 5 is a schematic diagram of a LightGBM network according to an embodiment of the present invention.

Detailed Description

The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.

In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "linked," and "connected" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements; and as a wireless or a wired connection. The specific meanings of the above terms in the present invention can be understood by those skilled in the art as the case may be.

In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.

In describing embodiments of the present invention, additional subjects, such as users, may be introduced to assist in performing the breast ultrasound typing method described below.
