Medical image reconstruction method, device, equipment and storage medium

Document No.: 1939980    Publication date: 2021-12-07

Reading note: This technology, "Medical image reconstruction method, device, equipment and storage medium", was designed and created by 宁世杰, 郭航, 杜健 and 王帅 on 2021-05-13. Its main content is as follows: The application discloses a medical image reconstruction method, device, equipment and storage medium, relating to the technical field of image processing. The method comprises: acquiring a cross-sectional image sequence of a three-dimensional medical image, and acquiring a window width and a window level; mapping a two-dimensional cross-sectional image at a first gray scale level into a grayscale image at a second gray scale level based on the window width and the window level, the first gray scale level being higher than the second gray scale level; determining the color component corresponding to each gray value in the grayscale image at the second gray scale level based on a mapping relationship; and reconstructing the three-dimensional medical image based on the color components corresponding to the gray values. By re-determining the color component corresponding to each gray value after the gray scale level of the grayscale image corresponding to the cross-sectional image has been reduced, the method matches the color components to the gray values more closely, produces smoother transitions between the color components, and makes the generated three-dimensional medical image more realistic.

1. A method of reconstructing a medical image, the method comprising:

acquiring a cross-sectional image sequence of a three-dimensional medical image, wherein the two-dimensional cross-sectional images in the cross-sectional image sequence are obtained by cutting the three-dimensional medical image perpendicular to a same axis; and acquiring a window width and a window level, wherein the window width and the window level are used for determining a density range of the tissues and organs to be displayed in the cross-sectional images;

mapping the two-dimensional cross-sectional image at a first grayscale level to a grayscale image at a second grayscale level based on the window width and the window level, the first grayscale level being higher than the second grayscale level, wherein a grayscale level indicates the number of gray scales;

determining color components corresponding to all gray values in the gray image of the second gray level based on the mapping relation between the gray values and the color components;

and reconstructing the three-dimensional medical image based on the color components corresponding to the gray values to generate a reconstructed three-dimensional medical image.

2. The method according to claim 1, wherein determining the color component corresponding to each gray value in the gray image of the second gray scale level based on the mapping relationship between the gray value and the color component comprises:

determining the mapping relation corresponding to the window width and the window level;

and determining the color component corresponding to each gray value from the mapping relation.

3. The method of claim 2, wherein the mapping relationship comprises a first sub-mapping relationship between the grayscale value and a color component and a second sub-mapping relationship between the grayscale value and a transparency component; the color component corresponding to each gray value comprises a color component and a transparency component corresponding to each gray value;

determining the mapping relation corresponding to the window width and the window level; determining the color component corresponding to each gray value from the mapping relationship, including:

determining the first sub-mapping relationship corresponding to the window width and the window level, and determining the second sub-mapping relationship corresponding to the window width and the window level;

and determining a color component corresponding to each gray value based on the first sub-mapping relation, and determining a transparency component corresponding to each gray value based on the second sub-mapping relation.

4. The method of any of claims 1 to 3, wherein said mapping said two-dimensional cross-sectional image at a first gray scale level to a gray image at a second gray scale level based on said window width and said window level comprises:

generating a three-dimensional volume texture based on the two-dimensional section image of the first gray scale level, wherein the three-dimensional volume texture is used for describing three-dimensional data of the three-dimensional medical image;

extracting a gray image of the first gray scale level from the three-dimensional volume texture;

mapping the grayscale image of the first grayscale level to the grayscale image of the second grayscale level based on the window width and the window level.

5. The method of claim 4, wherein mapping the grayscale image of the first grayscale level to the grayscale image of the second grayscale level based on the window width and the window level comprises:

calculating the maximum value and the minimum value of the dynamic value range of the gray value according to the window width and the window level;

calculating a first difference value of the jth gray value minus the minimum value and a second difference value of the maximum value minus the minimum value, wherein the jth gray value is the gray value of a jth pixel point in the gray image of the first gray scale level, and j is a positive integer;

dividing a first product of the first difference and a first parameter by the second difference to obtain the gray value of the jth pixel point at the second gray scale level, wherein the first parameter is determined according to the first gray scale level;

and generating a gray image of the second gray scale level based on the gray values of the second gray scale level.

6. The method of any of claims 1 to 3, wherein after the generating of the reconstructed three-dimensional medical image, the method further comprises:

and cutting the reconstructed three-dimensional medical image to generate a reconstructed cross-section image sequence, wherein the reconstructed cross-section image sequence is used for displaying a section view of the tissue organ.

7. An apparatus for reconstructing a medical image, the apparatus comprising:

the acquisition module is used for acquiring a cross-sectional image sequence of the three-dimensional medical image, wherein the two-dimensional cross-sectional images in the cross-sectional image sequence are obtained by cutting the three-dimensional medical image perpendicular to a same axis; and for acquiring a window width and a window level, wherein the window width and the window level are used for determining the density range of the tissues and organs to be displayed in the cross-sectional images;

a mapping module for mapping the two-dimensional cross-sectional image at a first gray scale level to a grayscale image at a second gray scale level based on the window width and the window level, the first gray scale level being higher than the second gray scale level, wherein the gray scale level indicates the number of gray scales;

the determining module is used for determining the color component corresponding to each gray value in the gray image of the second gray level based on the mapping relation between the gray value and the color component;

and the building module is used for reconstructing the three-dimensional medical image based on the color components corresponding to the gray values and generating the reconstructed three-dimensional medical image.

8. The apparatus of claim 7, wherein the determining module is configured to:

determining the mapping relation corresponding to the window width and the window level;

and determining the color component corresponding to each gray value from the mapping relation.

9. An electronic device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for reconstructing medical images according to any one of claims 1 to 6.

10. A computer-readable storage medium, having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the method for reconstructing a medical image according to any one of claims 1 to 6.

Technical Field

The embodiment of the application relates to the technical field of image processing, in particular to a medical image reconstruction method, a medical image reconstruction device, medical image reconstruction equipment and a storage medium.

Background

The visualization of three-dimensional medical images is of great medical importance and directly affects how medical staff diagnose a patient's condition.

In the three-dimensional construction of medical images, a sequence of two-dimensional cross-sectional images is usually volume-rendered to generate a three-dimensional medical image. Because the grayscale level of the original medical image is high, and the grayscale level of most display screens is lower than that of the original medical image, the electronic device reduces the grayscale level of each pixel during volume rendering and finally renders the three-dimensional medical image. During rendering, each gray value corresponds to a color component; the color component is an RGBA value, where R represents red (Red), G represents green (Green), B represents blue (Blue), and A represents transparency (Alpha).

However, the three-dimensional medical image generated in this way lacks realism, which affects how faithfully human tissues and organs are represented.

Disclosure of Invention

The embodiments of the application provide a medical image reconstruction method, device, equipment and storage medium that can generate a more realistic three-dimensional medical image. The technical solution is as follows:

according to one aspect of the present application, there is provided a method for reconstructing a medical image, the method comprising:

acquiring a cross-sectional image sequence of the three-dimensional medical image, wherein the two-dimensional cross-sectional images in the cross-sectional image sequence are obtained by cutting the three-dimensional medical image perpendicular to a same axis; and acquiring a window width and a window level, wherein the window width and the window level are used for determining the density range of the tissues and organs to be displayed in the cross-sectional images;

mapping the two-dimensional section image with the first gray scale level into a gray image with a second gray scale level based on the window width and the window level, wherein the first gray scale level is higher than the second gray scale level, and the gray scale level is used for indicating the number of gray scales;

determining a color component corresponding to each gray value in the gray image of the second gray level based on the mapping relation between the gray value and the color component;

and reconstructing the three-dimensional medical image based on the color components corresponding to the gray values, and generating the reconstructed three-dimensional medical image.

According to another aspect of the present application, there is provided a medical image reconstruction apparatus, comprising:

the acquisition module is used for acquiring a cross-sectional image sequence of the three-dimensional medical image, wherein the two-dimensional cross-sectional images in the cross-sectional image sequence are obtained by cutting the three-dimensional medical image perpendicular to a same axis; and for acquiring a window width and a window level, wherein the window width and the window level are used for determining the density range of the tissues and organs to be displayed in the cross-sectional images;

the mapping module is used for mapping the two-dimensional section image with the first gray scale level into a gray image with a second gray scale level based on the window width and the window level, wherein the first gray scale level is higher than the second gray scale level, and the gray scale level is used for indicating the gray scale quantity;

the determining module is used for determining the color component corresponding to each gray value in the gray image of the second gray level based on the mapping relation between the gray value and the color component;

and the building module is used for reconstructing the three-dimensional medical image based on the color components corresponding to the gray values and generating the reconstructed three-dimensional medical image.

According to another aspect of the present application, there is provided an electronic device comprising a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for reconstructing a medical image according to the above aspect.

According to another aspect of the present application, there is provided a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the method for reconstructing a medical image according to the above one aspect.

According to another aspect of the present application, a computer program product is provided, the computer program product comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the method provided in the various optional implementation modes of the medical image reconstruction method.

The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:

in the medical image reconstruction method, after the cross-sectional image sequence, window level and window width of the three-dimensional medical image are obtained, the grayscale level of the grayscale image corresponding to the cross-sectional images is first reduced, and the color component corresponding to each gray value in the grayscale image at the second grayscale level is then determined based on the mapping relationship between gray values and color components. The color components determined in this way match the gray values of the reduced grayscale level more closely, the rendered colors and transparency transition more smoothly, the generated three-dimensional medical image is more realistic, and better reference data can be provided for the medical diagnosis of medical staff.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

FIG. 1 is a block diagram of a medical information system provided by an exemplary embodiment of the present application;

FIG. 2 is a flowchart of a method for reconstructing a medical image according to an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of a mapping between gray scale values and color components provided by an exemplary embodiment of the present application;

fig. 4 is a flowchart of a method for reconstructing a medical image according to another exemplary embodiment of the present application;

FIG. 5 is a schematic process diagram of three-dimensional volume texture construction provided by an exemplary embodiment of the present application;

FIG. 6 is a schematic diagram of a process for mapping a 16-bit gray scale to an 8-bit gray scale according to an exemplary embodiment of the present application;

FIG. 7 is a process diagram of three-dimensional volume cloud construction provided by an exemplary embodiment of the present application;

FIG. 8 is a process diagram of reconstruction of a medical image provided by an exemplary embodiment of the present application;

fig. 9 is a block diagram of a medical image reconstruction apparatus provided in an exemplary embodiment of the present application;

FIG. 10 is a block diagram of an electronic device provided in an exemplary embodiment of the present application;

fig. 11 is a block diagram of a server according to an exemplary embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

First, several terms related to the embodiments of the present application are explained:

digital Imaging and Communications in Medicine (DICOM), is an international standard for medical images and related information (ISO 12052). It defines a medical image format for data exchange that meets clinical requirements for digital medical image transmission, display and storage.

DICOM is widely used in radiology, cardiovascular imaging and radiodiagnosis (X-ray, CT, magnetic resonance, ultrasound, etc.) and is increasingly used in ophthalmology, dentistry and other medical fields. Among the tens of thousands of medical imaging devices in use, DICOM is one of the most widely deployed medical information standards, and billions of medical images in clinical use currently comply with it.

The DICOM standard builds on industry-standard computer networking and helps transmit and exchange digital images between medical imaging devices more efficiently. These devices include not only Computed Tomography (CT), Magnetic Resonance (MR), nuclear medicine and ultrasound equipment, but also CR, film digitizing systems, video acquisition systems, and Hospital Information Systems (HIS)/Radiology Information Systems (RIS).

At present, modalities such as CT, magnetic resonance and ultrasound use precisely collimated X-ray beams, gamma rays, ultrasound and the like, together with highly sensitive detectors, to scan a certain part of the human body cross-section by cross-section. The images obtained after scanning are therefore multi-layer cross-sectional images, and a three-dimensional medical image can be formed by stacking these cross-sectional images layer by layer along the z-axis (this is the reconstruction of a three-dimensional medical image). The cross-sectional image of each layer can be stored in a dicom file (a dicom file does not contain pixel information alone; it also contains a large amount of data header information). The goal is to read the data header information and the pixel information from a series of dicom files.

For ease of reading and use, the dicom files are often converted into the format of one raw file and one mhd file per case. Mhd stands for meta header data; an mhd file is small and contains other information about the image, such as the CT coordinate origin and the pixel spacing. Raw means "raw data"; the raw file stores the pixel information.
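
Illustratively, and only as a sketch that is not part of the original disclosure, such a pair of files might be read as follows; the key names (DimSize, ElementSpacing, ElementDataFile) follow the common MetaImage convention, and the volume is assumed to be stored as 16-bit signed integers.

#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Minimal reader for an .mhd/.raw pair (MetaImage convention); assumes 16-bit signed pixels.
struct Volume16 {
    int dimX = 0, dimY = 0, dimZ = 0;                    // number of voxels along each axis
    double spacingX = 1.0, spacingY = 1.0, spacingZ = 1.0; // physical size of a voxel
    std::vector<int16_t> voxels;                          // pixel information from the .raw file
};

Volume16 loadMhdRaw(const std::string& mhdPath, const std::string& rawDir) {
    Volume16 vol;
    std::string rawName;
    std::ifstream mhd(mhdPath);                           // .mhd: small text header of "key = value" lines
    std::string line;
    while (std::getline(mhd, line)) {
        std::istringstream iss(line);
        std::string key, eq;
        iss >> key >> eq;                                 // e.g. "DimSize = 512 512 300"
        if (key == "DimSize")              iss >> vol.dimX >> vol.dimY >> vol.dimZ;
        else if (key == "ElementSpacing")  iss >> vol.spacingX >> vol.spacingY >> vol.spacingZ;
        else if (key == "ElementDataFile") iss >> rawName;
    }
    // .raw: one contiguous block of dimX*dimY*dimZ 16-bit gray values, slice after slice.
    vol.voxels.resize(static_cast<size_t>(vol.dimX) * vol.dimY * vol.dimZ);
    std::ifstream raw(rawDir + "/" + rawName, std::ios::binary);
    raw.read(reinterpret_cast<char*>(vol.voxels.data()),
             static_cast<std::streamsize>(vol.voxels.size() * sizeof(int16_t)));
    return vol;
}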

The window technique is a display technique used in CT examination to observe normal tissues or lesions of different densities; it comprises the window width and the window level. The window width is the range of CT values displayed on the tomographic image, and tissues or lesions within this CT range are displayed in different simulated gray scales. The window level is the average of the upper and lower limits of the window width.

The CT value, measured in Hounsfield Units (HU), is a measure of the density of a local tissue or organ in the human body. For example, different radiodensities can be mapped to 256 different gray levels (i.e., gray values), and the attenuation values can be redefined according to different CT value ranges. Assuming the central value of the CT range is unchanged, narrowing the defined range gives what is called a narrow window, in which small density changes can be distinguished by comparison; in image processing this is called contrast compression. For example, to find subtle changes of hepatic tumors in the abdomen, the window can be redefined using 70 HU, the average CT value of the liver, as the window level (the so-called hepatic window), with a window width of 170 HU; this window ranges from -15 HU to +155 HU, values below -15 HU are shown as black and values above +155 HU as white. Similarly, a wide window is used for bone, mainly to take into account the marrow in the fat-containing medullary cavity and the dense outer cortical bone.

The gray value and the CT value can be converted into each other. Taking the gray value to CT value as an example, the conversion formula is as follows:

Hu=pixel_val*rescale_slope+rescale_intercept;

where pixel_val is the gray value of the ith pixel, Hu is the CT value of the ith pixel, and rescale_slope and rescale_intercept are the conversion parameters in the formula, whose values are both read from the header of the dicom file; i is a positive integer. If rescale_slope is 1 and rescale_intercept is 0, no conversion is needed.
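
As an illustrative sketch only (not from the original text), the conversion can be applied to a whole slice as follows; pixel_val, rescale_slope and rescale_intercept follow the definitions above and are assumed to have already been parsed from the dicom header.

#include <cstdint>
#include <vector>

// Convert the raw gray values of one slice to CT values (HU): Hu = pixel_val*slope + intercept.
std::vector<double> grayToHu(const std::vector<int16_t>& pixel_val,
                             double rescale_slope, double rescale_intercept) {
    std::vector<double> hu(pixel_val.size());
    for (size_t i = 0; i < pixel_val.size(); ++i) {
        hu[i] = pixel_val[i] * rescale_slope + rescale_intercept;
    }
    return hu;
}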

The gray scale divides the brightness variation between the brightest and the darkest into several levels, so that the screen brightness corresponding to the input signal can be controlled. Each digital image is composed of a number of dots, also called pixels, and each pixel is usually composed of three sub-pixels of red, green and blue (RGB); each sub-pixel, driven by the light source behind it, can present a different brightness level. The gray levels represent the gradation of brightness from the darkest to the brightest; the more intermediate levels there are, the more delicate the displayed picture can be. Taking an 8-bit panel as an example, it can represent 2 to the power of 8, i.e., 256 brightness levels, which is called 256 gray levels. Each pixel on an LCD screen combines red, green and blue sub-pixels at different brightness levels to form a color dot; that is, the color change of each dot on the screen is actually caused by the gray scale change of the three RGB sub-pixels that constitute it.

The ray tracing algorithm first requires a three-dimensional volume texture; n rays are then emitted from the camera, each with a sampling step length. While a ray is inside the volume texture, it is sampled once per step: the texture value (which actually represents the density value at that point) is fetched, the illumination is calculated, and the result is blended with the ray's currently accumulated color value. Because the light path is reversible, shooting rays from the camera for shading and sampling has the same effect as light emitted from the light source being scattered and finally entering the camera, so a correct image can be rendered; for example, shadows in the three-dimensional medical image can be rendered.
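
Illustratively, a minimal sketch of the sampling-and-blending loop described above is given below, using front-to-back compositing; the helper names sampleDensity and transferFunction (the density lookup in the volume texture and the gray-value-to-RGBA mapping) are assumptions for illustration and not part of the original disclosure, and the illumination term is omitted for brevity.

#include <array>

struct RGBA { float r, g, b, a; };

// Illustrative front-to-back compositing along one ray through the volume texture.
RGBA marchRay(std::array<float, 3> pos, std::array<float, 3> dir,
              float stepLen, int maxSteps,
              float (*sampleDensity)(const std::array<float, 3>&),
              RGBA (*transferFunction)(float)) {
    RGBA acc{0.f, 0.f, 0.f, 0.f};                        // accumulated color of this ray
    for (int s = 0; s < maxSteps && acc.a < 0.99f; ++s) {
        float density = sampleDensity(pos);              // one texture sample per step
        RGBA c = transferFunction(density);              // density -> color + transparency
        float w = c.a * (1.f - acc.a);                   // front-to-back blending weight
        acc.r += c.r * w;  acc.g += c.g * w;  acc.b += c.b * w;
        acc.a += w;
        for (int k = 0; k < 3; ++k) pos[k] += dir[k] * stepLen;  // advance along the ray
    }
    return acc;
}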

For example, the medical image reconstruction method provided by the present application can be applied to CR, film digitization systems, video capture systems, HIS, RIS and the like. As shown in FIG. 1, which is a block diagram of a medical information system provided by an exemplary embodiment of the present application, the system may include a scanning device 120, an electronic device 140, a server cluster 160 and a communication network 180.

Illustratively, the electronic device 140 and the server cluster 160, and the electronic device 140 and the scanning device 120, are connected via the communication network 180. The scanning device 120 is configured to scan cross-sectional images of the medical image; after obtaining the cross-sectional images, the scanning device 120 transmits them in scanning order to the electronic device 140, which stores them in the server cluster 160. The electronic device 140 includes a display screen and may also display the two-dimensional cross-sectional images.

Illustratively, the electronic device 140 and the server cluster 160, and the scanning device 120 and the server cluster 160, are connected via the communication network 180. After obtaining the cross-sectional images, the scanning device 120 transmits them in order to the server cluster 160 for storage. The electronic device 140 may acquire the cross-sectional images from the server cluster 160 for display on the display screen.

For example, the electronic device 140 may display two-dimensional sectional images and three-dimensional medical images constructed based on the sectional image sequence in a split screen manner. The electronic device 140 may also drag the three-dimensional medical image in different directions to display the three-dimensional medical image in different directions.

Illustratively, the electronic device 140 may be a desktop computer, a laptop computer, a tablet computer, a smartphone, or the like. The electronic device 140 has an operating system installed, on which an application program can run, so that the window width and the window level used when constructing the three-dimensional medical image can be adjusted on the electronic device 140 through the application program.

The server cluster 160 includes one or more servers and is used to provide background services for the application program. The server cluster 160 also provides a storage service for the cross-sectional images; for example, it provides a service for storing the cross-sectional images as dicom files, and it may also convert dicom files into mhd files and raw files. Optionally, the server cluster 160 undertakes the primary computing work and the electronic device 140 the secondary computing work; alternatively, the server cluster 160 undertakes the secondary computing work and the electronic device 140 the primary computing work; alternatively, the server cluster 160 and the electronic device 140 perform cooperative computing using a distributed computing architecture. For example, rendering of the three-dimensional medical image may be performed by the electronic device 140 alone, or the server cluster 160 may calculate the three-dimensional volume texture of the three-dimensional medical image and the electronic device 140 may perform the volume rendering based on that texture to display the reconstructed three-dimensional medical image on its display screen.

Illustratively, the communication network 180 includes a wired network or a wireless network; optionally, the wired network may be a metropolitan area network, a local area network, an optical fiber network, or the like; the Wireless network may be a mobile communication network or a Wireless Fidelity (WiFi) network.

Fig. 2 is a flowchart of a medical image reconstruction method according to an exemplary embodiment of the present application, which is described by way of example as being applied to the electronic device of the system shown in fig. 1, and the method includes:

step 201, acquiring a cross-section image sequence of the three-dimensional medical image, and acquiring a window width and a window level.

The cross-sectional image sequence comprises at least two two-dimensional cross-sectional images, which are obtained by cutting the three-dimensional medical image perpendicular to a same axis.

The window width and the window level are used to determine the density range of the tissues and organs to be displayed by the cross-sectional images. The window width is the range of CT values contained in a two-dimensional cross-sectional image (or in the three-dimensional medical image) and is used to define the attenuation between gray levels; for example, if the difference between two adjacent gray levels at 16 gray levels is a1 and the difference between two adjacent gray levels at 32 gray levels is a2, then a1 is greater than a2. The window level is the average of the upper and lower limits of the window width. The window width and the window level together define a value range: the window width defines the distance between the maximum value of the range (the upper window width limit) or the minimum value (the lower window width limit) and the window level, and the window level is the middle value between the maximum and the minimum of the range. Illustratively, if the window width is 60 HU and the window level is 85 HU, the window level plus the window width gives the upper window width limit and the window level minus the window width gives the lower window width limit, so the defined range is +25 HU to +145 HU.

For example, the window width and the window level may be user-defined, for example, a user interface of an application program is displayed on the electronic device before the three-dimensional medical image is reconstructed, and the user interface includes setting controls of the window width and the window level; and the electronic equipment responds to the setting operation of the window width and the window position on the setting control, determines the set window width and the set window position as rendering parameters of the three-dimensional medical image, and then acquires the window width and the window position. For example, the setting control may be a text editing control, and the value of the window width and the value of the window level may be directly input in the text editing control; or, the setting control may be a sliding rod control, and the value of the window width and the value of the window level may be set by sliding a slider on a sliding rod.

Illustratively, the sequence of cross-sectional images of the three-dimensional medical image is stored in a database, and the electronic device sends a data request to the server, wherein the data request is used for requesting the sequence of cross-sectional images of the three-dimensional medical image from the server; and after the server acquires the cross-section image sequence from the database, feeding back the cross-section image sequence to the electronic equipment. Each three-dimensional medical image in the database is provided with a unique identifier, and when the data of the cross-section image sequence of the three-dimensional medical image is acquired, the data request carries the unique identifier of the three-dimensional medical image.

The cross-sectional image sequence may be stored in dicom files, and the electronic device extracts the cross-sectional image sequence from the dicom files. Alternatively, the server may convert the dicom files into an mhd file and a raw file, and the electronic device then extracts the cross-sectional image sequence from the mhd file and the raw file. Alternatively, the electronic device converts the obtained dicom files into an mhd file and a raw file and then extracts the cross-sectional image sequence from them. The cross-sectional images are two-dimensional images, and the stacking order information between the two-dimensional cross-sectional images is implicit in the cross-sectional image sequence, so a three-dimensional volume texture can be generated from the two-dimensional cross-sectional image sequence.

Step 202, based on the window width and the window level, mapping the two-dimensional section image of the first gray scale level into a gray scale image of a second gray scale level, wherein the first gray scale level is higher than the second gray scale level.

The grayscale level is used to indicate the number of gray scales. For example, the two-dimensional cross-sectional image has 65536 (i.e., 2^16) gray scales, that is, the first grayscale level has 65536 gray scales, while the grayscale image has 256 (i.e., 2^8) gray scales, that is, the second grayscale level has 256 gray scales. As another example, the two-dimensional cross-sectional image has 1024 (i.e., 2^10) gray scales, that is, the first grayscale level has 1024 gray scales, and the grayscale image at the second grayscale level has 256 gray scales.

Exemplarily, the electronic device extracts a grayscale image at the first grayscale level from each of the at least two two-dimensional cross-sectional images to obtain at least two grayscale images, and maps each grayscale image at the first grayscale level to a grayscale image at the second grayscale level.

Step 203, determining the color component corresponding to each gray value in the gray image of the second gray level based on the mapping relationship between the gray value and the color component.

A mapping relationship between gray values and color components is provided in the electronic device; illustratively, this mapping relationship is defined for the second grayscale level. The electronic device determines the color component corresponding to the kth gray value in the grayscale image at the second grayscale level from the mapping relationship between gray values and color components, where k is a positive integer, and finally obtains the color component corresponding to each gray value in the grayscale image at the second grayscale level.

Optionally, the mapping relationship is determined by a window width and a window level, that is, the window width and the window level have a corresponding relationship with the mapping relationship; exemplarily, the electronic device determines a mapping relationship corresponding to the window width and the window level; and determining the color component corresponding to each gray value from the mapping relation.

In some embodiments, the color component includes a color component and a transparency component, and the mapping relationship includes a first sub-mapping relationship between gray values and color components and a second sub-mapping relationship between gray values and transparency components. The first sub-mapping relationship is determined by the window width and the window level, that is, the window width and the window level have a first sub-correspondence with the first sub-mapping relationship; likewise, the second sub-mapping relationship is determined by the window width and the window level, that is, the window width and the window level have a second sub-correspondence with the second sub-mapping relationship. The electronic device may determine the first sub-mapping relationship and the second sub-mapping relationship corresponding to the window width and the window level, then determine the color component corresponding to each gray value based on the first sub-mapping relationship and the transparency component corresponding to each gray value based on the second sub-mapping relationship; the color component corresponding to each gray value thus includes both a color component and a transparency component.

Exemplarily, the color component refers to the RGB components, i.e., the R (red), G (green) and B (blue) components; different color components have different first sub-correspondences with the window width and the window level. As shown in FIG. 3, when the grayscale level is 8 bits, the mapping relationship between gray values and color components includes: a first sub-mapping relationship between the gray value and the R component; a first sub-mapping relationship between the gray value and the G component; a first sub-mapping relationship between the gray value and the B component; and a second sub-mapping relationship between the gray value and the transparency component. Taking the color-component mapping as an example, a gray value of 250 corresponds to RGB [0, 50, 80]. In the mapping relationships of FIG. 3, vertical line density is used to represent color shade, transparency and gray scale.

Illustratively, each of the above-described sub-mapping relationships may be represented by a mapping function, or alternatively, by a set of mapping functions.
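
As a hedged illustration (an assumed realization, not stated in the original disclosure), such a set of mapping functions is often implemented as a lookup table: for an 8-bit second grayscale level, a 256-entry table stores the RGBA value of every possible gray value, built by linearly interpolating a few control points that would, in practice, depend on the chosen window width and window level. The control points below are purely illustrative.

#include <array>
#include <cstdint>
#include <utility>
#include <vector>

struct RGBA8 { uint8_t r, g, b, a; };

// Build a 256-entry gray-value -> RGBA lookup table from a few (grayValue, RGBA) control points.
std::array<RGBA8, 256> buildTransferLut(const std::vector<std::pair<int, RGBA8>>& ctrl) {
    std::array<RGBA8, 256> lut{};
    for (size_t seg = 0; seg + 1 < ctrl.size(); ++seg) {
        int g0 = ctrl[seg].first, g1 = ctrl[seg + 1].first;
        const RGBA8& c0 = ctrl[seg].second;
        const RGBA8& c1 = ctrl[seg + 1].second;
        for (int g = g0; g <= g1 && g < 256; ++g) {
            float t = (g1 == g0) ? 0.f : float(g - g0) / float(g1 - g0);   // linear interpolation
            lut[g] = { uint8_t(c0.r + t * (c1.r - c0.r)),
                       uint8_t(c0.g + t * (c1.g - c0.g)),
                       uint8_t(c0.b + t * (c1.b - c0.b)),
                       uint8_t(c0.a + t * (c1.a - c0.a)) };
        }
    }
    return lut;
}

// Example use: gray 0 -> transparent black, gray 255 -> opaque white.
// auto lut = buildTransferLut({{0, {0,0,0,0}}, {128, {180,60,40,120}}, {255, {255,255,255,255}}});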

Step 204, reconstructing the three-dimensional medical image based on the color components corresponding to the gray values to generate a reconstructed three-dimensional medical image.

Illustratively, the electronic device updates color components in a three-dimensional volume texture of the three-dimensional medical image based on the color components corresponding to the respective gray values, reconstructs the three-dimensional medical image based on the updated three-dimensional volume texture by using a volume rendering technology, and generates a reconstructed three-dimensional medical image.

Illustratively, in the process of reconstructing the three-dimensional medical image based on the updated three-dimensional volume texture, the electronic device further calculates volume illumination parameters of the three-dimensional medical image by using a ray marching algorithm, and adjusts the color components according to the volume illumination parameters to generate the three-dimensional medical image. Illustratively, the volume illumination parameters include the scattering value and the reflection value produced when a ray travels through the three-dimensional volume texture; adjusting the color components by the scattering value and the reflection value enables the shadows in the three-dimensional medical image to be reflected more realistically.

In summary, after the cross-sectional image sequence, window level and window width of the three-dimensional medical image are obtained, the grayscale level of the grayscale image corresponding to the cross-sectional images is first reduced, and the color component corresponding to each gray value in the grayscale image at the second grayscale level is then determined based on the mapping relationship between gray values and color components. In this way the color components match the gray values of the reduced grayscale level more closely, the rendered colors and transparency transition more smoothly, the generated three-dimensional medical image is more realistic, and better reference data can be provided for the medical diagnosis of medical staff.

In addition, the window width and the window level define the gray scale attenuation speed, and different window widths and window levels correspond to different mapping relationships, so the attenuation speed between the color components better matches the gray values and the colors and transparency of the tissues and organs in the generated three-dimensional medical image are presented better.

The mapping of the grayscale image from the first grayscale level to the second grayscale level can be implemented as steps 2021 to 2023, as shown in FIG. 4:

step 2021, generate a three-dimensional volume texture based on the two-dimensional cross-sectional image of the first gray scale level.

The three-dimensional volume texture is used to describe the three-dimensional data of the three-dimensional medical image, that is, the three-dimensional volume texture is rendered to generate the three-dimensional medical image.

After the electronic device obtains the cross-sectional image sequence, that is, after the at least two two-dimensional cross-sectional images with an arrangement order are obtained, the electronic device processes the at least two two-dimensional cross-sectional images according to the arrangement order, determines the three-dimensional data of each pixel point in the two-dimensional cross-sectional images, and then generates the three-dimensional volume texture at the first grayscale level. Illustratively, as shown in FIG. 5, the electronic device processes the locally arranged dicom cross-sectional images 301 to obtain the three-dimensional volume texture 302.
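
A minimal sketch of this stacking step follows (illustrative names, not the original implementation): the slices are copied in their arrangement order into one contiguous buffer, which is the layout a 3D texture upload expects.

#include <cstdint>
#include <vector>

// Stack 2D cross-sectional images (all width x height, in arrangement order)
// into one contiguous width x height x depth buffer for a 3D volume texture.
std::vector<int16_t> buildVolumeTexture(const std::vector<std::vector<int16_t>>& slices,
                                        int width, int height) {
    const size_t sliceSize = static_cast<size_t>(width) * height;
    std::vector<int16_t> volume;
    volume.reserve(sliceSize * slices.size());
    for (const auto& slice : slices) {                    // z-axis stacking, slice by slice
        volume.insert(volume.end(), slice.begin(), slice.begin() + sliceSize);
    }
    return volume;                                        // upload e.g. via a 3D texture API such as glTexImage3D
}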

Step 2022, extract the gray image of the first gray scale level from the three-dimensional volume texture.

The electronic equipment extracts the gray value of each pixel point from the three-dimensional volume texture and generates a gray image with a first gray level. For example, if the cross-sectional image includes 65536 gray levels, a gray-scale image having 65536 gray levels is generated.

Illustratively, the electronic device takes the average of the R, G and B values of each pixel point as the gray value of that pixel point; alternatively, it takes the average of the maximum and the minimum of the R, G and B values of each pixel point as the gray value.
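
As a small illustrative sketch (not from the original text), the two gray-value choices mentioned above look like this:

#include <algorithm>
#include <cstdint>

// Average of the three channels.
uint8_t grayFromMean(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((r + g + b) / 3);
}

// Average of the largest and smallest channel.
uint8_t grayFromMinMax(uint8_t r, uint8_t g, uint8_t b) {
    uint8_t mx = std::max({r, g, b});
    uint8_t mn = std::min({r, g, b});
    return static_cast<uint8_t>((mx + mn) / 2);
}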

Step 2023, based on the window width and the window level, map the grayscale image of the first grayscale level to a grayscale image of a second grayscale level.

The electronic device calculates, according to the window width and the window level, the maximum and the minimum of the dynamic value range of the gray values in the cross-sectional images at the second grayscale level; in other words, the maximum and the minimum of the dynamic value range of the gray values in the three-dimensional medical image at the second grayscale level are calculated according to the window width and the window level. The electronic device then maps each pixel point in the grayscale image at the first grayscale level to a pixel point at the second grayscale level, obtains the second-grayscale-level pixel points corresponding to all pixels, and thereby generates the grayscale image at the second grayscale level.

Optionally, the electronic device calculates the maximum value and the minimum value of the dynamic value range of the gray values according to the window width and the window level; calculates a first difference of the jth gray value minus the minimum value and a second difference of the maximum value minus the minimum value, where the jth gray value is the gray value of the jth pixel point in the grayscale image at the first grayscale level and j is a positive integer; divides the first product of the first difference and a first parameter by the second difference to obtain the gray value of the jth pixel point at the second grayscale level, where the first parameter is determined according to the first grayscale level; and generates the grayscale image at the second grayscale level based on the gray values at the second grayscale level.

For example, for the calculation of the maximum value and the minimum value, the electronic device calculates a third difference by subtracting the window width from twice the window level, divides the third difference by 2 and adds 0.5 to obtain the minimum value; the maximum value is obtained by adding the window width to twice the window level, dividing the sum by 2 and adding 0.5.

For example, as shown in FIG. 6, the mapping from a 16-bit grayscale level to an 8-bit grayscale level can be implemented with the following code:

computeMinValMaxVal(pixel_val, min, max);   // first compute the maximum (max) and minimum (min) of the dynamic range

for (i = 0; i < nNumPixels; i++) {          // loop to compute the gray value of each pixel point in the image
    disp_pixel_val[i] = (pixel_val[i] - min) * 255.0 / (double)(max - min);   // formula for the gray value at the second grayscale level
}

where nNumPixels is the number of pixels in the image, pixel_val holds the gray values at the first grayscale level, disp_pixel_val holds the gray values at the second grayscale level, and (double) casts the difference to a double-precision floating-point value so that the division is performed in floating point.

For the calculation of the maximum value (max) and the minimum value (min) of the dynamic value range in the statement above, the following formulas are used:

min=(2*window_center-window_width)/2.0+0.5;

max=(2*window_center+window_width)/2.0+0.5;

where window_center is the window level and window_width is the window width. In FIG. 6, diagonal line density is used to represent the gray scale: the greater the diagonal density, the closer the gray scale is to black and the smaller the gray value; conversely, the smaller the diagonal density, the closer the gray scale is to white and the larger the gray value.

After the mapping from the 16-bit grayscale level to the 8-bit grayscale level is completed, a grayscale image at the second grayscale level is generated. The color component corresponding to the gray value of each pixel point in this image is then determined, a three-dimensional volume texture at the second grayscale level is further generated, and the electronic device constructs the three-dimensional medical image from this texture using a volume rendering technique. As shown in FIG. 7, a three-dimensional volume cloud (i.e., a three-dimensional medical image) 402 is constructed from the three-dimensional volume texture 401 at the second grayscale level.

In summary, the medical image reconstruction method provided by this embodiment can map a medical image with a high grayscale level to a medical image with a low grayscale level, so that the three-dimensional medical image and the two-dimensional cross-sectional images can be displayed on most display screens, because the grayscale level of most current display screens is lower than that of the medical images scanned by tomography equipment. In addition, the number of gray levels that human eyes can distinguish is about 256, so the requirements of human observation are not compromised.

It should be noted that, after generating the reconstructed three-dimensional medical image, the electronic device may also cut the reconstructed three-dimensional medical image to generate a reconstructed cross-sectional image sequence for displaying sectional views of the tissues and organs. Illustratively, the reconstructed three-dimensional medical image is composed of a number of voxels, and the positional relationship between a point and a plane is used: let the normal vector of the clipping plane be (A, B, C) and let P0 = (x0, y0, z0) be the point where the plane intersects the normal vector. For a point (x, y, z) to be verified, substitute it into the expression A(x - x0) + B(y - y0) + C(z - z0). If the expression equals 0, the point lies on the plane; if the expression is greater than 0, i.e. A(x - x0) + B(y - y0) + C(z - z0) > 0, the point is on the front side of the plane (in the direction of the normal vector); otherwise it is on the back side of the plane. The voxels on the back side of the clipping plane are hidden, which implements the volume clipping function. The sectional views, together with the reconstructed three-dimensional medical image, can provide medical staff with more information about the tissues and organs and thus a more useful reference for medical diagnosis.
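
A minimal sketch of this point-plane test for voxel clipping is given below (illustrative names, assuming a single clipping plane given by its normal vector and a point on the plane):

#include <array>

struct Plane {
    std::array<float, 3> normal;   // (A, B, C)
    std::array<float, 3> point;    // P0 = (x0, y0, z0)
};

// Returns true if the voxel centre lies on the back side of the clipping plane
// (opposite to the normal direction) and should therefore be hidden.
bool isClipped(const std::array<float, 3>& voxel, const Plane& plane) {
    float d = 0.f;
    for (int k = 0; k < 3; ++k) {
        d += plane.normal[k] * (voxel[k] - plane.point[k]);   // A(x-x0)+B(y-y0)+C(z-z0)
    }
    return d < 0.f;                                           // the front side (d >= 0) is kept
}

// Example use (hypothetical helpers): hide every voxel behind the plane to expose a sectional view.
// for (const auto& v : voxelCentres) if (isClipped(v, plane)) hideVoxel(v);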

The above medical image reconstruction process is generally described, as shown in fig. 8, as follows:

acquiring at least two cross-sectional images from dicom files (or an mhd file and a raw file) 501; generating a 16-bit matrix (i.e., the cross-sectional image sequence) 502 based on the at least two cross-sectional images; generating a three-dimensional volume texture 503 based on the 16-bit matrix 502; performing visual range control 504 on the three-dimensional volume texture 503 based on the window level and the window width, that is, mapping the 16-bit grayscale image corresponding to the three-dimensional volume texture 503 to an 8-bit grayscale image; performing color mapping 505 on the 8-bit grayscale image to obtain a color-mapped three-dimensional volume texture; performing volume rendering 506 based on the color-mapped three-dimensional volume texture to obtain the reconstructed three-dimensional medical image; and then performing volume clipping 507 on the reconstructed three-dimensional medical image to obtain a reconstructed cross-sectional image sequence.

With the above method, after the cross-sectional image sequence, window level and window width of the three-dimensional medical image are obtained, the grayscale level of the grayscale image corresponding to the cross-sectional images is reduced, and the color component corresponding to each gray value in the grayscale image at the second grayscale level is determined based on the mapping relationship between gray values and color components. The color components therefore match the gray values of the reduced grayscale level more closely, the rendered colors and transparency transition more smoothly, the generated three-dimensional medical image is more realistic, and better reference data can be provided for the medical diagnosis of medical staff.

In the following, embodiments of the apparatus of the present application are described, and for technical details not described in detail in the embodiments of the apparatus, reference may be made to the above-mentioned one-to-one corresponding method embodiments.

Fig. 9 is a block diagram of a medical image reconstruction apparatus according to an exemplary embodiment of the present application, which may be implemented as all or part of an electronic device through software or hardware or a combination of the two, and the apparatus includes:

the acquiring module 601 is configured to acquire a sequence of cross-sectional images of a three-dimensional medical image, where a two-dimensional cross-sectional image in the sequence of cross-sectional images is obtained by cutting the three-dimensional medical image perpendicular to a same axis; acquiring a window width and a window level, wherein the window width and the window level are used for determining the density range of the tissue organ to be displayed in the sectional image;

a mapping module 602, configured to map the two-dimensional cross-sectional image with the first gray scale level into a gray image with a second gray scale level based on the window width and the window level, where the first gray scale level is higher than the second gray scale level, and the gray scale level is used for indicating a gray scale number;

a determining module 603, configured to determine, based on a mapping relationship between the grayscale values and the color components, color components corresponding to each grayscale value in the grayscale image at the second grayscale level;

the constructing module 604 is configured to reconstruct the three-dimensional medical image based on the color component corresponding to each gray value, and generate a reconstructed three-dimensional medical image.

In some embodiments, the determining module 603 is configured to:

determining a mapping relation corresponding to the window width and the window level;

and determining the color component corresponding to each gray value from the mapping relation.

In some embodiments, the mapping relationship comprises a first sub-mapping relationship between the grayscale value and the color component, and a second sub-mapping relationship between the grayscale value and the transparency component; the color component corresponding to each gray value comprises a color component and a transparency component corresponding to each gray value; a determining module 603 configured to:

determining a first sub-mapping relation corresponding to the window width and the window level, and determining a second sub-mapping relation corresponding to the window width and the window level;

and determining a color component corresponding to each gray value based on the first sub-mapping relation, and determining a transparency component corresponding to each gray value based on the second sub-mapping relation.

In some embodiments, a mapping module 602 to:

generating a three-dimensional volume texture based on the two-dimensional section image with the first gray scale level, wherein the three-dimensional volume texture is used for describing three-dimensional data of a three-dimensional medical image;

extracting a gray level image of a first gray level grade from the three-dimensional body texture;

based on the window width and the window level, the grayscale image of the first grayscale level is mapped to a grayscale image of a second grayscale level.

In some embodiments, a mapping module 602 to:

calculating the maximum value and the minimum value of the dynamic value range of the gray value according to the window width and the window level;

calculating a first difference value of the jth gray value minus the minimum value and a second difference value of the maximum value minus the minimum value, wherein the jth gray value is the gray value of the jth pixel point in the gray image of the first gray scale level, and j is a positive integer;

dividing the first product of the first difference and a first parameter by the second difference to obtain the gray value of the jth pixel point at the second grayscale level, where the first parameter is determined according to the first grayscale level;

and generating a gray image of the second gray scale level based on the gray values of the second gray scale level.

In some embodiments, the building block 604 is further configured to:

and cutting the reconstructed three-dimensional medical image to generate a reconstructed cross-section image sequence, wherein the reconstructed cross-section image sequence is used for displaying a section view of the tissue organ.

In summary, after the cross-sectional image sequence, window level and window width of the three-dimensional medical image are obtained, the grayscale level of the grayscale image corresponding to the cross-sectional images is first reduced, and the color component corresponding to each gray value in the grayscale image at the second grayscale level is determined based on the mapping relationship between gray values and color components, rather than the color components being determined before the grayscale level is reduced. The determined color components therefore match the gray values of the reduced grayscale level more closely, the transitions between color components are smoother, the generated three-dimensional medical image is more realistic, and better reference data can be provided for the medical diagnosis of medical staff.

Fig. 10 is a block diagram of an electronic device 700 according to an exemplary embodiment of the present application. The electronic device 700 may be: a smartphone, a tablet, a laptop, or a desktop computer. The electronic device 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.

In general, the electronic device 700 includes: a processor 701 and a memory 702.

The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.

Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 702 is configured to store at least one instruction for execution by the processor 701 to implement the steps performed by the electronic device in the method for reconstructing a medical image provided by the method embodiments of the present application.

In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display screen 705, and a power supply 706.

The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, protocols used by the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.

The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the electronic device 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the electronic device 700 or in a folding design; in still other embodiments, the display screen 705 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 700. The display screen 705 may even be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).

The power supply 706 is used to power the various components in the electronic device 700. The power source 706 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 706 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.

In some embodiments, the electronic device 700 also includes one or more sensors 707. The one or more sensors 707 include, but are not limited to: pressure sensor 708, fingerprint sensor 709, optical sensor 710, and proximity sensor 711.

The pressure sensor 708 may be disposed on a side bezel of the electronic device 700 and/or a lower layer of the touch display screen 705. When the pressure sensor 708 is disposed on the side bezel of the electronic device 700, a holding signal applied by the user to the electronic device 700 can be detected, and the processor 701 performs left/right-hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 708. When the pressure sensor 708 is disposed at the lower layer of the touch display screen 705, the processor 701 controls an operable control on the UI according to the user's pressure operation on the touch display screen 705. The operable control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.

The fingerprint sensor 709 is configured to collect the user's fingerprint, and the processor 701 identifies the user's identity from the fingerprint collected by the fingerprint sensor 709, or the fingerprint sensor 709 itself identifies the user's identity from the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 701 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 709 may be disposed on the front, back, or side of the electronic device 700. When a physical button or a vendor logo is provided on the electronic device 700, the fingerprint sensor 709 may be integrated with the physical button or the vendor logo.

The optical sensor 710 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch screen 705 based on the ambient light intensity collected by the optical sensor 710. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down.

The proximity sensor 711, also called a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 711 is used to collect the distance between the user and the front of the electronic device 700. In one embodiment, when the proximity sensor 711 detects that the distance between the user and the front surface of the electronic device 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from the bright-screen state to the off-screen state; when the proximity sensor 711 detects that the distance between the user and the front surface of the electronic device 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the off-screen state to the bright-screen state.

Those skilled in the art will appreciate that the configuration shown in fig. 10 is not limiting of electronic device 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.

The application also provides a server, which comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the steps executed by the server in the medical image reconstruction method provided by the above method embodiments. It should be noted that the server may be a server as provided in fig. 11 below.

Referring to fig. 11, a schematic structural diagram of a server according to an exemplary embodiment of the present application is shown. Specifically, the server 800 includes a central processing unit 801, a system memory 804 including a Random Access Memory (RAM) 802 and a Read-Only Memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates the transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.

The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, for the user to input information. The display 808 and the input device 809 are both connected to the central processing unit 801 through an input/output controller 810 connected to the system bus 805. The basic input/output system 806 may also include the input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 810 also provides output to a display screen, a printer, or another type of output device.

The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.

Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, high density Digital Video Disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.

The memory stores one or more programs configured to be executed by the one or more central processing units 801, the one or more programs including instructions for implementing, on the server side, the reconstruction of the medical image, and the central processing unit 801 executes the one or more programs to implement the steps executed by the server in the medical image reconstruction method provided by the above-described method embodiments.

According to various embodiments of the present application, the server 800 may also operate by means of a remote computer connected to a network, such as the Internet. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may be connected to another type of network or remote computer system using the network interface unit 811.

The memory further includes one or more programs, the one or more programs are stored in the memory, and the one or more programs include instructions for performing the steps executed by the server in the medical image reconstruction method provided by the embodiments of the present application.

The present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the medical image reconstruction method provided by the above method embodiment.

The present application further provides a computer program product, which when run on an electronic device, causes the electronic device to execute the method for reconstructing a medical image according to the above-mentioned method embodiments.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
