Method and system for continuous deep learning based radiotherapy treatment planning

Document No.: 491521 | Publication date: 2022-01-04

Reading note: This technology, "Method and system for continuous deep learning based radiotherapy treatment planning", was created by J. Schreier, H. Laaksonen, and H. Hyvönen (name transliterated from the Chinese publication) on 2020-06-04. Abstract: Example methods and systems for continuous deep learning based radiation therapy treatment planning are provided. An example method may include: obtaining (210) a deep learning engine trained to perform a radiation therapy treatment planning task based on first training data associated with a first planning rule. The method may further comprise: performing (220) the radiation therapy treatment planning task using the deep learning engine based on input data associated with a particular patient to generate output data associated with the particular patient; and obtaining (230) modified output data, the modified output data comprising one or more modifications to the output data generated by the deep learning engine. The method may further comprise: generating (240) second training data associated with a second planning rule based on the modified output data; and generating (250) a modified deep learning engine by retraining the deep learning engine using a combination of the first training data and the second training data.

1. A method for a computer system to perform radiation therapy treatment planning based on continuous deep learning, wherein the method comprises:

obtaining a deep learning engine trained to perform a radiation therapy treatment planning task based on first training data associated with a first planning rule;

performing the radiation therapy treatment planning task using the deep learning engine based on input data associated with a particular patient to generate output data associated with the particular patient;

obtaining modified output data, the modified output data comprising one or more modifications to the output data generated by the deep learning engine;

generating second training data associated with a second planning rule based on the modified output data; and

generating a modified deep learning engine by retraining the deep learning engine using a combination of the first training data and the second training data.

2. The method of claim 1, wherein generating the modified deep learning engine comprises:

generating a modified set of weight data associated with a plurality of processing layers of the modified deep learning engine by modifying weight data associated with a plurality of processing layers of the deep learning engine.

3. The method of claim 1 or 2, wherein obtaining the deep learning engine comprises:

obtaining the deep learning engine from a central planning system configured to train the deep learning engine based on the first training data, wherein the first training data represents global training data and the second training data represents local training data accessible to the computer system.

4. The method of claim 1, 2 or 3, wherein obtaining the modified output data comprises:

obtaining the modified output data that includes the one or more modifications made by a particular planner according to the second planning rule.

5. The method of any of claims 1 to 4, wherein performing the radiation therapy treatment planning task comprises one of:

performing automatic segmentation to generate output structure data based on input image data associated with the particular patient;

performing dose prediction based on input image data and input structure data associated with the particular patient to generate output dose data; and

performing therapy delivery data prediction to generate therapy delivery data based on input dose data associated with the particular patient.

6. The method of any of claims 1-5, wherein obtaining the modified output data comprises:

obtaining modified output structure data comprising at least one of the following modifications: a modified segmentation margin around a structure; an extension of the structure in a particular direction; a cut of a section of the structure; a modified cut plane of the structure; and modified margins on different sides of the structure.

7. The method of any of claims 1-5, wherein obtaining the modified output data comprises:

obtaining modified output dose data comprising at least one of the following modifications: modified organ sparing, modified target coverage, modified target dose prescription, and modified normal tissue dose prescription.

8. A non-transitory computer-readable storage medium comprising a set of instructions that, in response to execution by a processor of a computer system, cause the processor to perform a continuous deep learning based radiation therapy treatment planning method, wherein the method comprises:

obtaining a deep learning engine trained to perform a radiation therapy treatment planning task based on first training data associated with a first planning rule;

performing the radiation therapy treatment planning task using the deep learning engine based on input data associated with a particular patient to generate output data associated with the particular patient;

obtaining modified output data, the modified output data comprising one or more modifications to the output data generated by the deep learning engine;

generating second training data associated with a second planning rule based on the modified output data; and

generating a modified deep learning engine by retraining the deep learning engine using a combination of the first training data and the second training data.

9. The non-transitory computer-readable storage medium of claim 8, wherein generating the modified deep learning engine comprises:

generating a modified set of weight data associated with a plurality of processing layers of the modified deep learning engine by modifying weight data associated with a plurality of processing layers of the deep learning engine.

10. The non-transitory computer-readable storage medium of claim 8 or 9, wherein obtaining the deep learning engine comprises:

obtaining the deep learning engine from a central planning system configured to train the deep learning engine based on the first training data, wherein the first training data represents global training data and the second training data represents local training data accessible to the computer system.

11. The non-transitory computer-readable storage medium of claim 8, 9, or 10, wherein obtaining the modified output data comprises:

obtaining the modified output data that includes the one or more modifications made by a particular planner according to the second planning rule.

12. The non-transitory computer-readable storage medium of any of claims 8 to 11, wherein performing the radiation therapy treatment planning task comprises one of:

performing automatic segmentation to generate output structure data based on input image data associated with the particular patient;

performing dose prediction based on input image data and input structure data associated with the particular patient to generate output dose data; and

performing therapy delivery data prediction to generate therapy delivery data based on input dose data associated with the particular patient.

13. The non-transitory computer-readable storage medium of any of claims 8 to 12, wherein obtaining the modified output data comprises:

obtaining modified output structure data comprising at least one of the following modifications: a modified segmentation margin around a structure; an extension of the structure in a particular direction; a cut of a section of the structure; a modified cut plane of the structure; and modified margins on different sides of the structure.

14. The non-transitory computer-readable storage medium of any one of claims 8 to 12, wherein obtaining the modified output data comprises:

obtaining modified output dose data comprising at least one of the following modifications: modified organ sparing, modified target coverage, modified target dose prescription, and modified normal tissue dose prescription.

15. A computer system configured to perform radiation therapy treatment planning based on continuous deep learning, wherein the computer system comprises: a processor and a non-transitory computer readable medium having instructions stored thereon that, when executed by the processor, cause the processor to:

obtaining a deep learning engine trained to perform a radiation therapy treatment planning task based on first training data associated with a first planning rule;

performing the radiation therapy treatment planning task using the deep learning engine based on input data associated with a particular patient to generate output data associated with the particular patient;

obtaining modified output data, the modified output data comprising one or more modifications to the output data generated by the deep learning engine;

generating second training data associated with a second planning rule based on the modified output data; and

generating a modified deep learning engine by retraining the deep learning engine using a combination of the first training data and the second training data.

16. The computer system of claim 15, wherein the instructions to generate the modified deep learning engine cause the processor to:

generating a modified set of weight data associated with a plurality of processing layers of the modified deep learning engine by modifying weight data associated with a plurality of processing layers of the deep learning engine.

17. The computer system of claim 15 or 16, wherein the instructions to obtain the deep learning engine cause the processor to:

obtaining the deep learning engine from a central planning system configured to train the deep learning engine based on the first training data, wherein the first training data represents global training data and the second training data represents local training data accessible to the computer system.

18. The computer system of claim 15, 16 or 17, wherein the instructions to obtain the modified output data cause the processor to:

obtaining the modified output data that includes the one or more modifications made by a particular planner according to the second planning rule.

19. The computer system of any of claims 15 to 18, wherein the instructions for performing the radiation therapy treatment planning task cause the processor to perform one of:

performing automatic segmentation to generate output structure data based on input image data associated with the particular patient;

performing dose prediction based on input image data and input structure data associated with the particular patient to generate output dose data; and

performing therapy delivery data prediction to generate therapy delivery data based on input dose data associated with the particular patient.

20. The computer system of any of claims 15-19, wherein the instructions to obtain the modified output data cause the processor to:

obtaining modified output structure data comprising at least one of the following modifications: a modified segmentation margin around a structure; an extension of the structure in a particular direction; a cut of a section of the structure; a modified cut plane of the structure; and modified margins on different sides of the structure.

21. The computer system of any of claims 15 to 19, wherein the instructions to obtain the modified output data cause the processor to:

obtaining modified output dose data comprising at least one of the following modifications: modified organ sparing, modified target coverage, modified target dose prescription, and modified normal tissue dose prescription.

Background

Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Radiation therapy is an important part of the treatment for reducing or eliminating unwanted tumors in patients. Unfortunately, the applied radiation itself does not distinguish between unwanted tumors and any proximal healthy structures such as organs. This requires careful management to limit radiation to the tumor (i.e., the target). Ideally, the goal is to deliver a lethal or curative radiation dose to a tumor while maintaining an acceptable dose level in a proximal healthy structure. However, to achieve this goal, conventional radiation therapy treatment planning can be time and labor intensive.

Disclosure of Invention

In one aspect, the invention provides a method for a computer system to perform radiation therapy treatment planning as defined in claim 1. Optional features are defined in the dependent claims.

In another aspect, the invention provides a non-transitory computer readable storage medium as defined in claim 8, the medium comprising a set of instructions that, in response to execution by a processor of a computer system, cause the processor to perform a radiation therapy treatment planning method. Optional features are defined in the dependent claims.

In yet another aspect, the invention provides a computer system configured to perform radiation therapy treatment planning as defined in claim 15. Optional features are defined in the dependent claims.

According to examples of the present disclosure, methods and systems for continuous deep learning based radiotherapy treatment planning are provided. An example method may include: obtaining a deep learning engine trained to perform a radiation therapy treatment planning task based on first training data associated with a first planning rule. The method may further comprise: performing the radiation therapy treatment planning task using the deep learning engine based on input data associated with a particular patient to generate output data associated with the particular patient; and obtaining modified output data, the modified output data comprising one or more modifications to the output data generated by the deep learning engine. The method may further comprise: generating second training data associated with a second planning rule based on the modified output data; and generating a modified deep learning engine by retraining the deep learning engine using a combination of the first training data and the second training data.

Drawings

Fig. 1 is a schematic diagram illustrating an example process flow for radiation therapy treatment;

fig. 2 is a flow diagram of an example process for a computer system to perform radiation therapy treatment planning based on continuous deep learning;

fig. 3 is a schematic diagram illustrating an example radiation therapy treatment plan based on continuous deep learning according to the example in fig. 2;

FIG. 4 is a diagram illustrating an example automatic segmentation based on continuous deep learning;

FIG. 5 is a schematic diagram illustrating an example dose prediction based on continuous deep learning;

fig. 6 is a schematic diagram illustrating an example network environment in which continuous deep learning based radiation therapy treatment planning may be implemented;

FIG. 7 is a schematic illustration of an example treatment plan for treatment delivery; and

fig. 8 is a schematic diagram of an example computer system for performing radiation therapy treatment planning based on continuous deep learning.

Detailed Description

The technical details set forth in the following description enable one skilled in the art to practice one or more embodiments of the present disclosure.

Fig. 1 is a schematic diagram illustrating an example process flow 100 for radiation therapy treatment. The example process 100 may include one or more operations, functions, or actions illustrated by one or more blocks. Various blocks may be combined into fewer blocks, divided into more blocks, and/or eliminated, depending on the desired implementation. In the example of fig. 1, the radiation therapy treatment generally includes various stages, such as the imaging system performing image data acquisition for the patient (see 110); a radiation therapy treatment planning system (see 130) generates an appropriate treatment plan (see 156) for the patient; and the therapy delivery system delivers the therapy according to the therapy plan (see 160).

In more detail, at 110 in fig. 1, image data acquisition may be performed using an imaging system to capture image data 120 associated with a patient, particularly the patient's anatomy. Any suitable medical imaging modality or modalities may be used, such as computed tomography (CT), cone beam computed tomography (CBCT), positron emission tomography (PET), magnetic resonance imaging (MRI), and/or single photon emission computed tomography (SPECT), any combination of the preceding, and so forth. For example, when CT or MRI is used, the image data 120 may include a series of two-dimensional (2D) images or slices, each representing a cross-sectional view of the patient's anatomy, or may include volumetric or three-dimensional (3D) images of the patient, or may include a time series of 2D or 3D images of the patient (e.g., four-dimensional (4D) CT or 4D CBCT).
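For illustration only (not part of the claimed subject matter), the image data 120 described above may be viewed as a stack of 2D slices forming a 3D volume. A minimal Python sketch, with all names hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImageData:
    """Hypothetical container for acquired image data (120)."""
    modality: str                    # e.g. "CT", "MRI"
    slices: List[List[List[float]]]  # stack of 2D slices forming a 3D volume

    @property
    def shape(self):
        # (number of slices, rows, columns)
        return (len(self.slices), len(self.slices[0]), len(self.slices[0][0]))

# A toy 4-slice CT series, each slice 8x8 pixels of intensity 0.0.
ct = ImageData(modality="CT",
               slices=[[[0.0] * 8 for _ in range(8)] for _ in range(4)])
print(ct.shape)  # -> (4, 8, 8)
```

A 4D series (e.g., 4D CT) would simply add a time axis to this stack.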

At 130 in fig. 1, radiation therapy treatment planning may be performed during a planning phase to generate a treatment plan 156 based on the image data 120. Any suitable number of treatment planning tasks or steps may be performed, such as segmentation, dose prediction, projection data prediction, and/or treatment plan generation, among others. For example, segmentation may be performed to generate structure data 140 that identifies various segments or structures from the image data 120. In practice, a three-dimensional (3D) volume of the patient's anatomy may be reconstructed from the image data 120. The 3D volume that will be subjected to radiation is referred to as a treatment or irradiation volume, which may be divided into a plurality of smaller volume pixels (voxels) 142. Each voxel 142 represents a 3D element associated with a location (i, j, k) within the treatment volume. The structure data 140 may include any suitable data relating to the contour, shape, size, and location of the patient's anatomy 144, a target 146, an organ at risk (OAR) 148, or any other structure of interest (e.g., tissue, bone). For example, using image segmentation, a line may be drawn around a portion of the image and labeled as the target 146 (e.g., labeled "prostate"). Everything within the line would be considered the target 146, while everything outside the line would not.
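To make the inside/outside labeling rule above concrete, the following illustrative Python sketch labels voxels of a toy treatment volume against a hypothetical box-shaped contour (a real system would use clinician-drawn contours, not a box):

```python
# Toy segmentation: label every voxel of a small treatment volume as
# "target" (1) if it falls inside a hypothetical box contour, 0 otherwise,
# mirroring the inside/outside rule described in the text.
def segment_box(volume_shape, box_min, box_max):
    """Return a 3D mask over voxel indices (i, j, k)."""
    nz, ny, nx = volume_shape
    mask = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                if all(lo <= idx <= hi for idx, lo, hi in
                       zip((i, j, k), box_min, box_max)):
                    mask[k][j][i] = 1
    return mask

mask = segment_box((4, 4, 4), box_min=(1, 1, 1), box_max=(2, 2, 2))
target_voxels = sum(v for plane in mask for row in plane for v in row)
print(target_voxels)  # -> 8 (a 2x2x2 box of target voxels)
```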

In another example, dose prediction may be performed to generate dose data 150 specifying a radiation dose to be delivered to the target 146 (denoted as "D_TAR" at 152) and a radiation dose for the OAR 148 (denoted as "D_OAR" at 154). In practice, the target 146 may represent a malignant tumor requiring radiation therapy treatment (e.g., a prostate tumor, etc.), while the OAR 148 represents a proximal healthy or non-target structure (e.g., rectum, bladder, etc.) that may be adversely affected by the treatment. The target 146 is also referred to as a planning target volume (PTV). Although one example is shown in fig. 1, the treatment volume may include multiple targets 146 and OARs 148 having complex shapes and sizes. Further, although shown as having a regular shape (e.g., a cube), voxels 142 may have any suitable shape (e.g., non-regular). Depending on the desired implementation, radiation therapy treatment planning at block 130 may be performed based on any additional and/or alternative data, such as prescriptions, disease staging, biological or radiological data, genetic data, laboratory data, biopsy data, past treatment or medical history, any combination of the preceding, and so forth.
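The roles of D_TAR and D_OAR above can be illustrated with a small, purely illustrative check of a per-voxel dose map against a target prescription and an OAR limit (the threshold values are invented for illustration and are not clinical guidance):

```python
# Toy dose-data check: verify a flat list of per-voxel doses against a
# target prescription (D_TAR) and an OAR dose limit (D_OAR).
def check_dose(dose, target_mask, oar_mask, d_tar=60.0, d_oar=20.0):
    """Return (target_ok, oar_ok) for per-voxel doses in gray (Gy)."""
    target_ok = all(d >= d_tar for d, m in zip(dose, target_mask) if m)
    oar_ok = all(d <= d_oar for d, m in zip(dose, oar_mask) if m)
    return target_ok, oar_ok

dose        = [62.0, 61.0, 15.0, 3.0]   # predicted dose per voxel
target_mask = [1, 1, 0, 0]              # voxels belonging to the target 146
oar_mask    = [0, 0, 1, 0]              # voxels belonging to the OAR 148
result = check_dose(dose, target_mask, oar_mask)
print(result)  # -> (True, True)
```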

Based on the structure data 140 and dose data 150, a treatment plan 156 may be generated to include 2D fluence map data for a set of beam orientations or angles. Each fluence map specifies the intensity and shape of a radiation beam emitted from the radiation source at a particular beam orientation and at a particular time (e.g., as shaped by a multi-leaf collimator (MLC)). For example, in practice, intensity modulated radiation therapy (IMRT) or any other treatment technique(s) may involve varying the shape and intensity of the radiation beam while the gantry and couch angles remain constant. Additionally or alternatively, the treatment plan 156 may include machine control point data (e.g., jaw and leaf positions), volumetric modulated arc therapy (VMAT) trajectory data for controlling the treatment delivery system, and the like. In practice, block 130 may be performed based on a target dose prescribed by a clinician (e.g., oncologist, dosimetrist, planner, etc.), such as based on the clinician's experience, the type and extent of the tumor, the geometry and condition of the patient, and so forth.
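The fluence-map view of a treatment plan described above can be sketched as a mapping from beam orientation to a 2D intensity map (sizes, angles, and values below are illustrative only):

```python
# Illustrative treatment plan data: one 2D fluence map per beam
# orientation (gantry angle in degrees). A real fluence map is much
# larger and is shaped by MLC leaf positions.
treatment_plan = {
    0:   [[1.0, 0.5], [0.0, 0.2]],   # beam at gantry angle 0
    90:  [[0.3, 0.9], [0.8, 0.0]],   # beam at gantry angle 90
    180: [[0.0, 0.4], [0.6, 0.7]],   # beam at gantry angle 180
}

# Each map element is a relative beam intensity for one aperture cell.
total_fluence = sum(v for fmap in treatment_plan.values()
                    for row in fmap for v in row)
print(len(treatment_plan), round(total_fluence, 1))  # -> 3 5.4
```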

At 160 in fig. 1, treatment delivery is performed during a treatment session to deliver radiation to the patient according to the treatment plan 156. For example, the radiation therapy treatment delivery system 160 may include a rotatable gantry 164 with a radiation source 166 attached to the gantry 164. During treatment delivery, the gantry 164 rotates around a patient 170 supported on a structure 172 (e.g., a table) to emit radiation beams 168 at various beam orientations in accordance with the treatment plan 156. The controller 162 may be used to retrieve the treatment plan 156 and control the gantry 164, radiation source 166, and radiation beam 168 to deliver radiation therapy treatment in accordance with the treatment plan 156.

It should be appreciated that any suitable radiation therapy treatment delivery system(s) may be used, such as a robotic-arm based system, a tomotherapy type system, a brachytherapy system, any combination of the foregoing, and so forth. Furthermore, examples of the present disclosure may be applicable to particle delivery systems (e.g., protons and/or carbon ions, etc.). Such systems may employ a scattered particle beam that is then shaped by a device similar to an MLC, or they may use a scanning beam whose energy, spot size, and dwell time can be adjusted.

Typically, the radiation therapy treatment planning at block 130 in fig. 1 is time and labor intensive. For example, it typically requires a team of highly skilled and trained oncologists and dosimetrists to manually delineate structures of interest by drawing contours or segmentations on the image data 120. These structures are reviewed manually by a physician and may require adjustment or redrawing. In many cases, the segmentation of critical organs is probably the most time-consuming part of radiation therapy treatment planning. After the structures are agreed upon, there are additional labor-intensive steps to process the structures to generate a clinically optimal treatment plan specifying treatment delivery data, such as beam orientations and trajectories and corresponding 2D fluence maps. These steps are often complicated by a lack of consensus among different physicians and/or clinical regions as to what constitutes a "good" contour or segmentation. In practice, there may be great variation in the way different clinical experts draw structures or segments. Such variation may lead to uncertainty in target volume size and shape, as well as in the exact proximity, size, and shape of OARs that should receive minimal radiation dose. Even for a particular expert, the manner in which segments are drawn may vary from day to day.

According to examples of the present disclosure, artificial intelligence (AI) techniques may be applied to address various challenges associated with radiation therapy treatment planning. In particular, deep learning engine(s) may be used to automate radiation therapy treatment planning step(s). Throughout this disclosure, the term "deep learning" may generally refer to a class of methods that utilize many layers or stages of nonlinear data processing for feature learning and pattern analysis and/or classification. A "deep learning model" may refer to a hierarchy of "layers" of nonlinear data processing, including an input layer, an output layer, and multiple (i.e., two or more) "hidden" layers between the input and output layers. These layers may be trained end-to-end (e.g., from the input layer to the output layer) to extract feature(s) from an input and classify the feature(s) to produce an output (e.g., a classification label or class).

Thus, the term "deep learning engine" may refer to any suitable hardware and/or software component(s) of a computer system capable of executing algorithms according to any suitable deep learning model(s). Depending on the desired implementation, any suitable deep learning model(s) may be used, such as a convolutional neural network, a recurrent neural network, and/or a deep belief network, or any combination of the preceding, and so forth. In practice, a neural network is typically formed using a network of processing elements (referred to as "neurons", "nodes", etc.) interconnected via connections (referred to as "synapses", "weights", etc.). For example, a convolutional neural network may be implemented using any suitable architecture(s), such as U-net, LeNet, AlexNet, ResNet, V-net, and/or DenseNet, and the like. In this case, a "layer" of the convolutional neural network may be a convolutional layer, a pooling layer, a rectified linear unit (ReLU) layer, a fully connected layer, and/or a loss layer, etc. In practice, the U-net architecture includes a contracting path (left side) and an expansive path (right side). The contracting path includes repeated application of convolutions, each followed by a ReLU layer and a max pooling layer. Each step in the expansive path may include upsampling of the feature map followed by convolutions, etc.
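As a rough illustration of the U-net geometry described above, the following sketch tracks only the spatial feature-map sizes through the contracting and expansive paths (no convolutions are actually computed; the sizes are illustrative):

```python
# Shape bookkeeping for a U-Net-style network: each contracting step
# applies 2x2 max pooling (halves each spatial dimension); each
# expansive step applies upsampling (doubles each spatial dimension).
def contracting_sizes(size, steps):
    sizes = [size]
    for _ in range(steps):
        size //= 2          # max pooling halves the feature map
        sizes.append(size)
    return sizes

def expansive_sizes(size, steps):
    sizes = [size]
    for _ in range(steps):
        size *= 2           # upsampling doubles the feature map
        sizes.append(size)
    return sizes

down = contracting_sizes(64, 3)    # -> [64, 32, 16, 8]
up = expansive_sizes(down[-1], 3)  # -> [8, 16, 32, 64]
print(down, up)
```

Note how the expansive path restores the input resolution, which is what allows per-voxel outputs such as segmentation masks.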

The deep learning approach should be contrasted with machine learning approaches that have been applied to, for example, automatic segmentation. Typically, these approaches involve extracting hand-designed feature vectors from an image, such as for each voxel. The feature vectors are then used as input to a machine learning model that classifies which class each voxel belongs to. However, such machine learning approaches typically do not use the full image data and may require additional constraints. Another challenge is that these approaches rely on high-dimensional, hand-designed features in order to accurately predict the class label of each voxel. Solving such a high-dimensional classification problem is computationally expensive and requires a large amount of memory. Some approaches use lower-dimensional features (e.g., via dimensionality reduction techniques), but this may reduce prediction accuracy.
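A toy version of the hand-designed feature approach described above, with a simple threshold rule standing in for the trained classifier (the features and threshold are invented for illustration):

```python
# Hand-designed features per voxel: own intensity plus the mean of the
# immediate neighbours along a 1D row of voxels.
def voxel_features(row, i):
    """Return a 2-element feature vector for voxel i."""
    neighbours = [row[j] for j in (i - 1, i + 1) if 0 <= j < len(row)]
    return [row[i], sum(neighbours) / len(neighbours)]

def classify(features, threshold=0.5):
    """Stand-in classifier: label 1 if the mean feature exceeds threshold."""
    return int(sum(features) / len(features) > threshold)

row = [0.1, 0.9, 0.8, 0.2]  # toy voxel intensities
labels = [classify(voxel_features(row, i)) for i in range(len(row))]
print(labels)  # -> [0, 1, 1, 0]
```

The point of the contrast in the text is that such features are designed by hand per voxel, whereas a deep learning engine learns features from the full image end-to-end.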

Traditionally, there are many challenges associated with training a deep learning engine for radiation therapy treatment planning. For example, different planners (e.g., individuals, groups, clinical sites or institutions, networks, etc.) often have different clinical practices in radiation therapy treatment planning. To train a deep learning engine according to a particular clinical practice, one option is to develop a particular internal model. However, it may be difficult to obtain the desired training results without collecting a large amount of refined training data. Moreover, while conceptually simple, training a deep learning engine typically requires significant technical expertise related to model architecture(s), optimization, convergence analysis, regularization, and the like. These challenges may lead to sub-optimal results or, worse, to the inability to create any deep learning engine sufficient for work. Such complexity may prevent the user from training and using the deep learning engine for radiation therapy treatment planning, which is undesirable.

Furthermore, there may be inherent problem(s) associated with deep learning engines. For example, while a trained deep learning engine may perform well on data resembling its training data, there is no guarantee that the engine generalizes to other data sets. With respect to automatic segmentation, this may lead to sub-optimal contouring. For example, a breast segmentation model trained using data from European and American clinics may not be applicable to patient data from other geographic regions, where differences may include the average size and weight of the patient population. In addition, different planners have different contouring practices, which may be driven by different treatment techniques and strategies (e.g., VMAT versus IMRT). Thus, a deep learning engine that performs well in one clinic may not be accepted in another clinic.

Continuous deep learning

According to examples of the present disclosure, radiation therapy treatment planning may be implemented based on continuous deep learning to improve performance of a deep learning engine. As used herein, the term "continuous deep learning" (also referred to as "lifelong learning," "incremental learning," and "sequential learning") may generally refer to the technique(s) by which a deep learning engine is modified or improved throughout its operation based on additional training data. In this way, the trained deep learning engine may be modified over time to accommodate desired clinical practices and/or patient populations. By improving the adaptability of the deep learning engine, the treatment planning outcome of the patient may also be improved, such as increasing the probability of tumor control and/or reducing the likelihood of health complications or death due to radiation overdose in a healthy structure.
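The retraining idea described above can be sketched in a few lines of Python, with a trivial mean model standing in for the deep learning engine (the "engine", data, and values are all hypothetical):

```python
# Minimal illustration of continuous deep learning: an engine trained
# on global data (A) is later retrained on the combination of A and
# locally generated data (B). A mean predictor stands in for the engine.
def train(data):
    """'Training' here simply fits the mean of the samples."""
    return sum(data) / len(data)

data_a = [1.0, 2.0, 3.0]                  # first training data (global)
engine = train(data_a)                    # engine trained on A
data_b = [7.0, 8.0, 9.0]                  # second training data (local edits)
modified_engine = train(data_a + data_b)  # retrain on the combination A + B
print(engine, modified_engine)  # -> 2.0 5.0
```

Retraining on the combined data, rather than on B alone, is what lets the modified engine adapt to local practice without discarding what was learned globally.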

In more detail, fig. 2 is a flow diagram illustrating an example process 200 for a computer system to perform radiation therapy treatment planning based on continuous deep learning. The example process 200 may include one or more operations, functions, or actions illustrated by one or more blocks, such as 210-250. Depending on the desired implementation, various blocks may be combined into fewer blocks, divided into more blocks, and/or eliminated. The example process 200 may be implemented using any suitable computer system(s), an example of which will be discussed using fig. 8. Some examples will be explained using fig. 3, which is a schematic diagram illustrating an example radiation therapy treatment plan based on continuous deep learning according to the example in fig. 2.

At 210 in fig. 2, a deep learning engine (see 320 in fig. 3) trained to perform radiation therapy treatment planning tasks may be acquired. Herein, the term "obtaining" may generally refer to a computer system accessing or retrieving data and/or computer-readable instructions associated with deep learning engine 320 from any suitable source (e.g., another computer system), memory, or data store (e.g., local or remote), etc. The deep learning engine 320 may be trained during the training phase 301 based on first training data (see 310 in fig. 3) associated with the first planning rule. As used herein, the term "planning rule" may generally refer to any suitable clinical guideline(s), planning strategy, and/or planning practice(s) associated with a particular radiation therapy treatment planning task and/or anatomical region.

At 220 in fig. 2, a deep learning engine 320 can be used to perform the radiation therapy treatment planning task during the inference phase 302. For example, based on input data associated with a particular patient (see 330 in fig. 3), the deep learning engine 320 may perform a radiation therapy treatment planning task to generate output data associated with the patient (see 340 in fig. 3). In practice, the deep learning engine 320 may be trained to perform any suitable radiation therapy treatment planning task, such as automatic segmentation, dose prediction, treatment delivery data estimation, abnormal organ detection, and/or treatment outcome prediction, or any combination of the preceding.

In the case of automatic segmentation, the deep learning engine 320 may be trained to generate output structural data (e.g., 140 in fig. 1) based on input image data (e.g., 120 in fig. 1). In the case of dose prediction, the engine 320 may be trained to generate output dose data (e.g., 150 in fig. 1) based on input structure data and beam geometry data. In the case of therapy delivery data estimation, the engine 320 may be trained to generate output therapy delivery data (e.g., fluence map data and/or structural projection data, etc.) based on input structural data and/or dose data, etc.

At 230 in fig. 2, modified output data (see 350 in fig. 3) including modification(s) to the output data 340 may be obtained. Then, at 240 in fig. 2, second training data (see 360 in fig. 3) associated with a second planning rule may be generated based on the modified output data. In practice, the modified output data 350 may be generated by the planner according to the second planning rule to achieve a better treatment planning result.

The term "modification" may generally refer to adding, deleting, correcting, changing, or altering output data. For example, in the case of automatic segmentation (to be discussed using fig. 4), the modified output data 350 may include the following modification(s): a modified segmentation margin around a structure; an extension of a structure in a particular direction; a sectioning of part of a structure; a modified cut plane of a structure; and/or modified margins on different sides of a structure, etc. With respect to dose prediction (to be discussed using fig. 5), the modified output data 350 may include the following modification(s): modified organ sparing, modified target coverage, modified target dose prescription, and/or modified normal tissue dose prescription, and the like. Any alternative and/or additional modification(s) may be used.

At 250 in fig. 2, a modified deep learning engine (see 380 in fig. 3) may be generated by retraining or modifying the deep learning engine 320 using a combination of the first training data 310 and the second training data 360 (see 370 in fig. 3). The continuous deep learning at block 250 may be performed to facilitate adaptation from the first planning rule to the second planning rule. In the example of fig. 3, the deep learning engine 320 trained using the first training data 310 (data(A)) may include a plurality of processing layers associated with weight data (w(A)). In this case, the modified deep learning engine 380 trained using the combined training data 370 (data(A, B)) may include multiple processing layers associated with modified weight data (w(A, B)).

In practice, the second training data 360 may represent local user-generated training data. In contrast, the first training data 310 may represent a more general set of data associated with different planning rules that may have been designed for different patient populations. For example, the deep learning engine 320 may be trained according to planning rules that are more appropriate for a particular patient population (e.g., European and U.S. patients). During the continuous deep learning phase 303, improvements may be made to adapt the deep learning engine 320 to different planning rules for a different patient population (e.g., East Asian patients).

The combination or mixing ratio between the first training data 310 and the second training data 360 may be adjusted over time. Initially, there may be more first training data 310 than second training data 360. As more local training data becomes available, the ratio of the second training data 360 increases. It should be noted that the first training data 310 may be fully or partially included in the combined training data 370. As will be discussed further using fig. 6 and 7, the second training data 360 may be generated using a local planning system operated by the planner. In this case, the second training data 360 may represent additional training data generated by the planner according to the preferred second planning rule.
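To illustrate, the evolving mixing ratio described above can be sketched as a simple sampling routine. This is a minimal, hypothetical sketch, not taken from the disclosure; the function name and the exact way the ratio is applied are assumptions.

```python
import random

def combine_training_data(global_cases, local_cases, local_fraction):
    """Build combined training data (cf. 370) in which roughly
    `local_fraction` of the cases come from the locally generated
    second training data; the rest are sampled from the first
    (global) training data."""
    n_local = len(local_cases)
    # Number of global cases needed to achieve the requested mixing ratio.
    n_global = int(n_local * (1.0 - local_fraction) / max(local_fraction, 1e-9))
    # The first training data may be included only partially.
    n_global = min(n_global, len(global_cases))
    return random.sample(global_cases, n_global) + list(local_cases)
```

As more local cases accumulate, `local_fraction` can be raised over time, gradually shifting the balance toward the second training data.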

Examples of the present disclosure may be implemented to address various challenges associated with training a deep learning engine for radiation therapy treatment planning. In practice, the training phase 301 may be implemented by a central planning system (to be discussed using fig. 6 and 7). The inference phase 302 can be implemented "locally" at the clinical site, where the output data 340 of the deep learning engine 320 can be refined by user corrections. Each time the planner generates modified output data 350, additional second training data 360 becomes available for continuous training of the deep learning engine. As more corrections or modifications become available, the deep learning engine 320/380 may adapt to local needs.

Furthermore, the user need not have extensive knowledge of deep learning model architecture(s), etc. Using the already stable deep learning engine 320 as a starting point, the user need not worry as much about technical issues such as convergence, local minima, or poor weight initialization. Accordingly, it is not necessary to train the deep learning engine 320 from scratch, particularly when there is only a limited amount of local training data (e.g., limited in amount or variation compared to the first training data 310). Instead, the user may utilize the better quality first training data 310 (e.g., more data, availability of expert-curated data, and/or more variation, etc.) used to train the deep learning engine 320 during the training phase 301. In this way, the risk of achieving sub-optimal training results during the training phase 301 may also be reduced.

Various examples will be discussed below using fig. 4 through 8. In particular, an example automatic segmentation will be discussed using fig. 4, an example dose prediction using fig. 5, an example deployment using fig. 6, an example treatment plan using fig. 7, and an example computer system using fig. 8.

Automatic segmentation

Fig. 4 is a schematic diagram illustrating an example automatic segmentation based on continuous deep learning. In the example of fig. 4, a deep learning engine 420 (hereinafter also referred to as a "segmentation engine") may be trained during a training phase 401 using first training data 410; applied during an inference phase 402 to perform automatic segmentation; and retrained or refined during a continuous deep learning phase 403. In practice, the output of the automatic segmentation may be used for abnormal organ detection, dose prediction, and/or therapy delivery data estimation, among others.

(a) Training phase (see 401 in FIG. 4)

During training phase 401, segmentation engine 420 may be trained to map training image data 411 (i.e., input) to training structure data 412 (i.e., output). In practice, the image data 411 may include 2D or 3D images of the patient's anatomy and may be captured using any suitable imaging modality(s). The structure data 412 may identify any suitable outline, shape, size, and/or location of the structure(s) from the image data 411. Example structures may include target(s) of an anatomical site, OAR(s), or any other structure of interest (e.g., tissue, bone). The structure data 412 may identify a plurality of targets and OARs having any suitable shape and size, depending on the desired implementation.

For example, with respect to prostate cancer, image data 411 may include images of the prostate region. In this case, the structure data 412 may identify a target representing each patient's prostate, as well as OARs representing proximal healthy structures such as the rectum and bladder. With respect to lung cancer treatment, image data 411 may include images of lung regions. In this case, the structure data 412 may identify targets representing cancerous lung tissue, as well as OARs representing proximal healthy lung tissue, esophagus, and/or heart, among others. With respect to brain cancer, image data 411 may include images of a brain region. The structure data 412 may identify targets representing brain tumors, as well as OARs representing the proximal optic nerves and/or brainstem, etc.

The first training data 410 may be extracted from past treatment plans developed for a plurality of past patients according to any desired planning rules, and/or obtained from any suitable source(s) (e.g., system provider, hospital, patient database, etc.). The first training data 410 may be preprocessed using any suitable data augmentation method (e.g., rotation, flipping, translation, scaling, noise addition and/or clipping, or any combination of the foregoing, etc.) to produce new data sets with modified properties, improving model generalization with respect to the ground truth. In practice, the 3D volume of a patient that will be subjected to radiation is referred to as a treatment volume, which may be divided into a plurality of smaller volume pixels (voxels). In this case, the structure data 412 may specify a class label (e.g., "target," "OAR," etc.) associated with each voxel in the 3D volume.
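The augmentation methods mentioned above can be sketched for the 2D case. The function below is a hypothetical illustration covering only the flipping and rotation examples; its name and return convention are assumptions.

```python
def augment(image):
    """Return two augmented copies of a 2D image (list of rows):
    a horizontal flip and a 90-degree clockwise rotation. Each copy
    keeps the original content but with modified properties, yielding
    extra training samples from the same ground truth."""
    flipped = [row[::-1] for row in image]
    # Reverse the rows, then transpose, to rotate 90 degrees clockwise.
    rotated = [list(col) for col in zip(*image[::-1])]
    return [flipped, rotated]
```

In practice, the same transform would be applied to the corresponding structure labels so that image and ground truth stay aligned.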

Any suitable deep learning model(s) may be used. For example, in fig. 4, the segmentation engine 420 includes a plurality of (N > 1) processing blocks or layers, each labeled L_i, where i = 1, ..., N (see 421-42N). In this case, the training phase 401 may involve finding weight data (denoted w_i for layer L_i) that minimizes the training error between the training structure data 412 and the estimated structure data (not shown for simplicity) generated by the segmentation engine 420. The training phase 401 may also involve finding other parameters or hyper-parameters, such as parameters related to activation functions, etc. The training process may be guided by estimating the loss associated with the classification error. A simple example of a loss function is the mean square error between the true and predicted results, but the loss function may have a more complex formula. For example, the loss function itself may be a deep neural network. The loss can be estimated from the output of the model or from any discrete point within the model.
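As a concrete instance of the simple loss mentioned above, the mean square error between true and predicted results can be written as a few lines of Python; this is an illustrative sketch, not the disclosure's implementation.

```python
def mse_loss(predicted, target):
    """Mean square error between predicted results and the ground truth,
    the simple example of a loss function given in the text."""
    assert len(predicted) == len(target)
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)
```

Minimizing this quantity over the weight data w_i is what "finding weight data that minimizes the training error" amounts to in the simplest case.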

The weight data w_i of the i-th layer may be scalar or multi-dimensional vectors. In the case of a convolutional neural network, the i-th layer (L_i) may be a convolutional layer configured to extract feature data (F_i), using a convolution operation, from the training data 410 or from the output of the (i-1)-th layer (L_{i-1}). For example, the first layer (L_1) processes the input image data 411 to generate first feature data (F_1). The second layer (L_2) processes the first feature data (F_1) to generate second feature data (F_2), and so on. Feature extraction at the i-th layer (L_i) may involve applying convolution filter(s) or kernel(s) to overlapping locations of its inputs to learn the corresponding weight data (w_i).

Feature data (F_i) generated by the i-th layer may include a 2D feature map for 2D image data, or a 3D feature map for 3D image data. The feature data (F_i) may specify any suitable anatomical feature(s), such as a boundary, a distance to a centroid, a distance to a midline, a distance to skin, a distance to a bone, laterality, the presence of vertical and/or horizontal lines, shape-related parameter(s), and/or texture type, or any combination of the preceding, and the like. Such automatic feature extraction should be distinguished from conventional approaches that rely on hand-designed features.
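The per-layer convolution described above (applying a kernel to overlapping locations of the input) can be sketched in plain Python for the 2D case. This is illustrative only, with no padding, stride, or learned weights; as in most deep learning frameworks, the operation shown is technically cross-correlation.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over overlapping
    positions of the input, the way a convolutional layer L_i extracts
    a feature map F_i from its input."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

During training, the kernel entries play the role of the weight data w_i that the training phase adjusts.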

(b) Reasoning phase (see 402 in FIG. 4)

At 430 and 440 in fig. 4, the planner may access the trained segmentation engine 420 to perform automatic segmentation for a particular patient. Input image data 430 associated with the patient is processed using the plurality of layers 421-42N of the segmentation engine 420 to extract feature data (F_i for layer L_i). The purpose is to generate output structure data 440 based on the feature data. Output structure data 440 may identify any suitable outline, shape, size, and/or location of structure(s) detectable from the input image data 430.

At 450 in fig. 4, the output structure data 440 may be modified based on any suitable segmentation rule(s) desired by the user to achieve a preferred segmentation result. With respect to automatic segmentation, the modified output structure data 450 may include the following modification(s): a modified segmentation margin (e.g., from 2 mm to 4 mm) around the contoured structure; an extension of a structure in one direction; a sectioning of part of a structure; a modified cut plane of a structure (e.g., spinal cord); and/or modified margins on different sides of a structure (e.g., more margin on the lower side of an organ than on the upper side), or any combination of the foregoing, etc.
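One of the modifications listed above, growing a segmentation margin (e.g., from 2 mm to 4 mm), can be sketched as binary dilation of the structure mask. The 1 mm-per-iteration voxel spacing and 4-connected neighborhood are assumptions for illustration.

```python
def dilate(mask, iterations=1):
    """4-connected binary dilation of a 2D mask; each iteration grows the
    structure by one voxel in every in-plane direction (e.g., one extra
    millimetre of margin at 1 mm voxel spacing)."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            out[ni][nj] = 1
        mask = out
    return mask
```

Expanding a 2 mm margin to 4 mm would then correspond to two extra dilation iterations at 1 mm spacing; direction-specific margins would restrict the neighbor offsets used.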

At 460 in fig. 4, second training data 460 may be generated to facilitate subsequent refinement of segmentation engine 420. For example, the second training data 460 may include the input image data 430 and the corresponding modified structure data 450, as well as any additional training data (not shown for simplicity). Second training data 460 may represent user-generated training data designed to train segmentation engine 420 to achieve a more desirable segmentation result than first training data 410.

(c) Successive deep learning phases (see 403 in FIG. 4)

At 470 in fig. 4, a combination of the first training data 410 and the second training data 460 may be used to generate combined training data 470. The segmentation engine 420 may then be retrained or modified based on the combined training data 470 to generate a modified segmentation engine 480. In the example in fig. 4, the modified segmentation engine 480 may include multiple (N > 1) processing layers (L_i), where i = 1, ..., N (see 481-48N), with associated modified weight data. In practice, any suitable mixing or combination ratio may be used for the combined training data 470. The mixing ratio is a parameter that can evolve over time as more local (user-generated) training data becomes available.

Any suitable continuous deep learning method may be used. In one example, the segmentation engine 420 may be trained for multiple epochs each time a user-generated segmentation is added to the second training data 460. In another example, the segmentation engine 420 may be retrained from scratch at intervals (e.g., once per day, once per week, or any other time interval), such as using computing resources available locally at a clinical site. Further, a case weight may be assigned to each case in the combined training data 470. With an equal-weight approach, all training cases are treated equally. For example, if there are 1000 cases in the first training data 410 and 200 cases in the second training data 460, all training cases may be assigned the same case weight. Alternatively, some training cases may be assigned higher case weights, such as cases deemed to be of better quality.
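The case-weight idea above can be sketched as a weighted training loss. The function name and normalization are assumptions; only the weighting principle comes from the text.

```python
def weighted_mean_loss(case_losses, case_weights):
    """Average per-case losses using case weights: equal weights treat
    all training cases equally, while higher weights emphasize cases
    deemed to be of better quality."""
    total = sum(case_weights)
    return sum(l * w for l, w in zip(case_losses, case_weights)) / total
```

With 1000 global and 200 local cases all given weight 1.0, this reduces to the plain mean; raising the local cases' weights biases retraining toward the local planning rule.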

At 490 in fig. 4, the modified segmentation engine 480 may be evaluated based on any suitable verification parameter data 490 and any user-provided verification sets. The verification parameter data 490 may include any suitable parameter(s), such as a Dice score, an average surface difference (measuring the error in the contour surface position relative to the ground truth), a Hausdorff distance, accuracy, specificity, overlapping volume (e.g., if volume overlap is allowed), number of additional slices above the collection plane in the image set, and/or a Jaccard index (e.g., for comparing the similarity and diversity of sample sets), or any combination of the preceding, and so forth. In practice, verification may be performed to ensure that the quality of the modified segmentation engine 480 is better than, or at least the same as, that of the segmentation engine 420. Once verified, the modified segmentation engine 480 may be used to perform automatic segmentation for patients during the next iteration of the inference phase 402. If modifications are made to the output data generated by the modified segmentation engine 480, the continuous deep learning phase 403 may be repeated for further improvement.
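Among the verification parameters listed above, the Dice score is the most commonly used; a minimal version for flat binary masks might look like the following (an illustrative sketch, not the disclosure's implementation).

```python
def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 lists: 2|A n B| / (|A| + |B|). Returns 1.0 when both
    masks are empty (perfect agreement on an absent structure)."""
    intersection = sum(p & t for p, t in zip(pred_mask, true_mask))
    size_sum = sum(pred_mask) + sum(true_mask)
    return 1.0 if size_sum == 0 else 2.0 * intersection / size_sum
```

Comparing the mean Dice score of engine 480 against engine 420 on the verification set is one way to decide whether quality was maintained.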

Depending on the desired implementation, the verification process may be unsupervised, supervised, or a combination of both. According to an unsupervised approach, verification of the modified engine 480 may be performed based on: (a) a verification dataset provided by a system provider (e.g., Varian Medical Systems), or (b) a verification dataset provided by a user (e.g., a clinic). In both of these options, the goal is to ensure that the quality of the modified engine 480 is better than (or at least not significantly lower than) that of the original engine 420. Further, any suitable verification parameter data 490 may be used to evaluate the quality of the modified engine 480, such as the mean or median of metrics on the verification set.

For unsupervised learning option (a), the validation criteria may be relaxed when the user (e.g., clinic) is able to provide sufficient training data. For option (b), the validation set may be a random patient selection, a selection based on metrics from the initial engine 420 (verified for outliers), or a selection performed by the user. When verifying against outliers, the general idea is to focus on cases that are specific to the user but may not be present in the first training data 410. These outliers may be weighted more heavily during the continuous deep learning phase 403. Alternatively, according to the supervised learning approach, the user may be notified in response to determining that the verification process did not produce a clear result (i.e., the modified engine 480 failed verification using option (a) or (b) above). In this case, the user may review the second training data 460, such as cases where the model degraded, selected or outlier cases, and/or the metrics of the patients being evaluated.

After performing the verification, there may be several possible scenarios. In a first scenario where the modified engine 480 is superior to the initial engine 420, the modified engine 480 may be automatically deployed and the user notified. Alternatively, in a second scenario, the user may be notified and asked to manually review the cases where the modified engine 480 performed poorly, based on the verification parameter data 490 used during verification. The user may then make a decision as to whether to deploy the modified engine 480 for the next iteration of the inference phase.

In a third scenario, where the modified engine 480 is not superior to the initial engine 420, the modified engine 480 will not be automatically deployed. Instead, the user may be provided with an interface to inspect any new training data 460. The inspection may identify whether the data quality is sufficient, or whether the data relates to different patient anatomies, types of plans, etc. Once reviewed, the user may choose to reject the training data 460, in whole or in part. Another option is that the training process can be automatically modified to create a new modified engine, such as by increasing the training duration, using a hyperparameter search for model parameters, or a different sampling strategy. After training, the modified engine may enter the validation pipeline. As an alternative to the options discussed herein, the data collection and retraining process may simply continue, such that once enough new data is collected, a new training process is initiated. Once new training data is available, the engine has an opportunity to improve.
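The deployment decision across the scenarios above can be reduced to a simple gate on the mean validation metric. The `tolerance` parameter is an assumption standing in for "not significantly lower"; a real system might use a statistical test instead.

```python
def should_deploy(old_scores, new_scores, tolerance=0.0):
    """Unsupervised gating sketch: return True if the modified engine's
    mean validation metric is at least the original engine's mean (minus
    a tolerance), i.e. quality is better or at least not significantly
    lower after continuous deep learning."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(new_scores) >= mean(old_scores) - tolerance
```

When the gate fails, the workflow above applies: notify the user, inspect the new training data, or adjust the training process and re-enter the validation pipeline.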

Dose prediction and other planning tasks

Fig. 5 is a schematic diagram illustrating an example dose prediction based on continuous deep learning. In this example, a deep learning engine 520 (hereinafter also referred to as a "dose prediction engine") may be trained during a training phase 501 using first training data 510; applied during an inference phase 502 to perform dose prediction; and modified during a continuous deep learning phase 503.

During the training phase (see 501 in fig. 5), the first training data 510 may be used to train the dose prediction engine 520. The first training data 510 may include image and structure data 511 (i.e., training inputs) associated with a plurality of past patients, as well as dose data 512 (i.e., training outputs). The dose data 512 (e.g., 3D dose data) may specify dose distributions for a target (denoted "D_TAR") and an OAR (denoted "D_OAR"). For example, with respect to prostate cancer, the dose data 512 may specify a dose distribution for a target representing the patient's prostate, as well as for OARs representing proximal healthy structures such as the rectum or bladder. In practice, the dose data 512 may specify a dose distribution for the entire 3D volume, not just the target and OAR volumes. The dose data 512 may include spatial biological effect data (e.g., a fractionation-corrected dose) and/or cover only a portion of the treatment volume. Any additional input data may be used to train the dose prediction engine 520, such as beam geometry data associated with the therapy delivery system, photon energy used during therapy, and type of therapy (e.g., stereotactic, photon, proton, and/or electron, etc.).

During the inference phase (see 502 in fig. 5), the dose prediction engine 520 may be used to generate output dose data 540 based on input image and structure data 530 associated with a particular patient. The output dose data 540 may be obtained by processing the input data 530 using a plurality of (K > 1) processing layers, each labeled L_i and having associated weight data w_i, where i = 1, ..., K (see 541-54K). The dose data 540 may specify a dose distribution for an OAR (denoted "D_OAR") and dose distributions for multiple targets (e.g., D_TAR1 and D_TAR2). The dose data 540 may specify a dose distribution for the entire 3D volume, not just the target and OAR volumes. Modification(s) can then be made based on any suitable dose prediction rule(s) preferred by the planner to generate modified output dose data 550. The modification(s) may include modified organ sparing (e.g., emphasizing the importance of some organs over others), modified target coverage, modified target dose prescription, and/or modified normal tissue dose. Additionally or alternatively, the modification(s) may relate to treatment techniques (e.g., IMRT and/or VMAT, etc.), field geometry, machine specifications (e.g., energy and field shape), and/or field placement clinical practice, etc.

During the continuous deep learning phase (see 503 in fig. 5), a combination of the first training data 510 (i.e., used during the training phase 501) and second training data 560 (i.e., generated based on the modified dose data 550) may be used to improve the dose prediction engine 520. The modified dose prediction engine 580 may include a plurality of (K > 1) processing layers (L_i), where i = 1, ..., K (see 581-58K), with associated modified weight data. Next, the modified dose prediction engine 580 may be validated based on any suitable validation parameter data 590 and validation data set. This is to ensure that the dose prediction quality is improved (or at least maintained) after continuous deep learning. The example verification parameter data and various verification methods discussed with respect to the automatic segmentation in fig. 4 (e.g., supervised, unsupervised, automated deployment, and/or manual review by a user, etc.) are also applicable here and are not repeated for brevity.

Once validated and approved, the modified dose prediction engine 580 can be deployed for the next iteration of the inference phase 502. If modifications are made to the output dose data generated by the modified engine 580, the continuous deep learning phase 503 may be repeated for further improvement. In addition to the automatic segmentation in fig. 4 and the dose prediction in fig. 5, continuous deep learning may also be implemented for other radiation therapy treatment planning tasks, such as treatment delivery data estimation and/or treatment outcome prediction. The estimated therapy delivery data (i.e., output data) may include structural projection data and/or fluence map data, among others. For example, the deep learning engine may be trained to perform structural projection data estimation, such as based on image data, structure data, dose data, or any combination of the preceding. The structural projection data may include data related to the beam orientations and machine trajectories of the therapy delivery system.

In another example, the deep learning engine may be trained to perform fluence map estimation, such as 2D fluence maps for a set of beam orientations or trajectories, and/or machine control point data (e.g., jaw and leaf positions, gantry and couch positions), and so forth. Fluence maps will be further explained using fig. 7. Any additional and/or alternative training data may be used, such as field geometry data, monitor units (the amount of radiation counted by the machine), a plan quality estimate (acceptable or not), daily dose prescription (output), field size or other machine parameters, patient couch position parameters or isocenter position within the patient, treatment strategy (whether a motion control mechanism is used, dose escalation or not), and/or a treat-or-not decision, etc.

Example deployment

Examples of the present disclosure may be deployed in any suitable manner, such as a standalone system and/or a web-based planning as a service (PaaS) system, among others. An example will be explained below using fig. 6. Fig. 6 is a schematic diagram illustrating an example network environment 600 in which continuous deep learning based radiation therapy treatment planning may be implemented.

Network environment 600 includes a central planning system (see 610 in fig. 6) in communication with a plurality of local planning systems (see 611-614) via any suitable physical network. The local planning systems may each be operated by a planner at a particular planning site. With M = 4 planners in fig. 6, a first planner ("P1") operates a first local planning system 611, a second planner ("P2") operates a second local planning system 612, a third planner ("P3") operates a third local planning system 613, and a fourth planner ("P4") operates a fourth local planning system 614.

Here, the term "local" may generally refer to client data and/or element(s) associated with a particular planner and/or local planning system. The term "global" may generally refer to data and/or element(s) associated with the central planning system 610, which is accessible by multiple planners through the respective local planning systems 611-614. In practice, the various functions of the local planning systems 611-614 may be implemented using standalone systems at the respective planning sites. Additionally or alternatively, various functionalities of the local planning systems 611-614 may be supported by the central planning system 610.

According to the examples in fig. 1-5, training data 620 (denoted as global data(A)) may be used to train an initial treatment planning engine 630 during the training phase 301/401/501. The treatment planning engine 630, which includes a plurality of processing layers associated with weight data (denoted w(A)), may then be further refined using the respective local planning systems 611-614. The treatment planning engine 630 may also be associated with any additional engine parameter data, such as parameters of activation functions, and the like. At the first local planning system 611, the treatment planning engine 630 may be improved by performing continuous deep learning based on a combination of global or public training data 620 (i.e., data(A)) and local or private training data 621 (i.e., data(B1)). The refinement produces a modified treatment planning engine 631 associated with modified weight data denoted w(A, B1).

At the second local planning system 612, the treatment planning engine 630 may be modified using a combination of data(A) 620 and data(B2) 622 generated by the second planner according to the preferred planning rule(s). Continuous deep learning produces a modified engine 632. At the third local planning system 613, a combination of data(A) 620 and data(B3) 623 associated with the third planner may be used to generate a modified engine 633. Finally, at the fourth local planning system 614, a combination of data(A) 620 and data(B4) 624 associated with the fourth planner can be used to generate a modified engine 634. It should be appreciated that data(A) may be used in whole or in part during the continuous deep learning phase.

In contrast to the treatment planning engine 630 with weight data w(A), the modified treatment planning engines 631-634 are associated with modified weight data denoted w(A, B1), w(A, B2), w(A, B3) and w(A, B4), respectively. In this way, each planner may utilize the initial training of the treatment planning engine 630 based on the training data 620, and then adjust it to better meet their local needs, styles, and requirements. This is particularly beneficial when a planner does not have the required technical expertise and/or sufficient local training data 621-624 to train their own in-house engines or models. As more local training data 621-624 becomes available, the modified treatment planning engines 631-634 may be progressively improved over time in order to achieve better treatment planning results for the respective planners.

In practice, deep transfer learning techniques may be used to facilitate continuous learning of the treatment planning engine 630 by the respective local planning systems 611-614 in fig. 6. Here, the term "deep transfer learning" may generally refer to technique(s) in which one deep learning engine (see 630) is adjusted or repurposed (in whole or in part) as a starting point for another deep learning engine (see 631-634). In one example, deep transfer learning represents an optimization strategy that can facilitate faster progress or improved performance during the training process. In this way, knowledge learned by the (global) treatment planning engine 630 can be utilized by the local planning systems 611-614 and transferred to the corresponding (local) engines 631-634. As a variation of the example in fig. 6, the local planning systems 611-614 may have access to the treatment planning engine 630, but not to the global data(A) used during training. In this case, continuous deep learning may be performed by retraining the treatment planning engine 630 using the respective local data(B1) through data(B4) to generate the respective modified engines 631-634.
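Transfer learning as described, repurposing the global engine (in whole or in part) as the starting point for a local engine, is often realized by freezing the early layers' weights and retraining only the rest. The sketch below is a deliberately simplified illustration; the function names, the per-layer weight list, and the choice of which layers to freeze are all assumptions.

```python
def transfer_learn(global_weights, n_frozen, retrain):
    """Start from the global engine's per-layer weight data w(A), keep
    the first n_frozen layers fixed, and apply a local retraining step
    (driven by local data, e.g. data(B1)) to the remaining layers to
    obtain modified weight data, e.g. w(A, B1)."""
    return [w if i < n_frozen else retrain(w)
            for i, w in enumerate(global_weights)]
```

Freezing early layers preserves the general features learned from data(A), which is what lets a local system benefit from the global engine even without access to data(A) itself.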

Example treatment plan

During radiation therapy treatment planning, a treatment plan 156/700 may be generated based on structure data and/or dose data generated using the treatment planning engines discussed above. For example, fig. 7 is a schematic illustration of an example treatment plan 156/700 generated or refined based on the output data in the examples of fig. 1-6. The treatment plan 156 may be delivered using any suitable treatment delivery system that includes a radiation source 710 operable to project a radiation beam 720, at various beam angles 730, onto a treatment volume 760 representing the patient's anatomy.

Although not shown in fig. 7 for simplicity, the radiation source 710 may include a linear accelerator for accelerating the radiation beam 720 and a collimator (e.g., an MLC) for modifying or modulating the radiation beam 720. In another example, the radiation beam 720 can be modulated (e.g., as in proton therapy) by scanning the beam 720 over the target patient in a particular pattern at various energies and dwell times. A controller (e.g., a computer system) may be used to control the operation of the radiation source 710 in accordance with the treatment plan 156.

During treatment delivery, the radiation source 710 may be rotated around the patient using a gantry, or the patient may be rotated (as in some proton radiation therapy solutions), to direct the radiation beam 720 at various beam orientations or angles relative to the patient. For example, five equidistant beam angles 730A-E (also labeled "A", "B", "C", "D", and "E") may be selected using a deep learning engine configured to perform treatment delivery data estimation. In practice, any suitable number of beam and/or table or chair angles 730 may be selected (e.g., five, seven, etc.). At each beam angle, the radiation beam 720 is associated with a fluence plane 740 (also referred to as an intersection plane) that lies outside the patient envelope along a beam axis extending from the radiation source 710 to the treatment volume 760. As shown in fig. 7, the fluence plane 740 is generally at a known distance from the isocenter.
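The five equidistant beam angles 730A-E can be computed directly; the sketch below is illustrative only (the function name and starting-angle parameter are assumptions, and a real planning engine would select angles from patient geometry rather than a fixed spacing).

```python
def equidistant_beam_angles(n, start=0.0):
    """Return n gantry angles (degrees) equally spaced over 360 degrees,
    e.g. the five angles A-E in fig. 7 for n=5."""
    return [(start + i * 360.0 / n) % 360.0 for i in range(n)]
```

For example, `equidistant_beam_angles(5)` yields angles spaced 72 degrees apart, matching the five-beam arrangement described above.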

In addition to the beam angles 730A-E, fluence parameters of the radiation beam 720 are also required for treatment delivery. The term "fluence parameters" can generally refer to characteristics of the radiation beam 720, such as its intensity profile as represented using a fluence map (e.g., 750A-E for the corresponding beam angles 730A-E). Each fluence map (e.g., 750A) specifies the intensity of the radiation beam 720 at each point on the fluence plane 740 at a particular beam angle (e.g., 730A). Treatment delivery may then be performed according to the fluence maps 750A-E, such as using intensity-modulated radiotherapy (IMRT) or the like. The radiation dose deposited according to the fluence maps 750A-E should correspond as closely as possible to the treatment plan generated according to the exemplary embodiments of the present disclosure.
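One plausible data structure for the fluence maps 750A-E is a 2D intensity grid per beam angle. The sketch below is a simplified stand-in, not the disclosed method: the grid shape, the random intensities, and the function names are all assumptions; it only illustrates the idea that each angle carries its own per-point intensity map on the fluence plane and that delivery accumulates them.

```python
import numpy as np

def make_fluence_maps(angles, shape=(8, 8), seed=0):
    """One 2D intensity grid per beam angle, representing the beam
    intensity at each point on the fluence plane (illustrative values)."""
    rng = np.random.default_rng(seed)
    return {angle: rng.random(shape) for angle in angles}

def total_delivered_intensity(fluence_maps):
    """Crude stand-in for accumulated fluence: sum the per-angle
    intensity grids delivered over the course of treatment."""
    return sum(fluence_maps.values())
```

In practice a treatment planning system would derive these grids from optimization against dose objectives, then convert them to deliverable machine parameters (e.g., MLC leaf sequences for IMRT).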

Computer system

The above examples may be implemented by hardware, software, or firmware, or a combination thereof. Fig. 8 is a schematic diagram of an example computer system 800 for continuous-learning-based radiation therapy treatment planning. In this example, the computer system 800 (also referred to as a treatment planning system) may include a processor 810, a computer-readable storage medium 820, an interface 840 to interface with the radiation therapy treatment delivery system 160, and a bus 830 that facilitates communication among these and other illustrated components.

The processor 810 may perform the processes described herein with reference to fig. 1-7. The computer-readable storage medium 820 may store any suitable information 822, such as information related to training data, deep learning engines, weight data, input data, and/or output data. The computer-readable storage medium 820 may also store computer-readable instructions 824 that, in response to execution by the processor 810, cause the processor 810 to perform the processes described herein. Treatment may be delivered according to the treatment plan 156 using the treatment delivery system 160 explained with reference to fig. 1, the description of which is not repeated here for brevity. In practice, the computer system 800 may be part of a computing cluster that includes multiple computer systems. The computer system 800 may include any alternative and/or additional component(s), such as a graphics processing unit (GPU), a message queue for communication, binary large object (blob) storage or a database, load balancer(s), and/or application-specific circuitry, among others. The computer system 800 may be deployed in any suitable manner, including service-type deployment in an on-premise cloud infrastructure, a public cloud infrastructure, or a combination thereof, and so on.

The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. To the extent that such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will appreciate that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Throughout this disclosure, the terms "first," "second," "third," and the like do not denote any order of importance, but rather are used to distinguish one element from another.

Those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software or firmware would be well within the skill of one of skill in the art in light of this disclosure.

Although the present disclosure has been described with reference to specific example embodiments, it should be recognized that the present disclosure is not limited to the described embodiments, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
