Drug molecule intelligent generation method based on reinforcement learning and docking


This technology, "Drug molecule intelligent generation method based on reinforcement learning and docking" (一种基于强化学习和对接的药物分子智能生成方法), was created by 魏志强, 王茜, 刘昊, 李阳阳, and 王卓亚 on 2021-07-09. The invention relates to an intelligent generation method for drug molecules based on reinforcement learning and docking, belonging to the technical fields of medicinal chemistry and computing, and comprises the following steps: 1) constructing a virtual fragment combinatorial library for drug design; 2) calculating fragment similarity to encode molecular fragments; 3) generating and optimizing molecules with a reinforcement-learning actor-critic model. Starting from lead compounds, the method narrows the chemical space to be searched. The actor-critic model is built on a Transformer, which introduces positional information for the molecular fragments, preserves the relative or absolute position of each fragment within a molecule, and enables parallel training. In addition, the reward mechanism further optimizes the activity of the generated molecules by means of a single-layer perceptron model.

1. An intelligent generation method of drug molecules based on reinforcement learning and docking is characterized by comprising the following steps:

step 1, constructing a virtual fragment combination library for drug design;

the drug molecule virtual fragment combinatorial library is formed by fragmenting a group of molecules through the existing toolkit, when the molecules are split, the fragments are not classified, and all the fragments are treated identically;

step 2, calculating fragment similarity to encode molecular fragments

measuring the similarity between different molecular fragments using an existing combination of computational chemical similarity measures, and encoding all fragments as binary strings by constructing a balanced binary tree based on that similarity, so that similar fragments obtain similar codes;

step 3, generating and optimizing molecules based on the Actor-critic reinforcement learning model

(1) Introduction to the Actor-critic based reinforcement learning model framework

Molecules are generated and optimized using an Actor-critic based reinforcement learning model; a molecule is modified by selecting a single fragment of the molecule and one bit in that fragment's encoding, and the value of the bit is then flipped, i.e., if it is 0 it becomes 1, and vice versa; this makes it possible to control the degree of change applied to the molecule; the leading bits of the encoding remain unchanged, and the model is only allowed to alter bits near the end of the encoding, forcing the model to search only for molecules in the vicinity of known compounds;

based on the Actor-critic reinforcement learning model, the process starts from a fragmented molecular state, i.e. the current state; the Actor extracts and examines all fragments, introduces the positional information of the different fragments within the molecule, uses a Transformer Encoder mechanism to compute the attention coefficients of the fragments in each molecule, and then decides, via the output probabilities of a DenseNet network, which fragments to replace and which fragments to replace them with; the new state is scored according to the degree to which it satisfies all constraints; the critic then computes the difference, TD-Error, between the reward plus the value of the new state and the value of the current state, and passes it to the actor; if it is positive, the actor's action is reinforced, and if it is negative, the action is discouraged; the current state is then replaced by the new state, and the process is repeated a given number of times;

(2) Optimization of the reinforcement learning model reward mechanism

Molecules are designed and optimized according to two kinds of features, the intrinsic attribute information of a molecule and its computed activity information, and reward prediction is realized by constructing a perceptron model for the reinforcement learning model, the perceptron model comprising two stages, training and prediction; in the training stage, the data set has two sources: the positive samples are molecules with known activity reported in the existing literature, and the negative samples are an equal number of molecules randomly sampled from the ZINC library; after shuffling, the positive and negative samples are docked in turn, and the resulting computed activity information, together with the molecular attribute information calculated by an existing toolkit, is used as input, so that through multiple rounds of training the model can learn the latent association between the computed activity information, the attribute information, and whether a molecule is genuinely active; in the prediction stage, the model takes as input the computed activity information of a generated molecule, obtained by virtually docking the generated molecule against the existing PDB files of disease-related targets using fast, state-of-the-art drug docking software, together with the intrinsic attribute information of the generated molecule, calculated with a general-purpose software package, and predicts whether the generated molecule is genuinely active, thereby further optimizing the activity of the generated molecules; the Actor in the reinforcement learning model receives a reward every time it produces a valid molecule, and receives a higher reward if it produces a molecule that matches what the predictive model expects.

2. The intelligent generation method of drug molecules based on reinforcement learning and docking as claimed in claim 1, wherein in step 1, when a molecule is split, all single bonds extending from a ring atom are broken, and a fragment linked list is created to record and store the original split points so that they can serve as connection points in subsequent molecule design; fragments with different numbers of attachment points may be exchanged provided the total number of attachment points remains unchanged; the open-source toolkit RDKit is used for molecular fragmentation; fragments with more than 12 heavy atoms are discarded, as are fragments with 4 or more attachment points.

3. The intelligent generation method of drug molecules based on reinforcement learning and docking as claimed in claim 1, wherein, when calculating the similarity between fragments in step 2, the maximum common substructure Tanimoto similarity (Tanimoto-MCS, TMCS) is used to compare "drug-like" molecules, while for smaller fragments the Damerau-Levenshtein distance, which extends the Levenshtein distance with transpositions, is introduced; the Damerau-Levenshtein distance between two strings a and b is defined by the recursion

$$d_{a,b}(i,j)=\min\begin{cases}d_{a,b}(i-1,j)+1\\ d_{a,b}(i,j-1)+1\\ d_{a,b}(i-1,j-1)+[a_i\neq b_j]\\ d_{a,b}(i-2,j-2)+1 & \text{if } i,j>1,\ a_i=b_{j-1},\ a_{i-1}=b_j\end{cases}$$

with $d_{a,b}(i,0)=i$ and $d_{a,b}(0,j)=j$;

the TMCS similarity between two molecules M1 and M2 is defined as

$$\mathrm{TMCS}(M_1,M_2)=\frac{\mathrm{mcs}(M_1,M_2)}{\mathrm{atoms}(M_1)+\mathrm{atoms}(M_2)-\mathrm{mcs}(M_1,M_2)}$$

where mcs(M1, M2) is the number of atoms in the largest common substructure of M1 and M2, and atoms(M1) and atoms(M2) are the numbers of atoms in M1 and M2;

the similarity between the two molecules M1 and M2, whose corresponding SMILES strings are denoted S1 and S2, is measured as Max(TMCS(M1, M2), DL(S1, S2)).

4. The intelligent generation method of drug molecules based on reinforcement learning and docking as claimed in claim 1, wherein the molecular fragment encoding in step 2 is as follows: the binary strings are created by constructing a balanced binary tree based on fragment similarity; the tree is then used to generate a binary string for each fragment, and by extension a binary string representing the molecule; the order of the attachment points is treated as an identifier for each fragment; when assembling the tree, the similarity between all fragments is calculated, and fragment pairs are formed in a greedy bottom-up manner, in which the two most similar fragments are paired first; the process is then repeated, joining the two pairs with the most similar fragments into a new tree with four leaves; the similarity between two subtrees is measured as the maximum similarity between any two fragments of those trees; the joining process is repeated until all fragments are joined into a single tree;

once every fragment is stored in the binary tree, the tree is used to generate a code for each fragment; the code of a fragment is determined by the path from the root to the leaf storing it: for each branch in the tree, a 1 is appended to the code when the path goes left and a 0 when it goes right; thus the rightmost character in the code corresponds to the branch closest to the fragment.

Technical Field

The invention relates to the technical fields of medicinal chemistry and computing, and in particular to an intelligent generation method for drug molecules based on reinforcement learning and docking.

Background

In medicinal chemistry, it is critical to design and manufacture safe and effective compounds. This is a lengthy, complex, and difficult multi-parameter optimization process in terms of both money and time. The risk that a promising compound fails in clinical trials is high (>90%), resulting in unnecessary waste of resources. The average cost of bringing a new drug to market is well over 1 billion dollars, and the average time from discovery to market is 13 years. In some sectors the average time from discovery to commercial production may be even longer, for example 25 years for high-energy molecules. The key first step in molecular discovery is to generate a collection of candidates for computational study or for synthesis and characterization. This is a difficult task because the chemical space of molecules is enormous: the number of potential drug-like compounds is estimated at 10^23 to 10^60, while the number of all compounds ever synthesized is on the order of 10^8. Heuristic approaches, such as Lipinski's pharmacology-based "rule of five", can help narrow the space of possibilities, but significant challenges remain.

With the revolution in computer technology, drug discovery using AI is becoming a trend. Traditionally, various combinations of computational models have been used toward this goal, such as quantitative structure-activity relationships (QSAR), molecular substitution, molecular simulation, and molecular docking. However, conventional methods are combinatorial in nature and tend to produce molecules that are unstable or cannot be synthesized. In recent years, many deep-learning-based generative models for designing drug-like compounds have been developed, such as molecular generation methods based on variational autoencoders and on generative adversarial networks. However, current methods still need improvement in the generation rate, validity, and activity of the candidate molecules they produce.

Disclosure of Invention

The invention provides an intelligent generation method for drug molecules based on reinforcement learning and docking, which uses an Actor-critic reinforcement learning model and docking simulation to generate new drug molecules with optimized properties. The Actor network is modeled with a bidirectional Transformer Encoder mechanism and a DenseNet network.

In order to solve the problems, the invention is realized by the following technical scheme:

an intelligent generation method of drug molecules based on reinforcement learning and docking specifically comprises the following steps:

step 1, constructing a virtual fragment combination library for drug design;

the drug molecule virtual fragment combinatorial library is formed by fragmenting a group of molecules through the existing toolkit, when the molecules are split, the fragments are not classified, and all the fragments are treated identically;

step 2, calculating fragment similarity to encode molecular fragments

measuring the similarity between different molecular fragments using an existing combination of computational chemical similarity measures, and encoding all fragments as binary strings by constructing a balanced binary tree based on that similarity, so that similar fragments obtain similar codes;

step 3, generating and optimizing molecules based on the Actor-critic reinforcement learning model

(1) Introduction to the Actor-critic based reinforcement learning model framework

Molecules are generated and optimized using an Actor-critic based reinforcement learning model; a molecule is modified by selecting a single fragment of the molecule and one bit in that fragment's encoding, and the value of the bit is then flipped, i.e., if it is 0 it becomes 1, and vice versa; this makes it possible to control the degree of change applied to the molecule; the leading bits of the encoding remain unchanged, and the model is only allowed to alter bits near the end of the encoding, forcing the model to search only for molecules in the vicinity of known compounds;

based on the Actor-critic reinforcement learning model, the process starts from a fragmented molecular state, i.e. the current state; the Actor extracts and examines all fragments, introduces the positional information of the different fragments within the molecule, uses a Transformer Encoder mechanism to compute the attention coefficients of the fragments in each molecule, and then decides, via the output probabilities of a DenseNet network, which fragments to replace and which fragments to replace them with; the new state is scored according to the degree to which it satisfies all constraints; the critic then computes the difference, TD-Error, between the reward plus the value of the new state and the value of the current state, and passes it to the actor; if it is positive, the actor's action is reinforced, and if it is negative, the action is discouraged; the current state is then replaced by the new state, and the process is repeated a given number of times;

(2) Optimization of the reinforcement learning model reward mechanism

Molecules are designed and optimized according to two kinds of features, the intrinsic attribute information of a molecule and its computed activity information, and reward prediction is realized by constructing a perceptron model for the reinforcement learning model, the perceptron model comprising two stages, training and prediction; in the training stage, the data set has two sources: the positive samples are molecules with known activity reported in the existing literature, and the negative samples are an equal number of molecules randomly sampled from the ZINC library; after shuffling, the positive and negative samples are docked in turn, and the resulting computed activity information, together with the molecular attribute information calculated by an existing toolkit, is used as input, so that through multiple rounds of training the model can learn the latent association between the computed activity information, the attribute information, and whether a molecule is genuinely active; in the prediction stage, the model takes as input the computed activity information of a generated molecule, obtained by virtually docking the generated molecule against the existing PDB files of disease-related targets using fast, state-of-the-art drug docking software, together with the intrinsic attribute information of the generated molecule, calculated with a general-purpose software package, and predicts whether the generated molecule is genuinely active, thereby further optimizing the activity of the generated molecules; the Actor in the reinforcement learning model receives a reward every time it produces a valid molecule, and receives a higher reward if it produces a molecule that matches what the predictive model expects.

Further, in step 1, when a molecule is split, all single bonds extending from a ring atom are broken, and a fragment linked list is created to record and store the original split points so that they can serve as connection points in subsequent molecule design; fragments with different numbers of attachment points may be exchanged provided the total number of attachment points remains unchanged; the open-source toolkit RDKit is used for molecular fragmentation; fragments with more than 12 heavy atoms are discarded, as are fragments with 4 or more attachment points;

Further, in step 2, when calculating the similarity between fragments, the maximum common substructure Tanimoto similarity (Tanimoto-MCS, TMCS) is used to compare "drug-like" molecules, while for smaller fragments the Damerau-Levenshtein distance, which extends the Levenshtein distance with transpositions, is introduced; the Damerau-Levenshtein distance between two strings a and b is defined by the recursion

$$d_{a,b}(i,j)=\min\begin{cases}d_{a,b}(i-1,j)+1\\ d_{a,b}(i,j-1)+1\\ d_{a,b}(i-1,j-1)+[a_i\neq b_j]\\ d_{a,b}(i-2,j-2)+1 & \text{if } i,j>1,\ a_i=b_{j-1},\ a_{i-1}=b_j\end{cases}$$

with $d_{a,b}(i,0)=i$ and $d_{a,b}(0,j)=j$;

the TMCS similarity between two molecules M1 and M2 is defined as

$$\mathrm{TMCS}(M_1,M_2)=\frac{\mathrm{mcs}(M_1,M_2)}{\mathrm{atoms}(M_1)+\mathrm{atoms}(M_2)-\mathrm{mcs}(M_1,M_2)}$$

where mcs(M1, M2) is the number of atoms in the largest common substructure of M1 and M2, and atoms(M1) and atoms(M2) are the numbers of atoms in M1 and M2;

the similarity between the two molecules M1 and M2, whose corresponding SMILES strings are denoted S1 and S2, is measured as Max(TMCS(M1, M2), DL(S1, S2));

Further, the molecular fragment encoding in step 2 is as follows: the binary strings are created by constructing a balanced binary tree based on fragment similarity; the tree is then used to generate a binary string for each fragment, and by extension a binary string representing the molecule; the order of the attachment points is treated as an identifier for each fragment; when assembling the tree, the similarity between all fragments is calculated, and fragment pairs are formed in a greedy bottom-up manner, in which the two most similar fragments are paired first; the process is then repeated, joining the two pairs with the most similar fragments into a new tree with four leaves; the similarity between two subtrees is measured as the maximum similarity between any two fragments of those trees; the joining process is repeated until all fragments are joined into a single tree;

once every fragment is stored in the binary tree, the tree is used to generate a code for each fragment; the code of a fragment is determined by the path from the root to the leaf storing it: for each branch in the tree, a one ("1") is appended to the code when the path goes left and a zero ("0") when it goes right; thus the rightmost character in the code corresponds to the branch closest to the fragment.

Compared with the prior art, the invention has the beneficial effects that:

the invention is based on an Actor-critic reinforcement learning model and a butt joint simulation method and is used for generating new molecules. The model learns how to modify and improve the molecule to have the desired properties.

(1) The present invention differs from previous reinforcement learning methods in that it focuses on generating new compounds structurally close to existing compounds by transforming fragments of the lead compound, thereby reducing the chemical space to be searched.

(2) The invention is based on an Actor-critic reinforcement learning model; the Actor network is modeled with a bidirectional Transformer Encoder mechanism and a DenseNet network, introduces the positional information of the different fragments within a molecule, uses the Transformer Encoder mechanism to compute the attention coefficients of the fragments in each molecule, preserves the relative or absolute position of the fragments in the molecule, and enables parallel training.

(3) The reward mechanism of the reinforcement learning builds a single-layer perceptron model whose input comprises two kinds of information, molecule-related attribute information and activity information; the activity information is obtained by docking the generated molecules against disease-related targets with docking software, which further optimizes the activity of the generated molecules.

(4) In terms of the scale of candidate molecule generation, the method is estimated to generate more than 2 million candidate molecules for the target corresponding to a specific disease.

(5) By adding more than 1000 ultra-high-dimensional parameters in the molecular docking stage and fusing molecular activity with related attribute information, the method of the invention can generate more than 80% optimized, high-quality AI molecules.

(6) The method of the invention relies on a large-scale supercomputing platform, and the speed of molecule generation is significantly improved.

Drawings

FIG. 1 shows the pool of virtual molecular fragments of Mpro-related compounds;

FIG. 2 shows a section of the binary tree containing all fragments of the Mpro-related compounds;

FIG. 3 is a diagram of the Actor-critic reinforcement learning model framework;

FIG. 4 is a detailed diagram of the Actor in the Actor-critic reinforcement learning model;

FIG. 5 shows generated active compound molecules for the novel coronavirus Mpro target.

Detailed Description

The technical solution of the present invention is further explained by the following embodiments with reference to the attached drawings, but the scope of the present invention is not limited in any way by the embodiments.

Example 1

The main objective of this embodiment is to generate active compounds directed at the novel coronavirus Mpro target: starting from an initial set of lead compounds, these molecules are modified and optimized by replacing some of their fragments, thereby generating new Mpro-active compounds with the desired properties. The embodiment is based on an Actor-critic reinforcement learning model and a docking simulation method and is used to generate new drug molecules with optimized properties. The technical solution of this embodiment is described in detail below.

An Actor-critic reinforcement learning model and docking-based intelligent generation method for drug molecules specifically comprises the following steps:

Step 1, constructing a virtual fragment combinatorial library for drug design.

The virtual fragment combinatorial library of drug molecules is constructed by fragmenting a set of molecules. The virtual fragment library of this example was built from 10172 compounds related to the Mpro target in the medicinal chemistry database ChEMBL and 175 Mpro lead compounds obtained in the laboratory by molecular docking screening, as shown in FIG. 1. One common way of fragmenting molecules is to classify the fragments into ring structures, side chains, and linkers. When splitting molecules we follow essentially the same protocol, but we do not classify the fragments, so all fragments are treated identically. To fragment a molecule, all single bonds extending from a ring atom are broken. When a molecule is split, a fragment linked list is created to record and store the original split points so that they can later serve as connection points in molecule design. The method allows fragments with different numbers of attachment points to be exchanged provided the total number of attachment points remains unchanged. Molecular fragmentation is performed with the existing open-source cheminformatics toolkit RDKit. Fragments with more than 12 heavy atoms are discarded, as are fragments with 4 or more attachment points. These constraints reduce complexity while still allowing a large number of interesting candidates to be generated.
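A minimal sketch of this fragmentation step, assuming RDKit is available; the bond-selection rule follows the text (single, acyclic bonds touching a ring atom), and the helper name fragment_molecule is illustrative rather than the patent's actual implementation:

```python
# Cut bonds where a single, non-ring bond touches a ring atom; fragments keep
# dummy atoms ("*") as attachment points. Thresholds follow the text: discard
# fragments with more than 12 heavy atoms or 4 or more attachment points.
from rdkit import Chem

def fragment_molecule(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    # Single, acyclic bonds extending from a ring atom are cut.
    cut_bonds = [
        b.GetIdx() for b in mol.GetBonds()
        if b.GetBondType() == Chem.BondType.SINGLE
        and not b.IsInRing()
        and (b.GetBeginAtom().IsInRing() or b.GetEndAtom().IsInRing())
    ]
    if not cut_bonds:
        return [mol]
    # Break the selected bonds; dummy atoms mark the original split points.
    broken = Chem.FragmentOnBonds(mol, cut_bonds, addDummies=True)
    kept = []
    for frag in Chem.GetMolFrags(broken, asMols=True):
        n_attach = sum(1 for a in frag.GetAtoms() if a.GetAtomicNum() == 0)
        n_heavy = sum(1 for a in frag.GetAtoms() if a.GetAtomicNum() > 1)
        if n_heavy <= 12 and n_attach < 4:
            kept.append(frag)
    return kept

# Example: fragment a small drug-like molecule.
for frag in fragment_molecule("c1ccccc1CCN(C)C(=O)c1ccncc1"):
    print(Chem.MolToSmiles(frag))
```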

Step 2, calculating fragment similarity to encode the molecular fragments.

Step 2.1 Calculation of inter-fragment similarity

In this embodiment, all fragments are encoded as binary strings, and the purpose of the encoding is that similar fragments should obtain similar codes. The similarity between fragments must therefore be measured. There are many ways to calculate chemical similarity. A molecular fingerprint is itself a binary code in which similar molecules should in principle obtain similar codes; however, when comparing molecular fragments, whose representations are inherently sparse, we found fingerprints less useful for this purpose. A chemically straightforward way to measure the similarity between molecules is the maximum common substructure Tanimoto similarity (Tanimoto-MCS, TMCS):

$$\mathrm{TMCS}(M_1,M_2)=\frac{\mathrm{mcs}(M_1,M_2)}{\mathrm{atoms}(M_1)+\mathrm{atoms}(M_2)-\mathrm{mcs}(M_1,M_2)}$$

where mcs(M1, M2) is the number of atoms in the largest common substructure of molecules M1 and M2, and atoms(M1) and atoms(M2) are the numbers of atoms in M1 and M2, respectively.
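A minimal sketch of this TMCS computation, assuming RDKit; the helper name and the MCS timeout are illustrative:

```python
# Tanimoto-MCS (TMCS) similarity: mcs(M1, M2) is taken as the atom count of
# the maximum common substructure found by RDKit's FMCS implementation.
from rdkit import Chem
from rdkit.Chem import rdFMCS

def tmcs_similarity(smiles1: str, smiles2: str) -> float:
    m1 = Chem.MolFromSmiles(smiles1)
    m2 = Chem.MolFromSmiles(smiles2)
    result = rdFMCS.FindMCS([m1, m2], timeout=10)  # MCS search with a time cap
    mcs_atoms = result.numAtoms
    a1, a2 = m1.GetNumAtoms(), m2.GetNumAtoms()
    return mcs_atoms / (a1 + a2 - mcs_atoms)

print(tmcs_similarity("c1ccccc1O", "c1ccccc1N"))  # two similar aromatic fragments
```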

One advantage of Tanimoto-MCS similarity is that it compares the structures of the fragments directly and therefore does not depend on any particular representation. This approach generally works well when comparing "drug-like" molecules, but it performs poorly on smaller fragments. The invention therefore turns to the Levenshtein distance, a common measure of the similarity between two text strings, defined as the minimum number of insertions, deletions, and substitutions required to make the two strings identical. To also account for transpositions in the edit distance, this embodiment finally adopts the Damerau-Levenshtein distance, an extension of the Levenshtein distance; the Damerau-Levenshtein distance between two strings a and b is defined by the recursion

$$d_{a,b}(i,j)=\min\begin{cases}d_{a,b}(i-1,j)+1\\ d_{a,b}(i,j-1)+1\\ d_{a,b}(i-1,j-1)+[a_i\neq b_j]\\ d_{a,b}(i-2,j-2)+1 & \text{if } i,j>1,\ a_i=b_{j-1},\ a_{i-1}=b_j\end{cases}$$

with $d_{a,b}(i,0)=i$ and $d_{a,b}(0,j)=j$, where $[a_i\neq b_j]$ is 0 if the characters match and 1 otherwise.

As a compromise, we choose to measure the similarity between two molecules M1 and M2, whose corresponding SMILES strings are S1 and S2, as

Max(TMCS(M1, M2), DL(S1, S2))
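A minimal sketch of the (restricted) Damerau-Levenshtein distance on SMILES strings; pure Python with no dependencies:

```python
# Dynamic-programming form of the recursion above: insertions, deletions,
# substitutions, plus transpositions of two adjacent characters.
def damerau_levenshtein(a: str, b: str) -> int:
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions from a
    for j in range(n + 1):
        d[0][j] = j                      # insertions into a
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # transposition of two adjacent characters
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

print(damerau_levenshtein("CCO", "COC"))  # 1: one adjacent transposition
```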

Step 2.2 Molecular fragment encoding

All fragments are encoded as binary strings. These strings are created by building a balanced binary tree based on fragment similarity; the tree is then used to generate a binary string for each fragment, and by extension a binary string representing the whole molecule. The order of the attachment points is treated as an identifier for each fragment. When assembling the tree, the similarity between all fragments is calculated. Fragment pairs are then formed in a greedy, bottom-up fashion, in which the two most similar fragments are paired first. The process is then repeated, joining the two pairs with the most similar fragments into a new tree with four leaves. The similarity between two subtrees is measured as the maximum similarity between any two fragments of those trees. The joining process is repeated until all fragments are joined into a single tree.

Once every fragment is stored in the binary tree, the tree can be used to generate codes for all fragments. The path from the root to the leaf storing a fragment determines its code: for each branch in the tree, a one ("1") is appended to the code when the path goes left and a zero ("0") when it goes right, as shown in FIG. 2. Thus the rightmost character in the code corresponds to the branch closest to the fragment.
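A minimal sketch of the greedy bottom-up tree construction and path encoding, assuming a precomputed pairwise similarity function; the toy similarity stands in for the TMCS/Damerau-Levenshtein measure above:

```python
# Merge level by level: within a level, the most similar pair of subtrees is
# joined first; subtree similarity is the maximum similarity between leaves.
from itertools import combinations

def subtree_sim(t1, t2, sim):
    # Similarity between two subtrees = max similarity between any two leaves.
    return max(sim(x, y) for x in t1[1] for y in t2[1])

def build_tree(items, sim):
    # Each entry is (node, leaves); leaf nodes are the items themselves.
    level = [(item, [item]) for item in items]
    while len(level) > 1:
        next_level, remaining = [], list(level)
        while len(remaining) > 1:
            i, j = max(
                combinations(range(len(remaining)), 2),
                key=lambda ij: subtree_sim(remaining[ij[0]], remaining[ij[1]], sim),
            )
            (ni, li), (nj, lj) = remaining[i], remaining[j]
            next_level.append(((ni, nj), li + lj))
            remaining = [t for k, t in enumerate(remaining) if k not in (i, j)]
        next_level.extend(remaining)   # an odd subtree carries over to the next level
        level = next_level
    return level[0][0]

def encode(tree, codes=None, prefix=""):
    # Root-to-leaf paths give the codes: append "1" going left, "0" going right.
    if codes is None:
        codes = {}
    if isinstance(tree, tuple):
        encode(tree[0], codes, prefix + "1")
        encode(tree[1], codes, prefix + "0")
    else:
        codes[tree] = prefix
    return codes

# Toy character-overlap similarity on fragment SMILES strings.
frags = ["c1ccccc1", "c1ccncc1", "CCO", "CCN"]
def toy_sim(x, y):
    return sum(c1 == c2 for c1, c2 in zip(x, y)) / max(len(x), len(y))

print(encode(build_tree(frags, toy_sim)))
```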

Step 3, generating and optimizing molecules based on the Actor-critic reinforcement learning model.

Step 3.1 Introduction to the Actor-critic based reinforcement learning model framework

The invention employs an Actor-critic based reinforcement learning model to generate and optimize molecules; a molecule is modified by selecting a single fragment of the molecule and one bit in that fragment's encoding, and the value of the bit is then flipped: if it is 0 it becomes 1, and vice versa. This makes it possible to control the degree of change applied to the molecule, since flipping a bit at the end of the encoding exchanges the fragment for a very similar one, while flipping a bit at the beginning exchanges it for a very different type of fragment. The leading bits of the encoding remain unchanged, and the model is only allowed to alter bits near the end, forcing it to search only for molecules in the vicinity of known compounds, as shown in FIG. 3.
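A minimal sketch of this bit-flip action; the helper names and the n_mutable parameter controlling how many trailing bits may change are illustrative assumptions:

```python
# Flip one bit in one fragment's binary code, restricted to trailing
# positions so that only similar fragments can be exchanged.
import random

def flip_bit(code: str, position: int) -> str:
    flipped = "1" if code[position] == "0" else "0"
    return code[:position] + flipped + code[position + 1:]

def random_action(molecule_codes, n_mutable=3):
    # molecule_codes: list of binary strings, one per fragment of the molecule.
    frag_idx = random.randrange(len(molecule_codes))
    code = molecule_codes[frag_idx]
    # Only the last n_mutable bits may change; leading bits stay fixed.
    position = random.randrange(max(0, len(code) - n_mutable), len(code))
    return frag_idx, flip_bit(code, position)

codes = ["1011", "0010", "1100"]          # toy fragment codes for one molecule
idx, new_code = random_action(codes)
print(f"replace fragment {idx}: {codes[idx]} -> {new_code}")
```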

The Actor-critic based reinforcement learning model starts from a fragmented molecular state, the current state S. The Actor extracts and examines all fragments, and a bidirectional Transformer Encoder mechanism and a DenseNet network are used to decide which fragments to replace and with which fragments to replace them; that is, the action Ai taken by the Actor yields a new state Si. The new state Si receives a score R according to the extent to which it meets all constraints. The critic then computes the TD-error, the difference between the reward plus the value of the new state Si and the value of the current state S, and passes it to the actor. If the TD-error is positive, the action Ai of the actor is reinforced; if it is negative, the action is discouraged. The current state is then replaced by the new state, and the process is repeated a given number of times. The loss function is loss = log(prob) · td_error.
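A minimal sketch of one such update step, assuming PyTorch; the network shapes and optimizer settings are illustrative, and the actor loss is the text's log(prob) · td_error term negated so that gradient descent reinforces actions with positive TD-error:

```python
import torch
import torch.nn as nn

def update_step(log_prob, critic, actor_opt, critic_opt,
                state, reward, next_state, gamma=0.99):
    """One TD update. log_prob is the log-probability of the action the
    actor actually took in `state`, kept from the forward pass."""
    # TD-error: reward plus discounted value of the new state, minus the
    # value of the current state.
    v_s = critic(state)
    v_next = critic(next_state).detach()
    td_error = reward + gamma * v_next - v_s

    # The critic is trained to shrink the squared TD-error.
    critic_loss = td_error.pow(2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor loss: -log(prob) * td_error (detached so only the actor updates).
    actor_loss = -(log_prob * td_error.detach()).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    return td_error.item()

# Toy demo with random vectors standing in for encoded molecular states.
actor = nn.Sequential(nn.Linear(16, 8), nn.Softmax(dim=-1))
critic = nn.Sequential(nn.Linear(16, 1))
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

state, next_state = torch.randn(1, 16), torch.randn(1, 16)
dist = torch.distributions.Categorical(actor(state))
log_prob = dist.log_prob(dist.sample())
print(update_step(log_prob, critic, a_opt, c_opt, state, 1.0, next_state))
```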

Step 3.2 Network structure of the reinforcement learning model Actor

The Actor network is modeled with a bidirectional Transformer Encoder mechanism and a DenseNet network. It introduces the positional information of the different fragments within a molecule, uses the Transformer Encoder mechanism to compute the attention coefficients of the fragments in each molecule, reads the encoded fragments representing one molecule at a time, concatenates the forward and backward outputs, and passes the concatenated representation through the DenseNet neural network to compute an estimate of which fragment to change and a probability distribution over what to change it to.

The probability of replacing a fragment depends on the fragments that precede and follow it in the molecule. Each molecule is therefore constructed as a sequence of fragments that is passed to the Transformer Encoder mechanism in one pass. The importance of the different fragments is obtained by computing their attention coefficients within each molecule. A forward and a backward Transformer Encoder then output a vectorized representation of the molecule that reflects the relevance of its fragments; finally, the concatenated results are classified by the DenseNet network, which computes an estimate of which fragment to change and a probability distribution over what to change it to, as shown in FIG. 4.
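A minimal sketch of an Actor of this shape, assuming PyTorch; the layer sizes, the positional-embedding scheme, and the densely connected classification head (a small MLP standing in for the DenseNet named in the text) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, code_len=16, d_model=64, n_heads=4, n_layers=2, max_frags=32):
        super().__init__()
        self.embed = nn.Linear(code_len, d_model)          # fragment-code embedding
        self.pos = nn.Embedding(max_frags, d_model)        # fragment position info
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Densely connected head: each layer sees all previous outputs.
        self.h1 = nn.Linear(d_model, d_model)
        self.h2 = nn.Linear(2 * d_model, d_model)
        self.frag_head = nn.Linear(3 * d_model, 1)         # which fragment to change
        self.bit_head = nn.Linear(3 * d_model, code_len)   # which bit to flip

    def forward(self, frag_codes):
        # frag_codes: (batch, n_frags, code_len) binary fragment encodings
        n = frag_codes.size(1)
        pos = self.pos(torch.arange(n, device=frag_codes.device))
        x = self.encoder(self.embed(frag_codes) + pos)     # self-attention over fragments
        d1 = torch.relu(self.h1(x))
        d2 = torch.relu(self.h2(torch.cat([x, d1], dim=-1)))
        feats = torch.cat([x, d1, d2], dim=-1)
        frag_probs = torch.softmax(self.frag_head(feats).squeeze(-1), dim=-1)
        bit_probs = torch.softmax(self.bit_head(feats), dim=-1)
        return frag_probs, bit_probs                       # per-fragment, per-bit choices

actor = Actor()
codes = torch.randint(0, 2, (1, 5, 16)).float()            # one molecule, 5 fragments
frag_probs, bit_probs = actor(codes)
print(frag_probs.shape, bit_probs.shape)                   # (1, 5), (1, 5, 16)
```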

Step 3.3 Optimization of the reinforcement learning model reward mechanism

One of the major challenges in drug discovery is designing molecules that are optimized for several properties at once, properties which may not be well correlated. To show that the proposed method can handle this situation, two different classes of properties were chosen that characterize the feasibility of a molecule as a drug. The aim of the invention is to generate drug molecules whose properties are closer to those of actual active molecules, i.e., to produce molecules at the "optimal location" for the target. As described above, the selected properties comprise intrinsic molecular attribute information (e.g., MW, clogP, PSA) and computed molecular activity information (i.e., the result of docking the molecule against the target corresponding to a specific disease). It is particularly worth emphasizing that the reward mechanism of the reinforcement learning model realizes reward prediction by constructing a single-layer perceptron model, which comprises two stages, training and prediction. In the training stage, the data set has two sources: positive samples are molecules with known activity reported in the existing literature, and negative samples are an equal number of molecules randomly sampled from the ZINC library. After shuffling, the positive and negative samples are docked in turn, and the resulting computed activity information, together with the intrinsic molecular attribute information calculated with an existing toolkit, serves as input, so that over multiple rounds of training the model can learn the latent association between the computed activity information, the attribute information, and whether a molecule is genuinely active. In the prediction stage, the computed activity information of a generated molecule is obtained by virtually docking it against disease-related targets with fast, state-of-the-art drug docking software: in this embodiment, docking software such as LeDock is used to dock the at most 512 molecules generated per epoch against existing PDB files of 380 different conformations of the novel coronavirus Mpro target. The intrinsic attribute information of the generated molecules is calculated with the general-purpose software package RDKit. The 1143 ultra-high-dimensional parameters formed by the computed activity information and the intrinsic attribute information of the generated molecules are used as the input of the single-layer perceptron to predict whether a generated molecule genuinely has real activity, thereby further optimizing the activity of the generated molecules. The actor in the reinforcement learning framework receives a reward for each valid molecule it produces, and receives a higher reward if it produces a molecule that matches what the predictive model expects.
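A minimal sketch of the single-layer perceptron reward model, assuming PyTorch; the 1143-dimensional feature assembly and the data loading are placeholders for the docking and descriptor pipelines described above, not the patent's actual code:

```python
import torch
import torch.nn as nn

class RewardPerceptron(nn.Module):
    def __init__(self, n_features=1143):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)   # single layer, binary output

    def forward(self, x):
        return torch.sigmoid(self.linear(x))     # probability of "genuinely active"

model = RewardPerceptron()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

# Toy stand-ins: rows = molecules, columns = docking + descriptor features;
# labels are 1 for known actives, 0 for ZINC-sampled negatives.
features = torch.randn(64, 1143)
labels = torch.randint(0, 2, (64, 1)).float()

for epoch in range(10):                           # multiple rounds of training
    pred = model(features)
    loss = loss_fn(pred, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At reward time: higher predicted activity gives the Actor a higher reward.
reward = model(torch.randn(1, 1143)).item()
print(f"predicted activity (reward signal): {reward:.3f}")
```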

The resulting active compound molecules for the novel coronavirus Mpro target are shown in FIG. 5.

It should be noted that, although the above-mentioned embodiments of the present invention are illustrative, the present invention is not limited thereto, and thus the present invention is not limited to the above-mentioned embodiments. Other embodiments, which can be made by those skilled in the art in light of the teachings of the present invention, are considered to be within the scope of the present invention without departing from its principles.
