Methods and systems for predicting drug binding using synthetic data

Document No.: 1909644    Publication date: 2021-11-30

Note: This technology, "Methods and systems for predicting drug binding using synthetic data," was created by S. S. MacKinnon, Z. Safikhani, R. Vernon, A. E. Brereton, and A. Windemuth on 2020-01-02. Its main content is as follows: Methods for predicting drug-target binding using synthetically enhanced data involve generating a plurality of phantom ligands for a plurality of proteins in a protein structure database; generating a plurality of drug-target interaction (DTI) features for proteins and ligands in a DTI database using the plurality of phantom ligands; generating a machine learning model using the plurality of DTI features; and predicting the likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

1. A method of predicting drug-target binding using synthetically enhanced data, the method comprising:

generating a plurality of phantom ligands for a plurality of proteins in a protein structure database;

generating a plurality of drug-target interaction (DTI) features for proteins and ligands in a DTI database using the plurality of phantom ligands;

generating a machine learning model using the plurality of DTI features; and

predicting a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

2. The method of claim 1, wherein generating the plurality of phantom ligands comprises:

for a protein cluster selected from the plurality of proteins:

performing a structural alignment of the proteins in the protein cluster;

obtaining the plurality of phantom ligands by projecting ligands of one of the proteins in the cluster onto all other proteins in the cluster after the structural alignment; and

obtaining a confidence score for each of the plurality of phantom ligands.

3. The method of claim 2, wherein the protein cluster is obtained based on one selected from the group consisting of:

similarity of sequences,

similarity of three-dimensional topology, and

existing clusters in the database.

4. The method of claim 2, wherein the confidence score quantifies uncertainty of an associated phantom ligand.

5. The method of claim 1, wherein generating the plurality of DTI features comprises:

for each of a plurality of combinations of ligands and proteins in the DTI database:

selecting a phantom ligand from the plurality of phantom ligands that is most similar to the ligand considered for the combination; and

generating features for the protein considered for the combination, wherein the generated features characterize the protein considered for the combination.

6. The method of claim 5, wherein the generated features comprise one selected from the group consisting of:

at least one local feature comprising a binding site feature in concentric shells of increasing radius,

at least one global feature in addition to said binding site feature, and

at least one functional annotation.

7. The method of claim 5, wherein the selection of the most similar phantom ligand is performed based on a distance metric.

8. The method of claim 5, wherein generating the plurality of DTI features further comprises:

obtaining a confidence vector representing confidences of a plurality of components of the DTI features associated with the most similar phantom ligand.

9. The method of claim 8, wherein the confidences of the plurality of components comprise at least one selected from the group consisting of:

a first confidence score quantifying uncertainty associated with the most similar phantom ligand,

a second confidence score quantifying fingerprint similarity between the ligand of the combination and the most similar phantom ligand, and

a third confidence score that depends on a source from which the DTI feature was obtained.

10. The method of claim 1, wherein generating the machine learning model comprises:

obtaining positive training samples based on the plurality of DTI features of the proteins and the ligands;

obtaining negative training samples based on the plurality of DTI features by randomly permuting the plurality of DTI features of the proteins and the ligands at least once; and

training the machine learning model for DTI prediction using the positive training samples and the negative training samples.

11. The method of claim 10, wherein generating the machine learning model comprises, prior to obtaining the positive training samples and the negative training samples:

filtering the plurality of DTI features of proteins and ligands using a confidence threshold applied to a confidence vector associated with the plurality of DTI features.

12. The method of claim 1, wherein the machine learning model is one selected from a classifier model and a regression model.

13. The method of claim 1, wherein predicting the likelihood of interaction of the combination of the query protein and the query ligand comprises:

obtaining likely binding sites and associated local features of the query protein based on the plurality of phantom ligands;

generating features of the query protein, the features of the query protein comprising the local features;

generating features of the query ligand, the features of the query ligand including a ligand fingerprint and a ligand descriptor; and

applying the machine learning model to the features of the query protein and the features of the query ligand to obtain a likelihood of interaction between the query ligand and the query protein.

14. The method of claim 13, wherein the features of the query protein further comprise at least one selected from the group consisting of global features and functional annotations.

15. A non-transitory computer-readable medium comprising computer-readable program code for predicting drug-target binding using synthetically enhanced data, the computer-readable program code causing a computer system to:

generate a plurality of phantom ligands for a plurality of proteins in a protein structure database;

generate a plurality of drug-target interaction (DTI) features for proteins and ligands in a DTI database using the plurality of phantom ligands;

generate a machine learning model using the DTI features; and

predict a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

16. A system for differential drug discovery, the system comprising:

a database of protein structures;

a phantom ligand recognition engine configured to generate a plurality of phantom ligands for a plurality of proteins in the protein structure database;

a phantom ligand database storing the plurality of phantom ligands;

a drug-target interaction (DTI) database storing proteins and ligands;

a feature generation engine configured to generate a plurality of DTI features for proteins and ligands in the DTI database using the plurality of phantom ligands in the phantom ligand database;

a machine learning model training engine configured to generate a machine learning model using the DTI features; and

a DTI prediction engine configured to predict a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

17. The system of claim 16, wherein generating the plurality of DTI features comprises:

for each of a plurality of combinations of ligands and proteins in the DTI database:

selecting a phantom ligand from the plurality of phantom ligands that is most similar to the ligand considered for the combination; and

generating features for the protein considered for the combination, wherein the generated features characterize the protein considered for the combination.

18. The system of claim 17, wherein the generated features comprise one selected from the group consisting of:

at least one local feature comprising a binding site feature in concentric shells of increasing radius,

at least one global feature in addition to said binding site feature, and

at least one functional annotation.

19. The system of claim 17, wherein generating the plurality of DTI features further comprises:

obtaining a confidence vector representing confidences of a plurality of components of the DTI features associated with the most similar phantom ligand,

wherein the confidences of the plurality of components comprise at least one selected from the group consisting of:

a first confidence score quantifying uncertainty associated with the most similar phantom ligand,

a second confidence score quantifying fingerprint similarity between the ligand of the combination and the most similar phantom ligand, and

a third confidence score that depends on a source from which the DTI feature was obtained.

20. The system of claim 16, wherein predicting the likelihood of interaction of the combination of the query protein and the query ligand comprises:

obtaining likely binding sites and associated local features of the query protein based on the plurality of phantom ligands;

generating features of the query protein, the features of the query protein comprising the local features;

generating features of the query ligand, the features of the query ligand including a ligand fingerprint and a ligand descriptor; and

applying the machine learning model to the features of the query protein and the features of the query ligand to obtain a likelihood of interaction between the query ligand and the query protein.

Background

Computational methods exist to predict the interaction between a ligand and a protein. They are generally classified as either 'ligand-based' or 'structure-based' depending on the type of information used to make the prediction.

Protein-based predictions have the potential to learn biophysical compatibility and thus may generalize better, but they are highly data-constrained. In particular, protein-based predictions use the 3D molecular structures of ligands co-crystallized with proteins to assess or predict interactions. These methods are computationally demanding, and they are often trained on only 100 to 1,000 different proteins. Neural networks for this task tend to have very high ratios of feature-space dimensionality to available data. As a result, this approach can produce a large number of false negatives and/or false positives in docking when applied to previously unseen protein systems or drug scaffolds.

Ligand-based prediction can be performed using drug-target interaction (DTI) databases with millions of records. Examples of publicly available DTI databases include ChEMBL, NCBI BioAssay, and STITCH. However, these records often cover only about 2,000 of the 20,000 human proteins. Because ligand data far outnumber protein data, one standard approach is to derive a separate model for each of the roughly 2,000 proteins. These models tend to be successful, in many cases even exceeding high-throughput experimental results, but they (1) cover only about 10% of human proteins, (2) may be weaker when there is little chemical diversity in the data for an individual protein, and (3) do not learn the physical properties of drug-protein compatibility, so each individual model fails to benefit from the data used to generate the other models.

Disclosure of Invention

In general, in one aspect, one or more embodiments relate to a method for predicting drug-target binding using synthetically enhanced data, the method comprising: generating a plurality of phantom ligands for a plurality of proteins in a protein structure database; generating a plurality of drug-target interaction (DTI) features for proteins and ligands in a DTI database using the plurality of phantom ligands; generating a machine learning model using the plurality of DTI features; and predicting a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

In general, in one aspect, one or more embodiments are directed to a non-transitory computer-readable medium comprising computer-readable program code for predicting drug-target binding using synthetically enhanced data, the computer-readable program code causing a computer system to: generate a plurality of phantom ligands for a plurality of proteins in a protein structure database; generate a plurality of drug-target interaction (DTI) features for proteins and ligands in a DTI database using the plurality of phantom ligands; generate a machine learning model using the DTI features; and predict a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

In general, in one aspect, one or more embodiments are directed to a system for differential drug discovery, the system comprising: a protein structure database; a phantom ligand recognition engine configured to generate a plurality of phantom ligands for a plurality of proteins in the protein structure database; a phantom ligand database storing the plurality of phantom ligands; a drug-target interaction (DTI) database storing proteins and ligands; a feature generation engine configured to generate a plurality of DTI features for the proteins and ligands in the DTI database using the plurality of phantom ligands in the phantom ligand database; a machine learning model training engine configured to generate a machine learning model using the DTI features; and a DTI prediction engine configured to predict a likelihood of interaction of a combination of a query protein and a query ligand using the machine learning model.

Drawings

The present embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.

Fig. 1A illustrates a block diagram of a system for predicting drug binding in accordance with one or more embodiments.

Fig. 1B illustrates a block diagram of a protein structure database, in accordance with one or more embodiments.

FIG. 1C illustrates a block diagram of a phantom ligand database in accordance with one or more embodiments.

Fig. 1D illustrates a block diagram of a drug-target interaction database in accordance with one or more embodiments.

FIG. 1E illustrates a block diagram of a protein annotation database, in accordance with one or more embodiments.

Fig. 2 shows a flow diagram describing a method for training a machine learning model for predicting drug-target interactions in accordance with one or more embodiments.

FIG. 3 shows a flow diagram that describes a method for generating a phantom ligand database in accordance with one or more embodiments.

Fig. 4 shows a flow diagram that describes a method for generating drug-target interaction (DTI) features in accordance with one or more embodiments.

FIG. 5 shows a flow diagram that describes a method for generating a machine learning model for DTI prediction in accordance with one or more embodiments.

Fig. 6 shows a flow diagram depicting a method for predicting an interaction between a query protein and a query ligand in accordance with one or more embodiments.

Fig. 7A shows an example for generating phantom ligands according to one or more embodiments.

Fig. 7B shows a diagram of a concentric shell model for obtaining binding site characteristics, in accordance with one or more embodiments.

FIG. 8 illustrates generation of training data for a machine learning model in accordance with one or more embodiments.

Fig. 9 shows a performance comparison of an embodiment of the present disclosure with a conventional method.

Fig. 10A and 10B illustrate a computing system in accordance with one or more embodiments.

Detailed Description

Specific embodiments disclosed herein will now be described in detail with reference to the accompanying drawings. Like elements in the various figures may be represented by like reference numbers and/or like names for consistency.

The following detailed description is merely exemplary in nature and is not intended to limit the embodiments disclosed herein or the application and uses of the embodiments disclosed herein. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

In the following detailed description of some embodiments disclosed herein, numerous specific details are set forth in order to provide a more thorough understanding of the various embodiments disclosed herein. It will be apparent, however, to one skilled in the art that embodiments may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

Throughout this application, ordinal numbers (e.g., first, second, third, etc.) may be used as adjectives for elements (i.e., any noun in the application). The use of ordinal numbers does not imply or create any particular order of elements nor limit any elements to only a single element unless explicitly disclosed, such as by the use of the terms "before", "after", "single", and other such terms. Rather, ordinals are used to distinguish between elements. By way of example, a first element is different from a second element, and the first element may comprise more than one element and be subsequent (or preceding) the second element in the order of the elements.

In one or more embodiments of the invention, elements of the protein-based prediction method and the ligand-based prediction method are combined to obtain improved predictions of drug-target interactions. In one or more embodiments of the invention, machine learning models are used to predict drug-target interactions (DTIs).

DTI databases contain a large number of data points relating to protein-ligand interactions. For example, the ChEMBL database contains about 15,000,000 records describing protein and ligand pairs as binding or non-binding (and often providing a measure of affinity or confidence). However, using the contents of these DTI databases to predict the interaction of new protein and ligand pairs has certain limitations, such as limited coverage of the human proteome and poor predictive capability when diversity among individual protein instances is limited. Thus, the prediction quality of machine learning models operating only on DTI database records may be limited. Alternatively, a machine learning model may operate on the 3D molecular structures of ligands co-crystallized with proteins to capture the biophysics of the interaction between a protein and a ligand. Some databases (e.g., sc-PDB) capture such protein-ligand interactions, but they contain relatively few data points and are often highly redundant. In addition, these databases often lack structural and chemical diversity of proteins. Training a machine learning model on these data points can therefore be challenging because of the insufficient amount of data. In particular, machine learning models that operate on the 3D molecular structures of ligands co-crystallized with proteins have a high-dimensional feature space, which, coupled with the limited availability of suitable training samples, makes them susceptible to overfitting.

In one or more embodiments of the present disclosure, the machine learning model uses local 3D features in conjunction with DTI records to achieve predictions that are superior to the conventional protein-based and ligand-based predictions described above. More specifically, synthetic data are generated by projecting DTI records onto 3D structural models of known protein-ligand complexes to generate local protein features that would otherwise not be available. The synthetic data generated in this manner may be used to train a machine learning model. Interactions of query protein and query ligand pairs can then be predicted using the machine learning model.

Turning to fig. 1A, a system for predicting drug binding using synthetic data is shown, in accordance with one or more embodiments. The system (100) may include a phantom ligand recognition engine (110), a feature generation engine (120), a machine learning model training engine (130), a drug-target interaction prediction engine (150), a protein structure database (160), a phantom ligand database (170), a drug-target interaction database (180), and a protein annotation database (190). Each of these components will be described later.

According to one or more embodiments, the phantom ligand recognition engine (110) includes instructions in the form of computer readable program code to perform at least one of the steps described in fig. 2 and 3 to generate a phantom ligand database (170) of phantom ligands and associated confidence scores from proteins in the protein structure database (160). The phantom ligand recognition engine (110) can obtain phantom ligands for a protein by structurally aligning known homologs and projecting the ligands of these known homologs onto the protein at the aligned sites. While these phantom ligands may not actually interact with the protein, they may act as placeholders, suggesting structural compatibility between the phantom ligands and the aligned sites of the protein. The phantom ligand recognition engine (110) is operably connected to the protein structure database (160) and the phantom ligand database (170).

According to one or more embodiments, the feature generation engine (120) includes instructions in the form of computer readable program code to perform at least one of the steps described in fig. 2 and 4 to generate drug-target interaction (DTI) features for training a machine learning model for DTI prediction. The feature generation engine (120) may generate features of proteins and ligands using data from the phantom ligand database (170), the drug-target interaction database (180), and the protein annotation database (190). Accordingly, the feature generation engine (120) is operably connected to the phantom ligand database (170), the drug-target interaction database (180), and the protein annotation database (190).

Continuing with fig. 1A, a machine learning model training engine (130) according to one or more embodiments includes instructions in the form of computer readable program code to perform at least one of the steps described in fig. 2 and 5 to train a machine learning model (140) for DTI prediction. The machine learning model training engine (130) may train the machine learning model (140) using the DTI features generated by the feature generation engine (120). Accordingly, the machine learning model training engine (130) is operatively connected to the feature generation engine (120). The resulting machine learning model (140) for DTI prediction can be any type of classifier capable of predicting the interaction between a query drug and a query protein. In one or more embodiments, the machine learning model (140) for DTI prediction is a deep neural network.

According to one or more embodiments, the drug-target interaction (DTI) prediction engine (150) includes instructions in the form of computer readable program code to perform at least one of the steps described in fig. 6 to predict drug-target interactions of a query drug and a query protein using the machine learning model (140). The DTI prediction engine (150) generates features of the query protein and the query ligand that are compatible with the machine learning model (140), and then calculates the likelihood of interaction from these features using the machine learning model (140) trained by the machine learning model training engine (130). In various embodiments, one or more of the same and/or different machine learning models may be used.

The protein structure database (160) according to one or more embodiments may be any type of storage unit and/or device for storing data (e.g., a file system, a database, a collection of tables, or any other storage mechanism). The protein structure database (160) is described below with reference to FIG. 1B.

The phantom ligand database (170) according to one or more embodiments may be any type of storage unit and/or device for storing data (e.g., a file system, a database, a collection of tables, or any other storage mechanism). The phantom ligand database (170) is described below with reference to FIG. 1C.

The drug-target interaction database (180) according to one or more embodiments may be any type of storage unit and/or device (e.g., a file system, a database, a collection of tables, or any other storage mechanism) for storing data. The drug-target interaction database (180) is described below with reference to fig. 1D.

The protein annotation database (190) according to one or more embodiments may be any type of storage unit and/or device for storing data (e.g., a file system, a database, a collection of tables, or any other storage mechanism). The protein annotation database (190) is described below with reference to fig. 1E.

Turning to fig. 1B, a protein structure database (160) is shown in accordance with one or more embodiments. The protein structure database (160) may store 3D models (162A, 162B, 162N) of proteins. Each 3D model may be associated with a homology model (164A, 164B, 164N) and/or an experimental model (166A, 166B, 166N). Examples of publicly available protein structure databases (160) include, but are not limited to, the Protein Data Bank (PDB) and SWISS-MODEL.

Turning to fig. 1C, a phantom ligand database (170) is shown in accordance with one or more embodiments. For multiple 3D models (e.g., 172A, 172B, 172N) of proteins, the phantom ligand database (170) may store the identified phantom ligands (e.g., 174A, 174B, 174N). Further, for each identified phantom ligand, a confidence score (e.g., 176A, 176B, 176N), which may be based on similarity, is included. The phantom ligand database may be built as described in fig. 3.

Turning to fig. 1D, a drug-target interaction database (180) in accordance with one or more embodiments is shown. The drug-target interaction database (180) may store interaction confidences (e.g., 186A, 186B, 186N) for pairs of drugs (e.g., 182A, 182B, 182N) and targets (e.g., 184A, 184B, 184N). Examples of publicly available drug-target interaction databases (180) include, but are not limited to, STITCH and ChEMBL. These databases may contain many data points (approximately 15,000,000 records in the case of ChEMBL).
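The per-pair layout of fig. 1D can be pictured as a flat record type. The sketch below is only illustrative; the field names and example identifiers are assumptions, not taken from this document.

```python
from dataclasses import dataclass

@dataclass
class DTIRecord:
    """One row of a drug-target interaction database as in fig. 1D."""
    drug_id: str       # drug identifier (e.g., a ChEMBL compound ID)
    target_id: str     # target identifier (e.g., a UniProt accession)
    confidence: float  # interaction confidence reported by the source database

# A toy database is then just a list of such records:
db = [DTIRecord("CHEMBL25", "P35372", 0.9)]
```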

Turning to fig. 1E, a protein annotation database (190) in accordance with one or more embodiments is shown. The protein annotation database (190) may store relevant annotations (e.g., 194A, 194B, 194N) for a plurality of proteins (e.g., 192A, 192B, 192N). The protein-related annotations may include any available information about the proteins and may be added to the protein annotation database manually or computationally. For example, the UniProt database may be used.

Fig. 2, 3, 4, 5, and 6 illustrate flow diagrams in accordance with one or more embodiments. Fig. 2, 3, 4, and 5 depict flowcharts depicting methods for training machine learning models to predict drug-target interactions, and fig. 6 depicts a method for using machine learning models to predict drug-target interactions. One or more of the steps of fig. 2, 3, 4, 5, and 6 may be performed by components of the system (100) discussed above with reference to fig. 1A. In one or more embodiments, one or more of the steps shown in fig. 2, 3, 4, 5, and 6 may be omitted, repeated, and/or performed in a different order than that shown in fig. 2, 3, 4, 5, and 6. Additional steps may be further performed. Accordingly, the scope of the present invention should not be considered limited to the specific arrangement of steps shown in fig. 2, 3, 4, 5, and 6.

Turning to the flow chart of fig. 2, a method of generating a machine learning model to predict drug-target interaction (DTI) is shown. While FIG. 2 is intended to introduce the main steps of generating a machine learning model, the flowcharts discussed subsequently provide a more detailed description. After the method of fig. 2 is completed, the resulting machine learning model may be used to make predictions, as described in fig. 6.

In step 200, a library of phantom ligands is generated based on the proteins obtained from the protein structure database. A detailed description of step 200 is provided in fig. 3.

In step 202, drug-target interaction (DTI) features, including ligand features and protein features, are generated. FIG. 4 provides a detailed description of step 202.

In step 204, a machine learning model for DTI prediction is generated. FIG. 5 provides a detailed description of step 204.

Turning to the flow chart of FIG. 3, a method of generating a database of phantom ligands is described. The procedure described subsequently uses homology relationships between proteins to map known ligands onto the structures of their homologs (experimental and homology models); the mapped ligands are referred to as "phantom ligands".

In step 300, proteins are obtained from a protein structure database. For each protein, a 3D model can be retrieved.

In step 302, the obtained proteins are clustered by sequence or domain. In one or more embodiments, a protein cluster is defined as any collection of two or more proteins that share similarity in primary sequence or three-dimensional topology (often referred to as fold). Protein clusters can be obtained directly from publicly available databases such as PDB, SCOP, CATH, PFAM, or UniProt. Alternatively, protein clusters can be created manually based on sequence similarity using protein sequence alignment and clustering tools (e.g., BLAST, CD-HIT, or UCLUST). Proteins with three-dimensional structural models can also be clustered by grouping otherwise unrelated proteins that share a common topology or fold.
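As a minimal sketch of the sequence-similarity clustering of step 302, the following greedy single-linkage scheme assigns each protein to the first cluster whose seed it matches. The gap-free identity measure and the 40% threshold are illustrative assumptions; real pipelines would use the alignment tools named above.

```python
def sequence_identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence (no gaps)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def cluster_proteins(seqs: dict, threshold: float = 0.4) -> list:
    """Greedily group proteins whose identity to a cluster seed exceeds the threshold."""
    clusters = []  # list of (seed id, set of member ids)
    for pid, seq in seqs.items():
        for seed, members in clusters:
            if sequence_identity(seqs[seed], seq) >= threshold:
                members.add(pid)
                break
        else:
            clusters.append((pid, {pid}))
    return [members for _, members in clusters]
```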

In step 304, one of the clusters is selected for further processing.

In step 306, a pairwise structural alignment is performed for all proteins in the selected cluster. Figure 7A shows an example of a structural alignment of the 3D structures of proteins. Three-dimensional (3D) structural alignment attempts to establish positional equivalences between two proteins. A structural alignment can be performed by applying rotational and/or translational transformations to the coordinates of one protein to minimize the average distance between equivalent residues. The structural alignment may be performed on the complete protein structure or on a sub-selection of residues, such as the residues around a single domain or ligand binding site. Selecting the ligand binding site residues for the 3D structural alignment is a preferred heuristic for mapping phantom ligands.
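The rotational/translational fitting described above is commonly done with the Kabsch algorithm. The sketch below (using NumPy, as an illustrative assumption rather than anything prescribed by this document) finds the least-squares rotation and translation between two sets of equivalent residue coordinates:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t such that R @ p + t ~ q
    for corresponding coordinate rows p in P and q in Q (Kabsch algorithm)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 covariance of centered coordinates
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against an improper rotation (reflection)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def rmsd(P, Q):
    """Root-mean-square deviation between paired coordinate rows."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return float(np.sqrt(((P - Q) ** 2).sum(axis=1).mean()))
```

The alignment RMSD returned by `rmsd` is exactly the kind of quality metric that step 308 folds into the projection confidence score.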

In step 308, phantom ligands are obtained by projecting each ligand onto its cluster peers. A confidence score may be obtained for each projection, consisting of individual scores that represent the confidence in different components of the projection of the ligand. An example of projecting ligands onto cluster peers is provided in FIG. 7A. The confidence score may be based on any quantitative measure of uncertainty in the heuristically defined structural representation selected to model the DTI interaction. At this step, the confidence score may include a homology model quality metric, such as percent sequence identity, sequence similarity, or QMEAN. The confidence score may also include a metric describing the quality of the structural alignment, such as the root mean square deviation of the local or global alignment. Multiple confidence scores may also be used.
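Given the transform fitted in step 306, the projection of step 308 amounts to applying the same rotation and translation to the ligand's atomic coordinates. The composite confidence formula below is purely illustrative (this document does not specify how the component scores are combined):

```python
import numpy as np

def project_ligand(ligand_xyz, R, t):
    """Carry a ligand's atomic coordinates across a structural alignment by
    applying the rotation R and translation t fitted between the two proteins."""
    return np.asarray(ligand_xyz, float) @ np.asarray(R, float).T + np.asarray(t, float)

def projection_confidence(alignment_rmsd: float, seq_identity: float) -> float:
    """Toy composite score: high sequence identity and a tight alignment
    (low RMSD) yield a confidence near 1. The exact formula is an assumption."""
    return seq_identity / (1.0 + alignment_rmsd)
```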

In step 310, the phantom ligands and associated confidence scores are stored in a phantom ligand database.

In step 312, it is determined whether there are additional clusters to process. If additional clusters remain, execution of the method may return to step 304 to select another cluster for processing as described in steps 306 and 310. If no additional clusters remain, execution of the method may terminate. Once the method of fig. 3 is terminated, the phantom ligand database may contain a comprehensive set of phantom ligands and associated confidence scores for all proteins processed as described.

Turning to the flow chart of FIG. 4, a method for generating a drug-target interaction (DTI) signature is described. The generated features include the features of the ligand and the features of the protein. These features can then be used to train a machine learning model for predicting drug-target interactions. Many combinations of features of proteins and ligands can be generated to ensure the availability of adequate training samples.

In step 400, a drug-target interaction, i.e., a combination of a ligand and a protein, is selected from the DTI database. The subsequent steps are performed for this selected combination of ligand and protein and can later be repeated for other combinations.

In step 402, features are generated for the selected ligand. These features may include ligand fingerprints and ligand descriptors. A fingerprint may capture the structure of the ligand in a descriptor format and may be based on the SMILES representation of the underlying molecule, using a fixed-length vector. For example, molecular fingerprints may include atom-pair, extended-connectivity, graph-based, topological-torsion, or pharmacophore fingerprints. Molecular weight, number of rotatable bonds, number of hydrogen bond donors, number of hydrogen bond acceptors, hydrophobicity, aromaticity, and functional group composition may, for example, be used as ligand descriptors. Molecular shape descriptors, such as ovality, geometric descriptors, branch descriptors, or chirality descriptors, may also be used.
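The scalar descriptors above can be illustrated with a toy computation. The sketch below derives molecular weight from an element-count dictionary and packs it with a few hand-supplied counts into a fixed-order feature list; the atomic masses are approximate averages, and all names are illustrative rather than part of any described implementation.

```python
# Approximate average atomic masses (g/mol); illustrative subset only.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_weight(composition):
    """composition: dict mapping element symbol -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

def ligand_descriptors(composition, rotatable_bonds, hbd, hba):
    # Pack scalar descriptors into a fixed-order feature list.
    return [molecular_weight(composition), rotatable_bonds, hbd, hba]

# Ethanol (C2H6O) as a worked example.
features = ligand_descriptors({"C": 2, "H": 6, "O": 1},
                              rotatable_bonds=0, hbd=1, hba=1)
```

In practice these scalars would be concatenated with a fingerprint vector produced by a cheminformatics toolkit rather than counted by hand.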

In step 404, a phantom ligand is retrieved from a database of phantom ligands for the selected protein.

In step 406, each phantom ligand is scored based on its similarity to the drug, or more specifically, to the selected ligand. Higher similarity results in a higher score. For example, a distance metric used to compare molecular fingerprints can be used to score the similarity between ligands, such as the Tanimoto distance, Dice distance, or cosine distance.
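Treating a fingerprint as the set of its on-bits, these similarity measures reduce to simple set arithmetic. The following sketch implements Tanimoto and Dice similarity (the corresponding distances are one minus these values); the example fingerprints are arbitrary.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|)."""
    total = len(a) + len(b)
    return 2.0 * len(a & b) / total if total else 0.0

fp_query   = {1, 4, 7, 9}   # on-bits of the selected ligand's fingerprint
fp_phantom = {1, 4, 8}      # on-bits of a candidate phantom ligand
score = tanimoto(fp_query, fp_phantom)  # 2 shared bits / 5 distinct bits = 0.4
```

The phantom ligand maximizing such a score would be the one retained in step 408.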

Steps 404 and 406 may be performed on all protein models (e.g., homology models or experimentally derived models) available for the selected protein.

In step 408, the phantom ligand that most closely resembles the selected ligand is selected for further processing.

In step 410, confidence vectors are generated for the DTI features, which consist of individual scores representing the confidence in different components of the DTI features and their representative phantom ligands. The confidence vector may include a confidence score representing the phantom ligand projection from step 308. The confidence vector may also include a confidence score for the similarity of fingerprints between the selected ligand and the most similar phantom ligand selected in step 408. Further, the confidence vector may include a score of the confidence of the selected DTI. The confidence of the selected DTI may be scored based on the source of the DTI data (e.g., different scores may be assigned depending on whether the DTI data was obtained using high-throughput screening, low-throughput screening, etc.).

In step 412, local features of the selected protein surrounding the most similar phantom ligand are obtained. The local features may include binding site features present in concentric shells of increasing radius, as shown in FIG. 7B. For each concentric shell, multiple descriptors are provided, such as an atom type descriptor that specifies the presence of atoms within the shell. For example, each shell radius may provide 70 atom type descriptors. Descriptors can also include, but are not limited to, the flexibility or rigidity of the binding site within the shell region, residue contacts within the shell region, and/or any other factor that represents biophysical and indirect binding site geometry. However, these features need not specify the exact locations or coordinates of the atoms. The local features may also include a graph representation of the ligand binding site that describes the distances between amino acids around the ligand binding site in a network format. Local features may also be defined by the shape of the pocket, corresponding to the void space not occupied by protein residues. Pocket voids may be determined by pocket detection methods, such as cavity filling, concavity, or solvent accessibility. Local features defined by the pocket void space may include the shape of the void space, including its volume, ovality, curvature, branching pattern, or spatial stability based on nearby residue dynamics. Local features defined by residues adjacent to the pocket void space may include, for example, the orientation of residues, the geometric availability of hydrogen bond donor and acceptor groups, hydrophobicity, aromaticity, or the geometric availability of pi-stacking interactions. Local features may also include a description of ligand binding channels, comprising solvent-exposed residues near the ligand binding site that are not in direct contact with the ligand in the stable bound state.
Ligand binding channels are expected to form transient interactions with ligands during the dynamic processes of binding and dissociation. Ligand binding channel features may include features similar to those of the pocket defined above, such as the orientation of residues, amino acid composition, and the availability of hydrogen bond donors and acceptors.
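A minimal sketch of the concentric-shell idea follows: atoms are counted by type within each shell around a phantom-ligand center, so the features record composition per shell without retaining exact coordinates. The data layout (`(atom_type, position)` tuples) and the radii are illustrative assumptions.

```python
import math
from collections import Counter

def shell_features(ligand_center, protein_atoms, radii):
    """Count atom types in concentric shells of increasing radius around a
    phantom-ligand center; exact atomic coordinates are not retained.
    protein_atoms: list of (atom_type, (x, y, z)) tuples."""
    feats = []
    prev_r = 0.0
    for r in radii:
        counts = Counter()
        for atom_type, pos in protein_atoms:
            d = math.dist(ligand_center, pos)
            if prev_r <= d < r:          # atom falls inside this shell
                counts[atom_type] += 1
        feats.append(dict(counts))
        prev_r = r
    return feats
```

Each per-shell dictionary would then be flattened into the fixed-length descriptor vector (e.g., 70 atom types per shell) described above.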

In step 414, global features of the structure and/or sequence of the selected protein are obtained by extending the concentric shells described in step 412 to larger radii. The shell radii corresponding to local features may be capped at a small threshold value, while shell radii corresponding to domain-level or global protein descriptions may use a larger threshold, or no distance threshold at all. Global features may also include descriptions of domains or folds, and may be derived from publicly available databases such as SCOP, CATH, or PFAM. Global features may also include features derived from the protein sequence, such as the presence of common sequence motifs. Global features may include a description of the protein folding state, such as the presence of intrinsically disordered regions, hinges, loops, ordered regions, or regulatory domains, along with associated biophysical properties. Global features can also be described in terms of distance from the ligand binding site.

In step 416, functional annotations of the selected protein are obtained. Functional annotations may be obtained from a protein annotation database. Functional annotations may include, for example, Enzyme Commission (EC) numbers, Gene Ontology (GO) annotations, or UniProt keywords. Functional annotations may also include the presence or absence of recorded position-specific protein properties, such as catalytic sites, post-translational modifications, disease associations, or genetic variations.

In step 418, features are generated for the selected protein. These features may include local features, global features, and/or functional annotations.

In step 420, it is determined whether additional DTIs remain to be processed. If additional DTIs remain, execution of the method may return to step 400 to select another DTI to process, as described in steps 402 through 418. If no additional DTIs remain, execution of the method may terminate. Once the method of FIG. 4 terminates, a comprehensive feature set for the ligands and proteins listed in the DTI database is available.

Turning to the flow diagram of FIG. 5, a method for generating a machine learning model for DTI prediction is described. Based on the DTI features obtained as described in FIG. 4, a machine learning model reflecting the compatibility between the protein environment and the ligand properties is obtained.

In step 500, ligand features and protein features are obtained. Obtaining the ligand and protein features may be performed as described in steps 402 through 418 of FIG. 4.

In step 502, ligands and proteins are filtered as a function of the confidence vector established in step 410 of FIG. 4. The filtering may apply a confidence threshold, such that only samples above the threshold are considered for further processing. A confidence function may convert the confidence vector into a single score for filtering. For example, the confidence function may convert the confidence scores to probabilities and apply Bayesian statistics to evaluate the combined probability. Alternatively, the confidence function may apply a separate cutoff threshold to each element of the confidence vector as a means of selecting which samples are suitable for machine learning. The confidence function threshold or equation may be set by automatically testing different combinations as hyperparameters of the machine learning algorithm.
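One possible confidence function, assuming each element of the confidence vector is already expressed as a probability, multiplies the elements into a single naive-Bayesian-style score and applies a cutoff. This is only one of the aggregation schemes the text allows, and the sample layout is hypothetical.

```python
def combined_confidence(vector):
    """Naively treat each element as an independent probability and
    combine by multiplication (a simple Bayesian-style aggregate)."""
    score = 1.0
    for p in vector:
        score *= p
    return score

def filter_samples(samples, threshold):
    """samples: list of (features, confidence_vector) tuples.
    Keep only samples whose combined confidence clears the threshold."""
    return [s for s in samples if combined_confidence(s[1]) >= threshold]
```

Raising the threshold in a retraining phase, as described in step 508, simply means calling `filter_samples` again with a larger cutoff.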

In step 504, the ligand and protein features are concatenated to generate a positive training sample.

In step 506, the ligand and protein features are randomly shuffled. The shuffled ligand and protein features are concatenated to generate negative training samples. This step may be repeated multiple times to evaluate different ratios of positive to negative training samples, e.g., 1:1, 1:5, 1:10, 1:19, or 1:20.
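The positive/negative construction can be sketched as follows: positives concatenate true ligand-protein feature pairs, while negatives concatenate each ligand with a randomly shuffled protein, preserving the overall frequency of each protein's features. The tuple-based feature representation and the `neg_ratio` parameter are illustrative.

```python
import random

def make_training_samples(pairs, neg_ratio=1, seed=0):
    """pairs: list of (ligand_features, protein_features) tuples known to
    interact. Positives keep the true pairing; negatives pair each ligand
    with a randomly drawn protein, preserving feature frequencies."""
    rng = random.Random(seed)
    positives = [(lig + prot, 1) for lig, prot in pairs]
    negatives = []
    proteins = [prot for _, prot in pairs]
    for _ in range(neg_ratio):          # e.g. neg_ratio=5 gives a 1:5 ratio
        shuffled = proteins[:]
        rng.shuffle(shuffled)
        negatives += [(lig + prot, 0)
                      for (lig, _), prot in zip(pairs, shuffled)]
    return positives + negatives
```

Because every ligand and protein appears equally often in the positive and negative sets, no single feature can by itself signal binding, which is the balancing property discussed later in the negative-randomization example.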

In step 508, a machine learning model for DTI prediction is trained using the positive and negative training samples. For example, a backpropagation-based learning algorithm may be used. In one or more embodiments, the training samples may be weighted based on their associated confidence vectors. In one or more embodiments, transfer learning is used to train the machine learning model more efficiently. Initially, the machine learning model may be trained by applying an initial confidence threshold in step 502. In a subsequent retraining phase, the confidence threshold may be increased to reduce the number of training instances and improve their quality. Additionally or alternatively, a subsequent retraining phase may restrict the training instances to select classes of drugs or targets. The machine learning model may be a supervised discriminative classification or regression model, such as a random forest, a support vector machine, a single-layer perceptron, or a multi-layer artificial neural network. Considering the number of training data points (100,000s to 10,000,000s) and the dimensionality of the training data features (1000s to 10,000s), artificial neural networks are particularly well suited for this task. In one embodiment, the artificial neural network takes the form of a fully connected network with a feature input layer, two hidden layers of, e.g., 512 and 256 nodes, respectively, and two output nodes corresponding to the interacting and non-interacting classes. In one embodiment, an artificial neural network with multiple hidden layers omits connections between input types, creating separate latent spaces representing ligand fingerprints, global protein features, local protein features, and protein functional features.
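A scaled-down, dependency-free sketch of such a fully connected network's forward pass is shown below (4 → 8 → 4 → 2 in place of the 512/256-node hidden layers, with random untrained weights), purely to illustrate the architecture's shape; training via backpropagation is omitted.

```python
import math, random

def dense(x, W, b, act):
    # One fully connected layer: out_j = act(b_j + sum_i x_i * W_ij).
    return [act(b[j] + sum(x[i] * W[i][j] for i in range(len(x))))
            for j in range(len(b))]

def relu(v):
    return max(0.0, v)

def softmax(z):
    m = max(z)                      # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mlp_forward(x, layers):
    """layers: list of (W, b); hidden layers use ReLU, the last feeds softmax."""
    for W, b in layers[:-1]:
        x = dense(x, W, b, relu)
    W, b = layers[-1]
    return softmax(dense(x, W, b, lambda v: v))

rng = random.Random(0)

def init(n_in, n_out):
    # Random, untrained weights; a stand-in for learned parameters.
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_in)]
    return W, [0.0] * n_out

# Scaled-down analogue of input -> 512 -> 256 -> 2 (here 4 -> 8 -> 4 -> 2).
net = [init(4, 8), init(8, 4), init(4, 2)]
probs = mlp_forward([0.1, 0.2, 0.3, 0.4], net)  # two-class softmax output
```

The two output values play the role of the interacting/non-interacting nodes; a real implementation would use a deep learning framework and the full feature dimensionality.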

Turning to the flow chart of FIG. 6, a method for predicting the interaction between a query protein and a query ligand is described. A machine learning model trained as described with reference to FIGS. 2-5 can be used to test the "compatibility" of a query protein and a query compound by applying the machine learning model to a set of DTI features corresponding to the query ligand and at least one known binding site of the query protein.

In step 600, a query protein and a query ligand are obtained. The query protein and the query ligand may be obtained from a user who wants to obtain a prediction of the interaction between the query protein and the query ligand.

In step 602, the likely binding sites and associated local features of the query protein are obtained as previously described in steps 404 through 412 of FIG. 4. Thus, one or more binding sites may be obtained from one or more experimental or homology models. Alternatively, the binding sites and associated local features may be obtained from the user, for example, if the user wishes to specify a particular binding site.

In step 604, global features and protein annotations of the query protein are obtained. Global features and protein annotations may be obtained as previously described in steps 414 and 416 of FIG. 4.

In step 606, features are generated for the query protein. These features may include local features, global features, and/or functional annotations.

In step 608, the ligand fingerprint and ligand descriptor are obtained as previously described in step 402 of FIG. 4.

In step 610, features are generated for the query ligand. These features may include ligand fingerprints and ligand descriptors.

In step 612, a machine learning model for DTI prediction is applied to the features of the query ligand and the features of the query protein to obtain a numerical score of the likelihood of interaction between the query ligand and the query protein.

The following paragraphs further illustrate embodiments of the present disclosure based on various examples. Those skilled in the art will appreciate that the present disclosure is not limited to these examples.

(i) Sample phantom ligand:

Turning to FIG. 7A, an example (700) of generating phantom ligands is shown. Three hypothetical protein structures are shown (top row). Two of the three hypothetical protein structures actually interact with ligands (top row, left and middle columns). The middle row shows pairwise structural alignments of the three hypothetical proteins. As a result of the structural alignment, the ligands can be projected onto the other proteins. Based on the similarity of the binding sites, a confidence score is assigned. The confidence score for an actual ligand-protein pairing is 1.0, whereas the confidence scores for phantom ligand-protein pairings are lower. The bottom row shows the generated phantom ligand-protein pairs as they may be stored in the phantom ligand database.

Fig. 7B shows a diagram of a concentric shell model (750) for obtaining binding site characteristics, in accordance with one or more embodiments. The concentric shells of increasing radius (r) enclose a central chemical structure that is considered part of the binding site. The inner shell mainly captures local features near the binding site, while the outer shell captures more and more global features. Features representing proteins may be based on concentric shell models, capturing local and global features of proteins without specifying precise 3D geometry (e.g., at the atomic level).

(ii) Sample confidence vector:

Embodiments of the present disclosure rely on heuristic procedures to augment drug-target interaction (DTI) data using a hypothetical three-dimensional structural representation. These hypothesized DTI representations can provide informative features for machine learning, improving models aimed at predicting protein-ligand interactions. Obtaining these approximate DTI representations for any given DTI data point requires several assumptions, outlined in steps 200 and 202 of FIG. 2. For example, the three-dimensional protein structure used to represent a DTI may come from a homology model rather than directly from experimental coordinates.

The confidence vector is composed of a number of metrics that describe the measurable uncertainty in the approximate DTI representation. These metrics are accumulated during the creation of the phantom ligand database (step 200) and the projection of the known DTI data onto the phantom ligand database (step 202). In one example, the confidence vector contains four elements: (1) the percent sequence identity between the homology model representing the DTI protein and its source template, (2) the RMSD of the alignment between the source structure of the phantom ligand and the homology model representing the DTI protein, (3) the Tanimoto similarity between the Morgan3 fingerprints of the DTI ligand and the phantom ligand template, and (4) the confidence in the DTI data point.

In this example, the drug-target interaction (DTI) database indicates that the ligand gefitinib interacts with the protein Aurora kinase A. The DTI database assigns a probability of 85% to the interaction based on the accuracy of the source biophysical experiment. There is no structure of the specific interaction between gefitinib and Aurora kinase A in the source three-dimensional structure database. In creating the phantom ligand database, a homology model of the Aurora kinase A protein was created from the close homolog Aurora kinase B, which shares 72.5% sequence identity. The closest molecule to gefitinib that successfully mapped to the homology model was erlotinib, sharing a Morgan3 fingerprint Tanimoto similarity of 0.372. The erlotinib phantom ligand position was approximated based on a structural alignment between the Aurora kinase A homology model and the erlotinib-EGFR complex crystal structure, yielding a ligand binding position RMSD. Thus, the corresponding confidence vector would be: [85%, 72.5%, 0.372, RMSD].
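Assembling the worked example's confidence vector is then a matter of ordering the four metrics consistently. In the sketch below, the alignment RMSD, whose numeric value is not reproduced in the text, is left as a caller-supplied placeholder; the function name is hypothetical.

```python
def confidence_vector(dti_confidence, seq_identity, fp_similarity, alignment_rmsd):
    """Assemble the four-element confidence vector from the worked example:
    [DTI data confidence, % sequence identity, Tanimoto similarity, RMSD].
    `alignment_rmsd` is a placeholder; its value is not given in the text."""
    return [dti_confidence, seq_identity, fp_similarity, alignment_rmsd]

vec = confidence_vector(0.85, 0.725, 0.372, alignment_rmsd=None)
```

A downstream confidence function (as in step 502) would consume this vector for filtering or sample weighting.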

(iii) sample training data and negative random permutation:

The described method focuses on augmenting drug-target interaction (DTI) pairs from a DTI database with a mixture of features obtained through deterministically mappable relationships and heuristically modeled features (the local structural features). Each row in the DTI database may be converted to a feature vector, as illustrated in FIG. 8, which shows the generation of training data (800) with ligand features (column labeled "ligand features") and protein features (columns labeled "global features", "functional features", and "local features") from the corresponding drug and target. Standard database lookups and protein identifier mapping can be used to retrieve the global and functional features of any protein. The local protein features are the result of the heuristically defined process outlined in this patent. They are modeled and therefore may not be exact. Each data row also has a corresponding confidence vector (described above but not shown in the figure) that can be used to apply hard cutoffs or weighting when training the machine learning model.

When a neural network is trained only with true positive drug-target interaction examples extracted from a drug-target interaction dataset, the model may learn to ignore core and apparent patterns of the interaction, as they do not provide any discriminative signal to the model. Furthermore, it may be necessary to control for a potentially significant bias towards highly represented drugs and targets in the drug-target interaction dataset. It may therefore be beneficial to sample the negative examples using the relative proportions of each drug and target. As a result, the model can learn patterns from both positive and negative examples. In FIG. 8, negative randomization randomly pairs the ligand component of a feature vector (white) with the protein component of another feature vector (three shades of gray). The generated negative examples can be used to train a classification engine while balancing the presence of individual ligand or protein features. Using individual ligand and/or protein features equally in both the positive and negative sets prevents the network from learning that any single feature is, by itself, particularly predictive of binding.

Embodiments of the present disclosure use phantom ligands to create local protein features for proteochemometric (PCM) modeling from protein-ligand datasets. More specifically, drug-target interaction (DTI) data is threaded onto a 3D atomic model of the protein-ligand complex to deduce local protein features. The mixed feature dataset of the PCM may include local (pocket), regional (domain), and global (whole protein) annotations and/or functional annotations.

Traditionally, training data for machine learning should be high-confidence 'model quality data'. It may therefore seem counterintuitive that, according to one or more embodiments, predictions (phantom ligands plus threading) are used to generate training data for a machine learning algorithm. In particular, if the heuristic approach were not accurate enough, conventional wisdom holds that introducing local features derived from the combination of phantom ligands and threading would likely introduce additional noise, thereby degrading the performance of a conventional DTI PCM. However, as shown in the performance comparison (900) of FIG. 9, a performance improvement is achieved by introducing the local features derived by the method described in this patent. Omitting these local features yields performance equivalent to that achieved by the DTI PCM alone.

Specifically, FIG. 9 shows a performance comparison that ranks the binding probabilities of small molecule ligands against 8717 proteins. To test the ranking, 100 molecules were randomly removed from the training data and used for testing. The graph plots the predicted ranking of the known interactions for these 100 random drugs. For example, in the absence of local features, only about 63% of the actual interactions were recovered among the top 300 of the 8717 proteins (top ~3.5%) (dashed line). Including the local features estimated by this procedure, the discovery rate increased to about 75% at the same threshold (solid line).
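The evaluation in FIG. 9 amounts to a discovery-rate (recall-at-k) calculation over the held-out drugs, which can be sketched as follows; the example ranks are invented for illustration.

```python
def discovery_rate(rankings, k):
    """rankings: for each held-out drug, the rank (1-based) of its known
    target in the model's sorted prediction list over all proteins.
    Returns the fraction of known interactions recovered in the top k."""
    hits = sum(1 for r in rankings if r <= k)
    return hits / len(rankings)

# e.g. 3 of 4 held-out interactions ranked within the top 300 of 8717
rate = discovery_rate([12, 250, 299, 1024], k=300)  # 0.75
```

Sweeping `k` from 1 to the number of proteins traces out curves like the dashed and solid lines of FIG. 9.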

Various embodiments of the present disclosure have one or more of the following advantages. Embodiments of the invention enable prediction of drug-target interactions (DTIs) using machine learning models that reflect the compatibility between the protein environment and the ligand attributes. Localized 3D features are created to represent the binding sites even when the interaction under consideration has no 3D information available.

Mapping known drug-target interactions onto homology models synthetically enhances the rich DTI training data with high-dimensional biophysical information, which is used to train deep neural networks. Thus, the method enables the use of comprehensive DTI databases, including entries for which the binding site of the drug on the protein is not necessarily known.

The method according to one or more embodiments does not require detailed knowledge of the biophysics of the protein-ligand interaction. Therefore, precise 3D coordinates of atoms are not required, enabling mapping of drug-target interactions onto protein pockets using DTI databases and homology models.

Embodiments of the present disclosure require a reduced feature space and allow for much larger training data than structure-based deep learning methods that rely on 3D atomic coordinates. Furthermore, embodiments of the present disclosure are found to generalize well. Preliminary performance evaluations showed that the above method performs about 1,000,000 times faster than docking simulations. The method according to one or more embodiments does not require human intervention; in particular, the most likely protein representation and binding site are recognized automatically. As discussed in Appendices A and B, the method according to one or more embodiments can be used as an accurate in-silico alternative or complement to other in-silico and/or experimental methods of predicting drug-target interactions.

Embodiments of the present disclosure may have various applications. For example, embodiments may be used for proteomic screening (e.g., performing toxicity prediction or phenotype deconvolution prediction), virtual screening, and general drug discovery and development.

Embodiments of the present disclosure may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in fig. 10A, the computing system (1000) may include one or more computer processors (1002), non-persistent storage (1004) (e.g., volatile memory such as Random Access Memory (RAM), cache memory), persistent storage (1006) (e.g., hard disk, optical drive such as a Compact Disk (CD) drive or Digital Versatile Disk (DVD) drive, flash memory, etc.), communication interfaces (1012) (e.g., a bluetooth interface, an infrared interface, a network interface, an optical interface, etc.), and many other elements and functions.

The computer processor(s) (1002) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (1000) may also include one or more input devices (1010), such as a touch screen, keyboard, mouse, microphone, touch pad, electronic pen, or any other type of input device.

Communication interface (1012) may include an integrated circuit for connecting computing system (1000) to a network (not shown) (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) such as the internet, a mobile network, or any other type of network) and/or another device, such as another computing device.

Further, the computing system (1000) may include one or more output devices (1008), such as a screen (e.g., a Liquid Crystal Display (LCD), a plasma display, a touch screen, a Cathode Ray Tube (CRT) monitor, a projector, or other display device), a printer, an external storage device, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be connected to the computer processor(s) (1002), the non-persistent storage device (1004), and the persistent storage device (1006), either locally or remotely. Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.

Software instructions in the form of computer-readable program code for carrying out embodiments of the disclosure may be stored in whole or in part, temporarily or permanently, on a non-transitory computer-readable medium such as a CD, DVD, storage device, diskette, tape, flash memory, physical memory or any other computer-readable storage medium. In particular, the software instructions may correspond to computer-readable program code which, when executed by the processor(s), is configured to perform one or more embodiments of the present disclosure.

The computing system (1000) in fig. 10A may be connected to or part of a network. For example, as shown in fig. 10B, the network (1020) may include a plurality of nodes (e.g., node X (1022), node Y (1024)). Each node may correspond to a computing system, such as the computing system shown in fig. 10A or a combined set of nodes may correspond to the computing system shown in fig. 10A. For example, embodiments of the present disclosure may be implemented on nodes of a distributed system connected to other nodes. As another example, embodiments of the present disclosure may be implemented on a distributed computing system having multiple nodes, where each portion of the present disclosure may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (1000) may be located at a remote location and connected to the other elements over a network.

Although not shown in fig. 10B, the nodes may correspond to blades in a server chassis that are connected to other nodes via a backplane. As another example, the node may correspond to a server in a data center. As another example, a node may correspond to a computer processor or a micro-core of a computer processor having shared memory and/or resources.

Nodes (e.g., node X (1022), node Y (1024)) in the network (1020) may be configured to provide services to the client device (1026). For example, the node may be part of a cloud computing system. The node may include functionality for receiving requests (1026) from client devices and transmitting responses (1026) to the client devices. Client device (1026) may be a computing system, such as the computing system shown in fig. 10A. Further, client device (1026) may include and/or perform all or a portion of one or more embodiments of the present disclosure.

The computing system or group of computing systems described in FIGS. 10A and 10B may include functionality to perform various operations disclosed herein. For example, the computing system(s) may perform communication between processes on the same or different systems. Multiple mechanisms employing some form of active or passive communication may facilitate data exchange between processes on the same device. Examples of communication mechanisms between such processes include, but are not limited to, files, signals, sockets, message queues, pipes, semaphores, shared memory, message passing, and memory-mapped files. Additional details regarding several of these non-limiting examples are provided below.

Based on the client-server network model, sockets may serve as interfaces or communication channel endpoints, enabling bidirectional data transfer between processes on the same device. First, following the client-server network model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object to associate it with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from the server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy handling other operations, may queue the connection request in a buffer until it is ready. An established connection informs the client process that communication may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is then transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process generates a reply that includes at least the requested data and transmits the reply to the client process. More generally, the data may be transferred as datagrams or as a character stream (e.g., bytes).
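The socket exchange described above can be condensed into a single-process sketch using Python's standard library, with the server running on a background thread; the request and reply payloads are arbitrary.

```python
import socket
import threading

def serve_once(srv):
    # Server: accept one connection, read the data request, send a reply.
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"reply:" + request)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # bind: associate the socket with an address
srv.listen(1)                       # listen for incoming connection requests
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))    # client: transmit the connection request
cli.sendall(b"get-data")            # data request
reply = cli.recv(1024)              # reply containing the requested data
cli.close()
t.join()
srv.close()
```

Binding to port 0 lets the OS pick a free port, which keeps the sketch runnable without configuration.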

Shared memory refers to the allocation of virtual memory space to provide a mechanism by which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. After creation, the initializing process mounts the shareable segment, which is then mapped into the address space associated with the initializing process. After mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes, which may also write data to and read data from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect the other processes that are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment is mapped into the address space of that authorized process. Often, at any given time, only one authorized process, in addition to the initializing process, may mount the shareable segment.
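In Python, the standard library's `multiprocessing.shared_memory` module provides a comparable mechanism: one process creates a named shareable segment and another attaches to it by name. The single-process sketch below plays both roles for illustration.

```python
from multiprocessing import shared_memory

# "Initializing process": create a named shareable segment.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"              # write data into the segment

# "Authorized process": attach to the same segment by name and read it.
peer = shared_memory.SharedMemory(name=seg.name)
data = bytes(peer.buf[:5])

peer.close()                        # each process detaches its mapping
seg.close()
seg.unlink()                        # the creator frees the segment
```

In a real multi-process setup, only the segment's name would need to be communicated to the authorized process, for example over a socket or pipe.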

Other techniques may be used to share data between processes, such as the various data described in this application, without departing from the scope of this disclosure. These processes may be part of the same or different applications and may be executed on the same or different computing systems.

Instead of, or in addition to, sharing data between processes, a computing system performing one or more embodiments of the present disclosure may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a Graphical User Interface (GUI) on a user device. Data may be submitted via the GUI by the user selecting one or more GUI widgets, or by inserting text and other data into GUI widgets, using a touchpad, a keyboard, a mouse, or any other input device. In response to the user selecting a particular item, information regarding the particular item may be obtained by the computer processor from persistent or non-persistent storage, and the content of the obtained data may be displayed on the user device.

As another example, a request to obtain data regarding a particular item may be sent to a server operably connected to the user device over a network. For example, a user may select a Uniform Resource Locator (URL) link within a web client of a user device, thereby initiating a hypertext transfer protocol (HTTP) request, or a request under another protocol, to a web host associated with the URL. In response to the request, the server may extract data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data, its content may be displayed on the user device in response to the user's selection. Continuing the example above, the data received from the server after the URL link is selected may be a hypertext markup language (HTML) web page that is rendered by the web client and displayed on the user device.
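The request/response exchange above can be sketched with Python's standard `http.server` and `urllib` modules. The item path and page contents are invented for the example; the server thread stands in for the web host associated with the URL.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# a stand-in web host that replies with an HTML page for the selected item
class ItemHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>item 42</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ItemHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# user device: selecting the URL link initiates an HTTP request; the reply
# carries the HTML page that the web client would render and display
url = f"http://127.0.0.1:{server.server_port}/items/42"
with urlopen(url) as response:
    html = response.read().decode()

server.shutdown()
print(html)
```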

Once data is obtained, such as by using the techniques described above or from storage, the computing system may extract one or more data items from the obtained data when performing one or more embodiments of the present disclosure. The extraction may be performed, for example, by the computing system in FIG. 10A as follows. First, the organizational schema (e.g., syntax, schema, layout) of the data is determined, which may be based on one or more of the following: location (e.g., bit or column location, nth token in a data stream, etc.); attributes (where an attribute is associated with one or more values); or a hierarchical/tree structure (consisting of layers of nodes at different levels of detail, such as nested packet headers or nested document sections). Then, in the context of the organizational schema, the raw, unprocessed stream of data symbols is parsed into a stream (or a layered structure) of tokens, where each token may have an associated token "type".

Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizational schema to extract one or more tokens (or nodes from a layered structure). For location-based data, the token(s) at the location(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
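The two-step process above — parse according to an organizational schema, then apply extraction criteria — can be sketched for attribute/value-based data using JSON as the schema. The payload and the "sku" attribute criterion are hypothetical.

```python
import json

# raw, unprocessed stream of data symbols
raw = '{"order": {"id": 7, "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}}'

# step 1: parse in the context of the organizational schema (JSON) into a
# layered structure of tokens
document = json.loads(raw)

# step 2: apply an extraction criterion - here, an attribute name - and
# collect the node values associated with attributes that satisfy it
def extract(node, attribute):
    """Walk the layered structure and return values whose key matches."""
    matches = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == attribute:
                matches.append(value)
            matches.extend(extract(value, attribute))
    elif isinstance(node, list):
        for item in node:
            matches.extend(extract(item, attribute))
    return matches

skus = extract(document, "sku")
print(skus)  # ['A1', 'B2']
```

A richer extraction criterion would be a query to a structured repository (e.g., XPath over XML), but the match-and-collect shape is the same.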

The extracted data may be used for further processing by the computing system. For example, the computing system of FIG. 10A may perform a data comparison when performing one or more embodiments of the present disclosure. A data comparison may be used to compare two or more data values (e.g., A, B). For example, one or more embodiments may determine whether A > B, A = B, A != B, A < B, etc. The comparison may be performed by submitting A, B, and an opcode specifying an operation related to the comparison to an Arithmetic Logic Unit (ALU) (i.e., a circuit that performs arithmetic and/or bitwise logical operations on two data values). The ALU outputs the numerical result of the operation and/or one or more status flags associated with the numerical result. For example, a status flag may indicate that the numerical result is a positive number, a negative number, zero, or the like. The comparison may be executed by selecting the appropriate opcode and then reading the numerical result and/or the status flags. For example, to determine whether A > B, B may be subtracted from A (i.e., A - B), and the status flags may be read to determine whether the result is positive (i.e., if A > B, then A - B > 0). In one or more embodiments, B may be regarded as a threshold, and A is considered to satisfy the threshold if A = B or if A > B, as determined using the ALU. In one or more embodiments of the present disclosure, A and B may be vectors, and comparing A with B entails comparing the first element of vector A with the first element of vector B, the second element of vector A with the second element of vector B, and so on. In one or more embodiments, if A and B are strings, the binary values of the strings may be compared.
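The subtract-and-inspect-the-sign approach and the element-wise vector comparison described above can be sketched as follows; the function names are illustrative.

```python
def satisfies_threshold(a, b):
    """True when a meets the threshold b, i.e. a = b or a > b.

    Mirrors the ALU approach: compute a - b and inspect the sign of the
    numerical result rather than comparing directly.
    """
    return a - b >= 0

def compare_vectors(a, b):
    """Element-wise threshold comparison of two equal-length vectors."""
    return [satisfies_threshold(x, y) for x, y in zip(a, b)]

print(satisfies_threshold(5, 3))        # True  (5 - 3 > 0)
print(satisfies_threshold(3, 3))        # True  (3 - 3 = 0)
print(satisfies_threshold(2, 3))        # False (2 - 3 < 0)
print(compare_vectors([1, 4], [2, 3]))  # [False, True]
```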

The computing system in FIG. 10A may implement and/or be connected to a data repository. For example, one type of data repository is a database. A database is a collection of information configured to facilitate data retrieval, modification, reorganization, and deletion. A database management system (DBMS) is a software application that provides an interface for a user to define, create, query, update, or manage a database.

A user or a software application may submit a statement or query to the DBMS. The DBMS then interprets the statement. The statement may be a select statement requesting information, an update statement, a create statement, a delete statement, etc. Moreover, the statement may include parameters specifying data or a data container (a database, a table, a record, a column, a view, etc.), identifier(s), conditions (comparison operators), functions (e.g., join, full join, count, average, etc.), sorting (e.g., ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, a reference, or an index file for reading, writing, deleting, or any combination thereof, in response to the statement. The DBMS may load data from persistent or non-persistent storage and perform computations in response to the query. The DBMS may return the result(s) to the user or software application.
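The statement types listed above — create, update, and select with a condition, function, and ordering — can be sketched with Python's built-in `sqlite3` DBMS interface. The table and column names are invented for the example.

```python
import sqlite3

# an in-memory database standing in for the DBMS-managed data repository
conn = sqlite3.connect(":memory:")

# create statement: define a data container (a table)
conn.execute("CREATE TABLE measurements (sensor TEXT, value REAL)")

# update path: insert records into the table
conn.executemany(
    "INSERT INTO measurements (sensor, value) VALUES (?, ?)",
    [("a", 1.0), ("a", 3.0), ("b", 2.0)],
)

# select statement combining a function (AVG), grouping, and ascending sort;
# the DBMS interprets it, performs the computation, and returns the results
rows = conn.execute(
    "SELECT sensor, AVG(value) FROM measurements "
    "GROUP BY sensor ORDER BY sensor ASC"
).fetchall()

conn.close()
print(rows)  # [('a', 2.0), ('b', 2.0)]
```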

The computing system of FIG. 10A may include functionality to provide raw and/or processed data (such as the results of comparisons and other processing). For example, providing data may be accomplished through various presentation methods. In particular, the data may be provided through a user interface supplied by a computing device. The user interface may include a GUI that displays information on a display device, such as a computer monitor or a touch screen on a handheld computing device. The GUI may include various GUI widgets that organize what data is shown and how the data is presented to the user. Further, the GUI may present the data directly to the user, e.g., as text conveying the actual data values, or as a visual representation of the data rendered by the computing device, such as through a visual data model.

For example, the GUI may first obtain a notification from a software application requesting that a particular data object be provided within the GUI. Next, the GUI may identify a data object type associated with a particular data object, for example, by obtaining data from data attributes within the data object that identify the data object type. The GUI may then determine any rules specified for displaying the data object type, for example, rules specified by the software framework for the data object class or rules specified according to any local parameters defined by the GUI for rendering the data object type. Finally, the GUI may obtain data values from particular data objects and present visual representations of the data values within the display device according to specified rules for the type of the data object.
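The type-to-rule lookup described above can be sketched as a small dispatch table. The data-object types and formatting rules here are hypothetical stand-ins for rules a software framework or the GUI's local parameters might specify.

```python
# hypothetical rendering rules keyed by data-object type
RENDER_RULES = {
    "currency": lambda value: f"${value:,.2f}",
    "percent":  lambda value: f"{value:.1%}",
}

def render(data_object):
    """Identify the object's type, look up its rule, and produce the
    visual representation of its data value; fall back to plain text."""
    rule = RENDER_RULES.get(data_object["type"], str)
    return rule(data_object["value"])

print(render({"type": "currency", "value": 1234.5}))  # $1,234.50
print(render({"type": "percent", "value": 0.075}))    # 7.5%
```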

The data may also be provided by various audio methods. In particular, the data may be presented in an audio format and provided as sound through one or more speakers operably connected to the computing device.

Data may also be provided to the user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be conveyed to a user using vibrations, with a predetermined duration and intensity, generated by a handheld computing device.

The above description of functions presents only a few examples of the functions performed by the computing system of FIG. 10A and/or the client device of FIG. 10B. Other functions may be performed using one or more embodiments of the present disclosure.

While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the disclosure as disclosed herein. Accordingly, the scope of the present disclosure should be limited only by the attached claims.

The embodiments and examples set forth herein are presented to best explain the present invention and its particular application and to thereby enable those skilled in the art to make and utilize the invention. Those skilled in the art, however, will recognize that the foregoing description and examples have been presented for the purpose of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed.

