Self-adaptive determination method for industrial control data characteristic reordering algorithm

Document No.: 168157    Publication date: 2021-10-29

Reading note: This technique, a self-adaptive determination method for an industrial control data feature reordering algorithm, was designed and created by 刘学君, 孔祥旻, 张小妮, 沙芸, 晏涌, 王文晖, 曹雪莹 and 李凯丽 on 2021-07-30. The invention relates to a self-adaptive determination method for an industrial control data feature reordering algorithm: feature selection is performed on a data set with different feature reordering algorithms; each reordered data set is trained with a machine learning algorithm while accuracy, recall and time are recorded, and preset evaluation indexes are computed for the data set to generate a corresponding index data set; a decision tree for selecting the optimal reordering algorithm is built over the index data set with a machine learning algorithm; and the index data of an input target data set are processed with the decision tree to obtain the feature reordering algorithm matching the target data set. The method automatically selects the feature reordering algorithm that best matches the target data set, improving the accuracy and efficiency of data-set feature reordering and underpinning anomaly detection on industrial control data.

1. A self-adaptive determination method for an industrial control data characteristic reordering algorithm is characterized by comprising the following steps:

reordering the features of the data set based on different feature selection algorithms;

training the reordered data set based on a machine learning algorithm to record the accuracy, the recall rate and the time, and calculating a preset evaluation index for the data set to generate a corresponding index data set;

establishing a decision tree for selecting an optimal reordering algorithm by using a machine learning algorithm based on the index data set;

and judging the index data of the input target data set based on the decision tree to obtain a characteristic reordering algorithm matched with the target data set.

2. The adaptive determination method for the industrial control data feature reordering algorithm of claim 1, wherein training the reordered data set based on a machine learning algorithm to record Acc, Recall and time, and calculating a preset evaluation index for the data set to generate a corresponding index data set comprises:

and calculating the data set based on the related parameters of the preset data set, and calculating the feature selection result based on the feature selection result parameters to generate the index data set.

3. The method of adaptive determination of an industrial control data feature reordering algorithm of claim 2, wherein the data-set-related parameters comprise: data set dimensionality, number of classes, imbalance of the data volume distribution across classes, KL divergence, data fitting degree, variance and variance inflation factor.

4. The method of claim 2, wherein the feature selection result parameters include a number of feature selections and an imbalance of feature score distributions.

5. The method of claim 1, wherein said building a decision tree using a machine learning algorithm to select an optimal reordering algorithm based on the set of indicator data comprises:

respectively calculating information entropy after dividing the index data set according to each feature, and selecting the feature with the largest information gain as a data dividing node to divide the index data set;

recursively processing all the partitioned sub-data sets to select optimal data partitioning characteristics to partition the sub-data sets.

6. The method of adaptively determining an industrial control data feature reordering algorithm of claim 5, wherein said building a decision tree using a machine learning algorithm to select an optimal reordering algorithm based on said set of indicator data further comprises:

and pruning the decision tree to improve the classification speed and the classification precision of the decision tree.

Technical Field

The invention belongs to the technical field of data security, and particularly relates to a self-adaptive determination method for an industrial control data feature reordering algorithm.

Background

In the industrial control field, as Internet technology has matured, industrial control networks communicate with the Internet ever more often, which leaves them highly vulnerable.

Existing anomaly detection methods for real-time monitoring of industrial control data mostly adopt machine learning and neural network algorithms. However, actual industrial control environments are complex: the storage order of collected data is random, and the correlation among the collected data dimensions is uncertain. For example, adjacent dimensions may belong to unrelated parameters or unrelated devices, while related devices or parameters may be physically far apart. These problems increase the learning difficulty of anomaly detection algorithms, whose learning efficiency therefore needs further improvement. In practice, industrial control environments differ and so do the data sets collected from them; searching for a suitable feature selection algorithm for every data set involves heavy repeated work, whereas directly matching a feature selection algorithm to data sets of the same type would eliminate that repetition and save time.

Disclosure of Invention

To address the low efficiency of the prior art, the invention provides a self-adaptive determination method for an industrial control data feature reordering algorithm. The method improves the accuracy and efficiency of data-set feature reordering and provides a guarantee for anomaly detection on industrial control data.

The invention discloses a self-adaptive determination method of an industrial control data characteristic reordering algorithm, which comprises the following steps:

reordering the features of the data set based on different feature selection algorithms;

training the reordered data set based on a machine learning algorithm to record the accuracy, the recall rate and the time, and calculating a preset evaluation index for the data set to generate a corresponding index data set;

establishing a decision tree for selecting an optimal reordering algorithm by using a machine learning algorithm based on the index data set;

and judging the index data of the input target data set based on the decision tree to obtain a characteristic reordering algorithm matched with the target data set.

Further, training the reordered data set based on a machine learning algorithm to record Acc, Recall and time, and calculating a preset evaluation index for the data set to generate a corresponding index data set, wherein the method comprises the following steps:

and calculating the data set based on the related parameters of the preset data set, and calculating the feature selection result based on the feature selection result parameters to generate the index data set.

Further, the data-set-related parameters include: data set dimensionality, number of classes, imbalance of the data volume distribution across classes, KL divergence, data fitting degree, variance and variance inflation factor.

Further, the feature selection result parameters include the number of feature selections and the degree of imbalance of feature score distribution.

Further, the establishing a decision tree for selecting an optimal reordering algorithm by using a machine learning algorithm based on the index data set comprises:

respectively calculating information entropy after dividing the index data set according to each feature, and selecting the feature with the largest information gain as a data dividing node to divide the index data set;

recursively processing all the partitioned sub-data sets to select optimal data partitioning characteristics to partition the sub-data sets.

Further, the establishing a decision tree for selecting an optimal reordering algorithm by using a machine learning algorithm based on the index data set further comprises:

and pruning the decision tree to improve the classification speed and the classification precision of the decision tree.

The beneficial effects of the invention are as follows: feature selection is performed on the data set with different feature reordering algorithms; the data set and the corresponding feature selection results are then evaluated with preset evaluation indexes to generate a corresponding index data set; a decision tree for selecting the optimal reordering algorithm is built over the index data set with a machine learning algorithm; and the index data of an input target data set are processed with the decision tree to obtain the feature reordering algorithm matching the target data set. The feature reordering algorithm best matching the target data set can thus be selected automatically, improving the accuracy and efficiency of data-set feature reordering and underpinning anomaly detection on industrial control data.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

FIG. 1 is a flow chart of a method for adaptively determining a reordering algorithm for industrial control data features provided in accordance with an exemplary embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.

Referring to fig. 1, an embodiment of the present invention provides a method for adaptively determining an industrial control data feature reordering algorithm, which specifically includes the following steps:

101. reordering the features of the data set based on different feature selection algorithms;

Common reordering methods include feature selection methods, regularization methods, random forest methods, top-level selection methods, and the like. Different feature selection methods are applied here to find suitable methods for different industrial control data sets, including:

The Pearson Correlation Coefficient measures whether two data series lie on a line; it measures the linear relationship between variables [28]. If two variables change consistently, the two sets of results are more similar.
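The Pearson coefficient described above can be sketched in a few lines of NumPy; the helper name `pearson` is illustrative, not part of the invention:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()          # center both variables
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

x = np.array([1.0, 2.0, 3.0, 4.0])
print(pearson(x, 2 * x + 1))    # perfectly linear relationship -> 1.0
```

A coefficient near +1 or -1 indicates a strong linear relationship; values near 0 indicate no linear relationship (nonlinear dependence may still exist).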

The maximal information coefficient (MIC) measures the linear or nonlinear strength of association between two dimensions X and Y. It computes the probabilities of the two dimensions jointly taking different values to obtain an information quantity. MIC has generality: with a sufficiently large sample size, it can cover all functional relationships in a balanced way. MIC also has fairness: with a sufficiently large sample size, it assigns similar coefficients to different types of correlation corrupted by similar levels of noise.

The distance correlation coefficient dCor(X, Y) measures the dependence between the two dimensions X and Y: the closer it is to 0, the more independent they are; conversely, larger values indicate stronger correlation.
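A minimal sketch of dCor(X, Y), implemented from its definition with double-centered pairwise distance matrices (the function name `dcor` is an assumption for illustration):

```python
import numpy as np

def dcor(x, y):
    """Distance correlation between two 1-D samples (0 means independent)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double-center each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / dvar)) if dvar > 0 else 0.0

x = np.linspace(-1, 1, 50)
print(dcor(x, x ** 2))    # nonlinear dependence still yields a value well above 0
```

Unlike Pearson correlation, dCor also detects nonlinear dependence, which is why the text treats it as a separate index.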

L1 regularization adds the L1 norm of the coefficient vector θ to the loss function as a penalty term, which drives the coefficients of weak features to exactly 0. L1 regularization therefore tends to produce very sparse models, a property that makes it a good feature selection method.

L2 regularization adds the L2 norm of the coefficient vector to the loss function. Because the coefficients enter the L2 penalty term quadratically, L2 behaves quite differently from L1: most notably, L2 regularization averages out the coefficient values. For correlated features this means they receive more nearly equal coefficients. The objective has the same form as for L1 regularization, with the L1 penalty replaced by the L2 norm of the coefficients θ.
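The contrast between L1 and L2 regularization as feature selectors can be illustrated with scikit-learn (a sketch assuming scikit-learn is available; the synthetic data and alpha values are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
y = 3.0 * X[:, 0] + 0.1 * rng.standard_normal(1000)   # only feature 0 matters

lasso = Lasso(alpha=0.5).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=0.5).fit(X, y)   # L2 penalty

print("L1 coefficients:", np.round(lasso.coef_, 3))   # weak features driven to exactly 0
print("L2 coefficients:", np.round(ridge.coef_, 3))   # weak features shrunk, not zeroed
```

The zeroed L1 coefficients give a feature ranking directly, which is the property the text exploits for feature selection.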

A random forest consists of several CART decision trees. Each node in a decision tree is a condition on some feature, chosen to split the data set in two according to different response variables; for classification problems, Gini impurity is usually used.

When training a decision tree, one can compute how much each feature decreases the tree's impurity. For a forest of decision trees, the average impurity decrease of each feature can be computed and taken as the feature's selection value.
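This mean-decrease-in-impurity measure is exposed by scikit-learn's random forest as `feature_importances_`; a small sketch on synthetic data (the data and parameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 3))
y = (X[:, 0] > 0).astype(int)      # class depends only on feature 0

# max_features=None lets every split consider all features
forest = RandomForestClassifier(
    n_estimators=50, max_features=None, random_state=0).fit(X, y)

# mean decrease in Gini impurity, averaged over all trees (sums to 1)
print(np.round(forest.feature_importances_, 3))
```

The informative feature receives nearly all of the importance mass, while the two noise features receive almost none.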

Another common feature selection method is to directly measure the influence of each feature on the model accuracy. The main idea is to disturb the order of the feature values of each feature and measure the influence of order variations on the accuracy of the model. It is clear that for non-important variables, the scrambling does not affect the accuracy of the model much, but for important variables, the scrambling reduces the accuracy of the model.
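The shuffle-and-measure idea just described corresponds to permutation importance; a sketch using scikit-learn's `permutation_importance` (synthetic data for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 3))
y = (X[:, 0] > 0).astype(int)      # only feature 0 carries signal

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# shuffle each column in turn and measure the resulting drop in accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(np.round(result.importances_mean, 3))
```

Shuffling the important feature collapses the model's accuracy, while shuffling the noise features barely changes it, exactly as the text argues.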

Stability selection is a newer method based on a combination of subsampling and a selection algorithm, which may be regression, SVM, or other similar methods.

The feature selection algorithm is run on different data subsets and feature subsets, repeated many times, and the feature selection results are finally aggregated. For example, one can count the frequency with which a feature is considered important (the number of times it is selected as an important feature divided by the number of times it is tested in a subset).

The recursive feature elimination (RFE) top-level selection algorithm is used for feature selection and belongs to the wrapper class of feature selection algorithms. RFE trains a machine learning model over multiple rounds; after each round, the features corresponding to the smallest weight coefficients are eliminated, and the next round is trained on the new feature set.
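A minimal RFE sketch with scikit-learn, eliminating one feature per round (the estimator choice and synthetic data are illustrative):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

# train, drop the lowest-weight feature, retrain, until two features remain
selector = RFE(LogisticRegression(), n_features_to_select=2, step=1).fit(X, y)
print(selector.support_)    # boolean mask of the retained features
```

The retained-feature mask is the output the text later counts as the "number of features selected" index.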

102. Training the reordered data set based on a machine learning algorithm to record the accuracy, the recall rate and the time, and calculating a preset evaluation index for the data set to generate a corresponding index data set;

for different data sets and different feature reordering algorithms, the evaluation indexes are used for calculating the data set and feature selection results, such as:

and calculating the data set based on the related parameters of the preset data set, and evaluating the feature selection result based on the feature selection result parameters to generate an index data set.

The data-set-related parameters include: data set dimensionality, number of classes, imbalance of the data volume distribution across classes, KL divergence, data fitting degree, variance and variance inflation factor.

The feature selection result parameters comprise feature selection quantity and feature score distribution imbalance degree.

The data set dimensionality reflects how complex the data set is. Generally, the more dimensions a data set has, the more information it contains.

Number of classes: different data sets have different numbers of classes, and the number of classes directly influences how well an algorithm detects anomalies in a data set, so it also serves as an evaluation index of the data set.

Imbalance of the data volume distribution across classes: a data set is divided into several classes whose sizes are not necessarily equal, so the class distribution is unbalanced; the degree of imbalance is measured by the imbalance ratio:

IR = N / P

where IR is the imbalance ratio, N = n_majority is the size of the majority class, and P = n_minority is the size of the minority class. The larger the imbalance ratio, the more unbalanced the class distribution of the sample data, which tends to degrade classification accuracy.
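The imbalance ratio can be computed directly (a minimal sketch; the helper name is illustrative):

```python
from collections import Counter

def imbalance_ratio(labels):
    """IR = size of the largest class / size of the smallest class."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

labels = [0] * 90 + [1] * 10    # 90 majority samples, 10 minority samples
print(imbalance_ratio(labels))  # -> 9.0
```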

The KL divergence measures the accumulated difference between the information entropy of real events and the entropy obtained by a theoretical fit, and can be used to measure the distance between the distributions of two dimensions. When the two dimensions are identically distributed, their KL divergence is zero; as the difference between the two distributions grows, so does their KL divergence:

D_KL(p ‖ q) = Σ_i p(x_i) log( p(x_i) / q(x_i) )

where p(x_i) is the probability of x_i in the first dimension and q(x_i) is the probability of x_i in the second dimension.

Degree of data fitting: the similarity of the change trend of the data set is also an embodiment of data repetition, and the data fitting degree can measure the repetition success of the data trend on the data:

D = X′

where R_squared is the data fitting degree, D is the result of differentiating the data set X column by column, m is the number of data per column after differentiation, n is the number of data dimensions, d is a datum in the differentiated data set, and r(a, b) is the number of times a occurs in b. For industrial control data, the data fitting degree measures the redundancy of the data set from the perspective of trends.

Variance measures the dispersion of a random variable or a set of data in probability theory and statistics. In practice, when the population mean is hard to obtain, sample statistics are used in place of population parameters; after correction, the sample variance is computed as:

S² = (1 / (n − 1)) Σ_{i=1}^{n} (X_i − X̄)²

where S² is the sample variance, X is the variable, X̄ is the sample mean, and n is the number of data in the dimension. To measure an industrial control data set, the variance S_i² of each dimension is computed, the per-dimension variances are assembled into a new one-dimensional array [S₁², S₂², S₃², ...], and the variance of that array is then computed as the overall variance of the data set, thereby measuring its dispersion.
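This variance-of-variances measure can be sketched as follows (assuming the corrected sample variance, i.e. ddof = 1, throughout):

```python
import numpy as np

def dataset_dispersion(X):
    """Variance of the per-dimension sample variances of a 2-D data set."""
    per_dim = np.var(X, axis=0, ddof=1)          # S_i^2 for each dimension
    return float(np.var(per_dim, ddof=1)), per_dim

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
overall, per_dim = dataset_dispersion(X)
print(per_dim)    # both columns have sample variance 4.0
print(overall)    # identical per-dimension variances -> overall variance 0.0
```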

The variance inflation factor (VIF) checks the linear correlation among the features of a data set, so that strongly independent features can be selected to increase the interpretability of the model. The variance inflation factor is computed as:

VIF_i = 1 / (1 − R_i²) = M_ii / |P|

where R_i is the multiple correlation coefficient of the i-th dimension X_i with all the other dimensions X_j (j = 1, 2, ..., k; j ≠ i), |P| is the determinant of the correlation coefficient matrix, and M_ii is the determinant of the matrix that remains after deleting the i-th row and i-th column of the correlation coefficient matrix P.
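A sketch of the VIF computation via the inverse correlation matrix, using the identity VIF_i = (P⁻¹)_ii (the synthetic data are illustrative):

```python
import numpy as np

def vif(X):
    """VIF_i = 1 / (1 - R_i^2): the diagonal of the inverse correlation matrix."""
    P = np.corrcoef(X, rowvar=False)     # correlation matrix of the columns
    return np.diag(np.linalg.inv(P))

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 3))       # three independent features
print(np.round(vif(X), 2))               # all close to 1: no collinearity

X[:, 2] = X[:, 0] + 0.1 * rng.standard_normal(1000)   # make feature 2 collinear
print(vif(X)[2] > 10)                    # collinearity inflates the VIF
```

A common rule of thumb treats VIF above roughly 5-10 as a sign of problematic collinearity, which is when a weakly independent feature would be dropped.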

Each feature selection algorithm is based on its own rules, so result parameters are computed separately for each feature selection method and used as evaluation indexes of the feature-reordered data set:

Number of features selected: a feature selection method screens the features of a data set, and different data sets retain different numbers of features. For example, when the Lasso algorithm is used on the Mississippi State University data set, only 4 features are retained and 22 are deleted; when the same feature selection method is used on the CICDDoS2019 data set, 34 features are retained and 4 are deleted. Since the proportion of features a given method deletes differs markedly across data sets, the number of features retained is directly related to the characteristics of the data set and can measure differences between data sets.

Feature score distribution imbalance: the features selected by a feature selection method are those whose feature coefficient has a nonzero absolute value after the feature scores are computed. Among the features with nonzero scores, however, the distribution of the scores also reflects characteristics of the data set: high-scoring features influence the response relatively strongly, low-scoring features relatively weakly, and inspecting the score distribution indicates how many features in a data set strongly influence the response.

Each data set is feature-reordered with each method in turn, and the evaluation indexes are then computed. Taking all indexes as one axis and, for each data set, the index results of the different methods as the other, an index data set for data processed by feature reordering methods can be constructed. For example, with 10 feature reordering algorithms in total and n data sets, there are 10 × n samples, each containing 9 evaluation indexes. Such a data set suffices for the subsequent construction and validation of the decision tree.

103. Establishing a decision tree for selecting an optimal reordering algorithm by using a machine learning algorithm based on the index data set;

By changing the purity metric, decision trees can fit data with complex nonlinear models for regression analysis. Just as linear regression models use a corresponding loss function, decision trees used for regression use an impurity metric.

Each data set evaluation index serves as a node in constructing the decision tree. The data sets are run through the machine learning algorithms to compute recall and accuracy; from several groups of input data, the group with the best recall and accuracy is taken as the final result, and the most suitable feature selection algorithm is chosen for each data set. The steps are as follows:

Feature split node: traverse all features, compute the change in information entropy before and after dividing the data set, and select the feature with the largest entropy change, i.e., the feature with the largest information gain, as the division node:

H(X)=-∑p(x)logp(x)

where p(x) is the probability that element x occurs in the dimension; the information entropy decreases as p(x) approaches 0 or 1. When the probability is 1, the entropy is 0 and the data are of a single type. When selecting features, the feature with the largest information gain is chosen, moving the divided data toward greater purity as far as possible. Information gain is therefore an indicator of how much more ordered the data have become.
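Information entropy and information gain can be sketched as follows (using log base 2, a common convention that the formula above leaves unspecified):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """H(X) = -sum p(x) * log2 p(x) over the label distribution."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels, left, right):
    """Entropy reduction from splitting `labels` into `left` and `right`."""
    n = len(labels)
    after = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - after

labels = [0, 0, 1, 1]
print(entropy(labels))                             # maximally mixed -> 1.0
print(information_gain(labels, [0, 0], [1, 1]))    # perfect split -> 1.0
```

A split that leaves each side pure recovers the full entropy as gain; a split that leaves both sides as mixed as the parent gains nothing.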

Decision tree construction: first compute the information entropy before the data set is divided; next, traverse all candidate dividing features, compute the information entropy after dividing the data set by each feature, and select the feature with the largest information gain as the division node; finally, process all the divided sub-data sets recursively, repeating these steps over the remaining features to select the optimal dividing feature for each sub-data set.

Recursion generally ends under one of two conditions: either all features have been used, or the information gain from further division is small enough, i.e., the divided data sets belong as nearly as possible to a single class.

Owing to noise and other factors, the values of some features do not match their samples' classes, so some branches and leaves of a decision tree generated from such data are erroneous. Near the leaves especially, the shrinking sample size amplifies the interference of irrelevant factors, and the generated decision tree may overfit. Unreliable branches are therefore deleted by pruning, which improves the classification speed and accuracy of the whole decision tree. Decision tree pruning strategies comprise pre-pruning and post-pruning.
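Post-pruning can be illustrated with scikit-learn's cost-complexity pruning parameter `ccp_alpha` (one of several possible pruning strategies; the noise level and alpha value are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
X = rng.standard_normal((300, 4))
# true rule depends on feature 0, plus 20% label noise to induce overfitting
y = ((X[:, 0] > 0) ^ (rng.random(300) < 0.2)).astype(int)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
# cost-complexity (post-)pruning: larger ccp_alpha removes unreliable branches
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

print("unpruned nodes:", full.tree_.node_count)   # many branches fit the noise
print("pruned nodes:  ", pruned.tree_.node_count) # far fewer after pruning
```

The unpruned tree grows deep branches to fit the noisy labels near the leaves, exactly the failure mode the text describes; pruning collapses them.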

104. And judging the index data of the input target data set based on the decision tree to obtain a characteristic reordering algorithm matched with the target data set.

After the decision tree is established, only evaluation index data of one data set needs to be input, and a feature selection algorithm matched with the evaluation index data is output.
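An end-to-end sketch of this step on a toy index data set (the 9-index layout follows the text; the labeling rule, data, and algorithm names here are purely illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical index data set: each row holds the 9 evaluation indexes computed
# for one (data set, reordering algorithm) pair; labels name the best algorithm.
rng = np.random.default_rng(7)
index_data = rng.random((40, 9))
best_algorithm = np.where(index_data[:, 0] > 0.5, "Lasso", "RFE")  # toy rule

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(index_data, best_algorithm)

# For a new target data set, compute its 9 indexes and query the tree.
target_indexes = rng.random((1, 9))
print(tree.predict(target_indexes)[0])
```

Once trained, the tree turns algorithm selection into a single `predict` call on a data set's index vector, which is the adaptivity the method claims.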

Thus, a data set index system is established from inherent characteristics of the data sets and from the results of different feature selection algorithms on them. An index data set is then constructed according to this index system, and finally a decision tree is built from the index data set with machine learning. This identifies a suitable feature reordering algorithm for each type of data set, and the best-adapted algorithm can be found from a data set's index parameters alone. The method can therefore select the feature reordering algorithm that most improves the accuracy of the anomaly detection algorithm, or that best reduces its running time, improving the accuracy and efficiency of feature selection for industrial control data sets and simple data sets alike.

The method is described below in conjunction with a specific embodiment, in which 11 data sets, including CICDDoS2017, the Mississippi data set, the Singapore water plant data set, the mail data set, the water quality data set, and the mnist data set, are used for data processing. The data sets are as follows:

Name                             Number of dimensions    Number of classifications
Singapore water plant data set   124                     2
CICDDoS2017                      79                      6
Mississippi data set             27                      8
Self-built oil depot data set    126                     11
csgo data set                    95                      2
Mail data set                    3000                    2
Water quality data set           10                      2
Wine data set                    12                      2
Mobile phone price data set      21                      4
mnist data set                   784                     10
Music genre data set             26                      10

Then establishing an index data set comprises:

201. downloading a data set and carrying out data cleaning;

202. respectively preprocessing the 11 data sets with the feature reordering algorithms, each of which generates a sorted data set from the original data set;

203. and performing classified calculation on the sorted data by using an anomaly detection algorithm to obtain the accuracy and the running time. In order to verify the effectiveness of the algorithm, four classification algorithms, namely a random forest algorithm, an SVM algorithm, an AlexNet neural network and a ResNet neural network are adopted for carrying out experiments;

204. calculating indexes of the sorted data sets;

205. taking the indexes from step 204 as features and, with reference to the results of step 203, marking the optimal item of each data set as its label;

206. finishing the construction of the index data set.

Constructing the decision tree model and analyzing the results: the prepared data set is divided into a training set and a test set. The training set comprises the Singapore water plant data set, the self-built oil depot data set, the mnist data set, the mail data set, the water quality data set, the mobile phone price data set, the music genre data set, and the csgo data set; the test set comprises CICDDoS2017, the Mississippi data set, and the wine data set. The decision tree is generated from the training portion of the constructed index data set, and the generated model is then used to select feature reordering algorithms for the test set.

Four anomaly detection algorithms, SVM, Random-Forest, ResNet, and AlexNet, are adopted, and all data sets are classified with each of the four. The following table shows the results of the random forest algorithm:

The experimental results show that the classification results of the 4 algorithms are strongly consistent: accuracy is basically consistent, and while the time consumption differs somewhat, the time consumption of the different feature reordering algorithms is basically consistent. Here, Accuracy is the accuracy of the random forest classification result, Time is the classification time, and the boldfaced entries are the optimal-item labels. After the training set is confirmed, it is input into the decision tree to complete the decision tree's construction. Data from the test set are then input to obtain the decision tree's results on the test set. The following table shows the accuracy and time results of classification with the random forest algorithm after preprocessing the 3 test sets with different feature reordering methods. The boldfaced entries are the decision results of the decision tree model; comparing the selected results with those of the other methods shows that the feature reordering algorithm the model selected for the three test sets is optimal or near-optimal.

After the adaptive algorithm selects the optimal feature reordering algorithm for reordering, the result of the anomaly detection algorithm is compared with the result before reordering, and the result is shown in the following table:

the added rough part in the table is a relatively good item before and after reordering, and it can be seen that the result after reordering is generally superior to that before ordering in 11 data sets and 10 data set features in total. Therefore, the method provided by the embodiment of the invention can select the feature reordering algorithm which can improve the accuracy of the anomaly detection algorithm or has the best effect of reducing the anomaly detection algorithm in use, and improves the accuracy and efficiency of the feature selection algorithm for the industrial control data set and the simple data set.

In the self-adaptive determination method for an industrial control data feature reordering algorithm provided by the embodiment of the invention, the industrial control data set and several simple data sets are first preprocessed with different feature reordering algorithms, and an evaluation data set is constructed from the basic attributes of the preprocessed data sets. Classification is then performed on the preprocessed data sets using machine learning algorithms, and each data set is labeled with the corresponding reordering algorithm according to the results. A training set and a test set are built from the evaluation data set and the labels, and a decision tree is generated and verified. The method can thus select the feature reordering algorithm that best improves the accuracy of the anomaly detection algorithm, or best reduces its running time, improving the accuracy and efficiency of feature selection for industrial control data sets and simple data sets.

It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by hardware under the instruction of a program; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be realized in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.

In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".

The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
