Method for configuring image evaluation device, image evaluation method and image evaluation device

Document No.: 590131    Publication date: 2021-05-25

Reading note: this technique, "Method for configuring image evaluation device, image evaluation method and image evaluation device", was created on 2019-09-16 by Florian Büttner, Markus Michael Geipel, Gaby Marquardt, Daniela Seidel and Christoph Tietz. Its main content comprises: In order to configure an image evaluation device (BA), a plurality of training images (TPIC), each associated with an object type (OT) and an object sub-type (OST), are fed into a first neural network module (CNN) in order to identify image features. Furthermore, the training output data sets (FEA) of the first neural network module (CNN) are fed into a second neural network module (MLP) in order to identify the object type from the image features. According to the invention, the first and second neural network modules (CNN, MLP) are jointly trained such that the training output data sets (OOT) of the second neural network module (MLP) reproduce, at least approximately, the object type (OT) associated with the respective training image (TPIC).
Furthermore, for the respective object type (OT1, OT2): the training images (TPIC) associated with that object type (OT1) are fed into the trained first neural network module (CNN); the training output data sets (FEA1, FEA2) generated by the first neural network module for the respective training images (TPIC) are associated with the object sub-types (OST) of those training images (TPIC); and, in accordance with this sub-type association, sub-type recognition modules (BMLP1, BMLP2) are configured for the image evaluation device (BA) in order to recognize the object sub-types (OST) from image features.

1. Method for configuring an image evaluation device (BA) for determining an object type (OT, OT1, OT2) and an object sub-type (OST) of an imaged object (OBJ), wherein

a) feeding a plurality of training images (TPIC), each associated with an object type (OT, OT1, OT2) and an object sub-type (OST), into a first neural network module (CNN) in order to identify image features,

b) feeding the training output data sets (FEA) of the first neural network module (CNN) into a second neural network module (MLP) in order to identify object types from image features,

c) the first and second neural network modules (CNN, MLP) are jointly trained such that the training output data sets (OOT) of the second neural network module (MLP) reproduce at least approximately the object type (OT) associated with the respective training image (TPIC), and

d) for the respective object type (OT1, OT2):

-feeding the training images (TPIC) associated with that object type (OT1) into the trained first neural network module (CNN),

-associating the training output data sets (FEA1, FEA2) of the first neural network module generated for the respective training images (TPIC) with the object sub-types (OST) of those training images (TPIC), and

-configuring a sub-type recognition module (BMLP1, BMLP2) for the image evaluation device (BA) in accordance with the sub-type association, in order to recognize an object sub-type (OST) from image features.

2. Method according to claim 1, characterized in that correlation parameters (CP1, CP2) concerning the correlation between image features and object sub-types (OST) are derived from the sub-type associations, and the sub-type recognition modules (BMLP1, BMLP2) are configured according to the correlation parameters (CP1, CP2).

3. The method according to any of the preceding claims, characterized in that a probabilistic classifier is used as sub-type recognition module (BMLP1, BMLP2).

4. The method according to any of the preceding claims, characterized in that the sub-type recognition module (BMLP1, BMLP2) has a linkage structure corresponding to that of the second neural network module (MLP).

5. Method according to any of the preceding claims, characterized in that the object sub-types (OST) to be identified for an object type (OT, OT1, OT2) form an ordered sequence, which is specified by preset sequence information, and that the sub-type recognition modules (BMLP1, BMLP2) are configured according to the sequence information.

6. The method according to claim 5, characterized in that an ordinal regression is performed according to the sequence information when configuring the sub-type recognition modules (BMLP1, BMLP2).

7. The method according to any of claims 3 to 6, characterized in that a prior distribution of configuration parameters of the probabilistic classifier (BMLP1, BMLP2) is derived from the learning parameters of the trained second neural network module (MLP).

8. Method according to any of the preceding claims, characterized in that configuration parameters of the sub-type recognition modules (BMLP1, BMLP2) are set according to training parameters of the first neural network module (CNN) and/or of the second neural network module (MLP).

9. An image evaluation method for determining an object type (OT, OT1, OT2) and an object sub-type (OST) of an imaged object (OBJ), wherein

a) feeding the image to be evaluated (PIC) into a first neural network module (CNN) trained according to any one of the preceding claims,

b) feeding the resulting output data set (FEA) of the trained first neural network module (CNN) into a second neural network module (MLP) trained according to any of the preceding claims,

c) deriving an object type (OT1) from the resulting output data set of the trained second neural network module (MLP),

d) selecting a sub-type recognition module (BMLP1) configured according to any of the preceding claims specifically for the derived object type (OT1),

e) deriving, by means of the selected sub-type recognition module (BMLP1), an object sub-type (OST) associated with the output data set (FEA) of the trained first neural network module (CNN), and

f) outputting the derived object type (OT1) and the derived object sub-type (OST).

10. The image evaluation method according to claim 9, characterized in that, when deriving the associated object sub-type (OST), a respective distance between the output data set (FEA) of the trained first neural network module (CNN) and each of a plurality of stored training output data sets of the first neural network module (CNN) is derived,

a training output data set having a smaller distance than other training output data sets is selected, and

the object sub-type associated with the selected training output data set is derived as the associated object sub-type (OST).

11. The method according to any one of the preceding claims, characterized in that

the first neural network module (CNN), the second neural network module (MLP) and/or the sub-type recognition module (BMLP1, BMLP2) comprise an artificial neural network, a recurrent neural network, a convolutional neural network, a multi-layer perceptron, a Bayesian neural network, an autoencoder, a deep learning architecture, a support vector machine, a data-driven trainable regression model, a k-nearest-neighbour classifier, a physical model and/or a decision tree.

12. An image evaluation device (BA) for deriving an object type (OT, OT1, OT2) and an object sub-type (OST) of an imaged object (OBJ), designed to carry out the method according to any one of the preceding claims.

13. A computer program product arranged to perform the method according to any one of claims 1 to 11.

14. A computer-readable storage medium having the computer program product of claim 13.

Background

For the automatic evaluation of image recordings, for example in medical diagnostics, in the monitoring of engineering or non-engineering systems and/or in the field of visual sensor devices of autonomous systems, machine learning methods are increasingly used. By means of such methods, a learning-based image evaluation device can be trained to automatically recognize objects displayed in an image or to associate them with an object type.

Thus, for example, the image evaluation device of a medical diagnostic apparatus may be trained to identify cell or tissue types in microscopic images, or to associate imaged structures with them. In particular, the image evaluation device may be trained to associate an image recording of a biological cell (such as a blood cell) with a cell type as the object type and with a development stage of that cell type as the object sub-type.

For such training, a large number of preset images are usually used, which have already been associated with object types and possibly object sub-types. By means of such training images, the learning-based image evaluation device can be trained to reproduce the predefined object type and possibly the object sub-type as well as possible, that is to say, the identified cell type (for example) and its development stage should deviate as little as possible from the predefined object type and object sub-type. For carrying out such training, a large number of known learning methods are available, in particular methods of supervised learning.

In practice, however, some object types to be identified often occur significantly more rarely than others. Some biological cell types, in particular pathological cell types, account for a fraction of less than 0.005% of all cells. Accordingly, for rare object types there are typically significantly fewer training images available than for more common object types. However, if only a small number of training images are available for training, the training results and thus the recognition accuracy deteriorate significantly with conventional training methods. Furthermore, sub-types of rare object types are often difficult to identify or distinguish.

Precisely in the medical field, however, it is important that rare pathological patterns and development stages are distinguished as correctly as possible. It is known that, for better classification of rare object types, specific image features can be derived individually by experts and trained for accordingly. Alternatively or additionally, training may be continued until enough training images of rare object types have been evaluated. With these approaches, however, the required training overhead can increase significantly, especially in the presence of rare object types.

Disclosure of Invention

It is an object of the invention to provide a method for configuring an image evaluation device, an image evaluation method and an image evaluation device, which enable more efficient training.

This object is achieved by a configuration method having the features of claim 1, by an image evaluation method having the features of claim 9, by an image evaluation apparatus having the features of claim 12, by a computer program product having the features of claim 13 and by a computer-readable storage medium having the features of claim 14.

In order to configure the image evaluation device to derive an object type and an object sub-type of an imaged object, a plurality of training images, each associated with an object type and an object sub-type, are fed into the first neural network module in order to identify image features. In particular, biological cells can be used as objects, their respective cell type can be derived as the object type, and their respective development stage or cell state can be derived as the object sub-type. In addition, the training output data sets of the first neural network module are fed into the second neural network module in order to identify the object type from the image features. According to the invention, the first and second neural network modules are jointly trained such that the training output data sets of the second neural network module reproduce, at least approximately, the object type associated with the respective training image. Further, for the respective object type:

-feeding a training image associated with the object type into the trained first neural network module,

-associating a training output data set of the first neural network module generated for the respective training image with the object sub-type of the respective training image, and

-configuring a sub-type recognition module for the image evaluation device in accordance with the sub-type association, in order to identify object sub-types from the image features.
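The configuration steps listed above can be sketched roughly as follows. This is a minimal illustration in Python; all helper names (configure_subtype_modules, trained_cnn, fit_classifier) and the toy data are hypothetical stand-ins for the trained modules described in the text, not the patented implementation itself:

```python
from collections import defaultdict

def configure_subtype_modules(training_images, trained_cnn, fit_classifier):
    # Group the training images by their associated object type, run each
    # group through the trained first network module to obtain features,
    # and configure one sub-type recognition module per object type from
    # the resulting (feature, sub-type) association pairs.
    pairs_by_type = defaultdict(list)
    for image, object_type, sub_type in training_images:
        pairs_by_type[object_type].append((trained_cnn(image), sub_type))
    return {ot: fit_classifier(pairs) for ot, pairs in pairs_by_type.items()}

# Toy stand-ins: the "CNN" returns the raw pixel value as its feature,
# and the "classifier" simply stores the association pairs.
modules = configure_subtype_modules(
    training_images=[([1], "OT1", "s1"), ([2], "OT1", "s2"), ([3], "OT2", "s1")],
    trained_cnn=lambda img: img[0],
    fit_classifier=lambda pairs: pairs,
)
print(modules["OT1"])  # → [(1, 's1'), (2, 's2')]
```

The essential point of the split is visible here: the feature extractor is shared across all object types, while each object type receives its own, separately configured sub-type module.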

By dividing the configuration into data-driven training and an object-type-specific configuration, many disadvantages of conventional training methods caused by a lack of object-type-specific training data can be alleviated.

The configuration method according to the invention therefore generally proves particularly effective for deriving object sub-types of rare object types, and generally in the case of strongly inhomogeneous object frequency distributions.

The image evaluation method according to the invention for deriving the object type and the object sub-type of the imaged object can be implemented by means of the first and second neural network modules trained as above and the sub-type recognition module configured as above. Here, the image to be evaluated is fed into a trained first neural network module, and the resulting output data set of the trained first neural network module is fed into a trained second neural network module. The object type is then derived from the resulting output data set of the trained second neural network module. In addition, a sub-type recognition module configured specifically for the derived object type is selected, by means of which an object sub-type associated with the output data set of the trained first neural network module is derived. Finally, the derived object type and the derived object sub-type are output.

Due to the subdivision of the image evaluation method according to the invention, corresponding to the configuration method according to the invention, the image evaluation method generally operates reliably in deriving object sub-types of rare object types, in particular in the case of strongly non-uniform object frequency distributions.

In order to carry out the configuration method according to the invention and/or the image evaluation method according to the invention, a corresponding image evaluation apparatus, a computer program product and a computer-readable storage medium are proposed.

The configuration method according to the invention, the image evaluation device according to the invention and the computer program product according to the invention may be implemented or realized, for example, by means of one or more processors, application-specific integrated circuits (ASICs), digital signal processors (DSPs) and/or so-called field-programmable gate arrays (FPGAs).

Advantageous embodiments and developments of the invention are specified in the dependent claims.

Preferably, correlation parameters relating to the correlation between image features and object sub-types can be derived from the sub-type association, and the sub-type recognition module can be configured according to these correlation parameters. Such correlations or correlation parameters make it possible to determine, by means of statistical criteria, the object sub-type that correlates best or sufficiently well with given, not yet classified image features.

According to an advantageous embodiment of the invention, a probabilistic classifier, in particular a so-called Bayesian classifier, can be used as the sub-type recognition module. In this case, estimated and/or conditional probabilities and uncertainties may be used as correlation parameters. A probabilistic classifier, in particular a Bayesian classifier, can associate given image features or other features with the class to which they belong with the highest probability.

Advantageously, the sub-type recognition module may have a linkage structure corresponding to that of the second neural network module. If a multi-layer perceptron is used as the second neural network module, the corresponding sub-type recognition module can accordingly be constructed as a multi-layer Bayesian perceptron. With a corresponding linkage structure, the learning parameters and hyper-parameters of the second neural network module can advantageously be reused when configuring the sub-type recognition module.

In addition, if the object sub-types to be identified for an object type form an ordered sequence specified by preset sequence information, the sub-type recognition module can be configured according to this sequence information. Such a sequence can be given, for example, by the temporal order of the development stages of a cell type.

On the basis of this sequence information, a so-called ordinal regression can preferably be performed when configuring the sub-type recognition module. In particular, the activation function of the final neural layer of the sub-type recognition module can be adapted and/or a so-called probit model can be used. Furthermore, activation thresholds of the activation function can be learned.

According to another embodiment of the invention, the prior distribution of the configuration parameters of the probabilistic classifier can be derived from the learning parameters of the trained second neural network module. Here, those parameters of the second neural network module that are set or optimized by training are referred to as learning parameters. For deriving the configuration parameters of the probabilistic classifier, in particular Markov chain Monte Carlo methods as well as variational or other Bayesian inference methods can be used. In this way, the information about the learning parameters and their value distributions obtained through training can advantageously be reused for configuring the sub-type recognition module.

In addition, configuration parameters of the sub-type recognition module may be set according to training parameters of the first and/or second neural network modules. The training parameters may be hyper-parameters and other parameters controlling the training and/or parameters obtained by the training.

Furthermore, in the configuration method according to the present invention and the image evaluation method according to the present invention, the first neural network module, the second neural network module, and/or the sub-type recognition module may include an artificial neural network, a recurrent neural network, a convolutional neural network, a multi-layer perceptron, a bayesian neural network, an auto-encoder, a deep learning architecture, a support vector machine, a data-driven trainable regression model, a k-nearest neighbor classifier, a physical model, and/or a decision tree.

Drawings

Embodiments of the present invention are explained in detail below with reference to the drawings. Here, the following are illustrated in schematic views:

Fig. 1 shows the configuration of an image evaluation device according to the invention, and

Fig. 2 shows the evaluation of an image by means of the configured image evaluation device.

Detailed Description

Fig. 1 shows a configuration of an image evaluation apparatus BA according to the invention for identifying an object imaged onto an image to be evaluated, and in particular for deriving an object type and an object subtype of the respective imaged object.

The image evaluation apparatus BA has one or more processors PROC for carrying out the method steps of the image evaluation apparatus BA and one or more memories MEM coupled to the processors PROC for storing data to be processed by the image evaluation apparatus BA.

In the present exemplary embodiment, a medical image evaluation device BA is described as an exemplary application of the invention, for evaluating microscopic image recordings of biological cells as the objects to be identified. From one or more images of a cell, its cell type is to be derived as the object type, and the development stage or cell state of that cell type as the object sub-type. For this purpose, the image evaluation device BA is configured by means of a machine learning method, as described below.

In the context of this configuration, a large number of pre-classified training images TPIC, here microscopic images of biological cells, are read from a database DB by the image evaluation device BA. Pre-classified means here that the training images TPIC have previously been associated, for example by an expert, with an object type OT (i.e. a cell type) and an object sub-type OST (i.e. a development stage or cell state). The respective object type OT and object sub-type OST can be specified in the form of a type or sub-type identifier and can be read from the database DB by the image evaluation device BA in association with the relevant training image TPIC.

The read training images TPIC are fed into the first neural network module CNN of the image evaluation device BA. The training images TPIC are each represented by an image data set. The first neural network module CNN preferably comprises convolutional neural layers forming a deep convolutional neural network. Such convolutional neural networks are particularly suitable for efficiently identifying image features within the fed-in images. Such image features may in particular describe edges, corners, faces or other geometric properties contained in the image, in particular local geometric properties or relationships between image elements.

The first neural network module CNN is to be trained to extract or generate, as output data, image features of the fed-in image that are particularly well suited for object type recognition. In this case, the image features of the respective fed-in image are represented by the respective resulting output data set of the first neural network module CNN. Its output data sets can therefore be understood as image data reduced to the image features essential for object type recognition. Such image features are commonly referred to as "features".

For this training, the training output data set FEA generated from the training image TPIC by the first neural network module CNN is fed into the second neural network module MLP of the image evaluation apparatus BA. The respective training output data sets FEA are each generated by processing the respective training images TPIC.

Preferably, the second neural network module MLP comprises a multi-layer perceptron (MLP). Such a perceptron is particularly suitable for classification tasks, here for classifying objects according to image features.

The first and second neural network modules CNN and MLP together form a deep neural network DNN. The network modules CNN and MLP can thus also be understood as sub-networks of the higher-level deep neural network DNN.
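The composition of the two sub-networks into one deep network can be illustrated roughly as follows. This is a minimal numpy sketch in which simple matrix operations stand in for the convolutional module CNN and the perceptron module MLP; all layer sizes, weights and inputs are hypothetical toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extractor(images, W):
    # Stand-in for the first module CNN: maps image data to a feature
    # vector (the "features" FEA). A real implementation would use
    # convolutional layers instead of a single ReLU layer.
    return np.maximum(images @ W, 0.0)

def type_classifier(features, V):
    # Stand-in for the second module MLP: maps features to a softmax
    # distribution over object types (the output OOT).
    logits = features @ V
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(size=(64, 16))   # parameters of the feature extractor
V = rng.normal(size=(16, 3))    # parameters of the type classifier, 3 types
images = rng.normal(size=(5, 64))

# Chaining both modules yields the overall deep network DNN.
probs = type_classifier(feature_extractor(images, W), V)
print(probs.shape)  # one probability distribution over object types per image
```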

According to the invention, the second neural network module MLP is to be trained to recognize the preset object types OT from suitable image features.

In order to achieve the above training goals of the first neural network module CNN and the second neural network module MLP, the two network modules CNN and MLP are trained together. The aim here is that the training output data set OOT generated by the second neural network module MLP from the fed-in training output data set FEA reproduces as accurately as possible the object type OT previously associated with the fed-in training image TPIC. The respective training output data sets OOT of the second neural network module MLP are each generated by processing the respective training image TPIC.

"training" is to be understood generally as the optimization of the mapping of the input data set (here TPIC) of a parameterized system model (e.g. a neural network) to its output data (here to the training output data set OOT). During the training phase, the mapping is optimized according to preset, learned and/or to-be-learned criteria. For example, in a classification model, classification errors, analysis errors and/or prediction errors may be used as criteria. In the present case, the following is sought through the co-training of the network modules CNN and MLP: the training output data set OOT coincides as frequently and/or as well as possible with the previously associated object type OT.

For this purpose, the learning parameters of the network modules CNN and MLP are set by training such that the training output data set OOT, output by the second neural network module MLP as the object type, reproduces the preset object type OT of the training image TPIC as well as possible. The learning parameters may include, for example, the networking structure of the neurons of the network modules CNN and MLP and/or the weights of the connections between these neurons.

The sought optimization of the learning parameters can be achieved, for example, by determining, using a suitable metric, the deviation D between the training output data set OOT and its corresponding preset object type OT. Here, the deviation D represents a classification error of the neural network DNN. If the training output data set OOT and the preset object type are represented by vectors, the deviation D may for example be determined as a multi-dimensional Euclidean distance or a weighted distance of these vectors. The resulting deviation D is fed back to the neural network DNN, i.e. to the co-training of the network modules CNN and MLP, as indicated by the dashed arrow in Fig. 1.

The neural network DNN is trained with the fed-back deviation D in order to minimize this deviation, i.e. to reproduce the preset object type OT as well as possible through the output object type OOT. For this purpose, the learning parameters can be varied by standard optimization methods until the deviation D is minimal or close to minimal. For example, a gradient descent method may be used for the minimization. A large number of standard machine learning methods are available for carrying out this optimization.
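The joint optimization described above can be sketched as follows. This is a minimal numpy illustration in which a small ReLU layer stands in for the convolutional module CNN and a softmax layer for the perceptron MLP; cross-entropy serves as the deviation measure D (the text also mentions Euclidean distance as an option) and plain gradient descent as the optimizer. All sizes, learning rates and data are hypothetical toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated clusters stand in for training images
# TPIC with preset object types OT.
X = np.vstack([rng.normal(-2, 1, size=(50, 4)), rng.normal(2, 1, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Joint learning parameters of both modules (hypothetical sizes).
W = rng.normal(scale=0.1, size=(4, 8))   # "CNN" stand-in
V = rng.normal(scale=0.1, size=(8, 2))   # "MLP" stand-in

def forward(X):
    H = np.maximum(X @ W, 0.0)                   # features FEA
    logits = H @ V
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return H, e / e.sum(axis=1, keepdims=True)   # output OOT

for step in range(200):
    H, P = forward(X)
    # Deviation D: softmax cross-entropy gradient between OOT and OT.
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0
    G /= len(y)
    # Feed the deviation back into BOTH modules (joint training).
    dV = H.T @ G
    dH = G @ V.T
    dH[H <= 0] = 0.0
    dW = X.T @ dH
    V -= 0.5 * dV
    W -= 0.5 * dW

_, P = forward(X)
accuracy = (P.argmax(axis=1) == y).mean()
print(accuracy)  # close to 1.0 on this easily separable toy problem
```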

Through the above-described co-training, on the one hand the first network module CNN is trained to recognize or generate image features that are particularly well suited for object type recognition, and on the other hand the second network module MLP is simultaneously trained to derive the associated object type from these image features.

This data-driven form of training the network modules CNN and MLP can often be applied very successfully to classification problems for which a large number of training images are available. However, as already mentioned above, it is not uncommon that, particularly for recognizing object sub-types of rare object types, too few relevant training images are available for efficient training of a deep neural network.

For this reason, for determining the object sub-type, according to the invention a probabilistic classifier is used instead of the second neural network module MLP as the object-type-specific sub-type recognition module. In general, such a probabilistic classifier can perform a classification even on the basis of a relatively small amount of reference data. For example, respective probabilities for the presence of the respective object sub-types may be derived from the reference data, and the object sub-type with the highest or a high probability may be output as the classification result.

The sub-type classification of an image by the corresponding sub-type recognition module is based on the image features relevant for its classification; for generating these image features, the trained first neural network module CNN is to a certain extent reused. This reuse of the trained first neural network module CNN is illustrated in Fig. 1 by a dashed arrow.

Advantageously, a dedicated sub-type recognition module is configured specifically for each object type, in order to recognize the object sub-types from the image features.

For this purpose, the training images TPIC (OT, OST) are supplied to an assignment module SPL of the image evaluation device BA and are assigned by it to an object-type-specific configuration branch according to the respectively associated object type (here OT1, OT2, ...). For clarity, only two object types, OT1 and OT2, are explicitly shown in Fig. 1.

Thus, the training images TPIC (OT1, OST) associated with the object type OT1 are fed to the trained first neural network module CNN, which thereby generates image features FEA1 specific to the object type OT1 as training output data sets. The image features FEA1 of the respective training image TPIC (OT1, OST) are associated with the object sub-type OST of that training image and are transmitted in this association to the probability classifier BMLP1, specific to the object type OT1, as sub-type recognition module. On the basis of these associations, the probability classifier BMLP1 is configured to recognize object sub-types of the object type OT1.

The probability classifier BMLP1 is preferably constructed as a Bayesian neural network. Advantageously, this Bayesian neural network has a linkage structure between its neurons, or a corresponding architecture, that corresponds to the second neural network module MLP. In the present embodiment, the probability classifier BMLP1 is thus implemented as a multi-layer Bayesian perceptron.

The configuration of the probability classifier BMLP1 can be implemented, for example, in such a way that the object-type-specific image features FEA1 of all or almost all training images TPIC (OT1, OST) of the object type OT1 are stored in the probability classifier BMLP1 in association with the respective object sub-type OST of the respective training image. For the rare object type OT1, this requires relatively little storage overhead. Furthermore, unlike in conventional neural network training, essentially all of the training information about the association between image features and object sub-types is retained.

With the aid of the probability classifier BMLP1 configured in this way, an object sub-type can be identified in a simple manner, for example by comparing the image features of the image to be classified with all stored image features of this object type and deriving the object sub-type with the smallest or a small deviation as the classification result. For common object types, in contrast, it is generally not possible to extend this comparison across all training images with acceptable overhead, owing to their typically very large number of training images.
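The distance-based lookup just described can be sketched as follows; a minimal Python illustration with hypothetical helper names, feature vectors and sub-type labels (here two toy developmental stages of one rare object type):

```python
import numpy as np

def configure_subtype_module(features, subtypes):
    # "Configuration" here simply stores the object-type-specific training
    # features FEA1 together with their sub-type labels OST (the sub-type
    # association described in the text).
    return {"features": np.asarray(features, float), "subtypes": list(subtypes)}

def classify_subtype(module, feature):
    # Compare the query feature with all stored features of this object
    # type and return the sub-type with the smallest deviation (distance).
    d = np.linalg.norm(module["features"] - np.asarray(feature, float), axis=1)
    return module["subtypes"][int(np.argmin(d))]

# Hypothetical stored feature vectors of one rare object type OT1.
bmlp1 = configure_subtype_module(
    [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]],
    ["stage_1", "stage_1", "stage_2", "stage_2"],
)
print(classify_subtype(bmlp1, [0.95, 0.95]))  # → stage_2
```

Because only the training images of one rare object type are stored, the exhaustive comparison stays cheap, which is exactly the point made in the paragraph above.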

Alternatively or additionally, statistical correlations between the image features FEA1 and the object sub-types OST are derived from their association and described or represented by object-type-specific correlation parameters CP1. From these correlations or correlation parameters CP1, the probability classifier BMLP1 can be configured. For this purpose, a large number of known standard methods can be used, by means of which the conditional probabilities, uncertainties and/or probability mappings necessary for configuring a Bayesian neural network are derived from the correlations as configuration parameters.

Furthermore, particularly when classifying the development stages of a cell type, sequence information about the necessary or possible orderings of the development stages can be evaluated by the probability classifier BMLP1 as additional information or as an auxiliary condition in the sub-type identification. Preferably, a so-called ordinal regression is performed on the basis of this sequence information when configuring the probability classifier BMLP1.
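One common form of ordinal regression is a cumulative-link model, in which learned thresholds encode the ordering of the stages; the following numpy sketch uses a logistic link (the text also mentions a probit model as an alternative), and all threshold and score values are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ordinal_probs(score, thresholds):
    # Cumulative-link model: P(stage <= k) = sigmoid(theta_k - score).
    # The ordered thresholds theta_k encode the sequence information about
    # the developmental stages; per-stage probabilities are differences of
    # adjacent cumulative probabilities.
    cum = sigmoid(np.asarray(thresholds, float) - score)
    cum = np.concatenate([[0.0], cum, [1.0]])
    return np.diff(cum)

thresholds = [-1.0, 0.5, 2.0]   # hypothetical learned activation thresholds
p = ordinal_probs(1.2, thresholds)
print(p)  # probability distribution over 4 ordered stages
```

The ordering constraint is built in: a higher score shifts probability mass monotonically towards later stages, which is what distinguishes ordinal regression from an unordered softmax classifier.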

In addition, a so-called prior distribution AD of the correlation parameters CP1 of the probability classifier BMLP1 can be derived from the learning parameters of the second neural network module MLP set by the training, i.e. here from the optimized weights of the neural connections and/or from their statistical distribution. In this way, as indicated by the dashed arrow in fig. 1, the learning parameters of the second neural network module MLP and other training parameters can be used as hyperparameters HP for configuring the probability classifier BMLP1.
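How such a prior distribution AD might be obtained from the optimized weights can be sketched as a simple Gaussian fit to the weight statistics; the function name and the choice of a Gaussian are illustrative assumptions:

```python
import numpy as np

def prior_from_trained_weights(weight_matrices):
    """Derive a simple Gaussian prior distribution over classifier
    parameters from the optimized connection weights of the trained
    second neural network module."""
    flat = np.concatenate([np.ravel(w) for w in weight_matrices])
    return {"mean": float(flat.mean()), "std": float(flat.std())}
```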

The method steps described above by way of example for the object type OT1 are likewise carried out for the object type OT2 and, where applicable, for further object types, in order in this way to configure one or more further object-type-specific subtype recognition modules BMLP2, … of the image evaluation device BA for recognizing object subtypes within the object type concerned.

With the aid of the image evaluation device BA trained and configured as described above, newly recorded and/or not yet classified images can now be evaluated.

Fig. 2 shows such an evaluation of an image PIC of an object OBJ by means of the trained and configured image evaluation device BA. Where the same reference numerals are used in fig. 2 as in fig. 1, they denote the same entities.

In the present exemplary embodiment, the recorded object OBJ is a biological cell whose cell type is to be recognized as the object type and whose developmental stage is to be recognized as the object subtype by the configured image evaluation device BA. The image PIC to be evaluated is recorded by means of a microscope MIC provided with a camera.

The recorded image PIC is read in by the image evaluation device BA and fed into the trained first neural network module CNN. From the image PIC, the trained first neural network module CNN generates an output data set FEA which, in accordance with the above training objective, preferably comprises image features that are particularly well suited for object type recognition. This output data set FEA is fed by the first neural network module CNN into the trained second neural network module MLP. From the output data set FEA, the trained second neural network module MLP derives an output data set which, in accordance with the above training objective, is intended to specify the object type of the recorded object OBJ as correctly as possible. In the present exemplary embodiment it is assumed that the output data set of the trained second neural network module MLP specifies the object type OT1 for the object OBJ.
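The two-stage type recognition just described can be summarized as follows; `cnn` and `mlp` here stand in for the trained modules as plain callables, and the function name is an illustrative assumption:

```python
import numpy as np

def recognise_object_type(cnn, mlp, image):
    """Two-stage type recognition: the trained first network module
    maps the image to a feature vector FEA, the trained second network
    module maps these features to an object-type label."""
    features = cnn(image)          # image -> image features FEA
    object_type = mlp(features)    # image features -> object type
    return features, object_type
```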

The image features FEA generated by the trained first neural network module CNN and the derived object type OT1 are transmitted to a selection module SEL of the image evaluation device BA. The selection module SEL is coupled to the object-type-specific probability classifiers BMLP1, BMLP2, … and serves to select one of the probability classifiers BMLP1, BMLP2, … depending on the object type derived in each case. In the present exemplary embodiment, OT1 is derived as the object type, so that the probability classifier BMLP1 specific to the object type OT1 is selected by the selection module SEL.

Furthermore, as indicated in fig. 2 by a dashed arrow, the image features FEA are transmitted by the selection module SEL only to the selected probability classifier BMLP1. The selected classifier BMLP1 relates the image features FEA to the stored training output data sets or image features of the first neural network module CNN on the basis of its correlation parameters CP1 and/or by comparison. In this case, the object subtype that correlates best with the image features FEA according to the correlation parameters CP1, i.e. with a maximum or sufficiently large correlation, is preferably derived. Alternatively or additionally, a distance between the image features FEA and the stored training output data sets or image features of the first neural network module CNN can be determined. In this case, the object subtype whose associated training output data set has a minimum or sufficiently small distance can be derived. The object subtype derived in this way can be understood as the most probable object subtype OST of the object OBJ.

The object type OT1 derived by the trained second neural network module MLP and the object subtype OST derived by the selected classifier BMLP1 are finally output by the image evaluation device BA as the classification result for the object OBJ.
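The complete evaluation path, including the selection module SEL, can be sketched end to end as follows; all components are assumed callables or mappings standing in for the trained modules, not the actual implementation of the disclosure:

```python
def evaluate_image(cnn, mlp, classifiers, image):
    """End-to-end evaluation sketch: derive the object type with the
    trained network modules, select the matching object-type-specific
    subtype classifier and return both labels as the classification
    result."""
    features = cnn(image)                       # image -> image features FEA
    object_type = mlp(features)                 # features -> object type
    subtype_classifier = classifiers[object_type]  # selection module SEL
    object_subtype = subtype_classifier(features)  # e.g. nearest stored features
    return object_type, object_subtype
```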

By splitting the classification task into a data-driven object type recognition (often also referred to as a big-data approach) and a probabilistic object subtype recognition that is also suitable for small amounts of training data (small data), the recognition reliability can often be improved considerably. This is advantageous in particular for identifying object subtypes of rare object types.

In addition to the above-described application in medical diagnostics, the configuration method and/or the image evaluation method according to the invention can be used for the efficient classification of imaged objects in many other engineering fields, for example for the optical monitoring of engineering or non-engineering systems (such as the optical monitoring of production facilities or agricultural areas), for optical sensor devices of autonomous systems or for general optical sorting tasks.
