Method for generating and analyzing overview contrast images

Document number: 1631626    Publication date: 2020-01-14

Note: This technique, "Method for generating and analyzing overview contrast images", was created by Daniel Haase, Thomas Ott and Markus Stick, filed 2018-05-15. Abstract: The invention relates to a method for generating and analyzing an overview contrast image of a sample carrier (1) and/or a sample arranged on the sample carrier (1). The sample carrier (1), arranged at least partially in the focal point of the detection optical unit (2), is illuminated in transmitted light with a two-dimensional, array-shaped illumination pattern (3). At least two overview raw images with different illumination of the sample carrier (1) are detected, and a calculation algorithm is selected depending on the information to be extracted from the overview contrast image, by means of which the overview contrast image is calculated on the basis of the at least two overview raw images. Finally, an image evaluation algorithm is selected on the basis of the information to be extracted from the overview contrast image, by means of which the information is extracted.

1. Method for generating and analyzing an overview contrast image of a sample carrier (1) and/or a sample arranged on the sample carrier (1), wherein

illuminating the sample carrier (1), which is arranged at least partially in the focal point of the detection optical unit (2), in transmitted light with a two-dimensional, array-shaped illumination pattern (3),

detecting at least two overview raw images with different illumination of the sample carrier (1),

selecting a calculation algorithm depending on the information to be extracted from the overview contrast image, by means of which the overview contrast image is calculated on the basis of the at least two overview raw images,

selecting an image evaluation algorithm on the basis of the information to be extracted from the overview contrast image, by means of which image evaluation algorithm the information is extracted from the overview contrast image.

2. Method according to claim 1, characterized in that an array of preferably equally sized illumination elements is used to generate the illumination pattern, wherein the individual illumination elements are distinguishable from one another in the at least two overview raw images.

3. Method according to claim 2, characterized in that LEDs, OLEDs, optical fibers, or elements of illuminated DMDs or SLMs are used as illumination elements.

4. A method as claimed in any one of claims 1 to 3, characterized in that the different illuminations are generated by moving the sample carrier (1) laterally relative to the illumination pattern (3) between two recordings or using different exposure times for detection.

5. Method according to claim 4, characterized in that the overview contrast image is composed of individual contrast images which respectively show different regions of the sample carrier (1) and/or the sample.

6. A method according to claim 2 or 3, characterized in that the different illuminations are generated by different illumination patterns (3), which are selected according to the information to be extracted.

7. Method according to claim 6, characterized in that for generating different illumination patterns (3) the illumination elements are controlled separately or in groups and switched to different illumination patterns (3), wherein a first part of the illumination elements is switched to emit light and at least one second part of the illumination elements is switched to emit no light or to emit light of another color or light of another polarization.

8. Method according to claim 7, characterized in that the different illumination patterns (3) are generated by:

a. randomly selecting a first part of the lighting elements for each lighting pattern (3), or

b. The first portion of the lighting elements has a checkerboard-like or other regular distribution in the array, preferably a cross-shaped distribution or a half-pupil distribution.

9. Method according to option b of claim 8, wherein the second part of the illumination elements does not emit light, characterized in that at least two overview raw images are recorded with mutually complementary illumination patterns (3).

10. Method according to option b of claim 8, wherein the parts of the illumination elements emit light of pairwise different colors or polarizations, characterized in that a number of overview raw images corresponding to the number of parts is recorded simultaneously in one image and then separated according to color channel or polarization.

11. The method according to claim 6 or 7, wherein as illumination element an LED (16) or an OLED is used, characterized in that the random illumination pattern is generated by using a pulse width modulated illumination element, the pulse width of which is selected to be longer than the integration time of the detector unit for recording the overview raw image.

12. Method according to claim 6, characterized in that the illumination pattern (3) is generated only in at least one section of the array of illumination elements, and the different illumination is generated by a scanning movement of said at least one section over the array, wherein illumination elements outside said at least one section are switched to emit no light.

13. The method according to claim 6, wherein the illumination elements are configured as LEDs (16), characterized in that each LED (16) is formed by three separate, mutually adjacent sub-LEDs, which emit light in the different primary colors red, green and blue, respectively, and the different illuminations are provided by illumination at different angles in the different primary colors.

14. The method according to any one of claims 1 to 13, characterized in that the calculation algorithm for generating an overview contrast image in dark field mode or bright field mode is selected according to the selection of the illumination method.

15. The method of claim 14,

the calculation algorithm is based on a pixel-by-pixel projection, preferably a rank projection or a projection of statistical moments, wherein, for generating an overview contrast image on the basis of a stack of at least two overview raw images, the overview raw images are compared pixel by pixel and, depending on the projection condition, the intensity value of one of the pixels is selected for the corresponding position in the overview contrast image, or

The calculation algorithm is based on morphological operations with subsequent pixel-by-pixel projections, preferably based on top-hat or black-hat transformations with subsequent pixel-by-pixel maximum projections, or

a segmentation-based calculation algorithm is selected, wherein it is first determined for each pixel of the overview raw images whether the pixel is directly illuminated by an illumination element, and directly illuminated pixels are disregarded when generating an overview contrast image in dark-field contrast mode.

16. Method according to one of claims 2 to 15, characterized in that for correcting geometric distortions, a calibration is performed by recording and evaluating a calibration pattern prior to recording the overview raw image, wherein the calibration pattern is preferably generated with an array of lighting elements.

17. The method of any one of claims 1 to 16, wherein prior to the evaluation, interfering background signals are removed from the overview contrast image.

18. Method for generating and analyzing an overview contrast image of a sample carrier (1) and/or a sample arranged on the sample carrier (1), wherein

Illuminating in transmitted light the sample carrier (1) arranged at least partially in the focal point of the detection optics (2) with a two-dimensional array-shaped illumination pattern (3),

wherein a scattering disk is introduced between the array-shaped illumination pattern (3) and the sample carrier (1), or a switchable scattering disk arranged there is switched to be scattering,

the overview contrast image is detected in bright field mode,

an image evaluation algorithm is selected according to the information to be extracted from the overview contrast image, by which information is extracted from the overview contrast image.

19. Method according to one of claims 1 to 18, characterized in that as image processing algorithm a machine learning algorithm, in particular a deep learning algorithm, is selected, which is preferably trained on overview contrast images with known information.

20. The method of claim 19,

for identifying the type of the sample carrier (1), a deep learning algorithm based on a convolutional neural network is used,

in order to localize structures of the sample carrier (1) or of the sample, a semantic segmentation is performed, preferably using a deep learning algorithm based on a fully convolutional network.

Technical Field

The invention relates to a method for generating and analyzing an overview contrast image of a sample carrier and/or a sample arranged on the sample carrier.

Background

Conventional optical microscopes were developed on the premise that the user, sitting or standing, observes the sample carrier through the eyepiece and can interact directly with the sample: on the one hand, he can quickly obtain an overview of the sample and of the field of view of the objective; on the other hand, he can move the sample carrier together with the sample laterally, either directly or by means of an adjustable sample stage, in order to bring other regions of the sample into the field of view of the objective. The user can remain in his position with only minimal head movement, so that a conventional microscope is highly ergonomic in this respect.

Over time, examination methods, in particular for biological samples, have been developed further, and the microscopes suitable for carrying out these methods have become increasingly complex. In today's microscope systems, which allow an image stack to be recorded in the viewing direction and a spatial image of the sample to be reconstructed from it, images are generated by means of a detector, for example a camera equipped with a corresponding area sensor, or a photomultiplier tube. In such systems the workplace has therefore shifted away from the microscope stand, and thus from the sample, to the computer or its screen. On the other hand, a workplace in front of the stand is still needed in order to prepare, i.e. set up, the sample for examination: the sample on the sample carrier must be moved into the field of view of the objective, a sample region must be selected, the position must be adjusted for this sample region, and finally the sample must be focused. When modern, complex microscope systems are used, the work process is therefore associated with two work areas which represent different steps of the work process and are spatially separated from one another: on the one hand the microscope stand with the eyepiece for direct observation, and on the other hand the screen of a connected computer.

Direct observation of the sample, i.e. of the position of the objective's field of view within the sample, is greatly restricted by additional instruments such as incubators for examining living cells. Furthermore, if larger sample carriers, for example multiwell plates, are used so that several samples can be examined one after the other, orientation on the sample is made more difficult as well.

It therefore becomes difficult for the user to find the sample and to set the sample region, and orientation on the sample is easily lost. For setup, the user has to switch repeatedly between the computer workplace and the microscope stand, where the sample can be viewed directly.

On the other hand, there are also microscope systems designed for handling large numbers of samples without constant supervision. Here the sample regions must be identified automatically and examined microscopically. In this case the sample carrier usually bears a marking, for example in the form of a barcode or QR code, which has to be assigned to the sample. Such examinations are performed semi-automatically, with the user intervening only, for example, to change the sample carrier, to set an overview image or to register the sample carrier number.

Especially when multiwell plates are used in high-throughput microscope systems, some wells may contain no sample at all or may contain incorrectly embedded, contaminated or defective samples. These wells are nevertheless examined in the semi-automated procedure, even though the results may be unusable, so that more time is spent than is actually necessary.

Although different methods for generating overview images exist in the prior art, they all have more or less serious disadvantages. For example, images can be recorded with a microscope optical unit, a weakly magnifying objective and a camera arranged downstream of the objective. However, only small object fields can be recorded in this way compared to the size of the sample or sample carrier, in particular if a sample carrier for several samples is used. In order to be able to record large sample areas, for example of multiwell sample carriers, a plurality of images of mutually adjacent sections of the sample or sample carrier must therefore be recorded and subsequently merged. This procedure is rather time-consuming and is not suitable, for example, for examining biopsies.

An overview image can also be recorded if the microscope optical unit is omitted and a camera with a camera objective, which is typically not telecentric, is used instead, whereby a large object field can be imaged. This solution is implemented, for example, in the applicant's AxioScan series, but it can only be used with bright-field illumination in incident light. Cover slips and unstained samples are difficult to detect in this way.

Other solutions illuminate the sample and the sample carrier obliquely, i.e. at an angle different from zero with respect to the optical axis, and detect the back-scattered light. In this case, the detection sensitivity varies strongly across the object field to be measured, so that the results are not always reliable.

Disclosure of Invention

It is therefore an object of the present invention to generate an overview image of a sample carrier in which the structure of the sample carrier itself and, optionally, further structures, such as markings on the sample carrier, possible sample regions, any immersion liquid present, the actual samples and/or sample defects such as bubbles, dirt, etc., can be detected clearly and without errors. The overview image should have an increased contrast or an improved signal-to-noise ratio for the respective structure of interest compared to a conventionally recorded camera image or even an HDR image, and is therefore referred to below as an overview contrast image. The overview contrast image can serve the user for navigation, but it can also be used to improve the automated analysis of the sample and to reduce the susceptibility to errors, for example by detecting defects in the sample.

This object is achieved by a method of the type mentioned at the outset by generating an overview contrast image in the following manner: the sample carrier (usually provided with at least one sample) is arranged at least partially in the focal point of the detection optical unit and is illuminated in transmitted light with a two-dimensional, preferably array-shaped, illumination pattern. "At least partially" means that the sample carrier or sample does not have to be completely visible; in particular, the sample carrier and/or the sample can also have an extent along the optical axis that is greater than the depth of field of the detection optical unit used. The optical unit of the microscope can be used as detection optical unit, but preferably a camera with a camera objective is used, which makes it possible to image a large object field onto a flat-panel detector, preferably with a sufficiently high depth of field.

In this configuration of the method, for generating an overview contrast image, at least two overview raw images with different illumination of the sample carrier need to be detected.

The overview raw images, i.e. the unprocessed images, are then detected, for example by a camera, by registering the intensity pixel by pixel with a flat-panel detector (e.g. a CMOS chip). Depending on the type of illumination, the overview raw images may be recorded sequentially or simultaneously, e.g. using a single camera.

Depending on the type of illumination and the information to be extracted from the overview contrast image, a calculation algorithm is selected by which the overview contrast image is calculated on the basis of the at least two overview raw images. The information can be, for example, the already mentioned structures of the sample carrier, the sample, etc. and the label of the sample carrier.

Likewise, an image evaluation algorithm is selected in accordance with the information to be extracted from the overview contrast image, and the information is extracted from the overview contrast image by means of this algorithm. This information can then be used, for example, by the user at the screen of a connected computer to initiate further steps within the scope of observation and analysis, for example for navigating through the sample: the image is displayed on the screen and the region of interest of the sample is selected by the user, for example by a mouse click, whereupon the microscope system automatically moves to this position on the basis of the image evaluation. Within the scope of automated examinations, for example high-throughput examinations, this information can also be used to exclude incorrectly filled wells of multiwell carriers, such as microtiter plates, so that the microscope does not even move to them.
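
The overall flow can be pictured as a small dispatch step. The following sketch (Python; the function names and the toy algorithms are purely illustrative and are not part of the original disclosure) shows how the raw images, a calculation algorithm and an evaluation algorithm might be wired together depending on the requested information:

```python
import numpy as np

def generate_and_analyze(raw_images, calc_algorithms, eval_algorithms, requested_info):
    """raw_images: list of 2D arrays recorded under different illumination.
    calc_algorithms / eval_algorithms: dicts mapping the requested information
    to a stack->image function and an image->result function, respectively."""
    stack = np.stack(raw_images, axis=0)
    contrast = calc_algorithms[requested_info](stack)   # e.g. a rank projection
    return eval_algorithms[requested_info](contrast)    # e.g. a segmentation or classifier

# Example wiring with toy data (illustrative only):
rng = np.random.default_rng(0)
raws = [rng.random((480, 640)) for _ in range(2)]
calc = {"coverslip": lambda s: s.min(axis=0)}           # dark-field-like minimum projection
eval_ = {"coverslip": lambda img: img > img.mean()}     # stand-in for a real evaluation step
mask = generate_and_analyze(raws, calc, eval_, "coverslip")
```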

The basic aspects relate to the use of a two-dimensional, preferably array-shaped illumination and the recording of overview raw images with different illumination. Two-dimensional illumination can be obtained in different ways, wherein an array of illumination elements of preferably equal size is used to generate the illumination pattern. In any case, the individual illumination elements must be distinguishable from one another in the at least two overview raw images, that is to say they must appear separately from one another in the images, even though the detection optics are not focused on them. The illumination elements may be LEDs, OLEDs, the ends of optical fibers, or elements of an illuminated DMD (digital micromirror device) or other spatial light modulator; they may be active or passive light-emitting elements. The illumination elements can also be produced, for example, with a planar light source in front of which a switchable filter structure is arranged, by means of which one or more properties of the illumination elements, for example color, intensity or polarization, can be manipulated. Light-emitting diodes (LEDs) can be used particularly advantageously, since on the one hand they can be arranged in nested arrays of multicolored LEDs and on the other hand they provide a sufficiently high light power; moreover, microscopes that use LED arrays for illumination, i.e. LEDs arranged in a matrix or grid, are already available (LED array microscopy, angular illumination microscopy, AIM). The LED array of such a microscope can also be used to generate the two-dimensional illumination pattern.

The different illumination for recording the at least two overview raw images can be realized in different ways. A simple possibility is to use a static two-dimensional illumination pattern and to move the sample carrier laterally relative to the illumination pattern, i.e. perpendicular to the optical axis of the detection optical unit, between two recordings. This can be achieved either by moving the illumination pattern itself, which likewise lies in a plane normal to the optical axis, or by moving the sample carrier. In this case, the sample carrier or sample is illuminated in transmitted light, i.e. the sample carrier is located between the illumination elements of the illumination pattern and the detection optical unit, e.g. a camera.

In addition to spatially different illumination, temporally different illumination can also be used by recording the overview raw images, for example, with different exposure times or with different illumination durations for the same exposure time, in which case the signal-to-noise ratio is poorer. From these overview raw images, an overview contrast image can be calculated according to the HDR method (HDR, high dynamic range) known from the prior art.
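
As an illustration of such an exposure-series combination, the sketch below fuses differently exposed raw images by exposure-normalized averaging while ignoring saturated pixels. This is one simple HDR variant under stated assumptions (float images scaled to [0, 1]), not necessarily the exact calculation intended here:

```python
import numpy as np

def hdr_fuse(raw_images, exposure_times, saturation=0.95):
    """Fuse overview raw images taken with different exposure times into one
    high-dynamic-range image by averaging exposure-normalized intensities."""
    num = np.zeros_like(raw_images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(raw_images, exposure_times):
        w = (img < saturation).astype(np.float64)   # ignore saturated pixels
        num += w * img / t                          # radiance estimate = intensity / exposure
        den += w
    return num / np.maximum(den, 1e-12)             # avoid division by zero
```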

Another possibility consists in generating spatially different illumination by means of different illumination patterns, the illumination patterns preferably being selected depending on the information to be extracted. In principle, a multitude of illumination patterns that can be applied, for example, to an array of illumination elements are suitable. For example, different illumination patterns can be generated by controlling the illumination elements individually or in groups and switching them to different illumination patterns, wherein a first part of the illumination elements is switched to emit light and at least one second part of the illumination elements is switched to emit no light, or to emit light of another color or another polarization. If the at least one second part of the illumination elements does not emit light, exactly two parts are present in each pattern and the overview raw images are detected successively. If the second part of the illumination elements emits light of another color, the illumination elements may also be divided into more than two groups and comprise a third part or further parts, each emitting light of another color, the colors differing from one another in pairs. The same applies to polarization. The overview raw images taken under illumination with light of different colors can be recorded simultaneously, provided that the light is separated into different color channels on the detection side. If, for example, the array of illumination elements has LEDs of the three RGB primary colors red (R), green (G) and blue (B), and the sensor of the camera has corresponding sensors for these color channels, the separation can easily be achieved and three overview raw images can be recorded simultaneously. The same applies to polarized illumination if, for example, the LED array is equipped with polarizing filters of different polarization and the polarization direction is likewise detected and used to separate the channels.
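
For the simultaneous recording with color-coded illumination groups, the separation into individual raw images amounts to splitting the camera's color channels, as in this minimal sketch (which assumes negligible cross-talk between the channels):

```python
import numpy as np

def split_color_channels(rgb_frame):
    """A single camera frame recorded while the red, green and blue illumination
    groups emit simultaneously is separated into one overview raw image per
    illumination pattern by taking the color channels apart."""
    r, g, b = rgb_frame[..., 0], rgb_frame[..., 1], rgb_frame[..., 2]
    return [r, g, b]   # three overview raw images recorded at the same time
```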

If the illumination elements are configured as LEDs and each LED is formed by three mutually adjacent sub-LEDs emitting light in the different primary colors red, green and blue, respectively, different illuminations can also be provided by illumination at different angles in the different primary colors. In this case, too, the overview raw images can be detected simultaneously.

Another possibility consists in generating genuinely different illumination patterns and recording the overview raw images in succession with these patterns. In a first variant, this can be done by randomly selecting the first part of the illumination elements for each illumination pattern, the individual illumination elements being randomly controlled and switched to emit light or not, wherein the boundary condition of an as equal as possible distribution of emitting and non-emitting elements should be observed. In a second variant, pulse-width-modulated illumination elements are used (which can be realized particularly well with LEDs or OLEDs), the selected pulse width being longer than the integration time of the detector unit for recording the overview raw image. In this case, it is not necessary to control the illumination elements individually.
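
One possible way to generate such random patterns with an approximately equal share of emitting and non-emitting elements is sketched below (array size, number of patterns and on-fraction are free parameters, not values prescribed by the method):

```python
import numpy as np

def random_patterns(rows, cols, n_patterns, on_fraction=0.5, seed=0):
    """Generate random on/off illumination patterns for an LED array with an
    approximately equal share of emitting and non-emitting elements."""
    rng = np.random.default_rng(seed)
    n_on = int(round(on_fraction * rows * cols))
    patterns = []
    for _ in range(n_patterns):
        flat = np.zeros(rows * cols, dtype=bool)
        flat[rng.choice(rows * cols, size=n_on, replace=False)] = True
        patterns.append(flat.reshape(rows, cols))
    return patterns
```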

Instead of a random illumination pattern, an illumination pattern may also be generated in which the illumination elements are regularly distributed; for example, a checkerboard, cross-shaped or half-pupil distribution may be used for the emitting illumination elements. In the case of a checkerboard-type distribution in particular, there are two possibilities. On the one hand, the second part of the illumination elements may not emit light, so that the two overview raw images are recorded one after the other with illumination patterns that are complementary to one another; in the checkerboard case, the two patterns are the inverse of each other. On the other hand, the two parts of the illumination elements may also emit light of different colors or polarizations, in which case the overview raw images can be recorded simultaneously in one image and then separated according to color channel or polarization. In the case of checkerboard-type illumination with two illumination patterns, the two patterns are thus not only complementary but also inverted with respect to each other. If a plurality of patterns is used, for example a scanned array of individually emitting illumination elements, all the patterns together are mutually complementary, that is to say overall they add up to an array consisting only of emitting illumination elements. In the case of half-pupil illumination, two of the four necessary illumination patterns are in each case complementary to one another.
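
For the checkerboard case, the pattern and its complement can be derived directly from the grid indices of the LED array, for example as in this sketch (array dimensions arbitrary):

```python
import numpy as np

def checkerboard_patterns(rows, cols):
    """Checkerboard illumination pattern and its complement (inverse); recording
    one overview raw image per pattern yields two images in which every LED
    position has emitted light exactly once."""
    idx = np.add.outer(np.arange(rows), np.arange(cols)) % 2 == 0
    return idx, ~idx
```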

Finally, different illumination may also be achieved by selecting at least one section of the array of illumination elements and generating an illumination pattern only in this section. The different illuminations are then achieved by a scanning movement of the at least one section across the array, the illumination elements outside the at least one section being switched to emit no light. For example, illumination by a single LED can be used here, as can a section of a checkerboard-type illumination. With large sample carriers, several sections that move in parallel can be selected; combinations of illumination elements of different colors are likewise possible in order to generate several sections simultaneously.
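
Such a scanning illumination can be expressed as a sequence of masks in which only a small section emits light and everything else is dark, roughly as in the following sketch (the block size is an arbitrary illustrative choice):

```python
import numpy as np

def scanned_patterns(rows, cols, block=2):
    """Move a small illuminated section (here a block x block square) across an
    otherwise dark LED array; the union of all patterns covers every element."""
    patterns = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            p = np.zeros((rows, cols), dtype=bool)
            p[r:r + block, c:c + block] = True
            patterns.append(p)
    return patterns
```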

After the overview raw images have been recorded, a calculation algorithm is selected in accordance with the information to be extracted from the overview contrast image, and the overview contrast image is calculated on the basis of the at least two overview raw images by means of this calculation algorithm. Preferably, the selection of the calculation algorithm is also matched to the previously selected illumination method. The overview contrast images are preferably generated in dark-field mode or bright-field mode, as these can achieve the best contrast, but mixed modes are also possible. It should be ensured that the contrast is optimal for the structure of interest, for example depending on whether the cover slip, the sample itself or a marking is to be shown with as high a contrast as possible. In some cases, depending on the required information, overview contrast images may be generated from the overview raw images in both bright-field and dark-field mode. In addition to the dark-field or bright-field mode, other types of contrast can be generated, for example an overview contrast image in HDR mode, which contains both dark-field and bright-field components.

In a first configuration, the calculation algorithm is based on a pixel-by-pixel projection, preferably a rank projection or a projection based on statistical moments. In this case, in order to generate an overview contrast image on the basis of a stack of at least two overview raw images, the overview raw images are compared pixel by pixel and, depending on the projection condition, the intensity value of one of the pixels is selected for the corresponding position in the overview contrast image. The recorded image stack is thus processed pixel by pixel, that is to say each pixel in the overview contrast image is influenced only by pixels in the image stack that are located at the same image position in the overview raw images. In the case of the rank projection, the pixel values of the overview raw images at an image position are sorted by intensity, and the value corresponding to a p-quantile is used for the overview contrast image, where p is a parameter specified by the user or preset by the calculation method. A special case is the minimum projection with p = 0.0, in which the pixel with the smallest intensity is selected; further special cases are the maximum projection with p = 1.0, in which the pixel with the highest intensity is selected, and the median projection with p = 0.5.
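
Such a rank projection can be written compactly with NumPy; the sketch below sorts the stack pixel-wise and picks the value at the p-quantile (nearest-rank variant, so each output pixel is an actual raw-image value):

```python
import numpy as np

def rank_projection(raw_stack, p):
    """Pixel-by-pixel rank (quantile) projection over a stack of overview raw
    images: p = 0.0 -> minimum projection, p = 1.0 -> maximum projection,
    p = 0.5 -> median projection."""
    stack = np.sort(np.asarray(raw_stack, dtype=np.float64), axis=0)  # (n, H, W)
    idx = int(round(p * (stack.shape[0] - 1)))                        # nearest rank
    return stack[idx]
```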

Depending on the illumination used to generate the overview raw images, this type of calculation can be used to generate an overview contrast image with dark-field or bright-field contrast. For example, if the illumination elements (in particular if they are designed as LEDs) are not overdriven and p = 1.0 is chosen, i.e. the maximum projection, an overview contrast image in bright-field mode can be generated. However, the overview contrast image can also be generated in dark-field mode, for example by generating an overview contrast image that is as bright as possible with as large a p as possible while taking into account only those pixels that are not directly illuminated by an illumination element. If two overview raw images are generated, for example with a checkerboard-like distribution of the first part of the illumination elements and the distribution complementary thereto, p = 0 is selected and a minimum projection is carried out. If, on the other hand, the illumination pattern is generated in only one section of the array and this section is moved across the array in a scanning manner, each pixel in the image is directly illuminated only rarely compared to the number of recorded overview raw images. If, for example, four LEDs connected in the form of a cross are used and 30 overview raw images are generated for a scanned checkerboard pattern, each pixel is illuminated directly at most four times by an LED having a significantly larger diameter than the pixel. The value is then p = ((30-1)-4)/(30-1) ≈ 0.8621.

Alternatively, an algorithm based on the projection of statistical moments can be used. In this case, each pixel in the overview contrast image corresponds to a statistical moment, for example the standard deviation of the intensities at the corresponding pixel of the overview raw images. This provides good contrast while preserving even small details, especially in combination with a series of random LED patterns moved laterally relative to the sample carrier, which makes this calculation algorithm particularly suitable, for example, for detecting multiwell sample carriers or chamber-slide sample carriers.
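
The corresponding moment projection, here with the standard deviation as the statistical moment, is equally short:

```python
import numpy as np

def moment_projection(raw_stack):
    """Projection of a statistical moment: each contrast-image pixel is the
    standard deviation of the intensities observed at that position across the
    overview raw images."""
    return np.asarray(raw_stack, dtype=np.float64).std(axis=0)
```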

The advantages of the above-described projection methods as calculation algorithms are that, on the one hand, they can be parallelized very well and thus allow very fast computation and, on the other hand, because all pixels are treated equally, no seam artifacts occur, as can happen, for example, with segmentation-based calculation algorithms, where the boundaries of structures may show disadvantageous discontinuous jumps at the seams.

In another configuration, the calculation algorithm is based on morphological operations with subsequent pixel-by-pixel projection, preferably on top-hat or black-hat transformations with a subsequent pixel-by-pixel maximum projection. A top-hat transform can be used to highlight bright structures on a dark background, while a black-hat transform highlights dark structures on a bright background. With these calculation algorithms, in particular the glass edges, i.e. the edges of the sample carrier or of the cover slip, can be made visible. A pixel-by-pixel maximum projection is then formed over the overview raw images transformed in this way, thereby generating the overview contrast image. The advantage of this calculation algorithm is that information is obtained from both bright-field and dark-field contrast and that it can be computed efficiently. However, the contrast in the calculated image is usually poorer than with the rank projection and is often visible only at the glass edges. In addition, strong background artifacts are generated, which must subsequently be removed.
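
With OpenCV's morphology operators, this calculation path might look as follows; the kernel size and shape are illustrative choices, not values from the method itself:

```python
import cv2
import numpy as np

def morphological_projection(raw_stack, kernel_size=15, mode="tophat"):
    """Apply a top-hat (bright structures on dark background) or black-hat
    (dark structures on bright background) transform to every overview raw
    image, then combine the results by a pixel-wise maximum projection."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    op = cv2.MORPH_TOPHAT if mode == "tophat" else cv2.MORPH_BLACKHAT
    transformed = [cv2.morphologyEx(img.astype(np.float32), op, kernel) for img in raw_stack]
    return np.max(np.stack(transformed, axis=0), axis=0)
```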

In another configuration, a segmentation-based calculation algorithm is selected, in which it is first determined for each pixel of the overview raw images whether the pixel has been directly illuminated by an illumination element. Pixels that are directly illuminated are then not taken into account when generating the overview contrast image in dark-field contrast mode. Here, too, the overview contrast image is generated using a projection method. The advantage of this calculation algorithm compared to the rank projection is that it can be determined unambiguously which pixel information from the overview raw images may be used. A disadvantage is that seams can occur in the calculated contrast image because segments, and therefore pixels, are not treated equally. In addition, this calculation cannot be carried out as efficiently as the aforementioned calculation algorithms.
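
One simple segmentation criterion for "directly illuminated" is an intensity threshold; the sketch below masks such pixels and projects only over the remaining values. The threshold would have to be chosen or estimated per setup and is not specified by the method:

```python
import numpy as np

def darkfield_projection_masked(raw_stack, direct_threshold):
    """Segmentation-based dark-field sketch: pixels whose intensity exceeds a
    threshold are treated as directly illuminated and excluded; the contrast
    image is a maximum projection over the remaining (indirect) values only."""
    stack = np.asarray(raw_stack, dtype=np.float64)
    masked = np.ma.masked_greater_equal(stack, direct_threshold)
    contrast = masked.max(axis=0)      # ignores masked (directly illuminated) pixels
    return contrast.filled(0.0)        # positions never seen indirectly -> 0
```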

In an alternative configuration of the method operating in the bright field mode, the overview contrast image is not generated by calculation but is detected directly, i.e. no overview raw image is detected, or in other words in this case the overview raw image is identical to the overview contrast image. For this purpose, a scattering disk is inserted into the beam path between an array-shaped illumination pattern, which here can also be composed of an array of illumination elements, preferably of the same size, and the sample carrier. The scattering disk, which generates diffuse illumination that is advantageous for bright field illumination, can also be permanently held in the beam path, as long as it is switchable, so that it is switched on, i.e. switched to scattering, only for generating an overview contrast image in bright field mode.

If the sample carrier is moved laterally relative to the illumination pattern between two recordings, the calculation has to know how the sample or the illumination pattern moves in the image. For this purpose, the camera is calibrated relative to the sample carrier or the illumination pattern, so that when the sample carrier is moved, the coordinates of the stage on which the sample carrier is held and which can be moved in order to move the sample carrier can be mapped onto the image coordinates. A corresponding procedure can be used for a movable illumination pattern. In the case of a movable stage, a calibration pattern, for example a checkerboard, is placed on the stage for calibration, whereby the coordinate mapping can be estimated with sufficient accuracy. Alternatively, such a calibration can be omitted and the movement of the sample can be determined by image analysis or with another measuring system.

In particular for the case in which the illumination pattern is fixed in position and the sample carrier is moved between the recordings, an overview contrast image can be generated after calibration, i.e. after the actual movement of the sample carrier relative to the image has been quantified, even for larger sample carriers that do not completely fit into the object field detectable by the camera or the detection optics. For this purpose, individual contrast images are first generated from the corresponding individual overview raw images, each showing a different region of the sample carrier and/or the sample. These individual contrast images are then combined into an overview contrast image, the calibration being used to position the joining points correctly when stitching them together.
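
With a calibration that maps stage coordinates to image pixels, the stitching of the individual contrast images reduces to placing each tile at its calibrated offset. The sketch below is a translation-only simplification (it assumes aligned axes and that all tiles fit inside the chosen canvas):

```python
import numpy as np

def stitch_tiles(tiles, stage_positions_mm, mm_per_pixel, canvas_shape):
    """Combine individual contrast images of different carrier regions into one
    overview contrast image, using a stage-to-pixel calibration."""
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    for tile, (x_mm, y_mm) in zip(tiles, stage_positions_mm):
        r0 = int(round(y_mm / mm_per_pixel))       # calibrated row offset
        c0 = int(round(x_mm / mm_per_pixel))       # calibrated column offset
        h, w = tile.shape
        canvas[r0:r0 + h, c0:c0 + w] = np.maximum(canvas[r0:r0 + h, c0:c0 + w], tile)
    return canvas
```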

Advantageously, independently of this, a calibration is also carried out before recording the overview raw image by recording and evaluating a calibration pattern in order to correct for geometric distortions. Calibration patterns are objects with known geometry and well detectable structures, such as the above mentioned checkerboard patterns, which are placed at different positions in the image field of the camera and recorded by the camera. However, an array of lighting elements may also be used as the calibration pattern, in particular if they are configured as LEDs. Such calibration is known in the art.

As already indicated in connection with the segmentation-based calculation algorithm, background artifacts may appear in the generated overview contrast image, depending on the calculation algorithm used. Such disturbing background signals are preferably removed, i.e. subtracted, by a corresponding correction algorithm after the overview contrast image has been generated and before it is evaluated. If the illumination pattern or the sample carrier is not moved laterally, the artifacts typically form a periodic structure that follows the positions of the individual illumination elements; it can then be removed by so-called self-filtering. Further correction methods with which background artifacts can be removed or at least reduced are known from the prior art, in particular by recording or calculating a background image and then subtracting it from the overview contrast image. For example, the background image can be determined by a calculation over the overview raw images in which the foreground is averaged out. The background image can also be determined from a recording without a sample carrier or with an empty sample carrier. Furthermore, the background image can be determined from the overview contrast image itself, for example by calculating an average pixel value in a local region around each non-emitting illumination element and evaluating all illumination elements in the overview contrast image: the foreground structure is independent of the position relative to the illumination elements and is averaged out, whereas the background structure depends on this position and is therefore reinforced. This yields a background image as a function of the illumination element positions, which can then be subtracted from the overview contrast image. Another possibility for removing background artifacts is to use a band-pass filter, if necessary in combination with a non-linear filter.
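
One of the mentioned possibilities, estimating the slowly varying background with a strong low-pass filter and subtracting it (effectively a band-pass), can be sketched as follows; the sigma value is an arbitrary example and a measured background image can be passed in instead:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_background(contrast_image, background=None, lowpass_sigma=50.0):
    """Background correction sketch: subtract a recorded or estimated background
    image if available, otherwise estimate the slowly varying background by
    strong low-pass filtering and subtract it."""
    img = contrast_image.astype(np.float64)
    if background is None:
        background = gaussian_filter(img, sigma=lowpass_sigma)
    return np.clip(img - background, 0.0, None)
```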

In the last step, the overview contrast image is finally analyzed automatically by the image evaluation algorithm to be selected, and the required information is extracted. The information to be extracted includes one or more of the following: the type of sample carrier, the marking of the sample carrier, the position of the sample or sample regions, of the cover slip and of the wells of a multiwell plate in the image, information about the immersion liquid such as its position, volume and shape, as well as artifacts, defective samples, air bubbles, etc. Only the high contrast in the overview contrast image allows this information to be extracted reliably.

As image processing algorithms, for example, algorithms based on machine learning principles, in particular deep learning algorithms, which are preferably trained on overview contrast images with known information, can be selected.

In order to automatically extract the above information from the overview contrast image, methods from the field of machine learning can be used. In this case, an annotated training data set comprising a number of contrast images to be analyzed is provided for each task, for example image classification, segmentation, localization or detection. A desired output corresponding to the task is assigned to each contrast image; this is explained below with reference to examples. In a learning step, the model can then be adjusted automatically using machine learning techniques so that it produces the required output even for previously unseen images, i.e. new images.

A possible algorithm based on machine learning techniques is outlined by way of example below. Alternatively, methods from traditional image and signal processing may also be used, but algorithms based on machine learning, in particular based on deep learning, offer significant advantages, in particular in terms of quality, robustness, flexibility, versatility and development and maintenance costs.

In order to identify the type of sample carrier, e.g. whether it is a multiwell sample carrier, a petri dish, a chamber slide, etc., a deep learning algorithm based on a convolutional neural network can advantageously be used. This is a task from the field of image classification, which takes an image as input and outputs a class. In this case, the training data set comprises contrast images, each of which is assigned the associated sample carrier type: the first contrast image is assigned the type "petri dish", the second the type "multiwell plate", the third the type "chamber slide", the fourth the type "slide", and so on.

Convolutional neural networks (CNNs) are composed of different layers, such as convolutional layers, pooling layers, nonlinear layers, etc., whose arrangement is specified by the network architecture. The architecture for image classification follows a certain basic structure but is in principle flexible. Each element of the network receives an input and computes an output; in addition, some elements of the network have free parameters that determine how the output is computed from the input. A three-dimensional array, i.e. a color image with three color values per pixel, is fed to the first layer as the input of the network. As the output of the network, the last layer then outputs a probability distribution over all possible sample carrier types; for an overview contrast image, the output is, for example: slide 87%, multiwell plate 1%, petri dish 2%, chamber slide 10%. Optionally, a rejection class can also be included, which produces values such as "unknown"/"invalid"/"null". The free parameters of the network are adjusted in a training process on the basis of the provided training data so that the output of the model matches the expected output as well as possible.
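
A toy PyTorch model of this kind, with an illustrative class list and no claim to match the network architecture actually used, could look like this:

```python
import torch
import torch.nn as nn

class CarrierTypeCNN(nn.Module):
    """Small CNN sketch for classifying the sample carrier type from a color
    overview contrast image (number of classes is illustrative)."""
    def __init__(self, n_classes=5):  # e.g. slide, multiwell plate, petri dish, chamber slide, unknown
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 3, H, W)
        z = self.features(x).flatten(1)
        return self.classifier(z)      # raw scores; softmax yields class probabilities

# Usage sketch: probability distribution for one overview contrast image
model = CarrierTypeCNN()
probs = torch.softmax(model(torch.rand(1, 3, 256, 256)), dim=1)
```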

In this case, the training can also use models that have been trained on other data as a starting point for fine-tuning, which has advantages in terms of quality, time consumption and data consumption.

As an alternative to CNNs or deep learning methods derived from or related to CNNs, image classification can also be performed using conventional machine learning methods, which typically comprise two steps: (i) feature extraction and (ii) classification. In the feature extraction of step (i), the overview contrast image is converted into an alternative representation, typically a compact or sparse vector, by a predefined algorithm; a simple example is a histogram of oriented gradients (HoG). In the classification of step (ii), each of these feature vectors is then assigned a class by a classifier, one example being a support vector machine (SVM). The free parameters of the classifier are likewise adjusted in a training step so that the actual output matches the desired output as well as possible.
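
The two-step classical pipeline can be sketched with scikit-image and scikit-learn; the parameters are illustrative, and all images are assumed to have the same size so that the feature vectors align:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(image):
    """Step (i): convert a grayscale overview contrast image into a compact
    feature vector using a histogram of oriented gradients (HoG)."""
    return hog(image, orientations=9, pixels_per_cell=(32, 32), cells_per_block=(2, 2))

def train_classifier(train_images, train_labels):
    """Step (ii): fit a support vector machine on annotated contrast images
    (train_labels holds one carrier-type index per image)."""
    X = np.stack([hog_features(img) for img in train_images])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, train_labels)
    return clf
```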

Hybrid approaches between traditional machine learning and deep learning use a CNN for the feature extraction in step (i). In this case, a CNN that has already been trained on other data is cut off at a certain layer, and the activations of the network at that layer are used as the feature vector.

The localization of structures of the sample carrier on the one hand (for example the cover slip of a slide, the chambers of a chamber slide, the wells of a multiwell plate or a petri dish) and of the sample or sample regions on the other hand can be regarded as a problem of semantic segmentation: for an overview contrast image as input, an image is to be returned in which each pixel of the input image is assigned a class (e.g. "background", "cover slip", "dirt", "sample", etc.). This, too, can preferably be achieved with networks from the deep learning domain, for example with fully convolutional networks (FCN) based on CNNs.

Like CNNs, FCNs generally expect a three-dimensional array (i.e. a color overview contrast image) as input, but produce as output an array in which each pixel of the input image is assigned a probability of belonging to each of the occurring classes. In this case, the training data set comprises contrast images, and each contrast image is assigned an array of the same size (a grayscale image) in which each pixel is in turn assigned a class encoded by the gray value. Training proceeds analogously to the CNN. The localization of the sample carrier and the localization of the sample regions can be performed with different FCNs, but it is particularly expedient to carry out the evaluation with a single FCN that contains both the "cover slip" and the "sample" classes.
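
A minimal fully convolutional sketch of this idea (again a toy architecture, not the one from the training described here) could be:

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal FCN sketch for semantic segmentation of an overview contrast
    image: the output has one channel per class (e.g. background, cover slip,
    dirt, sample) at the input resolution."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )

    def forward(self, x):                       # x: (batch, 3, H, W), H and W divisible by 4
        logits = self.decoder(self.encoder(x))  # (batch, n_classes, H, W)
        return torch.softmax(logits, dim=1)     # per-pixel class probabilities

# Per-pixel class map for one image:
seg = TinyFCN()(torch.rand(1, 3, 256, 256)).argmax(dim=1)
```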

Of course, the features mentioned above and those yet to be explained below can be used not only in the given combination but also in other combinations or alone without departing from the scope of the invention.

Drawings

The invention is explained in more detail below with reference to the appended figures, which also disclose features essential to the invention. In the figures:

figure 1 shows an apparatus for performing the method for generating and analyzing overview contrast images,

figure 2 shows the structure of a microscope suitable for this,

figure 3 shows a detail of the illumination,

figure 4 shows the generation of a random illumination pattern,

figure 5 shows two complementary checkerboard type illuminations,

figure 6 shows a checkerboard type illumination with different colours,

figure 7 shows a cross-shaped distribution of lighting elements,

figure 8 shows the half-pupil distribution of the illumination element,

figure 9 shows a scanning movement of a segment with an illumination pattern,

figures 10-12 show the generation of an overview contrast image from overview raw images with a static illumination pattern.

Detailed Description

First, fig. 1 outlines a setup with which an overview contrast image of a sample carrier 1 and/or of a sample arranged on the sample carrier 1 can be generated. The sample carrier 1 is arranged at least partially in the focal point of the detection optical unit 2 and is illuminated in transmitted light with a two-dimensional, array-shaped illumination pattern 3. To generate the overview contrast image, at least two overview raw images are first detected with different illumination of the sample carrier 1. For this purpose, a flat-panel detector 4 is used, for example, to detect the image formed by the detection optical unit 2. The detection optical unit 2 may be a microscope objective with a small magnification, but is preferably the objective of a camera, which is capable of imaging a large object field that ideally includes the entire sample carrier 1 in the overview. Accordingly, the flat-panel detector 4 is in this case the sensor of a camera, for example a CMOS chip. Depending on the configuration, and in particular on the choice of illumination, the flat-panel detector 4 either registers only the intensity, for example in the case of white illumination, or separates the intensity into different color channels, for example red (R), green (G) and blue (B). Analogously to the different colors, corresponding sensors that also register polarization can be used in order to take different polarizations of the illumination into account and to split them into different channels.

Depending on the type of pattern and the type of illumination, the overview raw images are recorded simultaneously or one after the other, with a corresponding intensity value being assigned to each pixel. The overview raw images are then supplied to the calculation unit 5 to calculate an overview contrast image. In the calculation unit 5, a calculation algorithm is selected in accordance with the information to be extracted from the overview contrast image, and optionally also in accordance with the illumination, and the overview contrast image is calculated on the basis of the at least two overview raw images by means of this calculation algorithm. The overview contrast image is then supplied to an image evaluation unit 6, in which an image evaluation algorithm is selected on the basis of the information to be extracted from the overview contrast image and by means of which the information is finally extracted. This information is transmitted to the control unit 7, which processes it further accordingly and, for example in a high-throughput method, excludes from the microscopic analysis those wells for which the evaluation of the overview contrast image indicates that they are not filled correctly, e.g. contain a defective sample or air bubbles. Of course, the overview contrast image can also be displayed to the user on a screen which is connected to the image evaluation unit 6 or the control unit 7 and may be part of these units, so that the user can make the corresponding settings manually. The calculation unit 5, the image evaluation unit 6 and the control unit 7 may be integrated together in a PC as hardware and/or software.

As already indicated in connection with the description of fig. 1, the method can also be easily performed with existing microscopes. Particularly suitable are microscopes that are illuminated using an array of LEDs, where the illumination pattern 3 is produced by the LEDs of the array. Such a microscope is shown by way of example in fig. 2, which is used, for example, for Angle Illumination Microscopy (AIM). The sample carrier 1 is illuminated here by an LED array 8, and the detection optics 2 here comprise, for example, two lenses, between which a deflecting mirror 9 is arranged for folding the optical axis. A part of the light is directed onto the flat detector 4 by means of a beam splitter 10, while another part of the light is directed onto an eyepiece 12 by means of a lens 11, so that the overview original image can also be observed by an observer. The camera and the illumination device can be positioned particularly well on an inverted microscope stand by, for example, arranging the LED array 8 by which the illumination pattern 3 is generated above the arm ready for transmitted light illumination and placing the camera below the sample, for example on an objective turret.

To generate the illumination pattern 3, an array of illumination elements, preferably of equal size, is preferably used. As illumination elements, for example, LEDs, OLEDs or optical fibers, i.e. their ends or exit surfaces, can be used as active light sources, or elements of an illuminated DMD (digital micromirror device) or other spatial light modulator can be used as passive illumination elements. If, in the following, LEDs are referred to as the light source by way of example, this is only for illustration and does not exclude the use of other possible arrays of illumination elements.

The recording of the overview raw image is performed by a camera with a flat panel detector 4; the objective of the camera is focused on the sample carrier 1 as shown in fig. 1 and directed at the illumination pattern 3 or the illumination behind the sample carrier 1, which is not in focus. In this case, a camera with an objective lens having a large object field and not being telecentric may be used. No additional optical elements have to be placed between the sample carrier 1 and the illumination pattern 3, which may be configured as an LED array, for example, to manipulate the illumination. In general, the distances between the detection optical unit 2 or the flat panel detector 4 and the sample carrier 1 and between the sample carrier 1 and the illumination pattern 3, respectively, can be chosen in the range between 0.5cm and 10cm, but in particular can be larger in order to detect the entire sample carrier 1 if necessary.

In principle, the distances can also be chosen freely, as long as various conditions are met: (i) the sample carrier 1 must be located in the focal plane of the detection optics 2; (ii) the structures of the sample carrier 1 to be analyzed, for example the edge of a cover slip, must still be resolvable by the camera; (iii) the structure generated by the illumination pattern 3 must be identifiable in the image registered by the flat-panel detector 4, i.e. the individual illumination elements must be distinguishable and should advantageously cover the entire structure to be analyzed. The latter can be influenced by the choice of the size of the array of illumination elements, the size of the illumination elements and their spacing, so that, for example, an array of LEDs is very suitable for large structures such as sample carriers. If the illumination elements do not completely cover the structure, a combined overview contrast image can be generated with corresponding calibration.

The illumination is explained in more detail below with reference to fig. 3, wherein LEDs are used as illumination elements by way of example. Here, only one LED 13 of the illumination pattern 3, which is arranged behind the sample carrier 1, is picked out as an example. The distance is chosen arbitrarily; in practice, the illumination pattern 3 may be arranged further away. However, an arrangement directly behind the sample carrier 1 places the illumination pattern close to the focal plane and thereby improves the resolution of the individual illumination elements.

Recording is performed with a non-telecentric detection optical unit 2. Each LED 13 that is switched on serves as bright-field or dark-field illumination, depending on the field region of the sample. For a first field region 14, the LED 13 is arranged directly behind the sample or sample carrier, so that the transmitted-light component dominates; the LED 13 thus serves as bright-field illumination and generates a corresponding bright-field contrast for this first field region 14. For a second field region 15 next to the LED 13, however, the LED 13 acts as dark-field illumination and can be used to generate a dark-field contrast. If an overview contrast image is to be generated in bright-field mode, a scattering disk can optionally be introduced between the array of illumination elements and the sample carrier 1, since a diffuse light source is advantageous for bright-field contrast but not for dark-field contrast. The scattering disk can be introduced separately, but it can also be positioned permanently in the light path and be switchable, so that light is diffused only when the scattering disk is switched on. By generating overview raw images with different illumination, which can be realized in particular with different illumination patterns 3, bright-field and/or dark-field information about the sample carrier 1 as well as about the sample itself can be extracted and presented in the overview contrast image.

Different illuminations are selected depending on the information to be extracted. This information usually includes the type of sample carrier, for example whether it is a plain slide in the sense of a small glass plate, a simple petri dish, a multiwell sample carrier with a plurality of open wells, or a sample carrier with several closed sample chambers (chamber slides), which are accordingly provided with cover slips. The sample number is often given on the sample carrier 1, for example as a handwritten marking, but more often encoded as a barcode or QR code, which can be interpreted in conjunction with a corresponding database. In particular when sample carriers with a plurality of sample chambers or wells are used, possible sample regions are to be identified. It is also desirable to be able to identify the samples themselves and to detect defects such as bubbles, contamination or empty sample chambers. Furthermore, the presence of immersion liquid and its form, volume and position should be identifiable.

After the required information has been extracted automatically from the overview contrast image by the image-processing algorithms, the following steps can be carried out automatically or semi-automatically depending on the task. As an example, the selection of which microwell of a multiwell plate is to be examined can be made automatically or entirely at the PC, without the user having to look through the eyepiece again, while an overview image of the complete sample carrier 1 can nevertheless be presented.

Different illuminations and calculation algorithms suitable for them are described below with reference to figs. 4-12. For illustration purposes, an LED array is always used here, but other illumination elements, such as those mentioned above by way of example, can be used just as well.

A first possibility consists in generating different illuminations by different illumination patterns 3, which are selected according to the information to be extracted. Such illumination patterns are shown in fig. 4-9.

For example, different illumination patterns 3 can be generated randomly. This is illustrated in fig. 4, whose six panels show different illumination patterns in which the LEDs 16 of the LED array 17 emit white light and are randomly switched on or off. Switched-on LEDs are shown as small circles; switched-off LEDs are omitted, so that the different illumination patterns can be clearly distinguished.

A simple possibility for generating the random illumination patterns 3 consists in using pulse-width-modulated illumination elements whose pulse width is chosen to be longer than the integration time of the detector unit for recording the overview raw images; conversely, the integration time can also be specified to match a given pulse width. During the integration time of the camera, some LEDs are then switched on and others are switched off, because the pulse-width modulation of the individual LEDs 16 is not synchronized. In this case the LEDs 16 of the LED array 17 do not have to be individually controllable or switchable.
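To illustrate the unsynchronized pulse-width modulation described above, the following sketch (an illustration only, not part of the patent) simulates which LEDs of an array appear switched on during one camera integration window; array size, period and duty cycle are arbitrary example values.

```python
# Illustrative sketch: which LEDs of an N x N array appear "on" during one
# camera integration window when every LED runs the same pulse-width
# modulation, but with unsynchronized (random) phases.
import numpy as np

def random_pattern_from_pwm(n_leds=8, period=10.0, duty_cycle=0.5,
                            integration_start=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Random phase offset per LED, because the PWM cycles are not synchronized.
    phases = rng.uniform(0.0, period, size=(n_leds, n_leds))
    # Position inside the PWM period at the moment the exposure starts.
    t = (integration_start + phases) % period
    # An LED counts as "on" if the exposure falls into its on-phase. This
    # approximation only holds if the pulse width (duty_cycle * period) is
    # longer than the integration time, as required in the text.
    return t < duty_cycle * period

pattern = random_pattern_from_pwm()
print(pattern.astype(int))
```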

It is of course also possible to control the illumination elements individually or in groups and to switch them to different illumination patterns. In this case, a first part of the illumination elements is switched to emit light, and at least one second part is switched to emit no light, light of another color or light of another polarization. When the random illumination patterns of fig. 4 are generated, the first part of the illumination elements is selected randomly for each illumination pattern 3, and the second part does not emit light. To obtain a high-quality overview contrast image, relatively many overview raw images are required, so that recording them usually takes several seconds. This time can be shortened if the second part of the illumination elements is not switched off but emits light of another color, so that two overview raw images are recorded simultaneously and subsequently separated by color.

Depending on the calculation algorithm, the overview contrast image can be generated either in dark-field mode or in bright-field mode. The calculation can be based, for example, on a top-hat transform for the bright-field contrast image or a black-hat transform for the dark-field contrast image, each followed by a pixel-by-pixel maximum projection; both transforms can be applied equally to the overview raw images, so that overview contrast images can be generated in bright-field as well as in dark-field mode. With this illumination, glass edges, i.e. the edges of the sample carrier 1 or of the cover slip, can be made clearly visible and show a high contrast compared to the actual sample.
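A minimal sketch of this calculation, assuming OpenCV/NumPy and an illustrative structuring-element size that would have to be matched to the imaged LED spacing:

```python
# Sketch of the described calculation: a top-hat transform (bright field) or
# black-hat transform (dark field) is applied to every overview raw image,
# followed by a pixel-by-pixel maximum projection over the stack.
import cv2
import numpy as np

def overview_from_raw_images(raw_images, mode="brightfield", kernel_size=51):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    op = cv2.MORPH_TOPHAT if mode == "brightfield" else cv2.MORPH_BLACKHAT
    transformed = [cv2.morphologyEx(img, op, kernel) for img in raw_images]
    # Pixel-by-pixel maximum projection over all transformed raw images.
    return np.maximum.reduce(transformed)
```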

When random illumination patterns are used, relatively many images (typically between 30 and 50) have to be recorded to obtain a satisfactory contrast in the overview contrast image, whereas other illumination patterns require considerably fewer images. Such patterns, of the checkerboard type, are shown in figs. 5 and 6. Here the switched-on and switched-off illumination elements (LEDs 16) have a checkerboard-like distribution: a first part, the switched-on illumination elements, corresponds for example to the white fields of the checkerboard, and a second part, the switched-off LEDs 16, to the black fields. Two overview raw images are required, which are generated with mutually complementary, i.e. inverted, illumination patterns. These two checkerboard patterns are shown in fig. 5. The first part of the illumination elements is formed by the switched-on LEDs 16, only every other LED 16 in each row and column being switched on. In the right one of the two illumination patterns, the LEDs 16 that are switched off in the left image are switched on, and vice versa. The illumination pattern may extend over the entire LED array 17 or only over a region of interest of the sample carrier 1, in order to reduce the total amount of light and not to load the sample unnecessarily. Checkerboard-type illumination is suitable in particular for overview contrast images in dark-field mode; a sorted (rank-order) projection algorithm based on a pixel-by-pixel comparison, in particular the minimum projection, serves as the calculation algorithm. Only two overview raw images are required, and the method provides good contrast both for the sample carrier 1 and the sample region (e.g. the glass edge of a cover slip) and for the sample itself.
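The two complementary checkerboard patterns can be expressed compactly; the following NumPy sketch (array size is an illustrative assumption) generates both masks. The dark-field overview contrast image would then follow from a pixel-by-pixel minimum of the two raw images, e.g. np.minimum(raw_image_1, raw_image_2).

```python
# Sketch of the two mutually complementary (inverted) checkerboard patterns
# for an LED array; True means "LED switched on".
import numpy as np

def checkerboard_patterns(rows=16, cols=16):
    r, c = np.indices((rows, cols))
    first = (r + c) % 2 == 0   # "white fields" of the checkerboard
    second = ~first            # inverted pattern for the second raw image
    return first, second
```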

When an illumination pattern as shown in fig. 5 is used, an LED array 17 with single-color LEDs emitting, for example, white light or one of the primary colors R, G, B may be used; for detection, either a color or a monochrome flat panel detector can be used.

If a first part of the illumination elements emits light and a second part does not, two overview raw images, which have to be recorded one after the other, are required in the case shown in fig. 5 to generate the overview contrast image. However, the overview raw images can also be recorded simultaneously with the camera and separated afterwards if the light emitted by the different parts of the illumination elements differs pairwise in polarization. To generate the checkerboard-type illumination pattern, an LED array 17 can be used which is provided, in accordance with the pattern, with complementary polarizing filters alternating in rows and columns. The polarizing filters can also be switchable. In this way, both overview raw images can be generated with a single recording and only have to be separated afterwards, for which purpose the polarization must also be detected.

Another possibility is for the different parts of the illumination elements to emit light of pairwise different colors, that is to say, for example, in the case of four parts of the illumination elements, each part emits light of a different color. This is again illustrated in fig. 6 with reference to a checkerboard-type illumination pattern. The first part of the illumination elements here comprises blue LEDs 18, which accordingly emit light in the blue wavelength range, while the second part comprises red LEDs 19, i.e. LEDs emitting light in the red wavelength range. The two grids are nested within each other so that a red-blue checkerboard appears on the LED array 17. The sample carrier 1 is illuminated with this illumination pattern and a single recording is taken, which already comprises the two necessary overview raw images. By separating the recording into its color channels, the individual overview raw images are obtained. Depending on the configuration of the camera, i.e. on the number of color channels, and on the LEDs 16 of the LED array 17, three or more patterns can also be nested within one another, so that the sample or sample carrier 1 is illuminated with them simultaneously. Ideally, the LEDs 16 of the LED array 17 and the color channels of the camera used for recording are matched to one another; the three primary-color channels red, green and blue can usually be used without further measures, since even LEDs emitting white light are composed of a combination of red, green and blue sub-LEDs.
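Separating the single recording into the two overview raw images then amounts to a simple color-channel split; a sketch assuming OpenCV's BGR channel order and an illustrative file name:

```python
# Sketch: splitting one color recording made with the red/blue checkerboard
# illumination into the two overview raw images.
import cv2

recording = cv2.imread("overview_recording.png")   # file name is illustrative
raw_blue = recording[:, :, 0]   # raw image belonging to the blue LEDs
raw_red = recording[:, :, 2]    # raw image belonging to the red LEDs
```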

Instead of a checkerboard-shaped illumination pattern, other illumination patterns can also be used in which the first part, the corresponding second part and possibly further parts of the illumination elements have a regular rather than a random distribution. Fig. 7 shows an example of such a so-called cross pattern, in which four different illumination patterns are generated and accordingly four overview raw images are required; the contrast is slightly better than with a checkerboard-type illumination pattern. Four separate overview raw images are also required when a half-pupil pattern is used, as shown in fig. 8. To generate the overview raw images, the LED array 17 is divided into halves, so that the first part of the illumination elements lies in one half and the second, switched-off part of the illumination elements lies in the other half. The second overview raw image is recorded with the complementary distribution, that is to say, if the first part of the illumination elements initially fills the left half of the LED array 17, the right half is filled for the second overview raw image. By dividing the LED array into an upper and a lower half, i.e. with a dividing direction perpendicular to the first one, two further overview raw images are generated. In the case of transparent sample carriers with vertical elements, in particular so-called chamber slides or transparent multiwell plates, a high contrast can be achieved in this way. The overview contrast image is preferably generated in dark-field mode, for which a calculation algorithm based on a pixel-by-pixel projection, preferably a sorted (rank-order) projection algorithm, is used. In this case, the overview raw images are compared pixel by pixel, and the intensity value of one of the pixels is selected for the corresponding position in the overview contrast image according to the projection condition.
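A sketch of the four half-pupil patterns as boolean masks (the array size is again an illustrative assumption):

```python
# Sketch of the four half-pupil illumination patterns: the LED array is
# split into left/right and top/bottom halves, giving four raw images.
import numpy as np

def half_pupil_patterns(rows=16, cols=16):
    r, c = np.indices((rows, cols))
    return [
        c < cols // 2,    # left half switched on
        c >= cols // 2,   # right half switched on
        r < rows // 2,    # upper half switched on
        r >= rows // 2,   # lower half switched on
    ]
```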

Another configuration of the method makes use of the fact that each white-emitting LED is formed by three separate, mutually adjacent sub-LEDs which emit light in the primary colors red, green and blue, respectively. Different illuminations can thus be provided by illumination with the primary colors from slightly different angles, while the illumination pattern itself can remain the same. In this case, a calculation algorithm is selected with which an overview contrast image in bright-field mode is generated.

Another configuration of the method consists in generating the illumination pattern 3 only in at least one section of the array of illumination elements. Different illuminations are then generated by a scanning movement of the at least one section across the array; the illumination elements outside the section are switched so that they do not emit light. This is shown in fig. 9 for the example of a checkerboard pattern, from which a small section of four LEDs 16 is selected; in a sequence of illumination patterns this section is first scanned along a row and then moved row by row across the LED array 17. Compared to the checkerboard pattern described in connection with fig. 5, more overview raw images are required here, with a contrast of comparable quality. Advantageously, however, the amount of light emitted per unit time by the LED array 17 is considerably smaller than when two full patterns are used; the background brightness is thus reduced and there are fewer disturbing reflections in the overview raw images. The time required for recording the overview raw images can be reduced, for example, by simultaneously scanning spatially distant regions of the image of the sample carrier 1 with sections carrying the illumination pattern and/or by generating, in the sections to be moved, illumination patterns of different colors that can be separated afterwards.
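The scanning movement of a small section can be written as a generator over illumination patterns; the section size of 2 x 2 LEDs and the array size are illustrative assumptions:

```python
# Sketch: moving a small section (here 2 x 2 LEDs switched on) step by step
# over the otherwise dark LED array, yielding one illumination pattern per
# position.
import numpy as np

def scanned_patterns(rows=16, cols=16, section=2):
    for r0 in range(0, rows, section):
        for c0 in range(0, cols, section):
            pattern = np.zeros((rows, cols), dtype=bool)
            pattern[r0:r0 + section, c0:c0 + section] = True
            yield pattern
```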

Other patterns can also be generated in the section moved over the LED array 17; for example, all LEDs 16 except one can be switched on, so that the section contains only a single switched-off LED, and this section is then moved. Conversely, it is also possible to switch on only one LED, switch off all the others, move this section over the array and record overview raw images in the process.

In particular, a sorted (rank-order) projection algorithm, especially in the form of a minimum projection, can be used as the calculation algorithm in order to obtain an overview contrast image in dark-field mode.
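A minimal NumPy sketch of such a sorted (rank-order) projection, with the minimum projection as the special case rank = 0:

```python
# Sketch of a sorted (rank-order) projection: the raw images are compared
# pixel by pixel and, per pixel position, the k-th smallest intensity is
# written into the overview contrast image.
import numpy as np

def sorted_projection(raw_images, rank=0):
    stack = np.stack(raw_images, axis=0)
    stack.sort(axis=0)      # sorts the intensities per pixel position
    return stack[rank]      # rank 0 = minimum projection, rank -1 = maximum
```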

To obtain a good dark-field signal for the dark-field contrast, it may often be necessary to overexpose (saturate) the bright-field areas on the camera. For a subsequent bright-field recording, a further recording without saturated bright-field areas may then have to be made.

Finally, a further configuration of the method is explained below with reference to figs. 10 to 12. Here, the different illuminations are not generated with different illumination patterns but by moving the sample carrier 1 laterally relative to the illumination pattern 3 between the recordings; with respect to fig. 1, this corresponds to a movement perpendicular to the optical axis. Either the sample carrier 1 can be moved relative to the illumination pattern 3 or vice versa, and both can also be moved relative to each other. In general, moving only the sample carrier 1 is easier to realize, since it is usually mounted on a stage that is movable in all three spatial directions. The LEDs 16 of the LED array 17 are switched to a fixed pattern, for example a regular grid. Between two overview raw images, the illumination pattern 3 and/or the sample carrier 1 is moved in a plane orthogonal to the optical axis of the camera. Figs. 10-12 each show different sample carriers in four different positions relative to the illumination pattern 3, which is realized with an LED array 17 and LEDs 16: a slide 20 with a cover slip 21 in fig. 10, a multiwell plate 22 with microdishes 23 in fig. 11, and a chamber slide 24 with chambers 25 in fig. 12. In all cases, the resulting overview contrast image is shown on the right-hand side of the figure. The overview contrast image can be generated both in bright-field and in dark-field mode: for an overview contrast image in dark-field mode, the minimum projection (as a special case of the sorted projection) is used as the calculation algorithm, while the maximum projection can be used for an overview contrast image in bright-field mode. To compensate for brightness differences, a shading correction can be carried out after the calculation here and, if necessary, in all other cases. In the case of segmentation-based calculations, the shading correction can also be applied to the overview raw images before the calculation.
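The shading correction mentioned here can be implemented, for example, as a simple flat-field correction; the reference images in the following sketch are assumptions (e.g. a recording without a sample carrier, or a strongly low-pass-filtered version of the image):

```python
# Sketch of a simple shading (flat-field) correction applied after the
# projection; "flat" is a reference image of the illumination, "dark" an
# optional camera dark image.
import numpy as np

def shading_correction(image, flat, dark=None):
    image = image.astype(np.float64)
    flat = flat.astype(np.float64)
    dark = np.zeros_like(flat) if dark is None else dark.astype(np.float64)
    gain = flat - dark
    corrected = (image - dark) / np.clip(gain, 1e-6, None)
    return corrected * gain.mean()   # rescale to the original intensity range
```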

Because of the relative movement between the recordings of the overview raw images, it is necessary for the correct application of the calculation algorithm to know how the sample carrier 1 or the illumination pattern moves in the images. For this purpose, the camera or detection optics 2 must be calibrated relative to the sample carrier 1 or to the stage on which the sample is mounted, so that sample-carrier coordinates can be mapped onto image coordinates and vice versa. To this end, a calibration pattern is first placed or clamped on the stage at the position of the sample carrier. In this way, the mapping - a homography, i.e. a mapping of one two-dimensional plane onto another two-dimensional plane in space - can be estimated. The calibration can of course also be dispensed with if the relative movement can be determined by image analysis or by a separate measuring system, or it can be carried out beforehand on the basis of objective parameters and distances.
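A sketch of this calibration step using OpenCV, with purely illustrative point correspondences between stage coordinates (e.g. in millimetres) and image coordinates (in pixels):

```python
# Sketch: estimating the homography that maps stage (sample-carrier)
# coordinates onto image coordinates from corresponding points of a
# calibration pattern; the numbers are illustrative only.
import cv2
import numpy as np

stage_points = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=np.float32)        # mm
image_points = np.array([[105, 98], [860, 102], [855, 858], [102, 853]], dtype=np.float32)  # px

H, _ = cv2.findHomography(stage_points, image_points)

def stage_to_image(xy, H=H):
    # Maps one stage coordinate (x, y) into image coordinates.
    p = cv2.perspectiveTransform(np.array([[xy]], dtype=np.float32), H)
    return p[0, 0]
```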

In terms of quality, the best contrast is provided by the overview contrast image determined with a static illumination pattern, especially when LEDs are used, since with dynamic patterns, i.e. patterns that change between recordings, the switched-off LEDs can produce a rather strong background signal due to reflections at the sample carrier. However, these artifacts can be eliminated in the evaluation, i.e. left out of account, by corresponding image-processing algorithms, for example using a deep-learning algorithm.

Another possibility for generating different illuminations with an unchanged illumination pattern, and without moving the sample or sample carrier 1 laterally relative to the illumination pattern 3, consists in combining several recordings made with different exposure times in the manner of an HDR recording (HDR: high dynamic range). The overview contrast image can then be combined as an HDR image from, for example, three overview raw images recorded with different exposures.
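One possible way to combine the differently exposed raw images is exposure fusion; the following sketch uses OpenCV's Mertens fusion as an example implementation (the text does not prescribe a specific HDR method), with illustrative file names:

```python
# Sketch: combining three raw images recorded with different exposure times
# into one image with extended dynamic range. Mertens exposure fusion does
# not need the exposure times themselves.
import cv2

exposures = [cv2.imread(p) for p in ("short.png", "medium.png", "long.png")]  # paths illustrative
merge = cv2.createMergeMertens()
fused = merge.process(exposures)                      # float image roughly in [0, 1]
result = (fused * 255).clip(0, 255).astype("uint8")   # back to 8-bit for display
```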

In this case, the position of the illumination element relative to the sample or sample carrier may also be taken into account when calculating the overview contrast image, as explained in connection with fig. 3. Bright field information is used if the illumination element directly illuminates the sample carrier or sample, and dark field information is used otherwise. Thus, the overview contrast image is a mixture of bright field contrast and dark field contrast.

The calibration pattern described above in connection with the calibration of the relative movement can also be used to correct geometric distortions in the individual images that enter an overview contrast image. In addition, background artifacts can be subtracted computationally.

After the overview contrast image has been generated, it is analyzed automatically by an image evaluation algorithm, preferably a deep-learning algorithm based on a neural network. For example, the type of sample carrier is identified, and the sample carrier can also be localized in the image. If the sample carrier bears a marking, this can likewise be determined from the contrast image; the same applies to regions on the sample carrier, such as microwells, that may contain a sample. By identifying bubbles or other artifacts with a corresponding image evaluation algorithm, in particular for sample carriers comprising a plurality of samples in separate containers, the examination time can be reduced, since regions affected by such artifacts can be excluded. Finally, if an immersion liquid is used, its volume and the form of the immersion droplet can be determined by image evaluation of the overview contrast image, and conclusions about contamination in the immersion liquid can also be drawn.
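As an illustration only, and not the implementation described here, such a neural-network-based evaluation could be sketched as a small classifier for the sample-carrier type; the class names, the backbone and the input size are assumptions, and the model would of course first have to be trained on suitable overview contrast images (recent torchvision API assumed):

```python
# Hypothetical sketch of a neural-network-based sample-carrier classification;
# classes, backbone and input size are placeholders, not taken from the text.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["slide", "chamber_slide", "multiwell_plate", "petri_dish"]  # illustrative

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

def classify_carrier(image_tensor):
    """image_tensor: normalized overview contrast image, shape (1, 3, 224, 224)."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor)
    return CLASSES[int(logits.argmax(dim=1))]
```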

This information can be displayed to the observer or user on the PC, preferably graphically, so that the user can adapt his further actions to the results of the analysis of the overview contrast image. If the microscope is operated manually by the user, it may in some cases be sufficient to present only the overview contrast image itself; however, the information obtained by the image evaluation can also be used for automatic control of the examination of the sample, in particular with a microscope. Using the overview contrast image provided to the user, it is possible to navigate over the sample with suitable controls in order to prepare further examinations. The information extracted from the overview contrast image by the image-processing algorithms can, however, also be used for subsequent processing steps, for example the automatic identification and localization of relevant structures of a sample carrier, for example a slide, or of a sample on the sample carrier, for example a tissue section, an organism or cells, in order to perform a fully automatic coarse positioning of the sample in all three spatial directions. Finally, the extracted image information also enables more powerful, faster and more efficient automated microscopy, such as high-throughput microscopy, with a smaller data volume and shorter recording times through the automatic exclusion of defective regions.

List of reference numerals

1 sample carrier
2 detection optical unit
3 illumination pattern
4 flat panel detector
5 calculation unit
6 image evaluation unit
7 control unit
8 LED array
9 deflection mirror
10 beam splitter
11 lens
12 eyepiece
13 LED
14 first field area
15 second field area
16 LED
17 LED array
18 blue LED
19 red LED
20 slide
21 cover slip
22 multiwell plate
23 microdish
24 chamber slide
25 chamber
