Automatic focusing method of full-glass-slide imaging scanner and image acquisition method thereof

Document No.: 1589594    Publication date: 2020-02-04

Reading note: this technology, "Automatic focusing method of full-glass-slide imaging scanner and image acquisition method thereof", was designed and created by 梁毅雄, 何柱君, 向遥, 刘晴 and 刘剑锋 on 2019-11-04. Its main content is as follows: the invention discloses an automatic focusing method of a full-glass-slide imaging scanner, specifically: acquiring an imaging picture at the current focus point and extracting focus features; fusing the focus features with the hidden features to generate aggregation features and new hidden features; predicting the defocus distance of the focus lens; judging and repeatedly adjusting the defocus distance to obtain a final focus point; and comparing the image definition at the final focus point with the image definition at the several focus points closest to it, taking the focus point corresponding to the image with the highest definition as the final optimal focus point. The invention also discloses an image acquisition method comprising the automatic focusing method of the full-glass-slide imaging scanner. The method has high accuracy, fast focusing speed and high efficiency.

1. An automatic focusing method of a full-slide imaging scanner specifically comprises the following steps:

S1, acquiring an imaging picture of the current focus point;

S2, extracting the focus features of the imaging picture obtained in step S1;

S3, fusing the focus features obtained in step S2 with the hidden features to generate aggregation features and new hidden features;

S4, predicting the defocus distance of the focus lens according to the aggregation features and the new hidden features obtained in step S3; the defocus distance is defined as the offset of the optimal focus point relative to the current focus point;

S5, judging and repeatedly adjusting the defocus distance predicted in step S4 to obtain a final focus point;

S6, comparing the image definition at the final focus point obtained in step S5 with the image definition at the several focus points closest to the final focus point, and taking the focus point corresponding to the image with the highest definition as the final optimal focus point, thereby completing the automatic focusing of the full-glass-slide imaging scanner.

2. The method of claim 1, wherein in step S2 the focus features of the imaging picture obtained in step S1 are extracted, specifically, by using a convolutional neural network.

3. The method of claim 2, wherein extracting the focus features of the imaging picture obtained in step S1 comprises cropping the original imaging picture obtained in step S1 to obtain a plurality of sub-pictures and extracting the focus features of the sub-pictures.

4. The method of claim 3, wherein the original imaging picture obtained in step S1 is cropped to obtain the plurality of sub-pictures as follows: with the lower left corner of the original imaging picture obtained in step S1 as the origin, square region pictures of side length L_set, centered respectively at (0.2L, 0.2W), (0.2L, 0.8W), (0.8L, 0.2W), (0.5L, 0.5W) and (0.8L, 0.8W), are cropped as the sub-pictures; L is the length of the original imaging picture, and W is the width of the original imaging picture.

5. The method of claim 4, wherein in step S3 the focus features obtained in step S2 are fused with the hidden features to generate the aggregation features and new hidden features, specifically, by using a recurrent neural network.

6. The method of claim 5, wherein in step S4 the defocus distance of the focus lens is predicted according to the aggregation features and the new hidden features obtained in step S3, specifically, by a linear regressor.

7. The method of claim 6, wherein in step S4 the defocus distance of the focus lens is predicted according to the aggregation features and the new hidden features obtained in step S3; specifically, the predicted defocus distance Δx_i is calculated by the following formula:

Δx_i = W^T a_i + b

where W is the weight of the linear regressor, a_i is the aggregation feature obtained in step S3, and b is the bias of the linear regressor; the specific values of W and b are obtained through a training process.

8. The method of claim 7, wherein the defocus distances corresponding to the sub-pictures are averaged, and the average is used as the final defocus distance of the picture.

9. The method of auto-focusing of a full-slide imaging scanner according to any one of claims 1 to 8, wherein the step S5 of determining and repeatedly adjusting the defocus distance predicted in step S4 to obtain the final focus point comprises the following steps:

A. the defocus distance predicted in step S4 is judged by the following rule:

if the predicted defocus distance Δx_i = 0, the current focus point is determined to be the final focus point;

if the predicted defocus distance Δx_i ≠ 0, the focus lens is moved;

B. the target position x_{i+1} for moving the focus lens is calculated by the following formula:

x_{i+1} = x_i + Δx_i

where x_i is the current position of the focus lens;

C. whether the current number of adjustments has reached the set number is judged:

if the current number of adjustments has reached the set number, the focus point at the currently adjusted position of the focus lens is determined to be the final focus point;

if the current number of adjustments has not reached the set number, the process returns to step S1, and the judgment and adjustment are repeated.

10. An image acquisition method comprising the automatic focusing method of the full-slide imaging scanner of any one of claims 1 to 9, further comprising the following step:

S7, imaging at the optimal focus point obtained in step S6, so as to obtain the image with the maximum definition.

Technical Field

The invention belongs to the field of image processing, and particularly relates to an automatic focusing method of a full-glass-slide imaging scanner and an image acquisition method thereof.

Background

Full slide imaging is an important technique in digital pathology. A full slide imaging scanner scans a physical slide to form a digital slide that is easy to store, retrieve, and transmit. The scanner focuses at each field of view of the slide, scans the entire slide, and then aligns and stitches all the resulting pictures together to produce a complete, seamless image of the whole slide. Since there are thousands of fields of view per slide, the autofocus operation at each field of view becomes a major factor limiting the speed of full slide imaging scanning.

Autofocus refers to finding, among a number of focus points along the z-axis (the axis along which the focus lens moves up and down), the optimal focus point at which the captured imaging picture is sharpest. The currently common autofocus algorithm is: select dozens of focus points at equal intervals along the z-axis, move the focus lens to each focus point, evaluate the picture captured at each point, and take the picture with the maximum definition as the final output; the corresponding focus point is the optimal focus point of the automatic focusing.

However, moving the focus lens and imaging both take considerable time, so the time required for focusing increases with the number of focus points investigated. Meanwhile, because of differences in the initial position and sampling interval of the focus lens and the problem of focus-point selection, the optimal focus point found by the existing algorithm is not necessarily the actual optimal focus point, and the final picture obtained by automatic focusing is not necessarily the actually sharpest picture.
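For concreteness, the conventional sweep described above can be sketched as follows. This is an illustrative reconstruction, not the scanner's actual code: `fake_capture` and the gradient-energy definition measure are stand-ins chosen for the demo.

```python
import numpy as np

def sharpness(img):
    # Gradient-energy focus measure: sharper pictures have stronger
    # local intensity changes, so the mean squared gradient is larger.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def sweep_autofocus(capture_at, z_min, z_max, n_points=30):
    """Baseline autofocus: image at n_points equally spaced z positions
    and return the position whose picture scores highest."""
    zs = np.linspace(z_min, z_max, n_points)
    scores = [sharpness(capture_at(z)) for z in zs]  # one move + one shot per point
    return float(zs[int(np.argmax(scores))])

# Toy stand-in for the camera: image detail fades as |z - 3.0| grows.
def fake_capture(z):
    x = np.linspace(0.0, 8.0 * np.pi, 256)
    return np.tile(np.sin(10.0 * x) * np.exp(-abs(z - 3.0)), (16, 1))

best_z = sweep_autofocus(fake_capture, 0.0, 6.0, n_points=31)
```

The cost is one lens move and one exposure per candidate point, which is exactly the overhead the invention sets out to avoid.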

Disclosure of Invention

It is an object of the present invention to provide a fast, efficient and accurate auto-focusing method for a full-slide imaging scanner.

It is a further object of the present invention to provide an image acquisition method that includes the autofocus method of the all-slide imaging scanner.

The automatic focusing method of the full-glass-slide imaging scanner provided by the invention specifically comprises the following steps:

S1, acquiring an imaging picture of the current focus point;

S2, extracting the focus features of the imaging picture obtained in step S1;

S3, fusing the focus features obtained in step S2 with the hidden features to generate aggregation features and new hidden features;

S4, predicting the defocus distance of the focus lens according to the aggregation features and the new hidden features obtained in step S3; the defocus distance is defined as the offset of the optimal focus point relative to the current focus point;

S5, judging the defocus distance predicted in step S4; when a termination condition is reached, jumping to step S6, otherwise jumping to step S1 and repeating the adjustment, so as to obtain the final focus point;

S6, comparing the image definition at the final focus point obtained in step S5 with the image definition at the several focus points closest to the final focus point, and taking the focus point corresponding to the image with the highest definition as the final optimal focus point, thereby completing the automatic focusing of the full-glass-slide imaging scanner.

In step S2, the focus features of the imaging picture obtained in step S1 are extracted, specifically, by using a convolutional neural network.

The convolutional neural network is a ResNet-18 convolutional neural network, and the network parameters are obtained through a training process.

The step of extracting the focusing features of the imaged picture obtained in the step S1 is to cut the input original imaged picture obtained in the step S1 to obtain a plurality of sub-pictures, and extract the focusing features of the plurality of sub-pictures.

The original imaging picture obtained in step S1 is cropped to obtain a plurality of sub-pictures as follows: with the lower left corner of the original imaging picture obtained in step S1 as the origin, square region pictures of side length L_set, centered respectively at (0.2L, 0.2W), (0.2L, 0.8W), (0.8L, 0.2W), (0.5L, 0.5W) and (0.8L, 0.8W), are cropped as the sub-pictures; L is the length of the original imaging picture, and W is the width of the original imaging picture.

And step S3, fusing the focus feature and the hidden feature obtained in step S2 to generate an aggregate feature and a new hidden feature, specifically, fusing the focus feature and the hidden feature by using a recurrent neural network to obtain the aggregate feature and the new hidden feature.

The recurrent neural network is an LSTM (long short term memory) network, and network parameters are obtained through a training process.

And step S4, predicting the defocus distance of the focus lens according to the aggregation feature and the new hidden feature obtained in step S3, specifically predicting the defocus distance of the focus lens by a linear regressor.

In step S4, the defocus distance of the focus lens is predicted according to the aggregation features and the new hidden features obtained in step S3; specifically, the predicted defocus distance Δx_i is calculated by the following formula:

Δx_i = W^T a_i + b

where W is the weight of the linear regressor, a_i is the aggregation feature obtained in step S3, and b is the bias of the linear regressor; the specific values of W and b are obtained through a training process.

The defocus distances corresponding to the sub-pictures are averaged, and the obtained average is used as the final defocus distance of the picture.

In step S5, the defocus distance predicted in step S4 is judged and repeatedly adjusted to obtain the final focus point, specifically by the following steps:

A. the defocus distance predicted in step S4 is judged by the following rule:

if the predicted defocus distance Δx_i = 0, the current focus point is determined to be the final focus point;

if the predicted defocus distance Δx_i ≠ 0, the focus lens is moved;

B. the target position x_{i+1} for moving the focus lens is calculated by the following formula:

x_{i+1} = x_i + Δx_i

where x_i is the current position of the focus lens;

C. whether the current number of adjustments has reached the set number is judged:

if the current number of adjustments has reached the set number, the focus point at the currently adjusted position of the focus lens is determined to be the final focus point;

if the current number of adjustments has not reached the set number, the process returns to step S1, and the judgment and adjustment are repeated.

The invention also provides an image acquisition method comprising the automatic focusing method of the full-slide imaging scanner, which further comprises the following steps:

S7, imaging at the optimal focus point obtained in step S6, so as to obtain the image with the maximum definition.

The automatic focusing method and image acquisition method of the full-glass-slide imaging scanner provided by the invention adopt a convolutional neural network and a recurrent neural network, and continuously adjust the position of the focus lens to reach the optimal focus point, at which the focus lens is imaged to obtain the focused picture with the highest definition; the method is therefore highly accurate. In addition, because the method adjusts the focus lens only a limited number of times, its focusing speed is fast and its efficiency is high.

Drawings

FIG. 1 is a schematic method flow diagram of an auto-focusing method of the present invention.

FIG. 2 is a schematic diagram of cropping an original imaged picture in the method of the present invention.

Fig. 3 is a schematic method flow diagram of an image acquisition method according to the method of the present invention.

Detailed Description

Aiming at the long time consumed by the automatic focusing of a full slide scanner, the invention provides a fast and efficient automatic focusing algorithm that improves the quality of the focused picture while accelerating focusing. As described in the background art, the more focus points investigated during autofocus, the longer focusing takes. The fastest conceivable method is to calculate the defocus distance of the focus lens directly and move the lens to the optimal focus point accordingly: the lens would be moved only once, and only two exposures would be needed. However, the true defocus distance cannot be calculated exactly on existing equipment. The invention therefore extracts, from the current focused picture, focus features containing focus information, from which the defocus distance can be roughly predicted; a common tool for extracting such information from pictures is the convolutional neural network. Because this predicted defocus distance is not very accurate, the invention compensates in two ways. First, the process of predicting the defocus distance and moving the lens is iterated, so that the lens continuously approaches the optimal focus point; for speed, the number of iterations is kept small, ensuring that the lens is moved fewer times than in existing focusing algorithms. Second, the focus information contained in the focus features is enhanced: a recurrent neural network records the focus features obtained during the iterations and fuses the historical and current focus features into new aggregation features containing rich, correct focus information, from which the defocus distance of the current focus lens is predicted. As the number of iterations increases, the defocus distance predicted from the aggregation features becomes more and more accurate, ensuring that the focus lens keeps approaching the optimal focus point. Finally, the position of the focus lens is fine-tuned to further improve the quality of the focused picture.

Specifically, the entire autofocus process of the invention is iterative. In each iteration, a picture is first captured at the focus point of the current focus lens, and a convolutional neural network extracts focus features from it; a recurrent neural network then reads these focus features together with the hidden features of the previous iteration and combines them into aggregation features; finally, a linear regressor uses the new aggregation features to predict the defocus distance of the current focus lens, i.e. the offset of the optimal focus point relative to the current focus point, and the focus lens is moved by this offset. If the offset equals 0 or the number of iterations exceeds a defined threshold, the iteration stops; otherwise it continues. After the iteration stops, the two focus points immediately before and after the current focus point are investigated, and the sharpest of the three focus points' imaging pictures is selected as the final output.
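The iterative predict-and-move loop described above can be sketched as follows; `capture_at` and `predict_defocus` are hypothetical stand-ins for the scanner's camera and for the CNN + LSTM + regressor pipeline, which the patent does not express as code.

```python
def iterative_autofocus(capture_at, predict_defocus, x0, max_iters=5):
    """Sketch of the iterative loop: predict the defocus distance at the
    current lens position, move by that offset, and stop when the
    prediction is 0 or the iteration budget is exhausted."""
    x, hidden = x0, None
    for _ in range(max_iters):
        picture = capture_at(x)
        dx, hidden = predict_defocus(picture, hidden)  # regressor + recurrent state
        if dx == 0:
            break
        x = x + dx  # x_{i+1} = x_i + Δx_i
    return x

# Toy demo: a "predictor" that recovers half the remaining error each call,
# mimicking a rough but improving defocus estimate.
TRUE_FOCUS = 10.0

def fake_capture(x):
    return x  # stand-in: the "picture" is just the lens position

def fake_predict(picture, hidden):
    return round((TRUE_FOCUS - picture) * 0.5, 6), hidden

final_x = iterative_autofocus(fake_capture, fake_predict, x0=0.0, max_iters=6)
```

Even with a crude predictor, a handful of iterations lands the lens close to the true focus, which is the behavior the invention relies on.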

The automatic focusing method of the full-glass-slide imaging scanner provided by the invention specifically comprises the following steps:

S1, acquiring an imaging picture of the current focus point;

S2, extracting the focus features of the imaging picture obtained in step S1; specifically, a convolutional neural network (such as a ResNet-18 convolutional neural network) is adopted to extract the focus features of the imaging picture obtained in step S1;

In a specific implementation, the focus feature f_i is extracted using the convolutional neural network:

f_i = CNN(I_i)

where CNN(·) denotes the feature-extraction process of the convolutional neural network, and I_i is the imaging picture acquired in step S1;

Meanwhile, in a specific implementation, the input original imaging picture obtained in step S1 is cropped to obtain a plurality of sub-pictures, and the focus features of the sub-pictures are extracted; for example, as shown in fig. 2, with the lower left corner of the original imaging picture obtained in step S1 as the origin, square region pictures of side length L_set = 500 pixels, centered respectively at (0.2L, 0.2W), (0.2L, 0.8W), (0.8L, 0.2W), (0.5L, 0.5W) and (0.8L, 0.8W), are cropped as the sub-pictures;
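The five-crop scheme can be sketched as follows. This is an illustrative reading under the assumption that the picture is a NumPy array; `five_crops` is a hypothetical helper, not code from the patent, and the vertical flip models the patent's lower-left-corner origin.

```python
import numpy as np

def five_crops(img, l_set):
    """Crop five square sub-pictures of side l_set, centered at fixed
    fractions of the picture size; the origin is the lower-left corner,
    so rows are counted from the bottom (hence the vertical flip)."""
    flipped = img[::-1]              # row 0 now = bottom row
    W, L = flipped.shape             # W: width (vertical), L: length (horizontal)
    centers = [(0.2, 0.2), (0.2, 0.8), (0.8, 0.2), (0.5, 0.5), (0.8, 0.8)]
    half = l_set // 2
    crops = []
    for fx, fy in centers:           # fx along the length, fy along the width
        cx, cy = int(fx * L), int(fy * W)
        crops.append(flipped[cy - half:cy + half, cx - half:cx + half])
    return crops

img = np.zeros((2000, 2400), dtype=np.uint8)   # e.g. W = 2000, L = 2400
subs = five_crops(img, 500)                     # L_set = 500 pixels
```

Sampling five patches rather than the whole picture keeps the feature extraction cheap while still covering the center and all four quadrants.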

S3, fusing the focus features obtained in step S2 with the hidden features to generate aggregation features and new hidden features; specifically, a recurrent neural network (such as an LSTM (long short-term memory) network) is adopted to fuse the focus features and the hidden features, so as to obtain the aggregation features and the new hidden features;

In a specific implementation, the recurrent neural network fuses the focus feature and the hidden features to generate the aggregation feature a_i and the new hidden features c_i and h_i:

(a_i, c_i, h_i) = LSTM(f_i, c_{i-1}, h_{i-1})

The old hidden features c_{i-1} and h_{i-1} contain the focus information of the focus features extracted in previous iterations. The LSTM fuses the old hidden features with the focus information of the current focus feature to form an aggregation feature containing rich focus information. At the same time, the LSTM stores the focus information of the current focus feature in the hidden features and updates them, so that the current focus information is used in subsequent iterations; later aggregation features thus contain more and more focus information, and the predicted defocus distance becomes more and more accurate;
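A minimal LSTM cell illustrates how such a fusion step could work. The weights below are random placeholders, whereas the described method obtains its network parameters by training; the mapping of the aggregation feature onto the cell's output is an assumption for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(f, c_prev, h_prev, params):
    """One LSTM step fusing a focus feature f with the previous hidden
    state (c_prev, h_prev); returns the updated state."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    x = np.concatenate([f, h_prev])      # current input + old hidden feature
    forget = sigmoid(Wf @ x + bf)        # how much old focus info to keep
    inp    = sigmoid(Wi @ x + bi)        # how much new focus info to write
    out    = sigmoid(Wo @ x + bo)        # how much state to expose
    cand   = np.tanh(Wc @ x + bc)
    c = forget * c_prev + inp * cand     # updated cell state
    h = out * np.tanh(c)                 # new hidden feature; the aggregation
    return c, h                          # feature would be read from h

rng = np.random.default_rng(0)
d_feat, d_hid = 8, 4
params = [rng.normal(size=(d_hid, d_feat + d_hid)) for _ in range(4)] + \
         [np.zeros(d_hid) for _ in range(4)]
c, h = np.zeros(d_hid), np.zeros(d_hid)
for _ in range(3):                       # three iterations of fusion
    c, h = lstm_step(rng.normal(size=d_feat), c, h, params)
```

The forget/input gating is what lets the cell accumulate focus information across iterations rather than overwriting it, which matches the behavior the paragraph above describes.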

S4, predicting the defocus distance of the focus lens according to the aggregation features and the new hidden features obtained in step S3; the defocus distance is defined as the offset of the optimal focus point relative to the current focus point; specifically, the defocus distance of the focus lens is predicted by a linear regressor;

In a specific implementation, the predicted defocus distance Δx_i is calculated by the following formula:

Δx_i = W^T a_i + b

where W is the weight of the linear regressor, a_i is the aggregation feature obtained in step S3, and b is the bias of the linear regressor; the specific values of W and b are obtained through a training process;

Meanwhile, the defocus distances corresponding to the sub-pictures are averaged, and the obtained average is used as the final defocus distance of the picture;
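The regression-and-averaging step can be sketched as follows; `W`, `b` and the aggregation features are random placeholders standing in for the trained values and LSTM outputs.

```python
import numpy as np

def predict_defocus(agg_features, W, b):
    """Δx_i = Wᵀ a_i + b for each sub-picture's aggregation feature a_i,
    averaged into the picture's final defocus distance."""
    per_sub = [float(W @ a + b) for a in agg_features]  # one Δx per sub-picture
    return sum(per_sub) / len(per_sub)

rng = np.random.default_rng(1)
W, b = rng.normal(size=16), 0.1                          # placeholders for trained values
agg_features = [rng.normal(size=16) for _ in range(5)]   # five sub-pictures
dx = predict_defocus(agg_features, W, b)
```

Averaging over the five sub-pictures makes the prediction robust to a single patch that happens to contain little tissue or texture.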

S5, judging and repeatedly adjusting the defocus distance predicted in step S4 to obtain the final focus point; specifically, the judgment and adjustment comprise the following steps:

A. the defocus distance predicted in step S4 is judged by the following rule:

if the predicted defocus distance Δx_i = 0, the current focus point is determined to be the final focus point;

if the predicted defocus distance Δx_i ≠ 0, the focus lens is moved;

B. the target position x_{i+1} for moving the focus lens is calculated by the following formula:

x_{i+1} = x_i + Δx_i

where x_i is the current position of the focus lens;

C. whether the current number of adjustments has reached the set number is judged:

if the current number of adjustments has reached the set number, the focus point at the currently adjusted position of the focus lens is determined to be the final focus point;

if the current number of adjustments has not reached the set number, the process returns to step S1, and the judgment and adjustment are repeated;

S6, comparing the image definition at the final focus point obtained in step S5 with the image definition at several (for example, two) focus points closest to the final focus point, and taking the focus point corresponding to the image with the maximum definition as the final optimal focus point, thereby completing the automatic focusing of the full-slide imaging scanner.
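The final comparison in step S6 can be sketched as follows; the gradient-energy measure and `fake_capture` are illustrative choices, since the patent does not fix a particular definition metric.

```python
import numpy as np

def sharpness(img):
    # Simple gradient-energy focus measure (one common choice of
    # definition metric; the patent does not prescribe a specific one).
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def refine_focus(capture_at, x_final, step=1.0):
    """Sketch of step S6: compare the picture at the final focus point with
    the pictures at the neighboring focus points just before and after it,
    and keep the position whose picture has the highest definition."""
    candidates = [x_final - step, x_final, x_final + step]
    scores = [sharpness(capture_at(x)) for x in candidates]
    return candidates[int(np.argmax(scores))]

# Toy stand-in camera: image detail amplitude peaks at x = 4.0.
def fake_capture(x):
    t = np.linspace(0.0, 6.0 * np.pi, 128)
    return np.tile(np.sin(5.0 * t) * np.exp(-abs(x - 4.0)), (8, 1))

best = refine_focus(fake_capture, x_final=3.0, step=1.0)
```

This cheap three-point check is what lets a slightly inaccurate regression still end at the sharpest available focus point.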

Fig. 3 is a schematic flow chart of the image acquisition method of the invention. The invention also provides an image acquisition method comprising the automatic focusing method of the full-slide imaging scanner, which comprises the following steps:

Steps S1 to S6 are the same as steps S1 to S6 of the automatic focusing method described in detail above;

S7, imaging at the optimal focus point obtained in step S6, so as to obtain the image with the maximum definition.
