Three-dimensional topography measuring method, system and storage medium based on deep learning

Document No.: 551863    Publication date: 2021-05-14

Reading note: this technology, "Three-dimensional topography measuring method, system and storage medium based on deep learning", was designed and created by Cai Changqing (蔡长青) on 2021-01-13. The application discloses a deep learning-based three-dimensional topography measurement method, system and storage medium. The method comprises: projecting sinusoidal fringes onto an object to be measured and collecting a number of first fringe images and second fringe images by a twelve-step phase shift method; randomly selecting three of the first fringe images as a first input image; inputting the first input image into a convolutional neural network to obtain a first wrapped phase; randomly selecting three of the second fringe images as a second input image; inputting the second input image into the convolutional neural network to obtain a second wrapped phase; calculating the difference frequency of the first wrapped phase and the second wrapped phase to obtain a synthetic wrapped phase; unwrapping the synthetic wrapped phase to obtain a continuous phase; and obtaining the three-dimensional topography of the object to be measured from the continuous phase. Compared with the prior art, the application can realize high dynamic range imaging of three-dimensional topography, and can be widely applied in the technical field of three-dimensional measurement.

1. A three-dimensional topography measurement method based on deep learning is characterized by comprising the following steps:

projecting sinusoidal fringes of a first frequency onto an object to be measured, and acquiring a plurality of first fringe images of the object to be measured by a twelve-step phase shift method;

randomly selecting three of the first fringe images as a first input image;

inputting the first input image into a convolutional neural network to obtain a first wrapped phase;

projecting sinusoidal fringes of a second frequency onto the object to be measured, and acquiring a plurality of second fringe images of the object to be measured by the twelve-step phase shift method;

randomly selecting three of the second fringe images as a second input image;

inputting the second input image into the convolutional neural network to obtain a second wrapped phase;

calculating the difference frequency of the first wrapped phase and the second wrapped phase to obtain a synthetic wrapped phase;

unwrapping the synthetic wrapped phase to obtain a continuous phase;

and obtaining the three-dimensional topography of the object to be measured according to the continuous phase.

2. The method as claimed in claim 1, wherein the projected light intensity is greater than the upper light intensity limit of the three-step phase shift, and the projected light intensity is less than the upper light intensity limit of the twelve-step phase shift.

3. The deep learning-based three-dimensional morphology measurement method according to claim 1, wherein the training data of the convolutional neural network comprises a training input image, a wrapped phase numerator label and a wrapped phase denominator label, the training input image is three images randomly selected from training fringe images, the training fringe images are collected by a twelve-step phase shift method, and the wrapped phase numerator label and the wrapped phase denominator label are calculated according to the training fringe images.

4. The deep learning-based three-dimensional topography measurement method according to claim 1, wherein the convolutional neural network comprises a first convolutional layer, a pooling layer, an upsampling block, a second convolutional layer and a third convolutional layer, wherein the first convolutional layer abstracts a training input image into multi-scale data, and the upsampling block restores the multi-scale data to its original size.

5. The three-dimensional topography measuring method based on deep learning of claim 4, wherein a residual block is arranged behind the pooling layer, and the residual block is used for accelerating the convergence of the convolutional neural network.

6. The deep learning-based three-dimensional topography measurement method according to claim 1, wherein said convolutional neural network minimizes a loss function using an adaptive moment estimation algorithm.

7. The deep learning-based three-dimensional topography measuring method according to claim 1, wherein the testing steps of the convolutional neural network are as follows:

collecting a plurality of test fringe images by a twelve-step phase shift method;

calculating a test wrapped phase label according to the test fringe images;

randomly selecting three of the test fringe images as a test input image;

inputting the test input image into the convolutional neural network to obtain a test wrapped phase output;

calculating the phase error between the test wrapped phase label and the test wrapped phase output;

and if the phase error is larger than a threshold value, continuing to train the convolutional neural network.

8. A three-dimensional topography measurement system based on deep learning, comprising:

the projection acquisition module is used for projecting sinusoidal fringes of a first frequency onto an object to be measured, and acquiring a plurality of first fringe images of the object to be measured by a twelve-step phase shift method; and projecting sinusoidal fringes of a second frequency onto the object to be measured, and acquiring a plurality of second fringe images of the object to be measured by the twelve-step phase shift method;

the input selection module is used for randomly selecting three of the first fringe images as a first input image, and randomly selecting three of the second fringe images as a second input image;

the wrapped phase module is used for inputting the first input image into a convolutional neural network to obtain a first wrapped phase, and inputting the second input image into the convolutional neural network to obtain a second wrapped phase;

the synthetic wrapping module is used for calculating the difference frequency of the first wrapped phase and the second wrapped phase to obtain a synthetic wrapped phase;

the unwrapping module is used for unwrapping the synthetic wrapped phase to obtain a continuous phase;

and the topography measurement module is used for obtaining the three-dimensional topography of the object to be measured according to the continuous phase.

9. A three-dimensional topography measurement system based on deep learning, comprising:

at least one processor;

at least one memory for storing at least one program;

wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the deep learning based three-dimensional topography measurement method according to any one of claims 1-7.

10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for deep learning based three-dimensional topography measurement according to any one of claims 1 to 7.

Technical Field

The application relates to the technical field of three-dimensional measurement, and in particular to a three-dimensional topography measurement method, system and storage medium based on deep learning.

Background

With the development of technology, three-dimensional information acquisition and processing is becoming more and more popular. As a three-dimensional measurement technique, fringe projection profilometry has been widely used in many fields, such as the manufacturing, education and entertainment industries. A typical fringe projection system consists of a projection device and multiple cameras. The projection device projects a coded fringe pattern onto the object to be measured. Due to the limited dynamic range of the camera, in actual measurement, when the scene containing the object to be measured includes objects of high reflectivity, image saturation inevitably occurs. If the projected light intensity is simply reduced, the signal-to-noise ratio of objects with lower reflectivity in the scene becomes too low. To address this problem, researchers have proposed high dynamic range imaging methods, which can be divided into two categories: hardware-based methods and algorithm-based methods.

Hardware-based approaches mainly address the high dynamic range imaging problem by adjusting system hardware parameters. Among these, the multiple exposure method is the most widely used. In photography, an image taken with a long exposure shows details in dimly lit areas, while an image taken with a short exposure provides details in brightly lit areas. The multiple exposure method improves the dynamic range by merging such images with different details into a single high dynamic range image. Adjusting the intensity of the projected light has the same effect and can be considered another form of multiple exposure. The multiple exposure method has the advantages of a wide dynamic range and simple operation, but for an arbitrary measurement object it is difficult to find a suitable exposure time quickly because its surface reflectivity is unknown. To obtain high-quality three-dimensional measurement results, as many exposure times or projection light intensities as possible must be used, which makes the measurement inefficient. Moreover, the object to be measured must be kept still until a sufficient number of images with different exposures has been collected, which means the multiple exposure method is suitable only for static measurements. To optimize the multiple exposure method, a number of improved methods have been proposed in recent years that avoid experimental blindness and make the measurement process more efficient; however, their measurement speed is still too slow for dynamic measurements. To increase the measurement speed, researchers have proposed an optimized multiple exposure method based on digital light processing technology. This method has good real-time performance, but its improvement of the dynamic range is limited.

Unlike hardware-based methods, algorithm-based methods do not adjust hardware parameters during the measurement process, so their measurement speed is much faster than that of most hardware-based methods. They aim to remove, mathematically, the measurement errors caused by high dynamic range scenes, and can be considered post-processing compensation methods. In fringe projection profilometry, sinusoidal fringe patterns are the most commonly used patterns because of their robustness to noise and their ability to achieve high resolution. For sinusoidal fringe patterns, most algorithm-based methods aim to eliminate the phase errors caused by saturation. Their core idea is to increase the number of fringe patterns with different light intensities to ensure that there are at least three unsaturated intensity values at the same pixel. Some simple operations can achieve this effect, for example increasing the number of phase shift steps or inserting an inverted fringe pattern. Their dynamic range is thus improved at the expense of an increased number of projected fringe patterns, and projecting too many patterns introduces motion errors. Thus, algorithm-based methods also fail to resolve the conflict between improving the dynamic range and ensuring real-time performance.

Disclosure of Invention

In view of the above, an object of the present application is to provide a method, a system and a storage medium for measuring three-dimensional topography based on deep learning, so as to implement high dynamic range and high speed imaging of three-dimensional topography.

The first technical scheme adopted by the application is as follows:

a three-dimensional topography measuring method based on deep learning comprises the following steps:

projecting sinusoidal fringes of a first frequency onto an object to be measured, and acquiring a plurality of first fringe images of the object to be measured by a twelve-step phase shift method;

randomly selecting three of the first fringe images as a first input image;

inputting the first input image into a convolutional neural network to obtain a first wrapped phase;

projecting sinusoidal fringes of a second frequency onto the object to be measured, and acquiring a plurality of second fringe images of the object to be measured by the twelve-step phase shift method;

randomly selecting three of the second fringe images as a second input image;

inputting the second input image into the convolutional neural network to obtain a second wrapped phase;

calculating the difference frequency of the first wrapped phase and the second wrapped phase to obtain a synthetic wrapped phase;

unwrapping the synthetic wrapped phase to obtain a continuous phase;

and obtaining the three-dimensional topography of the object to be measured according to the continuous phase.

Further, the projected light intensity is greater than the upper light intensity limit for the three-step phase shift and the projected light intensity is less than the upper light intensity limit for the twelve-step phase shift.

Further, the training data of the convolutional neural network comprises a training input image, a wrapped phase numerator label and a wrapped phase denominator label, the training input image is three images randomly selected from training fringe images, the training fringe images are collected by a twelve-step phase shift method, and the wrapped phase numerator label and the wrapped phase denominator label are obtained by calculation according to the training fringe images.

Further, the convolutional neural network comprises a first convolutional layer, a pooling layer, an upsampling block, a second convolutional layer and a third convolutional layer, wherein the first convolutional layer abstracts the training input image into multi-scale data, and the upsampling block restores the multi-scale data to its original size.

Further, a residual block is arranged behind the pooling layer and used for accelerating the convergence of the convolutional neural network.

Further, the convolutional neural network minimizes a loss function using an adaptive moment estimation algorithm.

Further, the testing steps of the convolutional neural network are as follows:

collecting a plurality of test fringe images by a twelve-step phase shift method;

calculating a test wrapped phase label according to the test fringe images;

randomly selecting three of the test fringe images as a test input image;

inputting the test input image into the convolutional neural network to obtain a test wrapped phase output;

calculating the phase error between the test wrapped phase label and the test wrapped phase output;

and if the phase error is larger than a threshold value, continuing to train the convolutional neural network.

The second technical scheme adopted by the application is as follows:

a deep learning based three-dimensional topography measurement system comprising:

the projection acquisition module is used for projecting sinusoidal fringes of a first frequency onto an object to be measured, and acquiring a plurality of first fringe images of the object to be measured by a twelve-step phase shift method; and projecting sinusoidal fringes of a second frequency onto the object to be measured, and acquiring a plurality of second fringe images of the object to be measured by the twelve-step phase shift method;

the input selection module is used for randomly selecting three of the first fringe images as a first input image, and randomly selecting three of the second fringe images as a second input image;

the wrapped phase module is used for inputting the first input image into a convolutional neural network to obtain a first wrapped phase, and inputting the second input image into the convolutional neural network to obtain a second wrapped phase;

the synthetic wrapping module is used for calculating the difference frequency of the first wrapped phase and the second wrapped phase to obtain a synthetic wrapped phase;

the unwrapping module is used for unwrapping the synthetic wrapped phase to obtain a continuous phase;

and the topography measurement module is used for obtaining the three-dimensional topography of the object to be measured according to the continuous phase.

The third technical scheme adopted by the application is as follows:

a deep learning based three-dimensional topography measurement system comprising:

at least one processor;

at least one memory for storing at least one program;

wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the deep learning based three-dimensional topography measurement method.

The fourth technical scheme adopted by the application is as follows:

a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the deep learning based three-dimensional topography measurement method.

The embodiment of the application adopts a twelve-step multi-frequency phase shift method to collect fringe images of the object to be measured: the twelve-step phase shift improves the dynamic range of the three-dimensional topography measurement, while only three images at each frequency of the twelve-step multi-frequency phase shift images are selected for the measurement itself. Compared with the prior art, the application can therefore achieve high dynamic range, high speed imaging of the three-dimensional topography.

Drawings

FIG. 1 is a flowchart of a three-dimensional topography measurement method based on deep learning according to an embodiment of the present application;

fig. 2 is a structural block diagram of a convolutional neural network of the three-dimensional topography measuring method based on deep learning in the embodiment of the present application.

Detailed Description

The conception, specific structure and technical effects of the present application will be described clearly and completely with reference to the following embodiments and the accompanying drawings, so that the purpose, scheme and effects of the present application can be fully understood.

The present application will now be described in further detail with reference to the accompanying drawings and specific examples. The step numbers in the following embodiments are provided only for convenience of illustration; the order between the steps is not limited, and the execution order of the steps can be adapted according to the understanding of those skilled in the art. Furthermore, the term "several" used in the following embodiments means at least one.

It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples or exemplary language ("e.g.", "such as", etc.) provided herein is intended merely to better illuminate embodiments of the application and does not pose a limitation on the scope of the application unless otherwise claimed.

For many three-dimensional measurement techniques based on fringe projection profilometry, measuring objects with large variations in surface reflectivity has always been a difficult problem due to the limited dynamic range of the camera. Many high dynamic range three-dimensional measurement methods have been developed for static scenes, but they are impractical for dynamic objects. The present application addresses the loss of phase information in high dynamic range scenes, so that three-dimensional reconstruction can be performed from saturated or dark images through deep learning. With a specially designed convolutional neural network and proper training, phase information can be accurately extracted under low signal-to-noise ratio and saturation. The method succeeds in three-dimensional reconstruction of both static and high dynamic range objects, improves the dynamic range of the three-step phase shift by a factor of 4.8, requires no additional projected images or hardware adjustment during measurement, and achieves an offline three-dimensional measurement speed of about 13.89 Hz.

The present application introduces deep learning into high dynamic range three-dimensional topography measurement to resolve the trade-off between high speed and high dynamic range. Deep learning, which grew out of machine learning, has developed very rapidly and shows broad application prospects in many fields. Given a sufficient amount of training data, an optimized neural network structure can effectively eliminate the phase errors caused by high dynamic range scenes. The application further adopts a phase unwrapping technique to calculate the unwrapped phase, improving the measurement speed.

As shown in fig. 1, an embodiment of the present application provides a deep learning-based three-dimensional topography measurement method, including:

s110, projecting sinusoidal stripes with first frequency on an object to be detected, and collecting a plurality of first stripe images of the object to be detected by adopting a twelve-step phase shift method;

s120, randomly selecting three images in the first stripe image as a first input image;

s130, inputting the first input image into a convolutional neural network to obtain a first wrapping phase;

s140, projecting sinusoidal stripes with a second frequency on the object to be detected, and collecting a plurality of second stripe images of the object to be detected by adopting a twelve-step phase shift method;

s150, randomly selecting three images in the second stripe image as a second input image;

s160, inputting the second input image into a convolutional neural network to obtain a second wrapping phase;

s170, calculating the difference frequency of the first wrapping phase and the second wrapping phase to obtain a synthetic wrapping phase;

s180, unwrapping the synthetic wrapped phase to obtain a continuous phase;

and S190, acquiring the three-dimensional topography of the object to be detected according to the continuous phase.

In the process of measuring the three-dimensional topography, phase measurement profilometry is adopted to measure the wrapped phase. First, sinusoidal fringes of a first frequency are projected onto the object to be measured, and twelve first fringe images of the object are collected by the twelve-step phase shift method; then sinusoidal fringes of a second frequency are projected onto the object, and twelve second fringe images are collected by the twelve-step phase shift method.

The sinusoidal fringe pattern can be described mathematically as follows:

I_k(x, y) = A + B·cos(φ(x, y) − 2πk/N),  k = 0, 1, …, N − 1

where A is the average intensity, B is the intensity modulation, N is the number of phase shift steps, k is the phase shift index, and φ is the phase to be measured. The corresponding images captured by the camera can be represented as:

I'_k(x, y) = α·t·r·[A + B·cos(φ(x, y) − 2πk/N)] + I_n

where α is the camera sensitivity, t is the camera exposure time, r is the surface reflectivity of the object to be measured, and I_n is random noise. Assuming that α and t remain constant during the measurement, the captured light intensity depends only on r. The high dynamic range problem typically involves two cases for different values of r. In the first case, when r is small, the image brightness of the corresponding region is low, which makes the phase information susceptible to noise. In the second case, when r is large, the corresponding region is easily saturated, which results in loss of phase information.

When the value of r is small, the light intensity value of each pixel is within the dynamic range of the camera, so the phase can be calculated directly with the standard N-step phase shift algorithm. The sine and cosine weighted sums are:

S = Σ_{k=0}^{N−1} I'_k·sin(2πk/N),  C = Σ_{k=0}^{N−1} I'_k·cos(2πk/N)

φ can then be extracted by the following equation:

φ = arctan(S / C)

The random noise I_n is considered additive Gaussian noise; thus I_n is normally distributed with zero mean. On this basis, the variance σ_φ² of the phase error can be calculated as:

σ_φ² = 2σ_n² / (N·(α·t·r·B)²)

where σ_n is the standard deviation of I_n. Increasing the number of phase shift steps N therefore reduces the phase error caused by low reflectivity.

When the value of r is large, the captured image is saturated, and the loss of phase information caused by saturation becomes the main source of phase error. If the dynamic range of the camera were infinite, the exact value of φ could be calculated as:

φ = arctan( Σ_{k=0}^{N−1} I'_k·sin(2πk/N) / Σ_{k=0}^{N−1} I'_k·cos(2πk/N) )

However, the dynamic range of a real camera is not infinite. For an 8-bit camera, when saturation occurs, the image captured by the camera can be expressed as:

I'_k,sat(x, y) = min(I'_k(x, y), 255)

Therefore, the phase is calculated as:

φ_sat = arctan( Σ_{k=0}^{N−1} I'_k,sat·sin(2πk/N) / Σ_{k=0}^{N−1} I'_k,sat·cos(2πk/N) )

the nature of the phase shift algorithm is the discrete fourier transform, which by its nature introduces a phase error when the condition for integer period sampling is not met. Since intensity saturation can also be seen as a special non-linear error, the saturation error can still be reduced by increasing the number of phase shift steps.

In both cases, increasing the number of phase shift steps is a very effective way to reduce the phase error. To illustrate its effectiveness, three-step and twelve-step phase shifts were used to simulate phase errors at different reflectivities, using an 8-bit camera with a resolution of 1000 × 500. For the three-step phase shift, the image can be represented as:

for a twelve-step phase shift, the image can be represented as:

where (x, y) are the pixel coordinates of the camera and I_sn is additive Gaussian noise with mean 0 and variance 1. Given that the acceptable mean absolute error is within 0.04 rad, the corresponding measurable reflectivity ranges for the three-step and twelve-step phase shifts are about 0.6–1.0 and 0.3–3.0, respectively. The dynamic range of the N-step phase shift is defined as the quotient of the maximum measurable reflectivity divided by the minimum:

DR_N = r_max / r_min

From this, the dynamic range DR_12 of the twelve-step phase shift is 6 times the dynamic range DR_3 of the three-step phase shift. However, the twelve-step phase shift requires four times as many projected patterns, so its measurement is four times slower than the three-step phase shift. It can be seen that the conventional phase shift method cannot satisfy both the high speed and the high dynamic range requirements.
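The comparison above can be reproduced with a small Monte-Carlo sketch. This is an illustration only: the intensity model (A = B = 127.5), the fixed test phase, the trial count and the seed are assumptions of the sketch, not values from the application.

```python
import numpy as np

def simulate_error(N, r, trials=200, seed=0):
    """Mean absolute wrapped-phase error of an N-step phase shift for a
    surface of reflectivity r under additive Gaussian noise (std 1), on
    an assumed 8-bit camera (intensities clipped to [0, 255])."""
    rng = np.random.default_rng(seed)
    phi = 1.0                                   # arbitrary true phase
    k = np.arange(N)
    clean = r * (127.5 + 127.5 * np.cos(phi - 2 * np.pi * k / N))
    errs = []
    for _ in range(trials):
        I = np.clip(clean + rng.normal(0, 1, N), 0, 255)
        S = np.sum(I * np.sin(2 * np.pi * k / N))
        C = np.sum(I * np.cos(2 * np.pi * k / N))
        errs.append(abs(np.arctan2(S, C) - phi))
    return float(np.mean(errs))

# At low reflectivity the twelve-step error is markedly smaller.
e3, e12 = simulate_error(3, r=0.3), simulate_error(12, r=0.3)
print(e3, e12)   # e12 < e3
```

The ratio of the two errors reflects the 1/sqrt(N) noise suppression of the phase shift algorithm discussed above.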

After the first fringe images and the second fringe images are obtained by the twelve-step phase shift method, three images are randomly selected from the first fringe images as the first input image, and three images are randomly selected from the second fringe images as the second input image. The first input image is input into a pre-trained convolutional neural network to obtain the first wrapped phase, and the second input image is input into the same network to obtain the second wrapped phase.

To resolve the conflict between high speed and high dynamic range, a convolutional neural network is adopted to process the fringe images; the network consists of an input layer, an output layer and a plurality of hidden layers. The structure of the convolutional neural network is shown in fig. 2, and the deep learning process is divided into a training stage and a testing stage.

During the training phase, a large amount of training data needs to be collected to train the convolutional neural network. For each object to be measured, a twelve-step phase shift pattern is projected onto the object. The projected light intensity is designed to be greater than the upper measurement limit of the three-step phase shift and less than that of the twelve-step phase shift, while the camera captures the reflected fringe patterns. The twelve original images are recorded as training fringe images. The training input image of the convolutional neural network is three images selected from the twelve training fringe images, and the labels of the convolutional neural network comprise a wrapped phase numerator label S_gt and a wrapped phase denominator label C_gt, also called the ground truth, calculated as follows:

S_gt = Σ_{k=0}^{11} I_k·sin(2πk/12),  C_gt = Σ_{k=0}^{11} I_k·cos(2πk/12)
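A hedged sketch of this label generation, computing the numerator/denominator sums over the twelve phase-shifted images; the array shapes, the synthetic fringe values, and the helper name `wrapped_phase_labels` are illustrative assumptions:

```python
import numpy as np

def wrapped_phase_labels(train_images):
    """Ground-truth numerator/denominator labels from the twelve
    training fringe images (sums of I_k weighted by sin/cos terms)."""
    I = np.asarray(train_images, dtype=float)
    N = I.shape[0]                           # expected: 12
    k = np.arange(N).reshape(-1, *([1] * (I.ndim - 1)))
    S_gt = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    C_gt = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    return S_gt, C_gt

# The network input is three images randomly chosen from the twelve.
rng = np.random.default_rng(0)
phi = np.linspace(0.5, 1.5, 32)              # synthetic ground-truth phase
imgs = np.stack([120 + 100 * np.cos(phi - 2 * np.pi * k / 12)
                 for k in range(12)])
S_gt, C_gt = wrapped_phase_labels(imgs)
net_input = imgs[rng.choice(12, size=3, replace=False)]
ok = np.allclose(np.arctan2(S_gt, C_gt), phi)
print(net_input.shape, ok)                   # (3, 32) True
```

Note that the labels use all twelve images, while the network only ever sees three of them.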

Training data are input into the convolutional neural network, and after the first convolutional layer the image is abstracted into a feature map of size w × h × c. For a convolutional layer, the number of channels depends on the number of convolution filters: the more filters used, the more accurate the network estimate, but more filters also require more training time. To balance network performance and efficiency, the number of filters is set to 50. After convolution in the first convolutional layer, the value of the pixel (x, y) in the jth feature map of the ith convolutional layer can be expressed as:

where b_ij is the bias term of the jth feature map, n indexes the set of feature maps in layer (i − 1), w^{pq}_{ijn} is the filter value at location (p, q), and P and Q are the height and width of the filter, respectively. In this embodiment, the size of each filter is 3 × 3 and the convolution stride is 1. The following pooling layer simplifies the output of the first convolutional layer by nonlinear down-sampling: each output of the first convolutional layer is down-sampled by ×1, ×2, ×4 and ×8 in four different paths, so the sizes become w × h × 50, (w/2) × (h/2) × 50, (w/4) × (h/4) × 50 and (w/8) × (h/8) × 50, which is intended to perceive more surface detail. The output of the pooling layer is followed by four residual blocks, which speed up the convergence of the training phase and thereby improve training efficiency. The residual blocks are followed by an upsampling block that returns the multi-scale data to its original size. After passing through the second convolutional layer, the outputs of the four data-flow paths are concatenated by a connection block into a tensor of size w × h × 200, which is input to the third convolutional layer. The output of the third convolutional layer has two channels, denoted S and C, the numerator and the denominator of the wrapped phase, so the wrapped phase calculated by the convolutional neural network can be expressed as:

φ_CNN = arctan(S / C)
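The multi-scale data flow just described can be sanity-checked with simple shape bookkeeping; the input resolution 640 × 480 below is an assumption for illustration, while the channel count of 50 and the pooling factors come from the description above.

```python
# Shape bookkeeping for the four down-sampling paths: each path pools by
# 1x, 2x, 4x or 8x, carries 50 channels, is upsampled back to full size,
# and the four outputs are concatenated into a w x h x 200 tensor.
def path_shapes(w, h, channels=50, factors=(1, 2, 4, 8)):
    pooled = [(w // f, h // f, channels) for f in factors]
    concat = (w, h, channels * len(factors))   # after upsampling + concat
    return pooled, concat

pooled, concat = path_shapes(640, 480)
print(pooled)   # [(640, 480, 50), (320, 240, 50), (160, 120, 50), (80, 60, 50)]
print(concat)   # (640, 480, 200)
```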

s and C cannot be used directly for phase calculation. Its accuracy should be verified by a loss function. The loss function is defined as S and C relative to SgtAnd CgtThe mean square error of (d) is:

where θ is the parameter space, including the weights and biases. During training, Loss(θ) is treated as a feedback signal: based on Loss(θ), the adaptive moment estimation algorithm adjusts θ to minimize the loss function. The training phase continues until the minimum Loss(θ) is obtained.
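The adaptive moment estimation (Adam) update used to minimize Loss(θ) can be sketched generically as follows; the toy quadratic loss, the learning rate and the step count are assumptions of this sketch, not the application's training settings.

```python
import numpy as np

def adam_minimize(grad, theta0, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Generic Adam loop: exponentially decayed first/second moment
    estimates with bias correction, applied to a gradient function."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g          # first moment
        v = beta2 * v + (1 - beta2) * g * g      # second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy stand-in for Loss(theta): squared error against a fixed target,
# whose gradient is 2 * (theta - target).
target = np.array([2.0, -1.0])
theta = adam_minimize(lambda th: 2 * (th - target), np.zeros(2))
print(theta)   # converges toward [2.0, -1.0]
```

In the actual network the gradient of Loss(θ) with respect to the weights and biases is supplied by backpropagation rather than a closed form.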

During the training phase, the network never sees the test objects. Their actual wrapped phase φ_gt can be calculated as:
φ_gt(x, y) = arctan[ Σ_{n=1}^{12} I_n(x, y) sin(2πn/12) / Σ_{n=1}^{12} I_n(x, y) cos(2πn/12) ]
Unlike the training data, each test sample contains only three images, which means that its S and C are computed by the network without labels. For test data, the phase error between φ_CNN and φ_gt can be calculated. If the phase error is within an acceptable range, the network is considered ready for actual measurement; otherwise, the network parameters are adjusted and the training phase is resumed.
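The twelve-step ground-truth phase and the wrapped phase error can be sketched as follows (a pure-Python illustration under the standard N-step phase-shift model I_n = A + B·cos(φ − 2πn/12); function names are hypothetical):

```python
import math

def wrapped_phase_12step(images, x, y):
    """Ground-truth wrapped phase from twelve phase-shifted images:
    phi_gt = atan2(sum I_n sin(2*pi*n/12), sum I_n cos(2*pi*n/12)).
    Indexing n = 0..11 covers the same set of shifts as n = 1..12."""
    s = sum(I[x][y] * math.sin(2 * math.pi * n / 12) for n, I in enumerate(images))
    c = sum(I[x][y] * math.cos(2 * math.pi * n / 12) for n, I in enumerate(images))
    return math.atan2(s, c)

def phase_error(phi_cnn, phi_gt):
    """Difference wrapped into (-pi, pi], so errors near the 2*pi
    boundary are not reported as huge jumps."""
    return math.atan2(math.sin(phi_cnn - phi_gt), math.cos(phi_cnn - phi_gt))

# Simulate one pixel of twelve shifted fringes with true phase 1.0 rad.
phi_true = 1.0
images = [[[128 + 100 * math.cos(phi_true - 2 * math.pi * n / 12)]] for n in range(12)]
phi = wrapped_phase_12step(images, 0, 0)  # approximately 1.0
```

Note that phase_error(0.1, 2π − 0.1) reports a small error of 0.2 rad rather than the naive difference of 0.2 − 2π, which is the desired behavior for wrapped phases.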

The first and second wrapped phases obtained above are the principal values of the wrapped phase at two different frequencies, so their mathematical relationship can be exploited across the whole measurement scene to recover the absolute phase. In this embodiment, the difference frequency of the first and second wrapped phases is first calculated to obtain a synthetic wrapped phase, the synthetic wrapped phase is then unwrapped to obtain a continuous phase, and the three-dimensional topography of the object to be measured is finally obtained from the continuous phase.

The first and second wrapped phases cannot be used directly for three-dimensional reconstruction because a wrapped phase contains 2π discontinuities. To obtain a continuous phase profile, phase unwrapping is necessary. Many mature phase unwrapping methods have been explored and developed over the years, but not all of them suit the present scheme, because the phase accuracy of the network varies with the fringe frequency f. If the fringes are perpendicular to the horizontal axis of the projector, f can be expressed as:
f = R / λ
where R is the horizontal resolution of the projector and λ is the wavelength of the sinusoidal fringe pattern, both in pixels. In practical measurements, the network is better suited to high-frequency fringe patterns. The following table shows some measurements; the frequencies are not integer multiples of 10 because the horizontal resolution of the projector is 912 pixels, and these frequencies make the wavelength 912/f an integer.

f (N = 3)   1        48       57       76
MSE         0.9230   0.0196   0.0149   0.0135

where MSE is the mean squared error. The main reason is that the phase accuracy of the network depends on S_gt and C_gt, which are calculated with the twelve-step phase-shift algorithm. At low frequencies, in particular the fundamental frequency f = 1, twelve steps are not sufficient to eliminate saturation errors, which inevitably affects the output of the network. At high frequencies, a frequency above f = 90 reduces the contrast and sinusoidal character of the fringes owing to the limited projector resolution; an excessive frequency also amplifies the influence of saturation through the point spread function of the camera. It has been found experimentally that a suitable frequency range is about 60-90.
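The frequencies in the table above follow directly from f = R/λ with R = 912; a quick arithmetic check (illustrative only):

```python
# Projector horizontal resolution from the embodiment.
R = 912

# Wavelengths (in pixels) that divide R exactly give integer frequencies,
# matching the table: 19 -> 48, 16 -> 57, 12 -> 76.
wavelengths = (19, 16, 12)
frequencies = [R // lam for lam in wavelengths]
print(frequencies)  # [48, 57, 76]
```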

In some embodiments, stereo phase unwrapping is used to handle phase unwrapping. A typical fringe projection profilometry system using stereo phase unwrapping consists of two cameras and one projector. Based on the geometric relationship between the two cameras, stereo phase unwrapping can compute the unwrapped phase without resorting to a fundamental-frequency fringe pattern. A further advantage of stereo phase unwrapping is that its robustness can be improved by depth constraints: the measurement range can be roughly estimated from the motion state of the measured object and the parameters of the system, and by restricting the measurement range between minimum and maximum depth boundaries, some candidates can be discarded to reduce the amount of computation.
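The depth-constraint pruning step can be illustrated with a small sketch (names and data are hypothetical; in practice the candidate 3D points come from triangulating each possible fringe order with the calibrated camera pair):

```python
def prune_by_depth(candidates, z_min, z_max):
    """Keep only candidate 3D points (x, y, z) whose depth z lies inside
    the estimated measurement range [z_min, z_max]."""
    return [p for p in candidates if z_min <= p[2] <= z_max]

# Three hypothetical candidates for one pixel, one per fringe-order guess;
# only the one inside the 300-700 depth range survives.
candidates = [(0.0, 0.0, 150.0), (0.0, 0.0, 420.0), (0.0, 0.0, 980.0)]
kept = prune_by_depth(candidates, 300.0, 700.0)
print(kept)  # [(0.0, 0.0, 420.0)]
```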

The phase-unwrapping problem can also be addressed with multi-wavelength temporal phase unwrapping. Two frequencies are first selected, e.g. 57 and 76, denoted f_1 and f_2, and the corresponding wavelengths of the fringe images are:
λ_1 = R / f_1 = 16 pixels,  λ_2 = R / f_2 = 12 pixels
the first wrapped phase and the second wrapped phase may be computed to obtain a composite wrapped phase. Synthesizing the wavelength λ of the pattern from the multi-wavelength time-phase unwrappingsyFrequency fsyAnd phase phisyCan be calculated as:
λ_sy = λ_1 λ_2 / (λ_1 − λ_2),  f_sy = f_2 − f_1 = R / λ_sy
φ_sy(x, y) = [φ_1(x, y) − φ_2(x, y)] mod 2π

where φ_1 and φ_2 are the wrapped phases corresponding to f_1 and f_2, respectively, and mod is the modulo operation. The continuous phase φ_final can then be calculated with the help of the synthetic phase:
φ_final(x, y) = φ_1(x, y) + 2π · Round[ (φ_sy(x, y) · f_1 / f_sy − φ_1(x, y)) / (2π) ]
where Round[X] rounds each element of X to the nearest integer. In φ_final, the synthetic frequency f_sy is low enough to avoid phase ambiguity over a large measurement range, while the high-frequency phase preserves high phase accuracy. In combination with stereo phase unwrapping, a high-accuracy unwrapped phase can be obtained, and the 3D coordinates can then be retrieved through a parameter matrix derived from the calibration parameters.
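The synthetic-phase and Round-based unwrapping steps can be sketched numerically for one pixel (a pure-Python illustration; since f_2 > f_1 here, the beat phase is taken as φ_2 − φ_1, which matches the formula above up to sign convention, and function names are hypothetical):

```python
import math

TWO_PI = 2 * math.pi

def synthetic_phase(phi1, phi2):
    """Beat (difference-frequency) wrapped phase. With f2 > f1 we take
    phi2 - phi1 so the synthetic phase grows with the absolute phase."""
    return (phi2 - phi1) % TWO_PI

def unwrap_with_synthetic(phi1, phi_sy, f1, f_sy):
    """Temporal unwrapping: phi_sy * f1 / f_sy estimates the absolute
    phase at frequency f1, and Round picks the integer fringe order."""
    k = round((phi_sy * f1 / f_sy - phi1) / TWO_PI)
    return phi1 + TWO_PI * k

# f1 = 57, f2 = 76 -> synthetic frequency f_sy = 19 (wavelength 912/19 = 48 px).
f1, f2 = 57, 76
f_sy = f2 - f1

# Simulate one pixel with absolute phase 10.0 rad at frequency f1
# (small enough that the synthetic phase itself is unambiguous).
Phi = 10.0
phi1 = Phi % TWO_PI
phi2 = (Phi * f2 / f1) % TWO_PI
phi_sy = synthetic_phase(phi1, phi2)
unwrapped = unwrap_with_synthetic(phi1, phi_sy, f1, f_sy)  # approximately 10.0
```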

The application also provides a deep-learning-based three-dimensional topography measurement system, comprising:

the projection acquisition module is used for projecting sinusoidal stripes with a first frequency on an object to be detected and acquiring a plurality of first stripe images of the object to be detected by adopting a twelve-step phase shift method; projecting sinusoidal stripes with a second frequency on the object to be detected, and acquiring a plurality of second stripe images of the object to be detected by adopting a twelve-step phase shift method;

the input selection module is used for randomly selecting three images in the first stripe image as a first input image; randomly selecting three images in the second stripe image as a second input image;

the wrapping phase module is used for inputting the first input image into a convolutional neural network to obtain a first wrapping phase; inputting the second input image into a convolutional neural network to obtain a second wrapping phase;

the synthetic wrapping module is used for calculating the difference frequency of the first wrapping phase and the second wrapping phase to obtain a synthetic wrapping phase;

the wrapping and unwrapping module is used for unwrapping the synthetic wrapping phase to obtain a continuous phase;

and the appearance measuring module is used for acquiring the three-dimensional appearance characteristics of the object to be measured according to the continuous phases.

The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.

The application also provides a deep-learning-based three-dimensional topography measurement system, comprising:

at least one processor;

at least one memory for storing at least one program;

the at least one program, when executed by the at least one processor, causes the at least one processor to implement the deep-learning-based three-dimensional topography measurement method.

The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.

In addition, a storage medium is further provided, which stores processor-executable instructions that, when executed by a processor, perform the steps of the deep-learning-based three-dimensional topography measurement method of any one of the above method embodiments. The storage medium may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The contents of the foregoing method embodiments are all applicable to this storage-medium embodiment; the functions specifically implemented by this embodiment are the same as those of the foregoing method embodiments, and the beneficial effects achieved are likewise the same.

It should be appreciated that the layers, modules, units, platforms, and/or the like included in an embodiment system of the application may be implemented or embodied by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.

Moreover, the data processing flows performed by the layers, modules, units, and/or platforms included in the system embodiments of the present application may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The data processing flows correspondingly performed by the layers, modules, units and/or platforms included in the system of embodiments of the present application may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or a combination thereof. The computer program includes a plurality of instructions executable by one or more processors.

Further, the system may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. The data processing flows correspondingly executed by the layers, modules, units and/or platforms included in the system of the present application may be implemented in machine readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, an optical read and/or write storage medium, a RAM, a ROM, etc., so that it may be read by a programmable computer, and when the storage medium or device is read by a computer, may be used to configure and operate the computer to perform the processes described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The present application also includes the computer itself when programmed according to the methods and techniques described herein.

The above description is only a preferred embodiment of the present application, and the present application is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present application should be included in the protection scope of the present application as long as the technical effects of the present application are achieved by the same means. Various modifications and variations of the technical solution and/or embodiments thereof are possible within the protective scope of the present application.
