Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet

Document No.: 969037  Publication date: 2020-11-03

Reading note: this technique, a dynamic high-precision three-dimensional measurement method based on the fringe image conversion network FPTNet, was designed and created by Zheng Dongliang, Han Jing, Bai Lianfa, Zhao Zhuang, Yu Haotian and Zhang Zhao on 2020-09-28. Its main content is as follows: the invention relates to a dynamic high-precision three-dimensional measurement method based on a fringe image conversion network FPTNet, comprising the following steps: 1. building a fringe image conversion network; 2. acquiring fringe images and calibrating parameters; 3. converting the fringe images; 4. calculating the wrapped phase and the absolute phase; 5. reconstructing three-dimensional information. The invention can calculate an accurate absolute phase by projecting only a single frame or two frames of sinusoidal fringe images, thereby obtaining accurate three-dimensional information, accurately measuring the three-dimensional information of dynamic objects, effectively avoiding motion errors, and improving both the speed and the accuracy of three-dimensional measurement.

1. A dynamic high-precision three-dimensional measurement method based on a fringe image conversion network FPTNet, characterized by comprising the following steps:

step one: constructing a stripe image conversion network: deep learning is introduced to build the stripe image conversion network;

step two: acquiring fringe images and calibrating parameters: a camera and a projector are used for building a fringe projection profile system, calibration parameters of the projector and the camera in the fringe projection profile system are obtained through calibration, and an original fringe image of an object to be measured is acquired through the fringe projection profile system;

step three: stripe image conversion: the original stripe image is input into the stripe image conversion network, which converts it into all the required sine stripe images;

step four: wrapped phase and absolute phase calculation: phase calculation is performed on each group of sine stripe images to obtain the wrapped phase, as shown in formula (1):

\varphi(x, y) = \arctan\left[ \frac{\sum_{n=1}^{N} I_n(x, y)\,\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x, y)\,\cos(2\pi n/N)} \right]    (1)

where (x, y) are the pixel coordinates, N is the number of steps of a set of phase-shifted stripes, and I_n(x, y) is the n-th phase-shifted sine stripe image; the wrapped phase \varphi(x, y) is then unwrapped to obtain the absolute phase \Phi(x, y), as shown in formula (2):

\Phi(x, y) = \varphi(x, y) + 2\pi\, k(x, y)    (2)

where \Phi(x, y) is the absolute phase calculated from the wrapped phase \varphi(x, y), k(x, y) indicates the corresponding stripe order, and INT[x] represents the rounding operation used to determine the stripe order;

step five: reconstructing three-dimensional information: and reconstructing accurate three-dimensional information of the object to be measured by combining the absolute phase and calibration parameters of a projector and a camera in the system.

2. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 1, wherein: the fringe image conversion network comprises two sub-networks of FPTNet-C and FPTNet-U, and the FPTNet-U is divided into two types of FPTNet-UI and FPTNet-UII.

3. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 1, wherein: the original fringe image is either a single-frame sine fringe image or two-frame sine fringe images, and the single-frame sine fringe image is input into FPTNet-C and converted into phase-shifted sine fringe images of the same frequency.

4. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 3, wherein: when the phase shift of the surface of the object to be measured does not exceed one fringe period, the single-frame sine fringe image is input into FPTNet-UI and converted into sine fringe images of different frequencies, and the two-frame sine fringe images are input into FPTNet-UII and converted into sine fringe images of different frequencies.

5. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 1, wherein: the fringe image conversion network comprises a training stage and an inference stage, wherein the training stage trains the fringe image conversion network to learn the conversion of fringe images by minimizing the difference between output fringe images and the fringe images actually acquired by a fringe projection profile system, and the inference stage enables the trained fringe image conversion network to convert single-frame or two-frame sine fringe images into sine fringe images with different frequencies.

6. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 5, wherein: the stripe image conversion network implements parameter optimization using equation (3),

\mathrm{Loss}(\theta) = \frac{1}{mN} \sum_{n=1}^{N} \left\| I_n^{\mathrm{out}}(\theta) - I_n^{\mathrm{gt}} \right\|_2^2    (3)

where Loss is the loss function of the stripe image conversion network, \theta is the parameter set optimized during training of the stripe image conversion network, m represents the number of pixels of a picture, n indexes the input pictures, I_n^{out} is the n-th phase-shifted stripe picture output by the stripe image conversion network, I_n^{gt} is the n-th standard phase-shifted stripe picture, and N represents the number of phase shift steps.

7. The method for dynamic high-precision three-dimensional measurement based on the fringe image conversion network FPTNet of claim 1, wherein: the stripe image conversion network is constructed by connecting convolution layers, Batch-norm layers, ReLU layers and drop-out layers.

Technical Field

The invention relates to a dynamic high-precision three-dimensional measurement method based on a fringe image conversion network FPTNet, and belongs to the technical field of computer vision.

Background

Dynamic three-dimensional measurement is widely applied in biomedicine, reverse engineering, face recognition and other fields. Fringe projection profilometry is a typical structured light technique that is often used for its high resolution, high speed and other advantages. Fringe projection profilometry first calculates the required phase using either a phase-shift algorithm or a transform-based algorithm. The phase-shift algorithm requires at least three phase-shifted sinusoidal fringes and may therefore produce motion-induced errors for dynamic objects; transform-based algorithms can calculate the required phase from a single sinusoidal fringe, but have difficulty preserving the edges of the target. The calculated phase is wrapped in the range of (−π, π], so phase unwrapping is required to obtain the absolute phase. Phase unwrapping methods can be divided into two categories: spatial and temporal phase unwrapping. The former often fails on complex surfaces due to local error propagation. The latter is commonly used in practical measurements but requires a large number of fringe patterns, such as binary or ternary encoding, phase encoding or multi-frequency sinusoidal methods. The multi-frequency method is derived from laser interferometry and can be directly applied to fringe projection profilometry by using two or more groups of phase-shifted sinusoidal fringe images. In a noiseless system, two sets of phase-shifted sinusoidal fringe images work well, but in a practical noisy system, multiple sets of images are required.
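To illustrate the multi-frequency temporal unwrapping idea described above, the following numpy sketch (not from the patent; the fringe frequency and variable names are assumed for illustration) uses an already-unwrapped unit-frequency phase to recover the fringe order of a high-frequency wrapped phase in a noise-free simulation:

```python
import numpy as np

# Simulated absolute phase of a high-frequency fringe set (f_h fringes across the field)
f_h = 16                                # assumed high fringe frequency
x = np.linspace(0.0, 1.0, 1024)
phi_abs = 2 * np.pi * f_h * x           # ground-truth absolute phase
phi_h = np.angle(np.exp(1j * phi_abs))  # wrapped into (-pi, pi]

# Unit-frequency phase (one fringe over the field) is already unwrapped
phi_l = 2 * np.pi * x

# Temporal unwrapping: fringe order from the scaled low-frequency phase
k = np.round((f_h * phi_l - phi_h) / (2 * np.pi))
phi_unwrapped = phi_h + 2 * np.pi * k

print(np.allclose(phi_unwrapped, phi_abs))  # True in this noise-free simulation
```

In a real noisy system the rounding step can pick the wrong fringe order where the noise in `f_h * phi_l - phi_h` approaches π, which is why multiple fringe sets are needed in practice, as noted above.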

In recent years, with the increasing performance of computers, deep learning has developed rapidly and is widely applied to image transformation tasks such as segmentation, super-resolution and style transfer. Recently, researchers have attempted to introduce deep learning into the field of three-dimensional measurement to improve measurement efficiency or to solve the inherent problems of conventional methods. Some researchers have used deep learning to reduce the number of required fringes, but still need at least three fringe images to successfully unwrap the phase; others have introduced deep learning to directly convert a single-frame fringe image into a three-dimensional shape, but this produces measurement errors as high as 2 mm.

In conclusion, a reasonably designed method that reduces the number of fringe images while ensuring measurement accuracy is especially important for dynamic three-dimensional measurement.

Disclosure of Invention

In order to solve the technical problems, the invention provides a dynamic high-precision three-dimensional measurement method based on a fringe image conversion network FPTNet, which has the following specific technical scheme:

a dynamic high-precision three-dimensional measurement method based on a fringe image conversion network FPTNet comprises the following steps:

step one: constructing a stripe image conversion network: deep learning is introduced to build the stripe image conversion network;

step two: acquiring fringe images and calibrating parameters: a camera and a projector are used for building a fringe projection profile system, calibration parameters of the projector and the camera in the fringe projection profile system are obtained through calibration, and an original fringe image of an object to be measured is acquired through the fringe projection profile system;

step three: stripe image conversion: the original stripe image is input into the stripe image conversion network, which converts it into all the required sine stripe images;

step four: wrapped phase and absolute phase calculation: phase calculation is performed on each group of sine stripe images to obtain the wrapped phase, as shown in formula (1):

\varphi(x, y) = \arctan\left[ \frac{\sum_{n=1}^{N} I_n(x, y)\,\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x, y)\,\cos(2\pi n/N)} \right]    (1)

where (x, y) are the pixel coordinates, N is the number of steps of a set of phase-shifted stripes, and I_n(x, y) is the n-th phase-shifted sine stripe image. The wrapped phase is unwrapped based on the Gray code pattern to obtain the absolute phase \Phi(x, y), as shown in formula (2):

\Phi(x, y) = \varphi(x, y) + 2\pi\, k(x, y)    (2)

where \Phi(x, y) represents the absolute phase calculated from the wrapped phase \varphi(x, y), k(x, y) indicates the corresponding stripe order, and INT[x] represents the rounding operation used to determine the stripe order;

step five: reconstructing three-dimensional information: the accurate three-dimensional information of the object to be measured is reconstructed by combining the absolute phase with the calibration parameters of the projector and camera in the system.

Further, the fringe image conversion network comprises two sub-networks of FPTNet-C and FPTNet-U, and the FPTNet-U is divided into two types of FPTNet-UI and FPTNet-UII.

Further, the original fringe image is either a single-frame sine fringe image or two-frame sine fringe images, and the single-frame sine fringe image is input into FPTNet-C and converted into phase-shifted sine fringe images of the same frequency.

Further, when the phase shift of the surface of the object to be measured does not exceed one fringe period, the single-frame sine fringe image is input into FPTNet-UI and converted into sine fringe images of different frequencies, and the two-frame sine fringe images are input into FPTNet-UII and converted into sine fringe images of different frequencies.

Further, the fringe image conversion network comprises a training stage and an inference stage, wherein the training stage trains the fringe image conversion network to learn conversion of fringe images by minimizing the difference between output fringe images and fringe images actually acquired by the fringe projection profile system, and the inference stage enables the trained fringe image conversion network to convert single-frame or two-frame sine fringe images into sine fringe images with different frequencies.

Further, the stripe image conversion network realizes parameter optimization by using the formula (3),

\mathrm{Loss}(\theta) = \frac{1}{mN} \sum_{n=1}^{N} \left\| I_n^{\mathrm{out}}(\theta) - I_n^{\mathrm{gt}} \right\|_2^2    (3)

where Loss is the loss function of the stripe image conversion network, \theta is the parameter set optimized during training of the stripe image conversion network, m represents the number of pixels of a picture, n indexes the input pictures, I_n^{out} is the n-th phase-shifted stripe picture output by the stripe image conversion network, I_n^{gt} is the n-th standard phase-shifted stripe picture, and N represents the number of phase shift steps.

Further, the stripe image conversion network is constructed by connecting convolution layers, Batch-norm layers, ReLU layers and drop-out layers.
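The patent gives no layer dimensions or code; as a minimal sketch of one such convolution + Batch-norm + ReLU + drop-out building block (all array shapes, the 3×3 kernel, and the drop-out rate are assumptions for illustration), in plain numpy:

```python
import numpy as np

def conv2d(x, w, b):
    # 'same' convolution with zero padding; x: (H, W, Cin), w: (k, k, Cin, Cout)
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def batch_norm(x, eps=1e-5):
    # normalize each channel over the spatial dimensions (batch of one image here)
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0)

def dropout(x, rate, rng, train=True):
    # inverted dropout: zero a fraction `rate` of activations, rescale the rest
    if not train:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1 - rate)

rng = np.random.default_rng(0)
x = rng.random((8, 8, 1))                     # a tiny single-channel fringe patch
w = rng.standard_normal((3, 3, 1, 4)) * 0.1   # 3x3 kernel, 4 output channels
b = np.zeros(4)
y = dropout(relu(batch_norm(conv2d(x, w, b))), rate=0.2, rng=rng)
print(y.shape)  # (8, 8, 4)
```

A real FPTNet would stack many such blocks and learn `w` and `b` by back-propagation; this sketch only shows the forward pass of the layer types the claim names.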

The invention has the beneficial effects that:

The invention can calculate an accurate absolute phase by projecting only a single frame or two frames of sinusoidal fringe images, thereby obtaining accurate three-dimensional information, accurately measuring the three-dimensional information of dynamic objects, effectively avoiding motion errors, and improving both the speed and the accuracy of three-dimensional measurement.

Drawings

Figure 1 is a schematic flow diagram of the present invention,

Figure 2 is a schematic diagram of the dynamic three-dimensional measurement procedure of the present invention,

Figure 3 is a schematic view of the measurement of a rotating fan as a dynamic object according to the present invention,

Figure 4 is a schematic view of the measurement of a falling doll as a dynamic object according to the present invention.

Detailed Description

The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.

As shown in fig. 1, the dynamic high-precision three-dimensional measurement method based on the fringe image transformation network FPTNet of the present invention includes the following steps:

step one: constructing a stripe image conversion network: deep learning is introduced to build the stripe image conversion network;

step two: acquiring fringe images and calibrating parameters: a camera and a projector are used for building a fringe projection profile system, calibration parameters of the projector and the camera in the fringe projection profile system are obtained through calibration, and an original fringe image of an object to be measured is acquired through the fringe projection profile system;

step three: stripe image conversion: the original stripe image is input into the stripe image conversion network, which converts it into all the required sine stripe images;

step four: wrapped phase and absolute phase calculation: phase calculation is performed on each group of sine stripe images to obtain the wrapped phase, as shown in formula (1):

\varphi(x, y) = \arctan\left[ \frac{\sum_{n=1}^{N} I_n(x, y)\,\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x, y)\,\cos(2\pi n/N)} \right]    (1)

where (x, y) are the pixel coordinates, N is the number of steps of a set of phase-shifted stripes, and I_n(x, y) is the n-th phase-shifted sine stripe image. The wrapped phase is unwrapped based on the Gray code pattern to obtain the absolute phase \Phi(x, y), as shown in formula (2):

\Phi(x, y) = \varphi(x, y) + 2\pi\, k(x, y)    (2)

where \Phi(x, y) represents the absolute phase calculated from the wrapped phase \varphi(x, y), k(x, y) indicates the corresponding stripe order, and INT[x] represents the rounding operation used to determine the stripe order;
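The phase calculation of step four can be sketched as follows: a noise-free numpy simulation of formulas (1) and (2), with assumed fringe parameters and with the stripe order recovered from the known ground truth (in the patent it comes from Gray-code decoding), not the patent's implementation:

```python
import numpy as np

N = 4                                    # number of phase-shift steps (assumed)
H, W = 64, 64
x = np.linspace(0, 4 * np.pi, W)         # assumed true phase ramp
phi_true = np.tile(x, (H, 1))

# N-step phase-shifted sinusoidal fringes I_n = A + B*cos(phi - 2*pi*n/N)
# (shift sign chosen so formula (1) recovers +phi)
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true - 2 * np.pi * n / N) for n in range(1, N + 1)]

# Formula (1): wrapped phase from the N-step phase-shift algorithm
num = sum(I_n * np.sin(2 * np.pi * n / N) for n, I_n in enumerate(I, start=1))
den = sum(I_n * np.cos(2 * np.pi * n / N) for n, I_n in enumerate(I, start=1))
phi_wrapped = np.arctan2(num, den)

# Formula (2): absolute phase from wrapped phase plus 2*pi times the stripe order
k = np.round((phi_true - phi_wrapped) / (2 * np.pi))   # stripe order (from ground truth here)
phi_abs = phi_wrapped + 2 * np.pi * k
print(np.allclose(phi_abs, phi_true))  # True in this noise-free simulation
```

Note that `np.arctan2` is used instead of a plain arctangent so the wrapped phase covers the full (−π, π] range rather than (−π/2, π/2).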

step five: reconstructing three-dimensional information: the accurate three-dimensional information of the object to be measured is reconstructed by combining the absolute phase with the calibration parameters of the projector and camera in the system.

The stripe image conversion network is constructed by connecting convolution layers, Batch-norm layers, ReLU layers and drop-out layers. The fringe image conversion network comprises two sub-networks, FPTNet-C and FPTNet-U, and FPTNet-U is divided into two types, FPTNet-UI and FPTNet-UII. The original fringe image is either a single-frame sine fringe image or two-frame sine fringe images: the single-frame sine fringe image is input into FPTNet-C and converted into phase-shifted sine fringe images of the same frequency; when the phase shift of the surface of the object to be measured does not exceed one fringe period, the single-frame sine fringe image is input into FPTNet-UI and converted into sine fringe images of different frequencies, and the two-frame sine fringe images are input into FPTNet-UII and converted into sine fringe images of different frequencies.

The fringe image conversion network comprises a training stage and an inference stage: the training stage trains the network to learn the conversion of fringe images by minimizing the difference between the output fringe images and the fringe images actually acquired by the fringe projection profile system, and the inference stage uses the trained network to convert single-frame or two-frame sine fringe images into sine fringe images of different frequencies. The stripe image conversion network implements parameter optimization using equation (3),

\mathrm{Loss}(\theta) = \frac{1}{mN} \sum_{n=1}^{N} \left\| I_n^{\mathrm{out}}(\theta) - I_n^{\mathrm{gt}} \right\|_2^2    (3)

where Loss is the loss function of the stripe image conversion network, \theta is the parameter set optimized during training of the stripe image conversion network, m represents the number of pixels of a picture, n indexes the input pictures, I_n^{out} is the n-th phase-shifted stripe picture output by the stripe image conversion network, I_n^{gt} is the n-th standard phase-shifted stripe picture, and N represents the number of phase shift steps.
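The optimization target of equation (3) amounts to a mean-squared error between the network's output fringes and the captured ground-truth fringes, averaged over pixels and phase-shift steps. A minimal numpy sketch (array shapes and the function name are assumed; no actual network is involved):

```python
import numpy as np

def fptnet_loss(outputs, targets):
    """Mean-squared error of equation (3) over N phase-shifted fringe pictures.

    outputs, targets: arrays of shape (N, H, W) -- the network's output fringes
    and the standard (captured) phase-shifted fringes; m = H*W pixels per picture.
    """
    N = outputs.shape[0]
    m = outputs.shape[1] * outputs.shape[2]
    return np.sum((outputs - targets) ** 2) / (m * N)

rng = np.random.default_rng(1)
gt = rng.random((4, 32, 32))                       # 4 standard phase-shifted fringes
pred = gt + 0.01 * rng.standard_normal(gt.shape)   # simulated network output
loss = fptnet_loss(pred, gt)
print(loss)  # small positive number; exactly 0.0 when pred == gt
```

In training, this scalar would be minimized over the parameter set \theta by gradient descent; the sketch only evaluates the objective.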
