Image processing apparatus and method
Reading note: the present technology, "Image processing apparatus and method" (图像处理装置和方法), was created by 勝木祐伍 and 小林直树 on 2018-05-25. Its main content is as follows. The present disclosure relates to an image processing apparatus and method capable of suppressing a decrease in accuracy of corresponding point detection. A homography transform is applied to a captured pattern image obtained as a result of an imaging unit capturing an image of a predetermined structured light pattern projected by a projection unit, and corresponding points between the projection image projected by the projection unit and the captured image captured by the imaging unit are detected by using the captured pattern image to which the homography transform is applied. The present disclosure can be applied to, for example, an image processing apparatus, an image projection apparatus, a control apparatus, an information processing apparatus, a projection imaging system, an image processing method, a program, and the like.
1. An image processing apparatus, comprising:
a corresponding point detection unit that applies a homography transform to a captured pattern image obtained as a result of an imaging unit capturing an image of a predetermined structured light pattern projected by a projection unit, and detects corresponding points between a projection image projected by the projection unit and a captured image captured by the imaging unit using the captured pattern image to which the homography transform is applied.
2. The image processing apparatus according to claim 1, wherein
the corresponding point detection unit applies the homography transform based on design values of the projection unit and the imaging unit so as to convert the captured pattern image into a coordinate system as seen from the front, and detects the corresponding points by using the captured pattern image converted into the coordinate system as seen from the front.
3. The image processing apparatus according to claim 2, wherein
the corresponding point detection unit converts coordinates of four corners of the projection image projected by the projection unit into a coordinate system of the imaging unit based on the design values, and applies the homography transform to the captured pattern image by using the converted coordinates of the four corners.
4. The image processing apparatus according to claim 2, wherein
The corresponding point detecting unit applies an inverse homography transform that is an inverse transform of the homography transform to the detected corresponding point.
5. The image processing apparatus according to claim 1, wherein
the corresponding point detection unit
applies the homography transform based on design values of the projection unit and the imaging unit so as to convert the captured pattern image into a coordinate system as seen from the front, and detects temporary corresponding points by using the captured pattern image converted into the coordinate system as seen from the front, and
also applies the homography transform based on the detected temporary corresponding points so as to convert the captured pattern image converted into the coordinate system as seen from the front into a coordinate system of the projection image projected by the projection unit, and detects the corresponding points by using the captured pattern image converted into the coordinate system of the projection image.
6. The image processing apparatus according to claim 5, wherein
The corresponding point detecting unit applies an inverse homography transform that is an inverse transform of the homography transform to the detected corresponding point.
7. The image processing apparatus according to claim 1, wherein
the captured pattern image is an image obtained by using a captured image of the structured light pattern projected so as to be superimposed onto another image.
8. The image processing apparatus according to claim 7, wherein
the captured pattern image is a difference image between captured images of two projection images each including the structured light pattern, the two projection images having the same shape as each other and having mutually opposite luminance change directions.
9. The image processing apparatus according to claim 8, wherein
the captured pattern images are difference images between composite images including the structured light pattern, the composite images having the mutually opposite luminance change directions, each of the composite images being obtained by adding together respective captured images of a plurality of projection images including the structured light pattern, the plurality of projection images having mutually the same luminance change direction.
10. The image processing apparatus according to claim 1, wherein
the structured light pattern includes two elliptical patterns having mutually opposite luminance change directions.
11. The image processing apparatus according to claim 10, wherein
the structured light pattern includes a plurality of the elliptical patterns having mutually different longitudinal directions.
12. The image processing device of claim 1, further comprising:
the projection unit.
13. The image processing apparatus according to claim 12, wherein
The projection unit is positioned proximate to the projection plane.
14. The image processing apparatus according to claim 12, wherein
The projection unit projects the same structured light pattern multiple times.
15. The image processing apparatus according to claim 12, wherein
a plurality of the projection units are provided, and
each projection unit sequentially projects the structured light pattern.
16. The image processing device of claim 1, further comprising:
the imaging unit.
17. The image processing apparatus according to claim 16, wherein
The imaging unit is positioned proximate to the projection plane.
18. The image processing apparatus according to claim 16, wherein
The imaging unit captures multiple projection images of the same structured light pattern.
19. The image processing apparatus according to claim 16, wherein
a plurality of the imaging units are provided, and
each imaging unit captures an image of the projected image of the structured light pattern.
20. An image processing method, comprising:
applying a homography transform to a captured pattern image obtained as a result of an imaging unit capturing an image of a predetermined structured light pattern projected by a projection unit, and detecting a corresponding point between a projected image projected by the projection unit and the captured image captured by the imaging unit using the captured pattern image to which the homography transform is applied.
Technical Field
The present disclosure relates to image processing apparatuses and methods, and more particularly to image processing apparatuses and methods capable of suppressing a decrease in accuracy of corresponding point detection.
Background
In the related art, in order to reduce distortion of a projection image projected by a projector and align each of the projection images from a plurality of projectors, there are methods of photographing the projection image with a camera and using the photographed image to perform geometric correction on the projection image according to the position and posture of the projector(s), the shape of a projection plane, and the like.
For example, an Invisible Structured Light (ISL) method of embedding a pattern image into a content image for projection has been conceived as a technique, also referred to as online sensing, of calculating corresponding points while the content image is being projected (for example, see patent document 1). With the ISL method, invisibility of the pattern is achieved by embedding two pattern images having the same pattern and mutually opposite luminance change directions into successive frames of the content image and projecting them.
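The frame synthesis described above can be sketched as follows. This is a minimal illustration assuming 8-bit grayscale images; all function and variable names are our own, not taken from the patent:

```python
import numpy as np

# Hypothetical sketch of ISL embedding: the same pattern is added to one
# frame and subtracted from the next, so the two frames average back to the
# content image (temporal integration makes the pattern invisible).
def embed_isl(content: np.ndarray, pattern: np.ndarray):
    """Return (positive_frame, negative_frame) for two consecutive frames."""
    pos = np.clip(content.astype(np.int32) + pattern, 0, 255).astype(np.uint8)
    neg = np.clip(content.astype(np.int32) - pattern, 0, 255).astype(np.uint8)
    return pos, neg

content = np.full((4, 4), 128, dtype=np.uint8)   # flat gray content image
pattern = np.zeros((4, 4), dtype=np.int32)
pattern[1:3, 1:3] = 10                           # small structured-light blob

pos, neg = embed_isl(content, pattern)
# Averaging the two frames reproduces the content image exactly here, which
# is why a viewer integrating over both frames does not perceive the pattern.
avg = ((pos.astype(np.int32) + neg.astype(np.int32)) // 2).astype(np.uint8)
assert np.array_equal(avg, content)
```

Clipping at the 0/255 boundaries is one reason real systems keep the pattern amplitude small relative to the content.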
Meanwhile, in recent years, an ultra-short focus projector has been developed which, as compared with a general projector, can project a large image even when installed at a position very close to a projection plane. In the case where distortion correction is performed by the ISL method using such an ultra-short focus projector, it is conceivable to incorporate a camera into the ultra-short focus projector to make the work easier.
CITATION LIST
Patent document
Patent document 1: Japanese Patent Application Laid-Open No. 2013-192098
Disclosure of Invention
Problems to be solved by the invention
However, in this case, the camera will capture an image of the projected image at a steep viewing angle, for example looking upward from below in the vicinity of the projection plane, so pattern distortion in the captured image will increase, and there is a concern that the accuracy of detecting the corresponding points will be reduced.
The present disclosure is designed in view of such a situation, and can suppress a decrease in accuracy of the corresponding point detection.
Solutions to problems
An image processing apparatus according to an aspect of the present technology includes: a corresponding point detection unit that applies a homography transform to a captured pattern image obtained as a result of the imaging unit capturing an image of the predetermined structured light pattern projected by the projection unit, and detects corresponding points between the projection image projected by the projection unit and the captured image captured by the imaging unit using the captured pattern image to which the homography transform is applied.
An image processing method according to an aspect of the present technology includes: applying a homography transform to a captured pattern image obtained as a result of an imaging unit capturing an image of a predetermined structured light pattern projected by a projection unit, and detecting corresponding points between a projection image projected by the projection unit and the captured image captured by the imaging unit by using the captured pattern image to which the homography transform is applied.
In the image processing apparatus and method according to aspects of the present technology, homography transformation is applied to a captured pattern image obtained as a result of an imaging unit capturing an image of a predetermined structured light pattern projected by a projection unit, and a corresponding point between a projection image projected by the projection unit and the captured image captured by the imaging unit is detected using the captured pattern image to which the homography transformation is applied.
Effects of the invention
According to the present disclosure, images can be processed. In particular, it is possible to suppress a decrease in the accuracy of corresponding point detection.
Drawings
Fig. 1 is a schematic diagram illustrating an example of a manner in which geometric correction is performed.
Fig. 2 is a schematic diagram illustrating an example of a manner in which geometric correction is performed.
Fig. 3 is a schematic diagram illustrating an example of a manner in which geometric correction is performed.
Fig. 4 is a schematic diagram illustrating an example of a manner of detecting the corresponding point.
Fig. 5 is a schematic diagram illustrating an example of a manner of detecting the corresponding point.
Fig. 6 is a schematic diagram illustrating an example of a manner of detecting the corresponding point.
Fig. 7 is a schematic diagram illustrating an example of the ISL method.
Fig. 8 is a schematic diagram illustrating an example of a structured light pattern.
Fig. 9 is a schematic diagram illustrating an example of a positive image and a negative image of a structured light pattern.
Fig. 10 is a schematic diagram illustrating an example of a manner in which an ultra-short focus projector projects an image.
Fig. 11 is a schematic diagram illustrating an example of capturing a pattern image.
Fig. 12 is a diagram illustrating an example of a manner in which a homography transformation is performed.
Fig. 13 is a diagram illustrating an example of design values.
Fig. 14 is a block diagram illustrating an exemplary main configuration of a projection imaging system.
Fig. 15 is a block diagram illustrating an exemplary main configuration of the control apparatus.
Fig. 16 is a functional block diagram illustrating exemplary functions implemented by the control apparatus.
Fig. 17 is a functional block diagram illustrating exemplary functions implemented by the projection imaging processing unit.
Fig. 18 is a functional block diagram illustrating exemplary functions implemented by the corresponding point detection processing unit.
Fig. 19 is a schematic diagram illustrating an example of a housing of the projection imaging apparatus.
Fig. 20 is a block diagram illustrating an exemplary main configuration of a projection imaging apparatus.
Fig. 21 is a block diagram illustrating an exemplary main configuration of the projection unit.
Fig. 22 is a schematic diagram illustrating an example of laser beam scanning.
Fig. 23 is a flowchart illustrating an example of the flow of the geometry correction processing.
Fig. 24 is a flowchart illustrating an example of the flow of the projection imaging process.
Fig. 25 is a flowchart illustrating an example of the flow of the corresponding point detection processing.
Fig. 26 is a schematic diagram illustrating an example of the pattern center of gravity detection result.
Fig. 27 is a diagram illustrating an example of a homography transformation error.
Fig. 28 is a diagram illustrating an example of a homography transformation error.
Fig. 29 is a schematic diagram illustrating an example of a housing of the projection imaging apparatus.
Fig. 30 is a block diagram illustrating another exemplary configuration of a projection imaging system.
Fig. 31 is a block diagram illustrating exemplary main configurations of a projection imaging system and a projection imaging apparatus.
Detailed Description
Hereinafter, embodiments for implementing the present disclosure (hereinafter, embodiments) will be described. Note that description will be made in the following order.
1. ISL method and corresponding point detection
2. First embodiment (projection imaging system)
3. Second embodiment (projection imaging system/projection imaging apparatus)
4. Others
<1. ISL method and corresponding point detection>
< corresponding point detection and geometric correction >
Depending on the posture (such as position and orientation) of a projection plane (such as a screen or a wall) with respect to the projector, the shape of the projection plane, and the like, the projected image (also referred to as a projection image) may be distorted and difficult to see, as in A of fig. 1, for example. In such a case, by performing geometric correction (such as distortion correction) on the image projected by the projector, the projected image can be made less distorted and easier to see, as in the example of B of fig. 1.
Also, as in the example of fig. 2, there are systems that project images with a plurality of projectors so as to form a single projection image. For example, as in A of fig. 2, there are methods that increase the contrast ratio and achieve a high dynamic range by projecting images from a plurality of projectors onto the same position. As another example, as in B of fig. 2, there are methods that achieve a projection image larger (or higher in resolution) than the projection image projected by a single projector by arranging the projection images projected respectively from a plurality of projectors. In the case of these methods, if the positional relationship between the projection images projected from each projector is inaccurate, the projection images may be misaligned and superimposed on each other, or an improper gap may be generated, and there is a concern that the image quality of the entire image will be reduced.
By performing geometric correction on the images in this way, even in the case where images are projected onto a curved projection plane from a plurality of projectors as in the example in fig. 3, the images can be projected so as to look like a single image. Note that in the case where a plurality of projection images are arranged to form a larger projection image, as in the examples in B of fig. 2 and fig. 3, alignment can be performed more easily by partially superimposing (overlapping) adjacent projection images on each other, as in the example in fig. 3.
Accordingly, methods of capturing an image of a projected image projected by a projector using a camera and performing geometric correction using the captured image have been conceived.
For example, as in the example in fig. 4, a standard light pattern 12 having a predetermined design is projected from the projector 11, and the camera 14 captures an image of the projected pattern to obtain the captured image 15.
In the case where the geometric correction is performed using the camera in this manner, it is necessary to calculate a corresponding point (a pixel in the projection image and the captured image corresponding to the same position as each other in the projection plane) between the projection image (or the image to be projected) and the captured image. In other words, it is necessary to calculate the correspondence between the pixels of the camera 14 (captured image 15) and the pixels of the projector 11 (standard light pattern 12).
Moreover, in the case where a plurality of projectors are used like the example in fig. 2 and 3, it is also necessary to calculate the positional relationship of each projected image with each other.
For example, as in the example in fig. 5, it is assumed that an image is to be projected by cooperation between the projection imaging apparatus 20-1 including the projection unit 21-1 (projector) and the imaging unit 22-1 (camera) and the projection imaging apparatus 20-2 including the projection unit 21-2 (projector) and the imaging unit 22-2 (camera). In this document, the projection imaging device 20-1 and the projection imaging device 20-2 will be referred to as the projection imaging device(s) 20 without distinguishing the projection imaging device 20-1 and the projection imaging device 20-2 in the description. Also, in the case where it is not necessary to distinguish the projection unit 21-1 from the projection unit 21-2 in the description, the projection unit 21-1 and the projection unit 21-2 will be referred to as the projection unit(s) 21. Further, in the case where it is not necessary to distinguish between the imaging unit 22-1 and the imaging unit 22-2 in the description, the imaging unit 22-1 and the imaging unit 22-2 will be referred to as the imaging unit(s) 22.
As illustrated in fig. 5, the projection area (range of the projection image) of the projection unit 21-1 of the projection imaging device 20-1 on the projection plane partially overlaps the projection area of the projection unit 21-2 of the projection imaging device 20-2 (the range illustrated by the double arrow 24).
Note that the imaging area (the range included in the captured image) of each imaging unit 22 on the projection plane includes the projection areas of both of the projection units 21.
In the case of such a system, as described above, in order to align the projection images with each other, it is necessary to calculate not only the corresponding points between the projection unit 21 and the imaging unit 22 in each projection imaging apparatus 20 but also the corresponding points between the projection unit 21 and the imaging unit 22 of different projection imaging apparatuses 20. Therefore, as in the example in fig. 6, light is radiated from a specific pixel of the projection unit 21-1 (arrow 27), reflected at a point X on the projection plane, and received by a specific pixel of the imaging unit 22-2, whereby the correspondence between these pixels can be calculated.
In this way, by calculating the corresponding points between all the projection units 21 and the imaging units 22 for which the corresponding points can be calculated, the alignment of the overlapping regions (the range illustrated by the double arrow 24) can be performed by geometric correction.
< online sensing >
Although it is conceivable to perform such corresponding point detection for geometric correction before starting projection of the visual image, there is a concern that the corresponding points will shift after the initial installation due to external disturbances and the like (such as temperature and vibration during projection of the visual image). If the corresponding points shift, the geometric correction becomes inappropriate, and there is a concern that the projected image will be distorted or misaligned.
In this case, it is necessary to detect the corresponding point again, but it is undesirable for the user who is viewing the visual image to interrupt the projection of the visual image for this reason (there is a concern that the user satisfaction is lowered). Therefore, a method of detecting corresponding points while continuing to project a visual image (online sensing) has been conceived.
For example, a method using invisible light (such as infrared light), a method using image features (such as SIFT), the Invisible Structured Light (ISL) method, and the like have been conceived as online sensing techniques. In the case of a method using invisible light such as infrared light, a projector that projects the invisible light (for example, an infrared light projector) is additionally necessary, so there is a concern that the cost will increase. Also, in the case of using image features (such as SIFT), since the detection accuracy and detection density of the corresponding points depend on the image content to be projected, it is difficult to perform corresponding point detection with stable accuracy.
In contrast, since the ISL method uses visible light, an increase in the structural elements of the system (i.e., an increase in cost) can be suppressed. Also, corresponding point detection can be performed with stable accuracy that does not depend on the image to be projected.
< ISL method >
The ISL method is a technique in which a predetermined pattern image (structured light pattern) is embedded into the projection image as a positive image and a negative image and projected so that the pattern image is not perceived by humans.
As illustrated in fig. 7, by adding a predetermined structured light pattern to a certain frame of the input image, the projector generates a frame image in which a positive image of the structured light pattern is synthesized with the input image (content image), and by subtracting the structured light pattern from the next frame of the input image, the projector generates a frame image in which a negative image of the structured light pattern is synthesized with the input image.
Then, the camera captures images of the projected images of these frames, and extracts only the structured light pattern included in the captured images by calculating the difference between the captured images of the two frames. Corresponding point detection is performed by using the extracted structured light pattern.
In this way, with the ISL method, since the structured light pattern can be easily extracted by simply calculating the difference between the captured images, ideally, the corresponding point detection is performed with stable accuracy without depending on the image to be projected.
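The camera-side extraction step can be sketched as follows. This is a simplified model that ignores camera noise and geometric distortion; all variable names and values are our own assumptions:

```python
import numpy as np

# Sketch of ISL pattern extraction: subtracting the captured negative frame
# from the captured positive frame cancels the content image and leaves
# (twice) the structured light pattern.
rng = np.random.default_rng(0)
content = rng.integers(40, 200, size=(6, 6)).astype(np.int32)  # arbitrary content
pattern = np.zeros((6, 6), dtype=np.int32)
pattern[2:4, 2:4] = 12                      # structured-light element

captured_pos = content + pattern            # ideal capture of the positive frame
captured_neg = content - pattern            # ideal capture of the negative frame

diff = captured_pos - captured_neg          # the content cancels out
assert np.array_equal(diff, 2 * pattern)
```

In a real system the two captures also differ by noise, which is why the patent later discusses adding multiple captures together (claim 9) before taking the difference.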
< structure of structured light pattern >
A specific example of a structured light pattern is illustrated in fig. 8. The pattern image 100 illustrated in fig. 8 is a structured light pattern in which elliptical luminance change patterns are arranged.
In fig. 8, a white ellipse represents a pattern whose luminance change direction is positive, a black ellipse represents a pattern whose luminance change direction is negative, and the longitudinal directions of the ellipses differ from one another.
In the case of the ISL method, the pattern image 100 is superimposed on the content image as a positive image 100-1 and a negative image 100-2 having mutually opposite luminance change directions.
By projecting such a positive image 100-1 and a negative image 100-2 to be superimposed on two consecutive frames, a user viewing the projected images may be made less able to perceive the pattern image 100 (which may promote invisibility of the pattern image 100) due to the integration effect.
< ultra-short focus projector >
Meanwhile, there is an ultra-short focus projector which, as compared with a general projector, can project a large image even in a case where the ultra-short focus projector is installed at a position very close to a projection plane. For example, as illustrated in fig. 10, the ultra-short focus projector 111 is installed near the projection plane and projects an image onto the projection plane from below at a very short distance.
Also, if it is assumed that the projector and the camera required for the above-described ISL method are formed as separate devices and can be installed at any position, respectively, the relative positions of these devices need to be calculated to correctly perform triangulation in corresponding point detection (distortion correction). By providing (integrating) the projector and the camera in a single housing, the relative positions of these devices can be regarded as known information (the work of calculating the relative positions becomes unnecessary), and therefore, the corresponding point detection (distortion correction) can be made easier (simplified).
However, if a camera is incorporated into the ultra-short focus projector 111, the camera will capture an image of the projected image at a steep angle, for example looking upward from below near the projection plane, so pattern distortion in the captured image will increase, and there is a concern that the accuracy of detecting the corresponding points will decrease. The captured pattern image 121 in A of fig. 11 is a captured image of the pattern image obtained by capturing the projected image from the front.
< application of homography transform to pattern image >
Accordingly, a homography transform is applied to a captured pattern image obtained as a result of the imaging unit capturing an image of the predetermined structured light pattern projected by the projection unit, and by using the captured pattern image to which the homography transform is applied, corresponding points between the projection image projected by the projection unit and the captured image captured by the imaging unit are detected.
For example, the plane on which the pattern is arranged in the captured pattern image 122 (as illustrated in B of fig. 11) is projected onto a plane as viewed from the front of the projection plane by using a homography transform (projective transform). By converting the pattern into the state of the projection plane as viewed from the front in this way, it is possible to suppress pattern distortion, size variation, and the like (in other words, to make the pattern closer to the shape of the pattern in the image to be projected). Therefore, by detecting the corresponding points using the captured pattern image after the homography transform, it is possible to suppress a decrease in the accuracy of corresponding point detection.
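Applying a homography to detected pattern coordinates can be sketched as follows; the matrix used here is an arbitrary example, not a real design value from the patent:

```python
import numpy as np

# Minimal sketch of applying a 3x3 homography H to 2-D points:
# lift to homogeneous coordinates, multiply, then divide by the last row.
def apply_homography(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map Nx2 points through a 3x3 homography (homogeneous coordinates)."""
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T          # to homogeneous, transform
    return ph[:, :2] / ph[:, 2:3]              # perspective divide

# Example: a pure scaling homography doubles every coordinate.
H = np.diag([2.0, 2.0, 1.0])
pts = np.array([[1.0, 3.0], [5.0, 7.0]])
out = apply_homography(H, pts)
assert np.allclose(out, [[2.0, 6.0], [10.0, 14.0]])
```

The same helper works for any of the transforms discussed below (system homography, algorithmic homography, and their inverses), since all are 3x3 projective transforms.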
< system homography transform >
As the homography transform, for example, a homography transform based on known design information (design values) of the projection unit (for example, a projector) and the imaging unit (for example, a camera) may be applied. Such a homography transform based on design values is also referred to as a system homography transform.
For example, as illustrated in fig. 12, (each coordinate of) the plane on which the pattern is arranged in the captured pattern image 122 is projected, by using the system homography matrix Hs, onto a plane corresponding to the projection plane as viewed from the front, yielding the captured pattern image 123.
The system homography matrix Hs may be calculated in any manner, but may be calculated, for example, by using the four corner points of the projection image. For example, the world coordinates (P1, P2, P3, and P4) of the four corner points of the projection image in the projection plane are calculated. As illustrated in A of fig. 13, assuming that the origin of the world coordinates is set at the center of the projection image, the size of the projection image in the horizontal direction is a (mm), the size in the vertical direction is b (mm), and one unit of the x-coordinate and the y-coordinate corresponds to 1 mm, the world coordinates of the four corners of the projection image are P1(a/2, b/2), P2(a/2, -b/2), P3(-a/2, -b/2), and P4(-a/2, b/2).
Next, the world coordinates (P1 to P4) of the four corners are transformed into the camera coordinate system by using the approximately known internal parameters of the camera (imaging unit). In other words, which positions (coordinates) in the captured image the four corner points of the projection image projected onto the projection plane occupy (i.e., the correspondence between the projection plane and the captured image) is specified by using information on the position of the imaging unit, the image capturing direction, the angle of view, and the like. As illustrated in B of fig. 13, if this information is known, the correspondence between the projection plane (world coordinates) and the captured image (camera coordinate system), i.e., the system homography matrix Hs, can be easily calculated.
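Calculating a homography from four point correspondences, as in the four-corner procedure described above, can be sketched with a standard direct linear transform. The world corner sizes and camera pixel coordinates below are invented for illustration:

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Solve the 3x3 homography mapping 4 src points to 4 dst points (DLT).
    src/dst: sequences of 4 (x, y) pairs. h33 is fixed to 1, giving an
    8x8 linear system in the remaining entries."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Four corners of an a x b projection image in world coordinates (as in the
# text) mapped to hypothetical camera pixel coordinates (values are ours):
a_mm, b_mm = 100.0, 60.0
world = [( a_mm/2,  b_mm/2), ( a_mm/2, -b_mm/2),
         (-a_mm/2, -b_mm/2), (-a_mm/2,  b_mm/2)]
camera = [(620, 80), (600, 400), (40, 420), (20, 60)]  # assumed pixel corners
Hs = homography_from_4_points(world, camera)

# The recovered Hs maps each world corner back to its pixel corner.
for (x, y), (u, v) in zip(world, camera):
    w = Hs @ np.array([x, y, 1.0])
    assert abs(w[0] / w[2] - u) < 1e-6 and abs(w[1] / w[2] - v) < 1e-6
```

Production code would typically delegate this to a library routine such as OpenCV's `cv2.findHomography`, which also handles more than four points and outlier rejection.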
In other words, by applying the system homography transform as the homography transform, it is possible to more easily suppress the reduction in accuracy of the corresponding point detection.
Note that, in order to restore the corresponding point detected in the coordinate system back to the original coordinate system (the coordinate system of the captured pattern image 122) after the homography transform, it is sufficient to perform an inverse transform of the homography transform (also referred to as an inverse homography transform) on the corresponding point. Therefore, for example, in order to restore the corresponding points detected in the coordinate system back to the original coordinate system after the system homography transform, it is sufficient to perform the inverse transform (Hs-1P) of the system homography transform described above (also referred to as an inverse system homography transform) on the obtained corresponding points P, as illustrated in fig. 12. In other words, in this case, the inverse matrix Hs-1 of the system homography matrix Hs is regarded as a homography matrix.
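Restoring a detected point with the inverse homography transform can be sketched as follows (the matrix Hs below is an arbitrary example, not a real design value):

```python
import numpy as np

# Sketch: a corresponding point P detected in the transformed coordinate
# system is restored to the original coordinates with the inverse matrix
# Hs^-1, as described in the text.
Hs = np.array([[1.2,   0.1,  5.0],
               [0.0,   0.9, -3.0],
               [0.001, 0.0,  1.0]])
Hs_inv = np.linalg.inv(Hs)

def transform(H, p):
    """Apply a 3x3 homography to a single (x, y) point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

p_original = (40.0, 25.0)
P = transform(Hs, p_original)        # point in the front-view coordinates
p_restored = transform(Hs_inv, P)    # inverse (system) homography transform
assert np.allclose(p_restored, p_original)
```

Because a homography composed with its inverse is the identity (up to floating-point error), the restored point matches the original coordinates.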
However, the system homography transform is derived on the basis of several constraints, such as that the projection unit (projector) and the projection plane are parallel and that the internal parameters of the imaging unit (camera) are known to some extent, and therefore errors may be generated during actual operation.
< algorithmic homography transform >
Therefore, as illustrated in fig. 12, not only the system homography transform but also a homography transform into the coordinate system of the image projected by the projection unit, based on information on the temporary corresponding points detected by using the captured pattern image converted into the front view, may be applied as the homography transform described above. Such a homography transform is also referred to as an algorithmic homography transform.
For example, as illustrated in fig. 12, by projecting (each coordinate of) the plane on which the pattern is arranged in the captured pattern image 123 onto the coordinate system of the projected image by using the algorithmic homography matrix Ha, the pattern can be brought even closer to the shape of the pattern in the image to be projected, and the corresponding points are detected by using the captured pattern image thus converted.
Note that, in order to restore the corresponding points detected in the coordinate system after the algorithmic homography transform back to the original coordinate system (the coordinate system of the captured pattern image 123), it is sufficient to perform the inverse transform (Ha-1P) of the algorithmic homography transform described above (also referred to as an inverse algorithmic homography transform) on the obtained corresponding points P, as illustrated in fig. 12. In other words, in this case, the inverse matrix Ha-1 of the algorithmic homography matrix Ha is regarded as a homography matrix. Note that by additionally applying the inverse system homography transform, the corresponding points can be restored back to the coordinate system of the captured pattern image 122.
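The overall two-stage flow, the system homography followed by the algorithmic homography, with the corresponding inverse transforms for restoring the detected points, can be sketched as follows. All matrices and point values here are invented examples:

```python
import numpy as np

# Hedged end-to-end sketch of the two-stage scheme described above:
# (1) the system homography Hs (from design values) gives a front view,
# (2) the algorithmic homography Ha (from temporary corresponding points)
#     converts toward the projected-image coordinates,
# and detected points are restored with Ha^-1 followed by Hs^-1.
def transform(H, pts):
    """Apply a 3x3 homography to an Nx2 array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

Hs = np.array([[1.1,  0.05,  2.0], [0.02, 0.95, -1.0], [1e-4, 0.0, 1.0]])
Ha = np.array([[0.98, 0.0,   0.5], [0.01, 1.02,  0.3], [0.0,  0.0, 1.0]])

captured_pts = np.array([[10.0, 12.0], [30.0, 7.0], [22.0, 28.0]])
front_pts = transform(Hs, captured_pts)      # system homography transform
projector_pts = transform(Ha, front_pts)     # algorithmic homography transform

# Restoring back to the captured image's coordinate system: inverse
# algorithmic homography, then inverse system homography.
restored = transform(np.linalg.inv(Hs),
                     transform(np.linalg.inv(Ha), projector_pts))
assert np.allclose(restored, captured_pts)
```

In the actual system, Ha would be estimated from the temporary corresponding points (for example with a four-point or least-squares fit) rather than being given in advance.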
<2. First embodiment >
< projection imaging system >
In fig. 14, the projection imaging system 300 is a system to which the present technology described above in <1. ISL method and corresponding point detection> is applied, and is capable of projecting an image by the ISL method, capturing an image of the projected image, and performing corresponding point detection.
As illustrated in fig. 14, the projection imaging system 300 includes a control apparatus 301 and projection imaging devices 302-1 to 302-N (N is any natural number), and the projection imaging devices 302-1 to 302-N are connected to the control apparatus 301 via cables 303-1 to 303-N, respectively.
Hereinafter, in the case where it is not necessary to distinguish the projection imaging devices 302-1 to 302-N in the description, the projection imaging devices 302-1 to 302-N will be referred to as projection imaging device(s) 302. Also, in the case where it is not necessary to distinguish the cables 303-1 to 303-N in the description, the cables 303-1 to 303-N will be referred to as a cable(s) 303.
The control apparatus 301 controls each projection imaging device 302 via the cable 303.
The projection imaging devices 302-1 to 302-N include projection units 311-1 to 311-N that project images and imaging units 312-1 to 312-N that capture images of objects, respectively. Hereinafter, in the case where the projection units 311-1 to 311-N are not necessarily distinguished in the description, the projection units 311-1 to 311-N will be referred to as a projection unit(s) 311. Also, in the case where it is not necessary to distinguish the imaging units 312-1 to 312-N in the description, the imaging units 312-1 to 312-N will be referred to as the imaging unit(s) 312.
The
The
In other words, the
There may be any number of
Note that the projection direction and magnification in which the
In addition, the image capturing direction and angle of view in which the
Further, such control of the
The cable 303 is an electrical communication cable of any communication standard by which a communication channel between the
In such a projection imaging system 300, in order to perform geometric correction on an image, the
< control device >
Fig. 15 is a block diagram illustrating an exemplary main configuration of a
As illustrated in fig. 15, the
The CPU 321, ROM 322, and RAM 323 are connected to each other via a bus 324. In addition, an input/output interface 330 is also connected to the bus 324. The input unit 331, the output unit 332, the storage unit 333, the communication unit 334, and the drive 335 are connected to the input/output interface 330.
The input unit 331 includes an input device that receives external information such as user input. For example, the input unit 331 may include a keyboard, a mouse, operation buttons, a touch panel, a camera, a microphone, an input terminal, and the like. In addition, various sensors (such as an acceleration sensor, an optical sensor, and a temperature sensor) and an input device (such as a barcode reader) may also be included in the input unit 331. The output unit 332 includes an output device that outputs information (such as images and sounds). For example, the output unit 332 may include a display, a speaker, an output terminal, and the like.
The storage unit 333 includes a storage medium storing information such as programs and data. For example, the storage unit 333 may include a hard disk, a RAM disk, a nonvolatile memory, and the like. The communication unit 334 includes a communication device that performs communication by exchanging information (such as programs and data) with an external device via a predetermined communication medium (any network such as, for example, the internet). For example, the communication unit 334 may include a network interface. For example, the communication unit 334 communicates (exchanges programs and data) with a device external to the
The drive 335 reads out information (such as programs and data) stored in a removable medium 341 (such as, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory) loaded into the drive 335 itself. The drive 335 supplies the information read out from the removable medium 341 to the CPU 321, the RAM 323, and the like. Also, in the case where the writable removable medium 341 is loaded into the drive 335 itself, the drive 335 can cause information (such as programs and data) supplied from the CPU 321, the RAM 323, and the like to be stored in the removable medium 341.
For example, the CPU 321 loads a program stored in the storage unit 333 into the RAM 323 through the input/output interface 330 and the bus 324 and executes the program to perform various processes. The RAM 323 also appropriately stores data necessary for the CPU 321 to execute various processes and the like.
By executing the program or the like in this manner, the CPU 321 can execute processes related to detecting the corresponding points, such as, for example, processes like those described in <1.ISL method and corresponding point detection >.
< control device function Block >
Fig. 16 is a functional block diagram illustrating an example of functions realized by the
The projection
More specifically, for example, the projection
The corresponding point
More specifically, for example, the corresponding point
The geometric correction processing unit 353 performs processing related to geometric correction of an image to be projected. For example, the geometric correction processing unit 353 performs processing such as pose estimation for a projection unit or the like, reconfiguration of a screen (projection plane), and geometric correction for an image to be projected, based on the corresponding point detected by the corresponding point
Note that the blocks can exchange information (such as, for example, commands and data) with each other as needed.
< projection imaging processing Unit >
An example of functions included in the projection
The
For example, the
The
As described with reference to fig. 10 to 12 and the like, since the
Note that the blocks can exchange information (such as, for example, commands and data) with each other as needed.
< corresponding Point detection processing means >
An example of functions included in the corresponding point
The
For example, the
The pattern difference
Due to this difference, in the pattern difference image, the components of the content image included in the captured pattern image are cancelled and suppressed, and conversely, the components of the
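The cancellation effect described above can be illustrated with a minimal numerical sketch; the content image, the pattern amplitude, and the image sizes below are hypothetical values chosen for illustration, not taken from the disclosure:

```python
import numpy as np

# Hypothetical content frame and low-amplitude structured light pattern
rng = np.random.default_rng(0)
content = rng.uniform(80.0, 170.0, size=(120, 160))  # arbitrary content image
pattern = np.zeros((120, 160))
pattern[40:60, 60:100] = 10.0                        # pattern region

# Positive frame adds the pattern; negative frame subtracts it
captured_pos = content + pattern
captured_neg = content - pattern

# In the difference, the content component cancels and the pattern is doubled
pattern_diff = captured_pos - captured_neg
```

Because the content component appears with the same sign in both frames, it vanishes in the difference, while the pattern component (opposite signs) is reinforced.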
The system homography
The corresponding
The algorithm
The corresponding
The inverse
In other words, these processing units perform processes like those described with reference to fig. 12, fig. 13, and the like, for example. Note that the blocks can exchange information (such as, for example, commands and data) with each other as needed.
< projection imaging apparatus >
Fig. 19 is a perspective view illustrating an appearance state of the
With this arrangement, the relative positions of the
In other words, the shot pattern image obtained by the
Fig. 20 is a block diagram illustrating an exemplary main configuration of the
The control unit 401 includes, for example, a CPU, a ROM, a RAM, and the like, and controls each processing unit within the apparatus and executes various processing required for the control, such as, for example, image processing. For example, the control unit 401 executes these processes based on the control of the
The
The
The input unit 411 includes an input device that receives external information (such as user input). The input unit 411 includes, for example, operation buttons, a touch panel, a camera, a microphone, an input terminal, and the like. In addition, various sensors (such as an optical sensor and a temperature sensor) may also be included in the input unit 411. The output unit 412 includes an output device that outputs information (such as images and sounds). For example, the output unit 412 includes a display, a speaker, an output terminal, and the like.
The storage unit 413 includes, for example, a hard disk, a RAM disk, a nonvolatile memory, and the like. For example, the communication unit 414 includes a network interface. For example, the communication unit 414 is connected to the communication cable 303 and is capable of communicating with the
< projection Unit >
Fig. 21 is a block diagram illustrating an exemplary main configuration of the
The
The
The mirror 434-1 reflects the laser beam output from the laser output unit 433-1 and guides the laser beam to the
Note that the example of fig. 21 is described as being provided with three laser output units 433 that output laser beams of three colors, but there may be any number of laser beams (or any number of colors). For example, there may be four or more laser output units 433, or there may be two or fewer laser output units 433. In other words, the number of laser beams output from the projection imaging device 302 (projection unit 311) may be two or fewer or four or more. Further, the number of colors of the laser beams output from the projection imaging device 302 (projection unit 311) may be two or fewer or four or more. Also, the mirror 434 and the
< flow of geometry correction processing >
Next, a process performed in the projection imaging system 300 having such a configuration will be described. As described above, in the projection imaging system 300, the
An example of the flow of the geometry correction processing executed in the
When the geometry correction processing is started, in step S101, the projection
In step S102, the corresponding point
In step S103, the geometry correction processing unit 353 estimates the postures of each
In step S104, based on the processing results of the pose estimation and the projection screen reconfiguration in step S103, the geometric correction processing unit 353 performs geometric correction on the image to be projected from each
When the geometric correction ends, the geometric correction processing ends. The
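As a hedged illustration of the geometric correction step, one common realization (not necessarily the implementation of this disclosure) is to pre-warp the image to be projected using the estimated mapping, so that the projection appears undistorted on the projection plane. The following nearest-neighbor inverse-warp sketch assumes a known 3x3 homography H:

```python
import numpy as np

def prewarp(image, H, out_shape):
    """Inverse warp: each output pixel samples the source image at H applied
    to its own coordinates (nearest neighbor, for illustration only)."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    # Keep only samples that fall inside the source image
    valid = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    out = np.zeros(out_shape)
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return out
```

With the identity homography the image passes through unchanged; with the homography estimated from the corresponding points, the pre-warp compensates the distortion introduced by the projection geometry.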
< flow of projection imaging processing >
Next, an example of the flow of the projection imaging process performed in step S101 of fig. 23 will be described with reference to the flowchart in fig. 24.
When the projection imaging process is started, in step S121, the
In step S122, the
In step S123, the
In step S124, the
In step S125, similar to the processing in step S123, the
In step S126, the
In the case where the processing from step S122 to step S126 is repeatedly performed as described above and it is determined in step S126 that the processing has been repeated a predetermined number of times, the processing proceeds to step S127.
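The repetition over a predetermined number of frames is typically exploited for noise reduction. Assuming, for illustration only, that the repeated captures of the same projection are averaged (the disclosure's exact noise reduction method is not restated here), the effect can be sketched as follows:

```python
import numpy as np

def average_captures(frames):
    """Reduce sensor noise by averaging repeated captures of the same projection."""
    return np.mean(np.stack(frames), axis=0)

# Simulated repeated captures: a constant scene plus additive noise (illustrative)
rng = np.random.default_rng(1)
scene = np.full((60, 80), 100.0)
captures = [scene + rng.normal(0.0, 5.0, scene.shape) for _ in range(8)]

averaged = average_captures(captures)
```

Averaging N independent captures reduces the noise standard deviation by roughly a factor of sqrt(N), which is why repeating the projection and capture a predetermined number of times improves the quality of the pattern image used for detection.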
In step S127, the
In other words, in the case where there are a plurality of projection units 311 (or projection imaging devices 302), the processing from step S121 to step S127 is repeatedly performed as described above, and the image of the structured light pattern is projected in turn from each projection unit. Further, in the case where there are a plurality of imaging units 312 (or projection imaging devices 302), each
< flow of corresponding Point detection processing >
Next, an example of the flow of the corresponding point detection processing executed in step S102 of fig. 23 will be described with reference to the flowchart in fig. 25.
When the corresponding point detection processing is started, the
In step S142, the
In step S143, the pattern difference
In step S144, for example, as described in < system homography transform > and the like in <1.ISL method and corresponding point detection >, the system
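The system homography based on design values can be constructed from four point correspondences, for example the four corners of the projection image converted into the coordinate system of the imaging unit. A homography is fully determined by four point pairs; a minimal direct linear transform (DLT) sketch with hypothetical corner coordinates follows:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (four 2D point pairs)
    with the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of A
    return H / H[2, 2]

# Hypothetical corners: projection image corners and their (design-value based)
# locations in the coordinate system of the imaging unit
corners_proj = [(0, 0), (640, 0), (640, 480), (0, 480)]
corners_cam = [(12, 8), (620, 25), (600, 470), (30, 455)]
H = homography_from_points(corners_proj, corners_cam)
```

The estimated H maps each projection-image corner onto its camera-side counterpart; warping the shot pattern image with such a matrix yields the front-view coordinate system used for detection.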
In step S145, for example, as described in < system homography transform > and the like in <1.ISL method and corresponding point detection >, the corresponding
In step S146, for example, as described in < algorithm homography transform > or the like in <1.ISL method and corresponding point detection >, the algorithm
In step S147, for example, as described in < algorithm homography transform > or the like in <1.ISL method and corresponding point detection >, the algorithm
In step S148, for example, as described in < algorithm homography transform > in <1.ISL method and corresponding point detection > or the like, the corresponding
In step S149, for example, as described in < system homography transform >, < algorithm homography transform > and the like in <1.ISL method and corresponding point detection >, the inverse
In step S150, the
In this way, each process from step S141 to step S150 is repeatedly executed, and in the event that determination is made in step S150 that all shot pattern images have been processed, the corresponding point detection processing ends, and the processing returns to fig. 23. In other words, each processing unit of the corresponding point
As described in <1.ISL method and corresponding point detection >, by performing each process as described above, it is possible to suppress a decrease in accuracy of corresponding point detection.
< comparison of the number of detected corresponding points >
Next, the influence of the homography transformation on the number of detected corresponding points will be described in more detail. For example, in the case where the simulation for detecting the corresponding points is performed using the shot pattern image before the homography conversion is performed (for example, the
In other words, by applying the homography transform to the shot pattern image and performing the corresponding point detection as described above, it is possible to suppress a reduction in the number of detected corresponding points. In general, increasing the number of detected corresponding points makes it possible to perform geometric correction by using more accurate corresponding points or based on more information, and therefore the accuracy of the geometric correction can be improved. Being able to improve the accuracy of the geometric correction is, in effect, equivalent to being able to improve the accuracy of the corresponding point detection. In other words, by applying the homography transform to the shot pattern image and performing the corresponding point detection as described above, it is possible to suppress a decrease in the accuracy of the corresponding point detection.
< comparison of detection accuracy of corresponding Point >
Next, the influence of the homography transformation on the accuracy of the corresponding point detection will be described more specifically. A of fig. 26 is a schematic diagram illustrating an example of the corresponding point detection result and the accuracy thereof in the case where the corresponding point is detected using the shot pattern image before the homography conversion is performed (for example, the
In A of fig. 26 and B of fig. 26, each circle represents a corresponding point detected in the coordinates of the image to be projected. Also, the hue of each circle represents the magnitude of the error of the corresponding point detection, with darker hues indicating larger errors. As clearly shown by the comparison between A of fig. 26 and B of fig. 26, the case (B of fig. 26) where the corresponding point is detected using the
< comparison of detection accuracy of corresponding Point >
Next, a description will be given of the accuracy of the corresponding point detection compared between the case where the
An example of the corresponding point detection result and its accuracy in the case where an image is taken from the vicinity of the projection plane is illustrated in fig. 27. Also, an example of the corresponding point detection result and its accuracy in the case of taking an image from the front of the projection plane is illustrated in fig. 28. As clearly shown in fig. 27 and 28, the accuracy of the corresponding point detection does not greatly vary between the two cases. In other words, by applying the present technology as described above, even if the
< projection imaging apparatus >
Note that, in fig. 19, the
By adopting such a configuration, no additional optical system needs to be added, and the housing of the
Note that in the housing of the
< Pattern image >
Note that although it is described above that the
< method for detecting corresponding Point >
In addition, although it is described above that the ISL method is used, the corresponding point detection method may be any method as long as the method involves using the pattern image, and is not limited to the ISL method.
In addition, it is not necessary to superimpose the pattern image on the content image. In other words, the shot pattern image may also be obtained by shooting an image of the projection image projected without superimposing the pattern image on the content image. That is, the shot pattern image in this case includes the pattern image but does not include the content image. Even to such a shot pattern image, the homography transform can be applied similarly to the case described earlier.
<3. second embodiment >
< other exemplary configurations of projection imaging System and projection imaging apparatus >
Note that the exemplary configuration of the projection imaging system to which the present technology is applied is not limited to the example described above. For example, like the
For example, a communication network and a communication channel according to any communication standard may be included in the
The
The
Even in the case where the
In addition, the
In the case where it is not necessary to distinguish the projection devices 511-1 to 511-N in the description, the projection devices 511-1 to 511-N will be referred to as a projection device(s) 511. In the case where it is not necessary to distinguish the imaging devices 512-1 to 512-M in the description, the imaging devices 512-1 to 512-M will be referred to as the imaging device(s) 512.
Each
Also, the number of each device, the shape and size of the housing, the arrangement position, and the like may be arranged in any manner. Also, as in the example in a of fig. 30, each apparatus may also be communicably connected to each other through other devices (apparatuses or transmission channels) like the
Even in the case where the
Further, the
The projection imaging devices 521-1 to 521-N include control units 523-1 to 523-N, respectively. In the case where it is not necessary to distinguish the control units 523-1 to 523-N in the description, the control units 523-1 to 523-N will be referred to as control unit(s) 523. The
In other words, in the case of the
Even in the case where the
Further, for example, as illustrated in B of fig. 31, the projection imaging system 300 may also be configured as a single apparatus. The
In the
Therefore, even in the case where the
<4. others >
< software >
The series of processes described above may be performed by hardware, and may also be performed by software. Moreover, some processes may be performed by hardware while the other processes are performed by software. In the case where the series of processes described above is performed by software, programs, data, and the like forming the software are installed through a network or from a recording medium.
For example, in the case of the
As another example, in the case of the
Further, programs, data, and the like may also be provided via a wired or wireless transmission medium (such as a local area network, the internet, or digital satellite broadcasting). For example, in the case of the
For example, in the case of the
< supplement >
Embodiments of the present technology are not limited to the embodiments described above, and various changes may be made without departing from the scope of the present technology.
For example, the present technology may also be implemented by any configuration constituting an apparatus or a system, for example, as a processor of a system Large Scale Integration (LSI) or the like, a module using a plurality of processors or the like, a unit using a plurality of modules or the like, or a set obtained by further adding other functions to a unit (that is, a partial configuration of an apparatus).
Note that in this specification, a system refers to a set of a plurality of constituent elements (e.g., devices or modules (parts)), regardless of whether or not all of the constituent elements are in the same casing. Therefore, a plurality of devices included in different casings and connected via a network, and a single device in which a plurality of modules are included in one casing, are both systems.
Also, each of the processing units described above may be implemented by any configuration as long as the configuration has the functions described for the processing unit. For example, a processing unit may be configured by using any type of circuit, LSI, system LSI, processor, module, unit, set, apparatus, device, system, or the like. In addition, the above-described contents may be combined in plural. For example, the same type of configuration, such as a plurality of circuits or a plurality of processors, may be combined, or different types of configurations, such as a circuit and an LSI, may be combined.
Further, for example, elements described as a single device (or processing unit) may be divided and configured into multiple devices (or processing units). Conversely, elements described above as multiple devices (or processing units) may be collectively configured as a single device (or processing unit). Furthermore, elements other than those described above may be added to the configuration of each device (or each processing unit). In addition, portions of the configuration of a given device (or processing unit) may be included in the configuration of another device (or another processing unit), so long as the configuration or operation of the system as a whole is substantially the same.
In addition, for example, the present technology may employ a configuration of cloud computing that performs processing by distributing and sharing functions of a plurality of devices through a network.
In addition, for example, the above-described program may be executed in any apparatus. In this case, it is sufficient if the apparatus has necessary functions (function blocks, etc.) and can obtain necessary information.
In addition, in the case where a plurality of processes are included in one step, the plurality of processes included in that step may be performed by one apparatus or may be shared and performed by a plurality of apparatuses.
In a program executed by a computer, the processes of the steps describing the program may be performed chronologically in the order described in this specification, may be performed simultaneously, or may be performed individually at necessary timings (such as when called).
In addition, part or all of the present technology described in any embodiment may be performed in combination with part or all of the present technology described in another embodiment. Additionally, part or all of the present technology described above may be performed in combination with another technology not described above.
Further, the present technology can also be configured as follows.
(1)
An image processing apparatus including:
a corresponding point detection unit that applies a homography transform to a captured pattern image obtained as a result of the imaging unit capturing an image of the predetermined structured light pattern projected by the projection unit, and detects a corresponding point between the projection image projected by the projection unit and the captured image captured by the imaging unit using the captured pattern image to which the homography transform is applied.
(2)
The image processing apparatus according to (1), wherein
The corresponding point detecting unit applies homography transformation based on design values of the projecting unit and the imaging unit so as to convert the captured pattern image into a coordinate system as seen from the front, and detects the corresponding point by using the captured pattern image converted into the coordinate system as seen from the front.
(3)
The image processing apparatus according to (2), wherein
The corresponding point detecting unit converts coordinates of four corners of the projection image projected by the projecting unit into a coordinate system of the imaging unit based on the design value, and applies homography transformation to the shot pattern image using the converted coordinates of the four corners.
(4)
The image processing apparatus according to (2) or (3), wherein
The corresponding point detecting unit applies an inverse homography transform that is an inverse transform of the homography transform to the detected corresponding point.
(5)
The image processing apparatus according to (1), wherein
The corresponding point detection unit
applies the homography transform based on design values of the projection unit and the imaging unit so as to convert the shot pattern image into a coordinate system as seen from the front, and detects temporary corresponding points by using the shot pattern image converted into the coordinate system as seen from the front, and
also applies a homography transform based on the detected temporary corresponding points so as to convert the shot pattern image converted into the coordinate system as seen from the front into the coordinate system of the projection image projected by the projection unit, and detects the corresponding points by using the shot pattern image converted into the coordinate system of the projection image.
(6)
The image processing apparatus according to (5), wherein
The corresponding point detecting unit applies an inverse homography transform that is an inverse transform of the homography transform to the detected corresponding point.
(7)
The image processing apparatus according to any of (1) to (6), wherein
The captured pattern image is an image obtained by using a captured image of the structured light pattern projected superimposed onto another image.
(8)
The image processing apparatus according to (7), wherein
The shot pattern image is a differential image between respective shot images of two projected images including the structured light pattern, the two projected images having the same shape as each other and luminance change directions opposite to each other.
(9)
The image processing apparatus according to (8), wherein
The shot pattern images are differential images between composite images including the structured light pattern, the composite images having mutually opposite luminance change directions, each of the composite images being obtained by adding together corresponding shot images of a plurality of projected images including the structured light pattern, the plurality of projected images having mutually the same luminance change direction.
(10)
The image processing apparatus according to any of (1) to (9), wherein
The structured light pattern comprises two elliptical patterns having mutually opposite directions of brightness variation.
(11)
The image processing apparatus according to (10), wherein
The structured light pattern includes a plurality of patterns having elliptical shapes with different longitudinal directions.
(12)
The image processing apparatus according to any of (1) to (11), further comprising:
a projection unit.
(13)
The image processing apparatus according to (12), wherein
The projection unit is positioned proximate to the projection plane.
(14)
The image processing apparatus according to (12) or (13), wherein
The projection unit projects the same structured light pattern multiple times.
(15)
The image processing apparatus according to (12), wherein
A plurality of the projection units are provided, and
Each projection unit projects the structured light pattern in turn.
(16)
The image processing apparatus according to any of (1) to (15), further comprising:
an imaging unit.
(17)
The image processing apparatus according to (16), wherein
The imaging unit is positioned proximate to the projection plane.
(18)
The image processing apparatus according to (16) or (17), wherein
The imaging unit captures multiple projected images of the same structured light pattern.
(19)
The image processing apparatus according to any of (16) to (18), wherein
A plurality of the imaging units are provided, and
Each imaging unit takes an image of the projected image of the structured light pattern.
(20)
An image processing method including:
applying a homography transform to a shot pattern image obtained as a result of an imaging unit shooting an image of a predetermined structured light pattern projected by a projection unit, and detecting corresponding points between a projection image projected by the projection unit and a shot image shot by the imaging unit using the shot pattern image to which the homography transform is applied.
List of reference numerals
100 pattern image
101 pattern
300 projection imaging system
301 control device
302 projection imaging device
311 projection unit
312 imaging unit
351 projection imaging processing unit
352 corresponding point detection processing unit
353 geometric correction processing unit
361 process control unit
362 projection control unit
363 imaging control unit
371 control unit
372 noise reduction unit
373 pattern differential image generating unit
374 system homography conversion unit
375 corresponding point detecting unit
376 algorithm homography transformation unit
377 corresponding point detecting unit
378 inverse homography transform unit
401 control unit
500 projection imaging system
501 network
510 projection imaging system
511 projection device
512 imaging device
520 projection imaging system
521 projection imaging device
523 control unit
530 projection imaging device