Camera module

Document No.: 573290 · Published: 2021-05-18

Reading note: This technology, "Camera module" (摄像头模组), was designed and created by 李雄, 张成河, and 李昌奕 on 2019-09-20. Its main content is as follows: The camera module according to the embodiment of the invention comprises: a light output section for sequentially outputting a first output light signal and a second output light signal in a single period, the first output light signal and the second output light signal being irradiated onto an object; a lens part for condensing a first input optical signal and a second input optical signal reflected from the object, the lens part including an infrared (IR) filter and at least one lens disposed on the IR filter; an image sensor for generating a first electrical signal and a second electrical signal from the first input optical signal and the second input optical signal collected by the lens part; an inclined portion for moving the optical paths of the first input optical signal and the second input optical signal according to a predetermined rule; and an image control section for acquiring depth information of the object by using the first electrical signal and a phase difference between the first output optical signal and the first input optical signal, and acquiring a 2D image of the object by using the second electrical signal.

1. A camera device, comprising:

a light output section configured to sequentially output a first output light signal and a second output light signal in a period, the first output light signal and the second output light signal being irradiated onto an object;

a lens section including an infrared filter, i.e., an IR filter, and at least one lens disposed on the IR filter, and configured to collect a first input optical signal and a second input optical signal reflected from the object;

an image sensor configured to generate first and second electrical signals from the first and second input light signals collected by the lens portion;

an inclined portion configured to move an optical path of the first input optical signal and an optical path of the second input optical signal according to a predetermined rule; and

an image control section configured to obtain a depth map of the object using the first electric signal and a phase difference between the first output optical signal and the first input optical signal, and configured to obtain a two-dimensional image, i.e., a 2D image, of the object using the second electric signal.

2. The camera device according to claim 1, wherein the image control section obtains the depth map of the object using data extracted at a plurality of periods in which the optical path of the first input optical signal is moved a plurality of times according to the predetermined rule.

3. The camera device according to claim 2, wherein according to the predetermined rule, the optical path of the first input optical signal is moved in a first direction based on a preset movement value for a first period of time, moved in a second direction perpendicular to the first direction based on the preset movement value for a second period of time, moved in a third direction perpendicular to the second direction based on the preset movement value for a third period of time, and moved in a fourth direction perpendicular to the third direction based on the preset movement value for a fourth period of time.

4. The camera device according to claim 3, wherein the image control section obtains the depth map of the object by matching, using the first electric signal, a first image obtained from the data extracted at the first period, a second image obtained from the data extracted at the second period, a third image obtained from the data extracted at the third period, and a fourth image obtained from the data extracted at the fourth period.

5. The camera device according to claim 1, wherein the image control section obtains the 2D image using data extracted in one period in which an optical path of the second input optical signal is moved a plurality of times according to the predetermined rule.

6. The camera device according to claim 5, wherein the optical path of the second input optical signal is shifted according to the predetermined rule in a first direction based on a preset shift value in a first sub-period of the one period, in a second direction perpendicular to the first direction based on the preset shift value in a second sub-period of the one period, in a third direction perpendicular to the second direction based on the preset shift value in a third sub-period of the one period, and in a fourth direction perpendicular to the third direction based on the preset shift value in a fourth sub-period of the one period.

7. The camera device according to claim 6, wherein the image control section obtains the 2D image of the object by matching, using the second electric signal, a first sub-image obtained from the data extracted at the first sub-period, a second sub-image obtained from the data extracted at the second sub-period, a third sub-image obtained from the data extracted at the third sub-period, and a fourth sub-image obtained from the data extracted at the fourth sub-period.

8. The camera device according to claim 1, wherein the light output section outputs the second output light signal in the form of a continuous wave.

9. The camera device of claim 3, wherein the preset movement value is greater than a value corresponding to zero pixels and less than a value corresponding to one pixel.

Technical Field

The invention relates to a camera module.

Background

Three-dimensional content is used in a wide range of fields, including games, culture, education, manufacturing, and autonomous driving, and a depth map is required in order to obtain such three-dimensional content. A depth map represents spatial distance and expresses the perspective information of one point relative to another in a two-dimensional image.

One method of obtaining a depth map is to project Infrared (IR) structured light onto an object and analyze the light reflected from the object to extract the depth map. With IR structured light, however, it is difficult to obtain a desired level of depth resolution for a moving object.

Meanwhile, as a technique to replace the IR structured light method, the time of flight (ToF) method is attracting attention. In the ToF method, the time of flight, i.e., the time taken for light to be emitted, reflected by an object, and returned, is measured to calculate the distance to the object. A great advantage of the ToF method is that distance information in three-dimensional space is provided quickly, in real time. In addition, the user can obtain accurate distance information without applying additional algorithms or performing hardware corrections. Furthermore, an accurate depth map can be obtained even when a very close object or a moving object is measured.
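As a back-of-the-envelope illustration of the ToF principle (my own example, not part of the original disclosure), the distance follows directly from the round-trip time of the light:

```python
# Minimal ToF distance illustration (assumes an ideal, direct time measurement).
C = 299_792_458  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    # The light travels to the object and back, so distance = c * t / 2.
    return C * round_trip_time_s / 2

print(tof_distance(6.67e-9))  # a ~6.67 ns round trip corresponds to ~1.0 m
```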

However, in the case of the current ToF method, there is a problem that information that can be obtained from one frame is insufficient, that is, the resolution thereof is very low. In addition, there is a problem in that the resolution of a two-dimensional (2D) image obtained using IR light is also low.

As a method of increasing the resolution, there is a method of increasing the number of pixels of the image sensor. However, in this case, there is a problem in that the volume and manufacturing cost of the camera module are greatly increased.

Therefore, there is a need for a method of obtaining a depth map that enables resolution to be improved without greatly increasing the volume and manufacturing cost of a camera module.

Disclosure of Invention

Technical problem

The invention aims to provide a camera module which extracts a depth map by using a time-of-flight (ToF) method and generates a two-dimensional (2D) infrared image.

Technical scheme

One aspect of the present invention provides a camera module, including: a light output section configured to sequentially output a first output light signal and a second output light signal in a period, the first output light signal and the second output light signal being irradiated onto an object; a lens part including an Infrared (IR) filter and at least one lens disposed on the IR filter, and configured to collect a first input optical signal and a second input optical signal reflected from the object; an image sensor configured to generate first and second electrical signals from the first and second input optical signals collected by the lens section; an inclined portion configured to move an optical path of the first input optical signal and an optical path of the second input optical signal according to a predetermined rule; and an image control section configured to obtain a depth map of the object using the first electrical signal and a phase difference between the first output optical signal and the first input optical signal, and configured to obtain a two-dimensional (2D) image of the object using the second electrical signal.

The image control section may obtain the depth map of the object using data extracted at a plurality of periods in which the optical path of the first input optical signal is moved a plurality of times according to a predetermined rule.

According to a predetermined rule, the optical path of the first input optical signal may be moved in a first direction based on a preset movement value for a first period of time, moved in a second direction perpendicular to the first direction based on the preset movement value for a second period of time, moved in a third direction perpendicular to the second direction based on the preset movement value for a third period of time, and moved in a fourth direction perpendicular to the third direction based on the preset movement value for a fourth period of time.

The image control section may obtain the depth map of the object by matching, using the first electric signal, a first image obtained from the data extracted at the first period, a second image obtained from the data extracted at the second period, a third image obtained from the data extracted at the third period, and a fourth image obtained from the data extracted at the fourth period.

The image control section may obtain the 2D image using data extracted for a period in which the optical path of the second input optical signal is moved a plurality of times according to a predetermined rule.

According to a predetermined rule, the optical path of the second input optical signal may be shifted in a first direction based on a preset shift value in a first sub-period of the one period, in a second direction perpendicular to the first direction based on the preset shift value in a second sub-period of the one period, in a third direction perpendicular to the second direction based on the preset shift value in a third sub-period of the one period, and in a fourth direction perpendicular to the third direction based on the preset shift value in a fourth sub-period of the one period.

The image control part may obtain the 2D image of the object by matching, using the second electric signal, a first sub-image obtained from the data extracted at the first sub-period, a second sub-image obtained from the data extracted at the second sub-period, a third sub-image obtained from the data extracted at the third sub-period, and a fourth sub-image obtained from the data extracted at the fourth sub-period.

The optical output section may output the second output optical signal in the form of a continuous wave.

The preset movement value may be greater than a value corresponding to zero pixels and less than a value corresponding to one pixel.

Advantageous effects

According to an embodiment of the present invention, both a depth map and a two-dimensional (2D) infrared image can be obtained using one camera module.

In addition, by shifting the optical path of the incident light signal, a depth map and a 2D infrared image having high resolution can be obtained without greatly increasing the number of pixels of the image sensor.

Drawings

Fig. 1 is a block diagram of a camera module according to an embodiment of the present invention.

Fig. 2 is a sectional view showing one example of the camera module.

Fig. 3 is a view for describing an image sensor section according to an embodiment of the present invention.

Fig. 4 is a view for describing an output optical signal of the light output part according to an embodiment of the present invention.

Fig. 5 is a view for describing a process in which the image sensor section generates the first electric signal according to an embodiment of the present invention.

Fig. 6 is a set of views for describing the optical path of an input optical signal changed by an inclined portion.

Fig. 7 and 8 are views for describing the effect of moving an image frame input to an image sensor according to tilt control of an Infrared (IR) filter.

Fig. 9 is a view for describing a predetermined rule according to an embodiment of the present invention, by which the optical path of an input optical signal is moved by the inclined portion.

Fig. 10 is a view showing an example of a phase image obtained by the camera module according to an embodiment of the present invention, fig. 11 is a view showing an example of an amplitude image, and fig. 12 is a set of views showing an example of a depth image.

Fig. 13 and 14 are views for describing a Super Resolution (SR) technique according to an embodiment of the present invention.

Fig. 15 is a view for describing a process of arranging pixel values according to an embodiment of the present invention.

Fig. 16 is a flowchart illustrating a method of generating a depth image and a two-dimensional (2D) image by a camera device according to an embodiment of the present invention.

Detailed Description

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

However, the technical spirit of the present invention is not limited to some embodiments to be described, and may be implemented using various other embodiments, and at least one of the components of the embodiments may be selectively coupled, substituted and used within the scope of the technical spirit.

In addition, all terms (including technical and scientific terms) used herein may be interpreted as having a customary meaning to those skilled in the art, and the meaning of general terms (such as terms defined in a common dictionary) will be interpreted by considering the contextual meaning of the related art, unless the context clearly defines otherwise.

Also, the terms used in the embodiments of the present invention are considered in a descriptive sense and not for the purpose of limiting the invention.

In this specification, the singular includes the plural unless the context clearly indicates otherwise, and in the case of the description "at least one (or one or more) of A, B, and C", one or more of all possible combinations of A, B, and C may be included.

In addition, in describing the components of the present invention, terms such as "first", "second", "A", "B", "(a)", and "(b)" may be used.

These terms are only intended to distinguish one element from another element, and the nature, order, and the like of the elements are not limited by these terms.

In addition, it is to be understood that when an element is referred to as being "connected" or coupled to another element, such description may include the case where the element is directly connected or coupled to the other element and the case where the element is connected or coupled to the other element with another element interposed therebetween.

In addition, where any element is described as being formed or disposed "on or under" another element, such description includes instances where two elements are formed or disposed in direct contact with each other and instances where one or more other elements are interposed between the two elements. In addition, when one element is described as being disposed "on" or "under" another element, such description may include the case where the one element is disposed on the upper side or the lower side with respect to the other element.

First, the structure of a camera module according to an embodiment of the present invention will be described in detail with reference to fig. 1 and 2.

Fig. 1 is a block diagram of a camera module according to an embodiment of the present invention.

Referring to fig. 1, the camera module 100 includes a light output part 110, a lens part 120, an image sensor part 130, an inclined part 140, and an image control part 150.

The light output part 110 generates a first output light signal and a second output light signal, and irradiates the first output light signal and the second output light signal onto an object. In this case, the first output optical signal and the second output optical signal may be sequentially output in one period and repeatedly output in a plurality of periods.

The light output part 110 may generate and output an output light signal in the form of a pulse wave or a continuous wave. The continuous wave may have the form of a sine wave or a square wave. Specifically, the light output section 110 may generate the first output light signal in the form of a pulse wave or a continuous wave, and generate the second output light signal in the form of a continuous wave. According to the embodiment of the present invention, since the second output light signal is output in the form of a continuous wave, there is an advantage in that the switching loss of the light output part 110 is reduced.

In this specification, the output light may refer to light output from the light output part 110 and incident on an object, and the input light may refer to light output from the light output part 110, reaching the object, reflected by the object, and input to the camera module 100. From the perspective of the object, the output light may be incident light and the input light may be reflected light.

The light output section 110 irradiates the generated first and second output light signals onto an object during a set exposure period. In this case, the exposure period refers to one frame period. In the case where a plurality of frames are generated, the set exposure period is repeated. For example, when the camera module 100 captures an object at 20 FPS, the exposure period is 1/20 second. In the case of generating 100 frames, the exposure period may be repeated 100 times.

Referring to fig. 1, to generate the first output optical signal and the second output optical signal, the light output section 110 may include a light source 112 configured to generate light and a light modulator 114 configured to modulate the light.

First, the light source 112 generates light. The light source 112 is repeatedly turned on and off at predetermined time intervals to generate a first output optical signal and a second output optical signal having a pulse wave form or a continuous wave form. The predetermined time interval may correspond to the frequency of the output optical signal. The turning on and off of the light source may be controlled by the light modulator 114.

In this case, the light generated by the light source 112 may be infrared light having a wavelength of 770 nm to 3000 nm, or may be visible light having a wavelength of 380 nm to 770 nm. A Light Emitting Diode (LED) may be used as the light source 112, and the light source 112 may have a form in which a plurality of LEDs are arranged in a predetermined pattern. In addition, the light source 112 may include an organic LED (OLED) or a Laser Diode (LD). Alternatively, the light source 112 may be a Vertical Cavity Surface Emitting Laser (VCSEL). A VCSEL is a type of laser diode that converts an electrical signal into an optical signal, and may use a wavelength of about 800 nm to 1000 nm, e.g., about 850 nm or 940 nm.

In addition, the light modulator 114 controls the turning on and off of the light source 112 so that the light source 112 generates the first output optical signal and the second output optical signal in the form of a continuous wave or a pulse wave. The light modulator 114 may control the light source 112 to generate the output optical signal in the form of a continuous wave or a pulse wave through frequency modulation, pulse modulation, or the like.

The lens section 120 collects the first input optical signal and the second input optical signal reflected from the object and transmits them to the image sensor section 130. The lens part 120 may include an Infrared (IR) filter and at least one lens disposed on the IR filter to collect the first input optical signal and the second input optical signal.

The image sensor section 130 generates first and second electrical signals using the first and second input optical signals collected through the lens section 120. In this case, the first electrical signal is a signal corresponding to the first input optical signal, and the second electrical signal is a signal corresponding to the second input optical signal.

Specifically, the image sensor section 130 may be synchronized with the on and off periods of the light output section 110 to receive the first input light signal. The image sensor section 130 may receive the first input optical signal that is in phase and out of phase with the first output optical signal output from the light output section 110. That is, the image sensor section 130 may repeatedly perform an operation of receiving the first input optical signal when the light source is turned on and an operation of receiving the first input optical signal when the light source is turned off.

In addition, the image sensor section 130 may receive a second input optical signal in response to a second output optical signal of the light output section 110. Specifically, the image sensor section 130 may receive the second input optical signal in synchronization with the time when the second output optical signal is output.

Then, the image sensor section 130 generates a first electrical signal and a second electrical signal using the received first input light signal and the received second input light signal, respectively. In this case, the first electrical signal may be generated using a plurality of reference signals having different phase differences.

The inclined portion 140 moves the optical paths of the first and second input optical signals according to a predetermined rule.

Specifically, the slope section 140 may shift the first input optical signal by a preset shift value in a predetermined direction every period. In addition, the slope section 140 may shift the second input optical signal by a preset shift value in a predetermined direction every sub period.

In this case, the slope part 140 may move the optical paths of the first and second input optical signals according to a preset movement value. The preset movement value may be set in units of sub-pixels of the image sensor section 130, where a sub-pixel is a unit greater than zero pixels and smaller than one pixel. In addition, the slope part 140 may change the optical path of the output optical signal or the input optical signal upward, downward, leftward, or rightward from the current optical path.

The image control section 150 obtains a depth map and a two-dimensional (2D) image using the first electric signal and the second electric signal.

Referring to fig. 1, the image control part 150 may include a first image acquisition part 151 configured to obtain a depth image and a second image acquisition part 152 configured to obtain a 2D image.

Specifically, the first image acquisition section 151 obtains a depth map of the object using the first electric signal and the phase difference between the first output optical signal and the first input optical signal. The first image acquisition section 151 obtains the depth map of the object using data extracted over a plurality of periods in which the optical path of the first input optical signal is moved a plurality of times according to the predetermined rule. In this case, the first image acquisition part 151 may obtain the depth map of the object by matching, using the first electric signal, a first image obtained from the data extracted at the first period, a second image obtained from the data extracted at the second period, a third image obtained from the data extracted at the third period, and a fourth image obtained from the data extracted at the fourth period.

In addition, the second image acquisition section 152 obtains a 2D image of the object using the second electric signal. The second image acquisition section 152 may obtain the 2D image using data extracted over one period in which the optical path of the second input optical signal is moved a plurality of times according to the predetermined rule. In this case, the second image acquisition section 152 may obtain the 2D image of the object by matching, using the second electric signal, a first sub-image obtained from the data extracted at the first sub-period, a second sub-image obtained from the data extracted at the second sub-period, a third sub-image obtained from the data extracted at the third sub-period, and a fourth sub-image obtained from the data extracted at the fourth sub-period.

Fig. 2 is a sectional view showing one example of the camera module.

Referring to fig. 2, the camera module 300 includes a lens assembly 310, an image sensor 320, and a printed circuit board 330. In this case, the lens assembly 310 may correspond to the lens part 120 of fig. 1, and the image sensor 320 may correspond to the image sensor part 130 of fig. 1. In addition, the image control section 150 of fig. 1 and the like may be formed on the printed circuit board 330. Although not shown in the drawings, the light output part 110 of fig. 1 may be disposed beside the image sensor 320 on the printed circuit board 330 or outside the camera module 300, for example, beside the camera module 300.

Lens assembly 310 may include a lens 312, a lens barrel 314, a lens holder 316, and an IR filter 318.

The lens 312 may be provided as a plurality of lenses 312, or may be provided as one lens 312. In the case where the lens 312 is provided as a plurality of lenses 312, the lenses may be aligned with respect to the central axis to form an optical system. In this case, the central axis may be the same as the optical axis of the optical system.

The lens barrel 314 may be coupled to the lens holder 316 and provided with a space for accommodating a lens therein. The lens barrel 314 may be rotatably coupled to one or more lenses, but this is merely exemplary, and the lens barrel 314 may be coupled thereto by a different method, for example, a method using an adhesive (e.g., an adhesive resin such as epoxy).

The lens holder 316 may be coupled to the lens barrel 314 and support the lens barrel 314, and to the printed circuit board 330 on which the image sensor 320 is mounted. Due to the lens holder 316, a space where the IR filter 318 can be attached can be formed below the lens barrel 314. A spiral pattern may be formed on an inner circumferential surface of the lens holder 316, and the lens holder 316 may be rotatably coupled to the lens barrel 314 (a spiral pattern like the spiral pattern of the lens holder 316 is formed on an outer circumferential surface of the lens barrel 314). However, this is merely exemplary, and the lens holder 316 and the lens barrel 314 may be coupled by an adhesive, or the lens holder 316 and the lens barrel 314 may also be integrally formed.

The lens holder 316 may be divided into an upper holder 316-1 coupled to the lens barrel 314 and a lower holder 316-2 coupled to the printed circuit board 330 on which the image sensor 320 is mounted, and the upper holder 316-1 and the lower holder 316-2 may be integrally formed, separately formed and fixed or coupled, or separately formed and spaced apart from each other. In this case, the upper holder 316-1 may be formed to have a diameter smaller than that of the lower holder 316-2.

The above example is only one embodiment, and the lens part 120 may also be provided in a different structure that may collect the first input optical signal and the second input optical signal incident on the camera module 100 and transmit the collected first input optical signal and the collected second input optical signal to the image sensor part 130.

Fig. 3 is a view for describing an image sensor section according to an embodiment of the present invention.

The image sensor section 130 receives a first input optical signal and a second input optical signal to generate a first electrical signal and a second electrical signal, respectively.

For this, the image sensor part 130 may be implemented as a Complementary Metal Oxide Semiconductor (CMOS) image sensor or a Charge Coupled Device (CCD) image sensor, and may be formed in a structure in which a plurality of pixels are arranged in a mesh shape. For example, in the case of the image sensor section 130 having a resolution of 320 × 240 as shown in fig. 3, 76800 pixels may be arranged in a grid shape.

Each pixel 132 may include a first light receiving part 132-1 and a second light receiving part 132-2, the first light receiving part 132-1 including a first photodiode and a first transistor, and the second light receiving part 132-2 including a second photodiode and a second transistor.

A constant gap is generated between the plurality of pixels, as in the shaded area of fig. 3. In the embodiments of the present invention, one pixel together with the constant gap adjacent to it will be described as a single pixel.

Hereinafter, components of the camera module according to the embodiment of the present invention will be described in detail with reference to fig. 4 to 15.

Fig. 4 is a view for describing an output optical signal of the light output part according to an embodiment of the present invention.

As described above, the first output optical signal and the second output optical signal may be sequentially output for one period (i.e., one exposure period). In addition, the first output optical signal and the second output optical signal may be repeatedly output for a plurality of exposure periods. That is, the first output optical signal and the second output optical signal may be output in the same mode.

In this case, as shown in fig. 4, the first output optical signal and the second output optical signal may be generated to have different frequencies. According to an embodiment of the present invention, as shown in fig. 4, the light output section 110 may be controlled to generate the first output light signal having the frequency f1 in the first half of the exposure period and the second output light signal having the frequency f2 in the remaining half of the exposure period. For example, the light output section 110 may generate a first output optical signal having a frequency of 80.32 MHz and a second output optical signal having a frequency of 60.24 MHz.

Fig. 5 is a view for describing a process in which the image sensor section generates the first electric signal according to an embodiment of the present invention.

According to an embodiment of the present invention, in the image sensor section 130, a process of receiving the first input optical signal and generating the first electrical signal may be different from a process of receiving the second input optical signal and generating the second electrical signal.

First, a process of receiving a first input optical signal and generating a first electrical signal will be described. The first light receiving section 132-1 receives a first input optical signal having the same phase as that of the first output optical signal. That is, when the light source is turned on, the first photodiode is turned on and receives the first input optical signal. In addition, when the light source is turned off, the first photodiode is turned off and stops receiving the first input optical signal. The first photodiode converts the received first input optical signal into a current and transfers the current to the first transistor. The first transistor converts the received current into an electric signal and outputs the electric signal.

The second light receiving section 132-2 receives the first input optical signal at a phase opposite to the phase of the waveform of the output light. That is, when the light source is turned on, the second photodiode is turned off and does not receive the first input optical signal. When the light source is turned off, the second photodiode is turned on and receives the first input optical signal. The second photodiode converts the received first input optical signal into a current and transfers the current to the second transistor. The second transistor converts the received current into an electrical signal.

Accordingly, the first light receiving part 132-1 may be referred to as an in-phase receiving unit, and the second light receiving part 132-2 may be referred to as an out-of-phase receiving unit. When the first and second light receiving parts 132-1 and 132-2 are activated at different times in this way, a difference in the amount of received light arises according to the distance to the object. For example, in the case where the object is located directly in front of the camera module 100 (i.e., the distance is 0), the time taken for the light to be output from the light output part 110 and reflected back from the object is zero, so the on/off timing of the light source coincides exactly with the light receiving timing. Accordingly, only the first light receiving part 132-1 receives light, and the second light receiving part 132-2 does not. As another example, in the case where the object is spaced apart from the camera module 100 by a certain distance, it takes time for the light to be output from the light output part 110 and reflected back from the object, so the on/off timing of the light source differs from the light receiving timing. Accordingly, a difference in the amount of received light arises between the first light receiving part 132-1 and the second light receiving part 132-2. That is, the distance to the object may be calculated using the difference in the amount of light received by the first light receiving part 132-1 and the second light receiving part 132-2.
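The charge-difference idea can be sketched in code. The following is an illustrative pulsed-ToF calculation under my own simplifying assumptions (a single rectangular pulse of known width, no ambient light); it is not the patent's formula, which follows later in equations 3 and 4:

```python
# Sketch: distance from the charge split between the in-phase receiver
# (first light receiving part) and the out-of-phase receiver (second
# light receiving part). q_in, q_out, and pulse_width_s are hypothetical.
C = 299_792_458  # m/s

def pulsed_tof_distance(q_in: float, q_out: float, pulse_width_s: float) -> float:
    delay = pulse_width_s * q_out / (q_in + q_out)  # estimated round-trip time
    return C * delay / 2

print(pulsed_tof_distance(1000, 0, 50e-9))   # 0.0 m: object right at the camera
print(pulsed_tof_distance(500, 500, 50e-9))  # ~3.75 m: charge split evenly
```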

The image sensor section 130 may generate a first electrical signal corresponding to each of a plurality of reference signals having different phase differences, using the reference signals and the electrical signals generated by the transistors of the first light receiving section 132-1 and the second light receiving section 132-2. As shown in fig. 5, according to an embodiment of the present invention, four reference signals C1 to C4 may be used when generating the first electrical signal corresponding to the first input optical signal. The reference signals C1 to C4 may have the same frequency as the output optical signal and a phase difference of 90° from each other. One of the four reference signals, C1, may have the same phase as the output optical signal. The phase of the input optical signal is delayed by an amount corresponding to the distance over which the output optical signal travels to the object, is reflected, and returns. The image sensor section 130 mixes the input optical signal with each corresponding reference signal. Then, the image sensor part 130 may generate a first electrical signal corresponding to each shaded region of the reference signals in fig. 5.

In this case, the frequency of each reference signal may be set to be the same as the frequency of the first output optical signal output from the optical output part 110. In addition, the image sensor section 130 may convert the received second input optical signal into an electrical signal. Each electrical signal may include information about an amount of charge or an amount of voltage corresponding to the reference signal.

Next, a process of receiving the second input optical signal and generating the second electrical signal will be described. The process of generating the second electrical signal may be different from the process of generating the first electrical signal. Since the second electrical signal is used for obtaining a 2D image rather than a depth image, reference signals need not be used, and the first and second light receiving parts 132-1 and 132-2 may receive the second input optical signal simultaneously. However, the first and second light receiving parts 132-1 and 132-2 may receive light in synchronization with the tilting period of the tilting part 140.

Next, the inclined portion according to an embodiment of the present invention will be described in detail with reference to fig. 6 to 8.

Fig. 6 is a set of views for describing the optical path of an input optical signal changed by an inclined portion.

In fig. 6A, the portion shown by a solid line shows the current optical path of the input optical signal, and the portion shown by a broken line shows the changed optical path. When the exposure period corresponding to the current optical path ends, the slope part 140 may change the optical path of the input optical signal to the path shown by the dotted line. The path of the input optical signal is then shifted by one sub-pixel from the current optical path. For example, as shown in fig. 6A, when the inclined part 140 moves the current optical path to the right by 0.173°, the input optical signal incident on the image sensor part 130 moves to the right by 0.5 pixels, i.e., one sub-pixel.

According to an embodiment of the present invention, the inclined portion 140 may change the optical path of the input optical signal in a clockwise direction from the reference position. For example, as shown in fig. 6B, after the first exposure period ends, the inclined portion 140 moves the optical path of the input optical signal to the right by 0.5 pixels based on the image sensor portion 130 in the second exposure period. In addition, the slope section 140 moves the optical path of the input optical signal downward by 0.5 pixels based on the image sensor section 130 in the third exposure period. In addition, the slope section 140 shifts the optical path of the input optical signal to the left by 0.5 pixels based on the image sensor section 130 in the fourth exposure period. In addition, the slope section 140 moves the optical path of the input optical signal upward by 0.5 pixels based on the image sensor section 130 in the fifth exposure period. That is, the slope part 140 returns the optical path of the input optical signal to its original position over four movements. Since the optical path of the output optical signal can be moved in a similar manner, a detailed description thereof is omitted. In addition, the clockwise variation pattern of the optical path is merely exemplary, and the variation pattern may instead be counterclockwise.
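A small sketch of this clockwise shift schedule (the direction sequence and the 0.5-pixel step are taken from the description above; the coordinate convention is my own):

```python
# Cumulative optical-path offset per exposure period, cycling
# right -> down -> left -> up in 0.5-pixel (sub-pixel) steps.
from itertools import cycle, islice

SHIFTS = {"right": (0.5, 0.0), "down": (0.0, -0.5),
          "left": (-0.5, 0.0), "up": (0.0, 0.5)}

def optical_path_positions(num_exposure_periods: int):
    x = y = 0.0
    yield (x, y)  # first exposure period: reference position
    for name in islice(cycle(["right", "down", "left", "up"]),
                       num_exposure_periods - 1):
        dx, dy = SHIFTS[name]
        x, y = x + dx, y + dy
        yield (x, y)

print(list(optical_path_positions(5)))
# [(0.0, 0.0), (0.5, 0.0), (0.5, -0.5), (0.0, -0.5), (0.0, 0.0)]
# -> after four movements the path is back at the reference position.
```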

Meanwhile, a sub-pixel may be larger than zero pixels and smaller than one pixel. For example, the sub-pixels may have a size of 0.5 pixels, and may also have a size of 1/3 pixels. The design of the sub-pixels may be varied by those skilled in the art.

Fig. 7 and 8 are views for describing the effect of moving the image frame input to the image sensor according to the tilt control of the IR filter. Fig. 7 is a view showing a simulation result of a moving distance according to an inclination angle under the condition that the thickness of the IR filter is 0.21mm and the refractive index of the IR filter is 1.5.

Referring to fig. 7 and equation 1 below, the tilt angle θ1 of the IR filter 318 and the moving distance Δx may have the following relationship.

[Equation 1]

Δx = d · sin(θ1 − θ2) / cos θ2

In this case, θ2 can be expressed by equation 2.

[Equation 2]

θ2 = sin⁻¹(sin θ1 / n_g)

Here, θ1 is the tilt angle of the IR filter 318, n_g is the refractive index of the IR filter 318, and d is the thickness of the IR filter 318. For example, referring to equations 1 and 2, in order to move an image frame input to the image sensor by 7 μm, the IR filter 318 may be tilted by 5° to 6°. In this case, the vertical displacement of the IR filter 318 may be about 175 μm to 210 μm.
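The reconstructed equations 1 and 2 can be checked numerically against the figures quoted above (d = 0.21 mm, n_g = 1.5, a shift of about 7 μm at a 5° to 6° tilt); the function below is a sketch of that calculation:

```python
import math

def image_shift_um(theta1_deg: float, n_g: float = 1.5, d_mm: float = 0.21) -> float:
    # Equation 2 (Snell's law): refraction angle inside the IR filter.
    theta1 = math.radians(theta1_deg)
    theta2 = math.asin(math.sin(theta1) / n_g)
    # Equation 1: lateral shift of a ray through a tilted parallel plate.
    shift_mm = d_mm * math.sin(theta1 - theta2) / math.cos(theta2)
    return shift_mm * 1000.0  # mm -> um

for deg in (5.0, 5.5, 6.0):
    print(f"{deg:.1f} deg -> {image_shift_um(deg):.2f} um")
# ~6.1 um at 5 deg, ~7.4 um at 6 deg, consistent with the ~7 um example.
```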

As described above, when the inclination angle of the IR filter 318 is controlled, moving image data can be obtained even without tilting the image sensor 320.

According to an embodiment of the present invention, the tilting part 140 for tilting the inclination angle of the IR filter may include an actuator directly or indirectly connected with the IR filter, and the actuator may include at least one of a Micro Electro Mechanical System (MEMS) device, a Voice Coil Motor (VCM), and a piezoelectric element.

In this case, as described above, a sub-pixel is larger than zero pixels and smaller than one pixel, and very precise control is required to move the input optical signal within this range. In the case of tilting the IR filter using an actuator, the tilt angle of the tilted IR filter and the shift value of the input optical signal may differ from the preset values depending on the accuracy of the actuator. In particular, in the case where an error or malfunction occurs during operation of the actuator, or the arrangement of the components of the actuator becomes misaligned due to long-term use, the tilt angle error of the IR filter and the shift value error of the input optical signal may become very large.

According to an embodiment of the present invention, the inclined part 140 may change the optical path of the input optical signal in a software or hardware manner. The example in which the inclined part 140 changes the optical path of the input optical signal by controlling the inclination angle of the IR filter has been described above, but the present invention is not limited thereto.

Fig. 9 is a view for describing a predetermined rule according to an embodiment of the present invention, by which the optical path of an input optical signal is moved by the inclined portion.

In an embodiment of the present invention, the inclined part 140 may move the optical path of the first input optical signal and the optical path of the second input optical signal a plurality of times according to a predetermined rule.

As described above, since the light output section 110 sequentially outputs the first output light signal and the second output light signal within one period, the first input light signal and the second input light signal are also sequentially input to the lens section 120 within one period (i.e., one exposure period).

For example, as shown in fig. 9, when it is assumed that one exposure period includes eight sub-periods, the first input light signal may be input at the first to fourth sub-periods, and the second input light signal may be input at the fifth to eighth sub-periods.

According to a predetermined rule, the first input optical signal may move in a first direction based on a preset movement value for a first period of time, in a second direction perpendicular to the first direction based on the preset movement value for a second period of time, in a third direction perpendicular to the second direction based on the preset movement value for a third period of time, and in a fourth direction perpendicular to the third direction based on the preset movement value for a fourth period of time through the slope part 140.

Referring to fig. 9, for example, the inclined portion 140 may move an optical path of the first input optical signal in a first direction for a first period in units of sub-pixels, which are greater than zero pixels and smaller than one pixel of the image sensor portion 130, in a second direction perpendicular to the first direction for a second period in units of sub-pixels, move the optical path in a third direction perpendicular to the second direction for a third period in units of sub-pixels, move the optical path in a fourth direction perpendicular to the third direction for a fourth period in units of sub-pixels, and may repeatedly perform the corresponding processes. In this specification, a sub-pixel may refer to a unit larger than zero pixels and smaller than one pixel. In this specification, a moving distance in a first period in a first direction, a moving distance in a second direction in a second period, a moving distance in a third direction in a third period, and a moving distance in a fourth direction in a fourth period may be described by a sub-pixel moving value or a moving value. For example, in the case where one pixel includes four (2 × 2) sub-pixels and is shifted in units of sub-pixels, the shift value may be expressed as one sub-pixel, 0.5 pixels, or the like.

According to a predetermined rule, the second input optical signal may be moved in a first direction based on a preset movement value during a first sub-period of one period, in a second direction perpendicular to the first direction based on the preset movement value during a second sub-period of one period, in a third direction perpendicular to the second direction based on the preset movement value during a third sub-period of one period, and in a fourth direction perpendicular to the third direction based on the preset movement value during a fourth sub-period of one period by the inclined part 140.

Referring to fig. 9, for example, the inclined portion 140 may move the optical path of the second input optical signal in the first direction in the fifth sub-period, in the second direction perpendicular to the first direction in the sixth sub-period, in the third direction perpendicular to the second direction in the seventh sub-period, and in the fourth direction perpendicular to the third direction in the eighth sub-period, each movement being in units of sub-pixels, which are greater than zero pixels and smaller than one pixel of the image sensor portion 130, and may repeat the corresponding process in each exposure period. The moving distances in the fifth to eighth sub-periods may likewise be expressed as a sub-pixel movement value or a movement value. For example, in the case where one pixel includes four (2 × 2) sub-pixels and the optical path is moved in units of one sub-pixel, the movement value may be expressed as one sub-pixel, 0.5 pixels, or the like.
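Putting the two schedules together, the following sketch encodes the predetermined rule under the eight-sub-period timing of fig. 9; the index conventions and direction labels are my own assumptions:

```python
DIRECTIONS = ["first", "second", "third", "fourth"]  # mutually perpendicular

def tilt_direction(exposure_period_idx: int, sub_period_idx: int) -> str:
    """Which directional shift the tilting part applies in a sub-period.

    Sub-periods 0-3 carry the first (depth) input signal: its optical path
    is shifted once per exposure period, cycling through four directions.
    Sub-periods 4-7 carry the second (2D) input signal: its optical path
    is shifted at every sub-period within the same exposure period.
    """
    if sub_period_idx < 4:   # first input optical signal
        return DIRECTIONS[exposure_period_idx % 4]
    return DIRECTIONS[(sub_period_idx - 4) % 4]  # second input optical signal

# In exposure period 2, the depth path sits at its "third" offset for all of
# sub-periods 0-3, while the 2D path steps through all four offsets in 4-7.
print(tilt_direction(2, 1), tilt_direction(2, 6))  # third third
```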

Hereinafter, obtaining a depth map and a 2D image by the image control section according to the embodiment of the present invention will be described in detail with reference to fig. 10 to 15.

As described above, the image control section 150 calculates the phase difference between the first output optical signal and the first input optical signal using the first electrical signal received from the image sensor section 130, and calculates the distance between the object and the camera module 100 using the phase difference.

Specifically, the image control section 150 may calculate a phase difference between the first output optical signal and the first input optical signal using information on the charge amount of the first electrical signal.

As described above, four electrical signals may be generated for the frequency of the first output optical signal. Accordingly, the image control part 150 may calculate the phase difference t_d between the first output optical signal and the first input optical signal using the following equation 3.

[Equation 3]

t_d = arctan( (Q3 − Q4) / (Q1 − Q2) )

Here, Q1 to Q4 are the charge amounts of the four electrical signals. Q1 is the amount of charge of the electrical signal corresponding to the reference signal having the same phase as the first output optical signal. Q2 is the amount of charge of the electrical signal corresponding to the reference signal whose phase lags by 180° with respect to the first output optical signal. Q3 is the amount of charge of the electrical signal corresponding to the reference signal whose phase lags by 90° with respect to the first output optical signal. Q4 is the amount of charge of the electrical signal corresponding to the reference signal whose phase lags by 270° with respect to the first output optical signal.

Then, the image control part 150 may calculate the distance between the object and the camera module 100 using the phase difference between the first output optical signal and the first input optical signal. In this case, the image control part 150 may calculate the distance d between the object and the camera module 100 using the following equation 4.

[Equation 4]

d = (c / (4πf)) · t_d

Here, c is the speed of light and f is the frequency of the first output light.
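Written as code, the reconstructed equations 3 and 4 look as follows (a sketch; the charge values are hypothetical, and the charge-to-reference-signal mapping follows the definitions above):

```python
import math

C = 299_792_458  # speed of light in m/s

def phase_difference(q1: float, q2: float, q3: float, q4: float) -> float:
    # Equation 3: t_d = arctan((Q3 - Q4) / (Q1 - Q2)).
    return math.atan2(q3 - q4, q1 - q2)

def distance_m(t_d: float, f_hz: float) -> float:
    # Equation 4: d = c / (4 * pi * f) * t_d.
    return C / (4 * math.pi * f_hz) * t_d

t_d = phase_difference(q1=800, q2=200, q3=600, q4=400)
print(distance_m(t_d, f_hz=80.32e6))  # ~0.096 m for these illustrative charges
```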

According to an embodiment of the present invention, a time-of-flight (ToF) IR image and a depth image may be obtained from the camera module 100.

More specifically, as shown in fig. 10, original images according to four phases may be obtained from the camera module 100 according to the embodiment of the present invention. In this case, the four phases may be 0 °, 90 °, 180 °, and 270 °, and the original image of each phase may be an image including pixel values digitized for each phase, and may also be referred to as a phase image, a phase IR image, or the like.

When the calculation is performed using the four phase images of fig. 10 and equation 5, an amplitude image (a ToF IR image; see fig. 11) may be obtained.

[Equation 5]

Amplitude = (1/2) · √((Raw(x90) − Raw(x270))² + (Raw(x180) − Raw(x0))²)

Here, Raw(x0) may be the data value of each pixel received by the sensor at the 0° phase, Raw(x90) may be the data value of each pixel received by the sensor at the 90° phase, Raw(x180) may be the data value of each pixel received by the sensor at the 180° phase, and Raw(x270) may be the data value of each pixel received by the sensor at the 270° phase.

Alternatively, when the calculation is performed using the four phase images of fig. 10 and equation 6, an intensity image (another ToF IR image) may be obtained.

[ equation 6]

Intensity = |Raw(x90) − Raw(x270)| + |Raw(x180) − Raw(x0)|

In this case, Raw(x0) may be the data value of each pixel received by the sensor at the 0° phase, Raw(x90) may be the data value of each pixel received by the sensor at the 90° phase, Raw(x180) may be the data value of each pixel received by the sensor at the 180° phase, and Raw(x270) may be the data value of each pixel received by the sensor at the 270° phase.

As described above, the ToF IR image is generated by subtracting two of the four phase images from the other two, and background light is removed in this operation. Therefore, only the signal in the wavelength band output by the light source remains in the ToF IR image, which improves the IR sensitivity to the object and significantly reduces noise.
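The reconstructed equations 5 and 6 translate directly into array operations; below is a sketch with NumPy, where raw0 to raw270 stand in for the four phase images of fig. 10:

```python
import numpy as np

def amplitude_image(raw0, raw90, raw180, raw270):
    # Equation 5: per-pixel amplitude of the received IR signal.
    return 0.5 * np.sqrt((raw90 - raw270) ** 2 + (raw180 - raw0) ** 2)

def intensity_image(raw0, raw90, raw180, raw270):
    # Equation 6: per-pixel intensity (confidence) image.
    return np.abs(raw90 - raw270) + np.abs(raw180 - raw0)

rng = np.random.default_rng(0)
raws = [rng.integers(0, 4096, size=(240, 320)).astype(float) for _ in range(4)]
print(amplitude_image(*raws).shape, intensity_image(*raws).shape)  # (240, 320) x2
```

Note how both operations only use differences of phase-image pairs, which is exactly why a constant background term cancels out.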

In the present specification, the ToF IR image may refer to an amplitude image or an intensity image, and the intensity image may also be referred to as a confidence image. As shown in fig. 11, the ToF IR image may be a grayscale image.

Meanwhile, when calculation is performed using the four phase images of fig. 10 and equations 7 and 8, the depth image of fig. 12 may be obtained.

[Equation 7]

Phase = arctan((Raw(x90) − Raw(x270)) / (Raw(x180) − Raw(x0)))

[Equation 8]

Depth = (Phase / 2π) · (c / 2f)
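A per-pixel version of the reconstructed equations 7 and 8 (a sketch under the same Raw(x) naming as above):

```python
import numpy as np

C = 299_792_458  # m/s

def depth_image(raw0, raw90, raw180, raw270, f_hz):
    # Equation 7: per-pixel phase, wrapped into [0, 2*pi).
    phase = np.arctan2(raw90 - raw270, raw180 - raw0) % (2 * np.pi)
    # Equation 8: depth = (phase / 2*pi) * (c / 2*f).
    return phase / (2 * np.pi) * C / (2 * f_hz)

# At f = 80.32 MHz the unambiguous range c / (2f) is about 1.87 m;
# larger distances wrap around, as is usual for phase-based ToF.
```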

In addition, the image control part 150 may obtain a 2D image of the object using the second electric signal received from the image sensor part 130.

As described above, since the inclined part 140 shifts the optical path of the second input optical signal at each sub-period of one period, and the image sensor part 130 generates the second electrical signal in synchronization with the inclined part 140, the second electrical signal corresponding to the second input optical signal may include a plurality of signals. For example, when the second input optical signal is input over four sub-periods, the second electrical signal may include four electrical signals.

The image control part 150 may generate a plurality of sub-frames using the electric signals generated for the sub-periods. For example, in a case where the second electric signal includes four electric signals corresponding to four sub-periods, the image control part 150 may generate four sub-frames.

The depth map generated at one period or a plurality of subframes generated at one period may be used as a depth image or a 2D image. However, when the resolution of the image sensor unit 130 is low, there is a problem in that the resolution of the depth image or the 2D image is reduced. Therefore, the image control part 150 according to an embodiment of the present invention generates one high resolution depth image and one high resolution 2D image by matching the plurality of low resolution depth images and the plurality of low resolution 2D images, respectively.

Specifically, the image control section obtains the depth map of the object using data extracted at a plurality of periods in which the optical path of the first input optical signal is moved a plurality of times according to the predetermined rule. The image control section obtains the depth map of the object by matching, using the first electric signal, a first image obtained from the data extracted at the first period, a second image obtained from the data extracted at the second period, a third image obtained from the data extracted at the third period, and a fourth image obtained from the data extracted at the fourth period.

In addition, the image control section obtains the 2D image using data extracted in one period in which the optical path of the second input optical signal is moved a plurality of times according to the predetermined rule. The image control part obtains the 2D image of the object by matching, using the second electric signal, a first sub-image obtained from the data extracted at the first sub-period, a second sub-image obtained from the data extracted at the second sub-period, a third sub-image obtained from the data extracted at the third sub-period, and a fourth sub-image obtained from the data extracted at the fourth sub-period.

In an embodiment of the present invention, in order to increase the resolution of the depth image and the resolution of the 2D image, a Super Resolution (SR) technique is used. The SR technique is a technique for obtaining a high resolution image from a plurality of low resolution images, and a mathematical model of the SR technique can be expressed as equation 9.

[ equation 9]

y_k = D_k B_k M_k x + n_k

Here, 1 ≤ k ≤ p, where p is the number of low-resolution images; y_k is a low-resolution image (= [y_k,1, y_k,2, ..., y_k,M]^T, where M = N1 × N2); D_k is a down-sampling matrix; B_k is an optical blur matrix; M_k is an image warping matrix; x is the high-resolution image (= [x_1, x_2, ..., x_N]^T, where N = L1·N1 × L2·N2); and n_k is noise. That is, according to the SR technique, the inverse function of the estimated resolution degradation factors is applied to y_k to estimate x. SR techniques can be broadly classified into statistical methods and multi-frame methods, and multi-frame methods can be broadly classified into space division methods and time division methods. However, in the case of obtaining a depth map using the SR technique, since the inverse of M_k in equation 9 generally does not exist, statistical methods may be tried. In the case of the statistical method, however, there is a problem of low efficiency because repeated calculation is required.

For this reason, in the present invention, since the slope part 140 changes the optical paths of the first and second input optical signals by movement values preset according to the predetermined rule in order to obtain the low-resolution images, the inverse function of M_k in equation 9 can be calculated accurately even without using a statistical method, which solves this problem.
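A toy version of the forward model in equation 9 helps make this concrete. The sketch below models M_k as a known integer shift and D_k as 2x down-sampling, and omits B_k and n_k; all names are illustrative:

```python
import numpy as np

def forward_model(x_hr: np.ndarray, dx: int, dy: int) -> np.ndarray:
    # M_k: warp (here a pure shift, since the tilting part's movement
    # values are known by design), then D_k: down-sample by 2 per axis.
    warped = np.roll(np.roll(x_hr, dy, axis=0), dx, axis=1)
    return warped[::2, ::2]

x_hr = np.arange(64, dtype=float).reshape(8, 8)
# Four low-resolution observations whose sub-pixel offsets are known exactly:
lr_stack = [forward_model(x_hr, dx, dy) for dx, dy in [(0, 0), (1, 0), (1, 1), (0, 1)]]
print([im.shape for im in lr_stack])  # four 4x4 low-resolution images
```

Because the shifts are chosen by the camera itself, inverting M_k reduces to undoing known shifts, which is what makes the pixel rearrangement in the following figures possible.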

Fig. 13 and 14 are views for describing the SR technique according to an embodiment of the present invention. In fig. 13 and 14, a process of obtaining a high-resolution depth map using low-resolution depth maps is shown.

Referring to fig. 13, the image control section 150 may extract a plurality of low resolution depth maps using a plurality of low resolution sub-frames generated in one exposure period, i.e., one frame. In addition, the image control part 150 may extract the high resolution depth map by rearranging pixel values of the plurality of low resolution depth maps. In this case, optical paths of the first input optical signal corresponding to the plurality of low resolution depth maps may be different from each other.

For example, the image control part 150 may generate the low-resolution subframes 1-1 to 4-4 using a plurality of electrical signals included in the first electrical signal. The low-resolution subframes 1-1 to 1-4 are generated in the first exposure period, subframes 2-1 to 2-4 in the second exposure period, subframes 3-1 to 3-4 in the third exposure period, and subframes 4-1 to 4-4 in the fourth exposure period. Then, the image control part 150 applies a depth map extraction technique to the low-resolution subframes generated in each exposure period to extract the low-resolution depth maps LRD-1 to LRD-4: LRD-1 is extracted using subframes 1-1 to 1-4, LRD-2 using subframes 2-1 to 2-4, LRD-3 using subframes 3-1 to 3-4, and LRD-4 using subframes 4-1 to 4-4. Finally, the image control part 150 extracts the high-resolution depth map HRD by rearranging the pixel values of the low-resolution depth maps LRD-1 to LRD-4.
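A sketch of this fig. 13 order of operations (depth first, then rearrangement) follows. The four-phase depth relation used here is the conventional indirect-ToF formula and is an assumption on my part, since the document's own depth-extraction equation is defined elsewhere; the modulation frequency and the mapping of periods to grid positions are likewise illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def lr_depth_map(q0, q90, q180, q270, mod_freq_hz):
    """One LR depth map from four phase subframes (conventional 4-phase ToF)."""
    phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
    return C * phase / (4 * np.pi * mod_freq_hz)

def interleave_2x2(tl, tr, br, bl):
    """Rearrange four LR maps on the HR pixel grid (fig. 15 layout)."""
    h, w = tl.shape
    hr = np.empty((2 * h, 2 * w), dtype=tl.dtype)
    hr[0::2, 0::2] = tl   # LRD-1: reference (no shift)
    hr[0::2, 1::2] = tr   # LRD-2: shifted right
    hr[1::2, 1::2] = br   # LRD-3: then shifted down
    hr[1::2, 0::2] = bl   # LRD-4: then shifted left
    return hr

# lrd = [lr_depth_map(*subframes[k], 20e6) for k in range(4)]
# hrd = interleave_2x2(*lrd)
```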

As another example, referring to fig. 14, the image control part 150 may generate a high-resolution sub-frame by rearranging pixel values of a plurality of sub-frames corresponding to one reference signal. In this case, optical paths of the first input optical signal corresponding to the plurality of subframes are different. In addition, the image control part 150 may extract the high resolution depth map using a plurality of high resolution subframes.

For example, in fig. 14, the image control part 150 generates the low-resolution subframes 1-1 to 4-4 using a plurality of electrical signals included in the first electrical signal, where subframes 1-1 to 1-4, 2-1 to 2-4, 3-1 to 3-4, and 4-1 to 4-4 are generated in the first to fourth exposure periods, respectively. In this case, the low-resolution subframes 1-1, 2-1, 3-1, and 4-1 correspond to one reference signal C1 but to different optical paths. Then, the image control part 150 may generate the high-resolution subframe H-1 by rearranging the pixel values of the low-resolution subframes 1-1, 2-1, 3-1, and 4-1. When the high-resolution subframes H-1 to H-4 have been generated by rearranging pixel values in this way, the image control part applies the depth map extraction technique to the high-resolution subframes H-1 to H-4 to extract the high-resolution depth map HRD.
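The fig. 14 ordering can be sketched as below, reusing the illustrative helpers lr_depth_map and interleave_2x2 from the previous sketch; the assumption that reference signals C1 to C4 correspond to the 0°/90°/180°/270° phases is mine, not the document's. The same interleaving step also applies to the 2D sub-images discussed in the next paragraph.

```python
def hr_depth_fig14(subframes, mod_freq_hz=20e6):
    """subframes[p][c]: LR subframe for exposure period p (0..3) and
    reference signal c (C1..C4). Returns an HR depth map."""
    # H-1..H-4: one HR subframe per reference signal, built from the four
    # exposure periods (which correspond to different optical paths).
    hr_subframes = [
        interleave_2x2(subframes[0][c], subframes[1][c],
                       subframes[2][c], subframes[3][c])
        for c in range(4)
    ]
    # Depth extraction now runs once, on the HR subframes.
    return lr_depth_map(*hr_subframes, mod_freq_hz)
```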

Meanwhile, the image control part 150 may obtain one high-resolution 2D image using a plurality of low-resolution sub-frames generated in one exposure period, i.e., one frame. For example, the image control part 150 may generate a plurality of low resolution sub-frames, i.e., sub-images, using a plurality of electrical signals included in the second electrical signal, and obtain one high resolution 2D image by matching the sub-images.

As described above, in the case of a depth image, since one depth map is obtained by matching the depth maps generated at a plurality of periods, the number of depth maps obtained per second is smaller than the image capture rate of the camera module 100. For example, when one depth map is obtained by matching the depth maps of four periods, a camera module operating at 100 fps can obtain 25 high-resolution depth maps per second.

On the other hand, in the case of a 2D image, since one high-resolution 2D image is obtained by matching a plurality of sub-images generated in a single period, the number of 2D images obtained per second matches the image capture rate of the camera module 100. For example, a camera module operating at 100 fps can obtain 100 high-resolution 2D images per second.
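The frame-rate arithmetic from the two preceding paragraphs, written out as a minimal check (the 100 fps figure is the document's own example):

```python
capture_fps = 100
periods_per_depth_map = 4                                  # four matched periods -> one depth map
depth_maps_per_sec = capture_fps / periods_per_depth_map   # 25.0 HR depth maps per second
hr_2d_images_per_sec = capture_fps                         # one HR 2D image per period -> 100
```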

Fig. 15 is a view for describing a process of arranging pixel values according to an embodiment of the present invention.

Here, it is assumed that four low-resolution images, each having a size of 4 × 4, are used to generate one high-resolution image having a size of 8 × 8, so that the high-resolution pixel grid has 8 × 8 pixels, the same number as the pixels of the high-resolution image. In this context, the term low-resolution image covers both the low-resolution subframes and the low-resolution depth maps, and the term high-resolution image covers both the high-resolution subframes and the high-resolution depth map.

In fig. 15, the first to fourth low-resolution images are images captured while the optical path is moved in units of one sub-pixel, where one sub-pixel corresponds to 0.5 pixel. Taking as reference the first low-resolution image, for which the optical path was not moved, the image control part 150 arranges the pixel values of the second to fourth low-resolution images on the high-resolution grid according to the direction in which the optical path was moved.

Specifically, the second low-resolution image is an image shifted one sub-pixel to the right from the first low-resolution image. Therefore, the pixel B of the second low resolution image is disposed at the pixel located on the right side of the pixel a of the first low resolution image.

The third low resolution image is an image shifted one sub-pixel downward from the second low resolution image. Therefore, the pixel C of the third low-resolution image is disposed at the pixel located below the pixel B of the second low-resolution image.

The fourth low-resolution image is an image shifted one sub-pixel to the left from the third low-resolution image. Therefore, the pixel D of the fourth low-resolution image is disposed at the pixel located on the left side of the pixel C of the third low-resolution image.

When the pixel values of the first to fourth low-resolution images are rearranged in the high-resolution pixel grid, a high-resolution image frame is generated, the resolution of the high-resolution image frame being four times the resolution of each of the low-resolution images.
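A concrete run of this fig. 15 arrangement: four 4 × 4 images whose optical paths each differ by one sub-pixel are placed on an 8 × 8 grid. Constant-valued images are used here purely so the resulting A/B/C/D pattern is visible; the slicing layout follows the reference / right / down / left sequence described above.

```python
import numpy as np

img1 = np.full((4, 4), 1.0)   # A values: no shift          -> even rows, even cols
img2 = np.full((4, 4), 2.0)   # B values: shifted right     -> even rows, odd cols
img3 = np.full((4, 4), 3.0)   # C values: then shifted down -> odd rows, odd cols
img4 = np.full((4, 4), 4.0)   # D values: then shifted left -> odd rows, even cols

hr = np.empty((8, 8))
hr[0::2, 0::2] = img1
hr[0::2, 1::2] = img2   # B sits to the right of A
hr[1::2, 1::2] = img3   # C sits below B
hr[1::2, 0::2] = img4   # D sits to the left of C

print(hr[:2, :2])       # [[1. 2.] [4. 3.]] -- the A/B/D/C cell of fig. 15
```

Weighted accumulation, as in the next paragraph, could replace the direct assignments with a weighted sum per grid cell.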

Meanwhile, the image control part 150 may apply weights to the pixel values being arranged. In this case, the weights may be set differently according to the sub-pixel size or the moving direction of the optical path, and may also be set differently for each low-resolution image.

According to one embodiment, the inclined part 140 may shift the input optical signal by controlling the tilt angle of the lens assembly, for example, of the IR filter 318 (see fig. 2) included in the lens assembly, so that data shifted by one sub-pixel can be obtained.
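A rough estimate of the tilt angle needed for such a sub-pixel shift, assuming the IR filter acts as a tilted plane-parallel plate (the standard lateral-displacement formula for a tilted plate); the thickness, refractive index, and pixel pitch below are illustrative values, not taken from the document.

```python
import numpy as np

def lateral_shift(theta_rad, thickness_m, n):
    """Lateral ray displacement through a tilted plate of refractive index n:
    d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin(theta)^2))."""
    s = np.sin(theta_rad)
    return thickness_m * s * (1.0 - np.cos(theta_rad) / np.sqrt(n**2 - s**2))

t = 0.21e-3          # filter thickness: 0.21 mm (assumed)
n = 1.5              # refractive index (assumed)
pixel_pitch = 14e-6  # 14 um sensor pixel (assumed); 0.5-pixel target = 7 um

thetas = np.radians(np.linspace(0.1, 10.0, 1000))
shifts = lateral_shift(thetas, t, n)
theta_needed = thetas[np.argmin(np.abs(shifts - 0.5 * pixel_pitch))]
print(f"tilt for a 0.5-pixel shift: {np.degrees(theta_needed):.2f} deg")
```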

Fig. 16 is a flowchart illustrating a method of generating a depth image and a two-dimensional (2D) image by a camera device according to an embodiment of the present invention.

Referring to fig. 16, the image control section 150 of the camera device 100 according to the embodiment of the present invention may obtain eight sub-images for each period. Of the eight sub-images, four sub-images may be used to generate the depth map and the remaining four sub-images may be used to generate the 2D image.

First, the image control part 150 obtains a first image using the sub-images 1-1 to 1-4 obtained at the first period (S1610). In addition, the image control part 150 obtains the 2D image generated at the first period by matching the sub-images 1-5 to 1-8 obtained at the first period (S1620).

Next, the image control section 150 obtains a second image using the sub-images 2-1 to 2-4 obtained at the second period (S1630). In addition, the image control part 150 obtains the 2D image generated at the second period by matching the sub-images 2-5 to 2-8 obtained at the second period (S1640).

Next, the image control part 150 obtains a third image using the sub-images 3-1 to 3-4 obtained in the third period (S1650). In addition, the image control part 150 obtains the 2D image generated at the third period by matching the sub-images 3-5 to 3-8 obtained at the third period (S1660).

Next, the image control section 150 obtains a fourth image using the sub-images 4-1 to 4-4 obtained at the fourth period (S1670). In addition, the image control part 150 obtains the 2D image generated at the fourth period by matching the sub-images 4-5 to 4-8 obtained at the fourth period (S1680).

The image control part 150 generates one depth map by matching the first image, the second image, the third image, and the fourth image. To this end, the first to fourth images may be matched into one depth map, or one depth image, using the SR technique described above.
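To tie steps S1610 to S1680 together, the following sketch loops over the four periods, splitting each period's eight sub-images between the depth path and the 2D path. extract_depth, match_2d, and match_depth are hypothetical stand-ins for the document's depth-extraction and SR-matching steps.

```python
def process_periods(periods, extract_depth, match_2d, match_depth):
    """periods: four lists of eight sub-images each.
    Returns one HR depth map and four HR 2D images."""
    depth_inputs, images_2d = [], []
    for sub_images in periods:
        depth_inputs.append(extract_depth(sub_images[0:4]))  # S1610/30/50/70
        images_2d.append(match_2d(sub_images[4:8]))          # S1620/40/60/80
    hr_depth = match_depth(depth_inputs)  # SR matching of the four images
    return hr_depth, images_2d
```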

As described above, the first image may be an image obtained from data extracted during a first period in which the optical path of the first input optical signal is moved in the first direction based on a preset movement value; the second image may be an image obtained from data extracted during a second period in which the optical path is moved in the second direction perpendicular to the first direction based on the preset movement value; the third image may be an image obtained from data extracted during a third period in which the optical path is moved in the third direction perpendicular to the second direction based on the preset movement value; and the fourth image may be an image obtained from data extracted during a fourth period in which the optical path is moved in the fourth direction perpendicular to the third direction based on the preset movement value.

In addition, among the sub-images used to generate the 2D image of each period, sub-images 1-5, 2-5, 3-5, and 4-5 may be images obtained from data extracted during a first sub-period in which the optical path of the second input optical signal is moved in the first direction based on a preset movement value; sub-images 1-6, 2-6, 3-6, and 4-6 may be images obtained from data extracted during a second sub-period in which the optical path is moved in the second direction perpendicular to the first direction; sub-images 1-7, 2-7, 3-7, and 4-7 may be images obtained from data extracted during a third sub-period in which the optical path is moved in the third direction perpendicular to the second direction; and sub-images 1-8, 2-8, 3-8, and 4-8 may be images obtained from data extracted during a fourth sub-period in which the optical path is moved in the fourth direction perpendicular to the third direction.
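The shift schedule implied by the two preceding paragraphs, written as a lookup table; the concrete direction vectors (right, down, left, up, in sub-pixel units) are an illustrative choice consistent with the perpendicularity rule, not values fixed by the document.

```python
SHIFT_RULE = {
    1: (+1, 0),   # first (sub-)period: first direction
    2: (0, +1),   # second: perpendicular to the first
    3: (-1, 0),   # third: perpendicular to the second
    4: (0, -1),   # fourth: perpendicular to the third
}
```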

Although the present invention has been described above mainly with reference to the embodiments, it will be understood by those skilled in the art that the present invention is not limited to these embodiments, which are merely exemplary, and that various modifications and applications not illustrated above are possible without departing from the essential characteristics of the embodiments. For example, the components specifically described in the embodiments may be modified and implemented. In addition, differences related to such modifications and applications should be construed as falling within the scope of the present invention as defined by the appended claims.
