Camera device

Document No.: 621636 · Publication date: 2021-05-07

Reading note: the present technology, "Camera device", was created by 张成河, 朴炷彦 and 朱洋贤 on 2019-09-27. Its main content is as follows: A camera device according to an embodiment of the present invention includes: an optical output unit outputting an output optical signal irradiated to an object; a lens unit converging an input optical signal reflected from the object; an image sensor generating an electrical signal from the input optical signal converged by the lens unit; and an image processing unit extracting depth information of the object using at least one of a time difference and a phase difference between the output optical signal and the input optical signal received by the image sensor, the lens unit including an infrared (IR) filter, a plurality of solid lenses disposed on the IR filter, and a liquid lens disposed on or between the plurality of solid lenses. The camera device includes: a first driving unit controlling movement of the IR filter or the image sensor; and a second driving unit controlling a curvature of the liquid lens, one of the first and second driving units moving an optical path of the input optical signal a plurality of times according to a predetermined rule, and the other of the first and second driving units moving the optical path of the input optical signal according to predetermined control information.

1. A camera device, comprising:

a light output unit that outputs an output light signal to be irradiated to an object;

a lens unit that converges an input optical signal reflected from the object;

an image sensor generating an electrical signal from the input optical signal converged by the lens unit; and

an image processing unit extracting depth information of the object using at least one of a time difference and a phase difference between the output optical signal and the input optical signal received through the image sensor,

wherein the lens unit includes:

an IR filter, i.e., an infrared filter;

a plurality of solid lenses disposed on the IR filter; and

a liquid lens disposed on or between the plurality of solid lenses,

wherein the camera device further comprises:

a first driving unit controlling movement of the IR filter or the image sensor; and

a second driving unit that controls a curvature of the liquid lens,

wherein one of the first drive unit and the second drive unit moves an optical path of the input optical signal a plurality of times according to a predetermined rule, and

the other of the first drive unit and the second drive unit moves the optical path of the input optical signal according to predetermined control information.

2. The camera device according to claim 1, wherein the optical path of the input optical signal is moved by a sub-pixel unit larger than 0 pixel and smaller than 1 pixel of the image sensor in a first direction for a first period, moved by the sub-pixel unit in a second direction perpendicular to the first direction for a second period, moved by the sub-pixel unit in a third direction perpendicular to the second direction for a third period, and moved by the sub-pixel unit in a fourth direction perpendicular to the third direction for a fourth period according to the predetermined rule, and

wherein the predetermined control information includes control information for OIS, i.e., optical image stabilization.

3. The camera device according to claim 2, wherein the second driving unit further moves the optical path of the input optical signal according to control information for AF, i.e., autofocus.

4. The camera device according to claim 2, wherein the control information for the OIS is extracted from at least one of motion information and pose information of the camera device.

5. The camera device according to claim 2, wherein the first driving unit moves the optical path of the input optical signal according to the predetermined rule, and

the second driving unit moves the optical path of the input optical signal according to the control information for OIS.

6. The camera device according to claim 5, wherein the first driving unit controls the IR filter or the image sensor to be regularly tilted at a predetermined angle with respect to a plane perpendicular to an optical axis.

7. The camera device according to claim 2, wherein the first driving unit moves the optical path of the input optical signal according to the control information for OIS, and

the second driving unit moves the optical path of the input optical signal according to the predetermined rule.

8. The camera device according to claim 7, wherein the first driving unit controls the IR filter or the image sensor to move in a direction perpendicular to an optical axis.

9. An image processing method of a camera device, comprising the steps of:

outputting an output optical signal to be irradiated to an object;

moving an optical path of an input optical signal reflected from the object and converged by a lens unit to reach an image sensor; and

extracting depth information of the object using at least one of a time difference and a phase difference between the output optical signal and the input optical signal received through the image sensor, and

wherein the moving step comprises the steps of:

moving the optical path of the input optical signal a plurality of times according to a predetermined rule; and

moving the optical path of the input optical signal according to predetermined control information.

10. The method of claim 9, wherein the predetermined control information includes control information for OIS, i.e., optical image stabilization, extracted from at least one of motion information and pose information of the camera device.

Technical Field

The present invention relates to a camera apparatus capable of extracting depth information.

Background

3D content is applied in numerous fields such as education, manufacturing, autonomous driving, games, and culture, and a depth map is required to generate it. A depth map is information on spatial distance and represents the perspective information of one point of a 2D image with respect to another point.

As methods of acquiring a depth map, a method of projecting IR (infrared) structured light onto an object, a method using a stereo camera, and a TOF (time of flight) method have been used. According to the TOF method, the distance to an object is calculated by measuring the time of flight, that is, the time it takes for emitted light to be reflected back. The biggest advantage of the TOF method is that it provides fast, real-time distance information for 3D space. In addition, the user can obtain accurate distance information without applying a separate algorithm or performing a hardware correction. Furthermore, an accurate depth map can be obtained even when measuring a very close object or a moving object.

Accordingly, attempts have been made to use the TOF method for biometric authentication. For example, it is well known that the shape of the veins spread across the fingers and elsewhere does not change throughout life, from the fetal stage onward, and differs from person to person. Thus, a camera device equipped with a TOF function can be used to recognize vein patterns. To this end, after the fingers are photographed, each finger may be detected by removing the background based on the color and shape of the finger, and the vein pattern of each finger may be extracted from the color information of each detected finger. That is, the average color of a finger, the color of the veins distributed on it, and the color of its wrinkles may differ from one another. For example, the red of the veins distributed on a finger may be weaker than the average color of the finger, and the color of the wrinkles may be darker than the average color of the finger. Using these features, an approximate vein value can be calculated for each pixel, and a vein pattern can be extracted from the calculated results. An individual can then be identified by comparing the extracted vein pattern of each finger with data registered in advance.

However, in order for a camera device equipped with a TOF function to extract vein patterns, it must photograph fingers accurately, at a close distance and with high resolution. In particular, when the camera device is held and operated with one hand while the vein pattern of the other hand is photographed, there is a high possibility of image shake caused by hand trembling.

Disclosure of Invention

Technical problem

It is a technical object of the present invention to provide a camera apparatus capable of extracting a depth map using a TOF method.

Technical scheme

A camera apparatus according to an embodiment of the present invention may include: a light output unit that outputs an output light signal to be irradiated to an object; a lens unit that condenses an input optical signal reflected from an object; an image sensor generating an electrical signal from an input optical signal converged by the lens unit; and an image processing unit extracting depth information of the object using at least one of a time difference and a phase difference between an output optical signal and an input optical signal received through the image sensor, the lens unit including an IR (infrared) filter, a plurality of solid lenses disposed on the IR filter, and a liquid lens disposed on or between the plurality of solid lenses, the camera device further including: a first driving unit controlling movement of the IR filter or the image sensor; and a second driving unit that controls a curvature of the liquid lens, one of the first driving unit and the second driving unit moving an optical path of the input optical signal a plurality of times according to a predetermined rule, and the other of the first driving unit and the second driving unit moving the optical path of the input optical signal according to predetermined control information.

According to the predetermined rule, the optical path of the input optical signal may be shifted by a sub-pixel unit greater than 0 pixels and less than 1 pixel of the image sensor in a first direction for a first period, shifted by the sub-pixel unit in a second direction perpendicular to the first direction for a second period, shifted by the sub-pixel unit in a third direction perpendicular to the second direction for a third period, and shifted by the sub-pixel unit in a fourth direction perpendicular to the third direction for a fourth period, and the predetermined control information may include control information for OIS (optical image stabilization).

The second driving unit may further move the optical path of the input optical signal according to control information for AF (auto focus).

The control information for the OIS may be extracted from at least one of motion information and pose information of the camera apparatus.

The first driving unit may move the optical path of the input optical signal according to a predetermined rule, and the second driving unit may move the optical path of the input optical signal according to control information for OIS.

The first driving unit may control the IR filter or the image sensor to be regularly tilted at a predetermined angle with respect to a plane perpendicular to the optical axis.

The first driving unit may move the optical path of the input optical signal according to the control information for OIS, and the second driving unit may move the optical path of the input optical signal according to a predetermined rule.

The first driving unit may control the IR filter or the image sensor to move in a direction perpendicular to the optical axis.

An image processing method of a camera apparatus according to an embodiment of the present invention may include the steps of: outputting the output optical signal to illuminate an object; moving an optical path of an input optical signal reflected from an object and converged by a lens unit to reach an image sensor; and extracting a depth map of the object using at least one of a time difference and a phase difference between the output optical signal and the input optical signal received through the image sensor, the moving including: moving an optical path of an input optical signal a plurality of times according to a predetermined rule; and moving the optical path of the input optical signal according to predetermined control information.

The predetermined control information may include control information for OIS (optical image stabilization) extracted from at least one of motion information and posture information of the camera apparatus.

Technical effects

The camera apparatus according to the embodiment of the present invention can simultaneously perform the SR function and the OIS function, and thus, can obtain a high resolution and high quality depth map. In particular, since the SR function and the OIS function are performed by separate driving units, each of the SR function and the OIS function can be more precisely performed.

Drawings

Fig. 1 is a block diagram of a camera device according to an embodiment of the present invention.

Fig. 2 is a graph showing the frequency of an output optical signal.

Fig. 3 is a diagram illustrating a process for generating an electrical signal according to an embodiment of the present invention.

Fig. 4 is a diagram illustrating an image sensor according to an embodiment of the present invention.

Fig. 5 shows raw images of four phases obtained from the camera apparatus according to an embodiment of the present invention.

Fig. 6 is an amplitude image obtained from the camera apparatus according to the embodiment of the present invention.

Fig. 7 is a depth image obtained from a camera device according to an embodiment of the present invention.

Fig. 8 is a block diagram of a camera device according to an embodiment of the present invention.

Fig. 9 is a side view of a camera device according to an embodiment of the present invention.

Fig. 10 is a cross-sectional view of a portion of a camera device according to an embodiment of the invention.

Fig. 11 is a sectional view of a portion of a camera device according to another embodiment of the invention.

Fig. 12 is an example of a liquid lens included in a camera apparatus according to an embodiment of the present invention.

Fig. 13 is another example of a liquid lens included in a camera apparatus according to an embodiment of the present invention.

Fig. 14 is a diagram showing that the optical path of the input optical signal is changed by the first driving unit.

Fig. 15 and 16 are diagrams illustrating an SR technique according to an embodiment of the present invention.

Fig. 17 is a diagram illustrating a pixel value shift process according to an embodiment of the present invention.

Figs. 18 and 19 are diagrams illustrating the effect of shifting the image frame input on the image sensor according to tilt control of the IR filter.

Detailed Description

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.

However, the technical idea of the present invention is not limited to some embodiments to be described, but may be implemented in various different forms, and one or more constituent elements may be selectively combined and replaced between the embodiments within the scope of the technical idea of the present invention.

In addition, unless explicitly defined and described otherwise, terms used in the embodiments of the present invention (including technical and scientific terms) may be interpreted as having the meanings generally understood by those of ordinary skill in the art, and the meanings of commonly used terms, such as terms defined in dictionaries, may be interpreted in consideration of their contextual meaning in the related art.

In addition, terms used in the embodiments of the present invention are used to describe the embodiments, and are not intended to limit the present invention.

In this specification, the singular form may include the plural form unless the wording explicitly states otherwise, and a phrase such as "at least one (or more) of A, B and C" may include one or more of all possible combinations of A, B and C.

In addition, terms such as first, second, A, B, (a) and (b) may be used to describe constituent elements of embodiments of the present invention.

These terms are only used to distinguish one element from another element, and are not used to limit the nature, order, or sequence of the elements.

Also, when an element is described as being "connected", "coupled", or "in contact with" another element, this includes not only the case where it is directly connected, coupled, or in contact with that element, but also the case where it is connected, coupled, or in contact with it through yet another element between them.

In addition, when a component is described as being formed or disposed on the "top (upper side) or bottom (lower side)" of another component, this includes both the case where the two components are in direct contact with each other and the case where one or more further components are formed or disposed between them. In addition, the expression "top (upper side) or bottom (lower side)" may include not only the upward direction but also the downward direction with respect to one component.

Fig. 1 is a block diagram of a camera device according to an embodiment of the present invention.

Referring to fig. 1, the camera apparatus 100 includes a light output unit 110, a lens unit 120, an image sensor 130, and an image processing unit 140.

The light output unit 110 generates an output light signal and then irradiates the signal to an object. In this case, the optical output unit 110 may generate and output the output optical signal in the form of a pulse wave or a continuous wave. The continuous wave may be in the form of a sine wave or a square wave. By generating the output optical signal in the form of a pulse wave or a continuous wave, the camera apparatus 100 can detect a phase difference between the output optical signal output from the optical output unit 110 and the input optical signal input to the camera apparatus 100 after being reflected from an object. In this specification, the output light may be light output from the light output unit 110 and input to the object, and the input light may be light output from the light output unit 110, reaching the object, reflected from the object, and then input to the camera apparatus 100. From the perspective of the object, the output light may be incident light, and the input light may be reflected light.

The light output unit 110 irradiates the generated output light signal to an object for a predetermined exposure time. Here, the exposure time refers to one frame period. When a plurality of frames are generated, the set exposure time is repeated. For example, when the camera apparatus 100 photographs an object at 20FPS, the exposure time is 1/20[ sec ]. In addition, when 100 frames are generated, the exposure time may be repeated 100 times.

The optical output unit 110 may generate a plurality of output optical signals having different frequencies. The optical output unit 110 may sequentially generate a plurality of output optical signals having different frequencies a plurality of times. Alternatively, the optical output unit 110 may simultaneously generate a plurality of output optical signals having different frequencies.

Fig. 2 is a graph showing the frequency of an output optical signal. According to an embodiment of the present invention, the light output unit 110 may be controlled to generate an output optical signal having a frequency f1 during the first half of the exposure time and an output optical signal having a frequency f2 during the other half of the exposure time, as shown in fig. 2.

According to another embodiment, the light output unit 110 may control some of the plurality of light emitting diodes to generate an output optical signal having the frequency f1 and control the remaining light emitting diodes to generate an output optical signal having the frequency f2.

To this end, the light output unit 110 may include a light source 112 generating light and a light modulator 114 modulating the light.

First, the light source 112 generates light. The light generated by the light source 112 may be infrared light having a wavelength of 770 to 3000 nm, or visible light having a wavelength of 380 to 770 nm. The light source 112 may use a light emitting diode (LED), and may have a form in which a plurality of LEDs are arranged in a predetermined pattern. In addition, the light source 112 may include an organic light emitting diode (OLED) or a laser diode (LD). Alternatively, the light source 112 may be a VCSEL (vertical cavity surface emitting laser). A VCSEL is a laser diode that converts an electrical signal into an optical signal, and may use a wavelength of about 800 to 1000 nm (e.g., about 850 nm or about 940 nm).

The light source 112 repeatedly blinks (on/off) at predetermined time intervals to generate an output light signal in the form of a pulse wave or a continuous wave. The predetermined time interval may correspond to the frequency of the output optical signal. The blinking of the light source may be controlled by the light modulator 114.

The optical modulator 114 controls the flashing of the optical source 112 such that the optical source 112 produces an output optical signal in the form of a continuous wave or a pulsed wave. The optical modulator 114 may control the optical source 112 to generate the output optical signal in the form of a continuous wave or a pulsed wave by frequency modulation or pulse modulation.

On the other hand, the lens unit 120 condenses an input optical signal reflected from an object and transmits it to the image sensor 130.

The image sensor 130 generates an electrical signal by using an input optical signal condensed by the lens unit 120.

The image sensor 130 may absorb the input optical signal in synchronization with the blinking period of the light output unit 110. In more detail, the image sensor 130 may absorb light of each of the in-phase and out-of-phase of the output light signal output from the light output unit 110. That is, the image sensor 130 may perform the step of absorbing the incident light signal when the light source is turned on and the step of absorbing the incident light signal when the light source is turned off a plurality of times.

Next, the image sensor 130 may generate an electrical signal corresponding to each reference signal by using a plurality of reference signals having different phase differences. The frequency of the reference signal may be set equal to the frequency of the output optical signal output from the optical output unit 110. Accordingly, when the light output unit 110 generates an output light signal having a plurality of frequencies, the image sensor 130 generates an electrical signal using a plurality of reference signals, wherein each reference signal corresponds to each frequency. The electrical signal may include information about the amount of charge or voltage corresponding to each reference signal.

Fig. 3 is a diagram illustrating a process for generating an electrical signal according to an embodiment of the present invention.

As shown in fig. 3, there may be four reference signals (C1 to C4) according to an embodiment of the present invention. The reference signals C1 to C4 may each have the same frequency as the output optical signal but a phase difference of 90 degrees from one another. One of the four reference signals (C1) may have the same phase as the output optical signal. The phase of the input optical signal is delayed by an amount corresponding to the distance the output optical signal travels before being reflected back from the object. The image sensor 130 mixes the input optical signal with each reference signal. The image sensor 130 may then generate an electrical signal corresponding to the hatched portion of fig. 3 for each reference signal.

In another embodiment, when output optical signals are generated at a plurality of frequencies during the exposure time, the image sensor 130 absorbs input optical signals according to the plurality of frequencies. For example, assume that output optical signals are generated at frequencies f1 and f2, and that the plurality of reference signals have a phase difference of 90 degrees from one another. Then, since the input optical signal also has the frequencies f1 and f2, four electrical signals can be generated from the input optical signal of frequency f1 and its four corresponding reference signals, and four more from the input optical signal of frequency f2 and its four corresponding reference signals. Thus, a total of eight electrical signals may be generated.
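As a numerical illustration of this mixing step, the following sketch simulates the four charges produced by correlating a delayed continuous-wave input signal with the four reference signals. It is an assumption for illustration only, not the patented implementation; the frequency, delay, and names are arbitrary.

```python
import numpy as np

# Illustrative simulation: correlate a delayed CW input signal with four
# reference signals (phases 0/180/90/270 degrees) to obtain charges Q1..Q4.
f = 20e6                                              # modulation frequency [Hz] (assumed)
t = np.linspace(0.0, 1e-4, 100_000, endpoint=False)  # exposure window [s]
t_d = 5e-9                                            # assumed round-trip delay [s]

inp = 0.5 * (1 + np.sin(2 * np.pi * f * (t - t_d)))  # delayed input light signal

Q = {}
for name, deg in (("Q1", 0), ("Q2", 180), ("Q3", 90), ("Q4", 270)):
    ref = 0.5 * (1 + np.sin(2 * np.pi * f * t - np.radians(deg)))
    Q[name] = np.sum(inp * ref)  # mixing = integrating the product over the exposure
```

With a single frequency this yields four electrical signals; running the same loop for a second frequency f2 would yield the eight signals described above.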

The image sensor 130 may be configured in a structure in which a plurality of pixels are arranged in a grid form. The image sensor 130 may be a CMOS (complementary metal oxide semiconductor) image sensor or may be a CCD (charge coupled device) image sensor. In addition, the image sensor 130 may include a ToF sensor that receives infrared light reflected from an object and measures a distance using a time difference or a phase difference.

Fig. 4 is a diagram illustrating an image sensor according to an embodiment of the present invention. For example, in the case of the image sensor 130 having a resolution of 320 × 240 as shown in fig. 4, 76800 pixels are arranged in a grid form. In this case, as shown by the hatched portion in fig. 4, a constant interval may be formed between a plurality of pixels. In the embodiment of the present invention, one pixel is described as including a predetermined interval adjacent to the pixel.

According to an embodiment of the present invention, each pixel 132 may include a first light receiving unit 132-1 having a first photodiode and a first transistor, and a second light receiving unit 132-2 having a second photodiode and a second transistor.

The first light receiving unit 132-1 receives the input light signal with the same phase as the waveform of the output light. That is, when the light source is turned on, the first photodiode is turned on to absorb the input optical signal. When the light source is turned off, the first photodiode is turned off to stop absorbing the input light. The first photodiode converts the absorbed input optical signal into a current and transmits it to the first transistor. The first transistor converts the received current into an electric signal and outputs it.

The second light receiving unit 132-2 receives the input light signal in a phase opposite to the waveform of the output light. That is, when the light source is turned on, the second photodiode is turned off and stops absorbing the input light. When the light source is turned off, the second photodiode is turned on to absorb the input optical signal. The second photodiode converts the absorbed input optical signal into a current and transfers it to the second transistor. The second transistor converts the received current into an electrical signal.

Accordingly, the first light receiving unit 132-1 may be referred to as an in-phase receiving unit, and the second light receiving unit 132-2 may be referred to as an out-of-phase receiving unit. When the first light receiving unit 132-1 and the second light receiving unit 132-2 are activated with such a time difference, the amount of received light differs according to the distance to the object. For example, when the object is directly in front of the camera apparatus 100 (i.e., when the distance is 0), the time taken for light to return after being output from the light output unit 110 is 0, so the blinking period of the light source coincides exactly with the light receiving period. Therefore, only the first light receiving unit 132-1 receives light, and the second light receiving unit 132-2 does not. As another example, if the object is located at some distance from the camera apparatus 100, it takes time for light output from the light output unit 110 to be reflected back from the object, so the blinking period of the light source differs from the light receiving period. Therefore, a difference arises between the amounts of light received by the first light receiving unit 132-1 and the second light receiving unit 132-2. That is, the distance to the object may be calculated from the difference between the amounts of light input to the first light receiving unit 132-1 and the second light receiving unit 132-2.

Referring back to fig. 1, the image processing unit 140 calculates the phase difference between the output light and the input light using the electrical signals received from the image sensor 130, and calculates the distance between the object and the camera apparatus 100 using the phase difference.

Specifically, the image processing unit 140 may calculate a phase difference between the output light and the input light using information about the charge amount of the electric signal.

As described above, four electrical signals may be generated for each frequency of the output optical signal. Accordingly, the image processing unit 140 may calculate the phase difference (t_d) between the output optical signal and the input optical signal using the following equation 1.

[Equation 1]

$$t_d = \arctan\left(\frac{Q_3 - Q_4}{Q_1 - Q_2}\right)$$

where Q_1 to Q_4 are the charge amounts of the four electrical signals. Q_1 is the charge amount of the electrical signal corresponding to the reference signal having the same phase as the output optical signal. Q_2 is the charge amount of the electrical signal corresponding to the reference signal whose phase lags the output optical signal by 180 degrees. Q_3 is the charge amount of the electrical signal corresponding to the reference signal whose phase lags the output optical signal by 90 degrees. Q_4 is the charge amount of the electrical signal corresponding to the reference signal whose phase lags the output optical signal by 270 degrees.

Then, the image processing unit 140 may calculate the distance between the object and the camera apparatus 100 by using the phase difference between the output optical signal and the input optical signal. In this case, the image processing unit 140 may calculate the distance (d) between the object and the camera apparatus 100 using the following equation 2.

[Equation 2]

$$d = \frac{c}{2f} \cdot \frac{t_d}{2\pi}$$

where c is the speed of light and f is the frequency of the output light.
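Equations 1 and 2 translate directly into code. The sketch below is illustrative (the function name and the arctan2 form, which resolves the quadrant, are our own choices) and recovers the phase difference and distance from the four charge amounts:

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def phase_and_distance(Q1, Q2, Q3, Q4, f):
    """Equation 1: phase difference t_d from the four charge amounts;
    equation 2: distance d from t_d and the modulation frequency f [Hz]."""
    t_d = np.arctan2(Q3 - Q4, Q1 - Q2)             # phase difference [rad]
    d = (C_LIGHT / (2 * f)) * (t_d / (2 * np.pi))  # distance to the object [m]
    return t_d, d
```

Continuing the sketch above, phase_and_distance(Q["Q1"], Q["Q2"], Q["Q3"], Q["Q4"], f) returns t_d ≈ 0.63 rad and d ≈ 0.75 m, matching the assumed 5 ns round-trip delay.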

According to an embodiment of the present invention, a ToF IR image and a depth image may be obtained from the camera device 100. Accordingly, a camera device according to an embodiment of the present invention may be referred to as a ToF camera device or a ToF camera module.

In this regard, in more detail, as shown in fig. 5, raw images of four phases may be obtained from the camera apparatus 100 according to an embodiment of the present invention. Here, the four phases may be 0°, 90°, 180°, and 270°, and the raw image for each phase may be an image composed of digitized pixel values for that phase; the terms phase image and phase IR image may be used interchangeably with it.

If the four phase images of fig. 5 are calculated using equation 3, the amplitude image, which is the ToF IR image of fig. 6, can be obtained.

[Equation 3]

$$\mathrm{Amplitude} = \frac{1}{2}\sqrt{\big(\mathrm{Raw}(x_{90}) - \mathrm{Raw}(x_{270})\big)^{2} + \big(\mathrm{Raw}(x_{180}) - \mathrm{Raw}(x_{0})\big)^{2}}$$

where Raw(x_0) may be the per-pixel data value received by the sensor at phase 0°, Raw(x_90) the per-pixel data value received at phase 90°, Raw(x_180) the per-pixel data value received at phase 180°, and Raw(x_270) the per-pixel data value received at phase 270°.

Alternatively, if the four phase images of fig. 5 are calculated using equation 4, an intensity image, which is another ToF IR image, may be obtained.

[Equation 4]

$$\mathrm{Intensity} = \left|\mathrm{Raw}(x_{90}) - \mathrm{Raw}(x_{270})\right| + \left|\mathrm{Raw}(x_{180}) - \mathrm{Raw}(x_{0})\right|$$

where Raw(x_0) to Raw(x_270) are defined as in equation 3.

As described above, the ToF IR image is an image generated by a process of subtracting two of the four phase images from each other, and in this process, the background light can be removed. Therefore, only the signal in the wavelength band output from the light source remains in the ToF IR image, thereby improving the IR sensitivity of the object and significantly reducing noise.

In this specification, a ToF IR image may refer to an amplitude image or an intensity image, and the term intensity image may be used interchangeably with confidence image. As shown in fig. 6, the ToF IR image may be a gray image.
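As a sketch of how equations 3 and 4 apply per pixel (the array names are assumptions), the two ToF IR images can be computed as follows:

```python
import numpy as np

def tof_ir_images(raw0, raw90, raw180, raw270):
    """Equation 3 (amplitude) and equation 4 (intensity) ToF IR images.
    Inputs are per-pixel raw arrays captured at phases 0/90/180/270 degrees."""
    amplitude = 0.5 * np.sqrt((raw90 - raw270) ** 2 + (raw180 - raw0) ** 2)
    intensity = np.abs(raw90 - raw270) + np.abs(raw180 - raw0)
    return amplitude, intensity
```

Because both expressions difference two opposite-phase images, the constant background light cancels, which is exactly the background-removal property described above.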

On the other hand, if the four phase images of fig. 5 are calculated using equations 5 and 6, the depth image of fig. 7 can also be obtained.

[Equation 5]

$$\mathrm{Phase} = \arctan\left(\frac{\mathrm{Raw}(x_{90}) - \mathrm{Raw}(x_{270})}{\mathrm{Raw}(x_{0}) - \mathrm{Raw}(x_{180})}\right)$$

[Equation 6]

$$\mathrm{Depth} = \frac{\mathrm{Phase}}{2\pi} \cdot \frac{c}{2f}$$

On the other hand, in the embodiment of the present invention, in order to improve the resolution of the depth image, a Super Resolution (SR) technique is used. The SR technique is a technique for obtaining a high resolution image from a plurality of low resolution images, and a mathematical model of the SR technique can be expressed as equation 7.

[Equation 7]

$$y_k = D_k B_k M_k x + n_k$$

where 1 ≤ k ≤ p, p is the number of low-resolution images, y_k is a low-resolution image (= [y_{k,1}, y_{k,2}, ..., y_{k,M}]^T, where M = N_1 × N_2), D_k is a down-sampling matrix, B_k is an optical blur matrix, M_k is an image warping matrix, x is the high-resolution image (= [x_1, x_2, ..., x_N]^T, where N = L_1 N_1 × L_2 N_2), and n_k represents noise. That is, according to the SR technique, x is estimated by applying the inverse functions of the estimated resolution-degradation elements to y_k. SR techniques can be broadly classified into statistical schemes and multi-frame schemes, and multi-frame schemes can be further classified into space-division schemes and time-division schemes. When a depth image is acquired using the SR technique, the inverse of M_k in equation 7 does not exist, so a statistical scheme may be attempted. However, the statistical scheme requires an iterative calculation process and is therefore inefficient.
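To make the degradation model concrete, the following sketch simulates one low-resolution observation y_k from a high-resolution image x. The operators are simple stand-ins chosen for illustration (a pure sub-pixel translation for M_k, a Gaussian for B_k, decimation for D_k); the patent does not prescribe these particular forms.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe(x, dx, dy, scale=2, blur_sigma=0.5, noise_std=0.01, seed=0):
    """One observation y_k = D_k B_k M_k x + n_k with stand-in operators:
    M_k: image warp (translation by (dx, dy) pixels), B_k: optical blur,
    D_k: down-sampling by 'scale', n_k: additive sensor noise."""
    rng = np.random.default_rng(seed)
    warped = shift(x, (dy, dx), order=1, mode="nearest")       # M_k x
    blurred = gaussian_filter(warped, blur_sigma)              # B_k M_k x
    down = blurred[::scale, ::scale]                           # D_k B_k M_k x
    return down + noise_std * rng.standard_normal(down.shape)  # + n_k
```

Generating p such observations with different (dx, dy) gives the y_1, ..., y_p from which the SR technique estimates x.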

To apply the SR technique to extract the depth map, the image processing unit 140 generates a plurality of low resolution sub-frames using the electrical signal received from the image sensor 130, and then extracts a plurality of low resolution depth maps using the plurality of low resolution sub-frames. In addition, a high resolution depth map may be extracted by reconstructing pixel values of a plurality of low resolution depth maps.

Here, high resolution is a relative meaning indicating a higher resolution than low resolution.

Here, a subframe may refer to image data generated from an electrical signal corresponding to one reference signal during one exposure time. For example, when electrical signals are generated from eight reference signals in a first exposure time, i.e., in one image frame, eight subframes may be generated, and one start frame may additionally be generated. In this specification, the term subframe may be used interchangeably with image data, subframe image data, and the like.

Alternatively, in order to apply the SR technique according to an embodiment of the present invention to the extraction of the depth map, the image processing unit 140 may generate a plurality of low resolution sub-frames using the electrical signal received from the image sensor 130, and then generate a plurality of high resolution sub-frames by reconstructing pixel values of the plurality of low resolution sub-frames. Also, a high resolution depth map may be extracted by using the high resolution subframes.

For this purpose, a pixel shift technique may be used. That is, after several pieces of image data shifted by a sub-pixel are acquired for each subframe using the pixel shift technique, a plurality of pieces of high-resolution subframe image data may be obtained by applying the SR technique to each subframe, and a high-resolution depth image may be extracted using the obtained data.

On the other hand, when the camera apparatus 100 according to an embodiment of the present invention is applied to an application requiring high-quality image capture, for example, an application requiring a precise image (such as biometric authentication) or an application in which the user manipulates the camera apparatus 100 and photographs with only one hand, a technique for preventing or correcting image shake caused by hand trembling is also required. A technique for preventing or correcting image shake may be referred to as an OIS (optical image stabilization) technique. In OIS technology, when the optical axis is referred to as the Z axis, image shake can be prevented or corrected by moving structural components (e.g., lenses) in the camera apparatus 100 in the X-axis and Y-axis directions perpendicular to the optical axis.

In order for the camera apparatus 100 to have the SR function and the OIS function, the camera apparatus 100 according to an embodiment of the present invention may further include a driving unit for moving the internal structure.

Fig. 8 is a block diagram of a camera apparatus according to an embodiment of the present invention, fig. 9 is a side view of the camera apparatus according to the embodiment of the present invention, fig. 10 is a sectional view of a portion of the camera apparatus according to the embodiment of the present invention, and fig. 11 is a sectional view of a portion of the camera apparatus according to another embodiment of the present invention. Here, for convenience of description, a repetitive description of the same contents as those of fig. 1 to 7 is omitted.

Referring to fig. 8, the camera apparatus 100 according to the embodiment of the present invention further includes a first driving unit 150 and a second driving unit 160.

Referring to fig. 8 to 11, the image sensor unit 130 may be disposed on the printed circuit board 900, and the image processing unit 140 may be implemented in the printed circuit board 900. The transmitting part (Tx), i.e., the light output unit 110, may be disposed on one side of the receiving part (Rx) on the printed circuit board 900.

Figs. 10 and 11 are sectional views of the receiving part (Rx) of the camera apparatus 100. Referring to them, the lens unit 120 includes an IR (infrared) filter 122, a plurality of solid lenses 124 disposed on the IR filter, and a liquid lens 126 disposed on the plurality of solid lenses 124 or between the plurality of solid lenses 124. The arrangement in which the liquid lens 126-1 is disposed on the plurality of solid lenses 124 may be referred to as an add-on arrangement, and the arrangement in which the liquid lens 126-2 is disposed between the plurality of solid lenses 124 may be referred to as an add-in arrangement. In the add-on arrangement, the liquid lens 126-1 may be supported by a shaper (not shown) outside the lens unit 120 and may be tilted.

The liquid lens 126 may be the membrane-type liquid lens shown in fig. 12 or the Y-lens-type liquid lens shown in fig. 13, but is not limited thereto, and may be any variable lens whose shape changes according to an applied voltage. For example, as shown in fig. 12, the liquid lens 126 may be a membrane filled with liquid, and the shape of the membrane may become convex, flat, or concave depending on the voltage applied to a ring 1002 around the edge of the liquid-filled membrane 1000. As another example, as shown in fig. 13, the liquid lens 126 may include two liquids with different properties (e.g., a conductive liquid and a non-conductive liquid), and an interface 1100 may be formed between the two liquids, the curvature and slope of which change according to the applied voltage.

The plurality of solid lenses 124 and liquid lenses 126 may be aligned with respect to a central axis to form an optical system. Here, the central axis may be the same as the optical axis of the optical system, and may be referred to as a Z axis in this specification.

The lens unit 120 may further include a lens barrel 128, and a space capable of accommodating at least a part of the lens may be provided inside the lens barrel 128. The lens barrel 128 may be rotationally coupled to one or more lenses, but this is exemplary and may be coupled in other ways, such as a method using an adhesive (e.g., an adhesive resin such as epoxy).

A lens holder (not shown) may be coupled to the lens barrel 128 to support it, and may be coupled to a printed circuit board (not shown) on which the image sensor 130 is mounted. A space to which the IR filter 122 can be attached may be formed below the lens barrel 128 by the lens holder. A spiral pattern may be formed on the inner circumferential surface of the lens holder, and the lens holder may be rotationally coupled to the lens barrel 128, which has a spiral pattern formed on its outer circumferential surface. However, this is exemplary, and the lens holder and the lens barrel 128 may be coupled by an adhesive, or the lens holder and the lens barrel 128 may be integrally formed.

However, this is merely an example, and the lens barrel and lens holder of the lens unit 120 may have any of various structures capable of condensing an input optical signal incident on the camera apparatus 100 and transmitting it to the image sensor 130.

According to an embodiment of the present invention, the first driving unit 150 controls the movement of the IR filter 122 or the image sensor 130, and the second driving unit 160 controls the curvature of the liquid lens 126. Here, the first driving unit 150 may include an actuator directly or indirectly connected to the IR filter 122 or the image sensor 130, and the actuator may include at least one of a MEMS (micro electro mechanical system), a VCM (voice coil motor), and a piezoelectric element. In addition, the second driving unit 160 is directly or indirectly connected to the liquid lens 126, and the second driving unit 160 may control the curvature of the liquid lens 126 by directly applying a voltage to the liquid lens 126 or controlling a voltage applied to the liquid lens 126.

The optical path of the input optical signal may be moved a plurality of times by one of the first and second driving units 150 and 160 according to a predetermined rule, and the optical path of the input optical signal may be moved by the other of the first and second driving units 150 and 160 according to predetermined control information.

When the optical path of the input optical signal is moved a plurality of times according to a predetermined rule, the SR function may be performed using the moved optical path. In addition, when the optical path of the input optical signal is moved according to predetermined control information, the OIS function may be performed using the moved optical path. For example, the predetermined control information may include control information for OIS extracted from motion information, posture information, and the like of the camera apparatus 100.

Hereinafter, an embodiment in which the SR function is performed by the first driving unit 150 and the OIS function is performed by the second driving unit 160 will be described first.

As described above, the camera apparatus 100 according to the embodiment of the present invention may perform the SR technique using the pixel shifting technique.

For pixel shifting, the first driving unit 150 may tilt the IR filter 122 or the image sensor 130. That is, the first driving unit 150 may tilt the IR filter 122 or the image sensor 130 so that it has a predetermined inclination with respect to the XY plane, the plane perpendicular to the optical axis (Z axis). Accordingly, the first driving unit 150 may change the optical path of at least one input optical signal in units of sub-pixels of the image sensor 130. Here, a sub-pixel may be a unit larger than 0 pixels and smaller than 1 pixel.

The first driving unit 150 changes the optical path of at least one input optical signal for each image frame. As described above, one image frame may be generated per exposure time. Therefore, when one exposure time ends, the first driving unit 150 changes the optical path of at least one of the output optical signal and the input optical signal.

The first driving unit 150 shifts the optical path of at least one of the output optical signal and the input optical signal by a sub-pixel unit with respect to the image sensor 130. At this time, the first driving unit 150 shifts the optical path of at least one input optical signal in one of the up, down, left, and right directions relative to the current optical path.

Fig. 14 is a diagram showing that the optical path of the input optical signal is changed by the first driving unit.

In fig. 14 (a), a portion indicated by a solid line indicates a current optical path of the input optical signal, and a portion indicated by a broken line indicates a changed optical path. When the exposure time corresponding to the current optical path ends, the first driving unit 150 may change the optical path of the input optical signal as indicated by a dotted line. Then, the optical path of the input optical signal is shifted from the current optical path by the sub-pixels. For example, as shown in (a) of fig. 14, when the first driving unit 150 moves the current optical path to the right by 0.173 degrees, the input optical signal incident on the image sensor 130 may move to the right by 0.5 pixels (sub-pixels).

According to an embodiment of the present invention, the first driving unit 150 may change the optical path of the input optical signal clockwise from the reference position. For example, as shown in (b) of fig. 14, after the first exposure time is ended, the first driving unit 150 moves the optical path of the input optical signal to the right by 0.5 pixels in the second exposure time based on the image sensor 130. In addition, the first driving unit 150 moves the optical path of the input optical signal downward by 0.5 pixels in the third exposure time based on the image sensor 130. In addition, the first driving unit 150 moves the optical path of the input optical signal to the left by 0.5 pixels in the fourth exposure time based on the image sensor 130. In addition, the first driving unit 150 moves the optical path of the input optical signal upward by 0.5 pixels in the fifth exposure time based on the image sensor 130. That is, the first driving unit 150 may move the optical path of the input optical signal to the initial position for four exposure periods. When the optical path of the output optical signal is moved, it can be applied in the same manner, and a detailed description thereof will be omitted. The clockwise direction of the change pattern of the optical path is merely an example, and may be the counterclockwise direction.
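The clockwise rule can be expressed as a fixed schedule. The sketch below is illustrative; the names and the 0.5-pixel value follow the example above.

```python
# Clockwise shift rule: right, down, left, up, each by 0.5 pixel, so the
# optical path returns to its initial position every four exposure times.
SUBPIXEL = 0.5  # shift value in pixels (larger than 0, smaller than 1)
SCHEDULE = [(SUBPIXEL, 0.0),   # after the 1st exposure: right
            (0.0, SUBPIXEL),   # after the 2nd exposure: down
            (-SUBPIXEL, 0.0),  # after the 3rd exposure: left
            (0.0, -SUBPIXEL)]  # after the 4th exposure: up

def next_shift(exposure_index):
    """(dx, dy) shift in pixels to apply when the given exposure ends."""
    return SCHEDULE[exposure_index % 4]
```

Negating every entry yields the counterclockwise variant mentioned above.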

On the other hand, the sub-pixels may be larger than 0 pixels and smaller than 1 pixel. For example, the sub-pixels may have a size of 0.5 pixels, or may have a size of 1/3 pixels. The size of the sub-pixels may be varied by those skilled in the art.

Fig. 15 and 16 are diagrams illustrating an SR technique according to an embodiment of the present invention.

Referring to fig. 15, the image processing unit 140 may extract a plurality of low resolution depth maps using a plurality of low resolution sub-frames generated in the same frame (i.e., at the same exposure time). In addition, the image processing unit 140 may extract a high resolution depth map by reconstructing pixel values of a plurality of low resolution depth maps. Here, optical paths of the output optical signal or the input optical signal corresponding to the plurality of low resolution depth maps may be different from each other.

For example, the image processing unit 140 may generate the low resolution sub-frames 1-1 to 4-8 using a plurality of electrical signals. The low resolution subframes 1-1 to 1-8 are low resolution subframes generated in the first exposure time. The low resolution subframes 2-1 to 2-8 are low resolution subframes generated in the second exposure time. The low resolution subframes 3-1 to 3-8 are low resolution subframes generated in the third exposure time. The low resolution subframes 4-1 to 4-8 are low resolution subframes generated in the fourth exposure time. Then, the image processing unit 140 extracts the low resolution depth maps LRD-1 to LRD-4 by applying the depth map extraction technique to the plurality of low resolution sub-frames generated in each exposure time. The low resolution depth map LRD-1 is a low resolution depth map extracted using subframes 1-1 to 1-8. The low resolution depth map LRD-2 is a low resolution depth map extracted using subframes 2-1 to 2-8. The low resolution depth map LRD-3 is a low resolution depth map extracted using subframes 3-1 to 3-8. The low resolution depth map LRD-4 is a low resolution depth map extracted using subframes 4-1 to 4-8. In addition, the image processing unit 140 reconstructs pixel values of the low resolution depth maps LRD-1 to LRD-4 to extract the high resolution depth map HRD.

Alternatively, as described above, the image processing unit 140 may generate the high-resolution subframe by reconstructing pixel values of a plurality of subframes corresponding to the same reference signal. In this case, the plurality of subframes have optical paths different from the optical paths of the corresponding output optical signals or input optical signals. In addition, the image processing unit 140 may extract a high resolution depth map by using a plurality of high resolution subframes.

For example, in fig. 16, the image processing unit 140 generates the low resolution sub-frames 1-1 to 4-8 using a plurality of electrical signals. The low resolution subframes 1-1 to 1-8 are low resolution subframes generated in the first exposure time. The low resolution subframes 2-1 to 2-8 are low resolution subframes generated in the second exposure time. The low resolution subframes 3-1 to 3-8 are low resolution subframes generated in the third exposure time. The low resolution subframes 4-1 to 4-8 are low resolution subframes generated in the fourth exposure time. Here, the low resolution subframes 1-1, 2-1, 3-1, and 4-1 correspond to the same reference signal C1 but to different optical paths. Then, the image processing unit 140 may generate the high resolution sub-frame H-1 by reconstructing pixel values of the low resolution sub-frames 1-1, 2-1, 3-1, and 4-1. When the high-resolution subframes H-1 to H-8 have been generated by pixel value reconstruction, the image processing unit 140 may extract the high-resolution depth map HRD by applying the depth map extraction technique to the high-resolution subframes H-1 to H-8.

Fig. 17 is a diagram illustrating a pixel value shift process according to an embodiment of the present invention.

Here, it is assumed that one 8 × 8 high-resolution image is generated using four 4 × 4 low-resolution images. In this case, the high-resolution pixel grid has 8 × 8 pixels, the same as the pixels of the high-resolution image. Here, low-resolution image is meant to include the low-resolution subframes and low-resolution depth maps, and high-resolution image is meant to include the high-resolution subframes and high-resolution depth maps.

In fig. 17, the first to fourth low-resolution images are images captured while shifting the optical path by a sub-pixel unit of 0.5 pixels. To reconstruct the high-resolution image, the image processing unit 140 moves the pixel values of the second to fourth low-resolution images according to the direction in which the optical path was shifted, taking as reference the first low-resolution image, for which the optical path was not shifted.

Specifically, the second low-resolution image is an image in which the sub-pixel is shifted rightward from the first low-resolution image. Therefore, the pixel (B) of the second low resolution image is shifted to the right of each pixel (a) of the first low resolution image.

The third low resolution image is an image in which the sub-pixel is shifted downward from the second low resolution image. Therefore, the pixel (C) of the third low-resolution image is moved to below each pixel (B) of the second low-resolution image.

The fourth low resolution image is an image shifted the sub-pixel to the left from the third low resolution image. Therefore, the pixel (D) of the fourth low-resolution image is shifted to the left of the pixel (C) of the third low-resolution image.

When all the pixel values of the first to fourth low-resolution images are reconstructed in the high-resolution pixel grid, a high-resolution image frame having a resolution 4 times higher than that of the low-resolution images is generated.
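A minimal sketch of this reconstruction, assuming exact 0.5-pixel shifts in the A→B→C→D pattern described above and no weighting, interleaves the four low-resolution images into the high-resolution grid:

```python
import numpy as np

def reconstruct_high_res(lr_a, lr_b, lr_c, lr_d):
    """Interleave four HxW low-resolution images into a 2Hx2W grid.
    lr_a: reference image; lr_b: shifted right; lr_c: shifted right and
    down; lr_d: shifted down (each by 0.5 pixel of the original grid)."""
    h, w = lr_a.shape
    hr = np.empty((2 * h, 2 * w), dtype=float)
    hr[0::2, 0::2] = lr_a  # A pixels at the original grid positions
    hr[0::2, 1::2] = lr_b  # B pixels to the right of each A pixel
    hr[1::2, 1::2] = lr_c  # C pixels below each B pixel
    hr[1::2, 0::2] = lr_d  # D pixels to the left of each C pixel
    return hr  # 4x the pixel count of each low-resolution image
```

The weighted variant described in the next paragraph would multiply each assignment by a per-image weight instead of performing a plain copy.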

On the other hand, the image processing unit 140 may apply weights to the shifted pixel values. In this case, the weight may be differently set according to the size of the sub-pixel or the moving direction of the optical path, and may be differently set for each low resolution image.

According to one embodiment, the first driving unit 150 shifts the input optical signal by controlling the tilt of the IR filter or the image sensor, and thus image data shifted by a sub-pixel can be obtained.

Fig. 18 to 19 are diagrams illustrating a moving effect of an image frame input on an image sensor according to tilt control of an IR filter. Fig. 19 shows a simulation result of a moving distance with respect to a tilt angle under the condition that the thickness of the IR filter is 0.21mm and the refractive index of the IR is 1.5.

Referring to fig. 18 and the following equation 8, the tilt angle (θ_1) of the IR filter 122 and the shift distance may have the following relationship.

[Equation 8]

$$\Delta = d \cdot \frac{\sin(\theta_1 - \theta_2)}{\cos\theta_2}$$

where θ_2 is given by equation 9.

[Equation 9]

$$\theta_2 = \sin^{-1}\!\left(\frac{\sin\theta_1}{n_g}\right)$$

Here, θ_1 is the slope (i.e., tilt angle) of the IR filter 122, n_g is the refractive index of the IR filter 122, and d is the thickness of the IR filter 122. For example, referring to equations 8 and 9, the IR filter 122 may be tilted by about 5 to 6 degrees to shift the image frame input on the image sensor by 7 μm. At this time, the vertical displacement of the IR filter 122 may be about 175 μm to 210 μm.
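Equations 8 and 9 can be checked numerically. The sketch below (the function name is ours; the defaults match the simulation conditions of fig. 19) reproduces the quoted shift:

```python
import numpy as np

def image_shift_um(theta1_deg, n_g=1.5, d_mm=0.21):
    """Lateral image shift caused by tilting a plane-parallel IR filter
    of thickness d_mm [mm] and refractive index n_g by theta1_deg [deg]."""
    theta1 = np.radians(theta1_deg)
    theta2 = np.arcsin(np.sin(theta1) / n_g)                    # equation 9
    shift_mm = d_mm * np.sin(theta1 - theta2) / np.cos(theta2)  # equation 8
    return 1000.0 * shift_mm                                    # micrometers

print(image_shift_um(5.5))  # ≈ 6.7 µm, consistent with ~7 µm at a 5-6° tilt
```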

In an embodiment of the present invention, the first driving unit 150 may move the optical path of the input optical signal a plurality of times according to a predetermined rule. For example, according to the predetermined rule, the first driving unit 150 may move the optical path of the input optical signal in a first direction by a sub-pixel unit, larger than 0 pixels and smaller than 1 pixel of the image sensor 130, during a first period, then by the sub-pixel unit in a second direction perpendicular to the first direction during a second period, then by the sub-pixel unit in a third direction perpendicular to the second direction during a third period, then by the sub-pixel unit in a fourth direction perpendicular to the third direction during a fourth period, and may repeat this process. In this specification, a sub-pixel may denote a unit larger than 0 pixels and smaller than 1 pixel, and the degree of movement in each of the first to fourth directions during each of the first to fourth periods may be expressed as a sub-pixel shift value or shift value. For example, when one pixel consists of 4 sub-pixels in a 2 × 2 arrangement and the path is moved by one sub-pixel unit, the shift value may be expressed as 1 sub-pixel or 0.5 pixels.

In an embodiment of the present invention, the image processing unit 140 may obtain one depth map by reconstructing, using the super-resolution technique, a first image obtained from data extracted in the first period, a second image obtained from data extracted in the second period, a third image obtained from data extracted in the third period, and a fourth image obtained from data extracted in the fourth period. Here, the terms first to fourth periods may be used interchangeably with first to fourth exposure times, and each of the first to fourth images may be used interchangeably with the low-resolution subframes and low-resolution images described above.

For this, the first driving unit 150 may control the IR filter 122 or the image sensor 130 to be regularly inclined at a predetermined angle with respect to a plane (XY plane) perpendicular to the optical axis (Z).

Referring back to fig. 8 to 13, the second driving unit 160 may control the curvature of the liquid lens 126 by using control information for OIS (optical image stabilization). For example, when there is shake of the camera apparatus 100, the optical path of the incident optical signal may be distorted with respect to the optical axis. In this case, the camera apparatus 100 may detect motion information or posture information of the camera apparatus 100 through various sensors (not shown) installed therein, and extract control information for OIS by using the detected motion information or posture information. In addition, the second driving unit 160 may control the curvature of the liquid lens 126 by using control information for OIS, and thus, the optical path of the incident optical signal may be moved to be parallel to the optical axis. For example, as shown in fig. 13, the interface 1100 of the liquid lens 126 may be tilted to the left or right for OIS.
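As a purely hypothetical sketch of this control flow (the patent specifies no algorithm; every name, signal, and gain here is an assumption), shake detected by a motion sensor could be converted into a counteracting interface tilt:

```python
def ois_interface_tilt(gyro_rate_x, gyro_rate_y, dt, gain=1.0):
    """Hypothetical OIS step: integrate angular rates [rad/s] over dt [s]
    into a shake angle and return a counteracting liquid-lens interface
    tilt (x, y) intended to keep the input light parallel to the optical axis."""
    return (-gain * gyro_rate_x * dt, -gain * gyro_rate_y * dt)
```

In practice the gain would map the shake angle to the voltages applied to the liquid lens, which is outside the scope of this sketch.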

In this way, when the first driving unit 150 performs the SR function and the second driving unit 160 performs the OIS function, the first driving unit 150 may be automatically driven according to a predetermined rule set in advance, and the second driving unit 160 may be driven according to feedback information or control information.

On the other hand, in another embodiment of the present invention, the first driving unit 150 may perform an OIS function, and the second driving unit 160 may perform an SR function.

That is, the first driving unit 150 may be driven according to control information for OIS extracted from motion information or posture information of the camera apparatus 100, and thus may control to move the IR filter 122 or the image sensor 130 in directions perpendicular to the optical axis (Z-axis), i.e., X-axis and Y-axis directions. In addition, the second driving unit 160 may control the shape or curvature of the interface of the liquid lens 126 to be changed according to a predetermined rule.

As described above, since the camera apparatus according to the embodiment of the present invention can simultaneously perform the SR function and the OIS function, a depth map having high resolution and high quality can be obtained. In particular, since the SR function and the OIS function are performed by separate driving units, each of the SR function and the OIS function can be more precisely performed.

On the other hand, the camera apparatus 100 according to the embodiment of the present invention may further perform an AF (auto focus) function. For this, the second driving unit 160 may further move the optical path of the input optical signal according to the control information for AF. For example, the second driving unit 160 may control the interface of the liquid lens 126 to be convex or concave in the Z-axis direction according to control information for AF. The function for AF may be performed together with the SR function or the OIS function. For example, when the second driving unit 160 performs the SR function, the second driving unit 160 may control the interface of the liquid lens 126 to be automatically changed according to a predetermined rule while being further moved forward or backward in the Z-axis direction according to control information for AF. In addition, when the second driving unit 160 performs the OIS function, the second driving unit 160 may control the interface of the liquid lens 126 to be inclined in the X-axis direction and the Y-axis direction according to control information for OIS while being further moved forward or backward in the Z-axis direction according to control information for AF.

Although the embodiments have been described above, these are only examples and do not limit the present invention, and those skilled in the art to which the present invention pertains will appreciate that various modifications and applications not shown above can be made within a scope that does not depart from the essential features of the present embodiments. For example, each component specifically illustrated in the embodiments may be modified and implemented. And differences relating to such modifications and applications should be construed as being included in the scope of the present invention as defined in the appended claims.

[ description of reference numerals ]

100: camera device
110: light output unit
120: lens unit
130: image sensor
140: image processing unit
