Camera module

Document No.: 1943006  Publication date: 2021-12-07

Note: This technology, "Camera module," was designed and created by 朱洋贤 and 李昌奕 on 2020-04-27. Its main content is as follows: According to an embodiment of the present invention, a camera module is disclosed, including: a light output unit that outputs a light signal to an object; an optical unit that transmits the light signal reflected from the object; a sensor that receives the light signal transmitted through the optical unit; and a control unit that acquires a depth map of the object using the light signal received by the sensor, wherein the sensor includes an active area in which light receiving elements are arranged and an inactive area other than the active area, the sensor includes a first row area and a second row area in each of which the active areas and the inactive areas are alternately arranged in the row direction, the active areas of the second row area are disposed at positions that do not overlap the active areas of the first row area in the column direction, light reaching an active area of the first row area is controlled by a first shift control to reach an inactive area of the first row area or an inactive area of the second row area, and light reaching an active area of the second row area is controlled by the first shift control to reach an inactive area of the second row area or an inactive area of the first row area.

1. A camera module, comprising:

a light output unit configured to output a light signal to an object;

an optical filter configured to pass a light signal reflected by the object;

a sensor configured to receive the light signal passing through the optical filter; and

a control unit configured to acquire depth information of the object and color information of the object adjacent to the depth information of the object using the light signal received by the sensor,

wherein the optical filter includes a first filter region having a first wavelength band as a pass band and a second filter region having a second wavelength band, different from the first wavelength band, as a pass band,

the sensor comprises a first sensing region for receiving a first signal and a second sensing region for receiving a second signal,

the control unit acquires the color information of the object from the first sensing region and acquires the depth information of the object from the second sensing region,

the first signal is an optical signal passing through the first filter region, and

the second signal is an optical signal passing through the second filter region.

2. The camera module of claim 1, wherein the first filter region surrounds the second filter region, and

the first sensing region surrounds the second sensing region.

3. The camera module according to claim 1, wherein the second sensing region is provided as a plurality of second sensing regions spaced apart from each other.

4. The camera module in accordance with claim 3, wherein adjacent second sensing regions are spaced apart by the same distance in a row direction or a column direction.

5. The camera module of claim 3, wherein each of the second sensing regions comprises a plurality of pixels, at least a portion of the plurality of pixels disposed in contact with each other.

6. The camera module of claim 1, wherein the light output unit comprises a light collection unit configured to output the light signals in a plurality of arrays.

7. The camera module according to claim 1, further comprising a calculation unit configured to output three-dimensional content of the object using the acquired color information of the object and the acquired depth information of the object.

8. The camera module of claim 7, wherein the computing unit comprises:

an image generator configured to generate a plurality of images using the acquired color information of the object and the acquired depth information of the object;

an extractor configured to extract feature points of each of the plurality of images;

a graph generator configured to generate a depth map using the feature points; and

a content generator configured to generate the three-dimensional content by applying the depth map to the plurality of images.

9. The camera module of claim 8, wherein the feature point corresponds to a position of the acquired depth information of the object.

10. The camera module according to claim 1, wherein the sensor includes an active area provided with a light receiving element and an inactive area other than the active area, and

the sensor includes: a first row area in which the active areas and the inactive areas are alternately arranged in a row direction; and a second row area in which the active areas and the inactive areas are alternately arranged in the row direction and the active areas are arranged at positions that do not overlap with the active areas of the first row area in a column direction.

Technical Field

The present invention relates to a camera module for extracting depth information.

Background

Three-dimensional (3D) content is applied in many fields such as education, manufacturing, autonomous driving, gaming, and culture, and depth information (a depth map) is required to acquire 3D content. Depth information is information representing a spatial distance and refers to perspective information of one point relative to another point in a two-dimensional image.

As methods of acquiring depth information, a method of projecting Infrared (IR) structured light onto an object, a method using a stereo camera, a time-of-flight (ToF) method, and the like are being used. According to the ToF method, the distance to an object is calculated using information about the emitted and reflected light. The biggest advantage of the ToF method is that it provides distance information about a 3D space quickly and in real time. Furthermore, accurate distance information can be obtained without the user applying a separate algorithm or performing hardware correction. Furthermore, accurate depth information can be obtained even when measuring a very close object or a moving object.
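To make the ToF principle concrete, the following minimal Python sketch (not part of the patent; the constant and function name are illustrative assumptions) converts a measured round-trip time of the light into a distance by halving the traveled path:

```python
# Illustrative sketch of the direct time-of-flight relation; values are hypothetical.
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(round_trip_time_s: float) -> float:
    """Light travels to the object and back, so the one-way distance is half the path."""
    return C * round_trip_time_s / 2.0

# Example: a round trip of about 6.67 ns corresponds to roughly 1 m.
print(distance_from_round_trip(6.67e-9))  # ~1.0
```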

However, a high processing speed is required to sort and correct the depth information and the color information. Further, there is a problem in that accuracy decreases as the distance from the object increases.

Disclosure of Invention

Technical problem

The present invention is directed to a camera module for extracting depth information using a time of flight (TOF) method.

The present invention is also directed to providing a camera module that generates three-dimensional content at a high processing speed by acquiring distance information of a local area from an image sensor.

The present invention also aims to provide a camera module capable of easily generating a depth map even when the distance increases.

Technical scheme

According to an exemplary embodiment of the present invention, a camera module includes: a light output unit configured to output a light signal to an object; an optical filter configured to pass a light signal reflected by the object; a sensor configured to receive the light signal passing through the optical filter; and a control unit configured to acquire depth information of the object and color information of the object adjacent to the depth information of the object using the light signal received by the sensor, wherein the optical filter includes a first filter region having a first wavelength band as a pass band and a second filter region having a second wavelength band, different from the first wavelength band, as a pass band, the sensor includes a first sensing region for receiving a first signal and a second sensing region for receiving a second signal, the control unit acquires the color information of the object from the first sensing region and acquires the depth information of the object from the second sensing region, the first signal is an optical signal passing through the first filter region, and the second signal is an optical signal passing through the second filter region.

The first filter region may surround the second filter region, and the first sensing region may surround the second sensing region.

The second sensing region may be provided as a plurality of second sensing regions spaced apart from each other.

Adjacent second sensing regions may be spaced apart by the same distance in the row direction or the column direction.

Each of the second sensing regions may include a plurality of pixels, at least a portion of which are disposed in contact with each other.

The light output unit may include a light collection unit configured to output the light signal in the form of a plurality of arrays.

The camera module may further include a calculation unit configured to output three-dimensional content for the object using the acquired color information of the object and the acquired depth information of the object.

The calculation unit may include: an image generator configured to generate a plurality of images using the acquired color information of the object and the acquired depth information of the object; an extractor configured to extract feature points of each of the plurality of images; a graph generator configured to generate a depth map using the feature points; and a content generator configured to generate three-dimensional content by applying the depth map to the plurality of images.

The feature point may correspond to a position of the acquired depth information of the object.

The sensor may include an active area in which the light receiving elements are disposed and an inactive area other than the active area, and the sensor may include a first row area in which the active area and the inactive area are alternately disposed in a row direction and a second row area in which the active area and the inactive area are alternately disposed in the row direction and the active area is disposed at a position that does not overlap with the active area of the first row area in a column direction.

The first sensing region and the second sensing region may overlap the active region.

The width of the second sensing region may vary according to a distance between the object and the light output unit.

Advantageous effects

According to an exemplary embodiment of the present invention, three-dimensional contents may be easily output through distance information of a partial region of an image acquired from an image sensor.

Further, even if the distance is increased, the accuracy of the distance recognition can be improved.

In addition, matching between the color information and the distance information is facilitated, thereby increasing the processing speed of generating the three-dimensional content.

Further, depth information can be acquired with high resolution by shifting the optical path of the incident optical signal without significantly increasing the number of pixels of the sensor.

Further, it is possible to provide a camera module that reduces the amount of processing data by easily calculating depth information.

Drawings

Fig. 1 is a conceptual diagram illustrating a camera module according to an exemplary embodiment;

fig. 2 is a diagram illustrating a light output unit according to an exemplary embodiment;

FIG. 3 is a diagram showing one surface of the object in FIG. 2;

fig. 4 is a graph for describing an influence of a distance of a light output unit on light intensity according to an exemplary embodiment;

FIG. 5 is a graph depicting the frequency of an optical signal in accordance with an exemplary embodiment;

FIG. 6 is a cross-sectional view of a camera module according to an exemplary embodiment;

FIG. 7 illustrates a conceptual diagram of a filter and sensor according to an example embodiment;

fig. 8 is an enlarged view of a portion K in fig. 7;

fig. 9 is an enlarged view of a portion M in fig. 7;

FIG. 10 is a schematic diagram showing a second region of the sensor as a function of distance from the object;

fig. 11 is a top view of a sensor according to a modification;

FIG. 12 is a diagram for describing a process of generating an electrical signal in a sensor according to an exemplary embodiment;

FIG. 13 is a diagram for describing a sensor according to an exemplary embodiment;

fig. 14 to 17 are diagrams for describing various modifications of the sensor;

FIG. 18 shows raw images acquired from a camera module for four phases in accordance with an exemplary embodiment;

fig. 19 illustrates a magnitude image (amplitude image) acquired from a camera module according to an exemplary embodiment;

FIG. 20 shows a depth image acquired from a camera module according to an example embodiment;

fig. 21 illustrates a diagram for describing an operation of obtaining depth information and color information in a camera module according to an exemplary embodiment;

FIG. 22 is a block diagram of a computing unit in accordance with an illustrative embodiment;

fig. 23 to 25 are diagrams for describing an image control method in a camera module according to an exemplary embodiment;

fig. 26 to 28 are diagrams for describing a control method of acquiring high resolution in a camera module according to an exemplary embodiment.

Detailed Description

Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings.

However, the technical spirit of the present invention is not limited to some exemplary embodiments to be described, and may be implemented in various forms. One or more elements of the embodiments may be selectively combined or used instead without departing from the technical spirit of the present invention.

Further, terms (including technical and scientific terms) used herein may be interpreted in the meaning commonly understood by one of ordinary skill in the art, unless otherwise defined. General terms such as terms defined in dictionaries may be understood in consideration of their background meanings in the related art.

In addition, the terminology used herein is not intended to be limiting of the invention, but rather to describe exemplary embodiments.

In the specification, the singular form may also include the plural form unless explicitly stated otherwise. When expressed as "at least one (or one or more) of A, B and C," it may also include one or more of all possible combinations of A, B and C.

In addition, terms such as first, second, A, B, (a) and (b) may be used to describe elements of exemplary embodiments of the invention.

Each term is not used to define the nature, order, sequence, etc. of the corresponding element, but rather is used to distinguish one element from another.

When an element is described as being "connected," "coupled," or "joined" to another element, such description may include the following two cases: the element may be directly connected, coupled, or joined to the other element; and the element may also be "connected," "coupled," or "joined" to the other element through yet another element located between the element and the other element.

Moreover, when an element is described as being formed or disposed "on or below" another element, such description can include the following two cases: two elements may be in direct contact with each other; and one or more other elements may also be interposed between the two elements. Further, when an element is described as being formed "on (or under)" another element, such description may include the following cases: the one element may be formed on the upper side or the lower side with respect to the other element.

A camera module according to an exemplary embodiment to be described below may be used as an optical device or a part of an optical device. First, the optical device may include any one of a cellular phone, a mobile phone, a smart phone, a portable smart device, a digital camera, a notebook computer, a digital broadcasting terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), and a navigation device. However, the type of the optical device is not limited thereto, and any device for capturing an image or a photograph may be included in the optical device.

The optical device may include a body. The body may have a bar shape. Alternatively, the main body may have various structures, such as a sliding type, a folding type, a swing type, and a rotating type, in which two or more sub-bodies are coupled to be relatively movable. The body may include an outer shell (shell, housing or cover) forming an exterior. For example, the main body may include a front case and a rear case. Various electronic components of the optical device may be embedded in a space formed between the front case and the rear case.

The optical device may comprise a display. The display may be disposed on one surface of a body of the optical device. The display may output an image. The display may output images captured by the camera.

The optical device may comprise a camera. The camera may include a time-of-flight (ToF) camera module. The ToF camera module may be disposed on a front surface of a body of the optical device. In this case, the ToF camera module may be used for various types of biometric recognition (e.g., face recognition, iris recognition, and vein recognition of a user) to enable secure authentication of the optical device.

Fig. 1 illustrates a conceptual diagram of a camera module according to an exemplary embodiment.

Referring to fig. 1, a camera module 100 according to an exemplary embodiment may include a light output unit 110, an optical unit 120, a sensor 130, a control unit 140, and a calculation unit 150.

The light output unit 110 may generate light in a desired signal form and irradiate the light to the object O. In particular, the light output unit may be a light emitting module, a light emitting unit, a light emitting assembly or a light emitting device. Specifically, the light output unit 110 may generate a light signal and then irradiate the generated light signal onto the object. In this case, the optical output unit 110 may generate and output an optical signal in the form of a pulse wave or a continuous wave. The continuous wave may be in the form of a sine wave or a square wave. When the light output unit 110 generates the light signal in the form of a pulse wave or a continuous wave, the camera module 100 may use a phase difference or a time difference between the light signal output from the light output unit 110 and an input light signal reflected from a subject and then input to the camera module 100. In this specification, the output light LS1 refers to light that is output from the light output unit 110 and is incident on the subject, and the input light LS2 refers to light that is output from the light output unit 110, reaches the subject, is reflected from the subject, and is then input to the camera module 100. From the perspective of the object, the output light LS1 may be incident light and the input light LS2 may be reflected light.

The light output unit 110 irradiates the generated light signal onto the subject for a predetermined exposure period (integration time). Here, the exposure period refers to one frame period. When a plurality of frames are generated, the set exposure period is repeated. For example, when the camera module 100 photographs an object at 20 frames per second (FPS), the exposure period is 1/20 [sec]. When 100 frames are generated, the exposure period may be repeated 100 times.

The optical output unit 110 may generate not only an output optical signal having a predetermined frequency but also a plurality of optical signals having different frequencies. In addition, the optical output unit 110 may sequentially and repeatedly output a plurality of optical signals having different frequencies. Alternatively, the optical output unit 110 may simultaneously output a plurality of optical signals having different frequencies.

For such operation, in an exemplary embodiment, the light output unit 110 may include a light source 112, a light changing unit 114, and a light collecting unit 116.

First, the light source 112 may generate light. The light generated by the light source 112 may be infrared light having a wavelength of 770 nm to 3000 nm, or may be visible light having a wavelength of 380 nm to 770 nm. The light source 112 may include a Light Emitting Diode (LED), and may have a form in which a plurality of LEDs are arranged according to a specific pattern. In addition, the light source 112 may further include an Organic Light Emitting Diode (OLED) or a Laser Diode (LD). Alternatively, the light source 112 may also be a Vertical Cavity Surface Emitting Laser (VCSEL). The VCSEL is one type of laser diode that converts an electrical signal into an optical signal, and may use a wavelength of about 800 nm to 1000 nm, such as about 850 nm or about 940 nm.

The light source 112 is repeatedly turned on/off at specific time intervals to generate an optical signal in the form of a pulse wave or a continuous wave. The specific time interval may be the frequency of the optical signal. The turning on/off of the light source 112 may be controlled by the light changing unit 114.

The light varying unit 114 controls on/off of the light source 112 and controls the light source 112 to generate a light signal in the form of a continuous wave or a pulse wave. The light varying unit 114 may control the light source 112 by frequency modulation, pulse modulation, or the like to generate the light signal in the form of a continuous wave or a pulse wave.

The light collection unit 116 may change the optical path such that the light generated from the light source 112 has an array of spots. For example, the light collection unit 116 may include an imaging lens, a microlens array, or a Diffractive Optical Element (DOE). Due to this configuration, the light emitted from the camera module 100 toward the object O may have a plurality of array spots. Therefore, even when the distance between the camera module 100 and the object O increases, the light emitted from the camera module 100 may easily reach the object O due to being collected. Accordingly, the camera module 100 according to the exemplary embodiment may realize a longer distance light transmission. In this case, the number of array spots may be set differently, and the configuration and effect of the light collection unit 116 will be described in detail below.

Meanwhile, the optical unit 120 may include at least one lens. The optical unit 120 may collect an input optical signal reflected from an object through at least one lens to transmit the collected optical signal to the sensor 130. At least one lens of the optical unit 120 may include a solid lens. Further, the at least one lens may comprise a variable lens. The variable lens may be a variable focus lens. Further, the variable lens may be a focus adjustable lens. Further, the variable lens may be at least one of a liquid lens, a polymer lens, a liquid crystal lens, a Voice Coil Motor (VCM) type, and a Shape Memory Alloy (SMA) type. The liquid lens may include a liquid lens having one liquid and a liquid lens having two liquids. In a liquid lens having one liquid, the focus may be changed by adjusting a diaphragm provided at a position corresponding to the liquid, for example, by pressing the diaphragm with the electromagnetic force of a magnet and a coil. A liquid lens having two liquids may include a conductive liquid and a non-conductive liquid, and the interface formed between the conductive liquid and the non-conductive liquid may be adjusted using a voltage applied to the liquid lens. In the polymer lens, the focus may be changed by controlling a polymer material with a piezoelectric driver or the like. In the liquid crystal lens, the focus may be changed by controlling liquid crystals with an electromagnetic force. In the VCM type, the focus may be changed by controlling a solid lens or a lens assembly including a solid lens using the electromagnetic force between a magnet and a coil. In the SMA type, the focus may be changed by controlling a solid lens or a lens assembly including a solid lens using a shape memory alloy. In addition, the optical unit 120 may include an optical plate. The optical plate may be a light transmitting plate.

In addition, the optical unit 120 may include a filter F that transmits light in a specific wavelength range. In an exemplary embodiment, the filter F of the optical unit 120 may transmit only light within a preset wavelength range, and may block light other than the light within the preset wavelength range. In this case, the filter F may partially pass light in an Infrared (IR) region. For example, the filter F may include an IR band pass filter that partially passes light having a wavelength of 780nm to 1000 nm. A detailed description thereof will be provided below.

The sensor 130 may generate an electrical signal using the input optical signal collected by the optical unit 120. In an exemplary embodiment, the sensor 130 may absorb the input optical signal in synchronization with the on/off period of the light output unit 110. Specifically, the sensor 130 may absorb respective lights that are in phase and out of phase with the optical signal output from the optical output unit 110.

In addition, the sensor 130 may generate an electrical signal corresponding to each reference signal using a plurality of reference signals having different phases. For example, the electrical signal may be a signal resulting from mixing each reference signal with the input light, which may include convolution, multiplication, or the like. Further, the frequency of the reference signal may be set to correspond to the frequency of the optical signal output from the optical output unit 110. In an exemplary embodiment, the frequency of the reference signal may be the same as the frequency of the optical signal of the optical output unit 110.

As described above, when the light output unit 110 generates the light signal at a plurality of frequencies, the sensor 130 may generate the electrical signal using a plurality of reference signals corresponding to each frequency. The electrical signal may include information about the amount of charge or voltage corresponding to each reference signal. Further, the electrical signal may be calculated for each pixel.

The control unit 140 may control the optical unit 120 to shift the optical path of the input optical signal. Due to such a configuration, a plurality of image data for extracting a high resolution depth image can be output as described below. A detailed description thereof will be provided below.

Further, the calculation unit 150 may calculate depth information having a higher resolution than that of the image data by using the electrical signals received from the sensor 130 and combining the plurality of image data extracted under the control of the control unit 140. The calculation unit 150 may be provided in the illustrated camera module 100 or in an optical device including the camera module to perform the calculation. Hereinafter, a description will be provided based on the calculation unit 150 being provided in the camera module 100.

The calculation unit 150 may receive information sensed by the sensor 130 from the camera module 100 to perform calculation thereon. The calculation unit 150 may receive a plurality of low resolution information using the electrical signal received from the sensor 130 and generate high resolution depth information using the plurality of low resolution information. For example, the high resolution depth information may be generated by rearranging a plurality of low resolution information.
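As a hedged illustration of "rearranging a plurality of low resolution information," the sketch below interleaves four low-resolution depth frames, assumed to have been captured with half-pixel optical-path shifts of (0, 0), (0, 0.5), (0.5, 0), and (0.5, 0.5) pixels, into one frame with twice the resolution in each direction; the shift order and function name are illustrative, not the patent's implementation:

```python
import numpy as np

def rearrange_to_high_resolution(frames):
    """Interleave four half-pixel-shifted low-resolution frames into a 2x-resolution frame."""
    h, w = frames[0].shape
    high_res = np.zeros((2 * h, 2 * w), dtype=frames[0].dtype)
    high_res[0::2, 0::2] = frames[0]  # (0, 0) shift
    high_res[0::2, 1::2] = frames[1]  # (0, 0.5) shift
    high_res[1::2, 0::2] = frames[2]  # (0.5, 0) shift
    high_res[1::2, 1::2] = frames[3]  # (0.5, 0.5) shift
    return high_res

# Example with random data standing in for four 240x320 low-resolution depth maps.
low_res = [np.random.rand(240, 320) for _ in range(4)]
print(rearrange_to_high_resolution(low_res).shape)  # (480, 640)
```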

In this case, the calculation unit 150 may calculate the distance between the object and the camera module 100 using a time difference between the light signal output from the light output unit and the light signal received by the sensor or using a plurality of pieces of information acquired during a plurality of integration times of the sensor such that the effective area of the sensor is exposed at different phases.
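For the phase-difference route mentioned above, a common 4-phase formulation (shown here as an assumption about the general ToF technique, not as the patent's exact computation) derives the phase delay from the four electrical signals measured against reference signals at 0°, 90°, 180°, and 270°, and converts it into a distance using the modulation frequency:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def depth_from_four_phases(q0, q90, q180, q270, modulation_freq_hz):
    """Estimate distance from four charge values measured against 0/90/180/270 deg references."""
    phase = math.atan2(q90 - q270, q0 - q180)  # phase delay of the input light
    if phase < 0:
        phase += 2 * math.pi                   # keep the delay in [0, 2*pi)
    return C * phase / (4 * math.pi * modulation_freq_hz)

# Example: hypothetical charges measured at a 20 MHz modulation frequency.
print(depth_from_four_phases(1.0, 0.8, 0.2, 0.4, 20e6))  # ~0.55 m
```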

The term "cell" used in the present exemplary embodiment refers to a software component or a hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), which performs a specific task. However, the term "unit" is not limited to software components or hardware components. A "unit" may be included in the addressable storage medium and configured to operate one or more processors. Thus, by way of example, a unit may include components such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, sub-processes, segments of program code, drivers, firmware, microcode, circuitry, data, databases, database structures, tables, arrays, and parameters. The functionality provided for in the components and units may be combined into fewer components and units or further separated into additional components and units. Furthermore, the components and units may be implemented such that the components and units operate one or more Central Processing Units (CPUs) in the device or secure multimedia card.

Fig. 2 is a diagram illustrating a light output unit according to an exemplary embodiment, fig. 3 illustrates a diagram of one surface of an object in fig. 2, and fig. 4 illustrates a diagram for describing an effect of a distance of the light output unit according to an exemplary embodiment on light intensity.

Referring to fig. 2, as described above, light emitted from the light source 112 may pass through the light collection unit 116 to be irradiated onto the object O. In addition, the light irradiated onto the object O may be in the form of an array of spots, and the light collection unit 116 may be provided with imaging lenses arranged in an array form corresponding to that form. In this case, in the light collection unit 116, the interval d1 of the light irradiated to each single lens may be different from the interval d2 of the light passing through each single lens. Here, the intervals of light may be measured at a position in front of and a position behind the light collection unit 116 that are spaced the same distance from the light collection unit 116.

The interval d1 of the light irradiated to each single lens may be greater than the interval d2 of the light passing through each single lens. Due to this configuration, the camera module can easily receive input light even if the distance from the light source 112 to the object O increases. In other words, the camera module according to the exemplary embodiment can easily perform depth measurement even if the distance from the object is long.

Referring to fig. 3, the light passing through the light collection unit 116 may be condensed on the object O in the form of an array of spots. In an exemplary embodiment, the single light spots K may exist in various array forms according to the shape of the imaging lenses of the light collection unit 116. In an exemplary embodiment, each individual spot K may be disposed to be spaced apart from an adjacent spot by a predetermined interval. Due to this configuration, even if the distance from the object O increases, pieces of depth information according to the distance can be easily distinguished from each other. In other words, the accuracy of the depth information can be improved. Furthermore, the number of spots in the array of spots may be varied.

Referring to fig. 4(a) and 4(b), fig. 4(a) shows the light intensity when the light collection unit is not present, and fig. 4(b) shows the light intensity when the light collection unit is present. In this case, the light intensity may be greatest at the center 0 of the single spot when the light collecting unit is present and when the light collecting unit is absent. However, even if the distance to the object is the same, the light intensity at the single spot center 0 may be different depending on the presence or absence of the light collection unit.

More specifically, since the light intensity at the center of a single spot is increased by the light collection unit, the magnitude of the electrical signal converted by the sensor also increases with the light intensity, so that differences in depth are reflected more strongly in the electrical signal. Therefore, the accuracy of the depth information according to the distance can be further improved. Further, since the light intensity at the center of the spot remains increased even when the distance from the object increases, the decrease in light intensity according to the distance from the object can be compensated for.

Fig. 5 is a graph for describing a frequency of an optical signal according to an exemplary embodiment.

Referring to fig. 5, in an exemplary embodiment, as shown in fig. 5, the light output unit 110 may perform control to generate a light signal having a frequency f1 in the first half of the exposure period, and the light output unit 110 may perform control to generate a light signal having a frequency f2 in the other half of the exposure period.

According to another exemplary embodiment, the light output unit 110 may control some of the plurality of LEDs to generate a light signal having a frequency f1, and may control the remaining LEDs to generate a light signal having a frequency f2. As described above, the light output unit 110 may generate an output signal having a different frequency for each exposure period.

For example, optical signals may be generated at frequencies f1 and f2, and the plurality of reference signals may have a phase difference of 90°. In this case, since the incident optical signal also has the frequencies f1 and f2, the sensor to be described below can generate four electrical signals from the input optical signal having the frequency f1 and the four reference signals corresponding thereto, and can generate four electrical signals from the input optical signal having the frequency f2 and the four reference signals corresponding thereto. Thus, the sensor can generate a total of eight electrical signals. However, as described above, the optical signal may also be generated to have one frequency (e.g., f1).
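One common reason for using two modulation frequencies, stated here as a general ToF consideration rather than a claim of the patent, is that the unambiguous measurement range of a single frequency is limited, while a pair of frequencies is ambiguous only at their beat frequency. A small sketch with hypothetical frequencies:

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(freq_hz):
    """Maximum distance a single modulation frequency can measure without phase wrapping."""
    return C / (2 * freq_hz)

def extended_range(f1_hz, f2_hz):
    """With two frequencies, the measurement only repeats at their beat frequency."""
    return C / (2 * abs(f1_hz - f2_hz))

# Example: hypothetical frequencies of 80 MHz and 60 MHz.
print(unambiguous_range(80e6))     # ~1.87 m
print(unambiguous_range(60e6))     # ~2.50 m
print(extended_range(80e6, 60e6))  # ~7.49 m
```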

Fig. 6 is a cross-sectional view of a camera module according to an exemplary embodiment.

Referring to fig. 6, a camera module according to an exemplary embodiment may include a lens assembly 310, a sensor 320, and a printed circuit board 330. Here, lens assembly 310 may correspond to optical unit 120 of fig. 1, and sensor 320 may correspond to sensor 130 of fig. 1. The control unit 140 of fig. 1 may be implemented on the printed circuit board 330 or the sensor 320. Although not shown, the light output unit 110 of fig. 1 may be provided on the printed circuit board 330 or may be provided as a separate component. The light output unit 110 may be controlled by the control unit 140.

Lens assembly 310 may include a lens 312, a lens barrel 314, a lens holder 316, and an IR filter 318.

The lens 312 may be provided as a plurality of lenses, or may be provided as one lens. When the lens 312 is provided as a plurality of lenses, the respective lenses may be arranged with respect to the central axis thereof to form an optical system. Here, the central axis may be the same as the optical axis of the optical system. The lens 312 may include the variable lens described above.

The lens barrel 314 is coupled to the lens holder 316, and may form a space for accommodating a lens therein. Although the lens barrel 314 may be rotatably coupled to one lens or a plurality of lenses, this is merely an example, and the lens barrel 314 may be coupled by other methods using an adhesive (e.g., an adhesive resin such as epoxy) or the like.

The lens holder 316 may be coupled to the lens barrel 314 to support the lens barrel 314, and may be disposed on the printed circuit board 330 on which the sensor 320 is mounted. Due to the lens holder 316, a space in which the IR filter 318 can be disposed may be formed in the lens barrel 314. Although not shown, an actuator capable of tilting or shifting the IR filter 318 under the control of the control unit 140 may be provided in the lens barrel 314. A spiral pattern may be formed on an inner circumferential surface of the lens holder 316, and the lens holder 316 may be rotatably coupled to the lens barrel 314, on whose outer circumferential surface a spiral pattern is similarly formed. However, this is merely an example; the lens holder 316 and the lens barrel 314 may be coupled by an adhesive, or the lens holder 316 and the lens barrel 314 may be integrally formed.

The lens holder 316 may be divided into an upper holder 316-1 coupled to the lens barrel 314 and a lower holder 316-2 disposed on the printed circuit board 330 on which the sensor 320 is mounted. The upper holder 316-1 and the lower holder 316-2 may be integrally formed; the upper holder 316-1 and the lower holder 316-2 may be formed as separate structures and then connected or coupled; or the upper holder 316-1 and the lower holder 316-2 may have a structure separated and spaced apart from each other. In this case, the diameter of the upper holder 316-1 may be smaller than the diameter of the lower holder 316-2.

The above example is only an exemplary embodiment, and the optical unit 120 may be formed as another structure capable of condensing an input optical signal incident to the ToF camera module 100 and transmitting the input optical signal to the sensor 130.

Fig. 7 illustrates a conceptual diagram of a filter and a sensor according to an exemplary embodiment, fig. 8 is an enlarged view of a portion K in fig. 7, and fig. 9 is an enlarged view of a portion M in fig. 7. Fig. 10 is a diagram showing a second region of the sensor according to a distance from the object. Fig. 11 is a top view of a sensor according to a modification.

Referring to fig. 7-9, reflected light LS2 may pass through filter F to be ultimately received by sensor 130. In this case, the reflected light may be light having a predetermined wavelength band as described above, and a part of the light may be blocked by the filter F.

Specifically, the filter F may include a first filter area FA1 through which a first wavelength band passes as a pass band and a second filter area FA2 through which a second wavelength band passes as a band different from the first wavelength band. In other words, the filter F may be divided into the first filter area FA1 and the second filter area FA 2.

Further, in an exemplary embodiment, the second wavelength band may be the same as the wavelength band of the IR light. Accordingly, since the second filter region FA2 passes the wavelength region of the IR light, the second filter region FA2 may operate as a band pass filter for the IR light. On the other hand, the first wavelength band may include the second wavelength band or may be a band that does not include the second wavelength band. In the exemplary embodiment mainly described below, the first wavelength band, which is the pass band, is a wavelength band that does not include the second wavelength band.

In this case, the first filter area FA1 may be disposed to surround the second filter area FA 2. Specifically, the second filter area FA2 may be disposed as a plurality of second filter areas FA2, and the plurality of second filter areas FA2 may be disposed in the filter F and spaced apart from each other. In this case, the second filter areas FA2 may be spaced apart from each other by a predetermined interval. For example, the widths W1 between the second filter regions FA2 adjacent in the first direction (X-axis direction) may all be the same, and the heights h1 between the second filter regions FA2 adjacent in the second direction (Y-axis direction) may all be the same. Here, the first direction (X-axis direction) refers to one direction in which a plurality of pixels arranged in an array form in the sensor are arranged in parallel, and the second direction (Y-axis direction) is a direction perpendicular to the first direction and refers to a direction in which a plurality of pixels are arranged in parallel. Further, the third direction (Z-axis direction) may be a direction perpendicular to both the first direction and the second direction. In the following description, the first direction (X-axis direction) is a row direction, and the second direction (Y-axis direction) is a column direction. In this specification, the row direction may be used interchangeably with the first direction, and the column direction may be used interchangeably with the second direction.

Due to such a configuration, as will be described later, both depth information and color information can be detected from image data.

Further, the reflected light may pass through the first and second filter areas FA1 and FA2 to be received by the sensor 130 therebelow. In this case, the optical signal (reflected light) passing through the first filter area FA1 will be described as a first signal, and the optical signal (reflected light) passing through the second filter area FA2 will be described as a second signal.

Sensor 130 may include a first sensing zone SA1 for receiving a first signal and a second sensing zone SA2 for receiving a second signal. In other words, the sensor 130 may be divided into the first sensing region SA1 and the second sensing region SA2 according to the wavelength band of the reflected light passing through the filter F.

First, the first sensing areas SA1 may correspond to the first filter areas FA 1. In other words, the first sensing area SA1 may be an area where an optical signal passing through the first filter area FA1 reaches the sensor 130.

Similarly, the second sensing areas SA2 may correspond to the second filter areas FA2. The second sensing area SA2 may be an area where an optical signal passing through the second filter area FA2 reaches the sensor 130.

Further, since the first and second sensing areas SA1 and SA2 correspond to the first and second filter areas FA1 and FA2, respectively, the first sensing area SA1 may be disposed to surround the second sensing area SA 2.

More specifically, as described above, the sensor 130 may include a plurality of pixels, and the plurality of pixels may be disposed in parallel in the row direction and the column direction. The second sensing region SA2 may be disposed as a plurality of second sensing regions SA2, and the plurality of second sensing regions SA2 may be disposed to be spaced apart from each other.

In addition, each of the second sensing regions SA2 spaced apart from each other may be located on at least one pixel. In an exemplary embodiment, each of the second sensing regions SA2 may include a plurality of pixels, at least a portion of which are disposed in contact with each other. In this case, even when the distance between the camera module and the object changes (for example, when images of various objects set at different distances are photographed), the accuracy of the depth information on the distance from the object can be improved by extracting the depth information for a plurality of pixels of each object.

In the sensor 130, a plurality of pixels PX1-1 to PX9-9 may be arranged in the row direction and the column direction. For example, the pixels of the sensor 130 may be arranged in nine rows and nine columns, so that the 1st-1st pixel is located in the first row and the first column. In this case, the 2nd-2nd, 4th-2nd, 6th-2nd, 8th-2nd, 2nd-4th, 4th-4th, 6th-4th, 8th-4th, 2nd-6th, 4th-6th, 6th-6th, 8th-6th, 2nd-8th, 4th-8th, 6th-8th, and 8th-8th pixels may correspond to the second sensing region SA2.

In this case, each pixel corresponding to the second sensing region SA2 may be surrounded by pixels of the first sensing region SA1. For example, the 2nd-2nd pixel may be arranged to be surrounded by the 1st-1st through 1st-3rd pixels, the 2nd-1st and 2nd-3rd pixels, and the 3rd-1st through 3rd-3rd pixels. Therefore, even when the distance from the object changes, the plurality of second sensing areas SA2 are prevented from overlapping each other as much as possible, thereby improving the accuracy of the depth information.

In addition, the second sensing regions SA2 may be spaced apart from each other by a predetermined interval. In an exemplary embodiment, the widths W2 between the adjacent second sensing regions SA2 in the first direction (X-axis direction) may all be the same. In addition, the heights h2 of the adjacent second sensing regions SA2 in the second direction (Y-axis direction) may all be the same.
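The 9 x 9 example above places the second sensing region on every second row and every second column starting from the second pixel, so that each such pixel is surrounded by first-sensing-region pixels and the spacings W2 and h2 are uniform. The following sketch (layout parameters are illustrative, not mandated by the patent) builds that mask:

```python
import numpy as np

def second_sensing_mask(rows=9, cols=9, pitch=2, start=1):
    """Boolean mask marking the pixels of the second sensing regions in the 9x9 example."""
    mask = np.zeros((rows, cols), dtype=bool)
    mask[start::pitch, start::pitch] = True  # rows/columns 2, 4, 6, 8 in 1-based terms
    return mask

mask = second_sensing_mask()
print(np.argwhere(mask)[:4] + 1)  # [[2 2] [2 4] [2 6] [2 8]] in 1-based indexing
print(int(mask.sum()))            # 16 second-sensing pixels, each surrounded by SA1 pixels
```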

In addition, the width of the first filter area FA1 may be different from the width of the first sensing area SA 1. Similarly, the width of the second filter area FA2 may be different from the width of the second sensing area SA 2. In an exemplary embodiment, the width of the first filter area FA1 may be greater than the width of the first sensing area SA1, and the width of the second filter area FA2 may be greater than the width of the second sensing area SA 2.

Further, a width W1 between adjacent second filter regions FA2 in the first direction may be different from a width W2 between adjacent second sensing regions SA2 in the first direction. In an exemplary embodiment, a width W1 between adjacent second filter regions FA2 in the first direction may be greater than a width W2 between adjacent second sensing regions SA2 in the first direction.

A height h1 between adjacent second filter regions FA2 in the second direction may be different from a height h2 between adjacent second sensing regions SA2 in the second direction. In an exemplary embodiment, a height h1 between adjacent second filter areas FA2 in the second direction may be greater than a height h2 between adjacent second sensing areas SA2 in the second direction. Due to this configuration, the camera module can provide image data having a wider viewing angle through a plurality of pixels of the sensor.

Referring to fig. 10, the width of the second sensing region may vary according to the distance from the object O. As an example, the object O may include a first point PO1, a second point PO2, and a third point PO3 having different distances from the camera module. The first point PO1 may be a longer distance from the camera module than the second point PO2 and the third point PO 3. The third point PO3 may be a shorter distance from the camera module than the first point PO1 and the second point PO 2.

In this case, the phase delay of the reflected light may be different according to the distance from the object. For example, the reflected light may include a first reflected light LS2-1 as a light signal reflected from the first point PO1, a second reflected light LS2-2 as a light signal reflected from the second point PO2, and a third reflected light LS2-3 as a light signal reflected from the third point PO 3.

The first reflected light LS2-1, the second reflected light LS2-2, and the third reflected light LS2-3 may pass through the second filter area FA2 to be received in the second sensing area SA2 of the sensor 130.

In this case, the second sensing region SA2 may include a 2-1 sensing region SA2a for receiving the first reflected light LS2-1, a 2-2 sensing region SA2b for receiving the second reflected light LS2-2, and a 2-3 sensing region SA2c for receiving the third reflected light LS 2-3.

The width of the 2-1 sensing region SA2a may be smaller than the width of the 2-2 sensing region SA2b and the width of the 2-3 sensing region SA2c. The width of the 2-2 sensing region SA2b may be greater than the width of the 2-1 sensing region SA2a and smaller than the width of the 2-3 sensing region SA2c. The width of the 2-3 sensing region SA2c may be greater than the width of the 2-1 sensing region SA2a and the width of the 2-2 sensing region SA2b.

In addition, when the 2-1 sensing region SA2a corresponds to one pixel, the 2-2 sensing region SA2b and the 2-3 sensing region SA2c may correspond to a plurality of pixels. Since the plurality of second sensing regions SA2 are disposed to be spaced apart from each other, the second sensing regions SA2 do not overlap each other in the row direction or the column direction. Accordingly, the camera module according to an exemplary embodiment may calculate depth information reflecting all of the different distances between the camera module and the object.

Referring to fig. 11, in the filter, the first filter region may surround the second filter regions, and the first filter regions surrounding one second filter region may not overlap each other. In other words, the filter may be provided as a plurality of collective filters including the second filter region and the first filter region surrounding the second filter region, and the plurality of collective filters may not overlap with each other in the third direction (Z-axis direction).

In response to such a filter, in the sensor as well, the first sensing region SA1 may surround the second sensing regions SA2, and the first sensing regions each surrounding one second sensing region SA2 may not overlap each other. Further, the sensor may include set pixels BDX, each including a second sensing region SA2 and the first sensing region SA1 surrounding that second sensing region SA2. In this case, a plurality of set pixels BDX may be provided, and the plurality of set pixels BDX may not overlap each other in the third direction (Z-axis direction). Due to this configuration, accurate depth measurement can be performed even when the distance to the object changes.

Fig. 12 is a schematic diagram for describing a process of generating an electric signal in a sensor according to an exemplary embodiment.

Referring to fig. 12, as described above, the phase of the reflected light (input light) LS2 may be delayed by an amount corresponding to the distance over which the incident light (output light) LS1 travels to the object and is reflected back.

Further, as described above, there may be a plurality of reference signals, and in an exemplary embodiment, as shown in fig. 12, there may be four reference signals C1 through C4. The reference signals C1 to C4 may each have the same frequency as the optical signal, and may have a phase difference of 90° from one another. One signal C1 of the four reference signals may have the same phase as the optical signal.

In the sensor 130, an active area of the sensor 130 may be exposed in response to each reference signal. The sensor 130 may receive the light signal during the integration time.

The sensor 130 may mix the input optical signal with each reference signal. Then, the sensor 130 may generate an electrical signal corresponding to the hatched portion of fig. 12.

In another exemplary embodiment, when the optical signal is generated at a plurality of frequencies during the integration time, the sensor 130 absorbs the input optical signal according to the plurality of frequencies. For example, it is assumed that optical signals are generated at frequencies f1 and f2, and that the plurality of reference signals have a phase difference of 90°. In this case, since the incident optical signal also has the frequencies f1 and f2, four electrical signals can be generated from the input optical signal having the frequency f1 and the four reference signals corresponding thereto, and four electrical signals can be generated from the input optical signal having the frequency f2 and the four reference signals corresponding thereto. Thus, a total of eight electrical signals may be generated. Hereinafter, this case will be mainly described, but as described above, the optical signal may also be generated to have one frequency (e.g., f1).
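The "mixing" of the input light with each reference signal can be pictured as integrating their product over the integration time, which yields one electrical (charge) value per reference signal, as in the hatched portions of fig. 12. The sketch below uses a synthetic delayed waveform and square-wave references; the signal shapes and values are assumptions for illustration only:

```python
import numpy as np

def mixed_charges(input_signal, t, freq_hz, phases_deg=(0, 90, 180, 270)):
    """Integrate the product of the received signal with four phase-shifted square-wave references."""
    dt = t[1] - t[0]
    charges = []
    for p in phases_deg:
        reference = (np.sin(2 * np.pi * freq_hz * t - np.deg2rad(p)) >= 0).astype(float)
        charges.append(float(np.sum(input_signal * reference)) * dt)
    return charges

# Example: a delayed sinusoid standing in for the reflected light at f1 = 20 MHz.
f1 = 20e6
t = np.linspace(0, 10 / f1, 10_000)           # ten modulation periods
delay = 3e-9                                  # roughly 0.45 m of one-way distance
received = 0.5 * (1 + np.sin(2 * np.pi * f1 * (t - delay)))
print(mixed_charges(received, t, f1))         # four charge values, one per reference signal
```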

Fig. 13 is a diagram for describing a sensor according to an exemplary embodiment, and fig. 14 to 17 are diagrams for describing various modifications of the sensor.

Referring to fig. 13 to 17, as described above, the sensor 130 may include a plurality of pixels and have an array structure. In this case, the sensor 130 may be an Active Pixel Sensor (APS) and may be a Complementary Metal Oxide Semiconductor (CMOS) sensor. Further, the sensor 130 may be a Charge Coupled Device (CCD) sensor. Further, the sensor 130 may include a ToF sensor that receives IR light reflected by an object to measure a distance using a time difference or a phase difference.

The plurality of pixels may be disposed in parallel in the first direction and the second direction. For example, the plurality of pixels may be in the form of a matrix.

Also, in an exemplary embodiment, the plurality of pixels may include a first pixel P1 and a second pixel P2. The first and second pixels P1 and P2 may be alternately disposed in the first and second directions. That is, with respect to one first pixel P1, a plurality of second pixels P2 may be disposed adjacent to each other in the first and second directions. For example, in the sensor 130, the first and second pixels P1 and P2 may be arranged in a checkerboard pattern.

Further, the first and second pixels P1 and P2 may be pixels that receive light beams having different wavelength bands as peak wavelengths. For example, the first pixel P1 may receive light having an IR band as a peak wavelength. The second pixel P2 may receive light having a wavelength other than the IR band as a peak wavelength.

In addition, one of the first and second pixels P1 and P2 may be a pixel that does not receive light. In an exemplary embodiment, the plurality of pixels may include an active area SA in which a light receiving element is disposed and an inactive area IA other than the active area.

The active area SA may receive light and generate a predetermined electrical signal, while the inactive area IA may be an area that does not generate an electrical signal from received light or does not receive light at all. That is, the inactive area IA includes the case where an electrical signal cannot be generated from light even when a light receiving element is located in the inactive area IA.

The first pixel P1 may include the active area SA, but the second pixel P2 may include only the inactive area IA in which the active area SA is not present. For example, a light receiving element such as a photodiode may be located only in the first pixel and may not be located in the second pixel. Further, for example, the sensor 130 may include a plurality of row regions RR including active regions SA and inactive regions IA alternately arranged in the row direction. Further, in an exemplary embodiment, the sensor 130 may include a plurality of column regions CR including active regions SA and inactive regions alternately arranged in a column direction.

In an exemplary embodiment, the sensor 130 may include a first row region RR1 in which the active areas SA and the inactive areas IA are alternately disposed in the row direction, and a second row region RR2 in which the active areas SA and the inactive areas IA are alternately disposed in the row direction and the active areas are disposed at positions that do not overlap with the active areas of the first row region RR1 in the column direction. Accordingly, the sensor 130 may include a plurality of column regions CR in which active areas SA and inactive areas IA are alternately arranged in the column direction.

In addition, the first and second pixels P1 and P2 may have various shapes, such as quadrangles, triangles, polygons, and circles. The active area SA may also have various shapes such as a quadrangle, a triangle, a polygon, and a circle (see fig. 14 and 15).

In addition, a component electrically connected to the adjacent first pixel P1 may be located in the second pixel P2. The above components may include electric elements such as wiring and capacitors. Further, the above-described components may be located on the first pixel or the second pixel (see fig. 14).

In an exemplary embodiment, each pixel may be a region defined by an interval between the same active regions adjacent in the arrangement direction (e.g., the first direction or the second direction) on the sensor. Here, the same effective region refers to effective regions having the same function (for example, effective regions for receiving light beams having the same wavelength band).

Further, the first pixel P1 may have only the active area SA, or may have both the active area SA and the inactive area IA. The active area SA may be located at any of various positions within the first pixel P1. Accordingly, although the center of the pixel and the center of the active area may differ from each other, the following description is made on the premise that the center of the pixel and the center of the active area are the same.

Further, as shown in fig. 13, 76,800 pixels may be arranged in a grid form in the case of the sensor 130 having a resolution of 320 × 240. In this case, the plurality of pixels may be arranged to be spaced apart from each other by a predetermined interval. That is, as shown by the hatched portion in fig. 5, a certain interval L may be formed between the pixels. The width dL of the interval L may be much smaller than the size of a pixel. Wiring and the like may be disposed in the interval L. In this specification, the interval L is disregarded.

Further, in an exemplary embodiment, each pixel 132 (e.g., a first pixel) may include a first light receiving unit 132-1 and a second light receiving unit 132-2, the first light receiving unit 132-1 including a first photodiode and a first transistor, and the second light receiving unit 132-2 including a second photodiode and a second transistor.

The first light receiving unit 132-1 receives an input light signal having the same phase as the waveform of the output light. That is, when the light source is turned on, the first photodiode is turned on to absorb the input optical signal. When the light source is turned off, the first photodiode is turned off to stop absorbing the input light. The first photodiode converts the absorbed input optical signal into a current and transmits the current to the first transistor. The first transistor converts the received current into an electrical signal and outputs the electrical signal.

The second light receiving unit 132-2 receives the input light signal in the phase opposite to that of the output light waveform. That is, when the light source is turned on, the second photodiode is turned off and stops absorbing the input light signal. When the light source is turned off, the second photodiode is turned on and absorbs the input light signal. The second photodiode converts the absorbed input light signal into a current and transfers the current to the second transistor. The second transistor converts the received current into an electrical signal.

Accordingly, the first light receiving unit 132-1 may be referred to as an in-phase receiving unit, and the second light receiving unit 132-2 may be referred to as an out-of-phase receiving unit. When the first light receiving unit 132-1 and the second light receiving unit 132-2 are activated with such a time difference, a difference in the amount of received light occurs according to the distance from the object. For example, when the object is right in front of the camera module 100 (i.e., when the distance is zero), the time taken for the light to be output from the light output unit 110 and reflected from the object is zero, so the on/off period of the light source coincides with the light receiving period without any shift. Therefore, only the first light receiving unit 132-1 receives light, and the second light receiving unit 132-2 does not receive light. As another example, when the object is spaced apart from the camera module 100 by a predetermined distance, it takes time for light to be output from the light output unit 110 and reflected from the object, so the on/off period of the light source differs from the reception period of the light. Therefore, a difference is generated between the amount of light received by the first light receiving unit 132-1 and the amount of light received by the second light receiving unit 132-2. That is, the distance to the object may be calculated using the difference between the amount of light input to the first light receiving unit 132-1 and the amount of light input to the second light receiving unit 132-2. In other words, the control unit 140 calculates a phase difference between the output light and the input light using the electrical signals received from the sensor 130, and calculates the distance between the object and the camera module 100 using the phase difference.
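The dependence of the two charge amounts on distance can be illustrated with a toy model. The sketch below is a simplification under the assumption of an ideal square-wave light source and ideal gates; the function name, modulation frequency, and example distances are illustrative only. It splits the returned light of one modulation period between the in-phase and out-of-phase gates according to the round-trip delay.

```python
def gated_charges(distance_m, frequency_hz=20e6):
    """Toy model of the in-phase/out-of-phase charge split for a square-wave source.

    The source is on for the first half of each modulation period. The returned
    light is delayed by the round-trip time of flight, so part of its on-time
    falls into the source-off (out-of-phase) gate.
    """
    c = 299_792_458.0                      # speed of light (m/s)
    period = 1.0 / frequency_hz
    half = period / 2.0
    delay = (2.0 * distance_m / c) % period  # round-trip delay, wrapped to one period

    # Received light is on during [delay, delay + half) within one period.
    # In-phase gate: [0, half); out-of-phase gate: [half, period).
    if delay <= half:
        q_in = half - delay
        q_out = delay
    else:
        q_in = delay - half
        q_out = period - delay
    return q_in, q_out

for d in (0.0, 1.0, 3.0):
    q_in, q_out = gated_charges(d)
    print(f"{d} m -> in-phase {q_in:.3e} s, out-of-phase {q_out:.3e} s")
```

At zero distance the entire return falls into the in-phase gate, matching the description above; as the distance grows, charge shifts toward the out-of-phase gate.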

More specifically, the control unit 140 may calculate a phase difference between the output light and the input light using the charge amount information of the electrical signal.

As described above, four electrical signals may be generated for each frequency of the output light signal. Accordingly, the control unit 140 may calculate the phase difference t_d between the output light signal and the input light signal using the following equation 1.

[Equation 1]

t_d = arctan((Q_3 - Q_4) / (Q_1 - Q_2))

Here, Q_1 to Q_4 represent the charge amounts of the four electrical signals. Q_1 represents the charge amount of the electrical signal corresponding to the reference signal having the same phase as the output light signal. Q_2 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 180° from that of the output light signal. Q_3 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 90° from that of the output light signal. Q_4 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 270° from that of the output light signal.

The control unit 140 may calculate the distance between the object and the camera module 100 using the phase difference between the output light signal and the input light signal. In this case, the control unit 140 may calculate the distance d between the object and the camera module 100 using equation 2 below.

[Equation 2]

d = (c / 2) × (t_d / (2πf))

Here, c represents the speed of light, and f represents the frequency of the output light.
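A minimal sketch of equations 1 and 2 follows, assuming the common four-phase formulation in which the phase difference is obtained as arctan((Q_3 - Q_4)/(Q_1 - Q_2)); the function names and the example charge values are illustrative, not taken from the embodiment.

```python
import math

def phase_difference(q1, q2, q3, q4):
    """Phase difference t_d (radians) from the four charge amounts, where
    Q1/Q2 correspond to the 0°/180° reference signals and Q3/Q4 to the
    90°/270° reference signals (equation 1)."""
    return math.atan2(q3 - q4, q1 - q2)

def distance_m(t_d, frequency_hz):
    """Distance from the phase difference (equation 2): d = (c/2) * t_d / (2*pi*f)."""
    c = 299_792_458.0
    return (c / 2.0) * t_d / (2.0 * math.pi * frequency_hz)

t_d = phase_difference(q1=120.0, q2=80.0, q3=100.0, q4=60.0)  # example charges
print(distance_m(t_d, frequency_hz=20e6))
```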

According to an exemplary embodiment, a ToF IR image and a depth image may be acquired from the camera module 100. Accordingly, the camera module according to an exemplary embodiment of the present invention may be referred to as a ToF camera or a ToF camera module.

More specifically, as shown in fig. 18, the camera module 100 according to an exemplary embodiment may generate raw images for four phases. Here, the four phases may be 0°, 90°, 180°, and 270°, and the raw image for each phase may be an image having pixel values digitized for that phase; it may be used interchangeably with a phase image, a phase IR image, and the like. The raw images for the four phases may be acquired from the electrical signals generated in the second sensing region. Each of the images shown in figs. 18 to 20 may be an image acquired for each phase when the entire region of the sensor operates as the second sensing region, or may be an image derived from such an image.

Fig. 18 illustrates raw images for four phases acquired from a camera module according to an exemplary embodiment, fig. 19 illustrates a magnitude image acquired from a camera module according to an exemplary embodiment, and fig. 20 illustrates a depth image acquired from a camera module according to an exemplary embodiment.

Referring to figs. 18 and 19, when the calculation of equation 3 is performed using the four phase images Raw(x_0), Raw(x_90), Raw(x_180), and Raw(x_270) (see fig. 18), a magnitude image (see fig. 19) may be acquired as a ToF IR image.

[Equation 3]

Magnitude = (1/2) × sqrt((Raw(x_90) - Raw(x_270))^2 + (Raw(x_180) - Raw(x_0))^2)

Here, Raw(x_0) may represent the data value for each pixel received by the sensor at the 0° phase, Raw(x_90) may represent the data value for each pixel received by the sensor at the 90° phase, Raw(x_180) may represent the data value for each pixel received by the sensor at the 180° phase, and Raw(x_270) may represent the data value for each pixel received by the sensor at the 270° phase.

When the calculation is performed as in equation 4 using the four phase images of fig. 18, an intensity image may be acquired as another ToF IR image.

[Equation 4]

Intensity = |Raw(x_90) - Raw(x_270)| + |Raw(x_180) - Raw(x_0)|

Here, Raw(x_0) may represent the data value for each pixel received by the sensor at the 0° phase, Raw(x_90) may represent the data value for each pixel received by the sensor at the 90° phase, Raw(x_180) may represent the data value for each pixel received by the sensor at the 180° phase, and Raw(x_270) may represent the data value for each pixel received by the sensor at the 270° phase.

As described above, the ToF IR image may be generated by subtracting one phase image from another among the four phase images, where the two phase images in each subtraction have a phase difference of 180°. Background light is removed in the course of these subtractions, so that only the signal in the wavelength band output by the light source remains in the ToF IR image, thereby improving the IR sensitivity with respect to the subject and significantly reducing noise.

In this specification, a ToF IR image may refer to a magnitude image or an intensity image, and the intensity image may be used interchangeably with a confidence image. As shown in fig. 19, the ToF IR image may be a grayscale image.
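A minimal NumPy sketch of equations 3 and 4 is given below; the magnitude formula assumes the half-square-root form shown above, and the dummy raw data and the function name are illustrative assumptions.

```python
import numpy as np

def tof_ir_images(raw0, raw90, raw180, raw270):
    """Magnitude (equation 3) and intensity (equation 4) images
    from the four per-pixel phase images."""
    raw0, raw90, raw180, raw270 = (np.asarray(r, dtype=np.float64)
                                   for r in (raw0, raw90, raw180, raw270))
    magnitude = 0.5 * np.sqrt((raw90 - raw270) ** 2 + (raw180 - raw0) ** 2)
    intensity = np.abs(raw90 - raw270) + np.abs(raw180 - raw0)
    return magnitude, intensity

rng = np.random.default_rng(0)
phases = [rng.integers(0, 4096, size=(240, 320)) for _ in range(4)]  # dummy raw data
mag, inten = tof_ir_images(*phases)
print(mag.shape, inten.shape)
```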

Meanwhile, when the calculation is performed as in equations 5 and 6 using the four phase images of fig. 18, the depth image of fig. 20 may also be obtained. Equations 5 and 6 may correspond to equations 1 and 2 above, respectively.

[Equation 5]

t_d = arctan((Raw(x_90) - Raw(x_270)) / (Raw(x_0) - Raw(x_180)))

[Equation 6]

d = (c / 2) × (t_d / (2πf))

Fig. 21 illustrates a diagram for describing an operation of obtaining depth information and color information in a camera module according to an exemplary embodiment.

Referring to fig. 21, as described above, depth information for each pixel may be acquired through four phase images acquired during the integration time, and such depth information may be acquired through the electric signal in the second sensing region SA 2. In addition, color information may be acquired by the electrical signal of the first sensing region SA 1. The first and second sensing regions SA1 and SA2 may be disposed to overlap the pixels or the active regions described above.

In this case, the second sensing area SA2 may vary according to a distance from an object, and in an exemplary embodiment, the second sensing area SA2 may be located on the sensor 130 to overlap some of the plurality of pixels. In other words, the first sensing region SA1 may also be disposed to overlap some of the plurality of pixels. Hereinafter, the explanation is made based on nine pixels adjacent to the second sensing region SA2, wherein the nine pixels may be different from the pixels described above with reference to fig. 7 to 11.

Among the nine pixels, in the row direction, the 1-1st pixel P1a to the 1-3rd pixel P1c may be located in the first row, the 2-1st pixel P2a to the 2-3rd pixel P2c may be located in the second row, and the 3-1st pixel P3a to the 3-3rd pixel P3c may be located in the third row.

In addition, the second sensing region SA2 may overlap a partial region of each of the 1-1st pixel P1a to the 1-3rd pixel P1c, the 2-1st pixel P2a, the 2-3rd pixel P2c, and the 3-1st pixel P3a to the 3-3rd pixel P3c. That is, the entire region of the 2-2nd pixel P2b may overlap the second sensing region SA2, whereas only a partial region of each of the remaining pixels may overlap the second sensing region SA2.

In this case, according to an exemplary embodiment, the control unit may acquire depth information for the object from the 2-2nd pixel P2b. However, since the entire area of each of the remaining pixels adjacent to the 2-2nd pixel P2b does not overlap the second sensing region SA2, there may be an error in the electrical signals generated in the remaining pixels.

Therefore, in an exemplary embodiment, in order to acquire a high-resolution depth image, which will be described below, the path of reflected light may be changed by moving an optical unit, a sensor, or the like. In addition, the depth information of the remaining pixels may be calculated by changing the path of the reflected light and using an interpolation technique on the depth information of the pixels adjacent to each of the remaining pixels.

In an exemplary embodiment, the interpolation techniques may include linear interpolation, polynomial interpolation, spline interpolation, exponential interpolation, log-linear interpolation, Lagrange interpolation, Newton interpolation, bilinear interpolation, geographic interpolation, and the like. For example, the depth information of the 1-1st pixel P1a may be calculated using the depth information of each of the 1-2nd pixel P1b, the 2-1st pixel P2a, and the 2-2nd pixel P2b, which are the pixels adjacent to the 1-1st pixel P1a. In this case, different weights may be applied to the 1-2nd pixel P1b, the 2-1st pixel P2a, and the 2-2nd pixel P2b. Owing to such an interpolation technique, the speed of acquiring depth information can be increased.
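A minimal sketch of this weighted-neighbor interpolation follows; the particular weights, the NaN convention for missing depth, and the function name are assumptions made for illustration rather than the embodiment's own parameters.

```python
import numpy as np

def interpolate_depth(depth):
    """Fill pixels without depth (NaN) from their valid neighbors
    using a weighted average, in the spirit of bilinear interpolation."""
    depth = np.asarray(depth, dtype=np.float64)
    out = depth.copy()
    rows, cols = depth.shape
    # Neighbor offsets and example weights: direct neighbors are weighted
    # higher than diagonal ones (as with P1b/P2a vs. P2b for pixel P1a).
    neighbors = [(0, 1, 1.0), (1, 0, 1.0), (0, -1, 1.0), (-1, 0, 1.0),
                 (1, 1, 0.5), (-1, -1, 0.5), (-1, 1, 0.5), (1, -1, 0.5)]
    for r in range(rows):
        for c in range(cols):
            if not np.isnan(depth[r, c]):
                continue
            acc, wsum = 0.0, 0.0
            for dr, dc, w in neighbors:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not np.isnan(depth[rr, cc]):
                    acc += w * depth[rr, cc]
                    wsum += w
            if wsum > 0.0:
                out[r, c] = acc / wsum
    return out

depth = np.full((3, 3), np.nan)
depth[1, 1] = 1.25            # only the 2-2nd pixel has measured depth
print(interpolate_depth(depth))
```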

In addition, the first sensing region SA1 may overlap a partial region of each of the 1-1st pixel P1a to the 1-3rd pixel P1c, the 2-1st pixel P2a, the 2-3rd pixel P2c, and the 3-1st pixel P3a to the 3-3rd pixel P3c. That is, the entire region of the 2-2nd pixel P2b may overlap the first sensing region SA1, whereas only a partial region of each of the remaining pixels may overlap the first sensing region SA1.

In this case, in an exemplary embodiment, the control unit may acquire color information of the object from the 2-2nd pixel P2b. However, since the entire area of each of the remaining pixels adjacent to the 2-2nd pixel P2b does not overlap the first sensing region SA1, there may be some errors in the color information acquired from the electrical signals generated in the remaining pixels.

Therefore, similarly to the case of the depth information described above, the path of the reflected light may be changed by moving the optical unit, the sensor, or the like. By changing the path of the reflected light, the remaining pixels may be disposed such that their entire areas overlap the first sensing area SA1. Further, the color information of the remaining pixels may be calculated by applying an interpolation technique to the color information of the pixels adjacent to each of the remaining pixels.

Fig. 22 is a block diagram of a calculation unit according to an exemplary embodiment, and fig. 23 to 25 are diagrams for describing an image control method in a camera module according to an exemplary embodiment.

Referring to fig. 22, the calculation unit according to an exemplary embodiment may output three-dimensional (3D) content for the object using the color information and the depth information of the object acquired by the control unit. As described above, since the control unit acquires the depth information and the color information from one or more images, the depth image based on the depth information and the color image based on the color information can be acquired in a single process rather than in separate processes, thereby reducing the amount of calculation and improving the processing speed. That is, calibration or alignment between the depth image and the color image may not need to be performed. Further, since only one sensor is provided, reliability can be improved against impacts, and power consumption can be reduced.

More specifically, the calculation unit 150 according to an exemplary embodiment may include an image generator 151, an extractor 152, a map generator 153, and a content generator 154.

First, the image generator 151 may generate a plurality of images using the color information of the object and the depth information of the object acquired by the control unit. In this case, each of the plurality of images may include both color information and depth information. In other words, each image may contain a depth image based on the depth information in a partial region thereof and a color image based on the color information in another region thereof.

The extractor 152 may extract feature points of each of the plurality of images. In this case, the feature point may correspond to a position of the depth information of the object. In other words, the feature point may correspond to the second sensing region. Further, the size or position of the feature point may be changed according to the change of the optical path. Further, the size of the feature point may be increased according to the interpolation technique described above, and since the feature point corresponds to the second sensing region, the feature point may be easily calculated in the image.

The map generator 153 may generate a depth map using the calculated feature points. In other words, the map generator 153 may calculate depth information for the entire region of the image by applying a simultaneous localization and mapping (SLAM) technique to the feature points. SLAM refers to a technique in which a moving device recognizes its own location and simultaneously builds a map of the surrounding environment. In this case, the location may be identified by matching the color information of each color image with each position in the plurality of images. The location may also be identified by matching feature points in the two-dimensional image with 3D coordinates and obtaining a projection matrix using homogeneous coordinates. The depth map may be computed by matching each feature point to a point in an image that contains both the color image and the depth image. In this case, the location recognition and the map construction may be performed complementarily.

The content generator 154 may generate 3D content by applying a depth map to a plurality of images.

Referring to figs. 23 and 24, a plurality of images may be shifted in one direction (the right direction in the drawings). In this case, the calculation unit according to an exemplary embodiment may estimate the state vector of the (k+1)-th image and the position information of the landmark using the state vector X_k of the k-th image, the shift displacement U_k, and the observation vector Z_k of the landmark m_j on the frame. That is, the calculation unit may use the state vector X_k of the k-th image, the k-th shift displacement U_k, the landmark m_j on the image, and the observation vector Z_k to estimate the camera state vector X_{k+1} of the (k+1)-th frame and the position information m_k of the map. By repeating this method, position information can be estimated, and depth information for pixels that have not been acquired can be estimated. Thus, a depth map of the entire area of the image can be calculated, and finally 3D content can be generated.
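A highly simplified sketch of this repeated estimation step is shown below: the camera state is propagated with the shift displacement, and a landmark estimate is refined from its observation. The pure-translation model, the averaging-style update, and all names are illustrative assumptions rather than the embodiment's actual SLAM formulation.

```python
import numpy as np

def predict_state(x_k: np.ndarray, u_k: np.ndarray) -> np.ndarray:
    """Predict the (k+1)-th camera state from the k-th state X_k and
    the shift displacement U_k (pure translation in this toy model)."""
    return x_k + u_k

def update_landmark(m_j: np.ndarray, z_k: np.ndarray, x_k1: np.ndarray,
                    gain: float = 0.5) -> np.ndarray:
    """Move the landmark estimate toward the position implied by the
    observation Z_k taken from the predicted camera pose."""
    implied = x_k1 + z_k            # observation expressed relative to the camera
    return m_j + gain * (implied - m_j)

x_k = np.array([0.0, 0.0, 0.0])     # camera position at frame k
u_k = np.array([0.01, 0.0, 0.0])    # measured shift between frames
m_j = np.array([0.5, 0.2, 1.3])     # current landmark estimate
z_k = np.array([0.49, 0.20, 1.30])  # observation of the landmark at frame k+1

x_k1 = predict_state(x_k, u_k)
print(update_landmark(m_j, z_k, x_k1))
```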

Fig. 26 to 28 are diagrams for describing a control method for acquiring high resolution in a camera module according to an exemplary embodiment.

Referring to figs. 26 to 28, in an exemplary embodiment, a super resolution (SR) technique may be used in order to increase the resolution of the depth image. First, in an exemplary embodiment, as described above, the camera module may change the path of the reflected light received by the sensor 130 to acquire a high resolution image using the SR technique. For example, the path of the reflected light received by the sensor may be changed by a predetermined amount for each capture; fig. 26 shows the change in the path of the reflected light when the reflected light is shifted by 0.5 pixels. However, the change of the path of the reflected light is not limited thereto.

Further, in an exemplary embodiment, the control unit may control the movement of the optical unit or the sensor to shift the input light signal by a predetermined shift distance on the sensor. The control unit may control a variable lens of the optical unit to shift the input light signal by the predetermined shift distance on the sensor. Further, the control unit may control the filter of the optical unit to shift the input light signal by the predetermined shift distance on the sensor; for example, the input light signal may be shifted on the sensor by tilting the filter of the optical unit. Although not shown, the camera module may include an actuator for tilting the optical filter. The actuator may drive the filter using a driving force of, for example, a voice coil motor (VCM) type or a piezoelectric type.
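A minimal sketch of driving one cycle of 0.5-pixel shifts and capturing a frame at each position is given below. The four-position sequence, the callback interface standing in for the actuator and the sensor readout, and the function names are assumptions for illustration; they do not describe a specific hardware interface.

```python
from typing import Callable, List, Tuple

# One assumed SR cycle of 0.5-pixel shifts on the sensor plane (pixel units).
SHIFT_SEQUENCE: List[Tuple[float, float]] = [
    (0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5),
]

def capture_sr_cycle(apply_shift: Callable[[float, float], None],
                     capture_frame: Callable[[], object]) -> list:
    """Shift the optical path by each offset, then capture a frame.

    `apply_shift` stands in for the actuator control (e.g. a VCM or
    piezoelectric filter tilt) and `capture_frame` for the sensor readout;
    both are placeholders for hardware-specific calls.
    """
    frames = []
    for dx, dy in SHIFT_SEQUENCE:
        apply_shift(dx, dy)
        frames.append(capture_frame())
    return frames

# Example with dummy callbacks:
frames = capture_sr_cycle(lambda dx, dy: print(f"shift ({dx}, {dy}) px"),
                          lambda: "frame")
print(len(frames), "low-resolution frames for one SR cycle")
```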

The SR technique is a technique for acquiring a high resolution image from a plurality of low resolution images, and a mathematical model of the SR technique can be represented by equation 7.

[ equation 7]

y_k = D_k B_k M_k x + n_k

Here, 1 ≤ k ≤ p, where p represents the number of low resolution images, y_k represents a low resolution image y_k = [y_{k,1}, y_{k,2}, ..., y_{k,M}]^T (where M = N_1 × N_2), D_k represents a down-sampling matrix, B_k represents an optical blur matrix, M_k represents an image warping matrix, x represents a high resolution image x = [x_1, x_2, ..., x_N]^T (where N = L_1 N_1 × L_2 N_2), and n_k represents noise. That is, the SR technique refers to a technique of estimating x by estimating the resolution degradation factors and applying the inverse functions of the estimated resolution degradation factors to y_k. SR techniques can be mainly classified into statistical methods and multi-frame methods, and multi-frame methods can be mainly classified into space division methods and time division methods. When a depth image is acquired using the SR technique, since the inverse of M_k in equation 7 does not exist, a statistical method may be attempted. However, since the statistical method requires an iterative calculation process, there is a problem of inefficiency.
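A minimal sketch of the degradation model y_k = D_k B_k M_k x + n_k follows, with the warping, blur, and down-sampling operators replaced by simplified stand-ins (an integer circular shift, a 2 × 2 box blur, and decimation). These operator choices, the noise level, and the function name are assumptions made only to illustrate the structure of equation 7.

```python
import numpy as np

def degrade(x: np.ndarray, shift: tuple, scale: int = 2,
            noise_sigma: float = 0.01, rng=None) -> np.ndarray:
    """y_k = D_k B_k M_k x + n_k with simplified operators:
    M_k -> integer pixel shift, B_k -> 2x2 box blur, D_k -> decimation."""
    rng = rng or np.random.default_rng(0)
    # M_k: warp (here, a plain circular shift by `shift` pixels)
    warped = np.roll(x, shift, axis=(0, 1))
    # B_k: optical blur (2x2 box filter)
    blurred = (warped
               + np.roll(warped, 1, axis=0)
               + np.roll(warped, 1, axis=1)
               + np.roll(warped, 1, axis=(0, 1))) / 4.0
    # D_k: down-sampling by decimation
    low = blurred[::scale, ::scale]
    # n_k: additive noise
    return low + rng.normal(0.0, noise_sigma, size=low.shape)

x = np.random.default_rng(1).random((8, 8))        # synthetic high-resolution image
lows = [degrade(x, shift=s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print([y.shape for y in lows])
```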

To apply the SR technique to depth information extraction, the control unit may generate a plurality of low resolution subframes LRSF using the electrical signals received from the sensor 130, and then may extract a plurality of low resolution images LRI and a plurality of pieces of low resolution depth information using the plurality of low resolution subframes LRSF. High resolution depth information may be extracted by rearranging the pixel values of the plurality of pieces of low resolution depth information. Therefore, the calculation unit can finally output a high resolution depth image HRDI. In the present specification, "high resolution" has a relative meaning, indicating a resolution higher than the low resolution.

In addition, a subframe may refer to image data generated from an electrical signal corresponding to any one exposure period and one reference signal. For example, when electrical signals are generated by eight reference signals in one exposure period (i.e., one frame image), eight subframes may be generated, and one start frame may additionally be generated. In this specification, the term subframe may be used interchangeably with image data, subframe image data, and the like.

Alternatively, in order to apply the SR technique according to an exemplary embodiment of the present invention to depth information extraction, the calculation unit 150 may generate a plurality of low resolution subframes LRSF and a plurality of low resolution images LRI composed of the plurality of low resolution subframes LRSF, and then may generate a plurality of high resolution subframes HRSF by rearranging the pixel values of the plurality of low resolution subframes LRSF. The high resolution subframes HRSF may be used to extract high resolution depth information and generate a high resolution depth image HRDI. As described above, high resolution depth information may be extracted by such a method, and the method may be equally applied to each exemplary embodiment described below or a variation thereof (see fig. 27).

In addition, in order to extract such high resolution depth information, after acquiring a plurality of subframes each shifted by a predetermined shift distance, a plurality of high resolution subframes HRSF may be acquired by applying the SR technique to the subframes, and the depth information of each subframe may be extracted using the high resolution subframes HRSF to obtain a high resolution depth image HRDI (see fig. 28).
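A minimal sketch of the pixel-rearranging step is shown below: four low-resolution maps captured at half-pixel offsets are interleaved into a grid with twice the resolution in each direction. The mapping from shift to grid parity and the function name are assumptions for illustration.

```python
import numpy as np

def rearrange_to_high_res(lr_maps: dict) -> np.ndarray:
    """Interleave four shifted low-resolution maps into one 2x map.

    `lr_maps` maps a (row_offset, col_offset) shift expressed as grid
    parity in {0, 1} x {0, 1} (i.e. 0 or 0.5 pixel) to a low-resolution
    2D array; all arrays must have equal shape.
    """
    h, w = next(iter(lr_maps.values())).shape
    hr = np.zeros((2 * h, 2 * w), dtype=np.float64)
    for (ro, co), lr in lr_maps.items():
        hr[ro::2, co::2] = lr
    return hr

lr = {(0, 0): np.full((2, 3), 1.0), (0, 1): np.full((2, 3), 2.0),
      (1, 0): np.full((2, 3), 3.0), (1, 1): np.full((2, 3), 4.0)}
print(rearrange_to_high_res(lr))
```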

Meanwhile, when the camera module 100 according to an exemplary embodiment of the present invention is applied to an application requiring high-quality image capturing, for example, when the camera module 100 is applied to an application requiring an accurate image (as in biometric authentication), or when the camera module 100 is applied to an application in which a user should operate the camera module 100 with only one hand and take a picture, a technique for preventing or correcting image shake caused by hand shake is also required. A technique for preventing or correcting image shake may be referred to as an Optical Image Stabilizer (OIS) technique. In the OIS technology, when the optical axis is the Z axis, image shake can be prevented or corrected by using a method of moving a structure (e.g., a lens, etc.) within the camera module 100 on the X and Y axes perpendicular to the optical axis.

In addition, in order for the camera module 100 to have the SR function and the OIS function, the camera module 100 according to an exemplary embodiment of the present invention may further include a driver for moving the structure therein.

The present invention has been described based on exemplary embodiments, but the exemplary embodiments are intended for illustrative purposes and do not limit the present invention, and those skilled in the art will appreciate that various modifications and applications not illustrated above can be made without departing from the essential features of the exemplary embodiments. For example, each component described in detail in the exemplary embodiments may be modified. Furthermore, differences relating to such modifications and applications should be construed as being included in the scope of the present invention as defined in the appended claims.
