Camera module

Document No. 441173. Publication date: 2021-12-24.

Note: This technology, "Camera module," was designed and created by 朴贵莲 and 李庸仙 on 2020-05-13. Its main content is as follows: The invention discloses a camera module, comprising: a light output unit outputting a first light signal and a second light signal to an object; a sensor for receiving a first reflected light signal, which is the first light signal reflected by the object; and a control unit acquiring first distance information of the object using the first light signal and the first reflected light signal, wherein the output of the first light signal is smaller than the output of the second light signal, and the control unit determines whether to output the second light signal using the first distance information.

1. A camera module, comprising:

a light output unit configured to output a first light signal and a second light signal to an object;

a sensor configured to receive a first reflected light signal, the first reflected light signal being the first light signal reflected by the object; and

a control unit configured to acquire first distance information of the object using the first light signal and the first reflected light signal,

wherein an output power of the first light signal is smaller than an output power of the second light signal, and

the control unit determines whether to output the second light signal using the first distance information.

2. The camera module according to claim 1, wherein the control unit controls the light output unit to output the first light signal when the first distance information is less than a preset value, and

the control unit controls the light output unit to output the second light signal when the first distance information is greater than the preset value.

3. The camera module of claim 1, wherein the light output unit includes a first channel unit and a second channel unit, and

the second channel unit includes a greater number of light sources than the first channel unit.

4. The camera module according to claim 3, wherein the first light signal is output from the first channel unit, and

the second light signal is output from the second channel unit.

5. The camera module of claim 1, wherein the light output unit includes a plurality of light sources, and

an output power of the plurality of light sources when the second light signal is output is greater than an output power of the plurality of light sources when the first light signal is output.

6. The camera module of claim 1, wherein a period of the first light signal is shorter than a period of the second light signal.

7. The camera module according to claim 1, wherein the light output unit outputs a frame signal at a preset period, the frame signal being a minimum unit for calculating the first distance information.

8. The camera module according to claim 1, wherein the control unit acquires second distance information of the object using the second light signal, and

the light output unit alternately outputs a frame signal of the first light signal and a frame signal of the second light signal.

9. The camera module according to claim 1, wherein the second light signal is output when the first reflected light signal received by the sensor is less than or equal to a first received light amount, and

the output of the first light signal is turned off when the first reflected light signal is less than or equal to a second received light amount or when the first reflected light signal is not received, wherein the second received light amount is less than the first received light amount.

10. A camera module, comprising:

a light output unit configured to output a first light signal and a second light signal to an object;

a sensor configured to receive a first reflected light signal, the first reflected light signal being the first light signal reflected by the object; and

a control unit configured to output the second light signal when distance information acquired using the first light signal and the first reflected light signal is greater than a preset value, and configured to output the first light signal when the distance information is less than the preset value,

wherein the second light signal is used to acquire three-dimensional (3D) information of the object.

Technical Field

The present invention relates to a camera module for extracting distance information.

Background

Three-dimensional content is applied in many fields, such as education, manufacturing, autonomous driving, gaming, and culture, and distance information (a depth map) is required to acquire such content. Distance information indicates a spatial distance and refers to perspective information of one point with respect to another point in a two-dimensional image.

As methods of acquiring distance information, a method of projecting infrared (IR) structured light onto an object, a method using a stereo camera, a time-of-flight (ToF) method, and the like are in use. In the ToF method, the distance to an object is calculated using information about the emitted and reflected light. The biggest advantage of the ToF method is that distance information about a 3D space can be provided quickly in real time. Moreover, accurate distance information can be obtained without the user applying a separate algorithm or performing a hardware correction, and even when a very close or moving object is measured.
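For reference, the two common ToF distance calculations can be written compactly. The following is a minimal illustrative sketch, not taken from this document: direct ToF converts the round-trip travel time of the light, and indirect ToF converts the phase delay of a continuous wave modulated at a known frequency.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_time(delta_t_s: float) -> float:
    """Direct ToF: the light travels to the object and back, hence the /2."""
    return C * delta_t_s / 2.0

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: distance from the phase delay of a wave modulated at
    mod_freq_hz; unambiguous only up to C / (2 * mod_freq_hz)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a 90-degree phase delay at 20 MHz corresponds to about 1.87 m.
print(distance_from_phase(math.pi / 2.0, 20e6))
```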

However, when a certain amount of light is irradiated onto a very close subject (e.g., skin or eyes) for a predetermined time, it is difficult to ensure the safety of the subject.

Disclosure of Invention

Technical problem

The present invention is directed to providing a camera module that extracts distance information using a time-of-flight (ToF) method.

The present invention is also directed to providing a camera module that accurately determines the distance to an object while easily ensuring the safety of the object.

The present invention is also directed to providing a camera module having improved power efficiency when a short distance to an object is determined.

Technical scheme

According to an exemplary embodiment of the present invention, a camera module includes: a light output unit configured to output a first light signal and a second light signal to an object; a sensor configured to receive a first reflected light signal, the first reflected light signal being the first light signal reflected by the object; and a control unit configured to acquire first distance information of the object using the first light signal and the first reflected light signal, wherein an output power of the first light signal is smaller than an output power of the second light signal, and the control unit determines whether to output the second light signal using the first distance information.

The control unit may control the light output unit to output the first light signal when the distance information is less than a preset value, and to output the second light signal when the distance information is greater than the preset value.
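A minimal sketch of this decision logic follows (the threshold value and all names are illustrative assumptions; the document only fixes the comparison of the first distance information against a preset value, e.g., one corresponding to 10 cm):

```python
PRESET_DISTANCE_M = 0.10  # e.g., a preset value corresponding to 10 cm

def select_light_signal(first_distance_m: float) -> str:
    """Keep the low-power first light signal while the object is close;
    allow the high-power second light signal only beyond the preset value."""
    if first_distance_m < PRESET_DISTANCE_M:
        return "first_light_signal"   # low output power, safe at close range
    return "second_light_signal"      # high output power for the measurement
```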

The light output unit may include a first channel unit and a second channel unit, and the second channel unit may include a greater number of light sources than the first channel unit.

The first light signal may be output from the first channel unit, and the second light signal may be output from the second channel unit.

The light output unit may include a plurality of light sources, and the output power of the plurality of light sources when the second light signal is output is greater than the output power of the plurality of light sources when the first light signal is output.

The period of the first light signal may be shorter than the period of the second light signal.

The light output unit may output a frame signal, which is a minimum unit for calculating the first distance information, at a preset period.

The control unit may acquire second distance information of the object using the second light signal.

The light output unit may alternately output a frame signal of the first light signal and a frame signal of the second light signal.

The preset value may be a value corresponding to 10 cm.

When the first reflected light signal received by the sensor is received in an amount less than or equal to the first received light amount, the second light signal may be output.

The output of the first light signal may be turned off when the first reflected light signal received by the sensor is received in an amount less than or equal to a second received light amount or when the first reflected light signal is not received, wherein the second received light amount is less than the first received light amount.

According to an exemplary embodiment of the present invention, a camera module includes: a light output unit configured to output a first light signal and a second light signal to an object; a sensor configured to receive a first reflected light signal, the first reflected light signal being the first light signal reflected by the object; and a control unit configured to output the second light signal when distance information acquired using the first light signal and the first reflected light signal is greater than a preset value, and configured to output the first light signal when the distance information is less than the preset value, wherein three-dimensional (3D) information of the object is acquired using the second light signal.

Advantageous effects

According to an exemplary embodiment of the present invention, the distance to an object may be accurately determined, so that the safety of the object can easily be ensured.

Further, the distance to the object may be determined using a pre-pulse of low output power before a main pulse of high output power is output, thereby ensuring the safety of the object while increasing the measurable distance.

Further, the distance to an object located at a short distance can be accurately determined.

Further, when the distance information is generated, power consumption can be reduced.

Drawings

Fig. 1 is a conceptual diagram of a camera module according to an exemplary embodiment;

Fig. 2 is a diagram illustrating a light output unit according to an exemplary embodiment;

Fig. 3 is a diagram illustrating a light signal of a light output unit according to an exemplary embodiment;

Fig. 4 is a diagram illustrating a light output unit according to an exemplary embodiment;

Fig. 5 is a graph for describing the illuminance according to the distance when the first channel unit in fig. 4 is driven;

Fig. 6 is a graph for describing the illuminance according to the distance when the second channel unit in fig. 4 is driven;

Fig. 7 is a diagram illustrating a light output unit and a light signal according to another exemplary embodiment;

Fig. 8 is a diagram illustrating a light signal of a light output unit according to still another exemplary embodiment;

Fig. 9 is a diagram illustrating a light signal of a light output unit according to still another exemplary embodiment;

Fig. 10 is a diagram illustrating a light signal of a light output unit according to an exemplary embodiment;

Fig. 11 is a cross-sectional view of a camera module according to an exemplary embodiment;

Fig. 12 is a diagram for describing a sensor according to an exemplary embodiment;

Fig. 13 is a diagram for describing a process of generating an electric signal in a sensor according to an exemplary embodiment;

Fig. 14 is a timing diagram of one frame period in which a distance image is generated in a sensor according to an exemplary embodiment;

Fig. 15 is a diagram for describing the driving of a sensor according to an exemplary embodiment;

Fig. 16 is a diagram for describing the driving of a sensor according to another exemplary embodiment;

Fig. 17 is a diagram for describing the driving of a sensor according to still another exemplary embodiment;

Fig. 18 shows raw images of four phases acquired from a camera module according to an exemplary embodiment;

Fig. 19 is a magnitude image acquired from a camera module according to an exemplary embodiment;

Fig. 20 shows a distance image acquired from a camera module according to an exemplary embodiment;

Fig. 21 is a flowchart for describing a method of driving a camera module according to an exemplary embodiment;

Fig. 22 is a flowchart for describing a method of driving a camera module according to another exemplary embodiment.

Detailed Description

Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings.

However, the technical spirit of the present invention is not limited to the exemplary embodiments to be described and may be implemented in various forms. One or more of the elements may be selectively combined or substituted between the embodiments without departing from the technical spirit of the present invention.

Further, terms (including technical and scientific terms) used herein may be interpreted in the meaning commonly understood by one of ordinary skill in the art, unless otherwise defined. General terms such as terms defined in dictionaries may be understood in consideration of their background meanings in the related art.

In addition, the terminology used herein is not intended to be limiting of the invention, but rather to describe the exemplary embodiments.

In the specification, the singular form may also include the plural form unless explicitly stated otherwise. When expressed as "at least one (or one or more) of A, B and C," it may also include one or more of all possible combinations of A, B and C.

In addition, terms such as first, second, A, B, (a) and (b) may be used herein to describe components of exemplary embodiments of the invention.

Each term is not intended to limit the nature, order, sequence, etc. of the corresponding component but only to distinguish the corresponding component from other components.

When one element is described as being "connected," "coupled," or "joined" to another element, such description includes both the case where the element is directly connected, coupled, or joined to the other element and the case where the element is "connected," "coupled," or "joined" to the other element via a third element located between them.

Moreover, when one component is described as being formed or disposed "on or under" another component, such description encompasses both the case where the two components are in direct contact with each other and the case where they are in indirect contact with one or more further components interposed between them. Further, "on or under" may encompass a component being formed on the upper side or the lower side with respect to the other component.

A camera module according to an exemplary embodiment to be described below may be used as an optical device or a part of an optical device. First, the optical apparatus may include any one of a cellular phone, a mobile phone, a smart phone, a portable smart device, a digital camera, a notebook computer, a digital broadcasting terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), and a navigation device. However, the type of the optical device is not limited thereto, and any device for capturing an image or a photograph may be included in the optical device.

The optical device may include a body. The body may be of a bar type. Alternatively, the body may have any of various structures, such as a slide type, a folding type, a swing type, or a rotation type, in which two or more sub-bodies are coupled to be movable relative to each other. The body may include a case (casing, housing, or cover) forming the exterior. For example, the body may include a front case and a rear case. Various electronic components of the optical device may be embedded in a space formed between the front case and the rear case.

The optical device may comprise a display. The display may be disposed on one surface of a body of the optical device. The display may output an image. The display may output an image captured by the camera.

The optical device may comprise a camera. The camera may include a time-of-flight (ToF) camera module. The ToF camera module may be disposed on a front surface of a body of the optical device. In this case, the ToF camera module may be used for various types of biometric recognition (e.g., face recognition, iris recognition, and vein recognition of the user) to perform secure authentication of the optical device.

Fig. 1 illustrates a conceptual diagram of a camera module according to an exemplary embodiment.

Referring to fig. 1, a camera module 100 according to an exemplary embodiment may include a light output unit 110, an optical unit 120, a sensor 130, and a control unit 140.

First, the light output unit 110 may generate a light signal in a desired form and irradiate it to the object O. The light output unit 110 may be, for example, a light emitting module, a light emitting unit, a light emitting assembly, or a light emitting device.

The light output unit 110 may generate and output a light signal in the form of a pulse wave or a continuous wave. Here, the continuous wave may be a sine wave or a square wave, but is not necessarily limited thereto.

Further, since the light output unit 110 generates the light signal in the form of a pulse wave or a continuous wave, the camera module 100 may use the phase difference or the time difference between the light signal output from the light output unit 110 and the reflected light signal that is reflected from the object and then input to the camera module 100. In an exemplary embodiment, the camera module 100 may calculate the distance to the object using this phase difference or time difference.

In this specification, the irradiated light GS refers to the light signal output from the light output unit 110 and incident on the object, and the reflected light RS refers to the light that is output from the light output unit 110, reaches the object, is reflected by the object, and is then input to the camera module 100. From the perspective of the object, the irradiated light GS is the incident light, and from the perspective of the camera module, the reflected light RS is the input light. Hereinafter, the irradiated light will be described as a light signal (e.g., a first light signal or a second light signal) or an irradiated light signal, and the reflected light will be described as a reflected light signal.

Further, the light output unit 110 irradiates the generated light signal onto the object for a predetermined integration time. Here, the integration time refers to a period during which a pixel receives the reflected light and generates electric charge in order to acquire distance information. One or more integration times may be set, and the one or more integration times may constitute a frame period. A detailed description thereof will be provided below.

Further, when a plurality of frames are generated, the above-described integration time occurs a plurality of times, that is, the integration time is repeated. For example, when the camera module 100 photographs an object at 20 frames per second (FPS), the integration time is 1/20 of a second. When 100 frames are generated and the integration time is equal to the frame period, the integration time is repeated 100 times. However, the present invention is not limited thereto, and a plurality of integration times may exist within one frame period.

The light output unit 110 may generate not only a light signal having a predetermined frequency but also a plurality of light signals having different frequencies. The light output unit 110 may output the plurality of light signals having different frequencies sequentially and repeatedly, or may output them simultaneously. For such an operation, in an exemplary embodiment, the light output unit 110 may include a light source E (see fig. 2), a light changing unit (not shown), and a light collecting unit (not shown).

First, the light source E may generate light. The light generated by the light source E may be infrared light having a wavelength of 770 nm to 3000 nm, or visible light having a wavelength of 380 nm to 770 nm. The light source E may include a light emitting diode (LED), and may have a form in which a plurality of LEDs are arranged in a certain pattern. The light source E may also include an organic light emitting diode (OLED) or a laser diode (LD). Alternatively, the light source E may be a vertical cavity surface emitting laser (VCSEL). A VCSEL is a type of laser diode that converts an electrical signal into a light signal, and may use a wavelength of about 800 nm to 1000 nm, such as about 850 nm or about 940 nm.

The light source E is repeatedly turned on/off at specific time intervals to generate a light signal in the form of a pulse wave or a continuous wave. The specific time interval may be related to the frequency of the optical signal. The on/off of the light source E may be controlled by the light changing unit.

The light changing unit (not shown) may control on/off of the light source E and control the light source E to generate a light signal in the form of a continuous wave or a pulse wave. That is, the light changing unit (not shown) may control the light source E to generate the light signal in the form of a continuous wave or a pulse wave by frequency modulation, pulse modulation, or the like.

The light collection unit (not shown) may change the light path so that the light generated from the light source E has an array spot. For example, the light collection unit (not shown) may include an imaging lens, a microlens array, or a Diffractive Optical Element (DOE). Due to this configuration, the light emitted from the camera module 100 toward the object O may have a plurality of array spots. Therefore, even when the distance between the camera module 100 and the object O increases, the light emitted from the camera module 100 can easily reach the object O due to being collected. Accordingly, the camera module 100 according to the exemplary embodiment may implement long-distance optical transmission. In this case, the number of arrayed light spots may be set differently, but in this specification, description will be provided based on the light output unit 110 including the light source E.

Further, the light changing unit may include an actuator. For example, when a lens held by the actuator moves up and down, the light may be collected and emitted to the object as spot light, or may be emitted to the object as surface (planar) light.

Meanwhile, the optical unit 120 may include at least one lens. The optical unit 120 may collect a reflected light signal reflected from the object through at least one lens to transmit the collected light signal to the sensor 130.

At least one lens of the optical unit 120 may include a solid lens. Further, the at least one lens may include a variable lens. The variable lens may be a variable-focus lens, that is, a lens whose focus is adjustable. The variable lens may be at least one of a liquid lens, a polymer lens, a liquid crystal lens, a voice coil motor (VCM) type, and a shape memory alloy (SMA) type. Liquid lenses include those containing one liquid and those containing two liquids. In a liquid lens containing one liquid, the focus may be changed by adjusting a diaphragm provided at a position corresponding to the liquid, for example, by pressing the diaphragm with an electromagnetic force between a magnet and a coil. A liquid lens containing two liquids may include a conductive liquid and a non-conductive liquid, and the interface formed between them may be adjusted using a voltage applied to the liquid lens. In a polymer lens, the focus may be changed by controlling the polymer material via a piezoelectric driver or the like. In a liquid crystal lens, the focus may be changed by controlling the liquid crystal with an electromagnetic force. In the VCM type, the focus may be changed by moving a solid lens, or a lens assembly including the solid lens, through the electromagnetic force between a magnet and a coil. In the SMA type, the focus may be changed by controlling a solid lens, or a lens assembly including the solid lens, using a shape memory alloy. In addition, the optical unit 120 may include an optical plate, which may be a light-transmitting plate.

In addition, the optical unit 120 may include a filter (not shown) that transmits light within a specific wavelength range. In an exemplary embodiment, the filter may transmit only light of a preset wavelength band and block other light. In this case, the filter may pass light of an infrared (IR) band. For example, the filter may include an IR band-pass filter that passes light having a wavelength of 780 nm to 1000 nm.

The sensor 130 may generate an electric signal using the input light signal collected by the optical unit 120. In an exemplary embodiment, the sensor 130 may absorb the input light signal in synchronization with the on/off period of the light output unit 110. For example, the sensor 130 may absorb light in phase and out of phase with the light signal output from the light output unit 110.

In addition, the sensor 130 may generate an electric signal corresponding to each of a plurality of reference signals having different phases. For example, the electric signal may be a signal obtained by mixing each reference signal with the reflected light, where the mixing may be a convolution, a multiplication, or the like. Further, the frequency of the reference signals may be set to correspond to the frequency of the light signal output from the light output unit 110. In an exemplary embodiment, the frequency of the reference signals may be the same as that of the light signal of the light output unit 110.

As described above, when the light output unit 110 generates light signals having a plurality of frequencies, the sensor 130 may generate electric signals according to the plurality of reference signals corresponding to each frequency. For example, a switching operation of a gate may be performed in each pixel of the sensor 130 in response to the reference signals, and the charge generated by absorbing the reflected light signal RS may be stored in a charging element (e.g., a capacitor) according to the switching operation, so that an electric signal charged with this charge is finally output. The electric signal may correspond to the charge amount or voltage for each reference signal and may be output for each pixel.

The control unit 140 may acquire information on the distance to the object using the phase difference between the irradiated light GS and the reflected light signal RS. Here, the phase difference may be calculated from the electric signals output from the sensor 130.
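A hedged sketch of this calculation follows, assuming the conventional four-phase demodulation with reference signals spaced 90° apart (as described further below); the exact pairing of the charges inside the arctangent is a common convention and not necessarily this document's implementation.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_charges(q0: float, q90: float, q180: float, q270: float,
                          mod_freq_hz: float) -> float:
    """q0..q270: per-pixel charges accumulated under reference signals
    phased at 0/90/180/270 degrees relative to the light signal."""
    phase = math.atan2(q90 - q270, q0 - q180)  # phase delay of RS vs. GS
    phase %= 2.0 * math.pi                     # wrap into [0, 2*pi)
    return C * phase / (4.0 * math.pi * mod_freq_hz)
```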

In addition, in an exemplary embodiment, in order to detect an object O located at a short distance from the camera module 100, the control unit 140 may first acquire the distance to the object (first distance information) using the phase difference between the first light signal output from the light output unit 110 and the corresponding first reflected light signal, and may then determine, according to the acquired distance, whether the light output unit 110 irradiates the second light signal, which is the light signal irradiated after the first light signal.

In other words, when the distance between the object O and the camera module 100 is very short, the amount of reflected light may be large, and the electric signal generated from the reflected light signal may therefore have a large amplitude. When the electric signal has a magnitude greater than a reference value, it may be difficult for the control unit to accurately calculate the distance information for each pixel. Further, when the nearby object is a part of a human body (e.g., skin or eyes), it may be difficult to ensure safety. Therefore, when measuring a distance, the control unit according to the exemplary embodiment may initially irradiate a light signal having a low irradiation amount (here, the first light signal), and may then determine the output of a light signal having a high irradiation amount (here, the second light signal), thereby easily ensuring safety with respect to the light signal. A detailed description thereof will be provided below.

The distance information acquired from the first light signal and the first reflected light signal will be described as first distance information, and the distance information acquired from the second light signal and the second reflected light signal will be described as second distance information. The first reflected light signal is the first light signal reflected by the object and input to the camera module, and the second reflected light signal is the second light signal reflected by the object and input to the camera module.

Further, the control unit 140 may control the optical unit 120 to shift the optical path of the reflected light signal (e.g., the first reflected light signal or the second reflected light signal). Due to this configuration, a plurality of image data for extracting a high-resolution distance image can be output.

In addition, the camera module 100 according to an exemplary embodiment may further include a calculation unit (not shown). The calculation unit may calculate depth information having a resolution higher than that of the individual image data by using the electric signals received from the sensor 130 and combining the plurality of image data extracted under the control of the control unit 140. Further, the calculation unit may be arranged in the optical device including the camera module or, as shown, in the camera module 100. Hereinafter, the description is based on the calculation unit being provided in the camera module 100.

The calculation unit (not shown) may receive the information detected by the sensor 130 and perform calculations on it. The calculation unit may generate a plurality of pieces of low-resolution information using the electric signals received from the sensor 130 and then generate high-resolution distance information using the plurality of pieces of low-resolution information. For example, the high-resolution distance information may be generated by rearranging the plurality of pieces of low-resolution information.
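As an illustration of "rearranging a plurality of pieces of low-resolution information," the sketch below assumes four depth frames captured with half-pixel optical-path shifts, a typical pixel-shift super-resolution scheme; the specific shift pattern is an assumption, not taken from this document.

```python
import numpy as np

def rearrange_to_high_res(f00: np.ndarray, f01: np.ndarray,
                          f10: np.ndarray, f11: np.ndarray) -> np.ndarray:
    """Interleave four low-resolution frames, each captured with a
    half-pixel shift of the optical path, into one frame with twice the
    resolution in each direction."""
    h, w = f00.shape
    hi = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    hi[0::2, 0::2] = f00   # no shift
    hi[0::2, 1::2] = f01   # half-pixel shift to the right
    hi[1::2, 0::2] = f10   # half-pixel shift downward
    hi[1::2, 1::2] = f11   # diagonal half-pixel shift
    return hi
```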

In this case, the calculation unit (not shown) may calculate the distance between the object and the camera module 100 using a time difference between the light signal output from the light output unit and the light signal received by the sensor or using a plurality of pieces of information acquired during a plurality of exposure times of the sensor that expose the effective area of the sensor at different phases.

The term "cell" used in the present exemplary embodiment refers to a software component or a hardware component, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), which performs a specific task. However, the term "unit" is not limited to software components or hardware components. The "unit" may be configured to reside on an addressable storage medium and configured to operate one or more processors. Thus, by way of example, a unit may include components such as software components, object-oriented software components, class components and task components, procedures, functions, attributes, programs, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, database structures, tables, arrays, and parameters. The functionality provided for in the components and units may be combined into fewer components and units or further separated into additional components and units. Further, the components and units may be implemented such that the components and units operate one or more Central Processing Units (CPUs) in the device or secure multimedia card.

Fig. 2 is a diagram illustrating a light output unit according to an exemplary embodiment, and fig. 3 is a diagram illustrating a light signal of the light output unit according to an exemplary embodiment.

Referring to fig. 2 and 3, as described above, the light output unit 110 according to an exemplary embodiment may include a plurality of light sources E. As an example, the plurality of light sources E may be arranged in the form of a matrix or the like. The light output unit 110 according to an exemplary embodiment may output a first light signal GS1 and a second light signal GS2; that is, each light source E may output the first light signal GS1 and the second light signal GS2. Hereinafter, however, the first light signal GS1 and the second light signal GS2 each refer to the total light signal output from the light output unit 110.

In an exemplary embodiment, the phase of the first light signal GS1 may lead the phase of the second light signal GS2. In other words, the first light signal GS1 may be output before the second light signal GS2.

The output power of the first light signal GS1 (hereinafter referred to as the first output power S1) may be less than the output power of the second light signal GS2 (hereinafter referred to as the second output power S2). That is, the first output power S1 may be smaller than the second output power S2. Here, the first output power S1 is the illuminance (mW/cm²) of the first light signal GS1, and the second output power S2 is the illuminance (mW/cm²) of the second light signal GS2. Illuminance refers to light output power per unit area.

That is, the camera module according to an exemplary embodiment may irradiate the first light signal GS1, whose illuminance is less than that of the second light signal GS2, before the second light signal GS2 in order to acquire the distance information. Accordingly, when the object O is very close to the camera module, the camera module may determine whether the second light signal GS2, which may be more harmful to a human body or the like, should be irradiated at the corresponding distance, thereby ensuring safety.

The first light signal GS1 may have a first amplitude a1, a first period T1, and a first frame period F1. The first amplitude a1 is the amplitude of the first light signal GS1, the first period T1 is its signal period, and the first frame period F1 is the period in which distance information can be calculated. The product of the first amplitude a1 and the first period T1 may be the total illuminance, that is, the total output power of the first light signal GS1.

Similarly, the second light signal GS2 may have a second amplitude a2, a second period T2, and a second frame period F2. The second amplitude a2 is the amplitude of the second light signal GS2, the second period T2 is its signal period, and the second frame period F2 is the period in which distance information can be calculated.

In this case, in an exemplary embodiment, the first amplitude a1 of the first light signal GS1 may be smaller than the second amplitude a2 of the second light signal GS2, while the first period T1 may be the same as the second period T2 and the first frame period F1 may be the same as the second frame period F2. Similarly, the product of the second amplitude a2 and the second period T2 may be the total illuminance, that is, the total output power of the second light signal GS2. Therefore, since the amplitude of the first light signal GS1 is smaller than that of the second light signal GS2 while the periods are the same, the first output power S1 may be smaller than the second output power S2.

Fig. 4 is a diagram illustrating a light output unit according to an exemplary embodiment, fig. 5 is a graph for describing the illuminance according to the distance when the first channel unit in fig. 4 is driven, and fig. 6 is a graph for describing the illuminance according to the distance when the second channel unit in fig. 4 is driven.

Referring to fig. 4, the light output unit 110 according to an exemplary embodiment may be divided into areas according to the number of light sources E disposed therein, and may include a first channel unit CH1 and a second channel unit CH2.

First, the first channel unit CH1 and the second channel unit CH2 may each include one or more light sources E. In addition, the number of light sources E in the first channel unit CH1 may be less than the number of light sources E in the second channel unit CH2.

The first channel unit CH1 may at least partially overlap the second channel unit CH2. However, the present invention is not limited thereto, and the first channel unit CH1 and the second channel unit CH2 may be disposed to be spaced apart from each other.

Also, in an exemplary embodiment, the first light signal may be the light signal irradiated from the first channel unit CH1, and the second light signal may be the light signal irradiated from the second channel unit CH2. Accordingly, as described above, since the number of light sources E in the first channel unit CH1 is less than the number of light sources E in the second channel unit CH2, the first output power of the first light signal may be less than the second output power of the second light signal.

In other words, in order to make the first output power smaller than the second output power, the light output unit 110 may be divided into areas that are driven with different irradiation amounts. Due to this configuration, the camera module can accurately measure a nearby object and ensure safety with respect to the light signal of the light output unit 110.

Referring to fig. 5 and 6, the illuminance according to distance was measured for a light output unit that includes 203 light sources within a 3 mm × 3 mm area and is divided into a first channel unit CH1 having one light source and a second channel unit CH2 having 203 light sources, with each channel unit driven in turn. Fig. 5 and Table 2 below show the illuminance according to distance when the first channel unit CH1 is driven, and fig. 6 and Table 1 below show the illuminance according to distance when the second channel unit CH2 is driven.

[Table 1]

Distance (mm)    Illuminance (mW/cm²)
350              62.53
550              23.45
750              12.55
1000             7.05

[Table 2]

Distance (mm)    Illuminance (mW/cm²)
5                1666.60
25               58.57
50               14.66
75               6.38
100              3.59

That is, at the same distance, the illuminance of the first channel unit CH1 may be smaller than that of the second channel unit CH2. Further, the distance at which the first channel unit CH1 produces a given illuminance may be shorter than the distance at which the second channel unit CH2 produces the same illuminance. In other words, since the illuminance is very large when the camera module according to the exemplary embodiment is close to the object, the first and second light signals may be output from different numbers of light sources in order to measure the distance accurately and ensure the safety of the human body.
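The measurements above can also be used to estimate the distance at which a chosen illuminance limit is met. The sketch below interpolates Table 2 on a log-log scale; the 10 mW/cm² limit is a placeholder, not a value from this document.

```python
import numpy as np

# Distance (mm) and illuminance (mW/cm^2) from Table 2 above.
dist_mm = np.array([5.0, 25.0, 50.0, 75.0, 100.0])
illum = np.array([1666.60, 58.57, 14.66, 6.38, 3.59])

def min_distance_for_limit(limit_mw_cm2: float) -> float:
    """Estimate the shortest distance at which the measured illuminance
    stays at or below the given limit (log-log interpolation)."""
    log_d = np.interp(np.log(limit_mw_cm2),
                      np.log(illum[::-1]),   # ascending illuminance
                      np.log(dist_mm[::-1]))
    return float(np.exp(log_d))

print(min_distance_for_limit(10.0))  # roughly 60 mm for a 10 mW/cm^2 limit
```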

Fig. 7 is a diagram illustrating a light output unit and a light signal according to another exemplary embodiment.

Referring to fig. 7, the light output unit 110 according to another exemplary embodiment may output the first light signal GS1 and the second light signal GS2 by driving the light sources of the same channel unit.

As an example, the light output unit 110 may include a third channel unit CH3 including all of the light sources, and the third channel unit CH3 may be driven to output the first light signal GS1 and the second light signal GS2. That is, the first light signal GS1 and the second light signal GS2 may be light signals irradiated from the same number of light sources.

However, in another exemplary embodiment, the amplitude of the first light signal GS1 of each light source may be smaller than the amplitude of the second light signal GS2 of each light source. In fig. 7, the first light signal GS1 and the second light signal GS2 are shown for one light source, but they refer to the total light signals output from the third channel unit CH3.

Therefore, even when the first light signal GS1 and the second light signal GS2 are output from the same channel unit, since the first amplitude a1′ of the first light signal GS1 is smaller than the second amplitude a2′ of the second light signal GS2, the first output power S1′ of the first light signal GS1 may be smaller than the second output power S2′ of the second light signal GS2. That is, in the light output unit 110, the light output power of each light source may be larger when the second light signal GS2 is output than when the first light signal GS1 is output.

In this case, the period T1′ of the first light signal GS1 may be the same as the period T2′ of the second light signal GS2. Therefore, by adjusting the driving signal (e.g., a driving current or a driving voltage) applied to the light sources of the light output unit 110, the irradiation amount of the light output unit can be adjusted with low power and a simple operation.

Fig. 8 is a diagram illustrating a light signal of a light output unit according to still another exemplary embodiment.

Referring to fig. 8, as described above, the light output unit according to still another exemplary embodiment may output the first light signal GS1 before the second light signal GS2, and the first output power S1″ may be smaller than the second output power S2″.

In this case, the first amplitude a1″ of the first light signal GS1 may be the same as the second amplitude a2″ of the second light signal GS2. For example, the first light signal GS1 and the second light signal GS2 may be output from the same channel unit and light sources, driven with the same instantaneous output power.

However, unlike the amplitudes described above, the first period T1″ of the first light signal GS1 may be different from the second period T2″ of the second light signal GS2; specifically, the first period T1″ may be shorter than the second period T2″. Accordingly, the first frame period F1″ may be shorter than the second frame period F2″.

Further, in a sensor to be described below, the integration time of the first reflected light signal may be different from the integration time of the second reflected light signal, and the integration time of the first reflected light signal may be shorter than the integration time of the second reflected light signal.

Fig. 9 is a diagram illustrating a light signal of a light output unit according to still another exemplary embodiment.

Referring to fig. 9, the light output unit according to still another exemplary embodiment may output the first light signal GS1 before the second light signal GS2, and the first output power S1‴ may be smaller than the second output power S2‴.

In this case, the first amplitude a1‴ and the first period T1‴ of the first light signal GS1 may be different from the second amplitude a2‴ and the second period T2‴ of the second light signal GS2, respectively. In an exemplary embodiment, the first amplitude a1‴ may be less than the second amplitude a2‴, and the first period T1‴ may be shorter than the second period T2‴. For example, the first light signal GS1 and the second light signal GS2 may be output from different channel units or from light sources having different output powers. Further, the first frame period F1‴ may be shorter than the second frame period F2‴.

Further, in addition to the exemplary embodiments described in this specification, even when one of the first amplitude a1‴ and the first period T1‴ is less than the corresponding one of the second amplitude a2‴ and the second period T2‴ while the other is greater, the first output power S1‴ may still be less than the second output power S2‴, and the present invention includes all such examples.
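All of these variants reduce the same quantity: modeling the total output power of a signal as the product of its amplitude and its period, as above, the first light signal remains weaker whenever a1 × T1 < a2 × T2, regardless of which factor is reduced. A small check with placeholder numbers (units are arbitrary):

```python
def total_output(amplitude: float, period: float) -> float:
    """Total output power modeled as amplitude x period, as described above."""
    return amplitude * period

# (a1, T1) vs. (a2, T2) for the variants discussed above; values are placeholders.
variants = {
    "smaller amplitude, equal period": ((0.5, 1.0), (1.0, 1.0)),
    "equal amplitude, shorter period": ((1.0, 0.5), (1.0, 1.0)),
    "both smaller":                    ((0.5, 0.5), (1.0, 1.0)),
    "mixed but smaller product":       ((2.0, 0.2), (1.0, 1.0)),
}
for name, ((a1, t1), (a2, t2)) in variants.items():
    assert total_output(a1, t1) < total_output(a2, t2), name
print("first signal weaker in all variants")
```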

Fig. 10 is a diagram illustrating a light signal of a light output unit according to an exemplary embodiment.

Referring to fig. 10, the light output unit according to an exemplary embodiment may sequentially output the first and second light signals GS1 and GS2 and then output the first light signal GS1 again.

That is, when the distance to the object measured using the first light signal GS1 and the first reflected light signal is determined to be longer than the preset distance, the control unit may irradiate the second light signal GS2, since the safety of the human body or the like is then ensured. The control unit may acquire information on the distance to the object using the second light signal and the second reflected light signal. The acquisition of the distance information will be described in detail below.

Further, the control unit may control the light output unit to output the first light signal GS1 at a preset period. For example, the light output unit may output the first light signal GS1 every first driving period td. Accordingly, the camera module may periodically check the distance between the object and the camera module, and when this distance is less than the preset distance, the camera module may turn off the output of the second light signal GS2 to ensure safety. In this way, the camera module according to the exemplary embodiment may reduce the risk of safety problems caused by movement or the like while measuring the distance to the object.

In addition, as a modification, the light output unit may output the first light signal every frame period of the second light signal, the frame period being the minimum unit for calculating distance information. Accordingly, whether to acquire the second distance information may be determined periodically according to the distance to the object obtained from the first distance information. Further, since the output power of the light signal is determined for each frame period as the minimum unit, the second distance information can be measured effectively while safety is maintained for a nearby object.

According to the above-described exemplary embodiments, the control unit may output the second light signal when the first reflected light signal received by the sensor is less than or equal to a first received light amount. Here, the first received light amount may be set based on the amount that is harmful to the human body at a point 10 cm away from the object.

For example, when the distance to the object is long, the first reflected light signal is received in an amount smaller than the first received light amount (including the case where there is no reflected signal), so the control unit may output the second light signal. Due to this configuration, information on the distance to a distant object can be acquired accurately while the human body is protected from harm at short distances.

Further, as described above, when the object is located very close to the camera module, the control unit may not output the second light signal, since the first reflected light signal may then be greater than the first received light amount. Furthermore, the output of the first light signal may be turned off. Here, a saturated (null) value caused by overcharge of the capacitor, which will be described below, is also treated as being larger than the first received light amount.

Further, the control unit may turn off the output of the first light signal when the first reflected light signal is received in an amount less than or equal to a second received light amount, which is smaller than the first received light amount, or when the first reflected light signal is not received at all. Due to this configuration, the safety of the human body can be ensured at a short distance at which the object may come into contact with the light output unit.
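Putting the received-light-amount rules together, the following is a hedged sketch of one control decision (the threshold values are placeholders; the document only fixes that the second received light amount is smaller than the first):

```python
FIRST_AMOUNT = 100.0   # at or below this, the object is far enough for GS2
SECOND_AMOUNT = 10.0   # at or below this (or no return), possible contact

def control_step(received_amount: float | None) -> dict:
    """Decide the outputs from the amount of the first reflected light signal."""
    if received_amount is None or received_amount <= SECOND_AMOUNT:
        # Object may be touching the light output unit: turn everything off.
        return {"first_signal": False, "second_signal": False}
    if received_amount <= FIRST_AMOUNT:
        # Weak return -> object is distant -> the high-power signal is safe.
        return {"first_signal": True, "second_signal": True}
    # Strong return (or saturation) -> very close object -> pre-pulse only.
    return {"first_signal": True, "second_signal": False}
```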

Fig. 11 is a cross-sectional view of a camera module according to an exemplary embodiment.

Referring to fig. 11, a camera module according to an exemplary embodiment may include a lens assembly 310, a sensor 320, and a printed circuit board 330. Here, the lens assembly 310 may correspond to the optical unit 120 of fig. 1, and the sensor 320 may correspond to the sensor 130 of fig. 1. The control unit 140 of fig. 1 may be implemented on the printed circuit board 330 or the sensor 320. Although not shown, the light output unit 110 of fig. 1 may be provided on the printed circuit board 330, or may be provided as a separate component. Further, the output of the optical signal of the optical output unit 110 may be controlled by the control unit 140.

Specifically, lens assembly 310 may include a lens 312, a lens barrel 314, a lens holder 316, and an IR filter 318.

The lens 312 may be provided as a plurality of lenses or may be provided as one lens. When the lens 312 is provided as a plurality of lenses, the respective lenses may be arranged with respect to the central axis thereof to form an optical system. Here, the central axis may be the same as the optical axis of the optical system. The lens 312 may include the variable lens described above.

The lens barrel 314 may be coupled to the lens holder 316, and may have a space capable of accommodating a lens therein. Although the lens barrel 314 may be rotationally coupled to a lens or lenses, this is merely an example, and the lens barrel 314 may be coupled by other methods, for example, using an adhesive (e.g., an adhesive resin such as epoxy).

The lens holder 316 may be coupled to the lens barrel 314 to support it, and may be disposed on the printed circuit board 330 on which the sensor 320 is mounted. Due to the lens holder 316, a space in which the IR filter 318 can be disposed may be formed under the lens barrel 314. Although not shown, a driver that is controlled by the control unit 140 and that tilts or shifts the IR filter 318 may be provided in the lens holder 316. A helical pattern may be formed on the inner circumferential surface of the lens holder 316, and the lens holder 316 may be rotationally coupled to the lens barrel 314, on the outer circumferential surface of which a helical pattern is similarly formed. However, this is merely an example; the lens holder 316 and the lens barrel 314 may be coupled by an adhesive, or the lens holder 316 and the lens barrel 314 may be integrally formed.

The lens holder 316 may be divided into an upper holder 316-1 coupled to the lens barrel 314 and a lower holder 316-2 disposed on the printed circuit board 330 on which the sensor 320 is mounted. The upper holder 316-1 and the lower holder 316-2 may be integrally formed, may be formed as separate structures and then connected or coupled, or may be structures that are separated and spaced apart from each other. In this case, the diameter of the upper holder 316-1 may be smaller than the diameter of the lower holder 316-2.

The above example is only an exemplary embodiment, and the optical unit 120 may be formed in another structure capable of condensing a reflected light signal incident to the ToF camera module 100 and transmitting the collected light signal to the sensor 130.

Fig. 12 is a diagram for describing a sensor according to an exemplary embodiment, fig. 13 is a diagram for describing a process of generating an electric signal in the sensor according to an exemplary embodiment, and fig. 14 is a timing diagram of one frame period in which a distance image is generated in the sensor according to an exemplary embodiment.

Referring to fig. 12, as described above, the sensor 130 may include a plurality of pixels PX and have an array structure. In this case, the sensor 130 may be an Active Pixel Sensor (APS) and may be a Complementary Metal Oxide Semiconductor (CMOS) sensor. Further, the sensor 130 may be a Charge Coupled Device (CCD) sensor. The sensor 130 may include a ToF sensor that receives IR light signals reflected by an object to measure distance using a time difference or a phase difference.

For example, in the sensor 130, a plurality of pixels may be arranged in a matrix along a first direction and a second direction. In an exemplary embodiment, the plurality of pixels may include first pixels P1 and second pixels P2, which may be alternately arranged in the first and second directions. That is, around one first pixel P1, second pixels P2 may be disposed adjacent to it in the first and second directions; in other words, in the sensor 130, the first pixels P1 and the second pixels P2 may be disposed in a checkerboard pattern. As shown in fig. 12, in the case of the sensor 130 having a resolution of 320 × 240, 76800 pixels may be arranged in such a grid form.
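A quick sketch of this checkerboard arrangement for the 320 × 240 example (illustrative only; the actual pixel assignment is defined by the sensor design):

```python
import numpy as np

def checkerboard_mask(rows: int = 240, cols: int = 320) -> np.ndarray:
    """True for first pixels P1, False for second pixels P2, alternating
    along both the first and the second direction."""
    r, c = np.indices((rows, cols))
    return (r + c) % 2 == 0

mask = checkerboard_mask()
print(mask.size)      # 76800 pixels in total for 320 x 240
print(mask[:2, :4])   # P1/P2 alternate in both directions
```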

In addition, the first pixel P1 and the second pixel P2 may receive light having peak wavelengths in different wavelength bands. For example, the first pixel P1 may receive light having its peak wavelength in the IR band, and the second pixel P2 may receive light having its peak wavelength outside the IR band. In addition, one of the first pixel P1 and the second pixel P2 may not receive light.

In addition, the plurality of pixels PX may have various shapes, such as quadrangles, triangles, polygons, and circles. In addition, the active area in the pixel PX may also have various shapes, such as a quadrangle, a triangle, a polygon, and a circle.

That is, the plurality of pixels PX may be disposed to be spaced apart from each other by a certain interval. Such a spatial interval may be much smaller than the size of the pixels PX, and the wiring may be disposed in the spatial interval. Hereinafter, a description will be provided by ignoring the spatial interval in this specification.

The pixel PX may include a detection unit 131 (photogate), a switching unit (hereinafter, referred to as a first gate 132 and a second gate 133), and accumulation units 134 and 135. The detection unit 131 may include an N-type semiconductor layer, a P-type semiconductor layer, and an active layer disposed between the N-type semiconductor layer and the P-type semiconductor layer, and may generate a current according to a reflected optical signal. In other words, the detection unit 131 may receive the reflected light signal to generate electrons.

The first and second gates 132 and 133 may regulate transfer of electrons generated by the detection unit 131 to the accumulation units 134 and 135, respectively. Fig. 12 shows that a plurality of gates, for example, a first gate 132 and a second gate 133 are provided, and electrons are selectively transferred to the accumulation units 134 and 135 according to a control signal for switching the gates. Further, on/off of the gate may be controlled according to a reference signal described in the specification. The accumulation units 134 and 135 may accumulate the transferred electrons. The electron accumulation time or period may be controlled by a signal applied to the switching unit. The accumulation units 134 and 135 may accumulate electrons for a certain time, output the amount of the accumulated electrons, and then release the accumulated electrons.

In other words, the sensor 130 may include a charging element and a switching element. Here, the charging elements may be the accumulation units 134 and 135, and the switching elements may be the first gate 132 and the second gate 133. In addition, in this case, the charging element may include a capacitor or the like, and the switching element may include various switching elements such as a field effect transistor, but the present invention is not limited to the above type.
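As a rough behavioral sketch of the pixel structure described above, one pixel may be modeled as a detection unit feeding two gated accumulation units; the class below is a hypothetical illustration, assuming that photogenerated electrons are transferred only while the corresponding gate signal is on.

# Behavioral sketch of one pixel PX: a detection unit (photogate) generates
# electrons, two gates (switching elements) steer them, and two accumulation
# units (charging elements) store them. All names are hypothetical.
class Pixel:
    def __init__(self):
        self.acc = [0.0, 0.0]  # accumulation units 134 and 135

    def expose(self, electrons, gate1_on, gate2_on):
        # Electrons are transferred only while the corresponding gate is on.
        if gate1_on:
            self.acc[0] += electrons
        if gate2_on:
            self.acc[1] += electrons

    def readout(self):
        # Output the accumulated amounts, then release the stored charge.
        out = tuple(self.acc)
        self.acc = [0.0, 0.0]
        return out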

Through the above operation, each pixel PX may generate an electric signal. In addition, the pixel PX may include a plurality of photodiodes and a plurality of transistors.

Referring to fig. 13, the phase of the reflected light signal RS may be delayed, relative to the illumination light GS, according to the distance over which the illumination light GS travels to the object and back after being reflected.

In this case, the control unit according to an exemplary embodiment may supply a reference signal to the gate of the pixel in order to derive the phase difference between the illumination light GS and the reflected light signal RS. There may be a plurality of reference signals; in an exemplary embodiment, as shown in fig. 13, there may be four reference signals C1 to C4. The reference signals C1 to C4 may each have the same frequency as the light signal (and thus the reflected light signal) and may have phase differences of 90° from one another. Of the four reference signals, one signal (e.g., C1) may have the same phase as the light signal. The reference signals C1 to C4 may be applied to the sensor, and the sensor may generate an electric signal from the reflected light signal RS according to the reference signals. In other words, the effective area of the sensor may be exposed in response to each reference signal, and the sensor may receive the reflected light signal during the time (exposure time) in which the effective area is exposed. When a reference signal is in the on state (positive state), the sensor is charged with electric charge from the reflected light signal RS to generate an electric signal. Therefore, the sensor can generate electric signals corresponding to the hatched portions of fig. 13.
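As a numerical illustration of this exposure scheme, the sketch below integrates a reflected signal against four reference gates stepped by 90°; the sinusoidal waveform of the reflected signal, the sample count, and the 60° delay are assumptions chosen only to make the example concrete.

import numpy as np

# Sketch: charge collected under four reference signals C1 to C4 having the
# same frequency as the light signal and phase steps of 90 degrees.
N = 100000
theta = np.linspace(0.0, 360.0, N, endpoint=False)       # phase axis, degrees

phi = 60.0                                                # assumed true delay
reflected = 1.0 + 0.5 * np.cos(np.radians(theta - phi))   # ambient + modulation

def gate(p):
    # 50% duty reference gate centered at phase p (on state = exposure time).
    return (((theta - p + 90.0) % 360.0) < 180.0).astype(float)

# The pixel is charged only while each reference signal is in its on state,
# so each charge Qi is the overlap of the reflected signal with Ci.
Q1, Q2, Q3, Q4 = (float(np.mean(reflected * gate(p))) for p in (0, 180, 90, 270))

phase = np.degrees(np.arctan2(Q3 - Q4, Q1 - Q2))  # see equation 1 below
print(round(float(phase), 2))                     # ~60.0, the assumed delay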

In another exemplary embodiment, the light signal may be generated at a plurality of frequencies during the exposure time. In this case, the sensor absorbs the incoming reflected light signal at the plurality of frequencies. For example, assume that light signals are generated at frequencies f1 and f2 and that the plurality of reference signals have phase differences of 90°. In this case, since the reflected light signal also has the frequencies f1 and f2, four electrical signals can be generated from the reflected light signal having frequency f1 and the four reference signals corresponding thereto, and four electrical signals can be generated from the reflected light signal having frequency f2 and the four reference signals corresponding thereto. Thus, a total of eight electrical signals may be generated. Hereinafter, the case where the light signal is generated at one frequency will be described; however, as described above, the light signal may be generated at a plurality of frequencies.

Further, the integration time refers to a predetermined period in which the gate is turned on/off in response to the reference signal. The integration time may vary according to the driving method of the sensor. When a plurality of light receiving units (e.g., a plurality of photodiodes) are present in a pixel of the sensor, distance information for the pixel can be acquired within one integration time; that is, one integration time may correspond to one frame period. However, the present invention is not limited thereto: the sensor may be driven in another type (for example, a one-phase or two-phase type), and a plurality of integration times may then constitute one frame period.

Referring to fig. 14, four integration times P1 to P4 may form one frame period (one frame cycle). The four integration times may include a first integration time P1, a second integration time P2, a third integration time P3, and a fourth integration time P4.

The first reference signal C1 may be provided to the pixel PX during the first integration time P1. The second reference signal C2 may be provided to the pixel PX during the second integration time P2. The third reference signal C3 may be provided to the pixel PX during the third integration time P3. The fourth reference signal C4 may be provided to the pixel PX during the fourth integration time P4.

The first integration time P1 to the fourth integration time P4 may constitute one frame period, and a readout may exist between the integration times. For example, a readout may exist between the first integration time P1 and the second integration time P2. Here, the readout is a period in which the amount of charge stored in the pixel is discharged. One frame period may be a period including the first integration time P1 to the fourth integration time P4 and the readouts sequentially located between the integration times.

Further, since the first to fourth reference signals C1 to C4 are signals that control the charging of the charging elements in the pixels, that is, gate signals for the switching elements, each of the plurality of pixels can output an electric signal corresponding to each reference signal. The charge stored in a pixel can thus be discharged by each readout, so that the amount of charge collected in each integration time can be calculated accurately.

In addition, more specifically, the phase difference between the optical signal and the reflected light signal may be calculated using the reference signals. As described above, four electrical signals may be generated for the optical signal of each frame period. Accordingly, the control unit 140 may calculate the phase difference t_d between the optical signal and the reflected light signal using the following equation 1.

[Equation 1]

t_d = arctan((Q3 − Q4) / (Q1 − Q2))

Here, Q1 to Q4 each represent the charge amount of one of the four electrical signals (hereinafter referred to as the charge amount; the charge amounts are supplied to the control unit, and the distance information is calculated from them as described below). Q1 represents the charge amount of the electrical signal corresponding to the reference signal having the same phase as the optical signal. Q2 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 180° from that of the optical signal. Q3 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 90° from that of the optical signal. Q4 represents the charge amount of the electrical signal corresponding to the reference signal whose phase is delayed by 270° from that of the optical signal.

However, as described above, the integration time required to calculate the phase difference t_d between the optical signal and the reflected light signal within one frame period may vary according to the number of charging elements, the number of switching elements, and the number of light receiving units in the sensor 130.

In addition, there may be a plurality of integration times in one frame period, and four reference signals having phase differences of 90° may be supplied to the pixels in each integration time. The control unit may calculate the above-described phase difference t_d between the optical signal and the reflected light signal using the amount of charge of the electrical signal generated during each integration time.

In addition, the control unit 140 may calculate the distance between the object and the camera module 100 using the phase difference t_d between the optical signal and the reflected light signal. In this case, the control unit 140 may calculate the distance d between the object and the camera module 100 using the following equation 2.

[Equation 2]

d = (c / 2f) × (t_d / 2π)

Here, c represents the speed of light, and f represents the frequency of the output light.
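A direct transcription of equations 1 and 2 might look like the sketch below; math.atan2 is used instead of a plain arctangent so that the quadrant is resolved, and the charge amounts and the 20 MHz modulation frequency are hypothetical values chosen for illustration.

import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_charges(q1, q2, q3, q4, f):
    # Equation 1: phase difference t_d from the four charge amounts.
    t_d = math.atan2(q3 - q4, q1 - q2) % (2.0 * math.pi)
    # Equation 2: distance from the phase difference and the frequency f.
    return (C / (2.0 * f)) * (t_d / (2.0 * math.pi))

# Hypothetical charge amounts and a 20 MHz output-light frequency:
print(distance_from_charges(120.0, 80.0, 130.0, 70.0, 20e6))  # ~1.17 m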

According to an exemplary embodiment, a ToF IR image and a distance (depth) image may be acquired from the camera module 100. Accordingly, the camera module according to an exemplary embodiment of the present invention may be referred to as a ToF camera module or a ToF camera. In the camera module according to the exemplary embodiment, the control unit may thus extract distance information representing the distance between the object and the camera module.

In the present exemplary embodiment, one frame period may be repeated twice. That is, since the distance measurement using the second light signal GS2 is performed after the distance measurement using the first light signal GS1, the first integration time P1 to the fourth integration time P4 may be repeated.

More specifically, as shown in fig. 18, the camera module 100 according to an exemplary embodiment may generate raw images for four phases. Here, the four phases may be 0°, 90°, 180°, and 270°, and the raw image for each phase may be an image having digitized or analog pixel values for that phase; it may be used interchangeably with a phase image, a phase infrared image, and the like. The raw images for the four phases may be acquired from the electrical signals generated in the second sensing region, the images shown in figs. 18 to 20 may be images acquired for the respective phases when the entire region of the sensor is the extraction region, or the images shown in figs. 18 to 20 may be amplitude images or distance images acquired from those images.

Fig. 15 is a diagram for describing driving of a sensor according to an exemplary embodiment.

Referring to fig. 15, one frame period in which the sensor extracts a distance image may include four integration times and four readouts.

One frame period may include a first integration time P1, a second integration time P2, a third integration time P3, and a fourth integration time P4. The readout can be performed between the respective integration times.

Furthermore, in each integration time, each pixel may generate an electrical signal for a different phase; that is, a different reference signal may be applied in each integration time. Specifically, a first reference signal (corresponding to C1 described above) having the same period as the first light signal GS1 may be applied to the pixel PX in the first integration time P1. In this case, it is assumed that the first light signal GS1 and the second light signal GS2 have the same period. Thus, after the first to fourth integration times are performed for the first light signal GS1, the first to fourth integration times may be performed again for the second light signal GS2.

However, as described above, when the period of the first light signal GS1 is shorter than the period of the second light signal GS2, the periods of the reference signals C1 to C4 applied to the first reflected light signal may be shorter than the periods of the reference signals C1' to C4' applied to the second reflected light signal.

In addition, in the second integration time P2, a second reference signal (corresponding to C2 described above) having a phase delayed by 180 ° from that of the first reference signal may be applied to the pixel PX. In the third integration time P3, a third reference signal (corresponding to C3 described above) having a phase delayed by 90 ° from that of the first reference signal C1 may be applied to the pixel PX. Further, in the fourth integration time P4, a fourth reference signal (corresponding to C4 described above) having a phase delayed by 270 ° from that of the first reference signal may be applied.

The first to fourth reference signals C1' to C4' for the second light signal GS2 may be applied to the pixel during the first to fourth integration times P1 to P4, respectively.

Accordingly, in the first integration time P1, the pixel PX may generate a 1-1st charge amount Q1 from the first reference signal C1, the 1-1st charge amount Q1 being the charge amount corresponding to the first light signal GS1. In the second integration time P2, the pixel PX may generate a 1-2nd charge amount Q2 from the second reference signal C2, the 1-2nd charge amount Q2 being the charge amount corresponding to the first light signal GS1. In the third integration time P3, the pixel PX may generate a 1-3rd charge amount Q3 from the third reference signal C3, the 1-3rd charge amount Q3 being the charge amount corresponding to the first light signal GS1. In the fourth integration time P4, the pixel PX may generate a 1-4th charge amount Q4 from the fourth reference signal C4, the 1-4th charge amount Q4 being the charge amount corresponding to the first light signal GS1.

In addition, after one frame period, the first to fourth integration times P1 to P4 may be performed sequentially for the second light signal. Accordingly, in the first integration time P1, the pixel PX may generate a 2-1st charge amount Q1' from the first reference signal C1', the 2-1st charge amount Q1' being the charge amount corresponding to the second light signal GS2. In the second integration time P2, the pixel PX may generate a 2-2nd charge amount Q2' from the second reference signal C2', the 2-2nd charge amount Q2' being the charge amount corresponding to the second light signal GS2. In the third integration time P3, the pixel PX may generate a 2-3rd charge amount Q3' from the third reference signal C3', the 2-3rd charge amount Q3' being the charge amount corresponding to the second light signal GS2. In the fourth integration time P4, the pixel PX may generate a 2-4th charge amount Q4' from the fourth reference signal C4', the 2-4th charge amount Q4' being the charge amount corresponding to the second light signal GS2.

Fig. 16 is a schematic diagram for describing driving of a sensor according to another exemplary embodiment.

Referring to fig. 16, one frame period may include two integration times (a preceding first integration time and a following second integration time, which are mainly described with reference to this drawing). The sensor 130 may provide each of the first reference signal C1 and the second reference signal C2 to the pixel PX during the first integration time, and each of the third reference signal C3 and the fourth reference signal C4 to the pixel PX during the second integration time. Thus, Q1 and Q2 may be generated in the first integration time, and Q3 and Q4 may be generated in the second integration time. Accordingly, the control unit may obtain all of Q1 to Q4 within the preceding frame period and may calculate the phase difference between the first light signal GS1 and the first reflected light signal RS1 using the charge amounts of the four generated electrical signals. The control unit according to an exemplary embodiment may thereby output distance information.

In the subsequent frame period, the sensor 130 may provide each of the first reference signal C1' and the second reference signal C2' to the pixel PX during the first integration time, and each of the third reference signal C3' and the fourth reference signal C4' to the pixel PX during the second integration time. Thus, Q1' and Q2' may be generated in the first integration time, and Q3' and Q4' may be generated in the second integration time. Accordingly, the control unit may obtain all of Q1' to Q4' within the subsequent frame period and may calculate the phase difference between the second light signal GS2 and the second reflected light signal RS2 using the charge amounts of the four generated electrical signals. The control unit according to an exemplary embodiment may thereby output distance information.

Further, as described above, when the period of the first light signal GS1 is shorter than the period of the second light signal GS2, the periods of the reference signals C1 to C4 applied to the first reflected light signal may be shorter than the periods of the reference signals C1' to C4' applied to the second reflected light signal.

Fig. 17 is a diagram for describing driving of a sensor according to still another exemplary embodiment.

Referring to fig. 17, one frame period may include one integration time. The sensor 130 may provide each of the first to fourth reference signals C1 to C4 to the pixel PX during the integration time, so that Q1, Q2, Q3, and Q4 may all be generated within the integration time. Accordingly, the control unit may obtain all of Q1 to Q4 within the preceding frame period and may calculate the phase difference between the first light signal GS1 and the first reflected light signal RS1 using the charge amounts of the four generated electrical signals. The control unit according to an exemplary embodiment may thereby output distance information.

In the subsequent frame period, the sensor 130 may provide each of the first to fourth reference signals C1' to C4' to the pixel PX during the integration time, so that Q1', Q2', Q3', and Q4' may all be generated within the integration time. Accordingly, the control unit may obtain all of Q1' to Q4' within the subsequent frame period and may calculate the phase difference between the second light signal GS2 and the second reflected light signal RS2 using the charge amounts of the four generated electrical signals. The control unit according to an exemplary embodiment may thereby output distance information.

Further, one integration time may include a plurality of sub-integration times, and the control unit may process the average value of the electrical signals acquired over the plurality of sub-integration times as the electrical signal of one integration time. Therefore, the accuracy of the electrical signal used for the distance can be further improved.
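A minimal sketch of this averaging, assuming that each sub-integration time yields one noisy reading of the same underlying charge amount:

import numpy as np

# The electrical signal of one integration time is taken as the average over
# several sub-integration times, which suppresses noise. The number of
# sub-readings and the noise level are assumptions for illustration.
rng = np.random.default_rng(0)
true_charge = 100.0
sub_readings = true_charge + rng.normal(0.0, 5.0, size=8)

q = float(np.mean(sub_readings))  # used as the charge of one integration time
print(round(q, 2))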

Further, as described above, when the period of the first light signal GS1 is shorter than the period of the second light signal GS2, the periods of the reference signals C1 to C4 applied to the first reflected light signal may be shorter than the periods of the reference signals C1' to C4' applied to the second reflected light signal.

Fig. 18 illustrates raw images of four phases acquired from a camera module according to an exemplary embodiment, fig. 19 illustrates an amplitude image acquired from a camera module according to an exemplary embodiment, and fig. 20 illustrates a distance image acquired from a camera module according to an exemplary embodiment.

Referring to figs. 18 and 19, when calculation is performed as in equation 3 using the four phase images Raw(x_0), Raw(x_90), Raw(x_180), and Raw(x_270) (see fig. 18), an amplitude image (see fig. 19) may be acquired as a ToF IR image.

[Equation 3]

Amplitude = (1/2) × sqrt((Raw(x_90) − Raw(x_270))² + (Raw(x_180) − Raw(x_0))²)

Here, Raw(x_0) may represent the data value for each pixel received by the sensor at the 0° phase, Raw(x_90) may represent the data value for each pixel received by the sensor at the 90° phase, Raw(x_180) may represent the data value for each pixel received by the sensor at the 180° phase, and Raw(x_270) may represent the data value for each pixel received by the sensor at the 270° phase. Here, each phase means a phase delayed from that of the first reference signal.

Alternatively, when calculation is performed as in equation 4 using the four phase images of fig. 18, an intensity image may be acquired as another ToF IR image.

[Equation 4]

Intensity = |Raw(x_90) − Raw(x_270)| + |Raw(x_180) − Raw(x_0)|

Here, Raw(x_0) may represent the data value for each pixel received by the sensor at the 0° phase, Raw(x_90) may represent the data value for each pixel received by the sensor at the 90° phase, Raw(x_180) may represent the data value for each pixel received by the sensor at the 180° phase, and Raw(x_270) may represent the data value for each pixel received by the sensor at the 270° phase.
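For reference, equations 3 and 4 may be applied per pixel across the four raw phase images as in the sketch below; the random input arrays merely stand in for real 320 × 240 sensor data.

import numpy as np

# Amplitude (equation 3) and intensity (equation 4) ToF IR images computed
# per pixel from the four raw phase images.
rng = np.random.default_rng(0)
raw_0, raw_90, raw_180, raw_270 = rng.uniform(0, 4095, size=(4, 240, 320))

amplitude = 0.5 * np.sqrt((raw_90 - raw_270) ** 2 + (raw_180 - raw_0) ** 2)
intensity = np.abs(raw_90 - raw_270) + np.abs(raw_180 - raw_0)

print(amplitude.shape, intensity.shape)  # both (240, 320) grayscale images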

As described above, the ToF IR image may be generated by subtracting two of the four phase images from the other two phase images, respectively. Each pair of phase images subtracted from each other may have a phase difference of 180°. Background light is removed in this subtraction, so that only the signal in the wavelength band output by the light source remains in the ToF IR image, thereby improving IR sensitivity to the object and significantly reducing noise.

In this specification, a ToF IR image may refer to an amplitude image or an intensity image, and an intensity image may be used interchangeably with a confidence image. As shown in fig. 19, the ToF IR image may be a grayscale image.

Meanwhile, when calculation is performed as in equations 5 and 6 using the four phase images of fig. 18, the distance image of fig. 20 may also be acquired. Equations 5 and 6 may correspond to equations 1 and 2, respectively, described above.

[Equation 5]

t_d = arctan((Raw(x_90) − Raw(x_270)) / (Raw(x_0) − Raw(x_180)))

[Equation 6]

d = (c / 2f) × (t_d / 2π)
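Equations 5 and 6 likewise apply per pixel; a sketch, reusing hypothetical phase images and an assumed 20 MHz modulation frequency:

import numpy as np

# Per-pixel distance image via equations 5 and 6. The phase images and the
# modulation frequency are assumptions for illustration.
C = 299_792_458.0  # speed of light, m/s
f = 20e6           # assumed output-light frequency

rng = np.random.default_rng(0)
raw_0, raw_90, raw_180, raw_270 = rng.uniform(0, 4095, size=(4, 240, 320))

t_d = np.arctan2(raw_90 - raw_270, raw_0 - raw_180) % (2 * np.pi)  # equation 5
depth = (C / (2 * f)) * (t_d / (2 * np.pi))                        # equation 6

print(depth.shape)  # (240, 320) distance image, values in meters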

Fig. 21 is a flowchart for describing a method of driving a camera module according to an exemplary embodiment, and fig. 22 is a flowchart for describing a method of driving a camera module according to another exemplary embodiment.

Referring to figs. 21 and 22, the camera module according to an exemplary embodiment may output the first light signal (S1000). The camera module may calculate the phase difference from the first reflected light signal, which is the first light signal reflected back by the object. As described above, the control unit may derive the distance between the object and the camera module, or information on the distance (first distance information), from the phase difference (S1100). In this case, when the object is located within the preset distance, the camera module may measure the distance to the object again by outputting the first light signal again. For example, the camera module may determine whether the object is located within the preset distance based on the measured distance to the object, or may determine that the object is located within the preset distance when the electrical signal reaches or exceeds a predetermined threshold or reaches saturation. However, when the distance between the object and the camera module is greater than the preset distance, the second light signal may be output (S1200). In this case, as described above, the second light signal may have an illuminance different from that of the first light signal.

In an exemplary embodiment, the illumination amount of the second light signal may be greater than that of the first light signal. In other words, the first light signal may be output when the distance between the object and the camera module is short. Owing to this configuration, the camera module according to the exemplary embodiment can accurately measure the distance to the object and, when the object is a human body, can ensure the safety of the human body. The above may be applied equally to the various exemplary embodiments concerning the relationship between the first light signal and the second light signal.

After the second light signal is output (S2000), whether a preset time has elapsed may be checked (S2100). That is, when the preset time elapses after the second light signal is output, the control unit may output the first light signal again (S2200). This ensures the safety of the object in case the object has moved again.

Whether the object is located within the preset distance is then checked again (S2300); when the object is within the preset distance, the first light signal may be output. When the object is located at a distance greater than the preset distance, the second light signal may be output (S2400) to acquire second distance information, so that the distance between the object and the camera module may be measured again.
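The flow of figs. 21 and 22 can be summarized as the small control loop sketched below; the threshold, the waiting time, and the measurement/output callbacks are hypothetical placeholders, not part of the disclosed module, and only the branching mirrors the steps S1000 to S2400.

import time

PRESET_DISTANCE_M = 0.5  # assumed safety threshold
PRESET_TIME_S = 1.0      # assumed waiting time after the second signal

def drive_loop(measure_distance, output_signal, cycles=10):
    for _ in range(cycles):
        output_signal("first")              # S1000 / S2200: low-power signal
        d = measure_distance("first")       # S1100 / S2300: first distance info
        if d is not None and d <= PRESET_DISTANCE_M:
            continue                        # object close: stay on the first signal
        output_signal("second")             # S1200 / S2000: high-power signal
        measure_distance("second")          # S2400: second distance information
        time.sleep(PRESET_TIME_S)           # S2100: wait, then re-check from the top

# Example with stub callbacks (object assumed 1 m away):
drive_loop(lambda s: 1.0, lambda s: None, cycles=2)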

The present invention has been described above based on exemplary embodiments, which are intended to be illustrative rather than limiting, and it will be understood by those skilled in the art that various modifications and applications not illustrated above may be made without departing from the essential features of the exemplary embodiments. For example, each component described in detail in the exemplary embodiments may be modified. Differences related to such modifications and applications should be construed as being included in the scope of the present invention as defined in the appended claims.
