Camera

Document No.: 835740  Publication date: 2021-03-30  Views: 34  Original language: Chinese

Note: This technology, "Camera", was designed and created by 朴贞娥, 禹廷玗, and 金敏奎 on 2019-08-02. Abstract: A camera according to an embodiment of the present invention includes: a light emitting module for outputting output light according to a set control mode; a light receiving module for receiving input light corresponding to the output light according to the control mode; and a control module for detecting at least one of a presence of an object and a distance to the object based on the input light, resetting the control mode according to a detection result, controlling an output of the light emitting module and an input of the light receiving module according to the reset control mode, and generating a depth map of the object based on the input light received according to the reset control mode.

1. A camera, comprising:

a light emitting module configured to output light according to a set control mode;

a light receiving module configured to receive input light corresponding to the output light according to the control mode; and

a control module configured to detect at least one of a presence of an object and a distance to the object based on the input light, reset the control mode according to a detection result, control an output of the light emitting module and an input of the light receiving module according to the reset control mode, and generate a depth map of the object based on the input light received according to the reset control mode.

2. The camera of claim 1, wherein the control mode comprises a first control mode and a second control mode, and

at least one of an exposure time of the light emitting module, a frame rate of the light receiving module, and the number of activated pixels is set differently in the first control mode and the second control mode.

3. The camera of claim 2, wherein the light emitting module outputs first output light according to a preset first control mode when a camera operation signal is input, and

the light receiving module receives first input light corresponding to the first output light according to the first control mode.

4. The camera according to claim 3, wherein the control module resets to the second control mode when the object is detected as a result of detecting the presence of the object based on the first input light, the light emitting module outputs second output light according to the second control mode, and the light receiving module receives second input light reflected from the object.

5. The camera according to claim 1, wherein the control modes include third to fifth control modes, and at least one of an exposure time and a modulation frequency of the light emitting module is set differently in the third to fifth control modes.

6. The camera of claim 5, wherein the modulation frequency is set to a first frequency in the third control mode,

is set to a second frequency having a value greater than that of the first frequency in the fourth control mode, and

is set to both the first frequency and the second frequency in the fifth control mode.

7. The camera of claim 6, wherein the light emitting module outputs a third output light according to the third control mode when a camera operation signal is input, and

the light receiving module receives third input light corresponding to the third output light according to the third control mode.

8. The camera of claim 7, wherein, as a result of detecting the presence of the object based on the third input light:

when the object is detected, the control module calculates a distance to the object based on the third input light, and

when the object is not detected, the control module performs a reset to change to the fifth control mode, the light emitting module outputs fifth output light to the object according to the reset fifth control mode, and the light receiving module receives fifth input light reflected from the object.

9. The camera according to claim 8, wherein the control module performs a reset to maintain the third control mode when a distance from the object is greater than or equal to a threshold value, the light emitting module outputs third output light to the object according to the reset third control mode, and the light receiving module receives third input light reflected from the object, and

when the distance from the object is less than a threshold value, the control module performs a reset to change to the fourth control mode, the light emitting module outputs fourth output light to the object according to the reset fourth control mode, and the light receiving module receives fourth input light reflected from the object.

10. The camera of claim 9, wherein the control module generates a depth map of the object based on one of the third to fifth input light reflected from the object, and

when the depth map is generated based on the third input light or the fifth input light reflected from the object, the control module generates, by a super-resolution method, a depth map having a higher resolution than that of a depth map based on the fourth input light.

Technical Field

Embodiments relate to cameras.

Background

A technology for obtaining a 3D image using a photographing device is being developed. Obtaining a 3D image requires depth information (a depth map). Depth information represents distance in space, that is, the perspective of one point of a 2D image with respect to another point.

One method of obtaining depth information is to project infrared (IR) structured light onto an object and analyze the light reflected from the object to extract depth information. However, with the IR structured light method it is difficult to obtain the desired level of depth resolution for a moving object.

As a technique to replace the IR structured light method, the time-of-flight (TOF) method is attracting attention. In the TOF method, the distance to the object is calculated by measuring the time of flight, that is, the time it takes for emitted light to be reflected by the object and return.

Generally, for the TOF method to measure the distance to an object accurately, an amount of light sufficient to illuminate the object's surface must be ensured even at long distances, so considerable power is consumed.

However, each application that uses the depth information obtained by a TOF camera requires a different specification. For example, some applications may need only low-resolution depth information, or depth information at a low frame rate. Driving the TOF camera to generate high-resolution or high-frame-rate depth information in such cases consumes unnecessary power and needlessly increases resource occupation.

Therefore, a technique capable of optimizing the driving of the TOF camera is required.

Disclosure of Invention

[ problem ]

Embodiments aim to provide a camera, in particular one providing a TOF camera driving method optimized for the situation in which a subject is photographed.

The problem to be solved by the embodiment is not limited thereto, and includes objects and effects that can be understood from the technical solutions or embodiments of the invention described below.

[ solution ]

One aspect of the present invention provides a camera comprising: a light emitting module configured to output light according to a set control mode; a light receiving module configured to receive input light corresponding to the output light according to a control mode; and a control module configured to detect at least one of a presence of an object and a distance from the object based on the input light, reset a control mode according to a detection result, control an output of the light emitting module and an input of the light receiving module according to the reset control mode, and generate a depth map of the object based on the input light input according to the reset control mode.

The control mode may include a first control mode and a second control mode, and at least one of an exposure time of the light emitting module, a frame rate of the light receiving module, and the number of activated pixels may be set differently in the first control mode and the second control mode.

When the camera operation signal is input, the light emitting module may output first output light according to a preset first control mode, and the light receiving module may receive first input light corresponding to the first output light according to the first control mode.

As a result of detecting the presence of the object based on the first input light, when the object is detected, the control module resets to the second control mode, the light emitting module may output second output light according to the second control mode, and the light receiving module may receive second input light reflected from the object.

The control modes may include third to fifth control modes, and at least one of an exposure time and a modulation frequency of the light emitting module may be differently set in the third to fifth control modes.

The modulation frequency may be set to a first frequency in the third control mode, the modulation frequency may be set to a second frequency having a larger value than the first frequency in the fourth control mode, and the modulation frequency may be set to the first frequency and the second frequency in the fifth control mode.
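The motivation for switching frequencies follows from the continuous-wave TOF relation: a lower modulation frequency gives a longer unambiguous range (d_max = c / 2f), while a higher frequency gives finer depth resolution over a shorter range. A minimal Python sketch of this trade-off (the 20 MHz and 100 MHz values are illustrative assumptions, not frequencies from this document):

```python
# Unambiguous range of a continuous-wave TOF camera: d_max = c / (2 * f).
# The two frequencies below are illustrative assumptions only.
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(mod_freq_hz: float) -> float:
    """Longest distance measurable before the phase difference wraps past 2*pi."""
    return C / (2.0 * mod_freq_hz)

f_low = 20e6    # hypothetical lower modulation frequency (e.g., third control mode)
f_high = 100e6  # hypothetical higher modulation frequency (e.g., fourth control mode)
print(f"{unambiguous_range(f_low):.2f} m")   # longer range, coarser depth steps
print(f"{unambiguous_range(f_high):.2f} m")  # shorter range, finer depth steps
```

A mode that uses both frequencies, as the fifth control mode does, can combine the long range of the lower frequency with the finer resolution of the higher one by unwrapping the two phases against each other.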

When the camera operation signal is input, the light emitting module may output third output light according to a third control mode, and the light receiving module may receive third input light corresponding to the third output light according to the third control mode.

As a result of detecting the presence of the object based on the third input light, when the object is detected, the control module may calculate a distance to the object based on the third input light, and when the object is not detected, the control module may perform a reset to change to a fifth control mode, the light emitting module may output fifth output light to the object according to the reset fifth control mode, and the light receiving module may receive fifth input light reflected from the object.

When the distance from the object is greater than or equal to the threshold, the control module may perform resetting to maintain the third control mode, the light emitting module may output third output light to the object according to the reset third control mode, and the light receiving module may receive the third input light reflected from the object, and when the distance from the object is less than the threshold, the control module may perform resetting to change to a fourth control mode, the light emitting module may output fourth output light to the object according to the reset fourth control mode, and the light receiving module may receive the fourth input light reflected from the object.

The control module may generate a depth map of the object based on one of the third to fifth input light reflected from the object, and when the depth map is generated based on the third input light or the fifth input light reflected from the object, the control module may generate, by a super-resolution method, a depth map having a higher resolution than that of the depth map based on the fourth input light.

The control module may calculate a size of the object based on the depth information on the generated depth map and transmit the calculated size of the object to the connected application.

Drawings

Fig. 1 is a block diagram of a camera according to an embodiment of the present invention.

Fig. 2 is a flowchart illustrating a first example of a camera control method according to an embodiment of the present invention.

Fig. 3 is a flowchart illustrating a second example of a camera control method according to an embodiment of the present invention.

Fig. 4 is a flowchart illustrating a third example of a camera control method according to an embodiment of the present invention.

Fig. 5 is a diagram for describing a control mode according to an embodiment of the present invention.

Fig. 6 is a diagram for describing a fifth control mode according to the embodiment of the invention.

Fig. 7 is a diagram for describing an optimized camera operation according to an embodiment of the present invention.

Detailed Description

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

However, the technical idea of the present invention is not limited to some embodiments to be described, but may be implemented in various different forms, and one or more components may be selectively combined and replaced between the embodiments within the scope of the technical idea of the present invention.

In addition, unless explicitly defined and described, terms (including technical terms and scientific terms) used in the embodiments of the present invention may have meanings that can be commonly understood by those of ordinary skill in the art, and general terms such as terms defined in dictionaries may be understood in consideration of their meanings in the context of the related art.

In addition, terms used in the embodiments of the present invention are used to describe the embodiments, and are not intended to limit the present invention.

In this specification, the singular form may also include the plural form unless the phrase explicitly states otherwise, and a description of "at least one (or more) of A, B, and C" may include one or more of all combinations of A, B, and C.

In addition, terms such as first, second, A, B, (a) and (b) may be used to describe components of embodiments of the invention.

These terms are only used to distinguish one component from another component, and the nature, order, or sequence of the components are not limited by the terms.

Further, when an element is described as being "connected," "coupled," or "joined" to another element, the element may be directly connected, coupled, or joined to the other element, or may be "connected," "coupled," or "joined" to the other element through another element located between them.

Further, the case where an element is described as being formed or disposed "above (upper)" or "below (lower)" another element includes not only the case where the two elements are in direct contact but also the case where one or more other elements are formed or disposed between them. In addition, "above (upper)" or "below (lower)" may mean not only the upward direction with respect to one element but also the downward direction.

First, a configuration of a camera according to an embodiment of the present invention will be described with reference to fig. 1.

Fig. 1 is a block diagram of a camera according to an embodiment of the present invention.

As shown in fig. 1, a camera 100 according to an embodiment of the present invention includes a light emitting module 110, a light receiving module 120, and a control module 130.

First, the light emitting module 110 outputs output light according to a set control mode. The light emitting module 110 may include a light source and a light modulator to output light.

The light source generates light. The light generated by the light source may be infrared light having a wavelength of 770 to 3000nm, or visible light having a wavelength of 380 to 770 nm. The light source may be implemented by a Light Emitting Diode (LED), and may be implemented in a form in which a plurality of LEDs are arranged according to a predetermined pattern. In addition, the light source may include an Organic Light Emitting Diode (OLED) or a Laser Diode (LD). The light source repeats blinking (on/off) at predetermined time intervals to output light in the form of a pulse wave or a continuous wave. All of the plurality of light emitting diodes may repeatedly blink at the same time interval. In addition, all of the plurality of light emitting diodes may repeatedly blink at different time intervals during a portion of the exposure time. In addition, among the plurality of light emitting diodes, the first group of light emitting diodes and the second group of light emitting diodes may repeatedly blink at different time intervals.

The light modulator controls the blinking of the light source according to the control mode. The light modulator may control the blinking of the light source so that output light having the modulation frequency of the control mode is output, by frequency modulation, pulse modulation, or the like. Further, the light modulator may control the blinking of the light source so that light is output during the exposure time of the control mode.

Next, the light receiving module 120 receives input light corresponding to the output light according to the control mode. The light receiving module 120 may include a lens unit and an image sensor unit to receive input light.

The lens unit condenses input light and transfers the condensed light to the image sensor unit. To this end, the lens unit may include a lens, a lens barrel, a lens holder, and an IR filter.

A plurality of lenses may be provided or one lens may be provided. When a plurality of lenses are provided, the respective lenses may be aligned based on the central axis to form an optical system. Here, the central axis may be the same as the optical axis of the optical system.

The lens barrel may be coupled to the lens holding frame and may have an internal space accommodating the lens. The lens barrel may be rotatably coupled to one or more lenses; however, this is exemplary, and the lens barrel may be coupled to the lens or lenses in a different manner, for example using an adhesive (e.g., an adhesive resin such as epoxy).

The lens holding frame may be coupled to the lens barrel to support it and may be coupled to the printed circuit board on which the image sensor is mounted. The lens holding frame may have a space below the lens barrel in which the IR filter may be attached. A spiral pattern may be formed on an inner peripheral surface of the lens holding frame, and the lens holding frame may be rotatably coupled to a lens barrel having a matching spiral pattern on its outer peripheral surface. However, this is exemplary; the lens holding frame and the lens barrel may instead be coupled by an adhesive, or may be integrally formed with each other.

The lens holding frame may be divided into an upper holding frame coupled to the lens barrel and a lower holding frame coupled to a printed circuit board on which the image sensor is mounted, and the upper holding frame and the lower holding frame may be integrally formed with each other or may be formed in a separate structure and then fastened or coupled to each other. In this case, the diameter of the upper holder may be formed smaller than that of the lower holder. In this specification, the lens holding frame may be a housing.

The image sensor unit absorbs the converged input light to generate an electrical signal.

The image sensor unit may absorb the input light in synchronization with a flicker period of the light source. In particular, the image sensor unit may absorb input light that is in and out of phase with the output light. That is, the image sensor unit may repeatedly perform the step of absorbing the input light when the light source is turned on and the step of absorbing the input light when the light source is turned off.

The image sensor unit may generate an electric signal corresponding to each reference signal using a plurality of reference signals having different phase differences. The frequency of the reference signal may be set equal to the frequency of the output light. Therefore, when the output light is output at a plurality of frequencies, the image sensor unit generates an electric signal using a plurality of reference signals corresponding to the respective frequencies. The electrical signal may include information about the amount of charge or voltage corresponding to each reference signal.

According to an embodiment of the invention, there may be four reference signals C1 to C4. The reference signals C1 to C4 may each have the same frequency as the output light and a phase difference of 90° from one another. One of the four reference signals, C1, may have the same phase as the output light. The phase of the input light is delayed in proportion to the distance the output light travels before being reflected by the object and returning. The image sensor unit may mix the input light with each reference signal to generate an electrical signal for each reference signal.

In another embodiment, when the output light is generated at a plurality of frequencies during the exposure time, the image sensor unit absorbs the input light according to the plurality of frequencies. For example, assume that output light is generated at frequencies f1 and f2, and that the plurality of reference signals have a phase difference of 90° from one another. Then, since the input light also has frequencies f1 and f2, four electrical signals can be generated from the input light having frequency f1 and its four corresponding reference signals, and another four from the input light having frequency f2 and its four corresponding reference signals. Thus, a total of eight electrical signals may be generated.
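The mixing step above can be sketched numerically: for each modulation frequency, the time-delayed input light is correlated with four reference signals offset by 0°, 90°, 180°, and 270°, yielding 4 + 4 = 8 electrical signals. All numeric values below (frequencies, delay, sample counts) are illustrative assumptions:

```python
import math

# Numerical sketch of reference-signal mixing in a dual-frequency TOF sensor.
# All values are illustrative assumptions, not parameters from this document.

def correlate(freq_hz, delay_s, phase_offset_rad, n=1000, t_total=1e-6):
    """Mix a delayed cosine (input light) with a reference cosine and average."""
    acc = 0.0
    for i in range(n):
        t = i * t_total / n
        input_light = math.cos(2 * math.pi * freq_hz * (t - delay_s))
        reference = math.cos(2 * math.pi * freq_hz * t - phase_offset_rad)
        acc += input_light * reference
    return acc / n

delay = 20e-9  # hypothetical time of flight (about a 3 m object distance)
signals = [correlate(f, delay, k * math.pi / 2)
           for f in (20e6, 100e6)   # two assumed modulation frequencies
           for k in range(4)]       # reference phases 0, 90, 180, 270 degrees
print(len(signals))  # 8
```

Averaged over an integer number of cycles, each correlation equals 0.5·cos(offset − 2πf·delay), so the four values per frequency carry exactly the phase information that the distance calculation below uses.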

The image sensor unit may be implemented as an image sensor in which a plurality of photodiodes are arranged in a grid shape. The image sensor may be a Complementary Metal Oxide Semiconductor (CMOS) image sensor or may be a Charge Coupled Device (CCD) image sensor.

Meanwhile, a plurality of light emitting modules 110 and light receiving modules 120 may be provided in the camera. For example, when a camera according to an embodiment of the present invention is included in a smartphone, a first light emitting module 110 and a corresponding first light receiving module 120 may be disposed on the front surface of the smartphone, and a second light emitting module 110 and a corresponding second light receiving module 120 may be disposed on the rear surface.

Next, the control module 130 detects at least one of a presence of an object and a distance to the object based on the input light. In particular, the control module 130 may detect at least one of a presence of an object and a distance to the object through a depth map, which is generated by the input light. For example, the control module 130 may generate the depth map by an electrical signal corresponding to the input light. When the output light is output in the form of a continuous wave, the distance to the object can be detected using the following equation 1.

[ equation 1]

d = (c × φ) / (4πf)

Here, f denotes the frequency of the output light, c denotes the speed of light, and φ denotes the phase difference between the output light and the corresponding input light.

In addition, the phase difference between the output light and the corresponding input light may be calculated by the following equation 2.

[ equation 2]

φ = 2πfτ = arctan( (Q3 − Q4) / (Q1 − Q2) )

Here, τ represents the time of flight. Q1 to Q4 are the charge amounts of the four electrical signals: Q1 is the charge amount of the electrical signal corresponding to the reference signal in phase with the output light; Q2 corresponds to the reference signal whose phase lags the output light by 180°; Q3, by 90°; and Q4, by 270°.
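A worked sketch of Equations 1 and 2: the phase difference is recovered from the four charge amounts with the standard four-phase relation phi = atan2(Q3 − Q4, Q1 − Q2), then converted to a distance. The charge values and the 20 MHz modulation frequency are illustrative assumptions:

```python
import math

# Continuous-wave TOF distance from four phase-sampled charge amounts.
# Charge values and modulation frequency below are illustrative assumptions.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(q1, q2, q3, q4, freq_hz):
    """Distance from the four charges: phi = atan2(Q3-Q4, Q1-Q2), d = c*phi/(4*pi*f)."""
    phi = math.atan2(q3 - q4, q1 - q2) % (2 * math.pi)  # phase in [0, 2*pi)
    return C * phi / (4 * math.pi * freq_hz)

# Synthetic charges corresponding to a phase difference of pi/2:
d = tof_distance(q1=0.0, q2=0.0, q3=1.0, q4=0.0, freq_hz=20e6)
print(f"{d:.3f} m")  # one quarter of the c/(2f) unambiguous range
```

Note that the result is only defined up to the unambiguous range c/(2f); beyond that, the phase wraps, which is why a mode combining two frequencies extends the measurable range.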

The control module 130 resets the control mode according to a detection result of at least one of the presence of the object and the distance from the object.

Specifically, when the presence of the object is detected, the control module 130 resets the set first control mode to the second control mode. Meanwhile, when the presence of the object is not detected, the control module 130 performs a reset, thereby maintaining the set first control mode.

Alternatively, when the presence of the object is detected and the distance from the object is greater than or equal to a threshold value, a reset is performed so that the set third control mode is maintained. Further, when the presence of the object is detected and the distance from the object is less than the threshold, the control module 130 may perform a reset such that the set third control mode is changed to the fourth control mode. Meanwhile, if the presence of the object is not detected, the control module 130 performs a reset such that the set third control mode is changed to the fifth control mode.
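The reset decision above can be summarized as a small function. The mode names and the 1.5 m threshold are assumptions for illustration, not identifiers from this document:

```python
from enum import Enum, auto

# Minimal sketch of the control-mode reset decision. Mode names and the
# threshold value are illustrative assumptions.

class Mode(Enum):
    THIRD = auto()   # single lower modulation frequency
    FOURTH = auto()  # single higher modulation frequency
    FIFTH = auto()   # both frequencies

def reset_mode(object_detected, distance_m=None, threshold_m=1.5):
    """Pick the control mode from the detection result, as described above."""
    if not object_detected:
        return Mode.FIFTH   # no object found: change to the fifth control mode
    if distance_m >= threshold_m:
        return Mode.THIRD   # far object: maintain the third control mode
    return Mode.FOURTH      # near object: change to the fourth control mode

print(reset_mode(False))       # Mode.FIFTH
print(reset_mode(True, 3.0))   # Mode.THIRD
print(reset_mode(True, 0.5))   # Mode.FOURTH
```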

The control module 130 controls the output of the light emitting module 110 and the input of the light receiving module 120 according to the reset control mode.

The control module 130 generates a depth map of the object based on the input light input according to the reset control mode. The process of generating the depth map is the same as the process described by the above equation, and thus a detailed description thereof will be omitted.

Meanwhile, when the depth map is generated based on the third input light or the fifth input light reflected from the object, the control module 130 may generate the depth map having a higher resolution than the depth map based on the fourth input light through the super-resolution method.

For example, when the depth map based on the fourth input light has a resolution at the QVGA level (320x240), the depth map based on the third or fifth input light may have a resolution at the VGA level (640x480).

The super resolution method, that is, a Super Resolution (SR) technique, is a technique for obtaining a high resolution image from a plurality of low resolution images, and a mathematical model of the SR technique may be represented as the following equation 3.

[ equation 3]

y_k = D_k B_k M_k x + n_k

Here, 1 ≤ k ≤ p, where p represents the number of low-resolution images; y_k represents a low-resolution image (= [y_k,1, y_k,2, …, y_k,M]^T, where M = N1 × N2); D_k represents a down-sampling matrix; B_k represents an optical blur matrix; M_k is an image warping matrix; x represents the high-resolution image (= [x_1, x_2, …, x_N]^T, where N = L1N1 × L2N2); and n_k represents noise. That is, the SR technique estimates x by applying the inverse of the estimated resolution-degradation elements to y_k. SR techniques can be broadly classified into statistical methods and multi-frame methods, and multi-frame methods can be further classified into spatial division methods and time division methods.
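A toy one-dimensional illustration of the multi-frame idea behind Equation 3: when the sub-pixel shifts (the warping M_k) are known exactly and blur and noise are ignored, interleaving the low-resolution frames ("shift-and-add") recovers the high-resolution signal. This is a didactic sketch under those idealized assumptions, not the document's implementation:

```python
# Toy 1-D "shift-and-add" multi-frame super-resolution: p = 2 low-resolution
# frames, downsampling factor 2, a known one-sample shift between frames,
# and no blur (B_k = I) or noise (n_k = 0). Illustrative values only.

high = [1.0, 4.0, 2.0, 8.0, 5.0, 7.0, 3.0, 6.0]  # unknown high-res signal x

frame0 = high[0::2]  # y_1: even samples (shift 0)
frame1 = high[1::2]  # y_2: odd samples (shift of one high-res sample)

# Reconstruction: place each low-res frame back on the fine grid it came from.
recon = [0.0] * len(high)
recon[0::2] = frame0
recon[1::2] = frame1
print(recon == high)  # True: exact recovery under these idealized assumptions
```

Real SR pipelines must also estimate the shifts and invert the blur D_k B_k, which is what makes the general problem in Equation 3 ill-posed.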

The control module 130 may send the depth map of the object to the connected application. In addition, the control module 130 may detect the size of the object through the depth map of the object and transmit the detected size information to the connected application.

Fig. 2 is a flowchart illustrating a first example of a camera control method according to an embodiment of the present invention.

Referring to fig. 2, when a camera operation signal is input, the light emitting module 110 outputs first output light according to the set first control mode (S205).

Then, the light receiving module 120 receives the first input light corresponding to the first output light according to the set first control mode (S210).

Next, the control module 130 detects the presence of the object based on the first input light (S215).

When the presence of the object is detected, the control module 130 performs a reset such that the set first control mode is changed to the second control mode (S220).

Then, according to the reset second control mode, the light emitting module 110 outputs the second output light, and the light receiving module 120 receives the second input light reflected from the object (S225).

Then, the control module 130 generates a depth map of the object based on the second input light (S230).

Meanwhile, when the presence of the object is not detected, a reset is performed to maintain the first control mode and the process proceeds again from step S205; if the object is still not detected after a predetermined number of repetitions, the operation may be terminated.
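The flow of steps S205 to S230, including the retry-and-terminate behavior, can be sketched as a small loop. The callables and the retry count below are hypothetical stand-ins, not names from this document:

```python
# Sketch of the Fig. 2 flow: probe in the first control mode, switch to the
# second control mode once an object is detected, and terminate after a
# predetermined number of empty probes. detect_object, capture, and
# max_retries are illustrative stand-ins.

def run_first_flow(detect_object, capture, max_retries=3):
    """Return a depth map captured in the second control mode, or None."""
    for _ in range(max_retries):
        if detect_object(mode="first"):        # S205-S215: probe for an object
            return capture(mode="second")      # S220-S230: reset and capture
        # object not detected: keep the first control mode and probe again
    return None                                # terminate after the retries

# Usage: a probe that succeeds on the second attempt.
attempts = iter([False, True])
result = run_first_flow(lambda mode: next(attempts), lambda mode: "depth_map")
print(result)  # depth_map
```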

Fig. 3 is a flowchart illustrating a second example of a camera control method according to an embodiment of the present invention.

When the camera operation signal is input, the light emitting module 110 outputs third output light according to a preset third control mode (S305).

Then, the light receiving module 120 receives third input light corresponding to the third output light according to the set third control mode (S310).

Next, the control module 130 detects the presence of the object based on the third input light (S315).

When the presence of the object is detected, the control module 130 calculates a distance to the object based on the third input light and compares the calculated distance to the object with a threshold value (S320).

In this case, when the distance from the object is greater than or equal to the threshold, the control module 130 performs a reset, thereby maintaining the third control mode (S325). Then, according to the reset third control mode, the light emitting module 110 outputs the third output light to the object, and the light receiving module 120 receives the third input light reflected from the object (S330).

When the distance from the object is less than the threshold, the control module 130 performs a reset to change the set third control mode to the fourth control mode (S335). Then, according to the reset fourth control mode, the light emitting module 110 outputs the fourth output light to the object, and the light receiving module 120 receives the fourth input light reflected from the object (S340).

Meanwhile, when the object is not detected, the control module 130 performs a reset such that the preset third control mode is changed to the fifth control mode (S345). Then, according to the reset fifth control mode, the light emitting module 110 outputs fifth output light to the object, and the light receiving module 120 receives the fifth input light reflected from the object (S350).

Then, the control module 130 generates a depth map of the object based on any one of the third to fifth input lights (S355). The process of generating the depth map is the same as the process described by the above equation, and thus a detailed description thereof will be omitted. Meanwhile, when the depth map is generated based on the third input light or the fifth input light reflected from the object, the control module 130 may generate the depth map having a higher resolution than that based on the fourth input light reflected from the object by the super-resolution method.

Further, the control module 130 may calculate the size of the object based on the depth information of the generated depth map (S360), and transmit the size of the object to the connected application (S365). In this case, the connected application may be an application to which the camera operation signal of step S305 is input.

Fig. 4 is a flowchart illustrating a third example of a camera control method according to an embodiment of the present invention.

Through the camera control method shown in fig. 4, the camera control methods shown in figs. 2 and 3 may be implemented together.

Referring to fig. 4, when the control module 130 receives a camera operation signal (S405), it determines the type of the camera operation signal (S410). Depending on the type of the camera operation signal, the control module 130 may operate the first light emitting module 110 and the first light receiving module 120, or the second light emitting module 110 and the second light receiving module 120.

Specifically, when the input camera operation signal is a first camera operation signal, the control module 130 may operate the first light emitting module 110 and the first light receiving module 120. Then, the control module 130 performs camera control according to the camera control method illustrated in fig. 2. For example, when the user inputs a first camera operation signal through a button input or a motion input for 3D face recognition, the control module 130 may operate the first light emitting module 110 and the first light receiving module 120 to perform camera control according to steps S205 to S230 of fig. 2.

Meanwhile, when the input camera operation signal is a second camera operation signal, the control module 130 may operate the second light emitting module 110 and the second light receiving module 120. Then, the control module 130 performs camera control according to the camera control method illustrated in fig. 3. For example, when the user inputs a second camera operation signal through the application to detect the size of the object, the control module 130 may operate the second light emitting module 110 and the second light receiving module 120 to perform camera control according to steps S305 to S365.
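The signal-type dispatch of steps S405 to S410 can be sketched as follows; the signal constants and returned labels are illustrative, not defined by the embodiment.

```python
# Sketch of the signal-type dispatch of steps S405-S410. The signal
# constants and returned labels are illustrative.

FIRST_SIGNAL = 1   # e.g. button or motion input for 3D face recognition
SECOND_SIGNAL = 2  # e.g. application request to measure an object's size

def dispatch(signal: int) -> str:
    """Select the module pair and control method for a camera
    operation signal."""
    if signal == FIRST_SIGNAL:
        return "first modules, method of fig. 2"
    if signal == SECOND_SIGNAL:
        return "second modules, method of fig. 3"
    raise ValueError(f"unknown camera operation signal: {signal}")
```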

Steps S205 to S230 and steps S305 to S355 are described above with reference to figs. 2 and 3, and thus, detailed descriptions thereof will be omitted.

Fig. 5 is a diagram for describing a control mode according to an embodiment of the present invention.

Referring to fig. 5, the control modes according to the embodiment of the present invention may include first to fifth control modes, which may be grouped into the first and second control modes on one hand and the third to fifth control modes on the other.

Specifically, the camera according to the embodiment of the present invention may be controlled in either the first control mode or the second control mode, depending on the camera operation signal.

The first control mode may be a control mode for searching for an object, and the second control mode may be a control mode for accurately measuring an object. For example, in the case of face recognition, the first control mode may be a control mode for detecting the presence of a face (object), and the second control mode may be a control mode for generating a depth map of the face (object).

Table 1 below is a table showing characteristics of the first control mode and the second control mode.

[Table 1]

Item             | First control mode       | Second control mode
Exposure time    | Less than 0.1 ms         | More than 0.7 ms
Frame rate       | 1 fps                    | More than 15 fps
Activated pixels | 112x86 (40° view angle)  | 224x172 (80° view angle)
Effect           | Object search, low power | Accurate measurement

As shown in table 1, in the first control mode and the second control mode, at least one of the exposure time of the light emitting module 110 and the frame rate and number of activated pixels of the light receiving module 120 may be set differently, and the resulting effects also differ. Specifically, the exposure time of the light emitting module 110 in the first control mode may be set to be shorter than that of the second control mode. For example, the exposure time of the first control mode may be set to less than 0.1 ms, and the exposure time of the second control mode may be set to more than 0.7 ms.

The frame rate of the light receiving module 120 in the first control mode may be set to be lower than that of the second control mode. For example, the frame rate of the first control mode may be set to 1 fps, and the frame rate of the second control mode may be set to more than 15 fps. In particular, since the first control mode is used only to detect the presence of an object, the frame rate of the light receiving module 120 may be set to 1 fps so that only a single frame is generated.

The number of activated pixels of the light receiving module 120 in the first control mode may be set to be smaller than the number of activated pixels of the light receiving module 120 in the second control mode. That is, the viewing angle of the light receiving module 120 in the first control mode may be set smaller than the viewing angle of the light receiving module 120 in the second control mode. For example, in the first control mode, 112x86 pixels may be activated and the viewing angle of the light receiving module 120 may be set to 40 °, and in the second control mode, 224x172 pixels may be activated and the viewing angle of the light receiving module 120 may be set to 80 °.
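The first and second control mode parameters quoted above can be collected into a configuration structure; the values are the examples given in the text.

```python
# First/second control mode parameters, using the example values from
# the text (Table 1).
CONTROL_MODE_PARAMS = {
    "first": {                      # object search (low power)
        "exposure_ms": 0.1,         # text: less than 0.1 ms
        "frame_rate_fps": 1,
        "active_pixels": (112, 86),
        "view_angle_deg": 40,
    },
    "second": {                     # accurate measurement
        "exposure_ms": 0.7,         # text: more than 0.7 ms
        "frame_rate_fps": 15,       # text: more than 15 fps
        "active_pixels": (224, 172),
        "view_angle_deg": 80,
    },
}

def configure(mode: str) -> dict:
    """Return the emitter/receiver settings for the given control mode."""
    return CONTROL_MODE_PARAMS[mode]
```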

When the camera according to the embodiment of the present invention is operated according to the first control mode, the depth accuracy is lower than that of the second control mode, but an object located at a longer distance can be measured with less power. That is, after the presence of the object is detected with a small amount of power, when it is determined that the object is present, accurate photographing is performed according to the second control mode, and therefore, power consumption of the camera can be reduced.

Next, the camera according to the embodiment of the present invention may be controlled according to any one of the third to fifth control modes according to the camera operation signal.

Table 2 below is a table showing characteristics of the third to fifth control modes.

[Table 2]

Item                 | Third control mode | Fourth control mode | Fifth control mode
Exposure time        | More than 1.5 ms   | Less than 1.5 ms    | More than 1.5 ms
Modulation frequency | 60 MHz             | 80 MHz              | 60 MHz and 80 MHz

As shown in table 2, in the third to fifth control modes, at least one of the exposure time and the modulation frequency of the light emitting module 110 may be set differently. The exposure time of the light emitting module 110 in the fourth control mode may be set to be shorter than the exposure times in the third and fifth control modes. For example, the exposure time of the fourth control mode may be set to less than 1.5 ms, and the exposure times of the third and fifth control modes may be set to more than 1.5 ms. The fourth control mode is a control mode for photographing an object located at a short distance, such as within 1 m, so even with an exposure time shorter than those of the third and fifth control modes, the light receiving module 120 can secure a sufficient amount of light to generate a depth map.

In the third control mode, the modulation frequency is set to a first frequency; in the fourth control mode, it is set to a second frequency greater than the first frequency; and in the fifth control mode, it may be set to a combination of the first frequency and the second frequency (i.e., two frequencies). For example, the modulation frequency may be set to 60 MHz in the third control mode, 80 MHz in the fourth control mode, and 60 MHz and 80 MHz in the fifth control mode.
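Similarly, the third to fifth control mode parameters can be expressed as a configuration table; the concrete exposure values below are illustrative placeholders, since the text only bounds them relative to 1.5 ms.

```python
# Third to fifth control mode parameters (Table 2). The text bounds the
# exposure times relative to 1.5 ms; the concrete values here are
# illustrative placeholders.
DISTANCE_MODE_PARAMS = {
    "third":  {"exposure_ms": 2.0, "mod_freqs_mhz": (60,)},     # > 1.5 ms
    "fourth": {"exposure_ms": 1.0, "mod_freqs_mhz": (80,)},     # < 1.5 ms
    "fifth":  {"exposure_ms": 2.0, "mod_freqs_mhz": (60, 80)},  # > 1.5 ms
}
```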

As shown in table 2, the camera according to the embodiment of the present invention controls the light emitting module 110 and the light receiving module 120 differently according to the distance to the object through the third to fifth control modes. That is, since the control module 130 operates the camera in a mode optimized for the distance to the object, power consumption of the camera may be reduced.

Fig. 6 is a diagram for describing a fifth control mode according to the embodiment of the present invention.

Fig. 6 shows a process of combining two modulation frequencies. For example, assume that the first frequency is 60MHz and the second frequency is 80 MHz.

The maximum distance at which an object can be measured is determined by the frequency of the output light: an object located at a maximum distance of 1.8657 m can be measured with output light at the first frequency of 60 MHz, and an object located at a maximum distance of 2.4876 m with output light at the second frequency of 80 MHz. In this way, as the frequency increases, the maximum distance over which an object can be measured also increases. However, increasing the frequency requires the blinking period of the light emitting module 110 to be controlled more rapidly, which increases power consumption.

Therefore, in the fifth control mode according to the embodiment of the present invention, the measured distance of the object can be increased by simultaneously outputting the first frequency and the second frequency.

As shown in fig. 6, when the output light according to the first frequency and the output light according to the second frequency are simultaneously output, the two frequencies form waveforms with different periods, and portions where the phases of the two frequencies overlap occur periodically. When two frequencies are output simultaneously in this way, a portion where the phases of the two frequencies overlap can be regarded as one cycle. That is, when the 60 MHz and 80 MHz frequencies are output simultaneously, the result can be regarded as one output light having a frequency of 240 MHz. In this case, power consumption can be greatly reduced compared with actually outputting light at a frequency of 240 MHz.
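For comparison, in the standard indirect-ToF formulation the unambiguous range of a single frequency is c/2f, and combining two frequencies extends it to the range of their greatest common divisor (20 MHz for 60 MHz and 80 MHz, i.e., about 7.5 m). The sketch below follows that standard dual-frequency unwrapping and is not the embodiment's exact procedure.

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(freq_hz: float) -> float:
    """Single-frequency unambiguous range in the standard model: c / (2f)."""
    return C / (2.0 * freq_hz)

def unwrap_dual(d1: float, f1: float, d2: float, f2: float,
                gcd_hz: float) -> float:
    """Recover a distance beyond either single-frequency range.

    Brute-force search over the wrap count of the first frequency,
    keeping the candidate most consistent with the second measurement.
    A sketch of standard dual-frequency disambiguation, not the
    embodiment's exact procedure.
    """
    r1 = unambiguous_range(f1)
    r2 = unambiguous_range(f2)
    extended = unambiguous_range(gcd_hz)  # e.g. ~7.5 m for gcd = 20 MHz
    best, best_err = d1, float("inf")
    for n1 in range(int(extended / r1) + 1):
        cand = d1 + n1 * r1
        # wrap-aware mismatch between cand and the second measurement
        err = abs((cand - d2 + r2 / 2.0) % r2 - r2 / 2.0)
        if err < best_err:
            best, best_err = cand, err
    return best
```

For example, a 4 m target wraps to about 1.50 m at 60 MHz and about 0.25 m at 80 MHz; only 4 m is consistent with both measurements, so the search recovers it.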

Fig. 7 is a diagram for describing an optimized camera operation according to an embodiment of the present invention.

Fig. 7 illustrates an example of camera operation according to the camera control method illustrated in fig. 3. As shown in fig. 7, when photographing a small object such as the ring, bolt, or food shown in (a) to (c), the user places the camera close to the object before photographing. In this case, the control module 130 generates a depth map by photographing the object according to the fourth control mode, based on the information about the presence of and distance to the object obtained in the third control mode.

When photographing a relatively large object (e.g., the sofa or curtain shown in (d) and (e)), the user places the camera at a certain distance or more from the object before photographing. In this case, the control module 130 generates a depth map by photographing the object according to the third control mode.

Meanwhile, as shown in (f), when photographing an indoor space, the user places the camera far from the subject before photographing. In this case, the control module 130 generates a depth map by photographing the object according to the fifth control mode, based on the information about the presence of the object obtained in the third control mode.
