Imaging element, imaging device, image data processing method, and program

Document No.: 621626  Publication date: 2021-05-07

Description: The present technology, Imaging element, imaging device, image data processing method, and program, was devised by 樱武仁史, 长谷川亮, and 河合智行 on 2019-06-27. Its main content is as follows: An imaging element has a processing circuit and a memory built therein. The memory stores captured image data obtained by capturing an object at a 1st frame rate. The processing circuit performs processing based on the captured image data stored in the memory. An output circuit outputs image data for output, based on the captured image data, to the outside of the imaging element at a 2nd frame rate. The 1st frame rate is higher than the 2nd frame rate and is determined according to the generation cycle of a strobe, and the processing circuit detects, from a plurality of frames of the captured image data, a strobe-influence-avoidance timing at which the influence of the strobe on imaging by the imaging element is avoided.

1. An imaging element, comprising:

a storage section that stores captured image data obtained by capturing an object at a 1st frame rate and is built in the imaging element;

a processing section that executes processing based on the captured image data stored in the storage section and is built in the imaging element;

an output section that outputs image data for output based on the captured image data to the outside of the imaging element at a 2nd frame rate and is built in the imaging element,

the 1st frame rate is a frame rate higher than the 2nd frame rate, and is determined according to a generation period of a strobe,

the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing at which an influence of the strobe on imaging by the imaging element is avoided.

2. The imaging element according to claim 1,

the processing includes measurement detection processing of measuring a luminance difference between frames of the captured image data and detecting the strobe-influence-avoiding timing based on the measured luminance difference,

the processing time required for the measurement detection processing is determined based on the generation cycle.

3. The imaging element according to claim 2,

the captured image data is classified between the frames into 1st captured image data and 2nd captured image data obtained by capturing after the 1st captured image data,

the luminance difference is a subtraction result of subtracting the luminance of the 2nd captured image data from the luminance of the 1st captured image data,

the strobe-influence avoiding timing is a timing at which the luminance difference transitions from a positive value to a negative value.

4. The imaging element according to claim 3,

the processing unit terminates the measurement detection processing on the condition that the luminance difference changes from a positive value to a negative value twice.

5. The imaging element according to claim 3 or 4,

the 2nd captured image data is image data obtained by capturing 2 or more frames after the 1st captured image data.

6. The imaging element according to any one of claims 2 to 5,

after the measurement detection processing is performed, the 1st frame rate is set to a frame rate lower than the frame rate in the measurement detection processing until a predetermined condition is satisfied.

7. The imaging element according to claim 6,

the processing unit performs the measurement detection process again when the predetermined condition is satisfied.

8. The imaging element according to claim 7,

the measurement detection processing is performed again when the predetermined condition is satisfied and the luminance difference measured by the processing unit differs from the luminance difference measured within the processing time.

9. The imaging element according to any one of claims 2 to 8,

the luminance difference is a luminance difference of local regions corresponding to each other between the frames of the captured image data.

10. The imaging element according to any one of claims 1 to 9,

the strobe-influence-avoidance timing is a timing at which the brightness of the image represented by the captured image data reaches a peak.

11. The imaging element according to any one of claims 1 to 8,

the output image data is image data based on the captured image data obtained by capturing at the strobe-influence-avoiding timing.

12. The imaging element according to any one of claims 1 to 11,

the strobe is a line strobe,

the subject is photographed within an image pickup area selected in accordance with a clipping coefficient determined in accordance with the strobe-influence-avoiding timing.

13. The imaging element according to any one of claims 1 to 12,

the generation period is set in advance to a generation period of a strobe caused by a light source that blinks when supplied with an alternating current from a commercial power supply.

14. The imaging element according to any one of claims 1 to 13,

the 1st frame rate increases as the generation period shortens.

15. The imaging element according to any one of claims 1 to 14,

the captured image data is image data obtained by capturing the subject in a rolling shutter manner.

16. The imaging element according to any one of claims 1 to 15,

the imaging element is a laminated imaging element having a photoelectric conversion element, and the storage section is laminated on the photoelectric conversion element.

17. An image pickup apparatus, comprising:

the imaging element of any one of claims 1 to 16; and

a control section that performs control of causing a display section to display an image based on the output-use image data output by the output section included in the imaging element.

18. An image data processing method of an imaging element incorporating: a storage unit that stores captured image data obtained by capturing an object at a 1st frame rate; a processing unit that performs processing based on the captured image data stored in the storage unit; and an output unit for outputting image data for output based on the captured image data to the outside at a 2nd frame rate,

setting the 1st frame rate to a frame rate higher than the 2nd frame rate, and determining the 1st frame rate according to a generation period of a strobe,

the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing at which an influence of the strobe on imaging by the imaging element is avoided.

19. A program for causing a computer to function as a processing section and an output section included in an imaging element having built therein: a storage unit that stores captured image data obtained by capturing an object at a 1st frame rate; a processing unit that performs processing based on the captured image data stored in the storage unit; and an output unit configured to output image data for output based on the captured image data to the outside at a 2nd frame rate,

the 1st frame rate is a frame rate higher than the 2nd frame rate, and is determined according to a generation period of a strobe,

the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing at which an influence of the strobe on imaging by the imaging element is avoided.

Technical Field

The present technology relates to an imaging element, an imaging apparatus, an image data processing method, and a program.

Background

Japanese patent application laid-open No. 2017-188760 discloses an image processing apparatus including: a storage section that stores pixel signals output from the imaging element; a signal processing unit that performs signal processing on the pixel signal stored in the storage unit; and a detection unit that completes detection processing of the pixel signals in the same frame before the signal processing unit completes the signal processing.

Japanese patent application laid-open No. 2018-007210 discloses a signal processing device including: a brightness information calculation unit that calculates brightness information of the captured image; a luminance reference calculation unit that calculates a luminance reference value indicating reference luminance from luminance information at a plurality of times; and a correction parameter calculation unit that calculates a correction parameter for correcting the luminance of the captured image based on the luminance information and the luminance reference value.

WO2015/163145 discloses an image processing apparatus including: an intensity ratio calculation unit that calculates a ratio relating to the intensity of a signal value at a predetermined position in an image captured under different exposure conditions; and a contribution degree calculation unit that calculates a contribution degree indicating whether the intensity ratio calculated by the intensity ratio calculation unit is derived from the periodic noise component or from the movement.

Japanese patent application publication No. 2017-501627 discloses an image sensor including: an imaging region including a plurality of pixels; and one or more strobe detection regions each including one or more pixels, at least one pixel in at least one strobe detection region being sampled a plurality of times during a period in which an image is captured by the pixels in the imaging region.

Examples of imaging devices to which the techniques described in JP 2017-188760, JP 2018-007210, WO2015/163145, and JP 2017-501627 are applied include imaging devices that perform imaging by a rolling shutter method using an electronic shutter and a mechanical shutter in combination. In such an imaging device, for example, exposure of the photoelectric conversion element is started sequentially for each line, and a charge signal corresponding to the amount of exposure is read. Image data is then generated from the read charge signal, and an image represented by the generated image data is displayed on a display.
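The row-sequential exposure described above can be sketched with a simple linear timing model (the model and all names below are my own simplification, not from the patent):

```python
# Simplified sketch (my own model, not from the patent): in
# rolling-shutter readout, exposure starts row by row, so row r
# begins exposing r/(n_rows - 1) of the rolling deviation after
# the first row.

def row_exposure_start(row: int, n_rows: int,
                       rolling_deviation_ms: float,
                       frame_start_ms: float = 0.0) -> float:
    """Exposure start time (ms) of one row in a rolling-shutter frame."""
    return frame_start_ms + row * rolling_deviation_ms / (n_rows - 1)

# With a 30 ms rolling deviation over 4000 rows, the final row starts
# exposing 30 ms after the initial row.
print(row_exposure_start(3999, 4000, 30.0))  # 30.0
```

A shorter rolling deviation compresses these start times together, which is the effect discussed below for recent electronic shutters.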

Disclosure of Invention

Technical problem to be solved by the invention

However, when rolling-shutter imaging is performed, a rolling deviation of the electronic shutter and a rolling deviation of the mechanical shutter occur. Here, the rolling deviation of the electronic shutter means, for example, the time difference from the start of exposure of the initial row to the start of exposure of the final row of the photoelectric conversion element. The rolling deviation of the mechanical shutter means, for example, the time difference from when the front curtain of the mechanical shutter passes the initial row to when it passes the final row of the photoelectric conversion element.

As an example, as shown in fig. 28, the rolling deviation of the electronic shutter of the imaging element is longer than the rolling deviation of the mechanical shutter.

When the frequency of the commercial power supply is 50 Hz (hertz), a light source such as a fluorescent lamp powered from the commercial power supply flickers at a frequency of 100 Hz. In such an environment, when photographing is performed by the rolling shutter method, line strobes appear in the vertical direction at intervals of 10 ms in a photographed image P1 obtained by the photographing, as shown in fig. 30, for example. In addition, when a through image is displayed, the plurality of line strobes appear to flow in the vertical direction.

When image pickup is performed by the rolling shutter method shown in fig. 28 using a full-size or medium-size photoelectric conversion element, for example, 4 line strobes appear in the picked-up image P1, as shown in fig. 30. Further, when photographing in the rolling shutter method shown in fig. 28 is performed with a photoelectric conversion element of APS-C (Advanced Photo System type-C) size, which is smaller than the full size or the medium size, for example, 3 line strobes appear in the photographed image P1, as shown in fig. 30.

On the other hand, as shown in fig. 29, for example, the rolling deviation time of recent electronic shutters is shorter than before, and is closer to the rolling deviation time of the mechanical shutter than it used to be.

That is, in rolling-shutter imaging as shown in fig. 29, the rolling deviation time of the electronic shutter is closer to that of the mechanical shutter than in the rolling-shutter imaging shown as an example in fig. 28.

Accordingly, when the rolling shutter method shown as an example in fig. 29 is used for shooting, the number of line strobes appearing in the captured image is reduced as compared with the rolling shutter method shown as an example in fig. 28.

For example, if photographing is performed with a full-size or medium-size photoelectric conversion element in the rolling shutter method shown in fig. 29, 2 line strobes occur in the photographed image P2, as shown in fig. 31. Further, when rolling-shutter photographing as shown in fig. 29 is performed with an APS-C size photoelectric conversion element, the interval between line strobes is longer than the vertical length of the image area corresponding to the APS-C size imaging area, as shown in fig. 31. Therefore, only 1 line strobe occurs in the image area corresponding to the APS-C size imaging area in the photographed image P2.
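The arithmetic behind these line-strobe counts can be sketched as follows; the rolling-deviation values used in the example calls are assumptions for illustration, not figures taken from the document:

```python
# Back-of-envelope sketch of the line-strobe counts above. The
# flicker frequency is twice the mains frequency (the lamp peaks on
# both AC half-waves), and the number of bright/dark bands in one
# frame is roughly the rolling deviation divided by the flicker
# period.

def flicker_period_ms(mains_hz: float) -> float:
    """Period of the light source's brightness cycle, in ms."""
    return 1000.0 / (2.0 * mains_hz)

def n_line_strobes(rolling_deviation_ms: float, mains_hz: float) -> int:
    """Approximate number of line strobes visible in one frame."""
    return int(rolling_deviation_ms / flicker_period_ms(mains_hz))

print(flicker_period_ms(50.0))     # 10.0 -> one band every 10 ms
print(n_line_strobes(40.0, 50.0))  # 4 (assumed slow readout, as in fig. 28)
print(n_line_strobes(15.0, 50.0))  # 1 (assumed fast readout, as in fig. 29)
```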

In order to perform rolling-shutter imaging that avoids the influence of the line strobe, it is important to determine an imaging timing at which that influence can be avoided. However, although the techniques described in the four documents above can detect the presence of the line strobe, it is difficult with them to specify an imaging timing at which the influence of the line strobe can be avoided. As an example, as shown in fig. 31, when only 1 line strobe occurs in the image area corresponding to the APS-C size imaging area in the captured image P2, it becomes all the more difficult to specify an imaging timing at which the influence of the line strobe can be avoided.

The same applies to imaging by the global shutter method. When imaging is performed by the global shutter method, a surface strobe appears on the photoelectric conversion element; in this case as well, in order to perform imaging that avoids the influence of the surface strobe, it is important to determine an imaging timing at which that influence can be avoided. With the techniques described in the four documents above, the presence or absence of the surface strobe can be detected, but it is difficult to specify an imaging timing at which its influence can be avoided.

An embodiment of the present invention provides an imaging element, an imaging device, an image data processing method, and a program, which can perform imaging while avoiding the influence of stroboscopic light.

Means for solving the technical problem

A 1st aspect according to the technology of the present invention is an imaging element including: a storage section that stores captured image data obtained by capturing an object at a 1st frame rate and is built in the imaging element; a processing section that executes processing based on the captured image data stored in the storage section and is built in the imaging element; and an output section that outputs image data for output based on the captured image data to the outside of the imaging element at a 2nd frame rate and is built in the imaging element, wherein the 1st frame rate is higher than the 2nd frame rate and is determined according to a generation cycle of a strobe, and the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing for avoiding an influence of the strobe on imaging by the imaging element.

Accordingly, the imaging element according to the 1st aspect of the technology of the present invention can perform imaging while avoiding the influence of the strobe.

A 2nd aspect relating to the technology of the present invention is the imaging element according to the 1st aspect, wherein the processing includes measurement detection processing of measuring a luminance difference between frames of the captured image data and detecting the strobe-influence-avoiding timing based on the measured luminance difference, and a processing time required for the measurement detection processing is determined based on the generation cycle.

Accordingly, the imaging element of the 2nd aspect according to the technology of the present invention can appropriately determine the processing time required for the measurement detection processing, compared with the case where the luminance difference between frames of the captured image data is not used.

A 3rd aspect according to the technology of the present invention is the imaging element according to the 2nd aspect, wherein the captured image data is classified between frames into 1st captured image data and 2nd captured image data obtained by capturing after the 1st captured image data, the luminance difference is a subtraction result of subtracting the luminance of the 2nd captured image data from the luminance of the 1st captured image data, and the strobe-influence-avoidance timing is a timing at which the luminance difference changes from a positive value to a negative value.

Accordingly, the imaging element according to the 3rd aspect of the technology of the present invention can detect the timing at which the image represented by the captured image data becomes brightest as the strobe-influence-avoiding timing.

A 4th aspect relating to the technology of the present invention is the imaging element according to the 3rd aspect, wherein the processing section terminates the measurement detection processing on the condition that the luminance difference changes from a positive value to a negative value twice.

Accordingly, the imaging element according to the 4th aspect of the technology of the present invention can minimize the processing time required for the measurement detection processing.

A 5th aspect relating to the technology of the present invention is the imaging element according to the 3rd or 4th aspect, wherein the 2nd captured image data is image data obtained by capturing 2 or more frames after the 1st captured image data.

Accordingly, the imaging element of the 5th aspect according to the technology of the present invention can suppress erroneous detection of the strobe influence avoidance timing, compared with the case of using a luminance difference between adjacent frames.
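The detection logic of the 3rd to 5th aspects can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function name, the `gap` and `max_transitions` parameters, and the synthetic sine-wave luminance are my own:

```python
# Sketch: subtract the luminance of the later frame from that of the
# earlier frame, and report the indices where this difference turns
# from a positive value to a negative value. Processing stops after a
# fixed number of transitions, mirroring the 4th aspect's early
# termination.
import math

def detect_avoidance_timings(luminance, gap=1, max_transitions=2):
    """Indices i where the difference luminance[i-1] - luminance[i-1+gap]
    is positive and luminance[i] - luminance[i+gap] is negative."""
    diffs = [luminance[i] - luminance[i + gap]
             for i in range(len(luminance) - gap)]
    timings = []
    for i in range(1, len(diffs)):
        if diffs[i - 1] > 0 and diffs[i] < 0:
            timings.append(i)
            if len(timings) >= max_transitions:
                break
    return timings

# A synthetic periodic brightness wave sampled 8 times per cycle:
lum = [math.sin(2.0 * math.pi * i / 8.0) for i in range(32)]
print(detect_avoidance_timings(lum))  # [6, 14]
```

A `gap` of 2 or more corresponds to the 5th aspect, which compares frames that are 2 or more frames apart to suppress erroneous detection.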

A 6th aspect according to the technology of the present invention is the imaging element according to any one of the 2nd to 5th aspects, wherein the 1st frame rate is set to a frame rate lower than the frame rate in the measurement detection processing until a predetermined condition is satisfied after the measurement detection processing is performed.

Accordingly, the imaging element of the 6th aspect according to the technology of the present invention can reduce power consumption, compared to a case where the frame rate in the measurement detection processing is applied to processing other than the measurement detection processing.

A 7th aspect relating to the technology of the present invention is the imaging element according to the 6th aspect, wherein the processing section performs the measurement detection processing again when the predetermined condition is satisfied.

Therefore, the 7th aspect of the technology of the present invention can reduce power consumption compared to the case where the measurement detection processing is always executed.

An 8th aspect relating to the technology of the present invention is the imaging element according to the 7th aspect, wherein the measurement detection processing is performed again when the predetermined condition is satisfied and the luminance difference measured by the processing section differs from the luminance difference measured within the processing time.

Accordingly, the imaging element according to the 8th aspect of the technology of the present invention can avoid performing unnecessary measurement detection processing.

A 9th aspect relating to the technology of the present invention is the imaging element according to any one of the 2nd to 8th aspects, wherein the luminance difference is a luminance difference of local regions corresponding to each other between frames of the captured image data.

Therefore, the 9th aspect of the technology of the present invention can measure a luminance difference that is less affected by movement of the subject and/or camera shake, compared with the case where the luminance difference is measured over the entire image area between frames of the captured image data.
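As a rough illustration of measuring the luminance difference over corresponding local regions only, the following sketch may help; the plain-list frame representation, the region tuple, and all names are my own, not from the source:

```python
# Hypothetical sketch of the 9th aspect: the luminance difference is
# measured only over a local region that corresponds between the two
# frames, so motion elsewhere in the frame does not disturb the
# flicker measurement. Frames are plain 2-D lists of pixel
# luminances; a region is (top, left, height, width).

def region_mean(frame, region):
    """Mean luminance of one rectangular region of a frame."""
    top, left, h, w = region
    rows = frame[top:top + h]
    return sum(sum(row[left:left + w]) for row in rows) / (h * w)

def local_luminance_diff(frame1, frame2, region):
    """Earlier-frame luminance minus later-frame luminance over the
    same local region of both frames."""
    return region_mean(frame1, region) - region_mean(frame2, region)

bright = [[2.0, 2.0], [2.0, 2.0]]
dark = [[1.0, 1.0], [1.0, 1.0]]
print(local_luminance_diff(bright, dark, (0, 0, 2, 2)))  # 1.0
```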

A 10th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 9th aspects, wherein the strobe-influence-avoidance timing is a timing at which the luminance of the image represented by the captured image data reaches a peak.

Therefore, the imaging element according to the 10th aspect of the technology of the present invention can obtain a brighter image than in a case where a timing different from the timing at which the brightness of the image represented by the captured image data reaches its peak is set as the strobe-influence-avoiding timing.

An 11th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 8th aspects, wherein the image data for output is image data based on captured image data obtained by capturing at the strobe-influence-avoiding timing.

Accordingly, the imaging element according to the 11th aspect of the technology of the present invention can output image data for output that avoids the influence of the strobe.

A 12th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 11th aspects, wherein the strobe is a line strobe, and the subject is imaged in an imaging region selected according to a clipping coefficient determined according to the strobe-influence-avoiding timing.

Therefore, the imaging element according to the 12th aspect of the technology of the present invention can suppress the strobe from appearing in the image, compared with the case where the image is captured over the entire imaging region.

A 13th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 12th aspects, wherein the generation cycle is set in advance to the generation cycle of a strobe caused by a light source that blinks when supplied with alternating current from a commercial power supply.

Accordingly, the imaging element according to the 13th aspect of the technology of the present invention can reduce the number of steps for determining the generation cycle of the strobe, compared with a case where the generation cycle of the strobe is not predetermined.

A 14th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 13th aspects, wherein the 1st frame rate increases as the generation cycle shortens.

Therefore, the 14th aspect of the technology of the present invention can improve the accuracy of detecting the strobe-influence-avoiding timing, compared to the case where the 1st frame rate is fixed.
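One way to realize "a higher 1st frame rate for a shorter generation cycle" is to tie the measurement frame rate to the flicker frequency. The oversampling factor of 8 below is my own assumption for illustration (figs. 13 to 15 of the document sample one 100 Hz flicker cycle at 200, 400, and 800 Hz):

```python
# Illustrative sketch of the 14th aspect: derive the 1st
# (measurement) frame rate from the strobe generation cycle so that
# a shorter cycle automatically yields a higher frame rate.

SAMPLES_PER_CYCLE = 8  # assumed samples per flicker cycle

def first_frame_rate(generation_period_ms: float) -> float:
    """1st frame rate (frames/s) for a strobe generation period in ms."""
    flicker_hz = 1000.0 / generation_period_ms
    return SAMPLES_PER_CYCLE * flicker_hz

print(first_frame_rate(10.0))  # 800.0 for 100 Hz flicker (50 Hz mains)
print(first_frame_rate(5.0))   # 1600.0 -- shorter cycle, higher rate
```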

A 15th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 14th aspects, wherein the captured image data is image data obtained by capturing the subject by a rolling shutter method.

Accordingly, the 15th aspect of the technology of the present invention can avoid the influence of the strobe generated when the subject is imaged by the rolling shutter method.

A 16th aspect relating to the technology of the present invention is the imaging element according to any one of the 1st to 15th aspects, wherein the imaging element is a laminated imaging element having a photoelectric conversion element, and the storage section is laminated on the photoelectric conversion element.

Therefore, the imaging element of the 16th aspect according to the technology of the present invention can improve the detection accuracy of the strobe-influence-avoiding timing, compared with the case of using an imaging element of a type in which the storage section is not laminated on the photoelectric conversion element.

A 17th aspect according to the technology of the present invention is an imaging apparatus including: the imaging element according to any one of the 1st to 16th aspects; and a control section that performs control for causing a display section to display an image based on the output image data output by the output section included in the imaging element.

Accordingly, the imaging apparatus according to the 17th aspect of the technology of the present invention can perform imaging while avoiding the influence of the strobe.

An 18th aspect relating to the technology of the present invention is an image data processing method for an imaging element incorporating: a storage section that stores captured image data obtained by capturing an object at a 1st frame rate; a processing section that performs processing based on the captured image data stored in the storage section; and an output section that outputs image data for output based on the captured image data to the outside at a 2nd frame rate, wherein the 1st frame rate is set to a frame rate higher than the 2nd frame rate, the 1st frame rate is determined according to a generation cycle of a strobe, and the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing that avoids an influence of the strobe on imaging by the imaging element.

Accordingly, the image data processing method according to the 18th aspect of the technology of the present invention can perform imaging while avoiding the influence of the strobe.

A 19th aspect of the technology of the present invention is a program for causing a computer to function as a processing section and an output section included in an imaging element, the imaging element incorporating: a storage section that stores captured image data obtained by capturing an object at a 1st frame rate; a processing section that performs processing based on the captured image data stored in the storage section; and an output section that outputs image data for output based on the captured image data to the outside at a 2nd frame rate, wherein the 1st frame rate is higher than the 2nd frame rate and is determined according to a generation cycle of a strobe, and the processing section detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing for avoiding an influence of the strobe on imaging by the imaging element.

Accordingly, the program according to the 19th aspect of the technology of the present invention can perform imaging while avoiding the influence of the strobe.

A 20th aspect according to the technology of the present invention is an imaging element including: a storage section that stores captured image data obtained by capturing an object at a 1st frame rate and is built in the imaging element; and a processor that performs processing based on the captured image data stored in the storage section, outputs image data for output based on the captured image data to the outside of the imaging element at a 2nd frame rate, and is built in the imaging element, wherein the 1st frame rate is higher than the 2nd frame rate and is determined according to a generation cycle of a strobe, and the processor detects, from the captured image data of a plurality of frames, a strobe influence avoidance timing that avoids an influence of the strobe on imaging by the imaging element.

According to an embodiment of the present invention, an effect of performing imaging while avoiding the influence of a strobe can be obtained.

Drawings

Fig. 1 is a perspective view showing an example of an external appearance of an imaging device as a lens interchangeable camera according to an embodiment.

Fig. 2 is a rear view showing the rear side of the imaging device according to the embodiment.

Fig. 3 is a block diagram showing an example of a hardware configuration of the imaging apparatus according to the embodiment.

Fig. 4 is a schematic configuration diagram showing an example of a configuration of a hybrid finder of the imaging apparatus according to the embodiment.

Fig. 5 is a schematic configuration diagram showing an example of a schematic configuration of an imaging element included in the imaging apparatus according to the embodiment.

Fig. 6 is a block diagram showing an example of a configuration of a main part of an imaging element included in the imaging apparatus according to the embodiment.

Fig. 7 is a conceptual diagram illustrating an example of a relationship between a rolling deviation and a strobe cycle characteristic in the imaging device according to the related art.

Fig. 8 is a conceptual diagram illustrating an example of a relationship between the rolling deviation and the strobe cycle characteristic in the imaging device according to the embodiment.

Fig. 9 is a graph showing an example of a voltage change of a commercial power supply at 50 Hz.

Fig. 10 is a graph showing an example of a flicker cycle characteristic of a flicker light source that flickers at a frequency of 100 Hz.

Fig. 11 is a conceptual diagram illustrating an example of captured images of a plurality of frames obtained by capturing an object by the imaging apparatus according to the embodiment.

Fig. 12 is an explanatory diagram for explaining an example of a method of detecting strobe-influence-avoiding timing according to the embodiment.

Fig. 13 is a conceptual diagram showing an example of a manner in which the luminance difference is sampled when the sampling frequency is 200 Hz.

Fig. 14 is a conceptual diagram showing an example of a manner in which the luminance difference is sampled when the sampling frequency is 400 Hz.

Fig. 15 is a conceptual diagram illustrating an example of a manner in which a luminance difference is sampled when the sampling frequency is 800 Hz.

Fig. 16 is a flowchart showing an example of the flow of the image pickup processing according to the embodiment.

Fig. 17 is a flowchart showing an example of a strobe-avoided imaging process flow according to the embodiment.

Fig. 18 is an explanatory diagram for explaining the termination timing of the measurement detection processing according to the embodiment.

Fig. 19 is an image diagram showing an example of a captured image obtained by capturing an image using a mechanical shutter and an example of a captured image obtained by capturing an image using an electronic shutter.

Fig. 20 is a conceptual diagram illustrating an example of a positional relationship between an image area and a line strobe when a trimming area is set.

Fig. 21 is a conceptual diagram illustrating an example of a mode of outputting a display frame according to the timing of the light amount peak.

Fig. 22 is a timing chart showing an example of sequence processing realized by executing the imaging processing and the strobe-avoided imaging processing according to the embodiment.

Fig. 23 is a conceptual diagram illustrating an example of a relationship between the flicker cycle characteristic of the initial flicker light source and the flicker cycle characteristic of the flicker light source several hours later.

Fig. 24 is a conceptual diagram showing a plot example of luminance differences when the luminance difference calculated in the entire one frame is affected by movement of an object and/or hand shake.

Fig. 25 is a conceptual diagram showing an example of plotting luminance differences when luminance differences calculated from a1 st divided image (upper divided image) obtained by dividing each of two captured images of two frames into two in the vertical direction are affected by movement of an object and/or hand shake, and an example of plotting luminance differences when luminance differences calculated from a2 nd divided image (lower divided image) obtained by dividing each of two captured images of two frames into two in the vertical direction are affected by movement of an object and/or hand shake.

Fig. 26 is a conceptual diagram illustrating an example of a mode in which the program according to the embodiment is installed in the imaging element from a storage medium in which the program according to the embodiment is stored.

Fig. 27 is a block diagram showing an example of a schematic configuration of a smart device incorporating the imaging element according to the embodiment.

Fig. 28 is a conceptual diagram illustrating an example of the relationship between the rolling deviation of the electronic shutter and the rolling deviation of the mechanical shutter.

Fig. 29 is a conceptual diagram illustrating an example of the relationship between the rolling deviation of the electronic shutter and the rolling deviation of the mechanical shutter when the rolling deviation of the electronic shutter is shorter than that illustrated in fig. 28.

Fig. 30 is an image diagram showing an example of a captured image obtained by the rolling shutter method illustrated in fig. 28.

Fig. 31 is an image diagram showing an example of a captured image obtained by the rolling shutter method illustrated in fig. 29.

Detailed Description

Hereinafter, an example of an embodiment of an imaging device according to the technique of the present invention will be described with reference to the drawings.

As an example, as shown in fig. 1, the imaging device 10 is a lens-interchangeable camera. The image pickup apparatus 10 is a digital camera that includes an image pickup apparatus body 12 and an interchangeable lens 14 mounted to the image pickup apparatus body 12 in an interchangeable manner, and that omits a reflex mirror. The interchangeable lens 14 includes an imaging lens 18, and the imaging lens 18 has a focus lens 16 that is movable in the optical axis direction by manual operation.

A hybrid finder (registered trademark) 21 is provided in the imaging apparatus main body 12. Here, the hybrid finder 21 is a finder that selectively uses, for example, an optical viewfinder (hereinafter referred to as "OVF") and an electronic viewfinder (hereinafter referred to as "EVF"). In addition, OVF is an abbreviation of "optical viewfinder". Also, EVF is an abbreviation of "electronic viewfinder".

The interchangeable lens 14 is mounted to the image pickup apparatus body 12 in an interchangeable manner. A focus ring 22 used in the manual focus mode is provided in the lens barrel of the interchangeable lens 14. The focus lens 16 moves in the optical axis direction in accordance with the manual rotation operation of the focus ring 22, and at a focus position corresponding to the subject distance, subject light is imaged on an imaging element 20 (see fig. 3) described later.

An OVF finder window 24 included in the hybrid finder 21 is provided on the front surface of the image pickup apparatus main body 12. A viewfinder switching lever (viewfinder switching unit) 23 is provided on the front surface of the imaging apparatus main body 12. When the finder switching lever 23 is rotated in the direction of arrow SW, switching is performed between an optical image that can be visually recognized by the OVF and an electronic image (through image) that can be visually recognized by the EVF. The through image is a moving image for display obtained by imaging with the photoelectric conversion element.

The optical axis L2 of the OVF is an optical axis different from the optical axis L1 of the interchangeable lens 14. On the upper surface of the imaging apparatus main body 12, a release button 25 and a dial 28 for setting an imaging system mode, a playback system mode, and the like are provided.

The release button 25 functions as an imaging preparation instructing unit and an imaging instructing unit, and can detect two stages of pressing operations, i.e., an imaging preparation instructing state and an imaging instructing state. The imaging preparation instruction state is, for example, a state in which the release button 25 is pressed from the standby position to an intermediate position (half-pressed position), and the imaging instruction state is a state in which the release button 25 is pressed to a final pressed position (full-pressed position) beyond the intermediate position. Hereinafter, the "state of being pressed from the standby position to the half-pressed position" is referred to as a "half-pressed state", and the "state of being pressed from the standby position to the full-pressed position" is referred to as a "full-pressed state".
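As a rough sketch, the two-stage press detection described above can be modeled as follows. The state names, the normalized travel scale, and the threshold values are assumptions for illustration, not part of the embodiment:

```python
from enum import Enum

class ReleaseState(Enum):
    STANDBY = 0        # not pressed (standby position)
    HALF_PRESSED = 1   # imaging preparation instruction state
    FULL_PRESSED = 2   # imaging instruction state

def classify_press(travel, half_pos=0.5, full_pos=1.0):
    """Classify the normalized press depth (0.0 = standby position)."""
    if travel >= full_pos:
        return ReleaseState.FULL_PRESSED
    if travel >= half_pos:
        return ReleaseState.HALF_PRESSED
    return ReleaseState.STANDBY
```

In this model, AE/AF would be started on the transition into `HALF_PRESSED`, and exposure on the transition into `FULL_PRESSED`.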

In the imaging apparatus 10 according to the present embodiment, as the operation mode, an imaging mode and a playback mode are selectively set in accordance with an instruction from a user. In the imaging mode, the manual focus mode and the autofocus mode are selectively set in accordance with an instruction from the user. In the autofocus mode, the release button 25 is half-pressed to adjust the imaging conditions, and then, immediately after the release button is fully pressed, exposure is performed. That is, after the release button 25 is half-pressed to start an AE (Automatic Exposure) function to set an exposure state, an AF (Auto-Focus) function is started to control focusing, and when the release button 25 is fully pressed, shooting is performed.

As an example, as shown in fig. 2, a touch panel/display 30, a cross key 32, a menu key 34, an instruction button 36, and a viewfinder eyepiece portion 38 are provided on the back surface of the imaging apparatus main body 12.

The touch panel/display 30 includes a liquid crystal display (hereinafter referred to as the "1 st display") 40 and a touch panel 42 (see fig. 3).

The 1 st display 40 displays image and character information and the like. The 1 st display 40 displays a through image (live view image) which is an example of a continuous frame image obtained by shooting in continuous frames in the image capturing mode. The 1 st display 40 is also used to display a still image, which is an example of a single frame image captured in a single frame when an instruction to capture a still image is given. The 1 st display 40 is also used to display a playback image, a menu screen, and the like in the playback mode.

The touch panel 42 is a transmission type touch panel, and overlaps with a surface of the display area of the 1 st display 40. The touch panel 42 detects a contact by a pointer such as a finger or a stylus pen. The touch panel 42 outputs detection result information indicating a detection result (whether or not the pointer is in contact with the touch panel 42) to a predetermined output destination (for example, a CPU52 (see fig. 3) described later) at a predetermined cycle (for example, 100 msec). The detection result information includes two-dimensional coordinates (hereinafter, referred to as "coordinates") that can determine the position of the pointer-based contact on the touch panel 42 in the case where the pointer-based contact is detected by the touch panel 42, and does not include coordinates in the case where the pointer-based contact is not detected by the touch panel 42.
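The detection-result information described above could be modeled as follows. The dictionary structure and field names are assumptions for illustration; the embodiment only specifies that coordinates are present exactly when a contact is detected:

```python
def detection_result(contact):
    """Build one cycle of detection-result information.

    contact: (x, y) coordinates while a pointer touches the panel, else None.
    """
    info = {"touching": contact is not None}
    if contact is not None:
        info["coordinates"] = contact  # position of the contact on the panel
    return info
```

Such information would be produced once per detection cycle (e.g., every 100 ms) and sent to the output destination such as the CPU52.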

The cross key 32 functions as a multi-function key that outputs various command signals such as selection of one or more menus, zooming, and/or frame transfer. The menu key 34 is an operation key having the following functions: a function as a menu button for issuing an instruction to display one or more menus on the screen of the 1 st display 40; and a function as an instruction button for issuing an instruction to determine and execute the selected content or the like. The instruction button 36 is operated when deleting a desired object such as a selection item, canceling the specified content, and returning to the previous operation state or the like.

The imaging apparatus 10 has a still image imaging mode and a moving image imaging mode as operation modes of the imaging system. The still image capturing mode is an operation mode for recording a still image obtained by capturing an object by the image capturing device 10, and the moving image capturing mode is an operation mode for recording a moving image obtained by capturing an object by the image capturing device 10.

As an example, as shown in fig. 3, the imaging device 10 includes a mount 46 (see also fig. 1) provided in the imaging device body 12 and a mount 44 on the interchangeable lens 14 side corresponding to the mount 46. The interchangeable lens 14 is replaceably attached to the image pickup apparatus body 12 by coupling the mount 44 to the mount 46.

The imaging lens 18 includes an aperture 47 and a motor 49. The diaphragm 47 is disposed closer to the image pickup apparatus main body 12 than the focus lens 16, and is connected to a motor 49. The diaphragm 47 is operated by power of a motor 49 to adjust exposure.

The imaging lens 18 includes a slide mechanism 48 and a motor 50. The slide mechanism 48 moves the focus lens 16 along the optical axis L1 by operating the focus ring 22. The focus lens 16 is slidably attached to the slide mechanism 48 along the optical axis L1. The slide mechanism 48 is connected to a motor 50, and the slide mechanism 48 receives power of the motor 50 to slide the focus lens 16 along the optical axis L1.

The motors 49 and 50 are connected to the image pickup apparatus main body 12 via the mounts 44 and 46, and their driving is controlled in accordance with a command from the image pickup apparatus main body 12. In the present embodiment, a stepping motor is applied as an example of the motors 49 and 50. Accordingly, the motors 49 and 50 operate in synchronization with pulse power in accordance with a command from the imaging apparatus main body 12. Further, although the example shown in fig. 3 shows an example in which the motors 49 and 50 are provided on the imaging lens 18, the present invention is not limited to this, and the motors 49 and 50 may be provided on the imaging apparatus main body 12.

The imaging apparatus 10 is a digital camera that records a still image and a moving image obtained by imaging a subject. The imaging device main body 12 includes an operation unit 54, an external interface (I/F) 63, and a subsequent stage circuit 90. The subsequent stage circuit 90 is a circuit on the side of receiving data sent from the imaging element 20. In the present embodiment, an IC (Integrated Circuit) is used as the subsequent stage circuit 90. An example of the IC is an LSI (Large-Scale Integration). The subsequent stage circuit 90 is an example of the "circuit" according to the technique of the present invention.

The subsequent stage circuit 90 includes a CPU (Central Processing Unit) 52, an I/F56, a main storage Unit 58, an auxiliary storage Unit 60, an image Processing Unit 62, a1 st display control Unit 64, a2 nd display control Unit 66, a position detection Unit 70, and a device control Unit 74. In the present embodiment, one CPU is exemplified as the CPU52, but the technique of the present invention is not limited to this, and a plurality of CPUs may be used instead of the CPU 52. That is, various processes performed by the CPU52 may be performed by one processor or physically separate processors.

In the present embodiment, the image processing unit 62, the 1 st display control unit 64, the 2 nd display control unit 66, the position detection unit 70, and the device control unit 74 are each realized by an ASIC (Application Specific Integrated Circuit), but the technique of the present invention is not limited to this. For example, at least one of a PLD (Programmable Logic Device) and an FPGA (Field-Programmable Gate Array) may be used instead of the ASIC. Also, at least one of an ASIC, a PLD, and an FPGA may be employed. Also, a computer including a CPU, a ROM (Read Only Memory), and a RAM (Random Access Memory) may be employed. The number of CPUs may be one or plural. At least one of the image processing unit 62, the 1 st display control unit 64, the 2 nd display control unit 66, the position detection unit 70, and the device control unit 74 may be implemented by a combination of a hardware configuration and a software configuration.

The CPU52, I/F56, main storage section 58, auxiliary storage section 60, image processing section 62, 1 st display control section 64, 2 nd display control section 66, operation section 54, external I/F63, and touch panel 42 are connected to each other via a bus 68.

The CPU52 controls the entire image pickup apparatus 10. In the imaging apparatus 10 according to the present embodiment, in the autofocus mode, the CPU52 controls the driving of the motor 50 to perform focus control so that the contrast value of an image obtained by imaging is maximized. In the autofocus mode, the CPU52 calculates AE information that is a physical quantity indicating the brightness of an image obtained by shooting. When the release button 25 is set to the half-pressed state, the CPU52 derives the shutter speed and the F value corresponding to the brightness of the image indicated by the AE information. Then, the exposure state is set by controlling the relevant parts so that the derived shutter speed and F value are obtained.

The main storage unit 58 is a volatile memory, for example, a RAM. The auxiliary storage unit 60 is a nonvolatile memory, for example, a flash memory or an HDD (Hard Disk Drive).

The auxiliary storage unit 60 stores an imaging program 60A. The CPU52 reads the imaging program 60A from the auxiliary storage unit 60, and expands the read imaging program 60A into the main storage unit 58. The CPU52 executes an imaging process (see fig. 16) described later in accordance with the imaging program 60A expanded into the main storage unit 58.

The operation unit 54 is a user interface that is operated by a user when various instructions are given to the subsequent stage circuit 90. The operation section 54 includes a release button 25, a dial 28, a viewfinder switching lever 23, a cross key 32, a menu key 34, and an instruction button 36. Various instructions received through the operation unit 54 are output as operation signals to the CPU52, and the CPU52 executes processing corresponding to the operation signals input from the operation unit 54.

The position detection unit 70 is connected to the CPU 52. The position detection unit 70 is connected to the focus ring 22 via the mounts 44 and 46, detects the rotation angle of the focus ring 22, and outputs rotation angle information indicating the rotation angle as a detection result to the CPU 52. The CPU52 executes processing corresponding to the rotation angle information input from the position detection unit 70.

When the image pickup mode is set, image light representing a subject is formed on the light receiving surface of the color imaging element 20 via the imaging lens 18 including the focus lens 16 that is movable by manual operation and the mechanical shutter 72.

The device control section 74 is connected to the CPU 52. The device control section 74 is connected to the imaging element 20 and the mechanical shutter 72. Further, the device control section 74 is connected to the motors 49 and 50 of the imaging lens 18 via the mounts 44 and 46.

The apparatus control section 74 controls the imaging element 20, the mechanical shutter 72, and the motors 49, 50 under the control of the CPU 52.

As an example, as shown in fig. 4, the hybrid finder 21 includes an OVF76 and an EVF 78. The OVF76 is a reverse galileo viewfinder with an objective lens 81 and an eyepiece lens 86, and the EVF78 has a2 nd display 80, a prism 84, and an eyepiece lens 86.

A liquid crystal shutter 88 is disposed in front of the objective lens 81, and when the EVF78 is used, the liquid crystal shutter 88 shields light so that an optical image does not enter the objective lens 81.

The prism 84 reflects the electronic image or various information displayed on the 2 nd display 80 and guides it to the eyepiece lens 86, and synthesizes the optical image with the electronic image and/or various information displayed on the 2 nd display 80.

Here, when the finder switching lever 23 is rotated in the direction of arrow SW shown in fig. 1, the OVF mode in which an optical image can be viewed by the OVF76 and the EVF mode in which an electronic image can be viewed by the EVF78 are alternately switched every rotation.

In the OVF mode, the 2 nd display control unit 66 controls the liquid crystal shutter 88 to be in a non-light-shielding state so that the optical image can be visually recognized from the eyepiece unit. In the EVF mode, the 2 nd display control unit 66 controls the liquid crystal shutter 88 to be in a light-shielding state so that only the electronic image displayed on the 2 nd display 80 can be visually recognized from the eyepiece unit.

In the following description, for convenience of explanation, the display device is referred to as a "display device" without reference characters when it is not necessary to separately describe the 1 st display 40 and the 2 nd display 80. The display device is an example of the "display unit" according to the technique of the present invention. In addition, hereinafter, for convenience of explanation, when it is not necessary to separately explain the 1 st display control unit 64 and the 2 nd display control unit 66, they are referred to as "display control units" without reference characters.

The imaging element 20 is an example of a "laminated imaging element" according to the technique of the present invention. The imaging element 20 is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor. As an example, as shown in fig. 5, the imaging element 20 incorporates a photoelectric conversion element 92, a processing circuit 94, and a memory 96. In the imaging element 20, the processing circuit 94 and the memory 96 are stacked on the photoelectric conversion element 92. The processing circuit 94 is an example of a "processing unit" according to the technique of the present invention, and the memory 96 is an example of a "storage unit" according to the technique of the present invention.

The processing circuit 94 is, for example, an LSI, and the memory 96 is, for example, a RAM. In the present embodiment, a DRAM (Dynamic Random Access Memory) is used as an example of the Memory 96, but the technique of the present invention is not limited thereto, and may be an SRAM (Static Random Access Memory).

In the present embodiment, the processing circuit 94 is implemented by an ASIC, but the technique of the present invention is not limited to this. For example, at least one of a PLD and an FPGA may be used instead of the ASIC. Also, at least one of an ASIC, PLD, and FPGA may be employed. Also, a computer including a CPU, a ROM, and a RAM may be employed. The number of the CPUs may be one or plural. The processing circuit 94 may be implemented by a combination of a hardware configuration and a software configuration.

The photoelectric conversion element 92 has a plurality of photosensors arranged in a matrix. In this embodiment, a photodiode is used as an example of the photosensor. As an example of the plurality of photosensors, photodiodes of 4896 × 3265 pixels can be given.

The photoelectric conversion element 92 includes color filters, which include a G filter corresponding to G (green) that contributes most to obtaining a luminance signal, an R filter corresponding to R (red), and a B filter corresponding to B (blue). In the present embodiment, the G filters, the R filters, and the B filters are arranged with a predetermined periodicity in each of the row direction (horizontal direction) and the column direction (vertical direction) over the plurality of photodiodes of the photoelectric conversion element 92. Therefore, when the imaging apparatus 10 performs synchronization processing or the like on the R, G, and B signals, the processing can be performed according to the repetitive pattern. The synchronization processing refers to the following processing: all color information is calculated for each pixel from a mosaic image corresponding to the color filter arrangement of the single-plate color imaging element. For example, in the case of an imaging element composed of color filters of the three colors of RGB, the synchronization processing refers to processing of calculating all RGB color information for each pixel from a mosaic image composed of RGB.
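The synchronization processing described above can be illustrated by a deliberately simplified nearest-neighbor demosaic. The RGGB tiling, helper names, and search strategy are assumptions for illustration; a real imaging element would use a more elaborate interpolation:

```python
def bayer_color(y, x):
    # assumed RGGB tiling: even row -> R G, odd row -> G B
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_nearest(mosaic):
    """Compute full RGB information for every pixel of a single-plate mosaic
    by copying the nearest sample of each missing color (Manhattan distance)."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            rgb = {}
            for ch in ("R", "G", "B"):
                best_val, best_d = None, None
                for yy in range(h):
                    for xx in range(w):
                        if bayer_color(yy, xx) != ch:
                            continue
                        d = abs(yy - y) + abs(xx - x)
                        if best_d is None or d < best_d:
                            best_val, best_d = mosaic[yy][xx], d
                rgb[ch] = best_val
            out[y][x] = (rgb["R"], rgb["G"], rgb["B"])
    return out
```

The brute-force search is O(n^2) per pixel and is only meant to make the per-pixel "all color information" idea concrete on a tiny mosaic.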

Although the CMOS image sensor is exemplified as the imaging element 20, the technique of the present invention is not limited to this, and the technique of the present invention is also applicable to, for example, a CCD (Charge Coupled Device) image sensor as the photoelectric conversion element 92.

The imaging element 20 has a so-called electronic shutter function, and the charge accumulation time of each photodiode in the photoelectric conversion element 92 is controlled by activating the electronic shutter function under the control of the device control section 74. The charge accumulation time is a so-called shutter speed.

In the imaging apparatus 10, still image shooting and moving image shooting are performed in a rolling shutter method. The still image shooting is realized by activating the electronic shutter function and operating the mechanical shutter 72, and the through image shooting is realized by activating the electronic shutter function without operating the mechanical shutter 72.

The processing circuit 94 is controlled by the device control section 74. The processing circuit 94 reads captured image data obtained by capturing an object by the photoelectric conversion element 92. Here, the "captured image data" refers to image data representing an object. The captured image data is signal charges accumulated in the photoelectric conversion element 92. The processing circuit 94 performs a/D (Analog/Digital) conversion on the captured image data read from the photoelectric conversion element 92. The processing circuit 94 stores captured image data obtained by a/D conversion of the captured image data in the memory 96. The processing circuit 94 acquires captured image data from the memory 96, and outputs image data based on the acquired captured image data, that is, output image data, to the I/F56 of the subsequent circuit 90. Hereinafter, for convenience of explanation, "image data for output, which is image data based on captured image data" will be simply referred to as "image data for output".

The processing circuit 94 performs the 1 st processing and the 2 nd processing on the captured image data. The 1 st processing refers to the following processing: the captured image data is read from the photoelectric conversion element 92, and the read captured image data is stored in the memory 96. The 2 nd processing refers to processing of outputting the image data for output to the outside of the imaging element 20. Here, the "outside of the imaging element 20" refers to, for example, the I/F56 of the subsequent stage circuit 90. The subsequent stage circuit 90 is an example of the "circuit" according to the technique of the present invention.

In the imaging element 20, a subject is photographed at the 1 st frame rate. The processing circuit 94 performs the 1 st processing at the 1 st frame rate and performs the 2 nd processing at the 2 nd frame rate. The 1 st frame rate is a higher frame rate than the 2 nd frame rate.

In the present embodiment, 60fps (frames per second) is used as an example of the 2 nd frame rate, but the technique of the present invention is not limited to this, and the 2 nd frame rate may be changed as long as the relationship "2 nd frame rate < 1 st frame rate" is satisfied.

The 1 st frame rate is set to a frame rate that can be changed within a range in which it does not become the 2 nd frame rate or less. For example, in the present embodiment, the 1 st frame rate is a frame rate that can be switched between a high frame rate and a low frame rate by the processing circuit 94.

The high frame rate refers to a frame rate higher than the low frame rate. In the present embodiment, 100fps is used as an example of the low frame rate. In the present embodiment, the high frame rate is classified into a 1 st high frame rate and a 2 nd high frame rate. The 1 st high frame rate is a frame rate used when a strobe generation period described later is 10 ms, and the 2 nd high frame rate is a frame rate used when the strobe generation period described later is 8.33 ms. In the present embodiment, 200fps is used as an example of the 1 st high frame rate, and 240fps is used as an example of the 2 nd high frame rate. In addition, between the 1 st high frame rate and the 2 nd high frame rate, the relationship "1 st high frame rate < 2 nd high frame rate" holds. Also, for example, the high frame rate may be a frame rate higher than 240fps, such as 400fps, 480fps, 800fps, or 960fps.
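Under the example values above, each high frame rate equals twice the flicker frequency implied by the strobe generation period, so that two frames fall within each flicker cycle. A minimal sketch of such a selection rule (the function name and interface are assumptions, not the embodiment's actual control logic) might be:

```python
LOW_FRAME_RATE_FPS = 100.0  # low frame rate value from the embodiment

def select_first_frame_rate(strobe_period_ms, high_mode):
    """Return the 1st frame rate in fps.

    In high-frame-rate operation the rate is twice the flicker frequency,
    which reproduces the values in the text: a 10 ms strobe generation
    period gives 200 fps, and an 8.33 ms period gives roughly 240 fps.
    """
    if not high_mode:
        return LOW_FRAME_RATE_FPS
    flicker_hz = 1000.0 / strobe_period_ms  # flicker cycles per second
    return 2.0 * flicker_hz
```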

A strobe may be reflected in output image data obtained by imaging a subject with the imaging element 20. In the case of shooting in the rolling shutter method, for example, as shown in fig. 7 and 8, a line-shaped strobe, that is, a line strobe, is generated. Fig. 7 shows an example of a state in which a through image obtained by imaging a subject with an imaging device according to the related art is displayed on a display. Fig. 8 shows an example of a state in which a through image obtained by imaging the subject by the imaging device 10 is displayed on the display device.

The line strobe is a phenomenon caused by the rolling deviation of the imaging device 10 and the flicker of a light source that periodically flickers (hereinafter referred to as a "flicker light source"). In the case where the through image is displayed on the display device, the line strobe appears to flow in the vertical direction. In the example shown in fig. 7, two line strobes occur within the through image. In the example shown in fig. 8, since the rolling deviation is shorter than in the example shown in fig. 7, one line strobe occurs in the through image.

As an example of the flicker light source, a fluorescent lamp may be mentioned. In the case where an alternating current is supplied from a commercial power supply to a fluorescent lamp, the flicker period of the fluorescent lamp is half of the voltage period of the commercial power supply. For example, as shown in fig. 9 and 10, when the frequency of the commercial power supply is 50 Hz, the flicker frequency of the fluorescent lamp is 100 Hz, and when the frequency of the commercial power supply is 60 Hz, the flicker frequency of the fluorescent lamp is 120 Hz. Here, although the fluorescent lamp is exemplified as the flicker light source, the flicker light source is not limited to the fluorescent lamp. The flicker light source may be, for example, an LED (Light Emitting Diode) used for a display connected to a personal computer, illumination, or the like.
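The halving of the period (doubling of the frequency) follows from the light output peaking on both half-cycles of the AC voltage. The following sketch simply restates that relationship in code (function names are for illustration only):

```python
def flicker_frequency_hz(mains_hz):
    """A fluorescent lamp on AC mains flickers at twice the supply frequency,
    because the light output peaks on both half-cycles of the voltage."""
    return 2.0 * mains_hz

def strobe_generation_period_ms(mains_hz):
    """Blinking period of the flicker light source, in milliseconds."""
    return 1000.0 / flicker_frequency_hz(mains_hz)
```

This reproduces the values used elsewhere in the text: 50 Hz mains gives a 10 ms strobe generation period, and 60 Hz mains gives approximately 8.33 ms.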

In the imaging apparatus 10, in order to realize imaging in which a line strobe is not reflected in an image represented by output image data, that is, imaging in which the influence of the line strobe is avoided, the processing circuit 94 determines the 1 st frame rate in accordance with the generation cycle of the line strobe. That is, the processing circuit 94 changes the 1 st frame rate from the low frame rate to the 1 st or 2 nd high frame rate in accordance with the generation cycle of the line strobe (hereinafter, referred to as "strobe generation cycle").

The strobe generation period is a blinking period of the blinking light source. Therefore, for example, when photographing is performed in an environment illuminated by a fluorescent lamp, the blinking period of the fluorescent lamp is used as a strobe generation period by the processing circuit 94. The strobe generation cycle used by the processing circuit 94 may be a variable value that is changed in accordance with an instruction received by the touch panel 42 and/or the operation unit 54, or may be a fixed value, for example.

As an example, as shown in fig. 6, the processing circuit 94 includes a photoelectric conversion element driving circuit 94A, an AD (Analog-to-Digital) conversion circuit 94B, an image processing circuit 94C, an output circuit 94D, and a storage circuit 94E. The processing circuit 94 operates under the control of the CPU52 via the device control unit 74. The storage circuit 94E stores a clipping coefficient derivation table 98. The clipping coefficient derivation table 98 is a table used in the strobe-avoided imaging processing described later in detail.

The photoelectric conversion element driving circuit 94A is connected to the photoelectric conversion element 92 and the AD conversion circuit 94B. The memory 96 is connected to the AD conversion circuit 94B and the image processing circuit 94C. The image processing circuit 94C is connected to the output circuit 94D. The output circuit 94D is connected to the I/F56 of the subsequent stage circuit 90.

The photoelectric conversion element driving circuit 94A controls the photoelectric conversion element 92 under the control of the apparatus control section 74, and reads analog picked-up image data from the photoelectric conversion element 92. The AD conversion circuit 94B digitizes the captured image data read by the photoelectric conversion element drive circuit 94A, and stores the digitized captured image data in the memory 96. The memory 96 is a memory capable of storing captured image data of a plurality of frames. The image processing circuit 94C performs processing on the captured image data.

In the imaging apparatus 10, in order to realize imaging in which a line strobe is not reflected in an image represented by the captured image data (hereinafter referred to as a "captured image"), the strobe-influence-avoiding timing is detected by the processing circuit 94. The strobe-influence-avoiding timing is a timing at which the influence of the line strobe on the imaging of the imaging element 20 is avoided. In the present embodiment, as an example of the strobe-influence-avoiding timing, a timing at which the luminance of the captured image reaches a peak is used. The timing at which the luminance of the captured image reaches the peak is, for example, a timing at which the luminance difference of the captured images between adjacent frames becomes "0". The strobe-influence-avoiding timing does not necessarily have to be the timing at which the luminance of the captured image reaches the peak. In this case, for example, the strobe-influence-avoiding timing may be a timing that deviates from the peak of the luminance of the captured image within a range in which no strobe occurs in the captured image.

Here, the difference between the average luminance of the entire captured image of one frame and the average luminance of the entire captured image of another frame is used as the luminance difference, but the technique of the present invention is not limited to this. For example, the luminance difference may be a difference between the average luminance of a local region of the captured image of one frame and the average luminance of a local region of the captured image of another frame.
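Under the whole-frame definition above, the luminance difference can be sketched as follows. Frames are represented as 2-D lists of per-pixel luminance values; the helper names are assumptions for illustration:

```python
def frame_mean_luminance(frame):
    """Average luminance over an entire frame (2-D list of pixel values)."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def luminance_difference(latest, previous):
    """Whole-frame average luminance of the latest frame minus that of the
    previous frame, as used for the plotted points in fig. 12."""
    return frame_mean_luminance(latest) - frame_mean_luminance(previous)
```

The local-region variant mentioned above would simply apply `frame_mean_luminance` to a sub-rectangle of each frame instead of the whole frame.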

Here, an example of a method of detecting the strobe-influence-avoiding timing will be described with reference to figs. 11 and 12. The example of fig. 11 shows the captured images of the 1st to Nth frames as an example of captured images of a plurality of frames obtained by capturing at a high frame rate. The 1st frame is the latest frame, and the Nth frame is the oldest frame.

As an example, as shown in fig. 12, the sequence of the method for detecting the strobe-influence-avoiding timing is divided into step S1 to step S4. First, in step S1, every time new captured image data is stored in the memory 96, a luminance difference between the latest captured image data and the captured image data of the previous frame is calculated, and the calculated luminance difference is plotted in time series to generate a luminance difference period characteristic graph G indicating the period characteristic of the luminance difference.

In the example shown in fig. 12, point A1 is the luminance difference between the captured image data of the 1st frame and the captured image data of the 2nd frame, and point A2 is the luminance difference between the captured image data of the 2nd frame and the captured image data of the 3rd frame.

In step S2, a timing at which the luminance difference changes from a negative value to a positive value and becomes "0", and a timing at which the luminance difference changes from a positive value to a negative value and becomes "0", are detected. In the example shown in fig. 12, the luminance difference at the "▲" position and the luminance difference at the "▼" position in the luminance difference period characteristic graph G are both "0". In the example shown in fig. 12, the "▲" position indicates a point at which the luminance difference changes from a negative value to a positive value and becomes "0", and the "▼" position indicates a point at which the luminance difference changes from a positive value to a negative value and becomes "0".

In other words, the "▲" position shown in fig. 12 is a position where the luminance difference is "0" in a range where the differential coefficient of the luminance difference period characteristic graph G shows a positive value. Likewise, the "▼" position shown in fig. 12 is a position where the luminance difference is "0" in a range where the differential coefficient of the luminance difference period characteristic graph G shows a negative value.

In step S3, the time intervals at which the luminance difference becomes "0" are detected. In the example of fig. 12, time interval B1 and time interval B2 are detected. Time interval B1 is the time interval from the "▼" position to the "▲" position, and time interval B2 is the time interval from the "▲" position to the "▼" position.

In step S4, the strobe-influence-avoiding timing is detected from the detection results of step S2 and step S3. In the example shown in fig. 12, the timing at which the captured image becomes brightest on the luminance difference period characteristic graph G, that is, the timing at which the luminance reaches its peak, is the strobe-influence-avoiding timing, and the "▼" position on the luminance difference period characteristic graph G is the strobe-influence-avoiding timing. The strobe-influence-avoiding timing, that is, the "▼" position on the luminance difference period characteristic graph G, occurs at a constant period of "time interval B1 + time interval B2".
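The sequence of steps S1 to S4 can be sketched as follows (an illustration under stated assumptions, not the embodiment's implementation): the zero crossings at which the luminance difference changes from a positive to a negative value are located, and the interval between two consecutive such crossings gives the constant period at which the strobe-influence-avoiding timing recurs.

```python
def detect_strobe_period(diffs, dt):
    """Estimate the recurrence period of the strobe-influence-avoiding timing.

    diffs: luminance differences plotted in time series (graph G, step S1)
    dt: sampling interval in seconds (the reciprocal of the 1st frame rate)
    Returns the interval between the first two positive-to-negative zero
    crossings, or None if fewer than two such crossings are found.
    """
    crossings = []  # times where the difference changes from + to - (step S2)
    for i in range(1, len(diffs)):
        if diffs[i - 1] > 0 and diffs[i] <= 0:
            # linearly interpolate the exact "0" position between samples
            frac = diffs[i - 1] / (diffs[i - 1] - diffs[i])
            crossings.append((i - 1 + frac) * dt)
    if len(crossings) < 2:
        return None
    return crossings[1] - crossings[0]  # steps S3-S4: one full period
```

The linear interpolation between samples is one reason a higher sampling frequency yields a more accurate graph G: the closer the samples, the better the "0" position estimate.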

Detecting the strobe-influence-avoiding timing is important for realizing shooting that avoids the influence of line strobing. Therefore, in the imaging element 20, the measurement detection processing is performed by the processing circuit 94 at the 1st frame rate, in the same manner as the 1st processing. In other words, the measurement detection processing is included in the processing performed by the processing circuit 94.

The measurement detection processing refers to the following processing: a luminance difference between frames of the captured image data is measured, and the strobe-influence-avoiding timing is detected from the measured luminance difference. An example of the measurement detection processing is the series of processes shown in step S1 to step S3 of fig. 12.

The detection of the strobe-influence-avoiding timing is realized on the premise that the luminance difference period characteristic graph G is created. In order to improve the detection accuracy of the strobe-influence-avoiding timing, it is necessary to improve the accuracy of the luminance difference period characteristic graph G, and to do so it is necessary to increase the sampling frequency of the luminance difference in step S1 (that is, the number of measurements of the luminance difference per second). However, since a conventional imaging element needs to thin out the captured image data in order to realize the sampling of the luminance difference shown in fig. 12, the accuracy of the luminance difference decreases by the amount of the thinning. If the accuracy of the luminance difference decreases, the strobe-influence-avoiding timing is likely to be erroneously detected. In contrast, the imaging element 20 can secure the necessary sampling frequency without thinning out the captured image data when detecting the strobe-influence-avoiding timing.

In order to ensure the detection accuracy of the strobe-influence-avoiding timing, it is important to appropriately determine the sampling frequency. It is also preferable to appropriately determine the sampling frequency from the viewpoint of reducing power consumption.

The sampling frequency is uniquely determined by the 1 st frame rate. Therefore, in the imaging element 20, the processing circuit 94 determines the 1 st frame rate according to the strobe generation period.

For example, when the processing circuit 94 executes the measurement detection processing during the period from when the release button 25 is fully pressed to when shooting is started, it is preferable that the processing time required for the measurement detection processing (hereinafter simply referred to as the "processing time") be as short as possible. A reduction in the processing time means a reduction in the time lag from the fully-pressed state to the start of shooting.

In the present embodiment, the shortest time required to detect the position where the luminance difference becomes "0" 3 times is adopted as an example of the processing time. As shown in fig. 12, for example, the shortest time required to detect the position where the luminance difference becomes "0" 3 times means the shortest time required to detect the "▼" position 2 times.

In the present embodiment, the shortest time required to detect the position where the luminance difference becomes "0" 3 times is adopted as the processing time, but the technique of the present invention is not limited to this. For example, the processing time may be the shortest time required to detect the position where the luminance difference becomes "0" 2 times. The shortest time required to detect the position where the luminance difference becomes "0" 2 times means the shortest time required to detect the time interval B1 shown in step S3 of fig. 12 as an example. After the time interval B1 is detected, if the luminance difference is regarded as becoming "0" again at the same time interval as the time interval B1, the strobe-influence-avoiding timing can be specified in the same manner as when the position where the luminance difference becomes "0" is detected 3 times.

The processing circuit 94 determines the processing time according to the strobe generation cycle. For example, when shooting is performed in an environment where a blinking light source having a blinking frequency of 100 Hz blinks, the processing circuit 94 sets the 1st frame rate to the 1st high frame rate. In the present embodiment, 200 fps is used as an example of the 1st high frame rate. In this case, the processing circuit 94 sets 30 ms (= 10 ms (= 1/100 Hz) × 3) as the processing time. Here, "3" refers to the number of positions at which the luminance difference becomes "0".

Similarly, for example, in the case of shooting in an environment where a blinking light source having a blinking frequency of 120 Hz blinks, the processing circuit 94 sets the 1st frame rate to the 2nd high frame rate. In the present embodiment, 240 fps is used as an example of the 2nd high frame rate. In this case, the processing circuit 94 sets 25 ms (= 8.33 ms (= 1/120 Hz) × 3) as the processing time. Here, "3" refers to the number of positions at which the luminance difference becomes "0".
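The two concrete settings above can be captured in a small helper; the 2× mapping from blinking frequency to the high frame rate is an assumption generalized from the 200 fps and 240 fps examples given in the text:

```python
def frame_rate_and_processing_time(flicker_hz, zero_positions=3):
    # 100 Hz flicker -> 200 fps and 30 ms; 120 Hz flicker -> 240 fps and 25 ms.
    # zero_positions is the number of "0" positions to detect (3 in the text).
    frame_rate_fps = 2 * flicker_hz
    processing_time_s = zero_positions / flicker_hz  # one flicker period per "0"
    return frame_rate_fps, processing_time_s
```

Reducing `zero_positions` to 2, as the text allows, shortens the processing time by one flicker period.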

The processing times of 30 ms and 25 ms are the shortest processing times, and the processing time may be longer than these. This is because, when the position where the luminance difference becomes "0" is detected 3 times while taking movement of the subject, camera shake, or the like into consideration, processing times of 30 ms and 25 ms may be insufficient. In that case, for example, the processing circuit 94 may detect the movement of the subject or the degree of camera shake, and add the necessary time to the processing time of 30 ms or 25 ms in accordance with the detection result.

In order to improve the accuracy of the luminance difference period characteristic graph G, it is preferable to increase the sampling frequency. For example, when the sampling frequency is 200Hz, the luminance difference is measured 3 times in 1 cycle of the blinking light source as shown in fig. 13. For example, when the sampling frequency is 400Hz, the luminance difference is measured 5 times in 1 cycle of the blinking light source as shown in fig. 14. For example, when the sampling frequency is 800Hz, the luminance difference is measured 9 times in 1 cycle of the blinking light source as shown in fig. 15.
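The relation between sampling frequency and measurements per flicker cycle implied by figs. 13 to 15 counts both endpoints of the cycle; a sketch under that assumption:

```python
def measurements_per_cycle(sampling_hz, flicker_hz=100):
    # Number of luminance-difference measurement points within one cycle of a
    # blinking light source, endpoints included: 200 Hz -> 3, 400 Hz -> 5,
    # 800 Hz -> 9 for a 100 Hz flicker, matching the figures.
    return sampling_hz // flicker_hz + 1
```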

Next, the operation of the relevant portions of the imaging apparatus 10 according to the technique of the present invention will be described.

When receiving an instruction to start the image capturing process from the operation unit 54, the CPU52 executes the image capturing process in accordance with the image capturing program 60A. Hereinafter, the image capturing process executed by the CPU52 will be described with reference to fig. 16. For convenience of explanation, it is assumed that the blinking light source is a fluorescent lamp, that the subject is captured by the imaging device 10 only under the illumination of the fluorescent lamp, and that a still image representing the subject is acquired. It is also assumed that both AF and AE are executed when the release button 25 is half-pressed.

In the image capturing process shown in fig. 16, first, in step S100, the CPU52 controls the imaging element 20 and the display control unit to start displaying the through image on the display device, and then the process proceeds to step S102. In this step S100, the CPU52 causes the imaging element 20 to start shooting at the 1 st frame rate, and outputs a through image obtained by shooting by the imaging element 20 to the display control section. The display control unit causes the display device to display the input live preview image.

In step S102, the CPU52 determines whether the release button 25 is in the half-pressed state. In step S102, it is determined as yes when the release button 25 is half-pressed, and the image capturing process proceeds to step S104. In step S102, if the release button 25 is not half-pressed, it is determined as no, and the image capturing process proceeds to step S112.

In step S104, the CPU52 determines whether AF and AE are being executed. In step S104, it is determined as no if AF and AE are not being executed, and the image capturing process proceeds to step S106. In step S104, if AF and AE are being executed, it is determined as yes, and the image capturing process proceeds to step S108.

In step S106, the CPU52 starts executing AF and AE, and the image pickup process shifts to step S108.

In step S108, the CPU52 determines whether an image capturing process end condition, which is a condition for ending the image capturing process, is satisfied. The imaging process termination condition may be, for example, a condition in which an instruction to terminate the imaging process is received by the touch panel 42 and/or the operation unit 54. The imaging process termination condition may be, for example, a condition that the time determined as no in step S112 after the start of the imaging process exceeds a predetermined time. The "predetermined time" referred to herein means, for example, 5 minutes. The predetermined time may be a fixed value or a variable value that can be changed in accordance with an instruction from the user.

In step S108, if the imaging process termination condition is satisfied, it is determined as yes, and the imaging process proceeds to step S110. In step S108, if the imaging process termination condition is not satisfied, the determination is no, and the imaging process proceeds to step S102.

In step S112, the CPU52 determines whether the release button 25 is in the fully pressed state. In step S112, when the release button 25 is fully pressed, the determination is yes, and the image capturing process proceeds to step S114. In step S112, if the release button 25 is not fully pressed, that is, if the release button 25 is not pressed, it is determined as no, and the image capturing process proceeds to step S122.

In step S122, the CPU52 determines whether AF and AE are being executed. In step S122, it is determined as no if AF and AE are not being executed, and the image capturing process proceeds to step S108. In step S122, if AF and AE are being executed, it is determined as yes, and the image capturing process proceeds to step S124.

In step S124, the CPU52 ends execution of AF and AE, and the image pickup process shifts to step S108.

In step S114, the CPU52 ends AF and AE, and the image pickup process then proceeds to step S116.

In step S116, the CPU52 instructs the imaging element 20 to start execution of the strobe-avoided imaging process, and the imaging process then shifts to step S118.

When the CPU52 instructs the imaging element 20 to start executing the strobe-avoided imaging process by executing the process of step S116, the processing circuit 94 of the imaging element 20 executes the strobe-avoided imaging process.

Here, strobe avoidance imaging processing executed by the processing circuit 94 is explained with reference to fig. 17. For convenience of explanation, the following description will be made on the assumption that the strobe-avoided imaging process is started in a state where the low frame rate is adopted as the 1 st frame rate.

In the strobe-avoided imaging process shown in fig. 17, first, in step S150, the processing circuit 94 determines whether or not the strobe-avoided imaging process is executed 1 st time after the imaging process shown in fig. 16 is started to be executed. In other words, the processing circuit 94 determines whether or not the strobe avoidance imaging process is executed for the first time after the imaging process shown in fig. 16 is started to be executed.

In step S150, when the strobe-avoided imaging process is executed 1 st time after the imaging process shown in fig. 16 is started, it is determined as yes, and the strobe-avoided imaging process proceeds to step S152. In step S150, if the strobe-avoided imaging process is not executed the 1 st time after the imaging process shown in fig. 16 is started, that is, if it is executed the 2 nd time or later, it is determined as no, and the strobe-avoided imaging process proceeds to step S180.

In step S152, the processing circuit 94 acquires power supply frequency information, and the strobe-avoided imaging process shifts to step S154. The power supply frequency information is information on the frequency of the commercial power supply used as the source of the electric power supplied to the fluorescent lamp. The power supply frequency information is received by, for example, the touch panel 42 and/or the operation unit 54, and acquired by the processing circuit 94 via the CPU52 and the device control unit 74.

In step S154, the processing circuit 94 calculates a strobe generation cycle from the power supply frequency information acquired in step S152, and then the strobe-avoided imaging process shifts to step S156. In step S154, for example, 10ms is calculated as a strobe generation cycle when the frequency indicated by the power supply frequency information is 50Hz, and 8.33ms is calculated as a strobe generation cycle when the frequency indicated by the power supply frequency information is 60 Hz.
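The computation of step S154 can be sketched as follows; it relies on the fact that a fluorescent lamp flickers at twice the commercial power supply frequency:

```python
def strobe_generation_cycle_s(power_supply_hz):
    # The lamp brightens on both half-waves of the AC cycle, so the strobe
    # generation cycle is half the mains period:
    # 50 Hz -> 10 ms, 60 Hz -> about 8.33 ms (as in step S154).
    return 1.0 / (2 * power_supply_hz)
```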

In step S156, the processing circuit 94 determines a high frame rate of the 1 st frame rate from the strobe generation period calculated in step S154. That is, the processing circuit 94 determines one of the 1 st and 2 nd high frame rates as the high frame rate of the 1 st frame rate.

In step S156, when the strobe generation cycle calculated in step S154 is 10 ms, the 1st high frame rate of the 1st and 2nd high frame rates is determined as the high frame rate for the 1st frame rate. When the strobe generation cycle calculated in step S154 is 8.33 ms, the 2nd high frame rate of the 1st and 2nd high frame rates is determined as the high frame rate for the 1st frame rate.

In the next step S160, the processing circuit 94 changes the 1 st frame rate from the low frame rate to the high frame rate determined in step S156, and then the strobe-avoided image capturing process shifts to step S162.

In step S162, the processing circuit 94 calculates the above-described processing time, and the strobe-avoided imaging process proceeds to step S164.

In step S164, the image processing circuit 94C acquires the luminance from the latest captured image data stored in the memory 96, and then proceeds to step S166. Here, the "latest captured image data" is an example of the "2 nd captured image data" according to the technique of the present invention.

In step S166, the image processing circuit 94C acquires the luminance from the captured image data of the frame immediately preceding the latest captured image data among the captured image data of the plurality of frames stored in the memory 96, and the strobe-avoided imaging process shifts to step S168. Here, the "captured image data of the frame immediately preceding the latest captured image data" is an example of the "1 st captured image data" according to the technique of the present invention.

In step S168, the image processing circuit 94C calculates the luminance difference, and then the strobe avoidance imaging process shifts to step S170. The luminance difference calculated by performing this step S168 is the subtraction result of subtracting the luminance acquired in step S164 from the luminance acquired in step S166. Therefore, the luminance difference becomes a positive value when "the luminance acquired in step S166 > the luminance acquired in step S164", and becomes a negative value when "the luminance acquired in step S166 < the luminance acquired in step S164". When the luminance obtained in step S166 is equal to the luminance obtained in step S164, the luminance difference is "0".

In step S170, the image processing circuit 94C determines whether or not the processing time calculated in step S162 has elapsed after the execution of step S162 is finished. In step S170, if the processing time calculated in step S162 has not elapsed, the determination is no, and the strobe-avoided imaging process proceeds to step S164. In step S170, when the processing time calculated in step S162 has elapsed, the determination is yes, and the strobe-avoided imaging process proceeds to step S172.

In step S172, the image processing circuit 94C terminates the calculation of the luminance difference and calculates the light amount peak period, and the strobe-avoided imaging process then shifts to step S174. The light amount peak refers to the peak of the luminance of the captured image, and the light amount peak period refers to the period that specifies the strobe-influence-avoiding timing. In the example shown in fig. 12, the time "time interval B1 + time interval B2" of step S3 corresponds to one cycle of the light amount peak period.

In step S172, terminating the calculation of the luminance difference means that the measurement detection process is terminated on the condition that the luminance difference has changed from a positive value to a negative value 2 times. For example, as shown in fig. 18, when 3 positions where the luminance difference becomes "0" are detected in the luminance difference period characteristic graph G, the calculation of the luminance difference is terminated. Fig. 18 shows an example in which the 3 positions where the luminance difference becomes "0" are detected in the shortest period and an example in which they are detected in the longest period. In the example shown in fig. 18, the light amount peak period is specified by the time interval between the two "▼" positions.

For example, as shown in fig. 21, when the light amount peak period is calculated by executing the processing of step S172, the frame for display thereafter is output from the output circuit 94D to the subsequent stage circuit 90 in accordance with the latest light amount peak period calculated in step S172. When the display frame is output from the output circuit 94D to the subsequent circuit 90, the display frame is displayed on the display device as a through image under the control of the CPU 52. Here, the "display frame" refers to output image data for a through image obtained by processing the captured image data read from the memory 96 by the image processing circuit 94C.

In step S174, the processing circuit 94 determines whether or not the timing of the light amount peak of the captured image has come, based on the light amount peak cycle calculated in step S172. Here, the "timing of the peak of the light amount of the captured image" is an example of the "strobe-influence-avoiding timing" according to the technique of the present invention.

In step S174, when the time point of the light amount peak of the captured image has not come, the determination is no, and the determination in step S174 is performed again. In step S174, when the time point of the light amount peak of the captured image has come, the determination is yes, and the strobe-avoided imaging process proceeds to step S176.

As an example, as shown in fig. 19, it is generally known that the influence of the rolling deviation of the electronic shutter is larger than that of the mechanical shutter. Therefore, in the imaging element 20 as well, when "the rolling deviation time of the electronic shutter > the rolling deviation time of the mechanical shutter 72", 2 line strobes occur in the screen of the display device, as shown in fig. 20 as an example. The same applies to the case where "strobe generation period ≈ rolling deviation time".

Therefore, in step S176, the photoelectric conversion element driving circuit 94A sets the clipping area according to the strobe-influence-avoiding timing. The photoelectric conversion element driving circuit 94A then captures the subject in the clipping area set within the imaging area of the photoelectric conversion element 92, after which the process shifts to step S178. In the present embodiment, the output image data based on the captured image data obtained by executing the processing of step S176 is still image data, but the technique of the present invention is not limited to this. For example, image data for output based on the captured image data obtained by executing the processing of step S176 may of course be used as moving image data for recording.

The clipping area is determined by a clipping coefficient determined in accordance with the strobe-influence-avoiding timing. In this step S176, the clipping coefficient is derived from the clipping coefficient derivation table 98 stored in the storage circuit 94E. The clipping coefficient derivation table 98 is a table in which the light amount peak period and the clipping coefficient are associated with each other. The clipping coefficient in the clipping coefficient derivation table 98 is a coefficient that specifies, within the imaging area of the photoelectric conversion element 92, an imaging area in which the influence of line strobing is avoided.

The processing circuit 94 derives the clipping coefficient corresponding to the light amount peak period calculated in step S172 from the clipping coefficient derivation table 98. The photoelectric conversion element driving circuit 94A sets a clipping area in which the influence of line strobing is avoided in accordance with the clipping coefficient derived from the clipping coefficient derivation table 98, and captures the subject in the set clipping area. By executing the processing of step S176, as shown in fig. 20 as an example, the image area corresponding to the clipping area on the display device is positioned between the line strobes, and line strobes can be suppressed from appearing in the screen.
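Purely as an illustrative sketch (the contents of the clipping coefficient derivation table 98 are not disclosed in the text; the proportional model, function name, and parameters below are all assumptions), a clipping area of rows whose readout falls within one bright interval between two line strobes could be chosen like this:

```python
def clipping_rows(peak_period_s, line_readout_s, total_rows):
    """Return (first_row, last_row) of an assumed clipping area.

    peak_period_s: light amount peak period calculated in step S172 (seconds)
    line_readout_s: rolling-shutter readout time per line (seconds)
    total_rows: number of rows in the imaging area of the sensor
    """
    # Rows read out during one bright half of the flicker cycle.
    rows_per_bright = round((peak_period_s / 2) / line_readout_s)
    center = total_rows // 2
    first = max(0, center - rows_per_bright // 2)
    last = min(total_rows - 1, first + rows_per_bright - 1)
    return first, last
```

When the bright interval covers the whole sensor, the sketch simply returns the full row range, corresponding to the case where no clipping is needed.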

That is, by executing the processing of this step S176, the subject is captured, at a timing that avoids the generation timing of line strobing, in the clipping area where line strobing does not occur on the display device.

In step S178, the processing circuit 94 generates still image data from the captured image data obtained by the capturing in step S176, outputs the generated still image data to the subsequent-stage circuit 90, and the image capturing process proceeds to step S179. That is, in this step S178, the photoelectric conversion element driving circuit 94A outputs the captured image data captured in the clipping area within the imaging area of the photoelectric conversion element 92 to the AD conversion circuit 94B. The AD conversion circuit 94B digitizes the input captured image data and stores it in the memory 96. The image processing circuit 94C reads the captured image data from the memory 96, processes the read captured image data to generate still image data, and outputs the generated still image data to the output circuit 94D. The output circuit 94D outputs the image data for output to the I/F56 of the subsequent-stage circuit 90 at the 2nd frame rate. The still image data is an example of the "image data for output" according to the technique of the present invention.

In step S179, the processing circuit 94 changes the 1 st frame rate from the high frame rate to the low frame rate, and then ends the strobe avoidance imaging process.

Since the clock of the imaging device 10 drifts under the influence of temperature changes, the actual strobe generation period based on the frequency of the commercial power supply may deviate from the light amount peak period calculated in step S172. That is, as shown in fig. 23 as an example, the flicker characteristics of the blinking light source may differ after several hours. When the light source flicker characteristics deviate in this manner, if the display frame output from the imaging element 20 is displayed on the display device as a through image as it is, a line strobe may occur.

Therefore, in step S180, the processing circuit 94 determines whether or not the luminance difference confirmation timing has come. The luminance difference confirmation timing is a timing predetermined as a timing for confirming whether or not the luminance difference at the present time deviates from the luminance difference cycle characteristic graph G based on the luminance differences obtained by executing the processing of the previous steps S164 to S170.

The predetermined timing is derived in advance, from a test on an actual machine or a computer simulation, as the time at which the clock of the imaging device 10 drifts due to temperature changes and a line strobe appears on the screen of the display device. Examples of the luminance difference confirmation timing include the time at which a 1st predetermined time (for example, 30 minutes) has elapsed since the processing of the previous step S168 was executed, and the time at which a 2nd predetermined time (for example, 1 hour) has elapsed after the imaging device 10 was powered on.

In step S180, if the luminance difference check time is not reached, it is determined as no, and the strobe-avoided imaging process proceeds to step S174. If the luminance difference check time has come, the determination is yes in step S180, and the strobe-avoided imaging process proceeds to step S181. The case where the luminance difference check time has come is an example of the "case where the predetermined condition is satisfied" according to the technique of the present invention.

In step S181, the processing circuit 94 changes the 1 st frame rate from the low frame rate to the high frame rate determined in step S156, and then the strobe-avoided imaging process shifts to step S182.

In step S182, the image processing circuit 94C acquires the luminance from the latest captured image data stored in the memory 96, and then proceeds to step S184.

In step S184, the image processing circuit 94C acquires the luminance from the captured image data of the frame preceding the latest captured image data among the captured image data of the plurality of frames stored in the memory 96, and the strobe-avoided imaging process then shifts to step S186.

In step S186, the image processing circuit 94C calculates a luminance difference, and then the strobe-avoided imaging process shifts to step S188. The luminance difference calculated by performing the process of this step S186 is the subtraction result of subtracting the luminance acquired in step S182 from the luminance acquired in step S184.

In step S188, the image processing circuit 94C determines whether or not the luminance difference calculated in step S186 deviates from the luminance difference cycle characteristic graph G based on the luminance difference obtained by performing the processing of the previous step S164 to step S170.

In step S188, if the luminance difference calculated in step S186 does not deviate from the luminance difference cycle characteristic graph G based on the luminance differences obtained by executing the processing of the previous steps S164 to S170, the determination is no, and the strobe-avoided imaging process proceeds to step S174. That is, the determination is no if the luminance difference calculated in step S186 lies on the luminance difference cycle characteristic graph G based on the luminance differences obtained by executing the processing of the previous steps S164 to S170.

In step S188, if the luminance difference calculated in step S186 deviates from the luminance difference cycle characteristic graph G based on the luminance differences obtained by executing the processing of the previous steps S164 to S170, the determination is yes, and the strobe-avoided imaging process proceeds to step S164. That is, the determination is yes if the luminance difference calculated in step S186 does not lie on the luminance difference cycle characteristic graph G based on the luminance differences obtained by executing the processing of the previous steps S164 to S170.
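The deviation test of step S188 amounts to comparing the newly measured luminance difference with the value predicted from the previously built graph G; a minimal sketch, where the tolerance is an assumed parameter not specified in the text:

```python
def deviates_from_graph(measured_diff, predicted_diff, tolerance):
    # True when the current luminance difference has drifted away from the
    # luminance difference cycle characteristic graph G, which triggers
    # re-execution of the measurement detection processing (steps S164-S170).
    return abs(measured_diff - predicted_diff) > tolerance
```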

In the image pickup processing shown in fig. 16, the CPU52 determines in step S118 whether or not the still image data output from the output circuit 94D by executing the processing of step S178 of the strobe avoidance image pickup processing shown in fig. 17 is input to the I/F of the subsequent-stage circuit 90. In step S118, when the still image data is not input to the I/F56 of the subsequent circuit 90, the determination is no, and the determination in step S118 is performed again. In step S118, when the still image data is input to the I/F56 of the subsequent stage circuit 90, it is determined as yes, and the image capturing process proceeds to step S120.

In step S120, the CPU52 acquires still image data and executes various processes, and then shifts to step S108. The "various processes" herein include, for example, a process of outputting still image data to the image processing unit 62. When the still image data is output to the image processing unit 62, for example, the image processing unit 62 performs signal processing on the still image data and outputs the still image data subjected to the signal processing to an external device (not shown) via the external I/F63. Here, examples of the "external device" include a memory card, an SSD (Solid State Drive), a USB (Universal Serial Bus) memory, a PC (Personal Computer), and a server.

In step S110 shown in fig. 16, the CPU52 ends the display of the through image, and then ends the image capturing process.

If the imaging process shown in fig. 16 and the strobe-avoided imaging process shown in fig. 17 are executed, the sequence process shown in fig. 22 is executed as an example.

Here, the sequence processing shown in fig. 22 will be described. First, in a state where the release button 25 is not pressed, a through image is displayed on the display device, and the 1 st process is performed at a low frame rate in the processing circuit 94 of the imaging element 20.

When the release button 25 is pressed to be in a half-pressed state, AF and AE are executed. Also, even if the release button 25 is in the half-pressed state, the through image is displayed on the display device, and the 1 st process is performed at a low frame rate in the processing circuit 94 of the imaging element 20.

When the release button 25 is fully pressed, the processing circuit 94 executes the measurement detection process in addition to the 1st process. The 1st process and the measurement detection process are executed at the high frame rate determined in the process of step S156. Even if the release button 25 is fully pressed, the screen of the display device is not blacked out, and the through image is displayed on the screen of the display device.

When the imaging element 20 finishes the measurement detection process, the imaging element starts imaging after waiting for the light amount peak to be reached. The waiting until the light amount peak is reached means waiting until the determination of step S174 of the strobe-avoided imaging process shown in fig. 17 is yes, for example.

If the photographing of the imaging element 20 is ended, the through image is displayed on the display device, and the 1 st process is performed at a low frame rate in the processing circuit 94 of the imaging element 20.

As described above, in the imaging apparatus 10, the imaging element 20 incorporates the processing circuit 94 and the memory 96. In the imaging apparatus 10, a subject is imaged at the 1st frame rate, and captured image data obtained by the imaging is stored in the memory 96. The image processing circuit 94C then performs processing based on the captured image data stored in the memory 96. Image data for output based on the captured image data is output to the subsequent-stage circuit 90 at the 2nd frame rate by the output circuit 94D. The 1st frame rate is determined in accordance with the strobe generation cycle, and the strobe-influence-avoiding timing is detected by the processing circuit 94 from the captured image data of a plurality of frames. Accordingly, in the imaging apparatus 10, imaging is performed at the strobe-influence-avoiding timing, at the 1st frame rate determined in accordance with the strobe generation cycle, and therefore imaging that avoids the influence of the strobe can be realized.

In the imaging apparatus 10, the processing time of the measurement detection processing executed by the processing circuit 94 is determined according to the strobe generation cycle. Thus, the imaging device 10 can appropriately determine the processing time required in the measurement detection processing, compared to the case where the luminance difference between frames of the captured image data is not used.

In the imaging apparatus 10, the timing at which the luminance difference changes from a positive value to a negative value is set as the strobe-influence-avoiding timing. Accordingly, the imaging apparatus 10 can detect the timing at which the captured image is brightest as the strobe-influence-avoiding timing.

Then, in the imaging device 10, the measurement detection process is terminated under the condition that the luminance difference changes from a positive value to a negative value twice. Accordingly, the imaging device 10 can minimize the processing time required for the measurement detection process.
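The peak detection and the termination condition described above can be sketched as follows, assuming per-frame mean luminance values as input (the actual processing circuit 94 operates on captured image data stored in the memory 96):

```python
def find_brightness_peaks(frame_luminance, max_peaks=2):
    """Find frames at which the frame-to-frame luminance difference
    changes from a positive to a negative value (brightness peaks).

    Scanning stops once `max_peaks` such sign changes have been seen,
    mirroring the termination condition of the measurement detection
    process (the luminance difference changes from positive to
    negative twice)."""
    diffs = [b - a for a, b in zip(frame_luminance, frame_luminance[1:])]
    peaks = []
    for i in range(1, len(diffs)):
        if diffs[i - 1] > 0 and diffs[i] < 0:
            peaks.append(i)  # frame i is a local brightness maximum
            if len(peaks) == max_peaks:
                break
    return peaks
```

The interval between the two detected peaks would give the light amount peak period, which the excerpt later uses, for example, to derive the clipping coefficient.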

In the imaging apparatus 10, after the measurement detection processing is performed, the 1 st frame rate is set to a frame rate lower than the frame rate in the measurement detection processing until the determination in step S180 included in the strobe-avoided imaging processing is yes. Accordingly, the imaging apparatus 10 can reduce power consumption as compared with a case where the frame rate in the measurement detection process is applied to a process other than the measurement detection process.

In the imaging apparatus 10, when it is determined as yes in step S180 included in the strobe-avoided imaging process, the processing circuit 94 restarts the execution of the measurement detection process. Therefore, the imaging apparatus 10 can reduce power consumption as compared with a case where the measurement detection process is always executed.

Then, in the imaging apparatus 10, when it is determined as yes in step S180 and as yes in step S188 included in the strobe-avoided imaging process, the measurement detection process is restarted. Thus, the imaging apparatus 10 can avoid performing unnecessary measurement detection processing.

Then, in the imaging apparatus 10, when the luminance of the captured image reaches the peak value, the image is captured (steps S174 and S176). Therefore, the imaging device 10 can obtain a brighter captured image than when capturing images at a different timing from the timing at which the luminance of the captured image reaches the peak.

In the imaging device 10, the output image data obtained by imaging at the timing of the light amount peak is output from the output circuit 94D to the subsequent circuit 90. Accordingly, the imaging apparatus 10 can output image data for output that avoids the influence of the strobe.

In the imaging apparatus 10, the display control unit performs control to display an image based on the output image data on the display device. Accordingly, the imaging apparatus 10 can display an image in which the influence of the strobe is avoided on the display apparatus.

Then, in the imaging apparatus 10, imaging is performed in a trimming area that is an imaging area selected according to a trimming coefficient determined according to the strobe-influence-avoiding timing. Therefore, the imaging apparatus 10 can suppress line strobing in the image, as compared with the case where imaging is performed in all the imaging regions of the photoelectric conversion elements 92.

In the imaging apparatus 10, the strobe generation cycle is determined in advance as a generation cycle of a strobe caused by a blinking light source that blinks by supplying an alternating current from a commercial power supply. Thus, the imaging apparatus 10 can reduce the number of steps for determining the strobe generation cycle, as compared with a case where the strobe generation cycle is not determined in advance.

In the imaging apparatus 10, 10ms is calculated as the strobe generation cycle when the frequency indicated by the power supply frequency information is 50Hz, and 8.33ms is calculated as the strobe generation cycle when the frequency indicated by the power supply frequency information is 60Hz. When the strobe generation cycle is 10ms, the high frame rate used as the 1st frame rate is determined to be the 1st high frame rate, of the 1st high frame rate and the 2nd high frame rate. When the strobe generation cycle is 8.33ms, the high frame rate used as the 1st frame rate is determined to be the 2nd high frame rate, of the 1st high frame rate and the 2nd high frame rate. That is, the 1st frame rate increases as the strobe generation cycle shortens. Thus, the imaging apparatus 10 can improve the detection accuracy of the strobe-influence-avoiding timing, compared to the case where the 1st frame rate is fixed.
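The relationship between the power supply frequency and the strobe generation cycle follows from the fact that an AC-driven light source flickers at twice the mains frequency. A sketch of this determination, in which the concrete frame rate values are assumptions (the excerpt gives no numeric values for the 1st and 2nd high frame rates):

```python
def strobe_period_ms(mains_hz):
    # An AC-driven blinking light source flickers at twice the mains
    # frequency, so the strobe generation cycle in milliseconds is:
    return 1000.0 / (2 * mains_hz)

def select_high_frame_rate(mains_hz, first_high=1000, second_high=1200):
    # first_high / second_high stand in for the 1st and 2nd high frame
    # rates; their actual values are not given in this excerpt. The
    # shorter the strobe generation cycle, the higher the selected rate.
    return first_high if strobe_period_ms(mains_hz) >= 10.0 else second_high
```

With these assumptions, 50Hz yields a 10ms cycle and the 1st high frame rate, while 60Hz yields an 8.33ms cycle and the 2nd high frame rate, matching the text above.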

In the imaging apparatus 10, the captured image data is image data obtained by capturing an object in a rolling shutter manner. Thus, the imaging apparatus 10 can avoid the influence of the strobe generated when the subject is imaged in the rolling shutter method.

In addition, in the image pickup apparatus 10, a lamination type imaging element in which a processing circuit 94 and a memory 96 are laminated on the photoelectric conversion element 92 is adopted as the imaging element 20. Thus, the imaging apparatus 10 can improve the detection accuracy of the strobe-influence-avoiding timing, as compared with the case of using an imaging element of a type in which the storage section is not stacked on the photoelectric conversion element 92.

In the above-described embodiment, the difference between the average luminance of the entire captured image of one frame and the average luminance of the entire captured image of the other frame in adjacent frames is exemplified as the luminance difference, but the technique of the present invention is not limited to this. When the entire captured image of one frame is used to calculate the luminance difference, the luminance of the entire captured image can change greatly between frames due to the movement of the subject and/or hand shake, and the accuracy of the luminance difference calculation is affected accordingly. For example, as shown in fig. 24, the luminance difference Cn and the luminance difference Cn+1 are affected by the movement of the subject and/or hand shake. In this case, the accuracy of the luminance difference cycle characteristic graph G is lowered, and accordingly, the detection accuracy of the strobe-influence-avoiding timing is also lowered.

In order to reduce the influence of the movement of the subject and/or camera shake, the luminance difference may be calculated for local regions corresponding to each other between frames of the captured image data. For example, as shown in fig. 25, the captured image of one frame is divided in the vertical direction into two divided images, i.e., a 1st divided image and a 2nd divided image, and the luminance difference between divided images at corresponding positions is calculated between adjacent frames. Then, of the 1st divided image and the 2nd divided image, the luminance difference of the divided image that is less affected by the movement of the subject and/or camera shake may be used. In the example shown in fig. 25, the luminance difference of the 2nd divided image is less affected by the movement of the subject and/or hand shake than that of the 1st divided image, and therefore the luminance difference of the 2nd divided image is adopted.
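As a sketch of this division approach, the per-region luminance differences between two frames can be computed as follows. Frames are represented here as 2-D lists of pixel luminance values; how the divided image "less affected" by motion is selected is left to the caller, since the excerpt does not specify that criterion.

```python
def band_luminance_diffs(frame_a, frame_b, n_divisions=2):
    """Split two frames into `n_divisions` bands along the vertical
    direction and return, for each band, the difference of the mean
    luminance between the corresponding bands of the two frames."""
    rows_per_band = len(frame_a) // n_divisions
    diffs = []
    for k in range(n_divisions):
        sl = slice(k * rows_per_band, (k + 1) * rows_per_band)
        mean_a = sum(map(sum, frame_a[sl])) / (rows_per_band * len(frame_a[0]))
        mean_b = sum(map(sum, frame_b[sl])) / (rows_per_band * len(frame_b[0]))
        diffs.append(mean_b - mean_a)
    return diffs
```

With `n_divisions=2` this corresponds to the 1st and 2nd divided images of fig. 25; a larger `n_divisions` corresponds to the finer divisions discussed next.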

The number of divisions of the captured image may be 3 or more, the captured image may be divided in the horizontal direction instead of the vertical direction, or the captured image may be divided in both the vertical and horizontal directions. Increasing the number of divided images in this way is effective when there are factors other than the movement of the subject and/or hand shake that affect the accuracy of the luminance difference calculation. For example, when imaging is performed in an environment where an extremely bright light source is present, it is difficult to calculate the luminance difference between frames in regions where the pixel signals saturate; by increasing the number of divided images, a region for which the luminance difference can be calculated can be secured.

In the above-described embodiment, the case of calculating the luminance difference between adjacent frames has been described, but the technique of the present invention is not limited to this. For example, since captured image data of a plurality of frames is stored in the memory 96, the luminance difference between the luminance of the latest captured image data and the luminance of captured image data obtained two or more frames before the latest captured image data can be calculated. In the example shown in fig. 11, instead of calculating the luminance difference of the captured image data between the 1st frame and the 2nd frame, the luminance difference of the captured image data between the 1st frame and the 3rd frame may be calculated. In this case, the captured image data of the 1st frame is an example of the "1st captured image data" according to the technique of the present invention, and the captured image data of the 3rd frame is an example of the "2nd captured image data" according to the technique of the present invention.
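A minimal sketch of comparing the latest frame with one captured two or more frames earlier, assuming per-frame mean luminance values stand in for the luminance of the captured image data; the frame spacing is a parameter:

```python
def luminance_diff_skip(frame_luminance, skip=2):
    """Difference between the mean luminance of the latest frame and
    that of the frame `skip` frames earlier, e.g. the 3rd frame versus
    the 1st frame when skip=2 (fig. 11)."""
    if len(frame_luminance) <= skip:
        raise ValueError("not enough frames stored")
    return frame_luminance[-1] - frame_luminance[-1 - skip]
```

With `skip=1` this reduces to the adjacent-frame case of the embodiment above.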

In addition, when the luminance difference between the luminance of the latest captured image data and the luminance of captured image data obtained two or more frames before the latest captured image data is calculated, the processing time of the measurement detection process is calculated as at least the time required for the luminance difference to change from a positive value to a negative value twice, as in the above-described embodiment.

Further, although the clipping coefficient derivation table 98 is illustrated in the above embodiment, the technique of the present invention is not limited to this. For example, the clipping coefficient may be calculated using a clipping coefficient derivation expression having the light amount peak period as an independent variable and the clipping coefficient as a dependent variable.

Further, although the line strobe is exemplified in the above embodiment, the technique of the present invention is not limited to this, and the technique of the present invention can be applied to a surface strobe in which flashing is performed on a surface unit basis.

Further, although the blinking light source that blinks at the frequency of the commercial power supply is exemplified in the above embodiment, the technique of the present invention can also be applied when the imaging device 10 performs imaging in an environment illuminated by a blinking light source that blinks regardless of the frequency of the commercial power supply. In this case, step S152 is not required in the strobe-avoided imaging process shown in fig. 17, and instead of the process of step S154, for example, a process of receiving the strobe generation cycle via the touch panel 42 and/or the operation unit 54 may be applied.

In the above embodiment, a mode in which the 1st frame rate is changed from the low frame rate to the high frame rate has been described as an example from the viewpoint of reducing power consumption, but the technique of the present invention is not limited to this.

In the above embodiment, the luminance difference confirmation timing is exemplified as the timing at which the 1st predetermined time or the 2nd predetermined time has elapsed, but the technique of the present invention is not limited to this. For example, the luminance difference confirmation timing may be the timing at which an instruction to confirm the luminance difference is issued by the user via the touch panel 42 and/or the operation unit 54.

In the above-described embodiment, as an example of the "case where the predetermined condition is satisfied" according to the technique of the present invention, a case where the determination in step S180 included in the strobe-avoided imaging process is yes, that is, a case where the luminance difference confirmation timing has come is given, but the technique of the present invention is not limited to this. For example, when a predetermined instruction is received by the touch panel 42 and/or the operation unit 54 as an instruction to start the processing after step S181 of executing the strobe-avoided imaging processing, the processing after step S181 of executing the strobe-avoided imaging processing may be executed. Further, in a case where the still image shooting instruction has reached a predetermined number of times (for example, 200 times) in a state where the power of the image pickup apparatus 10 is turned on, the processing after step S181 may be executed.

In the above-described embodiment, the processing after step S164 is executed again under the condition that the processing of steps S180 to S188 is executed, but the technique of the present invention is not limited to this. For example, the processing after step S164 may be executed again at the time when the 1 st predetermined time has elapsed after the processing of step S168 is executed or at the time when the 2 nd predetermined time has elapsed after the power of the imaging apparatus 10 is turned on. When a predetermined instruction is received by the touch panel 42 and/or the operation unit 54 as an instruction to restart the processing after step S164 in which the strobe-avoided imaging processing is executed, the processing after step S164 in which the strobe-avoided imaging processing is executed again may be executed. Further, in a case where the instruction for still image capturing reaches the above-described predetermined number of times in the state where the power of the image pickup apparatus 10 is turned on, the processing after step S164 may be executed.

Further, although the processing circuit 94 realized by ASIC is illustrated in the above embodiment, the strobe-avoided imaging process may be realized by a software configuration of a computer.

In this case, for example, as shown in fig. 26, a program 600 for causing the computer 20A built in the imaging element 20 to execute the above-described strobe-avoided imaging process is stored in the storage medium 700. The computer 20A includes a CPU20A1, a ROM20A2, and a RAM20A3. Then, the program 600 of the storage medium 700 is installed in the computer 20A, and the CPU20A1 of the computer 20A executes the strobe-avoided imaging process in accordance with the program 600. Here, one CPU is illustrated as the CPU20A1, but the technique of the present invention is not limited to this, and a plurality of CPUs may be used instead of the CPU20A1. That is, the above-described imaging process and/or strobe-avoided imaging process may be executed by one processor or by a plurality of physically separate processors.

As an example of the storage medium 700, an arbitrary portable storage medium such as an SSD or a USB memory can be given.

The program 600 may be stored in a storage unit such as another computer or a server device connected to the computer 20A via a communication network (not shown), and the program 600 may be downloaded in response to a request from the imaging device 10. In this case, the downloaded program 600 is executed by the computer 20A.

Also, the computer 20A may be provided outside the imaging element 20. In this case, the computer 20A may control the processing circuit 94 in accordance with the program 600.

As hardware resources for executing the various processes described in the above embodiments, the various processors shown below can be used. Here, the various processes described in the above embodiments include the imaging process and the strobe-avoided imaging process. Examples of the processor include a CPU, which is a general-purpose processor that functions as a hardware resource for executing the various processes according to the technique of the present invention by executing a program, that is, software. Examples of the processor also include a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing a specific process, such as an FPGA, a PLD, or an ASIC. A memory is built in or connected to every processor, and every processor executes the various processes by using the memory.

The hardware resources for executing the various processes according to the technique of the present invention may be constituted by one of these various processors, or may be constituted by a combination of two or more processors of the same kind or different kinds (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). Also, the hardware resource that executes various processes related to the technique of the present invention may be one processor.

As an example of configuration with one processor, first, there is a mode in which one processor is configured by a combination of one or more CPUs and software, as typified by a computer such as a client or a server, and this processor functions as a hardware resource that executes the various processes according to the technique of the present invention. Second, there is a mode in which a processor that realizes the functions of the entire system, including the plurality of hardware resources that execute the various processes according to the technique of the present invention, with a single IC chip is used, as typified by an SoC (System-on-a-chip). As described above, the various processes according to the technique of the present invention are realized by using one or more of the various processors described above as hardware resources.

As the hardware configuration of these various processors, more specifically, a circuit in which circuit elements such as semiconductor elements are combined can be used.

In the above-described embodiment, the interchangeable lens camera is exemplified as the imaging device 10, but the technique of the present invention is not limited to this. For example, the technique of the present invention may be applied to the smart device 900 shown in fig. 27. As an example, the smart device 900 shown in fig. 27 is an example of the imaging apparatus according to the technique of the present invention. The imaging element 20 described in the above embodiment is mounted on the smart device 900. Even the smart device 900 configured in this way achieves the same operations and effects as those of the imaging device 10 described in the above embodiment. The technique of the present invention is not limited to the smart device 900, and can also be applied to a PC or a wearable terminal device.

In the above-described embodiment, the 1 st display 40 and the 2 nd display 80 are exemplified as the display devices, but the technique of the present invention is not limited to this. For example, a separate display attached to the image pickup apparatus main body 12 may be used as the "display portion" according to the technique of the present invention.

The imaging process and the strobe-avoiding imaging process described in the above embodiments are only examples. Accordingly, needless to say, unnecessary steps may be deleted, new steps may be added, or the processing order may be changed without departing from the scope of the invention.

The above descriptions and drawings are detailed descriptions of the technical aspects of the present invention, and are only examples of the technical aspects of the present invention. For example, the description about the above-described structure, function, operation, and effect is a description about an example of the structure, function, operation, and effect of the portion relating to the technology of the present invention. Therefore, needless to say, unnecessary portions may be deleted, new elements may be added, or replacement may be made to the above-described description and the illustrated contents without departing from the scope of the present invention. In order to avoid complication and to facilitate understanding of the portions relating to the technology of the present invention, descriptions related to technical common knowledge and the like, which do not require any particular description in terms of enabling implementation of the technology of the present invention, are omitted from the above descriptions and drawings.

In the present specification, "A and/or B" has the same meaning as "at least one of A and B". That is, "A and/or B" means that only A may be present, only B may be present, or a combination of A and B may be present. In the present specification, the same concept as "A and/or B" is also applied to the case where 3 or more items are expressed by connecting them with "and/or".

All documents, patent applications, and technical standards described in the present specification are incorporated by reference in the present specification to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.
