Image capturing apparatus and method of controlling image capturing element

Document No.: 1367521  Publication date: 2020-08-11

Reading note: This technology, "Image capturing apparatus and method of controlling image capturing element", was devised by 阿纳斯·博斯塔曼 and 马场智宏 on 2018-10-15. Its main content is as follows: An object of the present invention is to reduce power consumption of an image capturing element that outputs only a region of interest (ROI) at high resolution. In a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction, the image capturing element performs image capturing at high resolution on a first pixel row including a predetermined area, and performs image capturing at low resolution on the other, second pixel rows. The first image processing unit generates an image of the predetermined area based on the image capturing signal of the first pixel row. The pixel addition unit performs inter-pixel addition processing on the image capturing signal of the first pixel row, thereby providing the same resolution as that of the image capturing signal of the second pixel rows. The second image processing unit generates an image of the entire area based on the image capturing signal of the second pixel rows and the image capturing signal of the first pixel row on which the addition processing has been performed.

1. An imaging apparatus comprising:

an imaging element that performs imaging at a predetermined resolution on a first pixel row including a predetermined area and performs imaging at a resolution lower than the predetermined resolution on a second pixel row other than the first pixel row in a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction;

a first image processing unit that generates an image of the predetermined area based on an imaging signal of the first pixel row;

a pixel addition unit that performs addition processing between pixels on the imaging signals of the first pixel row so that the resolution is the same as that of the imaging signals of the second pixel row; and

a second image processing unit that generates an image of the entire area based on the imaging signals of the second pixel row and the imaging signals of the first pixel row subjected to the addition processing.

2. The imaging apparatus of claim 1, further comprising:

a motion detection processing unit that detects motion from a change in a time series of images of the entire area; and

a control unit that controls the imaging element to image the first pixel row while making the region in which the motion is detected the predetermined region.

3. The imaging apparatus as set forth in claim 2,

wherein the control unit controls the imaging element to increase a frame rate when transitioning from a state in which the motion is not detected to a state in which the motion is detected.

4. The imaging apparatus as set forth in claim 2,

wherein the control unit controls the imaging element to change a frame rate according to a moving speed of a region in which the motion is detected.

5. The imaging apparatus as set forth in claim 2,

wherein, in a case where a plurality of moving bodies different in moving speed are detected in the image of the entire region, the control unit controls the imaging element to image the first pixel row while making a region including the moving bodies the predetermined region.

6. The imaging apparatus of claim 2, further comprising:

an exposure evaluation value generation unit that generates an exposure evaluation value based on the area in which the motion is detected in the image of the entire area,

wherein the control unit controls the imaging element to perform exposure based on the exposure evaluation value.

7. The imaging apparatus as set forth in claim 2,

wherein, in a state in which the motion is detected, the imaging element alternately repeats a first frame period in which imaging is performed on the first pixel row at the predetermined resolution and imaging is not performed on the second pixel row and a second frame period in which imaging is performed on all pixel rows at a resolution lower than the predetermined resolution.

8. The imaging apparatus as set forth in claim 2,

wherein the imaging element performs imaging of the first pixel row at the predetermined resolution only in a state where the motion is detected.

9. The imaging apparatus of claim 1, further comprising:

an output processing unit that outputs the image of the predetermined area generated by the first image processing unit.

10. A control method of an imaging element that performs imaging with a predetermined resolution on a first pixel row including a predetermined area and performs imaging with a resolution lower than the predetermined resolution on a second pixel row other than the first pixel row in a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction, the control method comprising:

a first image processing process of generating an image of a predetermined area based on the imaging signal of the first pixel row;

a pixel addition process of performing addition processing between pixels on the imaging signals of the first pixel row to make the resolution identical to that of the imaging signals of the second pixel row; and

a second image processing process of generating an image of the entire area based on the imaging signals of the second pixel row and the imaging signals of the first pixel row subjected to the addition processing.

Technical Field

The present technology relates to an imaging apparatus. In particular, the present technology relates to an imaging apparatus including an imaging element that takes an image and a control method of the imaging element.

Background

When image processing or the like is performed, the processing is sometimes performed while focusing on a specific area in an image. In this case, the area is referred to as a region of interest (ROI). In an imaging apparatus, it is desirable to reduce power consumption by performing imaging only on such a specific region. Therefore, for example, an imaging apparatus has been proposed that identifies a target object region in a first image obtained by a first imaging element and controls a second imaging element in accordance with the target object region to obtain a second image (see, for example, Patent Document 1).

Disclosure of Invention

Problems to be solved by the invention

In the above-described conventional technique, imaging is controlled in accordance with a target object region. However, since the first image used to identify the target object region and the second image corresponding to the target object region are captured separately, power consumption may not be sufficiently reduced.

The present technology has been achieved in consideration of such a situation, and an object thereof is to reduce power consumption of an imaging element that outputs only a region of interest at high resolution.

Solution to the problem

The present technology has been achieved to solve the above-described problems, and a first aspect of the present technology is an imaging apparatus including: an imaging element that performs imaging at a predetermined resolution on a first pixel row including a predetermined area and performs imaging at a resolution lower than the predetermined resolution on a second pixel row other than the first pixel row in a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction; a first image processing unit that generates an image of the predetermined area based on an imaging signal of the first pixel row; a pixel addition unit that performs addition processing between pixels on the imaging signals of the first pixel row so that the resolution is the same as that of the imaging signals of the second pixel row; and a second image processing unit that generates an image of the entire area based on the imaging signals of the second pixel row and the imaging signals of the first pixel row subjected to the addition processing. This brings about the following effect: power consumption of the imaging element is reduced by performing high-resolution imaging only on the predetermined area for output while capturing the image of the entire area at low resolution.

Furthermore, the first aspect may further include: a motion detection processing unit that detects motion from a change in a time series of images of the entire area; and a control unit that controls the imaging element to image the first pixel row while making the area in which the motion is detected the predetermined area. This brings about the following effect: motion is detected in the image of the entire area captured at low resolution, and the detected region is followed as the predetermined area.

Further, in the first aspect, the control unit may control the imaging element to increase the frame rate when transitioning from a state in which motion is not detected to a state in which motion is detected. This brings about the effect of increasing the frame rate so that region prediction can be performed.

Further, in the first aspect, the control unit may control the imaging element to change the frame rate in accordance with a moving speed of the region in which the motion is detected. This brings about an effect of dynamically changing the frame rate according to the moving speed.

Further, in the first aspect, in the case where a plurality of moving bodies different in moving speed are detected in the image of the entire region, the control unit may control the imaging element to image the first pixel row while making the region including the moving bodies a predetermined region. This brings about an effect of collectively processing a plurality of moving bodies.

Further, the first aspect may further include an exposure evaluation value generation unit that generates an exposure evaluation value based on a region in which motion is detected in an image of the entire region, wherein the control unit may control the imaging element to perform exposure based on the exposure evaluation value. This brings about an effect of performing exposure based on the area where the motion is detected.

Further, in the first aspect, in a state where the motion is detected, the imaging element may alternately repeat a first frame period in which imaging is performed on the first pixel row at the predetermined resolution and is not performed on the second pixel row, and a second frame period in which imaging is performed on all pixel rows at a resolution lower than the predetermined resolution. This has the effect of making the control of the pixel rows uniform within each frame period.

Further, in the first aspect, the imaging element may perform imaging of the first pixel row at the predetermined resolution only in a state where the motion is detected. This brings about the effect of event-driven operation triggered by motion detection.

Further, the first aspect may further include an output processing unit that outputs the image of the predetermined area generated by the first image processing unit. This brings about an effect of outputting only an image of a predetermined area.

Effects of the invention

The present technology can have an excellent effect of reducing power consumption of an imaging element that outputs only a region of interest with high resolution. Note that the effect is not necessarily limited to the effect described herein, and may be any effect described in the present disclosure.

Drawings

Fig. 1 is a diagram showing a configuration example of an imaging apparatus in an embodiment of the present technology.

Fig. 2 is a diagram showing a basic operation example of an imaging apparatus in the embodiment of the present technology.

Fig. 3 is a diagram showing a timing example of pixel reading of the imaging device in the embodiment of the present technology.

Fig. 4 is a diagram showing an example of region prediction in the embodiment of the present technology.

Fig. 5 is a diagram showing another timing example of pixel reading of the imaging device in the embodiment of the present technology.

Fig. 6 is a diagram showing an operation example in a case where a plurality of moving bodies are detected in the embodiment of the present technology.

Fig. 7 is a diagram showing an operation example in the case where normal imaging and sensing imaging are performed for different frames in the embodiment of the present technology.

Fig. 8 is a flowchart illustrating an example of determining imaging control for each region in an image in the embodiment of the present technology.

Fig. 9 is a diagram illustrating a first example of pixel driving in the imaging apparatus in the embodiment of the present technology.

Fig. 10 is a diagram showing an example of a pixel circuit assumed in the first example of pixel driving in the embodiment of the present technology.

Fig. 11 is a diagram showing a timing example when reading a block row not including an ROI in the first example of pixel driving in the embodiment of the present technology.

Fig. 12 is a diagram showing a timing example when reading a block row including an ROI in the first example of pixel driving in the embodiment of the present technology.

Fig. 13 is a diagram illustrating a second example of pixel driving in the imaging apparatus in the embodiment of the present technology.

Fig. 14 is a diagram showing an example of a pixel circuit assumed in the second example of pixel driving in the embodiment of the present technology.

Fig. 15 is a diagram showing a timing example when reading a block row including an ROI in the second example of pixel driving in the embodiment of the present technology.

Fig. 16 is a view showing an example of a column driving dividing circuit of a pixel circuit assumed in the second example of pixel driving in the embodiment of the present technology.

Fig. 17 is a diagram showing an example of a pixel circuit assumed in the third example of pixel driving in the embodiment of the present technology.

Fig. 18 is a diagram showing a timing example when reading a block row including an ROI in the third example of pixel driving in the embodiment of the present technology.

Detailed Description

Modes for carrying out the present technology (hereinafter, referred to as embodiments) are described below. The description is given in the following order.

1. Configuration (configuration example of the imaging apparatus)

2. Operation (operation example of the imaging apparatus)

3. Circuit (implementation example of pixel driving)

<1. Configuration>

[Configuration of the imaging apparatus]

Fig. 1 is a diagram showing a configuration example of an imaging apparatus of an embodiment of the present technology. The imaging apparatus includes an imaging element 100, a signal processing unit 200, a system control unit 300, and a driving unit 400.

The imaging element 100 is a pixel array in which pixels for imaging an object are two-dimensionally arranged. The imaging element 100 forms a pixel array by arranging a plurality of pixel rows arranged in a predetermined direction in a direction perpendicular to the predetermined direction. The pixel row is generally a line or line group arranged in the horizontal direction, and exposure and imaging are performed in units of the pixel row. However, the pixel rows may be arranged in the vertical direction to perform exposure and imaging.

The imaging element 100 is driven by the driving unit 400 under the control of the system control unit 300. The imaging element 100 performs normal imaging on a region of interest (ROI) at high resolution, and performs sensing imaging on other regions at low resolution.

Here, normal imaging is imaging for recording, displaying, or the like of an image, and is performed at a high resolution such as 2560 × 1440 pixels. In this normal imaging, a high bit depth of, for example, about 10 bits is assumed at the time of A/D conversion, accompanied by a normal dynamic range of about 60 dB. Further, color imaging such as RGB is assumed, as well as a high frame rate of about 60 fps (frames/second).

In contrast, sensing imaging is imaging for detecting a moving body, and is performed at a low resolution such as 16 × 5 pixels. In this sensing imaging, a low bit depth of, for example, about 8 bits is assumed at the time of A/D conversion, accompanied by a high dynamic range of about 100 dB. Further, monochrome imaging at a low frame rate of about 30 fps is assumed.

Note that, as described later, in the imaging element 100, imaging is performed not in units of regions but in units of pixel rows, so that pixel rows including the ROI are read at high resolution, and then the resolution thereof is reduced by pixel addition.
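The resolution reduction by pixel addition can be sketched as simple binning: summing each group of neighboring pixels into one value. The sketch below is illustrative only; the patent performs this processing in the element's circuitry, and the function name and the 4 × 4 binning factor are assumptions.

```python
import numpy as np

def pixel_add(block: np.ndarray, factor: int) -> np.ndarray:
    """Reduce resolution by summing factor x factor groups of pixels.

    Hypothetical software stand-in for the hardware addition processing.
    """
    h, w = block.shape
    assert h % factor == 0 and w % factor == 0
    # Fold each factor x factor tile into its own axes, then sum the tiles.
    return block.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

# A high-resolution row block binned 4x4 matches the low-resolution rows.
hi = np.ones((8, 8), dtype=np.uint16)
lo = pixel_add(hi, 4)
print(lo.shape)  # (2, 2)
```

Each output element is the sum of 16 input pixels, so the binned rows line up with the rows that were read at low resolution to begin with.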

The signal processing unit 200 performs signal processing on the imaging signal read from the imaging element 100. The signal processing unit 200 includes a high resolution area image processing unit 210, a pixel addition unit 220, a low resolution area image processing unit 230, a motion detection processing unit 240, an exposure evaluation value generation unit 250, and an output processing unit 260.

The high-resolution area image processing unit 210 performs signal processing on the ROI subjected to normal imaging with high resolution by the imaging element 100. Note that the high-resolution area image processing unit 210 is an example of the first image processing unit recited in the claims.

The pixel addition unit 220 performs pixel addition on the pixel row subjected to normal imaging at high resolution by the imaging element 100 to reduce the resolution thereof. The result of the pixel addition unit 220 is supplied to the low resolution area image processing unit 230.

The low resolution area image processing unit 230 performs signal processing on the entire low-resolution area obtained by combining the imaging signal sensed and imaged at low resolution by the imaging element 100 with the imaging signal whose resolution has been reduced by the pixel addition unit 220. Note that the low-resolution area image processing unit 230 is an example of the second image processing unit recited in the claims.

The motion detection processing unit 240 detects motion from a time-series change in the image of the entire area subjected to the signal processing by the low-resolution-area image processing unit 230. The area in which the motion is detected by the motion detection processing unit 240 is supplied to the system control unit 300 and the exposure evaluation value generation unit 250.
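As a rough illustration of detecting motion from a time-series change, a frame difference against a threshold can serve as a stand-in for the processing of unit 240; the function and the threshold value below are assumptions, not the patent's specified method.

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray, thresh: int = 16) -> np.ndarray:
    """Boolean mask of pixels whose change between frames exceeds a threshold.

    Illustrative frame differencing; the threshold is an assumed parameter.
    """
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return diff > thresh

prev = np.zeros((5, 16), dtype=np.uint8)   # 16x5 low-resolution sensing image
curr = prev.copy()
curr[2, 7] = 200                           # a moving body enters one cell
mask = detect_motion(prev, curr)
print(mask.any())  # True
```

The region where the mask is set would become the ROI candidate handed to the control unit.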

The exposure evaluation value generating unit 250 generates an exposure evaluation value (auto exposure (AE) evaluation value) for exposure control. The exposure evaluation value generating unit 250 generates an exposure evaluation value based on the area in which the motion is detected by the motion detection processing unit 240 in the image of the entire area subjected to the signal processing by the low resolution area image processing unit 230. The exposure evaluation value generated by the exposure evaluation value generation unit 250 is supplied to the system control unit 300.
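One plausible form of such an evaluation value is the mean luminance over the motion-detected region; the sketch below is an assumption, since the text does not specify the metric used by unit 250.

```python
import numpy as np

def exposure_evaluation(image: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean luminance over the motion region, used as an AE evaluation value.

    Hypothetical metric; only 'based on the motion-detected area' is stated.
    """
    return float(image[roi_mask].mean())

img = np.full((5, 16), 40, dtype=np.uint8)
mask = np.zeros_like(img, dtype=bool)
mask[2, 6:9] = True
img[2, 6:9] = 220                  # bright moving body in the ROI
ev = exposure_evaluation(img, mask)
print(ev)  # 220.0
```

Driving exposure from the ROI rather than the whole frame keeps the moving body correctly exposed even when the background is much darker or brighter.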

The output processing unit 260 outputs the imaging signal of the ROI subjected to the signal processing by the high-resolution-region image processing unit 210 to subsequent processing. Here, for example, display control for output to a display device, recording control for recording on a recording medium, and the like are assumed as the subsequent processing. Note that although it is assumed here that the imaging signal of the ROI is output, the imaging signal of the image of the entire region subjected to the signal processing by the low-resolution-region image processing unit 230 may be output.

The system control unit 300 controls the entire image forming apparatus. The system control unit 300 controls, for example, imaging a region in which motion is detected by the motion detection processing unit 240 as an ROI with high resolution. Further, for example, the system control unit 300 controls to image based on the exposure evaluation value generated by the exposure evaluation value generation unit 250. Note that the system control unit 300 is an example of a control unit recited in the claims.

The driving unit 400 drives the imaging element 100 under the control of the system control unit 300.

<2. Operation>

[Basic operation]

Fig. 2 is a diagram showing a basic operation example of an imaging apparatus in the embodiment of the present technology.

In this example, a moving body is detected in the object from time T2 to time T4. The imaging element 100 sets the region in which motion is detected by the motion detection processing unit 240 as the ROI, and performs high-resolution pixel reading on the pixel rows including the ROI. In contrast, low-resolution pixel reading is performed on pixel rows that do not include the ROI.

The high-resolution area image processing unit 210 performs signal processing on the ROI subjected to normal imaging at high resolution by the imaging element 100, and the output processing unit 260 outputs the ROI. That is, as shown in the sensor output in the figure, during the period in which motion is detected, a Region (ROI) in which motion is detected is output. Therefore, an image is not output during a period in which no motion is detected. In other words, the imaging element 100 is an event-driven imaging element driven by an event of motion detection.

The pixel addition unit 220 reduces the resolution of the pixel row subjected to normal imaging at high resolution by the imaging element 100. Further, pixel reading of low resolution is performed on the pixel rows not including the ROI. Accordingly, the image of the entire area of the low resolution is supplied to the low resolution area image processing unit 230. The signal processing is performed on the image of the entire area at low resolution by the low resolution area image processing unit 230, and this becomes a processing target of the motion detection by the motion detection processing unit 240. Therefore, even in the case where a new moving body enters a region other than the immediately preceding ROI, a motion can be detected from the image of the entire region. Further, the motion detection processing unit 240 performs moving body tracking after detecting one motion, and follows while predicting a region where the ROI moves in the next image (frame) in time series.

Further, the exposure evaluation value generation unit 250 generates an exposure evaluation value based on the Region (ROI) where the motion is detected. Thus, high resolution imaging can be performed with appropriate exposure in the ROI.

[Frame rate immediately after moving body detection]

Fig. 3 is a diagram showing a timing example of pixel reading of the imaging device in the embodiment of the present technology.

In this example, a moving body is detected from time T3 to time T7 in the object. At time T1 and time T2, no motion is detected in the object, so only the sensing reading 701 of low resolution is performed. At this time, the photographed image is not output.

When a moving body (a person) enters at time T3, the motion detection processing unit 240 detects the entry. When a moving body is detected, sensing imaging is performed at a high frame rate in order to predict the motion of the moving body; that is, the frame rate is doubled between time T3 and time T5. This makes it possible to accurately predict the motion of the moving body and perform the high-resolution normal reading 702 for the ROI at time T5. The image of the ROI captured in this manner is output via the high-resolution area image processing unit 210 and the output processing unit 260.

Subsequently, the moving body is followed and an image of the ROI is output. During this time, the predicted trajectory is corrected based on the deviation from the predicted region. When the moving body is no longer detected at time T8, high-resolution imaging is no longer performed, and an image is not output.

Fig. 4 is a diagram showing an example of region prediction in the embodiment of the present technology.

When following a moving body, the moving direction and moving speed of the region must be predicted. As an example of a method for performing this prediction, first, the center of gravity of the moving body is estimated. The center of gravity may be, for example, the coordinates of the center of the moving body's region, or center coordinates weighted by luminance values.

The center of gravity of the moving body is similarly estimated in the next frame of the time series, and the moving direction and moving speed of the center of gravity are estimated from the difference. From these, the center of gravity and the region of the moving body in the frame after next can be predicted.

In the above example, when the entry of the moving body is detected at time T3, the center of gravity and the region of the moving body at time T5 are predicted based on its centers of gravity at times T3 and T4. As described above, imaging is performed at double the frame rate during the period from time T3 to time T5, so that the moving body can be quickly tracked.
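The prediction described above can be sketched as centroid extraction followed by constant-velocity extrapolation; the helper names and the linear motion model are assumptions, since the patent does not fix a prediction model.

```python
import numpy as np

def centroid(mask: np.ndarray):
    """Unweighted center of gravity (x, y) of a moving-body mask."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def predict_next(c_prev, c_curr):
    """Linearly extrapolate the center of gravity one frame ahead.

    Minimal constant-velocity assumption for following the moving body.
    """
    return (float(2 * c_curr[0] - c_prev[0]), float(2 * c_curr[1] - c_prev[1]))

# Moving body at x=2 then x=5: predicted at x=8 in the next frame.
m1 = np.zeros((4, 12), dtype=bool); m1[1, 2] = True
m2 = np.zeros((4, 12), dtype=bool); m2[1, 5] = True
print(predict_next(centroid(m1), centroid(m2)))  # (8.0, 1.0)
```

A luminance-weighted centroid, as the text also allows, would simply replace the unweighted mean with a weighted one.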

[Moving speed and frame rate of the moving body]

Fig. 5 is a diagram showing another timing example of pixel reading of the imaging device in the embodiment of the present technology. This example shows a time series example in the case where the frame rate at the time of high-resolution imaging is dynamically changed.

As in the case of the above example, when a moving body is detected at time T3, sensing imaging is performed at a high frame rate to predict the motion of the moving body. Subsequently, when it is determined that the moving speed of the moving body is low, the system control unit 300 controls the imaging element 100 to perform imaging at a low frame rate from time T6.

Subsequently, when a new moving body is detected at time T7, sensing imaging is performed again at a high frame rate in order to predict the motion of the moving body. Then, when determining that the moving speed of the new moving body is high, the system control unit 300 controls the imaging element 100 to perform imaging at a high frame rate from time T8. In this way, the frame rate can be dynamically changed according to the moving speed of the moving body.
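A minimal sketch of this speed-dependent frame rate control follows; the speed threshold and the two frame rates are illustrative assumptions, since the text only states that the rate changes according to the moving speed.

```python
def select_frame_rate(speed_px_per_frame: float,
                      low_fps: int = 30, high_fps: int = 60,
                      speed_thresh: float = 2.0) -> int:
    """Pick a frame rate from the moving speed of the detected region.

    Hypothetical two-level policy standing in for the control of unit 300.
    """
    return high_fps if speed_px_per_frame >= speed_thresh else low_fps

print(select_frame_rate(0.5))  # 30: slow moving body, low frame rate
print(select_frame_rate(3.0))  # 60: fast moving body, high frame rate
```

A real controller could interpolate between more than two rates, but the two-level form matches the T6/T8 behavior described above.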

[Case where a plurality of moving bodies are detected]

Fig. 6 is a diagram showing an operation example in a case where a plurality of moving bodies are detected in the embodiment of the present technology.

In this example, a moving body is detected in the object at time T2, and subsequently, another moving body is detected at time T3. That is, after time T3, a plurality of moving bodies having different moving speeds and moving directions exist in the object.

In this case, the imaging element 100 selects a region to accommodate all the target moving bodies, and performs pixel reading at high resolution. Then, as indicated by the sensor output at time T3, the high-resolution-region image processing unit 210 cuts out a region where each moving body exists as an ROI, and outputs a plurality of regions via the output processing unit 260.

However, in a case where a plurality of regions overlap each other as shown at time T4, a region including both may be cut out as the ROI. Subsequently, when the overlap between the plurality of regions disappears, individual regions are again cut out as ROIs, as shown at time T5.

Note that even in the case where a plurality of regions are made to be ROIs in this way, the exposure evaluation value generation unit 250 generates exposure evaluation values for the plurality of regions, and controls imaging based on the exposure evaluation values.
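The overlap handling at times T4 and T5 can be sketched as merging intersecting bounding boxes into a single ROI; the rectangle representation and the pairwise merge below are assumptions, as the cut-out logic is not detailed in the text.

```python
def overlaps(a, b):
    """Axis-aligned rectangles as (x0, y0, x1, y1); True if they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge(a, b):
    """Bounding box covering both regions, used as one combined ROI."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def cut_out_rois(regions):
    """Cut out one ROI per region, merging regions that overlap.

    Simplified sketch: each new region is merged into the first existing
    ROI it overlaps, otherwise kept separate.
    """
    rois = []
    for r in regions:
        for i, existing in enumerate(rois):
            if overlaps(r, existing):
                rois[i] = merge(r, existing)
                break
        else:
            rois.append(r)
    return rois

print(cut_out_rois([(0, 0, 4, 4), (2, 2, 6, 6)]))  # [(0, 0, 6, 6)]
print(cut_out_rois([(0, 0, 2, 2), (4, 4, 6, 6)]))  # two separate ROIs
```

When the overlap disappears, the same routine naturally yields separate ROIs again, matching the behavior at time T5.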

[Variation in which normal imaging and sensing imaging are performed in different frames]

Fig. 7 is a diagram showing an operation example in the case where normal imaging and sensing imaging are performed for different frames in the embodiment of the present technology.

In the above-described embodiment, the normal imaging and the sensing imaging are switched for each pixel row in one frame; however, a variation is described herein in which such switching is not performed.

In this modification, a moving body is detected in the object from time T3 to time T11. At time T1 and time T2, since no motion is detected in the object, only the sensing reading 701 of low resolution is performed. At this time, the photographed image is not output. After that, when the moving body enters at time T3, sensing imaging is performed at a high frame rate as in the above-described embodiment.

In this modification, only one of normal imaging or sensing imaging is performed in one frame. That is, at time T6, the high-resolution normal reading 702 is performed for the ROI, and the image of the ROI is output via the high-resolution-region image processing unit 210 and the output processing unit 260. At the next time T7, low-resolution sensing reading 701 is performed on the entire image. At this time, the image at time T7 is not output.

Subsequently, similarly, at time T8, the high-resolution normal reading 702 is performed for the ROI, and an image of the ROI is output. At the subsequent time T9, the sensing reading 701 of low resolution is performed for the entire image, and the image is not output.

According to the operation of this modification, normal imaging and sensing imaging can be repeated without switching between the normal imaging and the sensing imaging for each pixel row in one frame. However, the frame rate of the output ROI is halved.
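The alternating schedule of this variation can be sketched as a simple per-frame mode selection; the function and its even/odd convention are hypothetical.

```python
def frame_plan(motion: bool, frame_index: int) -> str:
    """Choose the imaging mode for one frame in this variation.

    While motion is detected, frames alternate between high-resolution
    normal reading of the ROI and low-resolution sensing of the whole
    image; otherwise only sensing is performed. Illustrative only.
    """
    if not motion:
        return 'sensing'
    return 'normal' if frame_index % 2 == 0 else 'sensing'

print([frame_plan(True, i) for i in range(4)])   # alternates normal/sensing
print([frame_plan(False, i) for i in range(2)])  # sensing only
```

Because only every other frame is a normal reading, the output ROI frame rate is half that of the row-switched scheme, as noted above.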

[Imaging control for each region]

Fig. 8 is a flowchart illustrating an example of determining imaging control for each region in an image in the embodiment of the present technology.

In this example, the area in the image is divided into N regions, from region #0 to region #(N-1), and the content of imaging control is determined for each region. Here, the variable n is used to iterate over the regions; its initial value is "0" (step S911).

When the presence of a moving body is predicted in region #n (step S912: Yes), the imaging element 100 images region #n at high resolution, and outputs the imaging signal of the ROI via the high resolution region image processing unit 210 and the output processing unit 260 (step S913). Further, the resolution of the imaging signal of region #n captured at high resolution is reduced by the pixel addition unit 220 (step S914), and the result is supplied to the motion detection processing unit 240 via the low resolution region image processing unit 230.

At this time, in the case where a moving body is not detected in region #n (step S915: No), the motion detection processing unit 240 determines that the motion detected so far in region #n has disappeared, and corrects the motion following for the next frame (step S926). In contrast, in the case where a moving body is detected in region #n (step S915: Yes), it is determined that there is no change in the detection of the moving body, the variable n is incremented by 1 (step S927), and the process shifts to the next region. That is, as long as the variable n is smaller than N (step S928: Yes), the processes from step S912 are repeated.

In the case where the presence of a moving body is not predicted in region #n (step S912: NO), the imaging element 100 images region #n at low resolution (step S923). The imaging signal of region #n imaged at low resolution is supplied to the motion detection processing unit 240 via the low resolution region image processing unit 230.

At this time, in the case where a moving body is detected in region #n (step S925: YES), the motion detection processing unit 240 determines that a new moving body has entered region #n, and corrects the motion following for the next frame (step S926). In contrast, in the case where a moving body is not detected in region #n (step S925: NO), it is determined that there is no change in the detection of the moving body, the variable n is incremented by 1 (step S927), and the process shifts to the next region. That is, as long as the variable n is smaller than N (step S928: YES), the processes from step S912 are repeated.
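The per-region decision flow above (steps S911 to S928) can be condensed into a minimal Python sketch. This is an illustration only, with hypothetical names; the imaging and detection steps are reduced to boolean inputs rather than actual signal processing.

```python
def control_regions(n_regions, predicted, detected):
    """Sketch of the per-region imaging control (steps S911 to S928).

    predicted[n]: True if a moving body is predicted in region #n (S912).
    detected[n]:  True if a moving body is actually detected there (S915/S925).
    Returns the regions whose motion following is corrected for the next frame.
    """
    corrected = []
    for n in range(n_regions):           # S911 init, S927 increment, S928 loop test
        if predicted[n]:
            # S913: image at high resolution and output the ROI;
            # S914: reduce resolution by pixel addition for motion detection.
            if not detected[n]:          # S915: NO -> motion has disappeared
                corrected.append(n)      # S926
        else:
            # S923: image at low resolution only.
            if detected[n]:              # S925: YES -> a new moving body entered
                corrected.append(n)      # S926
    return corrected
```

For example, with motion predicted only in region #0 but detected only in region #1, both regions have their motion following corrected: #0 because the motion disappeared, and #1 because a new moving body entered.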

Note that the process of determining the content of imaging control for each region has been described here. However, in an actual circuit, exposure and imaging are performed in units of pixel rows. This will be described in detail below.

<3. Circuit >

[ first example of Pixel Driving ]

Fig. 9 is a diagram illustrating a first example of pixel driving in the imaging apparatus in the embodiment of the present technology.

In the first example of pixel driving, pixels are driven in units of block rows, each of which is a group of rows extending in the horizontal direction. At this time, the block row including the ROI 710 is driven by high-resolution reading (high-resolution imaging drive). In contrast, a block row not including the ROI 710 is driven by low-resolution reading with pixel addition (sensing imaging drive).

Fig. 10 is a diagram showing an example of a pixel circuit assumed in the first example of pixel driving in the embodiment of the present technology.

Here, an FD-shared pixel circuit is assumed in which a Floating Diffusion (FD) 119 is shared by four pixels 111 to 114. The four pixels 111 to 114 are photodiodes that convert received light into electric charges. The FD 119 is a diffusion layer region that converts charges into a voltage signal.

For example, the four pixels 111 to 114, to which the colors of the Bayer array are assigned, consist of a red pixel, a blue pixel, and two green pixels. A row of such four-pixel units 111 to 114 arranged in the horizontal direction is referred to as an FD row. N FD rows arranged in the vertical direction are referred to as a block row.

The transfer transistors 121 to 124 are connected to the four pixels 111 to 114, respectively. The transfer gate signals TG0 to TG3 are supplied to the gates of the transfer transistors 121 to 124, and transfer to the FD 119 is controlled.

Further, the reset transistor 131 is connected to the FD 119. A reset signal RST is supplied to the gate of the reset transistor 131, and resetting of the pixels 111 to 114 and the FD 119 is controlled.

Further, the amplifying transistor 132 is connected to the FD 119. The amplifying transistor 132 amplifies a voltage signal based on the electric charges transferred from the pixels 111 to 114 to the FD 119. The selection transistor 141 is connected to the amplifying transistor 132. A selection signal SEL is supplied to the gate of the selection transistor 141, and the output to the Vertical Signal Line (VSL) 150 is controlled.

Further, the addition transistor 151 is connected to the FD 119. The addition enable signal ALSEN is supplied to the gate of the addition transistor 151, whereby the addition of charges via FD connection in the vertical direction is controlled.

The outputs from the four pixels 111 to 114 are supplied to the vertical signal line 150 of each column, and are converted into digital signals by the A/D converter 180. Further, the vertical signal lines 150 can be selectively connected by the switches 160 so that addition can also be performed in the horizontal direction. That is, in the imaging element 100, pixel addition in the vertical direction by FD connection and pixel addition in the horizontal direction by the switches 160 can be performed at the analog level, and thus low-resolution reading can be performed.

Fig. 11 is a diagram showing a timing example when reading a block row not including an ROI in the first example of pixel driving in the embodiment of the present technology.

In the first example of pixel driving, the sensing imaging drive is applied to a block row that does not include the ROI. That is, pixel addition is performed at the analog level, and low-resolution reading is performed.

As shown in the figure, the shutter operation is performed simultaneously in all FD rows in one block row, and exposure is started. Then, after the exposure period elapses, the reading operation is performed simultaneously in all the FD rows in the block row. At this time, the addition enable signal ALSEN is enabled (set to the high level), and pixel addition in the vertical direction within the block row is performed. Further, the switch 160 is placed in the ON state, and pixel addition in the horizontal direction is performed.

Fig. 12 is a diagram showing a timing example when reading a block row including an ROI in the first example of pixel driving in the embodiment of the present technology.

In the first example of pixel driving, the high-resolution imaging drive is applied to the block row including the ROI. That is, as shown in the figure, the transfer gate signals TG0 to TG3 are enabled (set to a high level) for the pixels 111 to 114 in each FD row with a time shift, and exposure is started. Then, after the exposure period elapses, reading is performed for each of the pixels 111 to 114 individually.

In the high-resolution imaging drive, the pixel addition performed in the sensing imaging drive is not performed. Therefore, in order to generate a low-resolution image for the motion detection processing unit 240, the pixel addition unit 220 of the signal processing unit 200 reduces the resolution at the digital level.
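As a rough software analogue of this digital-level resolution reduction, the pixel addition unit can be modeled as block summation over the high-resolution frame. The following sketch assumes simple b-by-b binning with a hypothetical function name; it illustrates the idea, not the actual circuit or firmware behavior.

```python
import numpy as np

def pixel_addition(high_res, b):
    """Reduce resolution by summing each b-by-b block of pixels, so that the
    result matches the resolution of the low-resolution sensing rows."""
    h, w = high_res.shape
    assert h % b == 0 and w % b == 0, "frame must tile evenly into b-by-b blocks"
    # Reshape rows and columns into (block index, offset) pairs, then sum
    # over both offset axes to obtain one value per block.
    return high_res.reshape(h // b, b, w // b, b).sum(axis=(1, 3))
```

For instance, a 4x4 frame binned with b=2 yields a 2x2 frame whose entries are the sums of the corresponding 2x2 pixel blocks.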

[ second example of Pixel Driving ]

Fig. 13 is a diagram illustrating a second example of pixel driving in the imaging apparatus in the embodiment of the present technology.

In the second example of pixel driving, the sensing imaging drive and the high-resolution imaging drive are performed at different times for the block row including the ROI 710. At this time, the target drive is selected for each block column by the column enable signal CLMEN. In contrast, for block rows not including the ROI 710, the sensing imaging drive is performed as in the first example.

Fig. 14 is a diagram showing an example of a pixel circuit assumed in the second example of pixel driving in the embodiment of the present technology.

The pixel circuit has a configuration similar to that described in the first example. However, the column enable signal CLMEN is connected to each block column, which is a group of pixel columns in the vertical direction, and the drive is controlled for each block column. That is, CLMEN(0) is connected to the first block column, CLMEN(1) is connected to the second block column, and similarly, CLMEN(M-1) is connected to the M-th block column, so that control is performed in units of block columns.

Fig. 15 is a view showing a timing example when reading a block row including an ROI in the second example of pixel driving in the embodiment of the present technology.

Here, a case is assumed where the first block column does not include the ROI and the second block column includes the ROI. Therefore, at the time of the sensing imaging drive, CLMEN(0) is enabled (set to a high level). In contrast, at the time of the high-resolution imaging drive, CLMEN(1) is enabled (set to a high level). This makes it possible to select the drive content applied to each block column.

Note that the timing example when reading a block row not including the ROI is similar to that in the above-described first example, and thus detailed description thereof is omitted.

Fig. 16 is a view showing an example of a column drive dividing circuit of a pixel circuit assumed in the second example of pixel drive in the embodiment of the present technology.

The column drive dividing circuit is a circuit that generates the transfer gate signals TG0 to TG3 for each FD row from the column enable signal CLMEN. The divided transfer gate signals are generated by taking the logical product (AND) of the transfer gate signal of the FD row and the column enable signal. Here, a divided transfer gate signal is represented as TG# (block column, FD row, FD column).
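The logical product above can be sketched as follows. The boolean representation of the signals and the function name are assumptions for illustration; the actual circuit operates on gate-level signals, not Python values.

```python
def divided_transfer_gates(tg, clmen):
    """Generate divided transfer gate signals, keyed (block column, signal index),
    as the logical product (AND) of each transfer gate signal of the FD row
    and each block column's enable signal CLMEN."""
    return {
        (col, idx): gate and enable
        for idx, gate in enumerate(tg)        # TG0..TG3 within the FD row
        for col, enable in enumerate(clmen)   # CLMEN(0)..CLMEN(M-1)
    }
```

With only TG0 asserted and only CLMEN(1) enabled, the single divided signal that is driven is the one for TG0 in the second block column; all other block columns remain undriven.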

[ third example of Pixel Driving ]

Fig. 17 is a diagram showing an example of a pixel circuit assumed in the third example of pixel driving in the embodiment of the present technology.

In the first example described above, a pixel arrangement in the Bayer array is assumed, but in the third example, one of the two green pixels is used as a pixel for sensing addition. The pixel for sensing addition may be used as-is as a green pixel in the Bayer array, or may be a white pixel without a color filter. In this example, a pixel 115 for sensing addition is provided instead of the pixel 114. Further, the transfer transistor 125 is connected to the pixel 115, and a transfer gate signal TGW is supplied to the gate of the transfer transistor 125.

Fig. 18 is a view showing a timing example when reading a block row including an ROI in the third example of pixel driving in the embodiment of the present technology.

In the third example of pixel driving, sensing imaging and high-resolution imaging are performed on the block row including the ROI. That is, at the time of the sensing imaging drive, only TGW is driven, and the other signals TG0 to TG2 are not driven. In contrast, at the time of the high-resolution imaging drive, TGW is not driven, and only the other signals TG0 to TG2 are driven. This makes it possible to perform sensing imaging and high-resolution imaging while controlling the pixels independently.

Note that the timing example when reading a block row not including the ROI is similar to that in the above-described first example, and thus detailed description thereof is omitted.

[ Effect ]

In this way, according to the embodiment of the present technology, the power consumption of the imaging element 100 can be reduced by imaging only the region of interest (ROI) at high resolution for output, while capturing the entire image at low resolution to track the ROI.

Note that the above-described embodiments are examples for embodying the present technology, and the matters in the embodiments correspond to the matters specifying the invention in the claims. Similarly, the matters specifying the invention in the claims correspond to the matters given the same names in the embodiments of the present technology. However, the present technology is not limited to the embodiments, and may be implemented with various modifications without departing from the gist thereof.

Further, the processes described in the above embodiments may be regarded as a method including a series of procedures, or may be regarded as a program for causing a computer to execute the series of procedures, or as a recording medium storing the program. For example, a Compact Disc (CD), a Mini Disc (MD), a Digital Versatile Disc (DVD), a memory card, a Blu-ray (registered trademark) disc, or the like can be used as the recording medium.

Note that the effects described in this specification are merely illustrative and not restrictive; another effect is also possible.

Note that the present technology may also have the following configuration.

(1) An imaging apparatus comprising:

an imaging element that performs imaging at a predetermined resolution on a first pixel row including a predetermined area and performs imaging at a resolution lower than the predetermined resolution on a second pixel row other than the first pixel row in a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction;

a first image processing unit that generates an image of a predetermined area based on an imaging signal of the first pixel row;

a pixel addition unit that performs addition processing between pixels on the imaging signals of the first pixel row so that the resolution is the same as that of the imaging signals of the second pixel row; and

a second image processing unit that generates an image of the entire area based on the imaging signal of the second pixel row and the imaging signal of the first pixel row subjected to the addition processing.

(2) The imaging apparatus according to the above (1), further comprising:

a motion detection processing unit that detects motion from a change in a time series of images of the entire area; and

a control unit that controls the imaging element to image the first pixel row, with the area in which the motion is detected set as the predetermined area.

(3) The imaging apparatus according to the above (2),

wherein the control unit controls the imaging element to increase the frame rate when transitioning from a state in which motion is not detected to a state in which motion is detected.

(4) The imaging apparatus according to the above (2) or (3),

wherein the control unit controls the imaging element to change the frame rate according to a moving speed of the region in which the motion is detected.

(5) The imaging apparatus according to any one of (2) to (4),

wherein, in a case where a plurality of moving bodies with different moving speeds are detected in the image of the entire region, the control unit controls the imaging element to image the first pixel row, with the region including the moving bodies set as the predetermined region.

(6) The imaging apparatus according to any one of the above (2) to (5), further comprising:

an exposure evaluation value generation unit that generates an exposure evaluation value based on an area in which motion is detected in an image of the entire area,

wherein the control unit controls the imaging element to perform exposure based on the exposure evaluation value.

(7) The imaging apparatus according to any one of the above (2) to (6),

wherein, in a state where the motion is detected, the imaging element alternately repeats a first frame period in which imaging is performed on the first pixel row at the predetermined resolution and is not performed on the second pixel row, and a second frame period in which imaging is performed on all pixel rows at a resolution lower than the predetermined resolution.

(8) The imaging apparatus according to any one of the above (2) to (7),

wherein the imaging element performs imaging of the first pixel row at a predetermined resolution only in a state where the motion is detected.

(9) The imaging apparatus according to any one of the above (1) to (8), further comprising:

an output processing unit that outputs the image of the predetermined area generated by the first image processing unit.

(10) A control method of an imaging element that performs imaging at a predetermined resolution on a first pixel row including a predetermined area and performs imaging at a resolution lower than the predetermined resolution on a second pixel row other than the first pixel row in a two-dimensional pixel array in which pixel rows arranged in a predetermined direction are arranged in a direction perpendicular to the predetermined direction, the control method comprising:

a first image processing process of generating an image of a predetermined area based on the imaging signal of the first pixel row;

a pixel addition process of performing addition processing between pixels on the imaging signals of the first pixel row to make the resolution the same as that of the imaging signals of the second pixel row; and

a second image processing process of generating an image of the entire area based on the imaging signal of the second pixel row and the imaging signal of the first pixel row subjected to the addition processing.

REFERENCE SIGNS LIST

100 imaging element

111 to 115 pixels (photodiodes)

121 to 125 transfer transistors

131 reset transistor

132 amplifying transistor

141 selection transistor

150 vertical signal line

151 addition transistor

160 switch

180 A/D converter

200 signal processing unit

210 high resolution area image processing unit

220 pixel addition unit

230 low resolution area image processing unit

240 motion detection processing unit

250 exposure evaluation value generation unit

260 output processing unit

300 System control Unit

400 drive unit
