Image processing apparatus, method, system, and computer-readable medium

Document No.: 1804624    Publication date: 2021-11-05

Reading note: This technology, "Image processing apparatus, method, system, and computer-readable medium", was designed and created by 蝶野庆一, 舟山知里, and 塚田正人 on 2020-02-12. Its main content is as follows: The present disclosure reduces image reacquisition caused by eye position fluctuations. A controller (500) controls image readout by iris image capturing devices (401-404) for capturing an image of the iris of a subject. In controlling the image readout by the iris image capturing devices, the controller (500) sets a region of interest as the target of the image readout. The controller (500) stores past information and, in the case of performing image processing on the same subject, determines whether the eye region of the subject can be covered by a region of interest that has been used before. In the case where it is determined that the eye region cannot be covered, the controller (500) corrects the image readout by the iris image capturing devices.

1. An image processing system comprising:

a plurality of iris imaging devices arranged at mutually different positions within the same field of view;

a whole imaging device for imaging a field of view wider than the fields of view of the iris imaging devices;

a guide device for guiding a subject;

an illumination device for illuminating the subject with light; and

a control device for controlling, by using an image from the whole imaging device, at least one of the following: reading out images from the plurality of iris imaging devices, presenting at least one of an image and a sound through the guide device, or providing illumination using light from the illumination device, wherein

the control device controls image readout from the iris imaging devices,

in the control of the image readout, the control device sets a region of interest to be treated as an object of the image readout, and

the control device additionally stores past information, and in a case where image processing is performed on the same subject, the control device determines whether or not the eye area of the subject is successfully covered by the region of interest used in the past, and in a case where it is determined that the eye area of the subject is not successfully covered, the control device corrects the image readout from the iris imaging device.

2. The image processing system according to claim 1,

the control device performs image readout from the plurality of iris imaging devices, and

in the image readout from the plurality of iris imaging devices, the control device specifies, from among the plurality of iris imaging devices, an iris imaging device capable of capturing an image of an eye of the subject based on the image obtained by the whole imaging device, sets a region of interest including the position of the eye of the subject in the specified iris imaging device, and obtains an image of the region of interest from the specified iris imaging device.

3. The image processing system according to claim 1 or 2,

the plurality of iris imaging devices are stacked in a vertical direction,

in the setting of the region of interest, the control device determines an iris imaging device for image readout from among the plurality of iris imaging devices, and specifies the region of interest within an imaging range of the determined iris imaging device, and

the control device corrects the image readout from the iris imaging device in a case where the iris imaging device determined for image readout and the region of interest set in that iris imaging device are the same as the iris imaging device and the region of interest in the past information.

4. An image processing apparatus comprising:

a control device for controlling image reading from an iris imaging device for imaging an iris of a subject, wherein,

in the control of the image readout, the control means sets a region of interest to be treated as an object of image readout, and

the control device additionally stores past information, and in a case where image processing is performed on the same subject, the control device determines whether or not the eye area of the subject is successfully covered by the region of interest used in the past, and in a case where it is determined that the eye area of the subject is not successfully covered, the control device corrects the image readout from the iris imaging device.

5. The image processing apparatus according to claim 4,

the iris imaging device includes a plurality of iris imaging devices stacked in a vertical direction,

in the setting of the region of interest, the control device determines an iris imaging device for image readout from among the plurality of iris imaging devices, and specifies the region of interest within an imaging range of the determined iris imaging device, and

the control device corrects the image readout from the iris imaging device in a case where the iris imaging device determined for image readout and the region of interest set in that iris imaging device are the same as the iris imaging device and the region of interest in the past information.

6. The image processing apparatus according to claim 4 or 5,

the past information includes information indicating a position of an eye region in a region of interest taken in the past, and

the control means corrects the image readout from the iris imaging means based on the information indicating the position of the eye region.

7. The image processing apparatus according to any one of claims 4 to 6,

the past information includes information indicating whether or not an eye region is included in a region of interest captured in the past, and

the control means determines whether the eye area is successfully covered by the region of interest in the past based on the information indicating whether the eye area of the subject is included.

8. The image processing apparatus according to any one of claims 4 to 7, wherein the control means controls the image readout from the iris image pickup device by using an image picked up by an entire image pickup device for image pickup over a range wider than an image pickup range of the iris image pickup device.

9. An image processing method comprising:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

10. An image processing method comprising:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

determining, in a case where image processing is performed on the same subject, whether or not an eye area of the subject is successfully covered by a region of interest that was used in the past as an object of the image readout; and

correcting the image readout from the iris imaging apparatus in a case where it is determined that the eye area of the subject is not successfully covered.

11. A non-transitory computer-readable medium storing a program that causes a computer to execute a process, the process comprising:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

12. A non-transitory computer-readable medium storing a program that causes a computer to execute a process, the process comprising:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

determining, in a case where image processing is performed on the same subject, whether or not an eye area of the subject is successfully covered by a region of interest that was used in the past as an object of the image readout; and

correcting the image readout from the iris imaging apparatus in a case where it is determined that the eye area of the subject is not successfully covered.

Technical Field

The present disclosure relates to an image processing apparatus, method, system, and computer readable medium, and particularly to an image processing apparatus, method, system, and computer readable medium that can be used for authentication using an iris.

Background

Biometric authentication using an iris is known. In such biometric authentication, the iris of a subject is photographed by using an image pickup device, and a feature value is extracted from the pattern of the photographed iris. To authenticate the subject, the extracted feature value is compared with feature values registered in advance in a database, and pass/fail is determined based on the matching score between them. Further, in order to register a subject to be authenticated, the extracted feature value is added to the database.

As described in non-patent document 1, the iris, which is a ring-shaped tissue around the pupil, has a very complicated pattern and is unique to each person. Further, in photographing of the iris, near infrared light is applied to the eyes of the subject.

As described in non-patent document 2, in the photographing of the iris, an iris image is photographed at a resolution in which the iris radius is represented by 100 to 140 pixels. Further, the wavelength of the near infrared light applied to the subject's eye is in the range between 700nm and 900 nm.

CITATION LIST

Non-patent document

Non-patent document 1: Hosoya, "Identification System by Iris Recognition", Japanese Society for Medical and Biological Engineering 44(1), pages 33-39, 2006

Non-patent document 2: daugman, "How Iris Recognition Works," https:// www.cl.cam.ac.uk/-jgd 1000/irisorog

Disclosure of Invention

Technical problem

The diameter of the iris is about 1 cm. Therefore, when the radius of the iris is represented by 100 pixels, the granularity becomes 50 μm. As described above, since the pattern of the iris is microscopic, it is difficult to capture the iris pattern with a quality level sufficient for authentication and verification under the conditions that the distance between the subject and the image pickup device is large, the field of view to be captured is wide, and the subject moves.

In view of the above, it is an object of the present disclosure to provide an image processing apparatus, method, system, and computer-readable medium capable of photographing an iris pattern with a quality level sufficient for authentication and verification.

Solution to the problem

To achieve the above object, in a first aspect, the present disclosure provides an image processing system comprising:

a plurality of iris imaging devices arranged at mutually different positions within the same field of view;

an entire image pickup device for picking up an image in a field of view wider than that of the iris image pickup device;

a guide device for guiding a subject;

an illumination device for illuminating a subject with light; and

a control device for controlling at least one of the following using an image from the whole image pickup device: reading out images from a plurality of iris image pickup devices, presenting at least one of the images and sounds through a guide device, or providing illumination with light from an illumination device, wherein,

the control device controls the image readout from the iris imaging devices,

in the control of the image readout, the control device sets a region of interest to be treated as an object of the image readout, and

the control device additionally stores past information, and in the case of performing image processing on the same subject, the control device determines whether or not the subject's eye area is successfully covered by a region of interest used in the past, and in the case of determining that the subject's eye area is not successfully covered, the control device corrects image readout from the iris imaging device.

In a second aspect, the present disclosure provides an image processing apparatus comprising:

a control device for controlling image reading from an iris imaging device for imaging an iris of a subject, wherein,

in the control of the image readout, the control device sets a region of interest to be treated as an object of the image readout, and

the control device additionally stores past information, and in the case of performing image processing on the same subject, the control device determines whether or not the subject's eye area is successfully covered by a region of interest used in the past, and in the case of determining that the subject's eye area is not successfully covered, the control device corrects image readout from the iris imaging device.

In a third aspect, the present disclosure provides an image processing method comprising:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

In a fourth aspect, the present disclosure provides an image processing method comprising:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

determining, in the case of performing image processing on the same subject, whether or not the eye area of the subject is successfully covered by a region of interest that was used in the past as an object of the image readout; and

correcting the image readout from the iris imaging apparatus in the case where it is determined that the eye area of the subject is not successfully covered.

In a fifth aspect, the present disclosure provides a non-transitory computer-readable medium storing a program that causes a computer to execute a process including:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

In a sixth aspect, the present disclosure provides a non-transitory computer-readable medium storing a program that causes a computer to execute a process including:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

determining, in the case of performing image processing on the same subject, whether or not the eye area of the subject is successfully covered by a region of interest that was used in the past as an object of the image readout; and

correcting the image readout from the iris imaging apparatus in the case where it is determined that the eye area of the subject is not successfully covered.

Advantageous effects of the invention

An image processing apparatus, method, system, and computer readable medium according to the present disclosure can capture an iris pattern with a quality level sufficient for authentication and verification.

Drawings

Fig. 1 is a block diagram showing an image processing system according to a first exemplary embodiment of the present disclosure;

fig. 2 shows a state of iris imaging control;

fig. 3 is a flowchart showing an operation procedure in the image pickup system;

fig. 4 is a flowchart showing an operation procedure in an image processing system according to a second exemplary embodiment of the present disclosure; and

fig. 5 is a block diagram showing an example of the configuration of a computer apparatus.

Detailed Description

Before describing example embodiments according to the present disclosure, the problem is described quantitatively. As an example, the conditions shown below, which are assumed as operating conditions for an automatic border control system (ABC system) or the like, will be used. Assume that the distance between the subject and the image pickup apparatus (the distance between the subject and the gate) is 2 m, and the horizontal field of view (i.e., the range in the horizontal direction in which both eyes of one subject can be covered) is 0.2 m. Further, the vertical field of view (i.e., the range in the vertical direction in which the eyes of subjects ranging from a tall subject, typically a male, to a short subject, typically a female, can be covered) is 0.4 m. Further, it is assumed that the walking speed (moving speed) of the subject toward the gate is equal to the average slow walking speed of an adult, for example, 1 m/s.

Under the above-described operating conditions, assuming that an image sensor having a pixel pitch of 5 μm and a lens having an F2 aperture and a focal length of 200 mm are used, a high resolution of 32M pixels and a high frame rate of 100 frames per second (fps) are required of the image pickup apparatus, as described below.

Regarding the resolution, to ensure a horizontal field of view of 0.2 m at a position 2 m from the imaging device, the imaging device needs 4,000 pixels in the horizontal direction (0.2 [m] ÷ 50 [μm] = 4,000). Further, to ensure a vertical field of view of 0.4 m at a position 2 m from the imaging device, the imaging device needs 8,000 pixels in the vertical direction (0.4 [m] ÷ 50 [μm] = 8,000). As a result, the imaging device requires a resolution of 32M pixels.

On the other hand, in the case of using the above-described lens, if the permissible circle of confusion diameter is 0.03 mm, the depth of field at the focus position 2 m away is about 1 cm. When the subject walks at 1 m/s, the time for the subject to pass through the 1 cm depth of field is 1 [cm] ÷ 100 [cm/s] = 0.01 s, and the imaging apparatus therefore requires a performance of 100 fps (a temporal resolution of 1 [s] ÷ 100 [fps] = 0.01 s) in order to capture the 0.01 s instant at which the iris of the walking subject is in focus.
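For illustration only (this numerical check is not part of the original disclosure), the pixel counts and frame rate stated above can be recomputed directly from the assumed operating conditions with the following Python sketch; the variable names are arbitrary.

```python
# Recompute the sensor requirements from the stated operating conditions.
GRANULARITY_M = 50e-6      # 50 um per pixel (iris radius of ~5 mm mapped to 100 pixels)
H_FOV_M = 0.2              # required horizontal field of view at the 2 m focus position
V_FOV_M = 0.4              # required vertical field of view at the 2 m focus position
DEPTH_OF_FIELD_M = 0.01    # ~1 cm depth of field with the assumed F2, 200 mm lens
WALKING_SPEED_M_S = 1.0    # slow adult walking speed

h_pixels = H_FOV_M / GRANULARITY_M            # 4,000 pixels
v_pixels = V_FOV_M / GRANULARITY_M            # 8,000 pixels
total_pixels = h_pixels * v_pixels            # 32,000,000 -> 32M pixels

time_in_focus_s = DEPTH_OF_FIELD_M / WALKING_SPEED_M_S   # 0.01 s
required_fps = 1.0 / time_in_focus_s                     # 100 fps

print(f"{h_pixels:.0f} x {v_pixels:.0f} = {total_pixels / 1e6:.0f}M pixels, {required_fps:.0f} fps")
```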

No image pickup device exists as a widely available product that satisfies, by itself, both a high resolution of 32M pixels and a high frame rate of 100 fps. Therefore, under the above-described operating conditions, it is difficult to capture an iris pattern at a quality level sufficient for authentication and verification. This concludes the quantitative description of the problem.

Example embodiments according to the present disclosure will be described below with reference to the accompanying drawings. Fig. 1 illustrates an image processing system according to a first exemplary embodiment of the present disclosure. The image processing system includes the whole imaging apparatus 100, the guidance apparatus 200, the illumination apparatus 300, the iris imaging apparatuses 401 to 404, and the controller 500. Note that although the number of iris image pickup apparatuses is four in fig. 1, the number of iris image pickup apparatuses is not limited to any particular number. The number of iris image pickup apparatuses may be appropriately set according to the field of view to be covered and the resolution of the iris image pickup apparatuses available.

The whole imaging apparatus (whole image pickup device) 100 photographs the subject with a field of view wide enough to cover the whole range of subjects, from tall subjects to short subjects. The whole imaging apparatus 100 may have a resolution with which the subject can be authenticated by his/her face.

The controller (control device) 500 monitors the whole image supplied from the whole imaging apparatus 100, and controls the guide apparatus (guide device) 200, the illumination apparatus (illumination device) 300, and the plurality of iris image capturing apparatuses (iris image capturing devices) 401 to 404. The functions of the controller 500 may be implemented by hardware or a computer program. The controller 500 determines the start of biometric authentication of the subject based on his/her whole image supplied from the whole imaging apparatus 100 or based on an external input.

The control performed by the controller 500 includes guidance control, illumination control, and iris imaging control. In the guidance control, the controller 500 supplies guidance control information for guiding the subject to the guidance device 200. The guidance device 200 guides the subject based on the guidance control information. The guidance device 200 includes, for example, a display and/or a speaker. For example, the guidance device 200 presents an image and a sound indicating the start of biometric authentication through the display and/or the speaker, respectively. Further, the guidance device 200 presents an image and a sound for inducing the subject to turn his/her eyes toward the iris imaging apparatuses through the display and/or the speaker, respectively.

In the illumination control, the controller 500 supplies illumination control information for applying illumination light to the subject to the illumination apparatus 300. The illumination apparatus 300 applies light (e.g., near-infrared light) to the subject based on the illumination control information. The illumination apparatus 300 includes an LED (light emitting diode) as a light source and a synchronization signal generator. The amount of light applied from the illumination apparatus 300 to the subject is determined by the current value supplied to the LED, the light emission time of the LED, and its light emission period, and the illumination control information includes these values. When the LED is not kept continuously in the on state, its light emission period is synchronized with the frame rate of the plurality of iris imaging apparatuses 401 to 404.
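As a concrete illustration of the kind of illumination control information described above, the following sketch bundles the stated parameters (LED current, emission time, emission period) into one structure and ties the emission period to the iris cameras' frame rate. The field names and the helper function are assumptions made for illustration, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class IlluminationControlInfo:
    led_current_ma: float      # current value supplied to the LED
    emission_time_s: float     # light emission time per pulse
    emission_period_s: float   # light emission period

def pulsed_illumination(current_ma: float, emission_time_s: float,
                        iris_frame_rate_fps: float) -> IlluminationControlInfo:
    # When the LED is pulsed rather than kept on continuously, its emission
    # period is synchronized with the iris cameras' frame rate (e.g. 1/120 s).
    return IlluminationControlInfo(current_ma, emission_time_s,
                                   emission_period_s=1.0 / iris_frame_rate_fps)
```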

In the iris imaging control, the controller 500 determines one of the plurality of iris imaging apparatuses 401 to 404, which can appropriately image the region of the subject including his/her eyes, based on the whole image supplied from the whole imaging apparatus 100. Further, the controller 500 determines the vertical position of the region of interest read out at high speed in the determined iris image pickup apparatus.

Fig. 2 shows a state of the iris imaging control. Details of the iris imaging control will be described with reference to fig. 2. In this example, it is assumed that a general-purpose camera having a resolution of 12M pixels (4,000 horizontal pixels and 3,000 vertical pixels) and a frame rate of 60 fps is used for each of the iris imaging apparatuses 401 to 404. Such cameras have become widespread as industrial cameras. When photographing at the granularity of 50 μm with which a subject can be authenticated by his/her iris, the horizontal and vertical fields of view of each iris imaging apparatus are 0.2 m (4,000 × 50 [μm] = 0.2 [m]) and 0.15 m (3,000 × 50 [μm] = 0.15 [m]), respectively.

The plurality of iris imaging apparatuses 401 to 404 are arranged so that they are stacked on one another in the vertical direction. Note that the plurality of iris imaging apparatuses 401 to 404 are arranged so that the image areas of vertically adjacent iris imaging apparatuses partially overlap each other. For example, the iris imaging apparatuses 401 to 404 are arranged so that the image areas of adjacent iris imaging apparatuses overlap each other by 2.5 cm. In this case, the four iris imaging apparatuses can secure fields of view of 0.2 m in the horizontal direction and 0.45 m ((0.15 - 0.025) + (0.15 - 0.025 - 0.025) + (0.15 - 0.025 - 0.025) + (0.15 - 0.025) = 0.45 [m]) in the vertical direction at the focus point 2 m away. That is, the required fields of view of 0.2 m in the horizontal direction and 0.4 m in the vertical direction can be secured. Note that, as understood from the drawings and the above description, the iris imaging apparatuses have the same field of view as one another and are placed at mutually different positions.
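The 0.45 m figure can be reproduced with the accounting used above, in which each 2.5 cm overlap band is excluded from both of the adjacent cameras and only the non-overlapping portions are counted (a conservative estimate; the geometric union of the four fields of view is slightly larger). The following sketch is illustrative only.

```python
# Vertical coverage of four stacked iris cameras, counting only the
# non-overlapping portion of each camera's 0.15 m field of view.
CAM_V_FOV_M = 0.15   # vertical field of view of one camera at the 2 m focus point
OVERLAP_M = 0.025    # overlap between vertically adjacent cameras
NUM_CAMERAS = 4

coverage = 0.0
for i in range(NUM_CAMERAS):
    top_overlap = OVERLAP_M if i > 0 else 0.0
    bottom_overlap = OVERLAP_M if i < NUM_CAMERAS - 1 else 0.0
    coverage += CAM_V_FOV_M - top_overlap - bottom_overlap

print(round(coverage, 3))  # 0.45, which exceeds the required 0.4 m
```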

Since the frame rate of each iris imaging apparatus is 60 fps, the required frame rate of 100 fps cannot be satisfied when the apparatuses are used as they are. Note, however, that industrial cameras and the like have a region-of-interest mode. In the region-of-interest mode, only a partial region defined as a region of interest is read out, instead of the entire area of the screen. By using such a region-of-interest mode, the frame rate can be increased.

The controller 500 sets a region of interest in a given iris imaging apparatus and reads out the image in the region of interest from that iris imaging apparatus. In the example shown in fig. 2, a partial area of 4,000 pixels in the horizontal direction and 1,500 pixels in the vertical direction (which corresponds to half of the entire area of the screen) is defined as the region of interest. In this case, since the number of pixels in each frame is half of the number of pixels in the entire area, the frame rate can be increased to 120 fps, which is twice the frame rate of 60 fps obtained when the entire area of the screen is read out. However, the horizontal and vertical fields of view of the partial area become 0.2 m and 0.075 m, respectively. Note that a person's two eyes are aligned in the horizontal direction. Therefore, for the region of interest, it is preferable to reduce the number of pixels in the vertical direction rather than in the horizontal direction, so that both eyes can be photographed.
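The numbers above follow from the assumption, consistent with the text, that the attainable frame rate scales roughly inversely with the number of rows read out; a minimal sketch:

```python
# Region-of-interest read-out: reading half of the rows roughly doubles the frame rate.
FULL_WIDTH, FULL_HEIGHT, FULL_FPS = 4000, 3000, 60
ROI_HEIGHT = 1500
GRANULARITY_M = 50e-6   # 50 um per pixel at the 2 m focus position

roi_fps = FULL_FPS * (FULL_HEIGHT / ROI_HEIGHT)   # 120 fps
roi_h_fov_m = FULL_WIDTH * GRANULARITY_M          # 0.2 m
roi_v_fov_m = ROI_HEIGHT * GRANULARITY_M          # 0.075 m (7.5 cm)
print(roi_fps, roi_h_fov_m, roi_v_fov_m)
```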

Provided that the eye region does not fall within the above-described range in which the image areas of adjacent iris imaging apparatuses overlap, the iris imaging apparatus that photographs the eye region is only one of the four iris imaging apparatuses 401 to 404. Further, the image that can be read out at a frame rate of 120 fps is limited to a partial region of that iris imaging apparatus. The controller 500 therefore infers which one of the iris imaging apparatuses 401 to 404 can appropriately capture the eye region, and estimates the vertical position of the region of interest to be read out at high speed in that iris imaging apparatus.

The above inference/estimation may be performed by the method described below. The whole imaging apparatus 100 has a resolution with which the subject can be authenticated by his/her face, and the controller 500 derives the position of the subject's eyes in the whole image captured by the whole imaging apparatus 100. The controller 500 then derives the iris imaging apparatus corresponding to the position of the subject's eyes in the whole image, and the position of the eyes on that imaging device, by using the camera parameters and the positional relationship between the whole imaging apparatus 100 and each iris imaging apparatus. By using the region-of-interest mode, a field of view wider than 0.2 m in the horizontal direction and wider than 0.4 m in the vertical direction, and a temporal resolution higher than 100 fps, can be achieved using general-purpose cameras.
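A minimal sketch of this selection step is given below. It assumes that the eye position detected in the whole image has already been projected, via the camera parameters and the known positional relationship, into a single vertical pixel coordinate spanning the stacked iris cameras; that projection, the function name, and the tie-breaking rule are assumptions made for illustration.

```python
CAM_ROWS, ROI_ROWS, NUM_CAMS = 3000, 1500, 4
OVERLAP_ROWS = 500                       # 2.5 cm overlap at 50 um per pixel
CAM_PITCH = CAM_ROWS - OVERLAP_ROWS      # vertical offset between adjacent cameras, in rows

def select_camera_and_roi(eye_row_global: int):
    """Pick the iris camera that sees the eye closest to its sensor centre and
    the vertical ROI start row that (roughly) centres the eye in the ROI."""
    best_cam, best_local, best_dist = None, None, None
    for cam in range(NUM_CAMS):
        local = eye_row_global - cam * CAM_PITCH     # row on this camera's sensor
        if 0 <= local < CAM_ROWS:
            dist = abs(local - CAM_ROWS // 2)
            if best_dist is None or dist < best_dist:
                best_cam, best_local, best_dist = cam, local, dist
    if best_cam is None:
        raise ValueError("eye position outside the stacked cameras' coverage")
    roi_top = min(max(best_local - ROI_ROWS // 2, 0), CAM_ROWS - ROI_ROWS)
    return best_cam, roi_top
```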

Note that when the vertical position of the region of interest is changed in the above-described region-of-interest mode, a delay occurs before shooting starts. Therefore, in the above inference/estimation, an image obtained by photographing the subject at a position farther than 2 m (i.e., at a position farther than the focus point of the iris imaging apparatuses, for example, at a position 3 m away) may be used. A resolution with which a subject 3 m away can be authenticated by his/her face can be achieved with a camera of about 2M pixels, so a camera with a lower specification than that of the iris imaging cameras can be used for the whole imaging apparatus 100.

The controller 500 supplies iris imaging information to each of the iris imaging apparatuses 401 to 404 based on the iris imaging control described above. The controller 500 supplies iris imaging information including the vertical position of the region of interest to the iris imaging apparatus that images the eye region of the subject. The controller 500 may optionally supply iris imaging information to the other iris imaging apparatuses as well. For example, in order to reduce the total amount of iris image data, the controller 500 may supply iris imaging information instructing the other iris imaging apparatuses to stop supplying iris images.

Each of the iris imaging apparatuses 401 to 404 supplies an iris image to the controller 500 based on the iris imaging information supplied from the controller 500. Specifically, each of the iris imaging apparatuses 401 to 404 outputs to the controller 500 the image (iris image) of the region of interest set by the controller 500 by means of the iris imaging information. Each of the iris imaging apparatuses 401 to 404 may lossily compress the iris image in the region of interest and output the compressed iris image to the controller 500. For example, each of the iris imaging apparatuses 401 to 404 compresses the iris image in the region of interest by using quantization (pixel-by-pixel compression), predictive coding and quantization (compression based on a plurality of pixels), transform coding and quantization (compression based on a plurality of pixels), or a combination of these. The controller 500 performs the authentication and registration described in the background section by using the iris images supplied from the iris imaging apparatuses 401 to 404. When there is a next subject, or when authentication or registration has failed, the controller 500 performs the above-described processes again.

Next, an operation procedure will be described. Fig. 3 shows an operation procedure in the image processing system. The controller 500 performs guidance control so as to guide the subject by using the guidance apparatus 200 (step S1001). The controller 500 performs illumination control so that infrared light is applied to a subject by using the illumination apparatus 300 (step S1002). The controller 500 performs the above-described iris image capturing control to acquire an image of an iris (iris image) by using the plurality of iris image capturing apparatuses 401 to 404 (step S1003). The iris image acquired in step S1003 is used for authentication or registration of the iris. In step S1003, the controller 500 need not acquire an iris image of a given subject from each of the iris image capturing apparatuses 401 to 404 as described above. The controller 500 obtains an iris image from an iris image capturing apparatus that captures an eye area of a subject.

The controller 500 performs iris-based authentication or registers an iris image by using the iris image acquired in step S1003 (step S1004). The controller 500 determines whether there is a next subject, or whether re-authentication or re-registration should be performed (step S1005). When it is determined that there is a next subject or re-authentication or re-registration should be performed, the process returns to step S1001, and the process is performed from the guidance control.

Note that when the whole imaging apparatus 100 according to this example embodiment has a resolution with which the subject can be authenticated by his/her face, and a feature value for authenticating the subject by his/her face is held in the database but a feature value for authenticating the subject by his/her iris is not, the apparatus according to the present disclosure may also be used to identify the subject by face-based authentication and to register the extracted feature value of the subject's iris in the database. In addition, the apparatus according to the present disclosure may be used to estimate information on the height of the subject, based on the information on the eye position obtained by the iris imaging control or the information on the eye position obtained when an iris image obtained by an iris imaging apparatus is authenticated or registered, and to register the estimated information in the database. Further, the apparatus according to the present disclosure may use the estimated height information to determine or calibrate the iris imaging apparatus that can appropriately capture the eye region and the vertical position of the region of interest to be read out at high speed in that iris imaging apparatus.

In this example embodiment, the high resolution needed to support the required 0.2 m × 0.4 m field of view and the high frame rate corresponding to a temporal resolution of 0.01 s can be achieved with a combination of general-purpose cameras. As a result, it becomes easy to capture an iris pattern with a quality level sufficient for authentication and verification even under conditions in which the distance between the subject and the image pickup apparatus is long, the field of view to be captured is wide, and the subject is moving.

Next, a second example embodiment of the present disclosure will be described. The configuration of the image processing system according to this exemplary embodiment may be similar to that of the image processing system shown in fig. 1. An object of this exemplary embodiment is to reduce the retry rate of authentication and registration caused by fluctuations in the subject's eye position. In this exemplary embodiment, the controller 500 also functions as an image processing apparatus that performs an image processing method.

In this exemplary embodiment, the controller 500 stores information on an iris imaging apparatus that has performed imaging of an eye region in the past, information on the vertical position of a region of interest read out at high speed in the iris imaging apparatus, and time-series information on the vertical position of an eye detected in the region of interest read out at high speed. When biometric authentication is performed again on the same subject, the controller 500 determines whether the eye region of the subject is successfully covered by the region of interest in the previous biometric authentication. In the case where the eye region of the subject is not successfully covered, the controller 500 corrects the vertical position of the iris image pickup apparatus capable of appropriately performing image pickup of the eye region and the region of interest read out at high speed in the iris image pickup apparatus. In other respects, the configuration may be similar to the first exemplary embodiment.

Here, the fluctuation of the eye position refers to the up and down movement of the eye region captured by the iris imaging apparatus. The fluctuation of the head swing associated with walking is several centimeters, and as shown in the case of the first exemplary embodiment, the vertical field of view that can be read out at 120fps is only 7.5 cm. In particular, in a case where the eye region is located in a boundary region (overlapped image region) of two adjacent ones of the iris image capturing apparatuses 401 to 404 at the moment when the subject passes through the focus point (2m away), the eye region may not be covered with the region of interest determined by the controller 500 due to fluctuations caused by head swing associated with walking.

Next, for the following description, information on an iris imaging apparatus that has performed imaging of an eye region in the past, information on a vertical position of a region of interest that is read out at high speed in the imaging apparatus, and time-series information on a vertical position of an eye detected in the region of interest that is read out at high speed will be defined.

Let cn(k) be the information on the iris imaging apparatus that performed imaging of the eye region in the past. Here, k is an index, and cn(k) represents information from the biometric authentication performed k times before. Let cn(0) be the information from the immediately preceding biometric authentication. Further, in this example embodiment, it is assumed that there are four iris imaging apparatuses, and serial numbers 1 to 4 are assigned to the iris imaging apparatuses 401 to 404, respectively. In this case, cn(k) ranges from 1 to 4.

Let cy(k) be the information on the vertical position of the region of interest read out at high speed in the iris imaging apparatus. Here, k is an index, and cy(k) represents information from the biometric authentication performed k times before. Let cy(0) be the information from the immediately preceding biometric authentication. As in the first example embodiment, it is assumed that the number of vertical pixels in the iris imaging apparatus is 3,000 and the height of the region of interest in the vertical direction is 1,500 pixels (see fig. 2). In this case, cy(k) ranges from 0 to 1,500 (= 3,000 − 1,500, i.e., the number of vertical pixels minus the height of the region of interest).

Let ey(t(k)) be the time-series information on the vertical position of the eye detected in the region of interest read out at high speed. Here, k is an index, and t(k) represents a time during the biometric authentication performed k times before. Letting st(k) be the imaging start time and et(k) the imaging end time for index k, t(k) satisfies st(k) ≤ t(k) < et(k). To simplify the following description, when the time difference between st(k) and et(k) is 2 seconds, the number of frames captured at 120 fps within this period is 240. Further, in this example embodiment, the number of vertical pixels in the region of interest is 1,500, and ey(t(k)) ranges from 0 to 1,500 when the eye is successfully detected in the region of interest. When the eye is not successfully detected in the region of interest, ey(t(k)) takes the value −1. This completes the definition of each piece of information.
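For reference, the three pieces of past information defined above can be held in a simple per-attempt record such as the one sketched below; the field names are not part of the disclosure, and the value ranges follow the definitions just given.

```python
from dataclasses import dataclass
from typing import List

EYE_NOT_DETECTED = -1   # ey(t(k)) value when no eye is found in the region of interest

@dataclass
class PastAttempt:
    cn: int          # iris imaging apparatus used in the past (1..4)
    cy: int          # vertical position of the region of interest (0..1500)
    ey: List[int]    # per-frame vertical eye position in the ROI (0..1500, or -1),
                     # e.g. 240 entries for 2 s of capture at 120 fps

# history[k] holds the attempt performed k times before (history[0] = immediately preceding).
history: List[PastAttempt] = []
```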

When biometric authentication is performed again on the same subject, the controller 500 performs a region-of-interest coverage determination process. In the region-of-interest coverage determination process, the controller 500 determines whether or not the eye area of the subject was successfully covered by the region of interest in the past. This determination is made, for example, as follows.

The controller 500 determines the index k corresponding to the biometric authentication performed on the subject in the past. The index k may simply be assumed to correspond to a retry, treating the immediately preceding attempt as k = 0, or k may be explicitly input to the system. The controller 500 calculates the count co(k) of the number of times ey(t(k)) equals −1 during the period st(k) ≤ t(k) < et(k) corresponding to the index k. For example, the controller 500 counts the number of images in which the eye is not successfully detected among the 240 images captured in two seconds. When the count co(k) exceeds a predetermined number, the controller 500 determines that the subject's eye area was not successfully covered by the region of interest in the past. This concludes the description of the region-of-interest coverage determination process.
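Continuing the record type from the previous sketch, the coverage determination can be written as follows; the threshold value is an assumption, standing in for the "predetermined number" mentioned above.

```python
def eye_region_was_covered(attempt: PastAttempt, max_misses: int = 24) -> bool:
    """Return False when the eye was missing from the region of interest in more
    than a predetermined number of frames (the count co(k)); True otherwise."""
    co = sum(1 for y in attempt.ey if y == EYE_NOT_DETECTED)
    return co <= max_misses
```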

In a case where it is determined that the eye area of the subject was not successfully covered by the region of interest in the past, the controller 500 performs a region-of-interest correction process. In the region-of-interest correction process, when biometric authentication is performed again on the same subject, the controller 500 corrects the iris imaging apparatus capable of appropriately imaging the eye region and the vertical position of the region of interest read out at high speed in that iris imaging apparatus. For example, the correction is performed as follows.

The controller 500 determines the index k corresponding to the biometric authentication performed on the subject in the past. The index k may simply be assumed to correspond to a retry, treating the immediately preceding attempt as k = 0, or k may be explicitly input to the system. The controller 500 determines whether the iris imaging apparatus corresponding to cn(k) and the vertical position of the region of interest corresponding to cy(k) for the index k coincide with the iris imaging apparatus and the vertical position of the region of interest derived by the iris imaging control described in the first example embodiment. When they are determined to coincide, the controller 500 determines that correction is necessary.

When it is determined that correction is necessary, the controller 500 calculates the average value ay(k) of the values of ey(t(k)) other than −1 over the period st(k) ≤ t(k) < et(k) corresponding to the index k. Here, the average value ay(k) represents the vertical position of the eye in the frames in which the eye region was within the region of interest. An average value ay(k) smaller than the vertical center position 750 (= 1,500/2) of the region of interest indicates that the eye region tends to protrude above the region of interest, while an average value larger than 750 indicates that it tends to protrude below the region of interest.

When the average value ay(k) is 750 or less, the controller 500 corrects the vertical positions of the iris imaging apparatus and the region of interest so that a region of interest shifted upward by 750 − ay(k) pixels is read out. Similarly, when the average value ay(k) is larger than 750, the controller 500 corrects the vertical positions of the iris imaging apparatus and the region of interest so that a region of interest shifted downward by ay(k) − 750 pixels is read out.
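Continuing the same sketch, the shift just described can be written as follows. Shifts that would push the region of interest past the sensor edge (which in practice may mean switching to the adjacent iris imaging apparatus) are simply clamped here; that simplification is an assumption.

```python
ROI_CENTER = 750   # vertical centre of the 1,500-row region of interest

def corrected_roi_top(attempt: PastAttempt) -> int:
    """Shift the previous ROI so that the mean detected eye position ay(k) moves
    toward the ROI centre: up when ay(k) <= 750, down when ay(k) > 750."""
    detected = [y for y in attempt.ey if y != EYE_NOT_DETECTED]
    if not detected:
        return attempt.cy            # no eye positions recorded, nothing to correct with
    ay = sum(detected) / len(detected)
    shift = ROI_CENTER - ay          # > 0: eye sits high in the ROI, move the ROI up
    new_top = attempt.cy - round(shift)
    return min(max(new_top, 0), 1500)   # clamp to the 3,000-row sensor
```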

Note that although an example in which the average of the ey(t(k)) values other than −1 is calculated as the average value ay(k) has been described above, the present disclosure is not limited to this. For example, information such as a focus score may be stored for each frame, and an average calculated only over in-focus frames may be treated as the average value ay(k). Furthermore, to reduce the required storage capacity, co(k) and ay(k) may be stored instead of ey(t(k)) itself. This concludes the description of the region-of-interest correction process.

Next, an operation procedure will be described. Fig. 4 shows an operation procedure, including the image processing method, in the image processing system. The controller 500 performs guidance control and guides the subject using the guidance apparatus 200 (step S2001). The controller 500 performs illumination control and irradiates the subject with light using the illumination apparatus 300 (step S2002). Steps S2001 and S2002 are the same as steps S1001 and S1002 of fig. 3. The controller 500 performs iris imaging control and determines the iris imaging apparatus to be used for iris imaging and the vertical position of the region of interest in that iris imaging apparatus (step S2003).

The controller 500 determines whether the biometric authentication is a retry (step S2004). When it is determined that the biometric authentication is not a retry, the controller 500 acquires the image (iris image) of the region of interest of the iris imaging apparatus determined in step S2003, and performs iris authentication or registration of the iris image using the acquired iris image (step S2007). The controller 500 then determines whether there is a next subject or whether re-authentication or re-registration should be performed (step S2008). When it is determined that there is a next subject or that re-authentication or re-registration should be performed, the process returns to step S2001 and is performed again from the guidance control.

When it is determined in step S2004 that the biometric authentication is a retry, the controller 500 performs the region-of-interest coverage determination process described above and determines whether the subject's eye area was successfully covered by the region of interest in the past (step S2005). When it is determined that the subject's eye area was not successfully covered by the region of interest in the past, the controller 500 performs the region-of-interest correction process described above (step S2006). The process then proceeds to step S2007, where the image of the region of interest of the corrected iris imaging apparatus is acquired and iris authentication or registration of the iris image is performed. When it is determined in step S2008 that there is a next subject or that re-authentication or re-registration should be performed, the process returns to step S2001 and is performed again from the guidance control.
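Putting the pieces together, the retry branch of fig. 4 can be summarized by the sketch below, which reuses the helpers from the previous sketches; plan_iris_capture, read_roi, and authenticate_or_register are placeholders for steps S2001 to S2003 and S2007, not functions defined in this disclosure.

```python
def run_biometric_attempt(is_retry: bool, history: List[PastAttempt]):
    cam, roi_top = plan_iris_capture()               # guidance, illumination, iris imaging control
    if is_retry and history:
        prev = history[0]                            # k = 0: immediately preceding attempt
        same_plan = (prev.cn == cam and prev.cy == roi_top)
        if same_plan and not eye_region_was_covered(prev):    # step S2005
            roi_top = corrected_roi_top(prev)                 # step S2006
    iris_image = read_roi(cam, roi_top)              # acquire the ROI image (step S2007)
    return authenticate_or_register(iris_image)      # authentication or registration (step S2007)
```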

In this example embodiment, in the case of retrying biometric authentication for the same subject, the controller 500 determines whether the subject's eye area was successfully covered by the region of interest in the past. When the subject's eye area was not successfully covered by the region of interest in the past, the controller 500 performs the region-of-interest correction process and corrects the region of interest. By correcting the region of interest using the past information, the probability of successfully covering the subject's eye region with the region of interest in the current biometric authentication can be increased. Therefore, in this example embodiment, it is possible to reduce image reacquisition and the retry rate of authentication and registration caused by fluctuations in the eye position.

Note that although an example in which a partial region of 4,000 pixels in the horizontal direction and 1,500 pixels in the vertical direction is defined as a region of interest is shown in fig. 2, the present disclosure is not limited to this example. The shape of the region of interest is not limited to a rectangle, and the number of regions of interest is not limited to one. The controller 500 may derive the positions of the right and left eyes of the subject from the whole image (overhead image) captured by the whole imaging apparatus 100, for example, and set a region of interest corresponding to the position of the right eye and a region of interest corresponding to the position of the left eye in the iris imaging apparatus. In this case, the iris image capturing apparatus supplies iris images of the right and left eyes to the controller 500. The shape of the region of interest may be rectangular or may be elliptical. The controller 500 may derive the positions of the right and left eyes of the subject based on an iris image captured by the iris image capturing apparatus, not based on the overhead view image. For example, the controller 500 may temporarily define the partial area shown in fig. 2 as a region of interest, and derive the positions of the right and left eyes from the image in the region of interest. In this case, the controller 500 may set each of the partial region corresponding to the position of the right eye and the partial region corresponding to the position of the left eye as a region of interest in the iris image capturing apparatus based on the derived positions of the right and left eyes.

In each of the above exemplary embodiments, the controller 500 may be formed as a computer device. Fig. 5 shows an example of a configuration of an information processing apparatus (computer apparatus) that can be used for the controller 500. The information processing apparatus 600 includes a control unit (CPU: central processing unit) 610, a storage unit 620, a ROM (read only memory) 630, a RAM (random access memory) 640, a communication interface (IF: interface) 650, and a user interface 660.

The communication interface 650 is an interface for connecting the information processing apparatus 600 to a communication network by a wired communication apparatus, a wireless communication apparatus, or the like. The user interface 660 comprises, for example, a display unit, such as a display. In addition, the user interface 660 includes input units such as a keyboard, a mouse, and a touch panel.

The storage unit 620 is a secondary storage device that can hold various types of data. The storage unit 620 does not necessarily have to be a part of the information processing apparatus 600, and may be an external storage device or cloud storage connected to the information processing apparatus 600 through a network. The ROM 630 is a nonvolatile storage device. For example, a semiconductor memory device such as a flash memory having a relatively small capacity is used for the ROM 630. Programs executed by the CPU 610 may be stored in the storage unit 620 or the ROM 630.

The above-described program may be stored and provided to the information processing apparatus 600 by using any type of non-transitory computer-readable medium. Non-transitory computer readable media include any type of tangible storage media. Examples of the non-transitory computer readable medium include magnetic storage media such as floppy disks, magnetic tapes, and hard disk drives, magneto-optical storage media such as magneto-optical disks, optical disk media such as CDs (compact discs) and DVDs (digital versatile discs), and semiconductor memories such as mask ROMs, PROMs (programmable ROMs), EPROMs (erasable PROMs), flash ROMs, and RAMs. Further, the program may be provided to the computer using any type of transitory computer-readable medium. Examples of transitory computer readable media include electrical signals, optical signals, and electromagnetic waves. The transitory computer-readable medium may provide the program to the computer via a wired communication line such as an electric wire and an optical fiber or a radio communication line.

The RAM 640 is a volatile storage device. As the RAM 640, various types of semiconductor memory devices such as a DRAM (dynamic random access memory) or an SRAM (static random access memory) can be used. The RAM 640 may be used as an internal buffer for temporarily storing data and the like. The CPU 610 expands (i.e., loads) the program stored in the storage unit 620 or the ROM 630 into the RAM 640, and executes the expanded (i.e., loaded) program. By executing this program, the CPU 610 performs various types of control including, for example, guidance control, illumination control, iris imaging control, region-of-interest coverage determination processing, and region-of-interest correction processing.

Although the example embodiments according to the present disclosure have been described above in detail, the present disclosure is not limited to the above example embodiments, and the present disclosure also includes those obtained by changing or modifying the above example embodiments without departing from the spirit of the present disclosure.

For example, all or part of the exemplary embodiments disclosed above may be described as, but not limited to, the following supplementary notes.

(supplementary notes 1)

An image processing system comprising:

a plurality of iris imaging devices arranged at mutually different positions within the same field of view;

an entire image pickup device for picking up an image in a field of view wider than that of the iris image pickup device;

a guide device for guiding a subject;

an illumination device for illuminating a subject with light; and

a control device for controlling at least one of the following using an image from the whole image pickup device: reading out images from a plurality of iris image pickup devices, presenting at least one of the images and sounds through a guide device, or providing illumination with light from an illumination device, wherein,

the control device controls the image readout from the iris imaging devices,

in controlling the readout of the image, the control device sets a region of interest to be treated as an object of the image readout, and

the control device additionally stores past information, and in the case of performing image processing on the same subject, the control device determines whether or not the subject's eye area is successfully covered by a region of interest used in the past, and in the case of determining that the subject's eye area is not successfully covered, the control device corrects the image readout from the iris imaging device.

(supplementary notes 2)

The image processing system according to supplementary note 1, wherein,

the control device reads images from the plurality of iris imaging devices, an

In image reading from the plurality of iris image capturing apparatuses, the control device specifies an iris image capturing apparatus capable of capturing an image of a subject's eye from among the plurality of iris image capturing apparatuses based on an image obtained by the entire image capturing apparatus, sets a region of interest including a subject's eye position in the specified iris image capturing apparatus, and obtains an image of the region of interest from the specified iris image capturing apparatus.

(supplementary notes 3)

The image processing system according to supplementary note 1 or 2, wherein,

a plurality of iris image pickup apparatuses are stacked in a vertical direction,

upon setting the region of interest, the control means determines an iris image pickup apparatus for image readout from among the plurality of iris image pickup apparatuses, and specifies the region of interest within an image pickup range of the determined iris image pickup apparatus, and

In the case where the determined iris image pickup apparatus used for image readout and the region of interest set in the iris image pickup apparatus are the same as those in the past information, the control device corrects image readout from the iris image pickup device.

(supplementary notes 4)

An image processing apparatus comprising:

a control device for controlling image reading from an iris imaging device for imaging an iris of a subject, wherein,

in controlling the image readout, the control means sets a region of interest to be treated as an object of the image readout, and

the control device additionally stores past information, and in the case of performing image processing on the same subject, the control device determines whether or not the subject's eye area is successfully covered by a region of interest used in the past, and in the case of determining that the subject's eye area is not successfully covered, the control device corrects the image readout from the iris imaging device.

(supplementary notes 5)

The image processing apparatus according to supplementary note 4, wherein

The iris imaging apparatus includes a plurality of iris imaging apparatuses stacked in a vertical direction,

upon setting the region of interest, the control means determines an iris image pickup apparatus for image readout from among the plurality of iris image pickup apparatuses, and specifies the region of interest within an image pickup range of the determined iris image pickup apparatus, and

In the case where the determined iris image pickup apparatus used for image readout and the region of interest set in the iris image pickup apparatus are the same as those in the past information, the control device corrects image readout from the iris image pickup device.

(supplementary notes 6)

The image processing apparatus according to supplementary note 4 or 5, wherein

The past information includes information indicating a position of an eye region in a region of interest taken in the past, and

the control device corrects image readout from the iris imaging device based on information indicating the position of the eye region.

(supplementary notes 7)

The image processing apparatus according to any one of supplementary notes 4 to 6, wherein,

the past information includes information indicating whether or not an eye area is included in a region of interest captured in the past, and

the control means determines whether or not the subject's eye area is successfully covered by the attention area in the past based on the information indicating whether or not the eye area is included.

(supplementary notes 8)

The image processing apparatus according to any one of supplementary notes 4 to 7, wherein the control means controls image readout from the iris image pickup means by using an image picked up by an entire image pickup means for image pickup over a range wider than an image pickup range of the iris image pickup means.

(supplementary notes 9)

An image processing method comprising:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

(supplementary notes 10)

An image processing method comprising:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

in the case of performing image processing on the same subject, determining whether or not the eye area of the subject is successfully covered by a region of interest to be treated as an object of image readout used in the past; and is

In the case where it is determined that the eye area of the subject is not successfully covered, the image readout from the iris imaging apparatus is corrected.

(supplementary notes 11)

A non-transitory computer-readable medium storing a program that causes a computer to execute a process, the process comprising:

performing, by using an image from a whole imaging device, at least one of the following: reading out images from a plurality of iris imaging devices, presenting at least one of an image and a sound through a guide device for guiding a subject, or providing illumination with light from an illumination device for illuminating the subject with light, wherein the whole imaging device is used for imaging over a field of view wider than the fields of view of the plurality of iris imaging devices arranged at mutually different positions within the same field of view.

(supplementary notes 12)

A non-transitory computer-readable medium storing a program that causes a computer to execute a process, the process comprising:

controlling image readout from an iris imaging apparatus for imaging an iris of a subject;

in the case of performing image processing on the same subject, determining whether or not the eye area of the subject is successfully covered by a region of interest to be treated as an object of image readout used in the past; and is

In the case where it is determined that the eye area of the subject is not successfully covered, the image readout from the iris imaging apparatus is corrected.

This application is based upon and claims the benefit of priority from Japanese patent application No. 2019-026938, filed on February 18, 2019, the entire contents of which are incorporated herein by reference.

List of reference numerals

100 whole imaging apparatus

200 guidance apparatus

300 illumination apparatus

401-404 iris imaging apparatus

500 controller

600 information processing apparatus
