Imaging method, imaging apparatus, method of distinguishing imaging objects, and computer program


Reading note: this technique, "Imaging method, imaging apparatus, method of distinguishing imaging objects, and computer program", was devised by 川岛安武 and filed on 2020-11-06. Abstract: An imaging apparatus includes an imaging sensor (111) as an imager and a pulse LED (113) as an illuminator for illuminating an imaging object of the imaging sensor (111). It performs respective imaging according to respective imaging conditions, each imaging condition including an exposure time te of the imaging sensor (111) and an illumination time tr of the pulse LED (113); stores respective combinations of the luminance index value D1 of each image obtained by the imaging and the imaging condition of that image; obtains, based on the stored combinations of luminance index values D1 and imaging conditions, an estimate of an exposure contribution degree k_off indicating the degree of influence of a change in the exposure time te on the luminance index value D1 and an estimate of an illumination contribution degree k_on indicating the degree of influence of a change in the illumination time tr on the luminance index value D1 (S36); and, based on the estimates of k_on and k_off, determines the imaging condition to be used in the next imaging (S37).

1. An imaging method for imaging by an imaging apparatus comprising an imager and an illuminator configured to illuminate an imaging subject of the imager, the method comprising:

performing respective imaging according to respective imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator;

storing in a memory respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images;

obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and

determining, based on the estimate of the first parameter and the estimate of the second parameter, an imaging condition to be used in next imaging.

2. The imaging method according to claim 1, further comprising:

determining a relationship between the exposure time and the illumination time based on the estimate of the first parameter and the estimate of the second parameter such that the longer the illumination time is, the shorter the exposure time becomes, and the shorter the illumination time is, the longer the exposure time becomes,

wherein the imaging condition to be used in the next imaging is determined so as to satisfy the determined relationship.

3. The imaging method according to claim 2,

wherein the imaging condition to be used in the next imaging is determined so that the relationship between the exposure time and the illumination time is satisfied and the ratio of the illumination time to the exposure time differs from the ratio in the most recent imaging condition by a predetermined threshold or more.

4. The imaging method according to claim 2 or 3,

wherein the imaging condition to be used in the next imaging is determined such that the ratio of the illumination time to the exposure time is as close as possible to the ratio of the estimate of the second parameter to the estimate of the first parameter.

5. An imaging method for imaging by an imaging apparatus comprising an imager and an illuminator configured to illuminate an imaging subject of the imager, the method comprising:

performing respective imaging according to respective imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination amount of the illuminator to the imaging subject;

storing in a memory respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images;

obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination amount on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and

based on the estimate of the first parameter and the estimate of the second parameter, an imaging condition to be used in a next imaging is determined.

6. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator according to respective imaging conditions to perform respective imaging, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images;

an imaging condition determiner configured to obtain, based on the combinations of the luminance and the imaging conditions stored in the memory, an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image luminance and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image luminance, and determine an imaging condition to be used in next imaging based on the estimates of the first parameter and the second parameter.

7. The imaging apparatus according to claim 6,

wherein the imaging condition determiner is configured to determine a relationship between the exposure time and the illumination time based on the estimate of the first parameter and the estimate of the second parameter such that the longer the illumination time is, the shorter the exposure time becomes, and the shorter the illumination time is, the longer the exposure time becomes, and to determine the imaging condition to be used in the next imaging so as to satisfy the determined relationship.

8. The imaging apparatus according to claim 7,

wherein the imaging condition determiner is configured to determine the imaging condition to be used in the next imaging so that the relationship between the exposure time and the illumination time is satisfied and the ratio of the illumination time to the exposure time differs from the ratio in the most recent imaging condition by a predetermined threshold or more.

9. The imaging apparatus according to claim 7 or 8,

wherein the imaging condition determiner is configured to determine the imaging condition to be used in the next imaging such that the ratio of the illumination time to the exposure time is as close as possible to the ratio of the estimate of the second parameter to the estimate of the first parameter.

10. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator according to respective imaging conditions to perform respective imaging, each of the imaging conditions including an exposure time of the imager and an illumination amount of the illuminator to the imaging subject;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images;

an imaging condition determiner configured to obtain, based on the combinations of the luminance and the imaging conditions stored in the memory, an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image luminance and an estimate of a second parameter indicating a degree of influence of a change in the illumination amount on image luminance, and determine an imaging condition to be used in next imaging based on the estimates of the first parameter and the second parameter.

11. A computer program that causes a computer to control an imaging apparatus including an imager and an illuminator configured to illuminate an imaging subject of the imager, to perform the imaging method according to any one of claims 1 to 5.

12. A differentiation method of differentiating an imaging object by an imaging apparatus comprising an imager and an illuminator configured to illuminate the imaging object of the imager, the method comprising:

performing respective imaging according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having exposure times different from each other;

storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; and

obtaining an estimate of a degree of influence of a change in the exposure time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determining that the imaging object of the imager is a luminous body when the estimate is greater than a predetermined threshold.

13. A differentiation method of differentiating an imaging object by an imaging apparatus comprising an imager and an illuminator configured to illuminate the imaging object of the imager, the method comprising:

performing respective imaging according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having illumination times different from each other;

storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; and

obtaining an estimate of a degree of influence of a change in the illumination time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determining that the imaging object of the imager is a luminous body when the estimate is less than a predetermined threshold.

14. A differentiation method of differentiating an imaging object by an imaging apparatus comprising an imager and an illuminator configured to illuminate the imaging object of the imager, the method comprising:

performing respective imaging according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time and the illumination time of the respective imaging conditions being different from each other;

storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory;

obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and

determining that the imaging object of the imager is a luminous body when the estimate of the first parameter is greater than a first threshold and the estimate of the second parameter is less than a second threshold.

15. A differentiation method of differentiating an imaging object by an imaging apparatus comprising an imager and an illuminator configured to illuminate the imaging object of the imager, the method comprising:

performing respective imaging according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time and the illumination time of the respective imaging conditions being different from each other;

storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory;

obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and

determining that the imaging object of the imager is a luminous body when a ratio of the estimate of the first parameter to the estimate of the second parameter is greater than a predetermined threshold.

16. An imaging method, comprising:

determining an imaging condition to be used in next imaging using an upper limit of the illumination time that is smaller in a case where an imaging object is determined to be a luminous body by the discrimination method according to any one of claims 12 to 15 than in a case where the imaging object is not so determined, and using a lower limit of the illumination time that is larger in a case where the imaging object is not determined to be a luminous body by the discrimination method than in a case where it is so determined; and

performing imaging by the imaging device according to the determined imaging condition.

17. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator to perform respective imaging corresponding to respective imaging conditions according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having exposure times different from each other;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images; and

an imaging subject discriminator configured to obtain an estimate of a degree of influence of a change in the exposure time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determine that the imaging subject of the imager is a luminous body when the estimate is greater than a predetermined threshold.

18. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator to perform respective imaging corresponding to respective imaging conditions according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having illumination times different from each other;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images; and

an imaging subject discriminator configured to obtain an estimate of a degree of influence of a change in the illumination time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determine that the imaging subject of the imager is a luminous body when the estimate is less than a predetermined threshold.

19. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator to perform respective imaging corresponding to respective imaging conditions according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time and the illumination time of the respective imaging conditions being different from each other;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images; and

an imaging subject discriminator configured to obtain an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determine that the imaging subject of the imager is a luminous body when the estimate of the first parameter is greater than a first threshold and the estimate of the second parameter is less than a second threshold.

20. An imaging apparatus comprising:

an imager;

an illuminator configured to illuminate an imaging subject of the imager;

a controller configured to control the imager and the illuminator to perform respective imaging corresponding to respective imaging conditions according to a plurality of imaging conditions, each of the imaging conditions including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time and the illumination time of the respective imaging conditions being different from each other;

a memory configured to store respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images; and

an imaging subject discriminator configured to obtain an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness based on the combinations of brightness and imaging conditions stored in the memory, and determine that the imaging subject of the imager is a luminous body when a ratio of the estimate of the first parameter to the estimate of the second parameter is greater than a predetermined threshold.

21. The imaging apparatus according to any one of claims 17 to 20, comprising:

an imaging condition determiner configured to determine an imaging condition to be used by the controller in next imaging, using an upper limit of the illumination time that is smaller in a case where the imaging subject discriminator determines that the imaging subject of the imager is a luminous body than in a case where it does not, and using a lower limit of the illumination time that is larger in a case where the imaging subject discriminator does not determine that the imaging subject of the imager is a luminous body than in a case where it does.

22. A computer program causing a computer to control an imaging apparatus comprising an imager and an illuminator configured to illuminate an imaging subject of the imager, to perform the discrimination method of any one of claims 12 to 15 or the imaging method of claim 16.

Technical Field

The present invention relates to an imaging apparatus including an imager and an illuminator for illuminating an imaging subject of the imager, an imaging method by such an imaging apparatus, and a computer program for causing a computer to control the imaging apparatus to execute the imaging method. The invention further relates to a method for distinguishing imaged objects and an imaging method for an imaging apparatus comprising an imager and an illuminator for illuminating imaged objects of the imager.

Background

In an optical information reading apparatus for optically reading a code symbol such as a barcode or a two-dimensional code, a configuration has conventionally been employed in which the apparatus images a reading object, cuts out a code symbol from the image obtained by the imaging, and decodes it. In this case, decoding cannot be performed if the image is too dark, and it has therefore also been common to provide an illuminator in the optical information reading apparatus and perform imaging while illuminating the reading object.

In this case, the appropriate illumination intensity and illumination time vary depending on the surrounding environment and the type of the reading object, and the appropriate exposure time varies in the same way. Therefore, as shown in PTLs 1 to 7, various techniques for automatically adjusting the intensity and duration of illumination to appropriate values have been developed, including by the present applicant.

Among these patent documents, PTL 1, for example, discloses in paragraph 0028, Fig. 5A, and elsewhere that an appropriate exposure time is set based on the image intensity of a captured image when continuous low-intensity illumination and at least one pulse of high-intensity illumination are performed.

PTL 2 discloses, in paragraph 0038 and elsewhere, that the illumination time is adjusted according to the brightness of an image and the distance to the reading object.

Incidentally, adjustment of illumination and exposure is also performed when imaging is carried out for purposes other than reading code symbols, and PTLs 1 to 7 include documents disclosing techniques for apparatuses other than optical information reading apparatuses.

Reference list

Patent document

PTL 1: Japanese Patent No. 6,053,224

PTL 2: Japanese Patent Application Laid-Open No. 2015-76106

PTL 3: Japanese Patent Application Laid-Open No. 2001-43306

PTL 4: Japanese Patent Application Laid-Open No. 2001-2457111

PTL 5: Japanese Patent Application Laid-Open No. 2011-

PTL 6: Japanese Patent Application Laid-Open No. 2013-

PTL 7: Japanese Patent Application Laid-Open No. 2019-129509

Disclosure of Invention

Technical problem

Incidentally, taking code symbols as an example, in recent years code symbols are not only fixedly printed on a record carrier such as paper but are also often dynamically displayed on a display device. Thus, in the operating environment of a reading apparatus, it is often necessary to assume that both record carriers and displays are objects to be read, i.e., objects to be imaged.

However, the characteristics of record carriers and display devices differ considerably: a record carrier usually does not emit light by itself, whereas most display devices do. The conditions suitable for imaging them therefore also differ considerably.

Therefore, it is sometimes difficult to obtain an image suitable for decoding a code symbol merely by adjusting the imaging conditions based on the brightness of the captured image. For example, when reading a code symbol displayed on the display of a smartphone, since the display itself emits light, increasing the illumination time to compensate for insufficient brightness cannot be expected to contribute much to improving the brightness of the image; extending the exposure time, on the other hand, is considered effective. Conversely, when reading a code symbol printed on paper in a dark environment, extending the exposure time cannot be expected to improve the brightness of the image, and increasing the illumination time (within the range of the exposure time) is considered effective.

Increasing both the illumination amount or illumination time and the exposure time would be effective in both the smartphone case and the paper case. However, when imaging a smartphone, unnecessary illumination would be provided, which may increase power consumption and cause glare for the user. Furthermore, the illumination may interfere with the imaging due to specular reflection from the glass on the display surface of the smartphone.

It is also conceivable to distinguish the imaging subject in some way and then perform control suited to its characteristics. However, even paper and smartphones are difficult to distinguish from each other with inexpensive hardware and software. For example, when specular reflection occurs on the surface of an imaging object, it is conceivable to determine that the reading object is a smartphone; but when specular reflection does not occur, for example because a protective sheet is attached, appropriate discrimination cannot be made by this criterion. It is also conceivable that the operator indicates the type of imaging subject to the reading device by a switch or the like, but this is not practical when various subjects must be read in quick succession.

An object of the present invention is to solve these problems and to enable appropriate imaging conditions to be set quickly for a variety of assumed imaging subjects when imaging is performed by an imaging apparatus while illuminating the imaging subject. From another aspect, an object of the present invention is to make it easy to determine, in a case where imaging is performed by an imaging device while the imaging object is illuminated, that the imaging object is a luminous body. In either case, the purpose of imaging is of course not limited to reading information such as code symbols, and the present invention can be applied to imaging for any purpose. The assumed imaging object is likewise arbitrary, and is not limited to a record carrier such as paper or a display provided in a smartphone or the like.

Solution to the problem

In order to achieve the above object, an imaging method of the present invention is an imaging method of imaging by an imaging apparatus including an imager and an illuminator configured to illuminate an imaging object of the imager, the method including: performing respective imaging according to respective imaging conditions, each imaging condition including an exposure time of the imager and an illumination time of the illuminator; storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; obtaining an estimate of a first parameter indicating a degree of influence of a change in exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in illumination time on image brightness, based on a combination of brightness and imaging conditions stored in a memory; and determining an imaging condition to be used in the next imaging based on the estimate of the first parameter and the estimate of the second parameter.

Further, it is conceivable that the method further includes determining a relationship between the exposure time and the illumination time based on the estimation of the first parameter and the estimation of the second parameter such that the longer the illumination time, the shorter the exposure time becomes, and the shorter the illumination time, the longer the exposure time becomes, and determining the imaging condition to be used in the next imaging so as to satisfy the determined relationship.

Further, it is conceivable that the imaging condition to be used in the next imaging is determined so that the relationship between the exposure time and the illumination time is satisfied, and the ratio of the illumination time to the exposure time differs from the ratio in the latest imaging condition by a predetermined threshold or more.

Further, it is conceivable to determine the imaging conditions to be used in the next imaging so that the ratio of the illumination time to the exposure time and the ratio of the estimate of the second parameter to the estimate of the first parameter are as close as possible.

In the imaging method of the present invention, the illumination amount of the imaging object by the illuminator may be used instead of the illumination time.

The present invention also provides distinguishing methods, and an imaging method using such a distinguishing method, whose aim is to make it easy to determine, in a case where imaging is performed by an imaging device while an imaging object is illuminated, that the imaging object is a luminous body. In these inventions too, the purpose of imaging is not limited to reading information such as code symbols, and the present invention can be applied to imaging for any purpose. The assumed imaging object is likewise arbitrary, and is not limited to a record carrier such as paper or a display provided in a smartphone or the like.

The distinguishing method of the present invention is a distinguishing method of distinguishing an imaging object by an imaging apparatus including an imager and an illuminator configured to illuminate the imaging object of the imager, the method including: performing respective imaging according to a plurality of imaging conditions, each imaging condition including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having exposure times different from each other; storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; and obtaining an estimate of the degree of influence of the change in the exposure time on the image brightness based on a combination of the brightness and the imaging conditions stored in the memory, and determining that the imaging subject of the imager is a luminous body when the estimate is greater than a predetermined threshold.

Further, another distinguishing method of the present invention is a distinguishing method of distinguishing an imaging object by an imaging apparatus including an imager and an illuminator configured to illuminate the imaging object of the imager, the method including: performing respective imaging according to a plurality of imaging conditions, each imaging condition including an exposure time of the imager and an illumination time of the illuminator, the respective imaging conditions having illumination times different from each other; storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; and obtaining an estimate of the degree of influence of the change in illumination time on the image brightness based on a combination of the brightness and the imaging condition stored in the memory, and determining that the imaging subject of the imager is a luminous body when the estimate is less than a predetermined threshold.

Further, another distinguishing method of the present invention is a distinguishing method of distinguishing an imaging object by an imaging apparatus including an imager and an illuminator configured to illuminate the imaging object of the imager, the method including: performing respective imaging according to a plurality of imaging conditions, each imaging condition including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time to the illumination time of the respective imaging conditions being different from each other; storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and determining that the imaging object of the imager is a luminous body when the estimate of the first parameter is greater than a first threshold and the estimate of the second parameter is less than a second threshold.

Further, another distinguishing method of the present invention is a distinguishing method of distinguishing an imaging object by an imaging apparatus including an imager and an illuminator configured to illuminate the imaging object of the imager, the method including: performing respective imaging according to a plurality of imaging conditions, each imaging condition including an exposure time of the imager and an illumination time of the illuminator, ratios of the exposure time to the illumination time of the respective imaging conditions being different from each other; storing respective combinations of the luminance of the respective images obtained by the respective imaging and the imaging conditions of the respective images in a memory; obtaining an estimate of a first parameter indicating a degree of influence of a change in the exposure time on image brightness and an estimate of a second parameter indicating a degree of influence of a change in the illumination time on image brightness, based on the combinations of brightness and imaging conditions stored in the memory; and determining that the imaging object of the imager is a luminous body when a ratio of the estimate of the first parameter to the estimate of the second parameter is greater than a predetermined threshold.
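As a concrete illustration, the first of these distinguishing criteria might be sketched as follows under the linear model of Equation 2 given later (a minimal Python sketch; the function names and the threshold value are assumptions introduced here, not part of the invention):

```python
def exposure_contribution(d1, te1, d2, te2):
    """Estimate k_off from two imagings that share the same illumination
    time but use different exposure times: under the linear brightness
    model, the change in the luminance index then comes from the
    exposure term alone."""
    return (d2 - d1) / (te2 - te1)

def is_luminous_body(d1, te1, d2, te2, threshold=20.0):
    # A luminous body keeps feeding its own light to the sensor during
    # the whole exposure, so lengthening the exposure raises the image
    # brightness sharply and the estimated k_off comes out large.
    return exposure_contribution(d1, te1, d2, te2) > threshold
```

The other variants follow the same pattern: estimate the illumination contribution from imagings differing only in illumination time, or estimate both parameters as in Equations 3 to 5 below and compare them, or their ratio, against thresholds.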

The imaging method of the present invention is a method including: determining an imaging condition to be used in next imaging using an upper limit of the illumination time that is smaller in a case where the imaging object is determined to be a luminous body by any of the above distinguishing methods than in a case where it is not so determined, and using a lower limit of the illumination time that is larger in a case where the imaging object is not determined to be a luminous body than in a case where it is; and performing imaging by the imaging device according to the determined imaging condition.
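The switching of illumination-time limits described here might look as follows (the millisecond values are purely illustrative assumptions, not values taken from this specification):

```python
def illumination_time_limits(judged_luminous):
    """Return (tr_lower, tr_upper) used when determining the next
    imaging condition; the values are hypothetical examples."""
    if judged_luminous:
        return 0.0, 0.5  # small upper limit: the object needs little illumination
    return 1.0, 8.0      # large lower limit: guarantee illumination on the object
```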

The present invention can be implemented in any manner other than the above, such as an apparatus, a method, a system, a computer program, a storage medium in which a computer program is recorded, and the like.

Advantageous effects of the invention

According to the configuration of the present invention described above, when imaging is performed by an imaging apparatus while illuminating the imaging subject, appropriate imaging conditions can be set quickly while assuming a variety of imaging subjects. Further, according to the configuration of another aspect of the present invention, when the imaging object is a luminous body, that fact can be easily discriminated in the case where imaging is performed by an imaging device while the imaging object is illuminated.

Drawings

Fig. 1 is a block diagram showing a hardware configuration of a reading apparatus 100 as an embodiment of an imaging apparatus of the present invention.

Fig. 2 is a functional block diagram showing a functional configuration of the reading apparatus 100 shown in Fig. 1.

Fig. 3 is a diagram for explaining an outline of a process performed by the reading apparatus 100 for obtaining the illumination time tr and the exposure time te.

Fig. 4 is another diagram for explaining an outline of a process of obtaining the illumination time tr and the exposure time te performed by the reading apparatus 100.

Fig. 5 is a flowchart of a procedure corresponding to the function of the reading control section 151 executed by the CPU 121 of the reading apparatus 100.

Fig. 6 is a flowchart of a procedure corresponding to the functions of the imaging section 153 and the imaging condition determination section 154, which is executed by the same apparatus.

Fig. 7 is a flowchart of the imaging condition determining process shown in Fig. 6.

Fig. 8 is a flowchart of a procedure corresponding to the function of the decoding section 156, which is executed by the CPU 121 of the reading apparatus 100.

Fig. 9 is a functional block diagram, corresponding to Fig. 2, showing a functional configuration of the reading apparatus 100 of the second embodiment.

Fig. 10 is a flowchart of a process corresponding to the function of the imaging target distinguishing section 158, which is executed by the CPU 121 of the reading apparatus 100 of the second embodiment.

Detailed Description

Embodiments of the present invention will be described with reference to the accompanying drawings.

First embodiment: Figs. 1 to 8

First, an embodiment of an imaging apparatus of the present invention will be described with reference to Fig. 1, which is a block diagram showing the hardware configuration of a reading apparatus as an embodiment of the imaging apparatus.

The reading apparatus 100 shown in Fig. 1 is an optical information reading apparatus for optically reading a code symbol 102 (a symbol expressed by a light reflectance different from that of its surroundings) on a reading object 101, and is also an imaging apparatus for capturing an image of the reading object 101 at the time of reading.

The reading object 101 may be a record carrier statically carrying the code symbol 102, or a display dynamically displaying the code symbol. The material of the record carrier may be any of paper, metal, resin, and the like, and the method of carrying the code symbol 102 on the record carrier may be any of printing, surface modification, embossing, and the like. The display may be a luminous body that displays information by emitting light itself using a backlight or the like, or a device such as a reflective liquid crystal display that displays information by reflecting external light. Of course, it is not necessary to specify in advance which of the above the reading object 101 is.

The code symbol 102 may be a one-dimensional barcode or a two-dimensional code of any standard. The standard of the code symbol 102 need not be specified in advance and can be distinguished in the decoding process described later.

As shown in Fig. 1, the reading apparatus 100 includes an optical section 110, a control section 120, an operation section 131, and a notification section 132.

Among these, the optical section 110 includes an imaging sensor 111, a lens 112, and a pulse LED (light emitting diode) 113, and is an imaging device for optically capturing an image of the reading object 101 including the code symbol 102.

The imaging sensor 111 is an imager for capturing an image of an imaging object such as the reading object 101, and may be constituted by, for example, a CMOS (complementary metal oxide semiconductor) image sensor. The imaging sensor 111 generates image data indicating the gradation value of each pixel based on the electric charge accumulated in each pixel by imaging, and outputs the image data to the control section 120. In the imaging sensor 111, the pixels are arranged two-dimensionally.

The lens 112 is an optical system for imaging the reflected light from the imaging subject on the imaging sensor 111.

The pulse LED 113 is an illuminator for irradiating an imaging object of the imaging sensor 111 with illumination light.

Next, the control section 120 includes a CPU 121, a ROM 122, a RAM 123, and a communication I/F 124 for communicating with external devices. The ROM 122 stores data such as programs to be executed by the CPU 121 and various tables, and the RAM 123 serves as a work area when the CPU 121 executes various processes.

The CPU 121 executes a program stored in the ROM 122 using the RAM 123 as a work area to control the operation of the entire reader 100 including the optical section 110, the operation section 131, and the notification section 132, thereby realizing various functions including those described later with reference to fig. 2. Further, the CPU 121 also performs processing such as detection and decoding of the code symbol 102 included in the image data of the image captured by the imaging sensor 111, and outputting the decoding result to the outside or accumulating it.

The communication I/F 124 is an interface for communicating with various external devices, such as a data processing device that uses the decoding result of the code symbol 102.

The operation section 131 is an operation device, such as buttons and a trigger, for accepting operations by a user. The notification section 132 is a notifier for issuing various notifications to the user. Specific notification methods include, but are not limited to, displaying a message or data on a display, lighting or flashing lamps, and outputting a sound through a speaker.

When the reading apparatus 100 automatically operates according to control from an external apparatus or autonomous control, the operation section 131 and the notification section 132 may be omitted.

The reading apparatus 100 described above may be configured as, for example, a handheld or stationary code symbol reading device, but is not limited thereto.

One of the distinguishing features of the reading apparatus 100 described above is the method for determining the imaging condition to be used in the next imaging based on combinations of the image brightness obtained in past imaging by the imaging sensor 111 and the imaging conditions used in each imaging. This point will be explained next.

First, functions of the reading apparatus 100 relating to reading of a code symbol, including the above-described function of determining imaging conditions, will be described. Fig. 2 is a functional block diagram showing this functional configuration.

As shown in Fig. 2, the reading apparatus 100 includes the functions of a reading control section 151, a trigger detection section 152, an imaging section 153, an imaging condition determination section 154, an imaging history storage section 155, a decoding section 156, and an output section 157. In the example described here, the functions of the respective sections are realized by the CPU 121 executing software to control the respective portions of the reading apparatus 100 including the optical section 110, but some or all of the functions may be realized by dedicated control circuits.

The reading control section 151 shown in Fig. 2 has a function of controlling the overall operation relating to reading of the code symbol 102, from imaging of the reading object 101 to decoding of the code symbol 102. This function includes starting imaging in response to detection of a reading start trigger, stopping imaging and outputting the decoding result in response to detection of a decoding completion trigger, and the like.

The trigger detecting section 152 has a function of monitoring the occurrence of the reading start trigger and notifying the reading control section 151 of the occurrence of the reading start trigger when the reading start trigger is detected. What is used as a read start trigger may be determined arbitrarily by the manufacturer or user of the reading apparatus 100. For example, the reading start trigger may be an operation of a trigger switch provided in the operation section 131, entry of some subject into an imaging range of the imaging sensor 111, or the like. In the latter case, a sensor for detecting an object, such as an infrared sensor, may be provided in the reading apparatus 100. Further, the user can arbitrarily switch what is employed as a trigger.

The imaging section 153 has a function of controlling the optical section 110 in response to a start instruction from the reading control section 151 so as to perform imaging and acquire the image data obtained by the imaging. The imaging is performed according to the conditions determined by the imaging condition determination section 154. Further, the imaging section 153 has a function of transferring the acquired image data to the decoding section 156 when the image obtained by imaging is to be decoded, and a function of storing, in the imaging history storage section 155 as an imaging history, the combination of the imaging condition used in the imaging and the brightness of the image obtained by it.

The imaging condition determination section 154 has a function of determining an imaging condition suitable for imaging the current reading subject 101 based on a combination of the imaging condition and the luminance stored in the imaging history storage section 155, and a function of supplying the determined imaging condition to the imaging section 153. In the present embodiment, the imaging condition determination section 154 determines the exposure time of the imaging sensor 111 and the illumination time of the pulse LED 113 as imaging conditions, but does not prevent other conditions from being determined together. Details of an algorithm for this determination will be described later.

The imaging history storage section 155 has a function of storing a combination of the imaging conditions used by the imaging section 153 in imaging and the brightness of an image obtained by imaging. The hardware for storage may be hardware provided in the reading apparatus 100, such as the RAM 123, or hardware external to the reading apparatus 100.

The decoding section 156 has a function of performing a decoding process on the code symbol 102 included in the image data transferred from the imaging section 153 and, when decoding succeeds, notifying the reading control section 151 of that fact and passing it the decoded data.

The output section 157 has a function of outputting the decoding result of the code symbol 102 to an external device such as a data processing device via the communication I/F 124, and of notifying the user of a successful reading via the notification section 132. The method of notifying the user may be any method, such as a buzzer or vibration, and the notification may be omitted when it is unnecessary.
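To summarize how these sections cooperate, the following Python sketch outlines one reading cycle (a structural illustration only; all object names and methods are stand-ins invented here for the sections of Fig. 2, the illumination pulse is assumed to fall within the exposure window, and triggers, gain control, and error handling are omitted):

```python
def reading_loop(imager, illuminator, decoder, determine_condition):
    history = []                  # imaging history storage section 155
    te, tr = 1.0, 0.5             # initial exposure / illumination times (ms)
    while True:
        illuminator.pulse(tr)                  # pulse LED 113
        image = imager.capture(exposure=te)    # imaging sensor 111
        d = luminance_index(image)             # brightness metric, cf. the sketch below
        history.append((tr, te, d))            # store condition/brightness pair
        result = decoder.decode(image)         # decoding section 156
        if result is not None:
            return result                      # success: output section takes over
        te, tr = determine_condition(history)  # imaging condition determination section 154
```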

Next, the basic concept of the imaging condition determination performed by the imaging condition determination section 154 in the reading apparatus 100 described above will be explained.

First, the imaging condition determination section 154 determines the imaging condition so that an image having brightness suitable for decoding can be obtained by the imaging of the imaging section 153. For example, the brightness may be obtained as the pixel value near a top percentile of a sample of about several hundred pixels in the image, and appropriate offset processing may be applied. In any case, in this specification the value of a parameter indicating the brightness of an image is called the luminance index value D1, and the imaging conditions are determined so that an image whose luminance index value D1 equals a predetermined target value D1_t can be obtained.
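By way of illustration only, such a brightness metric might be computed as in the following Python sketch (the use of numpy, the sample count, and the percentile are assumptions introduced here, not values fixed by this specification):

```python
import numpy as np

def luminance_index(image: np.ndarray, n_samples: int = 400,
                    top_percent: float = 5.0) -> float:
    """Brightness index D1 from a sparse pixel sample.

    Takes roughly n_samples pixels spread uniformly over the image and
    returns the pixel value near the top `top_percent` of that sample,
    so that the index follows the bright part of the frame rather than
    its mean. An offset correction could be applied to the result.
    """
    flat = image.ravel()
    step = max(1, flat.size // n_samples)  # uniform sparse sampling
    sample = flat[::step]
    return float(np.percentile(sample, 100.0 - top_percent))
```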

Here, when the surrounding environment, the reading object, and its position are constant, that is, when a specific reading object is imaged over a plurality of frames in a short time, the parameters assumed to affect the luminance index value D1 are the illumination time tr of the pulse LED 113, the exposure time te of the imaging sensor 111, and the gain g of the imaging sensor 111. The relationship between the luminance index value D1 and these parameters can be approximated by Equation 1 below.

Equation 1

D1 = g · (k_on · tr + k_off · te) (Equation 1)

Here, k_off is an exposure contribution degree (first parameter) indicating the degree of influence of a change in the exposure time te on the image luminance, and k_on is an illumination contribution degree (second parameter) indicating the degree of influence of a change in the illumination time tr on the image luminance. Equation 1 follows from the following idea: the amount of light detected by the imaging sensor 111 is roughly proportional to the exposure time te; when the pulse LED 113 is turned on, the light amount increases by an amount proportional to the illumination time tr; and the luminance of the image (luminance index value D1) is determined by how strongly that amount of light is reflected in the pixel values through the gain g.

Here, the illumination amount of the pulse LED 113 is assumed to be constant during illumination, and the contributions of the exposure time te and the illumination time tr are assumed to be linear. Even under these assumptions, the imaging condition can be determined with sufficient accuracy. An example in which variation in the illumination light amount is considered will be described later as a modification.

In Equation 1, the values of k_on and k_off depend on the surrounding environment and on the reading object and its position; when these conditions are constant, k_on and k_off can be regarded as constants. Therefore, by appropriately determining the values of tr, te, and g, the luminance index value D1 of the image obtained by imaging can be expected to be at or near the desired value.
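For a concrete feel for Equation 1, take hypothetical values (illustrative only, not measurements): with g = 1, k_on = 50, and k_off = 5 (in index units per millisecond), an imaging with tr = 2 ms and te = 4 ms yields D1 = 1 × (50 · 2 + 5 · 4) = 120. Doubling only the exposure time to te = 8 ms raises D1 merely to 140, whereas doubling only the illumination time to tr = 4 ms raises it to 220; under these contribution degrees the illumination dominates the brightness, as would be expected for a non-luminous object in a dark environment.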

However, in order to simplify the control, we here consider obtaining a desired luminance index value D1 by adjusting the illumination time tr and the exposure time te while fixing the gain g.

Therefore, we introduce a luminance index value D, where D = D1/g. Using the luminance index value D, the relationship shown in Equation 1 can be expressed as Equation 2 below. In the following, we then consider obtaining the illumination time tr and the exposure time te for which the luminance index value D of the image obtained by imaging becomes a predetermined target value D_t.

Equation 2

D = k_on · tr + k_off · te (Equation 2)

To this end, the imaging condition determination section 154 first estimates the values of k_on and k_off.

Specifically, the imaging condition determination section 154 performs imaging while changing the illumination time tr and the exposure time te, and substitutes the illumination time tr, the exposure time te, and the luminance index value D of the image obtained by each imaging into Equation 2 to obtain the relational expressions of Equation 3. In Equation 3, D_ci is the luminance index value D of the image obtained by the i-th imaging, and tr_i and te_i are the illumination time tr and the exposure time te of the imaging condition used in the i-th imaging, where i is a natural number. x is the number of imaging histories that can be used to estimate the values of k_on and k_off.

Equation 3

D_ci = k_on · tr_i + k_off · te_i (i = 1, 2, …, x) (Equation 3)

In Equation 3, since only k_on and k_off are unknown, estimates of k_on and k_off can be obtained by solving the simultaneous equations of Equation 3 for them. If there are three or more equations, values of k_on and k_off that satisfy all of them exactly generally cannot be determined, but an approximate solution that nearly satisfies all the equations can be obtained by calculation according to Equations 4 and 5 below.
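For a concrete instance with x = 2 (hypothetical numbers, consistent with the worked example above): imagings with (tr_1, te_1) = (1 ms, 2 ms) and (tr_2, te_2) = (2 ms, 2 ms) yielding D_c1 = 60 and D_c2 = 110 give the simultaneous equations 60 = k_on · 1 + k_off · 2 and 110 = k_on · 2 + k_off · 2, whose solution is k_on = 50 and k_off = 5. Note that the two conditions have different tr/te ratios (0.5 and 1), which is what makes the system solvable, as discussed below.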

Equation 4

D = S · K (Equation 4)

where D = (D_c1, D_c2, …, D_cx)^T is the column vector of luminance index values, S is the x-by-2 matrix whose i-th row is (tr_i, te_i), and K = (k_on, k_off)^T.

Equation 5

K = (S^T S)^{-1} S^T D (Equation 5)

Equation 4 expresses the same content as Equation 3 in matrix form, and Equation 5 is derived by rearranging Equation 4 so that the matrix K appears on the left side. In Equation 5, X^T denotes the transpose of a matrix X, and X^{-1} denotes the inverse of a matrix X. Equation 5 is obtained by multiplying both sides of Equation 4 from the left, in order, by the matrix S^T and the matrix (S^T S)^{-1}, and rearranging.

Considering the meaning of Equation 2, both k_on and k_off should be positive values. Therefore, if the calculation according to Equation 5 yields a value of zero or less, an appropriately small positive value may be adopted as the estimate instead.
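As an illustration of Equations 3 to 5 together with this clamp, the following is a minimal Python sketch (the use of numpy, the history format, and the floor value 1e-6 are assumptions introduced here, not part of this specification):

```python
import numpy as np

def estimate_contributions(history, floor=1e-6):
    """Estimate (k_on, k_off) from an imaging history.

    history: iterable of (tr_i, te_i, D_ci) tuples, i.e. the illumination
    time, exposure time and luminance index value of each past imaging.
    Computes the least-squares solution K = (S^T S)^{-1} S^T D of
    Equation 5.
    """
    S = np.array([[tr, te] for tr, te, _ in history])  # x-by-2 condition matrix
    D = np.array([d for _, _, d in history])           # luminance index values
    K, *_ = np.linalg.lstsq(S, D, rcond=None)          # numerically stable Eq. 5
    k_on, k_off = K
    # Both contribution degrees should be positive; clamp non-positive
    # estimates to a small positive floor, as described above.
    return max(k_on, floor), max(k_off, floor)
```

Here lstsq computes the same least-squares solution as the explicit normal-equation form of Equation 5, but without forming (S^T S)^{-1} directly, which behaves better when the tr/te ratios of the stored imagings are nearly identical and S is close to rank-deficient, the situation discussed next.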

Further, when x = 2, a solution cannot be determined if the ratio between the illumination time tr and the exposure time te is the same for the two imagings. It is therefore preferable that the two imagings use imaging conditions with different ratios between the illumination time tr and the exposure time te. Likewise, even when x is 3 or more, a solution cannot be determined if the ratio between the illumination time tr and the exposure time te is the same for all x imagings, so it is preferable to vary this ratio appropriately from imaging to imaging.

When the estimates of k_on and k_off have been obtained as described above, the imaging condition determination section 154 uses them to determine the illumination time tr and the exposure time te to be used in the next imaging. This will be described next with reference to the graphs of Figs. 3 and 4.

Fig. 3 shows the graph of Equation 2 in tr-te-D space. Reference numeral 223 denotes the plane represented by Equation 2, and (a) and (b) are two examples of this plane 223 for different combinations of values of k_on and k_off.

(a) is an example in which k_on is large and k_off is small. For example, when imaging a non-luminous body such as paper in a dark environment, extending the exposure time te changes the amount of light incident on the imaging sensor 111 very little, whereas extending the illumination time tr greatly increases the amount of incident light, which is reflected in the image luminance; this produces the tendency shown in (a).

(b) is an example in which k_on is small and k_off is somewhat large. For example, when imaging a luminous body such as the display of a smartphone, extending the exposure time te means that the light emitted by the display is incident on the imaging sensor 111 for a longer period, so the amount of incident light increases and is reflected in the image brightness. In contrast, even if the illumination time tr is extended, the added light is weak compared with the light emitted by the display and incident during the same period, so its effect on the image brightness is small. This produces the tendency shown in (b).

(a) and (b) are only two examples; other cases, for example one where both k_on and k_off are large, are also conceivable. In the first embodiment, the imaging condition determination section 154 does not need to change its calculation algorithm for the illumination time tr and the exposure time te according to the estimated values of k_on and k_off. That is, it does not need to determine what the imaging subject is or what the imaging environment is from the estimates of k_on and k_off.

In both (a) and (b), the plane 223 represented by Equation 2 is the plane containing: a line 221 on the tr-D plane having slope k_on and passing through the origin; and a line 222 on the te-D plane having slope k_off and passing through the origin. To obtain the target value D_t as the luminance index value D, it suffices to determine the combination of the illumination time tr and the exposure time te so that it falls on the straight line 224 that is the intersection of the plane 223 with the plane D = D_t. This line 224 is called the "solution providing line", in the sense of a line indicating the relationship between the values of tr and te to be determined. This is the same for both examples (a) and (b).

The imaging condition determination section 154 selects a point on the solution providing line 224 according to predetermined constraint conditions, and uses the tr and te coordinates of that point as the illumination time tr and the exposure time te for the next imaging. These constraints are explained next.

Fig. 4 shows examples of the constraints, with the solution providing line 224 projected onto the te-tr plane. Figs. 4 (a) and (b) correspond to figs. 3 (a) and (b), respectively.

The first of the constraints shown in fig. 4 is 0 ≦ te, tr, because neither the exposure time nor the illumination time can take a negative value.

The second constraint is tr ≦ te. Illumination that continues beyond the exposure time has no influence on the image brightness and merely wastes power; hence this constraint. In fig. 4, line 231 is the line tr = te, and the dot-hatched region is excluded by the second constraint.

The third constraint is ρ ≦ (tr/te)/(tr_α/te_α) or (tr/te)/(tr_α/te_α) ≦ 1/ρ, where te_α and tr_α are, respectively, the exposure time and the illumination time in the previous imaging, and ρ is an appropriate constant greater than 1. This constraint requires that the ratio between the illumination time tr and the exposure time te differ from that used in the previous imaging by a predetermined threshold or more; ratios greater than 1/ρ times and less than ρ times the previous ratio are excluded. In fig. 4, point 232 is (te_α, tr_α), line 233 is the line through the origin and point 232, and the unshaded range indicated by reference numeral 234 is excluded by the third constraint.
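
The three constraints can be summarized in a small predicate. A minimal sketch, where rho is the constant ρ of the third constraint and te_prev and tr_prev stand for te_α and tr_α of the previous imaging:

    def satisfies_constraints(te, tr, te_prev, tr_prev, rho):
        """Check the three constraints of fig. 4 for a candidate (te, tr)."""
        if te <= 0 or tr < 0:     # first constraint: 0 <= te, tr
            return False          # (te == 0 also rejected so tr/te is defined)
        if tr > te:               # second constraint: tr <= te
            return False
        ratio_change = (tr / te) / (tr_prev / te_prev)
        # third constraint: the new ratio must differ from the previous
        # one by a factor of rho or more, in either direction
        return ratio_change >= rho or ratio_change <= 1.0 / rho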

Accordingly, the imaging condition determination section 154 adopts a point on the solution providing line 224 within the range shown by the vertical hatching as the illumination time tr and the exposure time te for the next imaging.

Note that the slope of the solution providing line 224 in the te-tr plane is −k_off/k_on, which is negative. That is, the solution providing line 224 defines the relationship between the exposure time te and the illumination time tr such that the longer the illumination time tr, the shorter the exposure time te, and the shorter the illumination time tr, the longer the exposure time te.

If the intersection 236 of the solution providing line 224 with the perpendicular line 235 dropped from the origin onto it lies within the range satisfying the constraints, the imaging condition determination section 154 adopts the coordinates of the intersection 236; (b) corresponds to this case. If the intersection 236 does not lie within that range, the coordinates of a point 237 as close to the intersection 236 as possible within the range satisfying the constraints are adopted; (a) corresponds to this case.

In either case, since the slope of the perpendicular line 235 is k_on/k_off, at the intersection 236 we have tr = (k_on/k_off)·te, that is, tr/te = k_on/k_off. The imaging condition determination section 154 therefore determines the exposure time te and the illumination time tr such that the ratio of the illumination time tr to the exposure time te is as close as possible to the ratio of the estimate of the second parameter k_on to the estimate of the first parameter k_off.
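
The intersection 236 can be computed in closed form, since it is the orthogonal projection of the origin onto the solution providing line k_on·tr + k_off·te = D_t. A minimal sketch, with the constraint clamping of fig. 4 omitted; the β parameter anticipates the generalization tr/te = β·(k_on/k_off) discussed below, β = 1 giving the intersection 236 itself:

    def next_imaging_condition(k_on, k_off, D_t, beta=1.0):
        """Point on the solution providing line with tr/te = beta*(k_on/k_off).
        Substituting tr = beta*(k_on/k_off)*te into D_t = k_on*tr + k_off*te
        and solving for te gives the closed form below."""
        te = D_t * k_off / (beta * k_on**2 + k_off**2)
        tr = beta * (k_on / k_off) * te
        return te, tr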

The intersection 236 is used because it is the point on the solution providing line 224 closest to the origin. An image whose luminance index value D equals the target value D_t could be obtained by taking any point on the solution providing line 224; however, if the exposure time te is too long, the time of one frame becomes long and reading is delayed, and if the illumination time tr is too long, power consumption increases and, if the pulse LED 113 is visible to the user, the user may be dazzled. Using the coordinates of a point as close as possible to the intersection 236 is considered to yield an imaging condition that balances the exposure time te and the illumination time tr.

As can be seen from fig. 4, in case (a) the determined illumination time tr is long enough to obtain a sufficient amount of light as reflected light from the reading object 101, with an exposure time te of almost the same length. In case (b), the determined exposure time te is long enough to obtain a largely sufficient amount of light from the light emitted by the reading object 101, with a shorter illumination time tr.

Based on the above concept, the imaging conditions can be set automatically and quickly by a common algorithm, so that an image of suitable brightness can be obtained, regardless of the reading object 101 and the surrounding environment, without lighting the pulse LED 113 longer than necessary and without an excessively long frame time. An imaging condition that achieves the target value D_t of the luminance index value with some accuracy can be expected from the third frame onward, for which the data of two frames are available as imaging history.

As described above, it is not necessary to perform the calculations of k_on, k_off, tr, and te individually, step by step. What is described here is the basic concept; the calculation that yields the result of the later steps may be determined analytically or approximately and performed with steps combined.

Further, conditions other than those shown in fig. 4 may be adopted as constraint conditions that the exposure time te and the illumination time tr should satisfy, and the conditions actually used may differ from those shown in fig. 4 in specific details.

For example, in order to prevent noise due to the voltage change when the pulse LED 113 is turned off while image data is being read out from the imaging sensor 111 after imaging, it is conceivable to end the illumination a predetermined time k before the end of the exposure time te. In this case, the constraint is tr ≦ te − k.

Further, the maximum and minimum values of the exposure time te may be determined from limits on the frame period and the shutter speed, and the maximum and minimum values of the illumination time tr may be determined from limits on the response speed, heat generation, power consumption, and the like of the pulse LED 113.

Further, the point on the solution providing line 224 used for the exposure time te and the illumination time tr is not limited to the intersection with the perpendicular line 235. For example, for an appropriate positive constant β, a point satisfying tr/te = β·(k_on/k_off) may be adopted: a larger β yields conditions with a longer illumination time and a shorter exposure time, and a smaller β yields conditions with a shorter illumination time and a longer exposure time. The value of β may be adjusted by the user according to the usage pattern or the surrounding environment of the reading apparatus 100.

Next, with reference to figs. 5 to 8, the process executed by the CPU 121 for controlling the reading of the code symbol 102, including the process for determining the exposure time te and the illumination time tr described above, will be described. The process described here is an embodiment of the imaging method of the present invention. It also includes the preparation of the two-frame imaging history, which was omitted from the description above.

First, fig. 5 shows a flowchart of a procedure corresponding to the function of the read control section 151.

When the reading apparatus 100 is turned on, the CPU 121 starts the process shown in fig. 5.

In this process, the CPU 121 first waits until a notification of the read start trigger is detected from the trigger detecting section 152 (S11). When detecting the notification, the CPU 121 instructs the imaging section 153 to start imaging so as to read the code symbol 102 (S12).

Thereafter, the CPU 121 waits until the decoding result is received from the decoding section 156 (yes in S13) or a predetermined time elapses from the start of reading (yes in S14), and when either of these two conditions is satisfied, the CPU 121 instructs the imaging section 153 to stop imaging (S15). In the former case, decoding (reading) is successful, and in the latter case, reading fails due to timeout.

If the decoding is successful (yes in S16), the CPU 121 outputs the decoding result through the output section 157, and the process returns to step S11. At this time, the CPU 121 may notify the user of the decoding success by sound or light. If the decoding fails (NO in S16), the process returns directly to step S11.

Next, fig. 6 shows a flowchart of a procedure corresponding to the functions of the imaging section 153 and the imaging condition determination section 154.

When detecting the imaging start instruction supplied through step S12 of fig. 5, the CPU 121 starts the process of fig. 6.

In this process, the CPU 121 first executes the imaging condition determination process shown in fig. 7 (S21). This is the process for determining the exposure time te and the illumination time tr to be used in the next imaging, as described with reference to figs. 3 and 4, and it will be detailed later.

Next, the CPU 121 controls the optical section 110 to perform imaging of one frame according to the imaging conditions determined in step S21 (S22). The CPU 121 then calculates the luminance index value D from the pixel values of the image data obtained by the imaging (S23), and stores the combination of the imaging conditions used and the calculated luminance index value D in the imaging history storage section 155 (S24). At this time, data indicating the frame number of this imaging since the current start is also stored. The data stored in step S24 is referred to in the imaging condition determination process of fig. 7.

Thereafter, the CPU 121 determines whether either of the following conditions is satisfied (S25): the luminance index value D calculated this time falls within a predetermined range around the target value D_t (i.e., the image is suitable for decoding); or the luminance index value D has remained outside that range for the last predetermined number of frames (decoding is attempted anyway because, although D has not converged to the preferable range, convergence appears to be taking a long time). If yes in S25, the CPU 121 decides to decode the image data obtained by the current imaging, and passes the image data to the decoding section 156 (S26).
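
The decision of step S25 might look as follows; a sketch in which band (the half-width of the predetermined range around D_t), recent_D (the stored luminance history), and max_wait_frames are assumed names, not from the document:

    def should_decode(D, D_t, band, recent_D, max_wait_frames):
        """Decode when D is within the target band, or when it has stayed
        outside the band for the last max_wait_frames frames."""
        in_band = abs(D - D_t) <= band
        stuck = (len(recent_D) >= max_wait_frames and
                 all(abs(d - D_t) > band for d in recent_D[-max_wait_frames:]))
        return in_band or stuck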

If "no" in step S25, the CPU 121 determines not to decode the image data obtained in the current imaging, skips step S26, and proceeds to step S27.

Thereafter, if the CPU 121 has received the imaging stop instruction issued in step S15 of fig. 5 (yes in S27), the process of fig. 6 ends; otherwise the process returns to step S21 and repeats. The determination in step S27 becomes yes when decoding has succeeded or reading has timed out.

Next, fig. 7 shows a flowchart of the imaging condition determination process invoked in fig. 6. This process corresponds to the function of the imaging condition determination section 154.

The CPU 121 starts the process of fig. 7 in step S21 of fig. 6.

In this process, the CPU 121 first checks how many imaging history entries usable for determining the imaging conditions of the current reading are stored in the imaging history storage section 155 (S31). The process then branches according to this number.

Basically, the imaging history stored since the current start of the process of fig. 6 is considered available, though it may be limited to the most recent predetermined number of frames. Alternatively, if not much time has elapsed since the previous decoding, the surrounding environment can be assumed not to have changed significantly, so the last predetermined number of frames of the imaging history used in the previous decoding may also be treated as available. In that case, an appropriate reading condition can be set accurately in step S37 from the first frame onward.

However, when reading objects 101 with different characteristics, such as paper and smartphones, are expected to be mixed, it is preferable not to use the imaging history of the previous decoding. The user may be allowed to switch freely between a mode that uses the imaging history of the previous decoding and one that does not.

When the number of available imaging history entries is zero in step S31, there is no data from which to determine the exposure time te and the illumination time tr, so the CPU 121 adopts initial values registered in advance as the exposure time te and the illumination time tr for the subsequent imaging (S32), and the process returns to fig. 6. As these initial values, values suited to the surrounding environment or the reading object 101 most frequently read are preferably registered; for example, values suited to reading paper at a standard distance under standard indoor brightness.

Next, when the number of available imaging history entries is 1 in step S31, estimates of k_on and k_off cannot yet be obtained from equations 3 to 5. The CPU 121 therefore adopts an initial value registered in advance as k_off (S33). This initial value may be the same as or different from the values adopted in step S32.

Then, the CPU 121 calculates an estimate of k_on from the combination of the imaging conditions and the luminance index value D in the imaging history, together with the k_off adopted in step S33 (S34). Specifically, the estimate is obtained by substituting the te and tr of the imaging conditions, the luminance index value D, and the exposure contribution degree k_off into the following equation 6, which is obtained by rearranging equation 2.

k_on = (D − k_off · te) / tr ... (equation 6)

Thereafter, as described above with reference to figs. 3 and 4, the CPU 121 calculates an exposure time te and an illumination time tr that satisfy the predetermined constraint conditions and are estimated to achieve the target luminance index value D_t, based on the k_off adopted in step S33 and the k_on calculated in step S34 (S35). The CPU 121 then sets the calculated te and tr as the exposure time te and the illumination time tr to be used in the next imaging, and the process returns to fig. 6.
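
Steps S33 to S35 for the single-entry case might look as follows; a sketch reusing the next_imaging_condition function above, with k_off_init standing for the registered initial value (an assumed name):

    def bootstrap_condition(te, tr, D, k_off_init, D_t):
        """Single history entry: fix k_off at its registered initial value,
        estimate k_on by equation 6, then choose the next condition."""
        k_on = (D - k_off_init * te) / tr    # equation 6
        return next_imaging_condition(k_on, k_off_init, D_t)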

Note that a default initial value is adopted for k_off in step S33, rather than for k_on, because even if the value deviates from the true one, the adverse effect of an error in k_off is smaller than that of an error in k_on. However, the calculations of steps S34 and S35 can be performed in the same way even if a default initial value is adopted for k_on instead; in that case, an equation obtained by rearranging equation 2 so that its left side becomes k_off is used instead of equation 6.

Further, in step S35 it is not necessary to adopt the coordinates of the intersection 236 of the solution providing line 224 and the perpendicular line 235 shown in fig. 4 as the exposure time te and the illumination time tr. Since a state in which k_on, k_off, te, and tr can be calculated stably has not yet been reached, using, for example, the coordinates of the point closest to the line 233 within the range satisfying the constraint conditions can prevent te and tr from taking extreme values and adversely affecting the next imaging or the subsequent calculations of k_on, k_off, te, and tr.

Next, when the number of available imaging history entries is two or more in step S31, estimates of k_on and k_off can normally be obtained from equations 3 to 5. The CPU 121 then calculates the estimates of k_on and k_off according to equation 5 above, based on the plural combinations of imaging conditions and luminance index values D in the imaging history (S36).
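
Equations 3 to 5 appear earlier in the document; the sketch below assumes they amount to an ordinary least-squares fit of equation 2, D ≈ k_on·tr + k_off·te, over the stored history — the standard formulation, but stated here as an assumption:

    import numpy as np

    def estimate_contributions(history):
        """Least-squares estimates of k_on and k_off from the imaging
        history, a list of (te, tr, D) tuples with at least two entries
        whose tr/te ratios differ (see the rank discussion above)."""
        te = np.array([h[0] for h in history])
        tr = np.array([h[1] for h in history])
        D = np.array([h[2] for h in history])
        A = np.column_stack([tr, te])            # model matrix [tr te]
        (k_on, k_off), *_ = np.linalg.lstsq(A, D, rcond=None)
        return k_on, k_off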

Then, as described above with reference to figs. 3 and 4, based on the k_on and k_off determined in step S36, the CPU 121 calculates an exposure time te and an illumination time tr that satisfy the predetermined constraint conditions and are estimated to achieve the target luminance index value D_t (S37). The CPU 121 then sets the calculated te and tr as the exposure time te and the illumination time tr to be used in the next imaging, and the process returns to fig. 6.

Next, fig. 8 shows a flowchart of a procedure corresponding to the function of the decoding section 156.

When the image data is transferred to the decoding section 156 by the process of step S26 of fig. 6, the CPU 121 acquires the image data and starts the process of fig. 8.

In this process, the CPU 121 first extracts the code symbol portion to be decoded from the acquired image data and performs decoding on the extracted image data (S41). For this decoding, a well-known process can be adopted as appropriate according to the standard of the expected code symbol; decoding may need to be attempted sequentially for multiple standards.

When the decoding in step S41 is successful (yes in S42), the CPU 121 notifies the read control section 151 of the decoding success and the data of the decoding result (S43), and ends the process. If the decoding fails (NO in S42), the process ends skipping step S43.

By the CPU 121 executing the processes of figs. 5 to 8, and in particular those of figs. 6 and 7, the reading apparatus 100 can quickly set imaging conditions suited to the reading in progress, as described with reference to figs. 3 and 4, and can read the code symbol 102 even when a wide range of surrounding environments (in particular, of brightness) and of reading objects 101 must be assumed. Moreover, the illumination time of the pulse LED 113 is not made unnecessarily long, so power consumption and glare can be reduced.

If decoding succeeds while the number of available imaging history entries involved in the process of fig. 7 is still zero or one, imaging need not be repeated, and reading can be completed before the number of available entries reaches two or more.

Second embodiment: FIGS. 9 and 10

Next, a second embodiment of the present invention will be described. It differs from the first embodiment only in that the reading apparatus 100 discriminates whether the imaging object is a luminous body based on the calculated estimates of the exposure contribution degree k_off and the illumination contribution degree k_on, and performs different processes according to the result. The description of the parts identical to the first embodiment is therefore omitted, and the description focuses on the differences. Elements identical or corresponding to elements of the first embodiment are denoted by the same reference numerals.

First, fig. 9 shows the configuration of the functions related to code symbol reading provided in the reading apparatus 100.

The functional configuration shown in fig. 9 is the same as that shown in fig. 2, except for the addition of an imaging object distinguishing section 158.

The imaging object distinguishing section 158 has the function of discriminating whether the imaging object of the imaging sensor 111 is a luminous body, based on the estimates of k_off and k_on calculated by the imaging condition determination section 154 when determining the imaging conditions. It also has the function of changing, according to the result of the discrimination, the constraint conditions on the illumination time tr determined by the imaging condition determination section 154 and the decoding process performed by the decoding section 156.

Fig. 10 shows a flowchart of the process corresponding to the function of the imaging object distinguishing section 158 described above. The process of fig. 10 is inserted between steps S36 and S37 of fig. 7.

In the reading apparatus 100 of the second embodiment, when the CPU 121 has calculated the estimates of k_off and k_on in step S36 of fig. 7, the process proceeds to step S51 of fig. 10. In steps S51 and S52, when the exposure contribution degree k_off is greater than a predetermined first threshold value T1 (yes in S51) and the illumination contribution degree k_on is less than a predetermined second threshold value T2 (yes in S52), the CPU 121 determines that the imaging object is a luminous body. If either condition is not satisfied (no in S51 or S52), the CPU 121 determines that the imaging object is not a luminous body.

These criteria follow from the observation, described with reference to fig. 3 in the first embodiment, that when imaging a luminous body, k_on can be expected to be small and k_off rather large.

When the CPU 121 determines that the imaging object is not a luminous body, it switches the decoding section 156 to the general decoding mode (S53) and adds a lower limit on the illumination time tr to the constraint conditions used in S37 of fig. 7 (S54). As a result, the lower limit of the illumination time tr is higher than in the case of step S56, where no such lower limit is set.

The general decoding mode is a mode in which the decoding process is performed sequentially for all standards the code symbol 102 may conform to, without assuming a specific reading object. The lower limit on tr is set in step S54 because a non-luminous imaging object generally requires a certain amount of illumination: if the illumination time tr is too short, the exposure time te becomes long and reading takes a long time, so securing a certain illumination time prevents the exposure time te from becoming unnecessarily long.

On the other hand, when the CPU 121 determines that the imaging object is a luminous body, it switches the decoding section 156 to the decoding mode for smartphones (S55) and adds an upper limit on the illumination time tr to the constraint conditions used in S37 of fig. 7 (S56). As a result, the upper limit of the illumination time tr is lower than in the case of step S54, where no such upper limit is set.

In this embodiment, it is assumed that the display of a smartphone acts as the luminous body, and that in the environment in which the reading apparatus 100 is used, code symbols of a specific standard are displayed on smartphones. In such an environment, when the imaging object can be determined to be a luminous body, attempting decoding for that specific standard first speeds up the decoding process and avoids unnecessary decoding failures caused by attempting other standards. With such applications in view, the decoding mode for smartphones is a mode that first attempts decoding for the specific standard.

Incidentally, in some environments it is conceivable that, besides the code symbols displayed on smartphones, code symbols of some standard are printed on paper. In that case, if a decoding mode that first attempts the standard used for printing on paper is set in step S53 as a decoding mode for paper media, instead of the general decoding mode, the decoding process can be accelerated in the same way as described above.

The upper limit on tr is set in step S56 to prevent the illumination time tr from becoming too long: when the imaging object is a luminous body, illumination generally has little effect on increasing the image brightness, and an excessively long illumination time tr is disadvantageous because it increases power consumption and glare.

After step S54 or S56, the CPU 121 proceeds to the process of step S37 of fig. 7, and then executes the same process as in the first embodiment.

As described above, by using the estimates of k_off and k_on, whether the imaging object is a luminous body can be discriminated by a simple process. Further, according to the discrimination result, effective reading suited to the reading object or the imaging object can be performed by switching the decoding process, by changing the imaging conditions used in the next and subsequent imagings, or by changing the method of determining them.

The process performed according to the discrimination result for the imaging object is not limited to that shown in fig. 10 and is arbitrary. It may be a process unrelated to reading or decoding code symbols, and objects other than smartphones may be assumed as luminous bodies.

Further, only one of the criteria "the exposure contribution degree k_off is greater than the predetermined first threshold value T1" and "the illumination contribution degree k_on is less than the predetermined second threshold value T2" may be used for distinguishing the imaging object. Although the accuracy is lower than when both are used together as in fig. 10, a distinction can still be made to some extent from only one of them.

Further, since k_off > 0 and k_on > 0 hold, if k_off > T1 and k_on < T2, then k_off/k_on > T1/T2. Therefore, adopting T1/T2 as a third threshold value T3, the criteria used in steps S51 and S52 of fig. 10 can also be regarded as determining that the imaging object is a luminous body when the ratio of k_off to k_on is greater than the third threshold value T3.
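
A minimal sketch of the discrimination just described, covering both the two-threshold form of steps S51 and S52 and the single ratio-threshold variant:

    def is_luminous_body(k_on, k_off, T1, T2, use_ratio=False):
        """Two-threshold form: k_off > T1 and k_on < T2.
        Ratio form: k_off/k_on > T3, with T3 = T1/T2."""
        if use_ratio:
            return k_off / k_on > T1 / T2    # third threshold T3
        return k_off > T1 and k_on < T2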

Modification example

Although embodiments have been described above, in the present invention the specific configuration of the apparatus, the specific contents of the processing, the data formats, the specific contents of the data, the standards to be adopted, and the like are not limited to those described in the embodiments.

For example, although tr has been defined above as the illumination time of the pulse LED 113, the pixel values of the image captured by the imaging sensor 111 depend on the time integral of the amount of light incident on each pixel. Accordingly, tr can also be regarded as the time integral of the amount of light emitted by the pulse LED 113, that is, as an "illumination amount" delivered onto the reading object 101. If the emission intensity of the pulse LED 113 is always constant, this time integral is proportional to the illumination time, so the calculation for obtaining the value of tr is essentially the same whichever interpretation of tr is used. When tr is regarded as a time integral, however, doubling tr could also be achieved, for example, by doubling the emission intensity of the pulse LED 113 instead of doubling its illumination time.

Further, in an actual device, even when a voltage is applied to turn on the pulse LED 113, the light intensity does not reach the desired level immediately; in general it rises gradually over a time determined by the time constant of the drive circuit, even if that time is not very long. To reflect this in the calculation of tr, it is conceivable to regard tr as the time integral of the light amount, convert the actual energization time tr_c into a time integral of the emitted light intensity using a characteristic equation of the drive circuit of the pulse LED 113, and, treating the converted value as the illumination time tr of the embodiments above, calculate the values of te and tr that achieve the target value D_t of the luminance index value by the method described with reference to equations 3 to 5 and figs. 3 and 4. Then, by using the characteristic equation to calculate the energization time tr_x that realizes the time integral corresponding to the calculated tr, and energizing the pulse LED 113 for the time tr_x, an appropriate amount of illumination can be provided with the transient taken into account.
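
The characteristic equation of the drive circuit is not given here; the sketch below assumes, purely for illustration, a first-order rise I(t) = 1 − exp(−t/τ) toward a normalized steady intensity of 1:

    import math

    def energization_to_integral(t_c, tau):
        """Time integral of emitted intensity over an energization time t_c,
        under the assumed first-order model I(t) = 1 - exp(-t/tau)."""
        return t_c - tau * (1.0 - math.exp(-t_c / tau))

    def integral_to_energization(tr, tau, tol=1e-9):
        """Invert the above by bisection: find the energization time tr_x
        whose light-amount integral equals the effective illumination
        time tr computed by the algorithm."""
        lo, hi = tr, tr + 10.0 * tau    # integral(t) < t, so tr_x > tr
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if energization_to_integral(mid, tau) < tr:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)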

The reading target of the reading apparatus 100 may also be information other than the code symbol 102, such as a character string or a mark.

The present invention is naturally applicable to imaging for purposes other than reading information. For example, when it is desired to acquire an image of a certain brightness in order to analyze the captured image itself, it is useful to determine the imaging conditions in the same manner as in the embodiments described above.

Further, an embodiment of the computer program of the present invention is a computer program for causing one or more computers to cooperate in controlling the necessary hardware so as to realize the functions of the reading apparatus 100 of the above embodiments or to execute the processes described in the above embodiments.

Such a computer program may be stored from the outset in a ROM or another non-volatile storage medium (flash memory, EEPROM, or the like) included in the computer. It may also be provided recorded on any non-volatile recording medium such as a memory card, CD, DVD, or Blu-ray disc, or downloaded from an external device connected via a network, installed on the computer, and executed.

Further, the configurations of the embodiments and modifications described above may be implemented in any combination as long as they do not contradict each other, and, naturally, only parts of them may be implemented.

List of reference numerals

100.. reading apparatus, 101.. reading object, 102.. code symbol, 110.. optical section, 111.. imaging sensor, 112.. lens, 113.. pulse LED, 120.. control section, 151.. read control section, 152.. trigger detection section, 153.. imaging section, 154.. imaging condition determination section, 155.. imaging history storage section, 156.. decoding section, 157.. output section, 158.. imaging object distinguishing section, 224.. solution providing line, 235.. perpendicular line, D1.. luminance index value, D_t, D1_t.. target value of the luminance index value, k_off.. exposure contribution degree, k_on.. illumination contribution degree, te.. exposure time of the imaging sensor 111, tr.. illumination time of the pulse LED 113
