Object matching device and object matching method

Abstract: This technology, "Object matching device and object matching method," was created by 高森哲弥 and 羽田真司 and filed on 2019-08-01. The object of the present invention is to provide an object matching device and an object matching method that can match images of a divisible medical article with good accuracy and allow the matching result to be checked easily. In the object matching device according to the 1st aspect, when the object is determined to have been divided, the 1st matching image is matched against a matching image of the object in an undivided state (the 2nd matching image); the region used for matching is therefore not narrowed, and images of the divisible medical article can be matched with good accuracy. Further, the 1st and 2nd display processes are performed on display images determined to show the same kind of object, so the matching result can be confirmed easily.

1. An object matching device comprising:

a 1st image acquisition unit that acquires a 1st matching image based on a 1st captured image of an object that is a divisible medical article;

a 2nd image acquisition unit that acquires a 2nd matching image based on a 2nd captured image of the object in an undivided state;

a division determination unit that determines whether or not the object captured in the 1st captured image has been divided;

a matching unit that matches the 1st matching image against the 2nd matching image when it is determined that the object has been divided; and

a display control unit that displays, on a display device, based on the result of the matching, display images among the 1st matching images that are determined to show the same type of object, and that performs a 1st display process of displaying the objects with the orientations of their outer shapes aligned, or a 2nd display process of displaying the objects with the orientations of the identification information given to the objects aligned.

2. The object matching device according to claim 1, wherein

the 1st image acquisition unit acquires the 1st matching images for the front surface and the back surface of the object,

the 2nd image acquisition unit acquires the 2nd matching images for the front surface and the back surface of the object in the undivided state,

the matching unit performs the matching for the front surface and the back surface of the object, and

the display control unit selects the 1st matching image for the front surface and/or the back surface of the object and displays the selected image on the display device.

3. The object matching device according to claim 1 or 2, wherein

the display control unit, in the 1st display process, aligns the orientations of the dividing lines generated by dividing the objects so as to align the orientations of the outer shapes.

4. The object matching device according to any one of claims 1 to 3, wherein

the matching unit performs the matching based on the outer shape and/or the identification information of the object.

5. The object matching device according to any one of claims 1 to 4, wherein

the matching unit extracts, from the 1st matching image, a region including a part of the object and/or the identification information, and performs the matching on that region.

6. The object matching device according to any one of claims 1 to 5, wherein

the division determination unit determines that the object has been divided when the outer shape of the object is a predetermined shape.

7. The object matching device according to any one of claims 1 to 6, wherein

the matching unit performs the matching using, as the 1st matching image and/or the 2nd matching image, an image that has undergone processing to emphasize the identification information.

8. The object matching device according to any one of claims 1 to 7, wherein

the medical article is any one of a tablet, a package containing tablets, and a package containing capsule-type drugs.

9. The object matching device according to any one of claims 1 to 8, wherein

the identification information includes printing and/or imprinting given to the object.

10. An object matching method comprising:

a 1st image acquisition step of acquiring a 1st matching image based on a 1st captured image of an object that is a divisible medical article;

a 2nd image acquisition step of acquiring a 2nd matching image based on a 2nd captured image of the object in an undivided state;

a division determination step of determining whether or not the object captured in the 1st captured image has been divided;

a matching step of matching the 1st matching image against the 2nd matching image when it is determined that the object has been divided; and

a display control step of displaying, on a display device, based on the result of the matching, display images among the 1st matching images that are determined to show the same type of object, and performing a 1st display process of displaying the objects with the orientations of their outer shapes aligned, or a 2nd display process of displaying the objects with the orientations of the identification information given to the objects aligned.

Technical Field

The present invention relates to an object matching device and an object matching method for matching images of a medical article that can be divided.

Background

In hospitals, pharmacies, and the like, drug inspection and dispensing audits are performed. Doing this visually places a heavy burden on pharmacists and other staff, so techniques for supporting inspection and audit have been developed. For such inspection and audit, systems are also known that inspect divided tablets (irregularly shaped tablets) produced when tablets are split according to conditions such as the prescription contents.

For example, Patent Document 1 describes counting irregularly shaped tablets by matching an irregular-tablet pattern, which is a shape pattern of the irregularly shaped tablet, against an image of the drug package band.

Prior art documents

Patent document

Patent Document 1: International Publication No. WO 2013/021543

Disclosure of Invention

Problems to be solved by the invention

When a tablet is divided, identification information given to the tablet, such as printing or imprinting, may be cut off. Further, the orientations of the printing, imprinting, and the like in images of such tablets become irregular, and if the captured images and the images used for matching are displayed as they are, it is difficult for the user to recognize and check them. However, Patent Document 1 merely counts the irregularly shaped tablets (divided tablets) without considering these problems, so the inspection result is difficult to confirm. Also, in Patent Document 1, the standard-shape tablet pattern (the pattern of the undivided tablet) is divided by the division number of the irregularly shaped tablet to generate the irregular-tablet pattern, and the captured image of the irregularly shaped tablet is matched against the standard-shape pattern in this divided state, which narrows the region available for matching. Further, Patent Document 1 considers nothing other than tablets as the article being divided.

As described above, with the conventional techniques, images of a divisible medical article are matched with low accuracy, and the matching result is difficult to confirm.

The present invention has been made in view of the above circumstances, and an object thereof is to provide an object matching device and an object matching method that can match images of a divisible medical article with good accuracy and allow the matching result to be checked easily.

Means for solving the problems

In order to achieve the above object, an object matching device according to the 1st aspect of the present invention includes: a 1st image acquisition unit that acquires a 1st matching image based on a 1st captured image of an object that is a divisible medical article; a 2nd image acquisition unit that acquires a 2nd matching image based on a 2nd captured image of the object in an undivided state; a division determination unit that determines whether or not the object captured in the 1st captured image has been divided; a matching unit that matches the 1st matching image against the 2nd matching image when it is determined that the object has been divided; and a display control unit that displays, on a display device, based on the result of the matching, display images among the 1st matching images that are determined to show the same type of object, and performs a 1st display process of displaying the objects with the orientations of their outer shapes aligned, or a 2nd display process of displaying the objects with the orientations of the identification information given to the objects aligned.

In the 1st aspect, when the object is determined to have been divided, the 1st matching image is matched against the matching image of the object in an undivided state (the 2nd matching image); the region used for matching is therefore not narrowed as in the patent document described above, and images of the divisible medical article can be matched with good accuracy. Further, the 1st and 2nd display processes are performed on display images determined to show the same type of object, so the matching result can be confirmed easily. The 1st and 2nd matching images may be the 1st and 2nd captured images used as they are, or may be images obtained by applying image processing (for example, enlargement, reduction, rotation, region extraction, or region emphasis) to the 1st and 2nd captured images. In addition, a "divisible medical article" need only be an article that can be divided; whether the article itself is used does not matter (for example, a package containing tablets may be divided, in which case the package itself is not taken but the tablets are).

In the object matching device according to the 2nd aspect, in the 1st aspect, the 1st image acquisition unit acquires the 1st matching images for the front surface and the back surface of the object, the 2nd image acquisition unit acquires the 2nd matching images for the front surface and the back surface of the object in the undivided state, the matching unit performs the matching for the front surface and the back surface of the object, and the display control unit selects the 1st matching image for the front surface and/or the back surface of the object and displays the selected 1st matching image on the display device.

In the object matching device according to the 3rd aspect, in the 1st or 2nd aspect, the display control unit aligns the orientations of the outer shapes in the 1st display process by aligning the orientations of the dividing lines generated by dividing the objects. An example of the dividing line is the straight line generated when a circular object is divided, but the dividing line is not limited to this. In the 1st and 2nd display processes, it is preferable to align not only the orientations of the dividing lines but also the positions of the objects with respect to the dividing lines (for example, aligning the objects on the same side, up, down, left, or right, of the dividing lines).

The object matching device according to the 4th aspect is the device according to any one of the 1st to 3rd aspects, wherein the matching unit performs the matching based on the outer shape and/or the identification information of the object.

The object matching device according to the 5th aspect is configured such that, in any one of the 1st to 4th aspects, the matching unit extracts a region including a part of the object and/or the identification information from the 1st matching image and performs the matching on that region.

The object matching device according to the 6th aspect is configured such that, in any one of the 1st to 5th aspects, the division determination unit determines that the object has been divided when the outer shape of the object is a predetermined shape. The determination can be made, for example, based on the distribution of pixels representing the object in the captured image. Examples of the "predetermined shape" include a semicircle, a semi-ellipse, and a rectangle with an aspect ratio within a predetermined range, but the shape is not limited to these examples.

The object matching device according to the 7th aspect is configured such that, in any one of the 1st to 6th aspects, the matching unit performs the matching using, as the 1st matching image and/or the 2nd matching image, an image that has undergone processing to emphasize the identification information. According to the 7th aspect, the matching can be performed with good accuracy.

The object matching device according to the 8th aspect is the device according to any one of the 1st to 7th aspects, wherein the medical article is any one of a tablet, a package containing tablets, and a package containing capsule-type drugs. The shape of the tablet is not particularly limited. The package may be a sheet-like package in which tablets or capsule-type drugs are stored and from which they are taken out one tablet or one capsule at a time.

The object matching device according to the 9th aspect is the device according to any one of the 1st to 8th aspects, wherein the identification information includes printing and/or imprinting given to the object. The printing and imprinting can consist of characters, numerals, symbols, figures, and combinations thereof, and may be colored.

In order to achieve the above object, an object matching method according to the 10th aspect of the present invention includes: a 1st image acquisition step of acquiring a 1st matching image based on a 1st captured image of an object that is a divisible medical article; a 2nd image acquisition step of acquiring a 2nd matching image based on a 2nd captured image of the object in an undivided state; a division determination step of determining whether or not the object captured in the 1st captured image has been divided; a matching step of matching the 1st matching image against the 2nd matching image when it is determined that the object has been divided; and a display control step of displaying, on a display device, based on the result of the matching, display images among the 1st matching images that are determined to show the same type of object, and performing a 1st display process of displaying the objects with the orientations of their outer shapes aligned, or a 2nd display process of displaying the objects with the orientations of the identification information given to the objects aligned.

According to the 10th aspect, images of a divisible medical article can be matched with good accuracy, as in the 1st aspect, and the matching result can be confirmed easily.

The object matching method according to the 10th aspect may further include configurations similar to those of the 2nd to 9th aspects. In addition, a program that causes an object matching device or a computer to execute the object matching method of these aspects, and a non-transitory storage medium storing computer-readable code of the program, can also be cited as aspects of the present invention.

Effects of the invention

As described above, the object matching device and the object matching method according to the present invention can match images of a divisible medical article with good accuracy and allow the matching result to be confirmed easily.

Drawings

Fig. 1 is a diagram showing the configuration of the tablet identification device according to embodiment 1.

Fig. 2 is a diagram showing a state in which the packaged medicines are transported.

Fig. 3 is a side view showing the arrangement of the light source and the camera.

Fig. 4 is a plan view showing the arrangement of the light source and the camera.

Fig. 5 is a diagram showing a configuration of the processing unit.

Fig. 6 is a diagram showing information stored in the storage unit.

Fig. 7 is a flowchart showing the drug identification method according to embodiment 1.

Fig. 8 is a diagram showing how it is determined whether or not a tablet is a divided tablet.

Fig. 9 is another diagram showing how it is determined whether or not a tablet is a divided tablet.

Fig. 10 is a diagram showing a case of template matching.

Fig. 11 is a diagram showing how a half-tablet region is cut out.

Fig. 12 is a diagram showing how the region used for template matching is enlarged.

Fig. 13 is a diagram showing extraction results of the drug region image, the mask image, and the imprint.

Fig. 14 is a diagram showing how the matching score is calculated.

Fig. 15 is a diagram showing an example of a tablet.

Fig. 16 is a diagram showing an example of display based on the 1 st display processing.

Fig. 17 is a diagram showing an example of display based on the 2 nd display processing.

Fig. 18 is a diagram for explaining matching for an elliptical tablet.

Fig. 19 is another diagram for explaining matching for an elliptical tablet.

Fig. 20 is another diagram for explaining matching for an elliptical tablet.

Fig. 21 is a diagram for explaining the calculation of matching scores for an elliptical tablet.

Fig. 22 is a side view showing the arrangement of the light source and the camera in the case of drug discrimination.

Fig. 23 is a plan view showing the arrangement of the light source and the camera in the case of drug discrimination.

Fig. 24 is a view showing the rotation angle of a rectangle circumscribing the divided tablet.

Fig. 25 is a diagram showing a state in which the circumscribed rectangle has been stood upright.

Fig. 26 is a perspective view showing an example of a sheet containing tablets (undivided state).

Fig. 27 is a front view showing an example of a sheet containing tablets (undivided state).

Fig. 28 is a diagram showing an example of a sheet containing tablets (divided state).

Fig. 29 is a diagram showing an example of the 1 st display process.

Fig. 30 is a diagram showing an example of the 2 nd display processing.

Detailed Description

Hereinafter, embodiments of the object matching device and the object matching method according to the present invention will be described in detail with reference to the drawings.

< embodiment 1 >

Fig. 1 is a diagram showing the configuration of a tablet identification device 10 (object matching device) according to embodiment 1 of the present invention. The tablet identification device 10 includes a processing unit 100, a storage unit 300, a display unit 400, an operation unit 500, and a conveyance mechanism 240; an illumination unit 200, a camera 210 (1st image acquisition unit), a camera 220 (1st image acquisition unit), and a prescription reader 230 are connected to the processing unit 100.

The cameras 210 and 220 are digital cameras. As shown in fig. 2, the camera 210 is disposed vertically above (+Z side in fig. 3) a drug package band 700 in which packaging bags 702 (drug packages) are formed continuously, and the camera 220 is disposed vertically below (-Z side in fig. 3) the drug package band 700; the tablets 800 (objects) in a packaging bag 702 are photographed from above and below (from a plurality of different directions), and images of their front and back surfaces are acquired (1st captured images). The packaging bags 702 (the drug package band 700) are conveyed in the +X direction of fig. 2 (along the longitudinal axis of the drug package band 700, in the direction of the arrow in fig. 2) by the conveyance mechanism 240, and at the time of imaging the plurality of light sources 202 provided in the illumination unit 200 illuminate the packaging bag 702 from four directions. In fig. 3, the distances (d1, d2, d3, and d4) between the plurality of light sources 202 and the imaging optical axis 1001 of the cameras 210 and 220 are all equal; that is, the light sources 202 are equidistant from the imaging optical axis 1001 (d1 = d2 = d3 = d4).

Prescription information is read by the prescription reader 230. For example, information such as the patient's name, the prescribed drugs, and their quantities is read by OCR (Optical Character Recognition) from a prescription written on paper. When a barcode or the like indicating the prescribed drug information is printed on the prescription, the prescribed drugs and information such as their quantities may be read from the barcode. Alternatively, a user such as a doctor or pharmacist may read the prescription and enter the prescription information (prescription data) through input devices such as the keyboard 510 and mouse 520 of the operation unit 500.

< Structure of processing section >

The processing unit 100 identifies the drugs packaged in the packaging bags 702 based on the images captured by the cameras 210 and 220, the information read by the prescription reader 230, and the like. As shown in fig. 5, the processing unit 100 includes a prescription data acquisition unit 100A, an image acquisition unit 100B (1st image acquisition unit), a main image acquisition unit 100C (2nd image acquisition unit), a tablet determination unit 100D (division determination unit), an image generation unit 100E (1st image acquisition unit), a tablet identification unit 100F (matching unit), a display control unit 100G (display control unit), a complete tablet identification unit 100H (division determination unit), and a preprocessing unit 100I (matching unit). The processing unit 100 also includes a CPU 110 (Central Processing Unit), a ROM 120 (Read Only Memory), and a RAM 130 (Random Access Memory).

The functions of the processing unit 100 described above can be realized using various processors. The various processors include, for example, a CPU (Central Processing Unit), which is a general-purpose processor that executes software (a program) to realize various functions. They also include a GPU (Graphics Processing Unit), which is a processor specialized for image processing, and a programmable logic device (PLD) such as an FPGA (Field-Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture. Furthermore, a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor with a circuit configuration designed specifically for executing a specific process, is also included among the various processors described above.

The function of each unit may be realized by one processor, or by a plurality of processors of the same or different types (for example, a plurality of FPGAs, a combination of a CPU and an FPGA, or a combination of a CPU and a GPU). A plurality of functions may also be realized by one processor. As a first example of configuring a plurality of functions with one processor, there is a mode in which one processor is configured by a combination of one or more CPUs and software, as typified by a computer, and this processor realizes the plurality of functions. As a second example, there is a mode of using a processor that realizes the functions of an entire system with a single IC (Integrated Circuit) chip, as typified by a System on Chip (SoC). In this way, the various functions are configured as a hardware structure using one or more of the various processors described above. More specifically, the hardware structure of these various processors is an electric circuit (circuitry) combining circuit elements such as semiconductor elements. These electric circuits may be circuits that realize the functions described above using logical OR, logical AND, logical NOT, exclusive OR, and logical operations combining them.

When the above processor or electric circuit executes software (a program), processor- (computer-) readable code of the software to be executed is stored in a non-transitory storage medium such as the ROM 120, and the processor refers to the software. The software stored in the non-transitory storage medium includes a program for executing the object matching method according to the present invention and the tablet identification method described later. The code may be stored in a non-transitory storage medium other than the ROM, such as various magneto-optical storage devices and semiconductor memories. When processing using the software is performed, for example, the RAM 130 is used as a temporary storage area, and data stored in an EEPROM (Electrically Erasable and Programmable Read Only Memory), not shown, can be referred to. The processing by these processors or electric circuits is supervised by the CPU 110.

< Structure of memory portion >

The storage unit 300 is configured by a non-transitory storage medium, such as a CD (Compact Disc), a DVD (Digital Versatile Disc), a hard disk, or various semiconductor memories, together with its control unit, and stores the information shown in fig. 6 in association with one another. The prescription data 300A is the prescription information read by the prescription reader 230, or information that the user has entered or edited based on the prescription. The prescription data 300A may contain, for example, the name of a specific drug entered based on the generic name of a drug written in the prescription, or a drug name after substitution (for example, between a brand-name drug and its generic). The captured images 300B (1st captured images) are images of drugs captured by the cameras 210 and 220, including images of the front and back of the drugs. When a captured image contains a plurality of drugs (tablets), an image in which the region of a single drug is extracted from the captured image may be used as the captured image 300B. The matching image 300C (1st matching image based on the 1st captured image) is an image, generated from the captured image, that includes the tablet region of a tablet determined to be a divided tablet, and is used for template matching with the main image. The mask images 300D include a 1st mask image used for tablet region extraction and a 2nd mask image used for imprint and/or print region extraction. These mask images may be binarized. The main image 300E (2nd matching image based on the 2nd captured image) is an image of the front or back surface of a tablet (object) in an undivided state, and serves as the reference in template matching. The determination result 300F is the result of determining the type and the face of the tablet shown in the matching image.

< Structure of display part and operation part >

The display unit 400 includes a display 410 (display device) and can display input images, processing results, information stored in the storage unit 300, and the like. The operation unit 500 includes a keyboard 510 and a mouse 520 as input and/or pointing devices, and through these devices and the screen of the display 410 the user can perform the operations needed to execute the object matching method according to the present invention and the tablet identification method described later, such as issuing an imaging instruction or a tablet identification instruction and selecting the display mode (1st display process or 2nd display process). The display 410 may be configured as a touch panel and operated through the touch panel.

< processing of the tablet identification method >

The processing of the object matching method (tablet identification method) performed by the tablet identification device 10 configured as described above will be described with reference to the flowchart of fig. 7.

The prescription data acquisition unit 100A inputs prescription information via the prescription reader 230 (step S100: prescription data acquisition step). The input prescription information may be acquired directly as prescription data, or information entered or edited by the user based on the prescription through the operation unit 500 may be acquired as prescription data. The prescription data acquisition unit 100A may also, in response to user operation, input related information such as drug characteristics visually confirmed by the user (for example, the type, shape, and color of a tablet) or information such as the drug name, quantity, and administration method written in a notebook such as a so-called "medicine notebook", and add this information to the prescription data or use it in place of the prescription data.

The image acquisition unit 100B controls the cameras 210 and 220 to acquire captured images (1st captured images) of the drugs (tablets) packaged in the packaging bag 702, taken from a plurality of different directions (the ±Z directions in figs. 2 and 3; vertically above and below) (step S110: image acquisition step, 1st image acquisition step). At this time, the illumination unit 200 and the light sources 202 illuminate the packaging bag 702.

The main image acquisition unit 100C acquires main images of the front and back surfaces of the undivided tablets (objects) based on the acquired prescription data (step S120: main image acquisition step, 2nd image acquisition step). The main image acquisition unit 100C may acquire the main images 300E stored in the storage unit 300, or may acquire them from an external server, database, or the like via a communication line (not shown). The main image acquisition unit 100C may also capture images (2nd captured images) of undivided tablets (objects) via the cameras 210 and 220 and the image acquisition unit 100B, apply the necessary image processing, and acquire the results as main images.

< determination of divided tablets >

The tablet determination unit 100D (division determination unit) determines whether or not a tablet (object) captured in the captured image (1st captured image) is a divided tablet (whether or not it has been divided) (step S130: tablet determination step, division determination step). This determination can be made, for example, by methods 1 and 2 below; with these methods, the tablet determination unit 100D can determine that tablets not identified as "complete tablets" and tablets judged "unknown" are "divided tablets". In embodiment 1, the tablet is one form of the "divisible medical article".

< method 1: determination based on the captured image and the main image >

The complete tablet identification unit 100H (division determination unit) identifies the type of undivided tablets (complete tablets) based on the captured image and the main image. Specifically, complete tablets are identified by template matching between the captured image and the main image. The template matching can be performed by a known method (for example, the method described in Japanese Patent Application Laid-Open No. 2014-67342).

< method 2: determination based on the mask image >

The tablet determination unit 100D (division determination unit) can also make the determination based on the symmetry (or asymmetry) of the pixel distribution in a mask image, as follows. A complete tablet is generally symmetric in both the horizontal and vertical directions (once its orientation is aligned horizontally or vertically), whereas a divided tablet exhibits the asymmetry described below. Fig. 8 shows how a binarized mask image of a tablet extracted from the captured image is generated using a hierarchical network constructed by machine learning (for example, a neural network for region extraction; the 1st hierarchical network), the rotation angle is estimated from the orientation of the rectangle circumscribing the mask, and the mask is rotated by that angle. Specifically, parts (a) and (b) of fig. 8 show the rectangle circumscribing the tablet region (the region of pixel value 255) of a binarized mask image, for a divided tablet (1/2 of a complete tablet) and for an upright tablet (a tablet standing with its cut surface in contact with the placement surface), respectively. In this state, as shown in fig. 9, the rectangular region can be split into a plurality of regions (for example, two in the horizontal direction and two in the vertical direction, four regions in total), the proportion of white pixels (pixels of value 255) in each region can be calculated, and the determination can be made from the resulting distribution of pixel values. Specifically, a divided tablet (half tablet) is asymmetric between the +X side and the -X side and symmetric between the +Y side and the -Y side, as shown in part (a) of fig. 9, whereas an upright tablet is substantially symmetric both between the +X and -X sides and between the +Y and -Y sides, as shown in part (b) of fig. 9. The tablet determination unit 100D can therefore determine that a tablet is divided (a half tablet) if the +X and -X sides are asymmetric, and that it is upright if they are symmetric (excluding tablets already determined to be complete tablets). That is, a tablet (object) is determined to have been divided when its outer shape is a predetermined shape (one having the symmetry properties described above). The neural network for region extraction can be constructed by machine learning, for example with a conditional GAN (Generative Adversarial Network), using separately created mask images as training data.
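As an illustration of this symmetry test, the following Python sketch splits a binarized, upright mask into left/right and top/bottom halves and compares white-pixel ratios. The function name, the simplification to two halves per axis (the text uses four regions), and the threshold `tol` are assumptions, not from the source.

```python
import numpy as np

def looks_divided(mask: np.ndarray, tol: float = 0.15) -> bool:
    """Heuristic division check on an upright, binarized tablet mask (0/255).

    A half tablet is asymmetric across one axis (chord side vs. arc side)
    and symmetric across the other; a complete or upright tablet is
    roughly symmetric across both.
    """
    ys, xs = np.nonzero(mask)
    sub = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1] > 0
    h, w = sub.shape
    left, right = sub[:, :w // 2].mean(), sub[:, w - w // 2:].mean()
    top, bottom = sub[:h // 2, :].mean(), sub[h - h // 2:, :].mean()
    x_asym = abs(left - right) > tol
    y_asym = abs(top - bottom) > tol
    return x_asym != y_asym  # asymmetric along exactly one axis -> half tablet
```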

< summary of matching processing >

An outline of the matching processing (matching the 1st matching image against the 2nd matching image, i.e. the main image) in the tablet identification device 10 is as follows. Part (a) of fig. 10 is an example of a captured image showing a divided tablet; a binarized mask image is generated from such a captured image using the neural network for region extraction (1st hierarchical network) (part (b) of the figure). The captured image and the binarized mask image are then multiplied, and preprocessing (imprint extraction, binarization, inversion, and the like) is applied to generate the matching image (part (c) of the figure). Next, a matching score is calculated while the matching image and the main image are rotated relative to each other, the rotation angle at which the score is maximized is obtained (part (d) of the figure), and the tablet region image is rotated back by that angle so that the orientation of the printing or imprinting matches the main image (part (e) of the figure; used in the 2nd display process). This matching processing is described in detail below.

< Generation of image for matching >

The image generation unit 100E (1st image acquisition unit) generates matching images (1st matching images) for the divided tablets contained in the captured images (1st captured images) (step S140: image generation step, 1st image acquisition step). For the images captured from above and below by the cameras 210 and 220, the image generation unit 100E generates matching images for identifying the front and back surfaces of the tablet, respectively, and associates them with each other. The generation of the matching image is described below.

The image generation unit 100E generates a mask image with the neural network for region extraction (1st hierarchical network) and applies preprocessing such as binarization, shaping, and closing to the generated mask image. The neural network for region extraction used for the divided-tablet determination described above can be used here. The image generation unit 100E then multiplies the preprocessed mask image by the captured image to extract the tablet region and remove noise and the like. Further, the image generation unit 100E obtains a rectangle containing the tablet region (for example, the rectangle circumscribing the tablet region) and cuts out, from the captured image, a square range containing that rectangle as the matching image. It is also preferable to calculate the rotation angle of the rectangle and stand it upright. Fig. 11 shows an example of the relationship between the divided-tablet region, the rectangle, and the matching image generated in this way (divided-tablet region 830, rectangle 832, matching image 834).
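A minimal sketch of this step using OpenCV, assuming an 8-bit captured image and a binarized region mask; the function and variable names are illustrative, and the upright-rotation step is omitted.

```python
import cv2
import numpy as np

def make_matching_image(captured: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Multiply the captured image by the cleaned region mask, then cut out
    a square containing the rectangle that bounds the tablet region."""
    mask8 = (mask > 0).astype(np.uint8)
    region = cv2.bitwise_and(captured, captured, mask=mask8)  # extract tablet region
    x, y, w, h = cv2.boundingRect(mask8)                      # rectangle containing it
    side = max(w, h)                                          # square covering the rectangle
    cx, cy = x + w // 2, y + h // 2
    x0 = max(cx - side // 2, 0)
    y0 = max(cy - side // 2, 0)
    return region[y0:y0 + side, x0:x0 + side]
```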

< preprocessing of images for matching >

The preprocessing unit 100I (matching unit) applies, as preprocessing to the matching image and/or the main image, at least one of region enlargement, binarization, image inversion, extraction of the print and/or imprint region (a region including a part of the identification information), and print and/or imprint emphasis (processing that emphasizes the identification information) (step S150: preprocessing step, matching step). The matching image and the main image should preferably be treated consistently as to whether binarization, image inversion, print/imprint region extraction, and print/imprint emphasis are applied. It is also preferable to match the sizes of the matching image and the main image by enlarging or reducing them.

< region enlargement >

In the region enlargement performed as preprocessing, the preprocessing unit 100I enlarges the region so that the matching image contains the circumscribed circle of the tablet region. For example, when the side length of the matching image 834 before enlargement, shown in part (a) of fig. 12, is "a", part of the circumscribed circle 835 of the tablet region extends beyond the matching image 834, as shown in part (b) of fig. 12. Therefore, as shown in part (c) of the figure, enlarging each side by 0.25a to a side length of "1.5a" expands the region to contain the circumscribed circle, giving the matching image 836 shown in part (d) of fig. 12. Part (e) of fig. 12 shows an example of a main image to which the imprint-extraction preprocessing has been applied (main image 838).

The preprocessing unit 100I may also cut out from the matching image 836 a square region of the circumscribed circle 835 with a margin added, enlarge or reduce it to match the size of the main image, and use it as the matching image. The margin can be set, for example, in the range of (1/10) x a to (1/12) x a around the circumscribed circle ("a" being the side length of the square containing the rotated rectangle described above). The margin is provided by enlarging the region to allow for tilt errors and the like.
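The 1.5a enlargement can be written directly; a sketch assuming a square grayscale matching image on a black background (the padding value and function name are assumptions).

```python
import numpy as np

def enlarge_to_circumscribed(match_img: np.ndarray) -> np.ndarray:
    """Pad a square matching image of side a by 0.25*a on every edge,
    giving side 1.5*a so the tablet region's circumscribed circle fits."""
    a = match_img.shape[0]
    pad = int(round(0.25 * a))
    return np.pad(match_img, pad, constant_values=0)
```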

The preprocessing unit 100I may apply binarization, image inversion, and imprint extraction to the matching image in addition to, or instead of, the region enlargement described above. Imprint extraction can be performed by multiplication with a mask image generated using a neural network for imprint extraction (2nd hierarchical network). The tablet identification unit 100F performs template matching using the preprocessed matching image 836 and the main image. The 2nd hierarchical network can be constructed by machine learning using images of extracted print characters and/or imprint characters as training data.

< example of tablet region image, mask image, and imprint extraction result >

Fig. 13 shows examples of tablet region images, mask images for region extraction and imprint extraction, and imprint extraction results. The column of reference numeral 810 shows images (the front or back of a tablet) obtained by cutting out the rectangular portion described above (corresponding to rectangle 832 in fig. 11) from the captured image, and the column of reference numeral 812 shows the opposite surface (the back or front of the tablet) of the images in column 810. The columns of reference numerals 814 and 816 show the binarized mask images for imprint extraction (front or back; corresponding to the images of numerals 810 and 812), and the columns of reference numerals 818 and 820 show the binarized mask images for tablet region extraction (front or back; corresponding to the images of numerals 810 and 812). To eliminate the influence of reflections from the inclined side surface and the cut surface, the mask image for imprint extraction is made smaller than the mask image for tablet region extraction. The columns of reference numerals 822 and 824 show the results (front or back) of extracting the imprint from the images shown in columns 810 and 812. In these images, images in the same row belong to the same tablet.

< calculation of matching score >

The tablet identification unit 100F (matching unit) calculates a matching score while the matching image (1st matching image) and the main image (2nd matching image) are rotated relative to each other, repeating the calculation as the rotation angle changes (step S160: tablet identification step, matching step). Fig. 14 is a conceptual diagram showing how matching scores are calculated between the matching images for the front and back of the captured tablet and the main images for the front and back of the tablet. Images 802A and 802B are images of a divided tablet; one is the front and the other is the back. Main images 804A and 804B are the main images for the front and back surfaces, respectively. In this situation, the tablet identification unit 100F calculates matching scores S10 to S40 at each rotation angle. Matching scores S10 and S20 are the scores of images 802A and 802B (matching images) against main image 804A (front), and matching scores S30 and S40 are the scores of images 802A and 802B against main image 804B (back).

In calculating the matching score, (a) the score may be calculated while rotating the images relative to each other one step at a time with the centers of the matching image and the main image aligned, or (b) a plurality of images with different rotation angles may be created in advance and the score calculated while moving them. Making the rotation-angle step small (for example, using 360 images at 1-degree (deg) intervals) allows accurate matching but can be time-consuming. In that case, the processing can be sped up by first performing coarse matching with a small number of images at a large angle step (for example, 36 images at 10-degree intervals) and then matching with a finer step (for example, 10 images at 1-degree intervals) in the vicinity of the angle where the coarse matching score was large.
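The coarse-to-fine search might look like the following sketch (OpenCV; it assumes the matching image and main image are same-sized single-channel arrays, and the function names and the normalized-correlation choice are illustrative).

```python
import cv2

def coarse_to_fine_angle(match_img, master, coarse=10, fine=1):
    """Score every `coarse` degrees, then rescan at `fine`-degree steps
    around the best coarse angle; returns (best angle, its score)."""
    h, w = match_img.shape[:2]
    center = (w / 2, h / 2)

    def score_at(angle):
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(match_img, m, (w, h))
        return float(cv2.matchTemplate(rotated, master, cv2.TM_CCORR_NORMED).max())

    best_coarse = max(range(0, 360, coarse), key=score_at)
    candidates = range(best_coarse - coarse, best_coarse + coarse + 1, fine)
    best = max(candidates, key=score_at)
    return best % 360, score_at(best)
```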

< calculation of matching score considering characteristics of divided tablet >

In typical template matching, the normalized correlation score is generally used. Considering the characteristics of divided tablets, however, the following matching score (correction score) is preferable. Specifically, the tablet identification unit 100F preferably calculates, as the matching score, (normalized correlation score in template matching) x (number of pixels of the image representing the print and/or imprint of the matching image) x (number of pixels of the image representing the print and/or imprint of the main image), and identifies the type and face of the tablet from this score. The normalized correlation score is multiplied by the two pixel counts so that the score grows with the print and/or imprint area; even when the normalized correlation scores are equal, this weights complex printing and/or imprinting more heavily and improves matching accuracy. In calculating this correction score, the "number of pixels of the image representing the print and/or imprint" can be, for example, the number of white pixels in the images shown in the columns of reference numerals 822 and 824 of fig. 13 (the print/imprint images of the divided tablet) and the number of white pixels in part (e) of fig. 12 (the print/imprint image of the main image). For the main image, the pixels of the dividing-line portion may be excluded when counting the print/imprint pixels.
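In code, the correction score is a straightforward product; a sketch assuming binarized print/imprint images (0/255) for the matching image and the main image.

```python
import numpy as np

def correction_score(corr: float, match_marks: np.ndarray,
                     master_marks: np.ndarray) -> float:
    """(normalized correlation) x (print/imprint pixel count of the matching
    image) x (print/imprint pixel count of the main image); larger, more
    complex markings are weighted more heavily."""
    return corr * int(np.count_nonzero(match_marks)) * int(np.count_nonzero(master_marks))
```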

< determination of the type of tablet >

Since divided tablets of more than one type are rarely packaged in a single packaging bag, matching against one type of main image is usually sufficient. When divided tablets of a plurality of types are present, however, the tablet identification unit 100F can compare the maxima of the matching scores for the respective main images and determine that "the matching image shows the same tablet as the main image whose maximum matching score is largest".

< determination of the rotation angle >

The tablet identification unit 100F identifies the angle at which the matching score is maximized as the rotation angle of the tablet.

< determination of front and back >

The tablet identification unit 100F identifies the face (front or back) of the tablet based on, for example, the following criteria; a short code sketch applying them follows.

(1) If (maximum of matching score S10) > (maximum of matching score S30) and (maximum of matching score S20) ≤ (maximum of matching score S40), then image 802A shows the front and image 802B shows the back.

(2) If (maximum of matching score S20) > (maximum of matching score S40) and (maximum of matching score S10) ≤ (maximum of matching score S30), then image 802A shows the back and image 802B shows the front.
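Read literally, the two criteria translate into the following sketch (names are illustrative; the source does not state what happens when neither rule fires, so that case returns None here).

```python
def assign_faces(s10: float, s20: float, s30: float, s40: float):
    """Apply criteria (1) and (2) to the maxima of matching scores
    S10..S40; returns (face of image 802A, face of image 802B)."""
    if s10 > s30 and s20 <= s40:
        return ("front", "back")   # criterion (1)
    if s20 > s40 and s10 <= s30:
        return ("back", "front")   # criterion (2)
    return None                    # undecided (not specified in the source)
```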

The tablet identification unit 100F determines whether processing has finished for all the divided tablets (step S170) and repeats steps S140 to S160 until the determination is affirmative. When the determination in step S170 is affirmative, processing for one packaging bag ends and the flow proceeds to step S180; steps S110 to S170 are repeated until processing has finished for all the packaging bags. When the determination in step S180 is affirmative, the flow proceeds to step S190.

< display processing >

Based on the results of identifying the type and face of each tablet, the display control unit 100G causes the display 410 (display device) to show the main image together with the matching images identified as showing the same tablet and the same face as the main image (the display images, among the 1st matching images, determined to show the same type of tablet (object)) (step S190: display control step). In step S190, the 1st display process, which displays the divided tablets with the orientation of their chords (straight-line portions) aligned, or the 2nd display process, which displays the divided tablets with the orientation of their printing and/or imprinting aligned, is performed. The display control unit 100G may perform either display process according to the user's instruction, or may perform a predetermined one regardless of the user's instruction. In the 1st and 2nd display processes, the captured image or the tablet region image may be displayed instead of the matching image. When the matching image is displayed, it may be an image before or after preprocessing.

< 1st display process >

In the 1st display process, the matching images are displayed with the orientation of their chords (an example of the orientation of the outer shape of the object) aligned. The chord of a tablet is an example of the dividing line generated by dividing the tablet as the object; when a tablet is divided, the straight edge near the cut usually corresponds to the dividing line. The display control unit 100G can identify the chord in the matching image or the mask image as, for example, the edge with small curvature, the short edge, or the edge near the center of the circumscribed circle. To calculate the chord orientation, the display control unit 100G obtains the coordinates of the vertices of the rectangle 840 circumscribing the divided tablet (half tablet) and the rotation angle θ of the circumscribed rectangle from the X axis (horizontal direction), as shown in fig. 24. The display control unit 100G then rotates the circumscribed rectangle by (90 - θ) degrees to stand it upright, obtaining an image in which the divided tablet is upright (the state of either rectangle 841 or rectangle 842 in fig. 25). The display control unit 100G examines the pixel distribution of the upright divided tablet and determines whether the bright (tablet) pixels lie mostly in one half, as in rectangle 841, or in the other half, as in rectangle 842 (this can be judged from the distribution of white pixels in the mask image, as described in method 2 of the divided-tablet determination). Rotating the image 90 degrees clockwise in the state of rectangle 841, and 90 degrees counterclockwise in the state of rectangle 842, aligns the chord orientations horizontally. The chords may instead be aligned vertically or obliquely in the display. When a plurality of tablets is displayed, the orientation of the arcs (the curved portions of the divided tablets), another example of the orientation of the outer shape of the object, may also be aligned in addition to the chord orientation of each tablet (see fig. 16).
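A sketch of the final chord-alignment decision, assuming the half-tablet image and its mask have already been stood upright as described; which half corresponds to "chord on the left" depends on the figure, so that mapping is an assumption.

```python
import numpy as np

def align_chord_horizontal(upright_img: np.ndarray,
                           upright_mask: np.ndarray) -> np.ndarray:
    """Rotate the upright half tablet +/-90 degrees so its chord faces a
    fixed direction, based on which half of the mask is brighter."""
    w = upright_mask.shape[1]
    right_heavier = upright_mask[:, w // 2:].sum() > upright_mask[:, :w // 2].sum()
    k = -1 if right_heavier else 1   # np.rot90: k=-1 rotates clockwise
    return np.rot90(upright_img, k)
```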

< 2nd display process >

In the 2nd display process, the matching images are displayed with the orientation of their printing and/or imprinting (an example of the identification information given to the object) aligned. The display control unit 100G can align the print and/or imprint orientations by rotating each matching image back by the rotation angle determined in the processing described above.
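The 2nd display process is simply the inverse rotation of the angle found during matching; a sketch with OpenCV (names illustrative).

```python
import cv2

def align_marking(match_img, angle_deg: float):
    """Rotate a matching image back by the matching rotation angle so the
    print/imprint orientation agrees with the main image."""
    h, w = match_img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_deg, 1.0)
    return cv2.warpAffine(match_img, m, (w, h))
```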

< example of display processing >

Specific examples of the 1st and 2nd display processes described above are given here. The target tablet is "Valsartan Tablet 80 mg FFP", shown in fig. 15. Part (a) of fig. 15 is a front view (front surface) of the tablet, which bears printing and a dividing line. Part (b) of the figure is a side view. The relationship between the direction of the dividing line and the direction of the printing differs from tablet to tablet; the dividing line may run horizontally or vertically with respect to the printing. Parts (c) and (d) of fig. 15 are rear views (back surface). The back surface is printed in the same way as the front, but the print direction is not fixed between front and back. For example, for the front surface shown in part (a) of fig. 15, the back surface may be as in part (c) (same print direction as the front) or as in part (d) (print direction different from the front).

Fig. 16 shows an example of the 1st display process for the above tablet. Part (a) of the figure shows the front of the main image (the area left of the white line) and the fronts of the matching images (the area right of the white line), and part (b) shows the back of the main image (left of the white line) and the backs of the matching images (right of the white line). In fig. 16, images at the same position in parts (a) and (b) show the same tablet. Fig. 17 shows an example of the 2nd display process, with the same layout: part (a) shows the front of the main image and the fronts of the matching images, and part (b) shows the back of the main image and the backs of the matching images. In fig. 17 as well, images at the same position in parts (a) and (b) show the same tablet. In the 1st and 2nd display processes, the display control unit 100G selects the matching images (1st matching images) for the front and/or back of the tablet (object) and displays them on the display 410 (display device). Only the front images, only the back images, or both may be displayed. The main image may be displayed together with them, or its display may be omitted.

As described above, according to embodiment 1, divided tablets can be matched with good accuracy, and the matching result can be confirmed easily through the 1st and 2nd display processes.

< matching when an elliptical tablet is divided >

Matching when an elliptical tablet is divided will now be described. Fig. 18 is an example of the main images of an elliptical tablet; part (a) shows the front of the tablet, with "A" and "B" imprinted (or printed) on the left and right sides of the front, respectively. Part (b) shows the back of the tablet, with "C" and "D" imprinted (or printed) on the left and right sides of the back, respectively. The tablet has a dividing line in the center. Fig. 19 shows the tablet of fig. 18 after division: parts (a1) and (a2) show the left and right halves of the front, and parts (b1) and (b2) show the left and right halves of the back. Among such elliptical tablets, the correspondence between front and back differs from tablet to tablet (the imprint orientations on the front and back may differ, as with the valsartan tablet described above). For example, the imprint orientations (vertical direction) on front and back may be the same, as in parts (a1) and (a2) of fig. 20, or different, as in parts (b1) and (b2) of the figure.

< calculation of matching scores for an elliptical tablet >

Fig. 21 shows the matching of the elliptical tablet shown in figs. 18 to 20. In fig. 21, the image captured from above by the camera 210 is labeled "upper", and the image captured from below by the camera 220 is labeled "lower". Any of the marks A, B, C, and D may appear in a captured image; the imprints in the captured images shown in fig. 21 are one example. In this situation, the tablet identification unit 100F calculates the following matching scores S1 to S8 (step S160 in fig. 7, corresponding to the tablet identification step). Here too, it is preferable to calculate a matching score (correction score) that takes the characteristics of divided tablets into account. In the following description, "matching score S1" may be abbreviated "S1" (and likewise for S2 to S8).

Matching score S1: matching score for captured image (captured from above) and main image (left side of surface)

Matching score S2: matching score for captured image (captured from above) and main image (right side of surface)

Matching score S3: matching score for captured image (captured from above) and main image (left side of back surface)

Matching score S4: matching score for captured image (captured from above) and main image (right side of back surface)

Matching score S5: matching score for captured image (captured from below) and main image (left side of surface)

Matching score S6: matching score for captured image (captured from below) and main image (right side of surface)

Matching score S7: matching score for captured image (captured from below) and main image (left side of back surface)

Matching score S8: matching score for captured image (captured from below) and main image (right side of back surface)

For example, the matching score S1 is the maximum score value obtained when the main image (left side of the front surface) is rotated relative to the captured image (captured from above) through rotation angles of 0 to 359 degrees and template matching is performed at each angle.
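As a concrete illustration of this rotational template matching, the following Python sketch (using OpenCV) rotates a main image through 0 to 359 degrees and keeps the best normalized cross-correlation score. The function name and the one-degree step are assumptions for illustration, not details taken from this description.

```python
import cv2


def rotational_matching_score(captured, master, step=1):
    """Rotate the master image through 0-359 degrees and return the best
    template-matching score and the angle at which it occurs.

    Assumes grayscale uint8 images at the same scale, with the master
    no larger than the captured image (a matchTemplate requirement).
    """
    h, w = master.shape
    center = (w / 2.0, h / 2.0)
    best_score, best_angle = -1.0, 0
    for angle in range(0, 360, step):
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(master, m, (w, h))
        # Normalized cross-correlation; the maximum over all positions
        # is the score for this rotation angle.
        result = cv2.matchTemplate(captured, rotated, cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        if score > best_score:
            best_score, best_angle = score, angle
    return best_score, best_angle


# S1, for example, would then be obtained as:
# s1, angle1 = rotational_matching_score(top_image, master_front_left)
```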

The possible correspondence relationships between the matching scores are (S1, S7), (S1, S8), (S2, S7), (S2, S8), (S3, S5), (S3, S6), (S4, S5), and (S4, S6). Here, for example, (S1, S7) means that "the image captured from above shows the left side of the front surface, and the image captured from below shows the left side of the back surface".

< determination of surface >

In the above situation, the tablet specifying unit 100F determines the surfaces of the divided oval tablet by the following 1st and 2nd modes. This makes it possible to match divided oval tablets accurately and to determine which surface of the tablet is which. The type of the tablet is assumed to be specified separately (including the case where only one type of divided tablet appears in the prescription data or in the captured image of the packaging bag).

(1st mode)

The tablet specifying unit 100F calculates the score T1 = S1 + S2 + S7 + S8 and the score T2 = S3 + S4 + S5 + S6, and compares the magnitudes of the scores T1 and T2. If T1 > T2, the tablet specifying unit 100F determines that "the image captured from above is the front surface and the image captured from below is the back surface"; if T1 < T2, it determines that "the image captured from above is the back surface and the image captured from below is the front surface".
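A minimal sketch of this 1st mode in Python, assuming the eight scores have already been computed; the dictionary keys and the tie handling are illustrative assumptions, since the description does not specify the behavior when T1 equals T2.

```python
def determine_faces_mode1(s):
    """1st mode: s maps 'S1'..'S8' to the matching scores defined above."""
    t1 = s["S1"] + s["S2"] + s["S7"] + s["S8"]
    t2 = s["S3"] + s["S4"] + s["S5"] + s["S6"]
    if t1 > t2:
        return "top image = front surface, bottom image = back surface"
    if t1 < t2:
        return "top image = back surface, bottom image = front surface"
    return "undetermined"  # tie: behavior not specified in the document
```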

(2nd mode)

The tablet specifying unit 100F calculates the scores T1 = S1 + S7, T2 = S1 + S8, T3 = S2 + S7, T4 = S2 + S8, T5 = S3 + S5, T6 = S3 + S6, T7 = S4 + S5, and T8 = S4 + S6, and identifies the maximum among the scores T1 to T8. If the maximum is any of the scores T1, T2, T7, and T8, the tablet specifying unit 100F determines that "the image captured from above is the front surface and the image captured from below is the back surface"; if the maximum is any of the scores T3, T4, T5, and T6, it determines that "the image captured from above is the back surface and the image captured from below is the front surface".
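The 2nd mode can be sketched in the same style. The grouping of T1, T2, T7, and T8 versus T3, T4, T5, and T6 follows the text above; the dictionary-based bookkeeping is an illustrative assumption.

```python
def determine_faces_mode2(s):
    """2nd mode: form the eight pairwise sums T1..T8 and decide by the maximum."""
    sums = {
        "T1": s["S1"] + s["S7"], "T2": s["S1"] + s["S8"],
        "T3": s["S2"] + s["S7"], "T4": s["S2"] + s["S8"],
        "T5": s["S3"] + s["S5"], "T6": s["S3"] + s["S6"],
        "T7": s["S4"] + s["S5"], "T8": s["S4"] + s["S6"],
    }
    best = max(sums, key=sums.get)
    # Grouping as stated in the text above.
    if best in ("T1", "T2", "T7", "T8"):
        return "top image = front surface, bottom image = back surface"
    return "top image = back surface, bottom image = front surface"
```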

< display processing >

In the case of the oval tablet described above as well, the display control unit 100G can perform the 1st display process, in which the chords (the straight-line portions of the divided tablets) are displayed with their orientations aligned, or the 2nd display process, in which the print and/or imprint of the divided tablets is displayed with its orientation aligned, and can show the result on the display 410 (display device) as in figs. 16 and 17. This makes it possible to confirm the matching result easily.

< case of drug identification >

In the embodiment described above, the identification of medicines packaged in sachets was mainly described as support for medication checking, but the object matching device (tablet specifying device) and the object matching method (tablet specifying method) of the present invention can also be applied to the identification of medicines brought in by a patient to a hospital, a pharmacy, or the like. When such identification is performed, the tablets 800 are placed in a container 710 such as a shallow dish instead of a sachet and photographed, as shown in the side view of fig. 22; fig. 23 is a plan view of the state shown in fig. 22. The rest of the configuration for this identification is the same as in the tablet specifying device 10 described above. The tablet specifying process can be performed in the same manner as described above, but when brought-in medicines are identified, a prescription cannot always be consulted. In that case, the prescription data acquiring unit 100A may accept, in response to a user operation, related information such as the characteristics of the medicine recognized by visual observation (for example, the type, shape, and color of the tablet) or the name, quantity, and administration method of the medicine recorded in a medication notebook, and may use this information in place of prescription data.

Even in the case of such medicine identification, divided tablets (objects that are divisible medical articles) can be matched with good accuracy by the object matching device (tablet specifying device) and the object matching method (tablet specifying method) of the present invention, and the matching result can be easily confirmed through the 1st and 2nd display processes.

< matching for packages containing tablets or capsule drugs >

In the examples above, matching was described for tablets, but the object matching device and the object matching method according to the present invention can also be applied to packages containing tablets and packages containing capsule-type medicines. For example, a so-called PTP (press-through pack) is a sheet-like package in which tablets or capsules are held between plastic and aluminum foil; a tablet or the like can be taken out one at a time by strongly pressing the plastic portion, which is molded to the shape of the tablet or capsule, so as to break the aluminum foil. A PTP sheet is generally provided with a plurality of perforations and can be divided along them. That is, the PTP sheet is another example of a divisible medical article.

Fig. 26 is a perspective view, seen from the front side, of a sheet 900 (in an undivided state) that is a PTP sheet containing tablets, and fig. 27 is a front view. Characters and numerals identifying the medicine name, the amount of active ingredient, and the like (an example of identification information; the same information is printed on the back side) are printed on the end portion 902 and the main body portion 904. By cutting (dividing) the main body portion 904 along the perforations 904B to adjust the number of tablet portions 904A, a required number of tablets 910 can be obtained. For convenience of explanation, the identification information in figs. 26 and 27 is illustrative and does not exactly reproduce the labeling of an actual product. Fig. 28 shows an example of the sheet 900 in a state in which part of the main body portion 904 has been cut off. Such cutting is performed when tablets are divided and provided to a patient in accordance with the contents of a prescription.

< matching for a package containing tablets >

Such a sheet 900 can be matched in the same manner as the tablets described above. Specifically, as shown in figs. 26 and 27, a main image (a 2nd matching image based on the 2nd captured image) is acquired for the front and back surfaces of the sheet 900 (object) in the undivided state by the camera 210, the camera 220, the image acquisition unit 100B, and the main image acquisition unit 100C, and a 1st matching image based on the 1st captured image is acquired for the front and back surfaces of the sheet 900 to be compared (in an undivided or divided state). The subsequent processing can be executed in accordance with the flowchart of fig. 7, as in the case of tablets.

< display examples for the package >

Fig. 29 shows an example of the 1st display process applied to divided sheets 900. In this example, the orientation of the perforations 904B serving as dividing lines (and the orientation of each object relative to its dividing line) can be made uniform, and the orientation of the outer shape of the sheets 900 can be made uniform. Fig. 29 is a display example for the front surface, but the back surface can be displayed in the same way (see the example of fig. 16). Fig. 30, on the other hand, shows an example of the 2nd display process applied to divided sheets 900. A sheet 900 is not necessarily cut along the perforations 904B; it may be cut with scissors or the like in a direction orthogonal to the perforations 904B, in which case sheets containing the same number of tablets do not necessarily have the same shape after cutting. In such a case, the 2nd display process can be performed so that the identification information assigned to the sheets 900 (here, the printed medicine name or the like) is displayed with its orientation aligned. Fig. 30 is a display example for the front surface, but the back surface can be displayed in the same way (see the example of fig. 17).

Although a package containing tablets (a PTP sheet) was described above with reference to figs. 26 to 30, matching can be performed in the same manner for a sheet containing capsule medicines, and the results can be displayed by the 1st and 2nd display processes. The matching and display processing can likewise be applied to the type of package in which bag-shaped storage portions, each holding a tablet individually, are formed continuously into a sheet.

Thus, even a package containing tablets or a package containing capsule-type medicines can be matched with good accuracy, as in the case of tablets, and the matching result can be easily confirmed through the 1st and 2nd display processes.

(attached note)

In addition to the above-described embodiments, the following configurations are also included in the scope of the present invention.

(attached note 1)

The tablet specifying device according to supplementary note 1 includes: an image acquisition unit that acquires captured images obtained by capturing a tablet from a plurality of different directions; a main image acquisition unit that acquires main images of the front and back surfaces of an undivided tablet; a tablet determination unit that determines whether or not the tablet captured in the captured images is a divided tablet; an image generation unit that generates a matching image including the tablet region from a captured image of a tablet determined to be a divided tablet; a tablet specifying unit that specifies the type and surface of the tablet represented by the matching image by template matching between the matching image and the main image; and a display control unit that displays, on a display device, based on the specifying result, the main image and the matching images determined to represent the same tablet and the same surface as the main image, and that performs a 1st display process of displaying the divided tablets with the orientations of their chords (straight-line portions) aligned, or a 2nd display process of displaying the divided tablets with the orientations of their print and/or imprint aligned.

With the configuration of supplementary note 1, the matching result can be displayed either by the 1st display process, in which the orientations of the chords (straight-line portions) of the divided tablets are aligned, or by the 2nd display process, in which the orientations of the print and/or imprint of the divided tablets are aligned, so the user can easily grasp the result of specifying the type and surface of the tablet (the matching result) visually. In addition, since template matching is performed using the main image of the undivided tablet, the possibility of erroneous matching caused by merely partial agreement can be reduced, and correct matching can be performed. Thus, with the configuration of supplementary note 1, divided tablets can be matched with good accuracy, and the matching result can be easily confirmed.

In the configuration of supplementary note 1, it is preferable that the image acquisition unit acquires images captured from a plurality of opposing directions, and more preferable that it acquires images of the front and back surfaces of the tablet captured from, for example, vertically above and below. The main image may be acquired based on prescription data (information on the medicines described in the prescription, and information entered by a doctor, pharmacist, or the like based on that information). The "specifying" of tablets can be performed in medication checking, in the identification of medicines brought in by a patient, and the like.

(attached note 2)

In the tablet specifying device according to supplementary note 2, in the configuration of supplementary note 1, the display control unit displays the main image and the matching images for both the front surface and the back surface of the tablet. With the configuration of supplementary note 2, the specifying result (matching result) can be grasped still more easily. In this configuration, it is preferable that the result is displayed with the front surfaces of the main image and the matching images shown together, and likewise the back surfaces shown together.

(attached note 3)

The tablet specifying device according to supplementary note 3 further includes, in the configuration of supplementary note 1 or supplementary note 2, a complete tablet specifying unit that specifies the type of an undivided tablet based on the captured image and the main image, and the tablet determination unit performs its determination on tablets whose type has not been specified by the complete tablet specifying unit. With the configuration of supplementary note 3, processing can be performed efficiently by determining whether a tablet whose type was not specified by the complete tablet specifying unit is a divided tablet.

(attached note 4)

In the tablet specifying device according to supplementary note 4, in any one of the configurations of supplementary notes 1 to 3, the tablet determination unit generates a mask image including the tablet region from the captured image and performs the determination based on the distribution of pixel values in the mask image. An undivided tablet generally has a symmetric shape such as a circle or an ellipse, whereas division produces asymmetry in some direction. In the configuration of supplementary note 4, therefore, whether a tablet is a divided tablet is determined based on the distribution (for example, the asymmetry) of pixel values in the mask image. The mask image can be a binarized image containing the tablet region, with unnecessary portions such as noise removed from the captured image. The extent of the mask image may be, for example, a rectangle circumscribing the tablet region, but is not limited to this form. A standing tablet (a state in which the divided or cut surface is in contact with the surface on which the tablet is placed; a so-called "standing tablet") can also be detected based on the distribution of pixel values.
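One way to quantify this asymmetry is to reflect the mask through its centroid and measure the overlap: for a whole round or oval tablet the overlap is high, while for a half tablet it is low. The following Python sketch illustrates the idea; the function name and the 0.9 threshold are assumptions, not values from this description.

```python
import numpy as np


def looks_divided(mask, overlap_threshold=0.9):
    """Heuristic: a whole round/oval tablet is nearly symmetric under a
    180-degree rotation about its centroid, while a half tablet is not.

    'mask' is a binary (0/1) 2-D array of the tablet region; the 0.9
    threshold is an illustrative value.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # Reflect every foreground pixel through the centroid.
    ry = np.round(2 * cy - ys).astype(int)
    rx = np.round(2 * cx - xs).astype(int)
    h, w = mask.shape
    inside = (ry >= 0) & (ry < h) & (rx >= 0) & (rx < w)
    # Fraction of reflected pixels that land on foreground again.
    overlap = mask[ry[inside], rx[inside]].sum() / float(len(ys))
    return overlap < overlap_threshold
```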

(attached note 5)

The tablet specifying device according to supplementary note 5 has, in the configuration of supplementary note 4, a tablet determination unit that generates the mask image using a 1st hierarchical network constructed by machine learning. The 1st hierarchical network can be a neural network; it can be constructed by using, for example, a CNN (Convolutional Neural Network) and performing machine learning such as deep learning with mask images given as training data.
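The description does not fix the network architecture, so the following PyTorch sketch is only an illustrative stand-in for the 1st hierarchical network: a small fully-convolutional model that maps a captured image to a per-pixel tablet-region probability.

```python
import torch
import torch.nn as nn


class MaskNet(nn.Module):
    """Illustrative stand-in for the 1st hierarchical network: a tiny
    fully-convolutional model mapping an RGB image to a per-pixel
    tablet-region probability. The real architecture is not specified
    in the document."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))


# Training would pair captured images with hand-made binary masks and
# minimize, e.g., nn.BCELoss() between prediction and mask.
```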

(attached note 6)

In the tablet specifying device according to supplementary note 6, in the configuration of supplementary note 5, the image generation unit generates the matching image by multiplying, pixel by pixel, the pixel values of the captured image by the pixel values of the mask image. The configuration of supplementary note 6 defines a specific form of the matching-image generation processing: multiplication by the mask image yields a matching image from which unnecessary portions have been removed.
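A minimal sketch of this pixel-wise masking in Python/NumPy; the handling of color images via a broadcast channel axis is an assumption for illustration.

```python
import numpy as np


def make_matching_image(captured, mask):
    """Pixel-wise product of the captured image and a binary mask:
    background pixels become 0, tablet pixels keep their values."""
    captured = np.asarray(captured)
    mask = np.asarray(mask).astype(captured.dtype)
    if captured.ndim == 3 and mask.ndim == 2:
        mask = mask[..., None]  # broadcast the mask over color channels
    return captured * mask
```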

(attached note 7)

The tablet specifying device according to supplementary note 7 further includes, in any one of the configurations of supplementary notes 1 to 6, a preprocessing unit that applies, as preprocessing to the matching image and/or the main image, at least one of region enlargement, binarization (2-valued processing), image inversion, extraction of the print and/or imprint region, and emphasis of the print and/or imprint; the tablet specifying unit then performs template matching using the preprocessed matching image and/or main image. With the configuration of supplementary note 7, this preprocessing allows matching to be performed still more accurately. The type and/or degree of preprocessing may be determined in accordance with a user instruction, or determined by the tablet specifying device independently of any user instruction. It is preferable that the matching image and the main image are treated consistently as to whether binarization, image inversion, print/imprint extraction, and print/imprint emphasis are applied.
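The following OpenCV sketch shows what such a preprocessing step could look like for a grayscale image; the particular parameters (a 2x enlargement, Otsu binarization) are illustrative assumptions, not values prescribed here.

```python
import cv2


def preprocess(img, enlarge=2.0, binarize=True, invert=False):
    """Illustrative preprocessing for a grayscale image; which steps are
    applied, and with what parameters, is a choice left to the user or
    the device."""
    if enlarge != 1.0:
        img = cv2.resize(img, None, fx=enlarge, fy=enlarge,
                         interpolation=cv2.INTER_CUBIC)
    if binarize:
        # Otsu's threshold tends to separate print/imprint from the tablet body.
        _, img = cv2.threshold(img, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    if invert:
        img = cv2.bitwise_not(img)
    return img
```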

(attached note 8)

The tablet specifying device according to supplementary note 8 is, in the configuration of supplementary note 7, one in which the preprocessing unit extracts the print and/or imprint using a 2nd hierarchical network constructed by machine learning. The 2nd hierarchical network can be a neural network; it can be constructed by using, for example, a CNN (Convolutional Neural Network) and performing machine learning such as deep learning with print- and/or imprint-extracted images given as training data.

(attached note 9)

In any one of the configurations of supplementary notes 1 to 8, the tablet specifying device according to supplementary note 9 calculates matching scores while rotating the matching image and the main image relative to each other, and performs the specifying based on the matching scores. In the configuration of supplementary note 9, matching scores can be calculated for the front and back surfaces of the matching images and the main images in the plurality of directions, and the type and surface of the tablet can be specified based on the results. The matching may be performed by rotating one image while the center of the circumscribed circle of the tablet region in the matching image is aligned with the center of the main image, or by shifting an image that has been rotated in advance; relative (parallel) translation of the images during matching is also possible.

(attached note 10)

In the tablet specifying device according to supplementary note 10, in the configuration of supplementary note 9, the display control unit, in the 2nd display process, obtains the rotation angle at which the matching score becomes maximum for the specified surface, and rotates the matching image by the reverse of that rotation angle, thereby aligning the orientation of the print and/or imprint in the matching image with that of the main image. The configuration of supplementary note 10 specifically defines the process of aligning the orientation of the print and/or imprint in the matching image with the main image in the 2nd display process.
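A short OpenCV sketch of this reverse rotation; the function name is an assumption, and best_angle is the angle at which the score peaked, as returned by the earlier rotational-matching sketch.

```python
import cv2


def align_to_master(matching_img, best_angle):
    """If the matching score peaks when the master is rotated by
    best_angle, rotating the matching image by -best_angle brings its
    print/imprint into the same orientation as the master."""
    h, w = matching_img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -best_angle, 1.0)
    return cv2.warpAffine(matching_img, m, (w, h))
```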

(attached note 11)

In the tablet specifying device according to supplementary note 11, in the configuration of supplementary note 9 or supplementary note 10, the tablet specifying unit calculates the matching score using the correlation score value between the matching image and the main image, the number of pixels representing print and/or imprint in the matching image, and the number of pixels representing print and/or imprint in the main image. The configuration of supplementary note 11 defines a score calculation that takes the characteristics of divided tablets into account so that they can be matched accurately.
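The text names the three ingredients of this score but not the formula, so the following combination is only one plausible sketch: the correlation is weighted by how much of the main image's imprint the (possibly divided) matching image contains.

```python
def combined_score(correlation, n_imprint_match, n_imprint_master):
    """One plausible combination (the formula is not given in the
    document): weight the correlation by the fraction of the master's
    imprint pixels present in the matching image."""
    if n_imprint_master == 0:
        return 0.0
    coverage = min(n_imprint_match / float(n_imprint_master), 1.0)
    return correlation * coverage
```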

(attached note 12)

The tablet specifying method according to supplementary note 12 comprises: an image acquisition step of acquiring captured images obtained by capturing a tablet from a plurality of different directions; a main image acquisition step of acquiring main images of the front and back surfaces of an undivided tablet; a tablet determination step of determining whether or not the tablet captured in the captured images is a divided tablet; an image generation step of generating a matching image including the tablet region from a captured image of a tablet determined to be a divided tablet; a tablet specifying step of specifying the type and surface of the tablet represented by the matching image by template matching between the matching image and the main image; and a display control step of displaying, on a display device, based on the specifying result, the main image and the matching images determined to represent the same tablet and the same surface as the main image, and of performing a 1st display process of displaying the divided tablets with the orientations of their chords (straight-line portions) aligned, or a 2nd display process of displaying the divided tablets with the orientations of their print and/or imprint aligned.

With the configuration of supplementary note 12, divided tablets can be matched with good accuracy, as with the configuration of supplementary note 1, and the matching result can be easily confirmed.

The tablet specifying method according to supplementary note 12 may further include configurations similar to those of supplementary notes 2 to 11. A program that causes the tablet specifying device or a computer to execute the tablet specifying method of these embodiments, and a non-transitory storage medium storing computer-readable code of such a program, are also embodiments of the present invention.

While the embodiments and other aspects of the present invention have been described above, the present invention is not limited to the above-described embodiments, and various modifications can be made without departing from the spirit of the present invention.

-description of symbols-

10: tablet specifying device;

100: a processing unit;

100A: a prescription data acquisition unit;

100B: an image acquisition unit;

100C: a main image acquisition unit;

100D: a tablet determination section;

100E: an image generation unit;

100F: a tablet specifying unit;

100G: a display control unit;

100H: a complete tablet determination section;

100I: a pretreatment section;

110: CPU;

120: ROM;

130: RAM;

200: an illumination unit;

202: a light source;

210: a camera;

220: a camera;

230: a prescription reader;

240: a carrying mechanism;

300: a storage unit;

300A: prescription data;

300B: captured image;

300C: matching image;

300D: a mask image;

300E: a main image;

300F: determination result;

400: a display unit;

410: a display;

500: an operation section;

510: a keyboard;

520: a mouse;

700: a medicine wrapping belt;

702: packaging bags;

710: a container;

800: a tablet;

802A: an image;

802B: an image;

804A: a main image;

804B: a main image;

830: divided tablet region;

832: a rectangle shape;

834: matching images;

835: a circumscribed circle;

836: matching images;

838: a main image;

840: a rectangle shape;

841: a rectangle shape;

842: a rectangle shape;

900: sheet;

902: an end portion;

904: a main body portion;

904A: a tablet portion;

904B: perforations;

910: a tablet;

1001: imaging optical axis;

θ: a rotation angle;

S1: matching score;

S2: matching score;

S3: matching score;

S4: matching score;

S5: matching score;

S6: matching score;

S7: matching score;

S8: matching score;

S10: matching score;

S20: matching score;

S30: matching score;

S40: matching score;

S100 to S190: steps of the tablet specifying method;

T1: score;

T2: score;

T3: score;

T4: score;

T5: score;

T6: score;

T7: score;

T8: score.
