Method and system for processing images

Document No.: 54651 · Published: 2021-09-28

Reading note: This technology, "Method and system for processing images," was created by Siddarth Satish and Kevin J. Miller on 2019-10-14. Abstract: A system performs a method for processing an image of a machine-readable code. The method includes receiving an image of a machine-readable code that encodes information, wherein the machine-readable code is at least partially obscured by a substance having a dominant color; generating an adjusted image by adjusting a color space of the image based on the dominant color; binarizing at least a machine-readable code region of the image, wherein the machine-readable code region of the image depicts the machine-readable code; and decoding the binarized machine-readable code region to determine the encoded information. Other apparatuses and methods are also described.

1. A method, comprising:

accessing, by one or more processors of a machine, an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating, by the one or more processors of the machine, an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing, by the one or more processors of the machine, at least one region of the adjusted version of the image, the region depicting the machine-readable code.

2. The method of claim 1, further comprising:

capturing, by an optical sensor, the image depicting the machine-readable code at least partially obscured by the substance having the dominant color.

3. The method of claim 1, wherein the dominant color of the substance that at least partially obscures the machine-readable code is substantially red.

4. The method of claim 1, wherein:

the image is a color image; and

the adjustment to the color space of the image includes converting the color space of the color image to a grayscale representation based on the dominant color of the substance.

5. The method of claim 1, wherein binarizing at least the region of the image comprises color thresholding a histogram of at least the region of the image.

6. The method of claim 1, further comprising:

locating the region depicting the machine-readable code in the adjusted version of the image.

7. The method of claim 6, wherein locating the region depicting the machine-readable code in the adjusted version of the image comprises performing at least one of: corner detection of the adjusted version of the image or edge detection of the adjusted version of the image.

8. The method of claim 1, wherein the image depicts the machine-readable code attached to a surgical textile soiled with the substance having the dominant color.

9. The method of claim 1, wherein the machine-readable code represents encoded information comprising at least one of a type of surgical textile or an identifier of the surgical textile.

10. The method of claim 1, further comprising:

determining encoded information represented by the machine-readable code by decoding a binarized region depicting the machine-readable code.

11. The method of claim 10, further comprising:

incrementing a textile counter index in response to determining the encoded information represented by the machine-readable code.

12. A system, comprising:

one or more processors; and

memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

13. The system of claim 12, further comprising an optical sensor configured to capture the image depicting the machine-readable code at least partially obscured by the substance having the dominant color.

14. The system of claim 12, wherein the dominant color of the substance that at least partially obscures the machine-readable code is substantially red.

15. The system of claim 12, wherein:

the image is a color image; and

the adjustment to the color space of the image includes converting the color space of the color image to a grayscale representation based on the dominant color of the substance.

16. The system of claim 12, wherein binarizing at least the region of the image comprises color thresholding a histogram of at least the region of the image.

17. The system of claim 12, wherein the operations further comprise:

locating the region depicting the machine-readable code in the adjusted version of the image.

18. The system of claim 17, wherein the locating of the region depicting the machine-readable code in the adjusted version of the image comprises performing at least one of: corner detection of the adjusted version of the image or edge detection of the adjusted version of the image.

19. The system of claim 12, wherein the image depicts the machine-readable code attached to a surgical textile soiled with the substance having the dominant color.

20. The system of claim 12, wherein the machine-readable code represents encoded information including at least one of a type of surgical textile or an identifier of the surgical textile.

21. The system of claim 12, wherein the operations further comprise:

determining encoded information represented by the machine-readable code by decoding a binarized region depicting the machine-readable code.

22. The system of claim 21, wherein the operations further comprise:

incrementing a textile counter index in response to determining the encoded information represented by the machine-readable code.

23. A machine-readable medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

Technical Field

The subject matter disclosed herein relates generally to the field of special purpose machines that facilitate image processing (including computerized variations of software configurations of such special purpose machines and improvements to such variations) and to techniques for improving such special purpose machines.

Background

One common way of packaging item information is to associate the item with a unique visual graphic, such as a machine-readable code. For example, a machine-readable code associated with a particular item may include identification information about the item, descriptive information about the item, or both, and may be used to distinguish the associated item from other (e.g., similar) items.

In general, barcodes and other data-carrying graphics may be machine-readable, providing a faster, more accurate way to interpret the information represented by the machine-readable code. For example, the machine-readable code may be read and interpreted by a specialized optical scanner. As another example, the machine-readable code may be read and interpreted by image processing techniques.

However, conventional image processing techniques for reading a visual machine-readable code may result in inaccurate or incomplete results if the image does not clearly depict the machine-readable code. For example, in some instances, the machine-readable code may be partially covered or obscured. For example, when a 2D machine-readable code (e.g., a QR code) is soiled with a substance, conventional image processing techniques may have difficulty accurately processing the 2D machine-readable code because the substance may make it more difficult to distinguish between different shaded elements (e.g., blocks) in a patterned matrix of the 2D machine-readable code.

Brief Description of Drawings

Some example embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

FIG. 1 is a schematic diagram illustrating a system for processing an image, according to some example embodiments.

FIG. 2 is a flow chart illustrating operation of the system when performing a method of processing an image, according to some example embodiments.

FIGS. 3A-3H are pictures illustrating a machine-readable code imaged and processed according to the method of FIG. 2, according to some example embodiments.

FIG. 4 is a block diagram illustrating components of a machine capable of reading instructions from a machine-readable medium and performing any one or more of the methods discussed herein, according to some example embodiments.

Detailed Description

An example method (e.g., process or algorithm) facilitates image processing, including image processing of machine-readable code contaminated with a substance (e.g., blood), and an example system (e.g., a dedicated machine configured by dedicated software) is configured to facilitate such image processing. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components such as modules) are optional and may be combined or subdivided, and operations (e.g., in processes, algorithms, or other functions) may change order or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various example embodiments. It will be apparent, however, to one skilled in the art that the present subject matter may be practiced without these specific details.

In some example embodiments, a method for processing an image of machine-readable code comprises: receiving an image comprising a machine-readable code encoding information, wherein the machine-readable code is at least partially obscured by a substance having a dominant color; generating an adjusted image by adjusting a color space of the image based on the dominant color; and binarizing at least the machine-readable code region of the image, wherein the machine-readable code region of the image depicts the machine-readable code. The method may further include capturing an image of the machine-readable code with an optical sensor, decoding the binarized machine-readable code region to determine the encoded information, or both.

In certain example embodiments, a system for processing an image of machine-readable code comprises one or more processors configured to perform (e.g., at least) operations comprising: receiving an image comprising a machine-readable code encoding information, wherein the machine-readable code is at least partially obscured by a substance having a dominant color; generating an adjusted image by adjusting a color space of the image based on the dominant color; and binarizing at least the machine-readable code region of the image, wherein the machine-readable code region of the image depicts the machine-readable code. The one or more processors may be further configured to decode the binarized machine-readable code region to determine the encoded information. In some variations, the system includes an optical sensor configured to capture an image of the machine-readable code.

In various example embodiments, the received or captured image is a color image, and the image may be adjusted at least in part by adjusting a color space of the color image to a grayscale representation (e.g., by isolating a color channel associated with or similar to the dominant color of the substance). The machine-readable code region may be located in the image by techniques such as corner detection, edge detection, other suitable computer vision techniques, or any suitable combination thereof. In addition, further image processing (e.g., binarization, with or without one or more color thresholding processes) may be performed to prepare (e.g., "clean") the machine-readable code region of the image for interpretation (e.g., decoding).

The methods and systems described herein may be used in various applications, such as processing images of machine-readable codes associated with (e.g., attached to, representing, or otherwise corresponding to) surgical textiles, where the machine-readable codes may be at least partially obscured by one or more bodily fluids (e.g., blood). For example, the dominant color of the substance on the machine-readable code may be red, and the image of the machine-readable code may be adjusted by isolating the red channel of the image within the color space of the image. The machine-readable code can include any suitable encoded information (e.g., a unique identifier of the associated surgical textile, a type of the associated surgical textile, or both) that can provide useful information to a user. For example, in response to determining the encoded information of the machine-readable code and determining an identifier of the surgical textile associated with the machine-readable code (e.g., upon such determinations), a textile counter index may be incremented. The value of the textile counter index may then be presented as an output on a display, via an audio device, or both.

In some example embodiments, a system comprises:

one or more processors; and

memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance that at least partially occludes the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

In certain example embodiments, a method comprises:

accessing, by one or more processors of a machine, an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating, by the one or more processors of the machine, an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance that at least partially occludes the machine-readable code; and

binarizing, by the one or more processors of the machine, at least one region of the adjusted version of the image, the region depicting the machine-readable code.

In various example embodiments, a machine-readable medium includes instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance that at least partially occludes the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

In general, the methods and systems described herein may be used to process an image of one or more machine-readable codes. That is, the image depicts one or more machine-readable codes that may be read by an optical device. Examples of such optical machine-readable codes include barcodes (e.g., linear barcodes or other one-dimensional (1D) barcodes, or two-dimensional (2D) barcodes, such as QR codes) and/or other suitable graphics that carry encoded information in an optically readable form, such as in the form of a patterned matrix of black and white elements or other optically contrasting elements. Such machine-readable codes may be used in various applications to provide information relating to one or more items associated with (e.g., attached to, represented by, or otherwise corresponding to) the machine-readable codes. For example, during surgery and other medical procedures, a surgical textile (e.g., a surgical sponge or other item that can be used to absorb various liquids, including patient blood) can include a machine-readable code, such that each machine-readable code is associated with a particular surgical textile. In some cases, the machine-readable code may be depicted (e.g., printed, woven, etc.) on a label that is sewn or otherwise attached to the surgical textile. In some cases, the machine-readable code may be depicted in the surgical textile itself. The machine-readable code may include encoded information about the associated surgical textile, such as its manufacturer, its type, its material, its size, its identifier (e.g., a serial number unique to the surgical textile relative to other surgical textiles), or any suitable combination thereof. Thus, the machine-readable code may be scanned (e.g., imaged) and interpreted by a decoding process, such as by computer vision techniques, to extract and utilize the encoded information contained therein.

In some cases, the machine-readable code associated with the surgical textile is scanned both before and after a medical procedure (e.g., a surgical procedure) to track the surgical textile and identify any surgical textile that may inadvertently remain in the patient. Such scans may provide a "before" count and an "after" count of the surgical textiles. A difference between the "before" count and the "after" count may prompt medical personnel to locate any missing textile, perform a recount, perform an X-ray scan of the patient, or perform other risk mitigation.

However, during a medical procedure, the machine-readable code on the surgical textile may become stained or otherwise soiled (e.g., by other bodily fluids). Where the machine-readable code comprises dark and light elements (e.g., light and dark sections of the matrix), any dark substance (e.g., blood) may at least partially obscure the machine-readable code and disturb or even prevent accurate scanning of the machine-readable code. With respect to the counting of surgical textiles described above, such a potentially erroneous scan of the machine-readable code may create uncertainty about whether a count (e.g., an "after" count) is correct. A wrong count may lead medical personnel to falsely conclude that a surgical textile remains in the patient or, worse still, to falsely conclude that all surgical textiles are accounted for and removed from the patient when one in fact remains. The methods and systems described herein are capable of processing images of machine-readable codes in a manner that is robust against errors due to occlusion of the machine-readable codes.

Furthermore, while some example embodiments of the methods and systems described herein may be used to track surgical textiles within the same medical procedure or the same medical session, other example embodiments may additionally or alternatively be used to track surgical textiles between different medical procedures or medical sessions. A surgical textile may inadvertently travel between different medical sessions (e.g., on or with a nurse or other person moving between different rooms). This may result in inaccurate textile counts in the source session, the destination session, or both, such as due to inadvertently counting the traveling surgical textile more than once. Thus, in some example embodiments, the methods and systems described herein may be used to identify textiles used during different medical procedures, and to improve accuracy in tracking surgical textiles, counting surgical textiles, or both.

The methods and systems described herein may be used in a variety of environments, including in a hospital or clinic environment (e.g., an operating room), a military environment (e.g., a battlefield), or another suitable medical treatment environment. The methods described herein may be computer-implemented and executed at least in part by one or more processors. As shown in FIG. 1, the methods discussed herein may be performed, at least in part, by a computer device, such as a mobile device 150 (e.g., a tablet computer, smartphone, etc.), configured to capture images of one or more surgical textiles and process the resulting images in an operating room or other medical environment. Further, the methods discussed herein may be performed by one or more processors separate from the mobile device 150 (e.g., performed on-site in an operating room or remotely outside of the operating room).

Method for processing images of machine-readable code

As shown in FIG. 2, according to some example embodiments, a method 200 for processing an image of a machine-readable code includes receiving an image of a machine-readable code including encoded information (at operation 210), wherein the machine-readable code is at least partially obscured by a substance having a dominant color. The method 200 also includes generating an adjusted image by adjusting a color space of the image based on the dominant color (at operation 220). The method 200 may include locating a machine-readable code region of the image (at operation 230). The method 200 further includes binarizing (at operation 240) at least the machine-readable code region of the image, wherein the machine-readable code region of the image depicts the machine-readable code. The method 200 still further includes decoding the binarized machine-readable code region to determine the encoded information (at operation 250). In some variations, the method 200 further includes capturing an image of the machine-readable code (at operation 208). In certain variations, the operations (e.g., steps) shown in FIG. 2 may be performed in a different order than that depicted.
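The binarizing step can be illustrated with a global histogram threshold. The sketch below is a minimal numpy implementation of Otsu's method, under the assumption of an 8-bit grayscale input; the document does not prescribe any particular thresholding algorithm.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Pick the threshold that maximizes between-class variance
    of the grayscale histogram (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    sum_b = 0.0   # cumulative intensity sum of the dark class
    w_b = 0.0     # cumulative pixel count of the dark class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # dark-class mean
        m_f = (sum_all - sum_b) / w_f      # light-class mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy "code region": 50 dark pixels and 50 light pixels.
region = np.array([10] * 50 + [240] * 50, dtype=np.uint8)
t = otsu_threshold(region)
binary = region > t  # True = light element, False = dark element
```

A global threshold like this works well once the color-space adjustment has restored contrast between stained light elements and dark elements.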

As shown in FIG. 2, some example embodiments of the method 200 include capturing an image of the machine-readable code (at operation 208), or otherwise generating or obtaining at least one image of the machine-readable code. One or more images of the machine-readable code may be stored in a database in a suitable data storage medium (e.g., local or remote). Accordingly, the receiving of the image of the machine-readable code in operation 210 may include receiving the image from a memory or other suitable storage device. For example, the image may have been previously acquired and stored in a storage medium. Each image may depict the entire surgical textile associated with (e.g., attached to) the machine-readable code, or only a portion of the surgical textile associated with the machine-readable code. The surgical textile may be, for example, a surgical sponge, a surgical dressing, a towel, or other suitable textile.

Each image may be a single still image or an image frame from a video feed and may include a region (e.g., a machine-readable code region) depicting a corresponding machine-readable code (e.g., within a field of view of the camera). The camera may be in a handheld device or a mobile device (e.g., a tablet computer). The camera may be mounted to a support, such as a table, or may be an overhead camera. The image may be an optical image that captures color characteristics having component values of each pixel in a color space (e.g., RGB, CMYK, etc.). The images may be stored in memory or a suitable data storage module (e.g., local or remote) and processed. Processing the image may include normalizing the color characteristics of the image based on a set of one or more optical fiducials (e.g., color fiducials). A color fiducial may represent, for example, one or more shades of red (e.g., a grid of frames including different shades of red). Normalization of the image may include using a color fiducial to compensate for changes in illumination conditions throughout the medical procedure (e.g., surgical procedure), to artificially match the illumination conditions in the image to those of a template image, to artificially match the illumination conditions in the image to those assumed by an illumination-dependent fluid component concentration model, or any suitable combination thereof. For example, normalizing the image may include identifying a color fiducial captured in the image, determining an assigned color value associated with the identified color fiducial, and adjusting the image such that the color value of the color fiducial in the image substantially matches the assigned color value associated with the color fiducial. For example, the assigned color value may be determined by looking up the color fiducial in a database (e.g., identified by a code, a location within a set of color fiducials, a location relative to a known feature of the channel, or any suitable combination thereof). The adjustment to the image may include, for example, adjusting exposure, contrast, saturation, temperature, hue, or any suitable combination thereof.
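One simple way to realize the fiducial-based adjustment described above is a per-channel linear gain that maps the fiducial's measured mean color onto its assigned color. The function name, the boolean-mask interface, and the purely linear correction below are illustrative assumptions, not the document's prescribed implementation:

```python
import numpy as np

def normalize_to_fiducial(image: np.ndarray, fiducial_mask: np.ndarray,
                          assigned_rgb) -> np.ndarray:
    """Scale each color channel so the fiducial's mean color matches
    its assigned reference color (a simple linear correction)."""
    img = image.astype(float)
    measured = img[fiducial_mask].reshape(-1, 3).mean(axis=0)
    gains = np.asarray(assigned_rgb, dtype=float) / measured
    return np.clip(np.rint(img * gains), 0, 255).astype(np.uint8)

# Toy image with a warm color cast; the top-left 2x2 patch is a
# fiducial whose assigned color is neutral gray (200, 200, 200).
image = np.full((4, 4, 3), (200, 180, 160), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
corrected = normalize_to_fiducial(image, mask, (200, 200, 200))
```

A real pipeline would likely combine several fiducial patches and also correct exposure and contrast, as the text notes; this sketch shows only the color-matching step.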

Image pre-processing

The method 200 may include generating an adjusted image (at operation 220), such as by adjusting a color space of the image based on a dominant color of a substance that at least partially occludes the machine-readable code depicted in the image. In some example embodiments, the effect of a substance occluding one or more features (e.g., elements or segments) of a machine-readable code may be mitigated by a color conversion that isolates a color (e.g., a color channel similar to the dominant color of the substance that at least partially occludes the machine-readable code). For example, blood tends to absorb less red light (e.g., light having wavelengths in the range of 620 nm-740 nm, or within a portion of this range, such as 635 nm-700 nm) and reflect more red light than other colors of light (e.g., light of other wavelengths outside the range of 620 nm-740 nm). In the example case where the machine-readable code includes white elements and black elements, any white elements that are stained with blood may be misread as black elements. Assume, for example, that an image has red, green, and blue (RGB) color components, and that a white pixel has substantially the same red (R) value as a red pixel, although the white pixel additionally has a green (G) value and a blue (B) value. Thus, isolating the red (R) channel of the image (e.g., by removing the green (G) channel and the blue (B) channel of the image) causes any blood-stained white elements of the machine-readable code to become recognizable as original white elements of the machine-readable code, and the blood-stained white elements thus become disambiguated from the black elements of the machine-readable code.
Thus, the red channel of the image may be isolated (e.g., and retained in the adjusted image) such that any white features of the machine-readable code that are occluded by blood will appear, in the adjusted image, similar to non-occluded white features of the machine-readable code (e.g., elements that are not occluded by blood).
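A minimal numpy sketch of this red-channel isolation, assuming an RGB channel order; the toy pixel values are illustrative:

```python
import numpy as np

def isolate_red_channel(image_rgb: np.ndarray) -> np.ndarray:
    """Keep only the red (R) channel as a grayscale image.

    Blood reflects red light, so a blood-stained white element keeps
    a high R value and stays distinguishable from a black element."""
    return image_rgb[..., 0].copy()

# Toy 1x3 "code row": clean white, blood-stained white, black element.
row = np.array([[[255, 255, 255],
                 [230,  40,  30],
                 [ 10,  10,  10]]], dtype=np.uint8)
gray = isolate_red_channel(row)
# In the red channel, the stained element (230) remains close to the
# clean white element (255) and far from the black element (10).
```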

In other example embodiments, one or more other color channels, such as any one or more color channels of a predefined color space (e.g., RGB, XYZ, CIE-LAB, YCrCb, CMYK, etc.), are isolated (e.g., and retained in the adjusted image). Additionally or alternatively, other color mappings may be used to pre-process the image. For example, some variations of the method 200 apply linear or non-linear equations (e.g., predefined equations) that map from an existing color space (e.g., the RGB or other color space of the optical sensor used to capture the image) to another color space. In certain example embodiments, the method 200 applies a mapping learned from data using machine learning techniques (such as SVM regression, neural networks, K-nearest neighbors, locally weighted linear regression, decision tree regression, or any suitable combination thereof).
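As one example of such a predefined linear color-space mapping, the standard BT.601 RGB-to-YCbCr matrix can be applied directly; in YCbCr, a red-dominated stain concentrates in the Cr channel. This is an illustrative mapping, not one the document mandates:

```python
import numpy as np

# ITU-R BT.601 full-range RGB -> YCbCr transform matrix.
BT601 = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(image_rgb: np.ndarray) -> np.ndarray:
    """Linearly map an RGB image into the YCbCr color space."""
    ycc = image_rgb.astype(float) @ BT601.T
    ycc[..., 1:] += 128.0  # center the chroma channels
    return ycc

pixel_red = np.array([[[255, 0, 0]]], dtype=np.uint8)    # pure red
pixel_gray = np.array([[[128, 128, 128]]], dtype=np.uint8)
ycc_red = rgb_to_ycbcr(pixel_red)
ycc_gray = rgb_to_ycbcr(pixel_gray)
# Red lands high in the Cr channel; neutral gray sits at Cr ~= 128.
```

Thresholding or discarding the Cr channel would then play a role analogous to isolating the red channel in the RGB example above.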

In various example embodiments, the method 200 also includes adjusting the image in other suitable manners. In some cases, the substance on the machine-readable code (such as blood) problematically interferes with texture information, produces false textures, or both, and the locating of the machine-readable code region of the image (at operation 230) may be based on corner detection, edge detection, or both. If the substance on the machine-readable code obscures corners, makes the corners appear less clear, creates false corners (e.g., due to glare, clumping, etc.), or any combination thereof, the method 200 may further include reducing high-frequency noise and preserving or restoring high-frequency signals to maintain the sharpness of edges and corners (e.g., by improving the signal-to-noise ratio). For example, the high-frequency noise may be reduced with a suitable smoothing algorithm (e.g., Gaussian blur, median blur, or any suitable combination thereof). Further, for example, preserving or restoring one or more high-frequency signals may be accomplished with a suitable deblurring algorithm (e.g., unsharp filtering, a suitable optimization-based algorithm, etc.).
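The smoothing-then-sharpening idea can be sketched by pairing a separable Gaussian blur with unsharp masking. This is a minimal numpy illustration; the kernel width and sharpening gain are illustrative choices, not values from the document:

```python
import numpy as np

def _blur_axis0(img: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian-blur a 2-D array along axis 0."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, img)

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable 2-D Gaussian blur (reduces high-frequency noise)."""
    return _blur_axis0(_blur_axis0(img.astype(float), sigma).T, sigma).T

def unsharp_mask(img, sigma=1.0, amount=1.0):
    """Sharpen by adding back the high-frequency residual."""
    img = img.astype(float)
    return np.clip(img + amount * (img - gaussian_blur(img, sigma)), 0, 255)

# A vertical step edge: sharpening overshoots on both sides of it.
step = np.full((9, 9), 50.0)
step[:, 4:] = 200.0
sharp = unsharp_mask(step, sigma=1.0, amount=1.0)
```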

As another example, the method 200 may reduce high-frequency noise while preserving high-frequency signals by applying a bilateral filter to the image with a suitable threshold. Additionally or alternatively, a trained neural network or other machine learning model may take as input an image of the machine-readable code and output a suitably pre-processed or adjusted image, where the model may be trained using images that were manually processed (e.g., manually "cleaned") in the desired manner.
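A minimal, deliberately unoptimized bilateral filter in numpy illustrates the edge-preserving trade-off mentioned above; the window radius and both sigma parameters are illustrative assumptions:

```python
import numpy as np

def bilateral_filter(gray, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Smooth flat regions while preserving strong edges by weighting
    neighbors with both spatial and intensity (range) Gaussians."""
    gray = gray.astype(float)
    h, w = gray.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(gray, radius, mode="edge")
    out = np.empty_like(gray)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - gray[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# Noisy step edge: noise should shrink, the 150-level jump should stay.
gen = np.random.default_rng(0)
step = np.full((10, 10), 50.0)
step[:, 5:] = 200.0
noisy = step + gen.normal(0.0, 8.0, step.shape)
smoothed = bilateral_filter(noisy)
```

Because the range Gaussian nearly zeroes the weight of neighbors across the 150-level jump, the edge survives while the flat-region noise is averaged away.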

Locating machine readable code

The method 200 may include locating (at operation 230) a region of the image depicting the machine-readable code (e.g., locating a machine-readable code region of the image). Locating the region may involve estimating a location of the machine-readable code within the image, a size of the machine-readable code, a perimeter of the machine-readable code, an orientation of the machine-readable code, any other suitable physical aspect of the machine-readable code, or any suitable combination thereof. In some example embodiments, one or more suitable computer vision techniques are used to find one or more salient features of the machine-readable code, such as corner points, straight edges near corner points, edge orientations at 90 degrees to each other, collinear edges, certain spatial frequency bands, bi-modal color distributions, or any suitable combination thereof. As an example, the method 200 may include finding an L-shaped finder pattern associated with the machine-readable code, an alternating black-and-white timing pattern in an image associated with the machine-readable code, or both.
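The timing-pattern cue can be checked with a simple transition count. A sketch under the assumption that a sampled row has already been binarized; the threshold of five transitions is an illustrative choice:

```python
import numpy as np

def looks_like_timing_pattern(row_bits, min_transitions=5):
    """A QR timing pattern alternates dark/light modules, so a row
    sampled across it shows many black-white transitions."""
    bits = np.asarray(row_bits, dtype=int)
    transitions = int(np.count_nonzero(np.diff(bits)))
    return transitions >= min_transitions

alternating = [0, 1, 0, 1, 0, 1, 0, 1]   # timing-pattern-like row
solid = [1, 1, 1, 1, 1, 1, 1, 1]          # plain stripe, no alternation
```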

In some example embodiments, a neural network model (e.g., Faster R-CNN, YOLO, SSD, or another architecture suitable for object detection and localization tasks) or other machine learning model is trained to predict the location of the machine-readable code from raw images or from pre-processed images processed in a manner similar to that described above (e.g., as part of locating the machine-readable code, such as operation 230 in the method 200). For example, such a neural network may be trained with a sufficient number of occluded (e.g., blood-stained) images of machine-readable codes to extract features that are robust to occlusion.

In some example embodiments, a heat map having heat map values for coordinates within an image is obtained using a corner detection algorithm (e.g., the Harris corner detection algorithm) (e.g., as part of locating the machine-readable code, such as operation 230 in the method 200). For example, the heat map value for a coordinate may be generated by analyzing the rate of change (e.g., the rate of change in brightness) within a sliding window as the sliding window moves around the coordinate. For example, the rate of change may be approximated as a quadratic function of the sliding window offset, and the lowest rate of change in any direction may be found and used as the heat map value. Thus, a high heat map value means that the window's contents change substantially when it shifts in any direction (e.g., at a corner point), while a low heat map value means that there is at least one direction (e.g., along an edge) in which shifting the window does not change it significantly. If the heat map value for a coordinate in the image is above a first predetermined upper threshold, the coordinate may be determined to be a corner coordinate (e.g., a corner of the machine-readable code). Further, if the heat map value of a coordinate in the image is above a second predetermined lower threshold (which is below the first predetermined threshold), the coordinate may be determined to be a straight-edge coordinate (e.g., a coordinate on a side, top, or bottom edge). Any corner coordinates, together with straight-edge coordinates spatially close to the corner coordinates, may be considered a set of coordinates of interest. In various example embodiments, fewer thresholds or more thresholds may be used to properly classify the coordinates.
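A compact numpy version of such a Harris-style heat map, where the window size and the sensitivity constant k are conventional illustrative values: corners score high and positive, straight edges score negative, and flat regions score near zero.

```python
import numpy as np

def harris_heatmap(gray, window=3, k=0.05):
    """Harris corner response for every coordinate in the image."""
    gray = gray.astype(float)
    Iy, Ix = np.gradient(gray)          # brightness gradients

    def box_sum(a):
        # Sum over a window x window neighborhood via an integral image.
        pad = window // 2
        p = np.pad(a, pad)
        c = np.cumsum(np.cumsum(p, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        h, w = a.shape
        return (c[window:window + h, window:window + w]
                - c[:h, window:window + w]
                - c[window:window + h, :w]
                + c[:h, :w])

    Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy          # structure-tensor determinant
    trace = Sxx + Syy
    return det - k * trace * trace       # Harris response

# White square on black: its corners should dominate the heat map.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
heat = harris_heatmap(img)
```

Thresholding `heat` at an upper and a lower level then yields the corner coordinates and straight-edge coordinates described above.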

Outlier coordinates (e.g., coordinates unlikely to come from or correspond to the machine-readable code) may be removed from the coordinates of interest (e.g., as part of locating the machine-readable code, such as operation 230 in method 200), such as based on a median, an interquartile range, another suitable statistical metric, or any suitable combination thereof. After removing outlier coordinates, a tight, rotated rectangle may be fit around the remaining coordinates of interest (e.g., under the assumption that the machine-readable code is generally rectangular) as an initial estimate of the location of the machine-readable code, the area of the machine-readable code, the perimeter of the machine-readable code, the orientation of the machine-readable code, or any suitable combination thereof.
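By way of illustration only, the outlier removal and rectangle fitting may be sketched as follows. In this non-limiting example, outliers are removed with an interquartile-range rule, and the rotated rectangle is fit via principal axes, an approximation of a minimum-area rotated rectangle (such as OpenCV's `cv2.minAreaRect` would compute); the function names and the `k` multiplier are illustrative assumptions.

```python
import numpy as np

def remove_outliers(pts, k=1.5):
    # Keep points within k * IQR of the first/third quartiles on each axis.
    q1, q3 = np.percentile(pts, [25, 75], axis=0)
    iqr = q3 - q1
    mask = np.all((pts >= q1 - k * iqr) & (pts <= q3 + k * iqr), axis=1)
    return pts[mask]

def fit_rotated_rect(pts):
    # PCA-based rotated bounding box: project the points onto their
    # principal axes and take the extent along each axis.
    mean = pts.mean(axis=0)
    centred = pts - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt.T
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners_proj = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                             [hi[0], hi[1]], [lo[0], hi[1]]])
    corners = corners_proj @ vt + mean       # back to image coordinates
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
    return corners, angle
```

For example, given the border coordinates of a 21 x 11 rectangle plus one distant outlier, the outlier is dropped and the fitted corners recover the rectangle's extent.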

In some example embodiments, the initial estimates of the machine-readable code properties described above may be subsequently adjusted (e.g., as part of locating the machine-readable code, such as operation 230 in method 200). For example, the estimated area of the machine-readable code or the boundary of the machine-readable code may be modified to have an aspect ratio (e.g., length-to-width ratio) that is the same as or similar to a known aspect ratio of the depicted (e.g., imaged) machine-readable code. Additionally or alternatively, the estimated orientation may be refined by performing a Hough transform within the rectangle and taking the median orientation of the resulting Hough lines (e.g., after rotating some of the Hough lines by 90 degrees, to account for the fact that some lines will be perpendicular to any dominant line direction of the machine-readable code).
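By way of illustration only, the orientation-refinement step (taking a median of line angles after folding perpendicular lines onto the same direction) may be sketched as follows, assuming the line angles have already been extracted (e.g., by a Hough transform). The wrap-around guard at the 0/90-degree boundary is an illustrative detail, not a requirement of the embodiments.

```python
from statistics import median

def refine_orientation(line_angles_deg):
    # Fold each detected line angle into [0, 90): lines perpendicular to
    # the code's dominant direction map onto the same folded angle.
    folded = [a % 90.0 for a in line_angles_deg]
    # Guard the wrap-around at 0/90 degrees: if the folded angles straddle
    # the boundary, re-centre them before taking the median.
    if max(folded) - min(folded) > 45.0:
        folded = [(a + 45.0) % 90.0 for a in folded]
        return (median(folded) - 45.0) % 90.0
    return median(folded)
```

For example, lines at roughly 10 and 100 degrees (i.e., perpendicular pairs) fold to a common orientation near 10 degrees.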

It should be appreciated that the above-described application of the Harris corner detection algorithm may be modified for different machine-readable code shapes (e.g., as part of locating the machine-readable code, such as operation 230 in method 200). For example, other suitable shapes may be fit around the coordinates of interest, depending on the shape of the depicted (e.g., imaged) machine-readable code, if known (e.g., a circle fit to a circular machine-readable code, a triangle fit to a triangular machine-readable code, a pentagon fit to a pentagonal machine-readable code, etc.).

In some example embodiments, after estimating one possible location of the machine-readable code region within the image, a plurality of potential machine-readable code locations may be estimated (e.g., as part of locating the machine-readable code, such as operation 230 in method 200) by slightly scaling the estimated machine-readable code location (e.g., expanding or contracting the estimated perimeter of the machine-readable code region). These multiple potential machine-readable code locations may be passed on to subsequent processes (e.g., the binarization or decoding described below) to increase the likelihood that at least one estimated machine-readable code location results in a successful decode.
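By way of illustration only, generating multiple scaled location guesses may be sketched as follows; the function name and the default scale factors are illustrative assumptions.

```python
def scaled_guesses(corners, scales=(0.95, 1.0, 1.05)):
    # Expand or contract the estimated quadrilateral about its centroid,
    # producing one candidate region per scale factor.
    cx = sum(x for x, _ in corners) / len(corners)
    cy = sum(y for _, y in corners) / len(corners)
    return [
        [(cx + s * (x - cx), cy + s * (y - cy)) for x, y in corners]
        for s in scales
    ]
```

Each returned candidate can then be passed independently to the binarization and decoding stages.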

Binarization method

In certain example embodiments, the method 200 (at operation 240) includes further image processing, such as binarizing at least the region located in operation 230 (e.g., the machine-readable code region of the image). As used herein, "binarizing" refers to converting an image or a region thereof to only two colors (e.g., a light color, such as white, and a dark color, such as black). Binarizing the located region converts the machine-readable code region of the image into a binary image (e.g., black and white, instead of a grayscale representation with at least three different shades, such as black, gray, and white), which may have the effect of removing any residual darkening caused by the substance (e.g., blood) in the image of the machine-readable code. The located region may be further processed based on local information of the region or information otherwise specific to the region, such as its color histogram, its orientation histogram, or both. For example, the region may be binarized at least partially with Otsu thresholding (e.g., based on a color histogram of the region). In other example embodiments, instead of binarization, posterization or another method of quantizing color information into more than two resulting colors is used.
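By way of illustration only, Otsu thresholding (choosing the threshold that maximizes between-class variance over the intensity histogram) applied to binarize a grayscale region may be sketched as follows; the function names are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    # 256-bin intensity histogram of the region.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    w0 = np.cumsum(hist)                     # pixels at or below each level
    w1 = total - w0                          # pixels above each level
    cum_mean = np.cumsum(hist * np.arange(256))
    mu0 = np.divide(cum_mean, w0,
                    out=np.zeros(256), where=w0 > 0)
    mu1 = np.divide(cum_mean[-1] - cum_mean, w1,
                    out=np.zeros(256), where=w1 > 0)
    # Between-class variance; Otsu's threshold maximises it.
    between = w0 * w1 * (mu0 - mu1) ** 2
    return int(np.argmax(between))

def binarize(gray):
    # Map the region to exactly two values: dark (0) and light (255).
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8) * 255
```

On a bimodal region (e.g., dark code modules at intensity 50 over a light background at 200), the chosen threshold falls between the two modes and the output contains only the two values.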

As another example, binarizing a region (e.g., a machine-readable code region) may include fitting a grid shaped like the machine-readable code to an edge map of the region, generating a median color histogram of the resulting grid blocks, and applying Otsu thresholding to the generated median color histogram. Otsu thresholding may be used to determine which grid blocks correspond to light elements in the machine-readable code and which grid blocks correspond to dark elements in the machine-readable code. The lines of the grid may be linear or may be parameterized by non-linear equations, such that, for example, the grid may be fit to image regions depicting non-linear components of the machine-readable code, machine-readable code that has been curved or distorted, or both. As another example, in some example embodiments, a neural network or other suitable machine learning model may be trained to output a predicted grid shaped like the machine-readable code based on raw images or pre-processed images processed in a manner similar to those described above. For example, such a neural network may be trained with a sufficient number of manually edited (e.g., curved) grids.
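By way of illustration only, the grid-block binarization may be sketched as follows. As simplifying assumptions, the grid here is axis-aligned and linear (rather than fit to an edge map), and the threshold over the cell medians is a mean-based stand-in rather than Otsu thresholding on the median histogram; the function name and parameters are likewise illustrative.

```python
import numpy as np

def grid_binarize(gray, rows, cols):
    # Split the located region into a rows x cols grid (assumed to match
    # the module layout of the code) and classify each cell as light (1)
    # or dark (0) by its median intensity.
    h, w = gray.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    medians = np.array([
        [np.median(gray[ys[r]:ys[r + 1], xs[c]:xs[c + 1]])
         for c in range(cols)]
        for r in range(rows)
    ])
    # Stand-in threshold over the cell medians (the embodiments describe
    # Otsu thresholding on the median histogram instead).
    t = medians.mean()
    return (medians > t).astype(np.uint8)
```

Using the per-cell median makes each cell's classification robust to small stained patches inside the cell, since a minority of darkened pixels does not move the median.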

Although the above example is described as using Otsu thresholding, it should be appreciated that any suitable thresholding technique may be applied to binarize at least one region of the image (e.g., the machine-readable code region).

Decoding

Given an image or a portion thereof (e.g., depicting at least a region of a machine-readable code) that has been processed (e.g., "cleaned") as described herein, certain example embodiments of the method 200 (at operation 250) include decoding at least the region depicting the machine-readable code. This may be performed by decoding the binarized machine-readable code region of the image to determine the encoded information present in the machine-readable code. Any suitable technique for processing (e.g., reading and decoding) the machine-readable code may be applied to obtain the information (e.g., a string of text characters, such as alphanumeric characters) that has been encoded in the machine-readable code. For example, such an algorithm may use vertical scan lines and horizontal scan lines to find an L-shaped finder pattern, an alternating timing pattern, or both in the machine-readable code, and then use the resulting position and scale information to evaluate each element (e.g., block of content) of the machine-readable code. The elements of the machine-readable code (e.g., blocks of content) may then be converted into decoded data (e.g., a decoded string) using decoding and error-correction methods.

Where multiple potential locations for the machine-readable code have been estimated or guessed, decoding of the machine-readable code may be performed for each of the potential locations, and the results may be compared to each other. If at least one of the potential locations for the machine-readable code results in a successful decode, a decoded string may be returned (e.g., output for subsequent use). Moreover, in some example embodiments, the return of the decoded string may further be conditioned on sufficient agreement among the potential locations. For example, a decoded string may be returned if no two guesses result in conflicting decoded strings, or if an appropriate number (e.g., a majority) of the guesses result in the same, common decoded string.
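By way of illustration only, the consensus rule described above (return a string only when no two successful decodes conflict, or when a majority of successful decodes agree) may be sketched as follows; the function name and the `None`-for-failure convention are illustrative assumptions.

```python
from collections import Counter

def consensus_decode(decode_results):
    # decode_results: one entry per guessed location -- a decoded string
    # on success, or None when decoding that guess failed.
    successes = [s for s in decode_results if s is not None]
    if not successes:
        return None
    counts = Counter(successes)
    if len(counts) == 1:
        return successes[0]          # all successful decodes agree
    # Conflicting decodes: require a strict majority to return a string.
    value, n = counts.most_common(1)[0]
    return value if n > len(successes) / 2 else None
```

Failed guesses are simply ignored; only conflicting successful decodes can suppress the result.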

Using decoded information

After the decoded string is returned from the machine-readable code, the information in the decoded string may be utilized in any suitable manner, which may depend on the type of encoded information. For example, the method 200 may include incrementing a textile counter index based on the decoded information (e.g., incrementing the textile counter index upon determining that the scanned and decoded machine-readable code differs from previously scanned and decoded machine-readable codes). In some example embodiments, multiple textile counter indices (e.g., a first index for surgical sponges, a second index for chux, etc.) may be maintained. For example, the encoded information may include the type of textile (e.g., textile type) in addition to a unique identifier (e.g., serial number), such that only the corresponding textile counter index for that textile type is incremented in response to the machine-readable code being scanned and decoded.
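By way of illustration only, maintaining per-type textile counter indices with duplicate suppression might look like the sketch below; the "TYPE:SERIAL" payload format is a hypothetical assumption for this example, not the encoding used by the embodiments.

```python
class TextileCounter:
    """One counter index per textile type; each unique serial counts once.

    Assumes a hypothetical decoded payload of the form "TYPE:SERIAL".
    """

    def __init__(self):
        self.counts = {}      # textile type -> counter index
        self._seen = set()    # (type, serial) pairs already counted

    def record(self, decoded):
        textile_type, serial = decoded.split(":", 1)
        if (textile_type, serial) in self._seen:
            # Re-scanning the same textile does not increment the index.
            return self.counts[textile_type]
        self._seen.add((textile_type, serial))
        self.counts[textile_type] = self.counts.get(textile_type, 0) + 1
        return self.counts[textile_type]
```

The returned value is the current count for the scanned textile's type, suitable for display or audible output as described below.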

Further, the method 200 may comprise outputting the textile counter index, such as by displaying the textile counter index on a display (e.g., as a count of that type of textile) or by outputting the textile counter index as an audible count via an audio device (e.g., a speaker).

System for processing images of machine-readable code

As shown in fig. 1, according to some example embodiments, a system 100 for processing an image of a machine-readable code includes at least one processor 152 and a memory 154 having instructions stored therein. The processor 152 is configured to execute the stored instructions so as to: receive an image comprising a machine-readable code encoding information, wherein the machine-readable code is at least partially obscured by a substance having a dominant color; generate an adjusted image by adjusting a color space of the image based on the dominant color; binarize at least one region of the image (e.g., a machine-readable code region), wherein the region of the image depicts the machine-readable code; and decode the binarized region of the image to determine the encoded information. In certain example embodiments, the system 100 may be configured to perform substantially the method 200 described in more detail above. An example of the system 100 is further described below with respect to fig. 4.

As further shown in fig. 1, the system 100 may include a camera 156, the camera 156 configured to obtain (e.g., capture or otherwise generate) one or more images of the machine-readable code, and the system 100 may include a display 158 (e.g., a display screen), the display 158 configured to display the one or more images of the machine-readable code. In some example embodiments, some or all of the system 100 may be in an integrated device (e.g., the mobile device 150) and placed near the patient during a surgical procedure (e.g., in an operating room) to evaluate patient fluids contained (e.g., absorbed) in the surgical textile. For example, the system 100 may include, at least in part, a handheld or mobile electronic computing device (e.g., mobile device 150) that may be configured to execute a local fluid analysis application. Such a handheld or mobile electronic computing device may be or include, for example, a tablet computer, a laptop computer, a mobile smartphone, or any suitable combination thereof, which may include a camera 156, a processor 152, and a display 158. However, in other example embodiments, some or all of the system components may be separated into separate, interconnected devices. For example, the camera 156, the display 158, or both may be located substantially near the patient during the surgical procedure (e.g., in an operating room), while the processor 152 may be located at a remote location (e.g., separate from the camera 156 or the display 158 in the operating room, or outside the operating room) and communicate with the camera 156 and the display 158 over a wired or wireless connection or other network.

In general, the one or more processors 152 may be configured to execute instructions stored in the memory 154 such that when they execute the instructions, the processors 152 perform various aspects of the methods described herein. The instructions may be executed by computer-executable components within a user computer or other user device (e.g., mobile device, wristband, smartphone, or any suitable combination thereof) integrated with an application, applet, host, server, network, website, communication service, communication interface, hardware, firmware, software, or any suitable combination thereof. The instructions may be stored in memory or on another computer-readable medium, such as RAM, ROM, flash memory, EEPROM, an optical disk (e.g., CD or DVD), a hard drive, a floppy drive, or any other suitable device.

As described above, the one or more processors 152 may be integrated into a handheld or mobile device (e.g., mobile device 150). In other example embodiments, one or more processors 152 are incorporated into a computing device or system (such as a cloud-based computer system, mainframe computer system, grid computer system, or other suitable computer system). Additionally or alternatively, the one or more processors 152 may be incorporated into a remote server that receives images of the surgical textile, reconstructs such images (e.g., as described above), analyzes such images (e.g., as described above), and transmits the quantification of one or more aspects of the fluid in the surgical textile to another computing device, which may have a display for displaying the quantification to a user. Examples of the one or more processors 152 are further described below with respect to fig. 4.

The system 100 may also include an optical sensor (e.g., in the camera 156) that operates to generate one or more images (such as a set of one or more still images or as part of a video feed). The camera 156 may include at least one optical image sensor (e.g., CCD, CMOS, etc., which captures a color optical digital image having pixels with red, green, and blue (RGB) color components), other suitable optical components, or both. For example, the camera 156 may include a single image sensor paired with suitable corresponding optics, filters (e.g., a color filter array such as a Bayer pattern filter), or both. As another example, the camera 156 may include a plurality of image sensors paired with suitable respective optics (such as at least one prism or diffractive surface) to separate the white light into separate color channels (e.g., RGB), where each color channel is detected by a respective image sensor. According to various example embodiments, the camera 156 includes any suitable image sensor and other optical components to enable the camera 156 to generate images.

The camera 156 may be configured to transmit the images to the processor 152 for analysis, to a database storing the images, or both. As previously described, the camera 156 may be integrated in the same device (e.g., the mobile device 150) as one or more of the other components of the system 100, or the camera 156 may be a separate component that communicates image data to the other components.

The system 100 may also include a display 158 (e.g., a display screen) that operates to display or otherwise communicate (e.g., present) some or all of the information generated by the system 100 (including, but not limited to, patient information, images of the surgical textile, quantitative indicators characterizing fluid in the surgical textile, or any suitable combination thereof) to a user (e.g., a doctor or nurse). Display 158 may include a screen on a handheld or mobile device, a computer monitor, a television screen, a projector screen, or other suitable display.

In some example embodiments, the display 158 is configured to display a user interface (e.g., a Graphical User Interface (GUI)) that enables a user to interact with the displayed information. For example, the user interface may enable a user to manipulate the image (e.g., zoom, crop, rotate, etc.) or manually define at least a region depicting the machine-readable code in the image. As another example, the user interface may enable the user to select display options (e.g., fonts, colors, languages, etc.), select content to show (e.g., patient information, quantitative indicators or other fluid-related information, alerts, etc.), or both. In some such example embodiments, the display 158 is user-interactive and includes a resistive or capacitive touchscreen responsive to skin, a stylus, or other user contact. In other such example embodiments, the display 158 is interacted with via a cursor controlled by a mouse, keyboard, or other input device.

In some example embodiments, the system 100 includes an audio system that communicates information to a user. The display 158, the audio system, or both may provide (e.g., present) the current value of the textile counter index, which may help track the usage of the surgical textile during the procedure.

Examples of the invention

Fig. 3A-3H are images of 2D machine-readable codes attached to surgical textiles, according to some example embodiments. These machine-readable codes are partially obscured to varying degrees by a red dye solution (e.g., water mixed with red food coloring) that simulates blood. For example, the machine-readable code depicted in fig. 3A is only lightly covered by the dye solution (primarily on the outer boundary of the machine-readable code). The machine-readable codes depicted in fig. 3B through 3H are covered by progressively greater amounts of the dye solution.

An attempt was made to scan and decode each of the machine-readable codes depicted in fig. 3A-3H using a machine-readable code reader application (e.g., one configured to read QR codes) on a mobile device (e.g., mobile device 150). The machine-readable code reader successfully decoded the machine-readable code of fig. 3A but failed to decode the machine-readable codes of fig. 3B-3H.

Each color (RGB) image was captured by a camera (e.g., camera 156) and converted to a grayscale representation by isolating and using only the R (red) channel values. For each image, a number of guesses at the region depicting the machine-readable code were generated by applying a Harris corner detector (e.g., implementing the Harris corner detection algorithm), thresholding the heat map values to identify corner coordinates and straight-edge coordinates, removing outliers among those coordinates, and fitting a rectangle to the remaining coordinates. The machine-readable code regions of each image were further pre-processed with Otsu thresholding to binarize the located regions depicting the machine-readable code and then fed into a machine-readable code processing algorithm to search for a successfully decoded character string. For each machine-readable code, a decoded string was returned if any of the multiple guessed regions was successfully decoded and no two decoding results conflicted. With this process, successful decoding was achieved for all of the machine-readable codes shown in fig. 3A through 3H.

Any one or more of the components described herein may be implemented using hardware alone or using a combination of hardware and software. For example, any component described herein may physically include an arrangement of one or more processors (e.g., processor 152) configured to perform the operations described herein for that component. As another example, any component described herein may include software, hardware, or both that configure an arrangement of one or more processors (e.g., processor 152) to perform the operations described herein for that component. Thus, different components described herein may include and configure different arrangements of processors at different points in time, or a single arrangement of such processors at different points in time. Each component described herein is an example of a means (means) for performing the operations described herein for that component. Further, any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components. Moreover, according to various example embodiments, components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices).

Any of the systems or machines (e.g., devices) discussed herein may be, include, or otherwise be implemented in a special-purpose (e.g., specialized or otherwise non-conventional and non-generic) computer that has been modified to perform one or more of the functions described herein for that system or machine (e.g., configured or programmed by special-purpose software, such as one or more software modules of a special-purpose application, operating system, firmware, middleware, or other software program). For example, a special-purpose computer system able to implement any one or more of the methodologies described herein is discussed below with respect to fig. 4, and such a special-purpose computer may accordingly be a means for performing any one or more of the methodologies discussed herein. Within the technical field of such special-purpose computers, a special-purpose computer that has been specifically modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured in accordance with the systems and methods discussed herein provides an improvement over the technology of similar special-purpose machines. Further, any two or more of the systems or machines discussed herein may be combined into a single system or machine, and the functions described herein for any single system or machine may be subdivided among multiple systems or machines.

Fig. 4 is a block diagram illustrating components of a machine 400 (e.g., mobile device 150) according to some example embodiments, the machine 400 being capable of reading instructions 424 from a machine-readable medium 422 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and performing, in whole or in part, any one or more of the methodologies discussed herein. In particular, fig. 4 illustrates the machine 400 in the example form of a computer system (e.g., a computer) in which instructions 424 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 400 to perform any one or more of the methodologies discussed herein may be executed in whole or in part.

In alternative embodiments, the machine 400 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine 400 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The machine 400 may be a server computer, a client computer, a Personal Computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smartphone, a set-top box (STB), a Personal Digital Assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 424 that specify actions to be taken by that machine, sequentially or otherwise. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute the instructions 424 to perform all or part of any one or more of the methodologies discussed herein.

The machine 400 includes a processor 402 (e.g., one or more Central Processing Units (CPUs), one or more Graphics Processing Units (GPUs), one or more Digital Signal Processors (DSPs), one or more Application Specific Integrated Circuits (ASICs), one or more Radio Frequency Integrated Circuits (RFICs), or any suitable combination thereof), a main memory 404 and a static memory 406, which are configured to communicate with each other via a bus 408. The processor 402 includes solid-state digital microcircuits (e.g., electronic, optical, or both) that are temporarily or permanently configured by some or all of the instructions 424, such that the processor 402 can be configured to perform, in whole or in part, any one or more of the methodologies described herein. For example, a set of one or more microcircuits of the processor 402 may be configured to execute one or more modules (e.g., software modules) described herein. In some example embodiments, the processor 402 is a multi-core CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU), where each core of the plurality of cores appears as a separate processor capable of performing, in whole or in part, any one or more of the methods discussed herein. Although the benefits described herein may be provided by the machine 400 having at least the processor 402, these same benefits may be provided by such a processor-less machine if a different kind of machine (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system) that does not include a processor is configured to perform one or more of the methods described herein.

The machine 400 may also include a graphics display 410 (e.g., a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, a Cathode Ray Tube (CRT), or any other display capable of displaying graphics or video). The machine 400 may also include an alphanumeric input device 412 (e.g., a keyboard or keypad), a pointer input device 414 (e.g., a mouse, a touchpad, a touch screen, a trackball, a joystick, a stylus pen, motion sensor, eye tracking device, data glove or other pointing tool), a data storage 416, an audio generation device 418 (e.g., a sound card, amplifier, speaker, headphone jack, or any suitable combination thereof), and a network interface device 420.

The data storage 416 (e.g., data storage device) includes a machine-readable medium 422 (e.g., tangible and non-transitory machine-readable storage medium) on which is stored instructions 424 embodying any one or more of the methodologies or functions described herein. The instructions 424 may also reside, completely or at least partially, within the main memory 404, within static memory 406, within the processor 402 (e.g., within a processor's cache memory), or any suitable combination thereof, before or during execution thereof by the machine 400. Thus, the main memory 404, static memory 406, and processor 402 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media). The instructions 424 may be transmitted or received over a network 490 via the network interface device 420. For example, the network interface device 420 may communicate the instructions 424 using any one or more transport protocols (e.g., hypertext transport protocol (HTTP)).

In some example embodiments, the machine 400 may be a portable computing device (e.g., a smartphone, tablet, or wearable device) and may have one or more additional input components 430 (e.g., sensors or meters). Examples of such input components 430 include image input components (e.g., one or more cameras), audio input components (e.g., one or more microphones), directional input components (e.g., compasses), position input components (e.g., Global Positioning System (GPS) receivers), orientation components (e.g., gyroscopes), motion detection components (e.g., one or more accelerometers), altitude detection components (e.g., altimeters), temperature input components (e.g., thermometers), and gas detection components (e.g., gas sensors). Input data collected by any one or more of these input components 430 may be accessed and made available to any of the modules described herein (e.g., with appropriate privacy notifications and protections, such as opt-in consent or opt-out consent, implemented according to user preferences, applicable regulations, or any suitable combination thereof).

As used herein, the term "memory" refers to a machine-readable medium capable of storing data, either temporarily or permanently, and can be taken to include, but not be limited to, Random Access Memory (RAM), Read-Only Memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 422 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions. The term "machine-readable medium" shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 424 for execution by the machine 400, such that the instructions 424, when executed by one or more processors of the machine 400 (e.g., the processor 402), cause the machine 400 to perform, in whole or in part, any one or more of the methodologies described herein. Accordingly, a "machine-readable medium" refers to a single storage apparatus or device, as well as to a cloud-based storage system or storage network that includes multiple storage apparatuses or devices. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes), in the example form of solid-state memory chips, optical discs, magnetic discs, or any suitable combination thereof.

A "non-transitory" machine-readable medium as used herein specifically excludes propagated signals per se. According to various example embodiments, the instructions 424 for execution by the machine 400 may be conveyed via a carrier medium (e.g., a machine-readable carrier medium). Examples of such carrier media include non-transitory carrier media (e.g., non-transitory machine-readable storage media such as solid-state memory that is physically movable from one place to another) and transitory carrier media (e.g., a carrier wave or other propagated signal conveying instructions 424).

Various operations of the example methods described herein may be performed, at least in part, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such a processor may constitute a processor-implemented module that operates to perform one or more operations or functions described herein. As used herein, "processor-implemented module" refers to a hardware module, wherein the hardware includes one or more processors. Thus, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, as the processor is an example of hardware, and at least some of the operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.

Further, such one or more processors may perform operations in a "cloud computing" environment or as a service (e.g., within a "software as a service" (SaaS) implementation). For example, at least some of the operations within any one or more of the methods discussed herein may be performed by a set of computers (e.g., as an example of a machine including a processor), where the operations may be accessed via a network (e.g., the internet) and via one or more appropriate interfaces (e.g., Application Program Interfaces (APIs)). Execution of certain operations may be distributed among one or more processors (whether residing only in a single machine or deployed across multiple machines). In some example embodiments, one or more processors or hardware modules (e.g., processor-implemented modules) may be located in a single geographic location (e.g., within a home environment, office environment, or server farm). In other example embodiments, one or more processors or hardware modules may be distributed across multiple geographic locations.

Throughout this specification, plural instances may implement a component, an operation, or a structure described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in the example configurations may be implemented as a combined structure or component having combined functionality. Similarly, structures and functionality presented as a single component may be implemented as separate components and functionality. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.

Some portions of the subject matter discussed herein may be presented in terms of algorithmic representations or symbolic representations of operations on data stored as bits or binary digital signals within a memory (e.g., computer memory or other machine memory). Such algorithmic or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others of ordinary skill in the art. An "algorithm," as used herein, is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulations of physical quantities. Typically, though not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as "data," "content," "bits," "values," "elements," "symbols," "characters," "terms," "numbers," "numerals," or the like. However, these terms are merely convenient labels and are to be associated with appropriate physical quantities.

Unless specifically stated otherwise, discussions utilizing terms such as "accessing," "processing," "detecting," "computing," "calculating," "determining," "generating," "presenting," "displaying," or the like, herein refer to an action or process that may be performed by a machine (e.g., a computer), that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms "a" or "an" are used herein (as is common in patent documents) to include one or more instances. Finally, as used herein, the conjunction "or" refers to a non-exclusive "or" unless specifically stated otherwise.

The description set forth below describes various examples of the methods, machine-readable media, and systems (e.g., machines, devices, or other apparatus) discussed herein.

A first example provides a method, comprising:

accessing, by one or more processors of a machine, an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating, by the one or more processors of the machine, an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing, by the one or more processors of the machine, at least one region of the adjusted version of the image, the region depicting the machine-readable code.

A second example provides the method according to the first example, further comprising: capturing, by an optical sensor, the image depicting the machine-readable code at least partially obscured by the substance having the dominant color.

A third example provides the method according to the first or second example, wherein the dominant color of the substance at least partially obscuring the machine-readable code is substantially red.

A fourth example provides the method of any one of the first to third examples, wherein:

the image is a color image; and

the adjustment of the color space of the image comprises converting the color space of the color image into a grayscale representation based on the dominant color of the substance.
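The grayscale conversion of the fourth example can be sketched as follows. This is one plausible reading, not the source's implementation: when the occluding substance is predominantly red (e.g., blood), keeping only the red channel renders the red stain nearly as bright as clean background, so the dark code modules keep their contrast. The function name, channel mapping, and nested-list image representation are illustrative assumptions.

```python
# Map a dominant color to the index of its channel in an (r, g, b) tuple.
DOMINANT_CHANNEL = {"red": 0, "green": 1, "blue": 2}

def to_grayscale(image, dominant_color):
    """Map an H x W grid of (r, g, b) tuples to an H x W grid of
    intensities by keeping only the channel of the dominant color."""
    ch = DOMINANT_CHANNEL[dominant_color]
    return [[pixel[ch] for pixel in row] for row in image]

# A red stain reads as bright in the red channel, so a dark code module
# under the stain keeps its contrast against the light background.
image = [[(200, 30, 30), (40, 10, 10)],    # stained light, stained dark
         [(255, 255, 255), (0, 0, 0)]]     # clean light, clean dark
gray = to_grayscale(image, "red")          # [[200, 40], [255, 0]]
```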

A fifth example provides the method of any of the first to fourth examples, wherein binarizing at least the region of the image comprises color thresholding a histogram of at least the region of the image.
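The fifth example calls for thresholding a histogram of the region but does not name an algorithm. Otsu's method, a standard histogram-based threshold selection that maximizes between-class variance, is one plausible choice; the pure-Python sketch below is illustrative only and is not asserted to be the technique the source uses.

```python
def otsu_threshold(gray):
    """Pick a threshold for an H x W grid of 0..255 intensities by
    maximizing the between-class variance of the intensity histogram."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    weight_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, threshold):
    """Binarize intensities: above the threshold becomes white (255),
    at or below becomes black (0)."""
    return [[255 if v > threshold else 0 for v in row] for row in gray]
```

On a bimodal region (dark modules against a light background), the selected threshold falls between the two intensity clusters.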

A sixth example provides the method of any one of the first to fifth examples, further comprising: locating a region depicting the machine-readable code in the adjusted version of the image.

A seventh example provides the method according to the sixth example, wherein locating the region depicting the machine-readable code in the adjusted version of the image comprises performing at least one of: corner detection of the adjusted version of the image or edge detection of the adjusted version of the image.
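The seventh example names corner detection or edge detection without specifying a detector (common choices would be Harris corners or Canny edges). As a minimal illustration of the edge-detection route, the sketch below computes a central-difference gradient magnitude and takes the bounding box of strong responses as a crude candidate for the code region; every name here is an assumption, not a detail from the source.

```python
def edge_magnitude(gray):
    """Approximate per-pixel gradient magnitude with central differences;
    interior pixels only (border pixels are left at zero)."""
    h, w = len(gray), len(gray[0])
    mag = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag[y][x] = abs(gx) + abs(gy)
    return mag

def bounding_box(mag, threshold):
    """Bounding box (x_min, y_min, x_max, y_max) of all pixels whose
    edge magnitude exceeds the threshold; None if no pixel qualifies."""
    ys = [y for y, row in enumerate(mag) for v in row if v > threshold]
    xs = [x for row in mag for x, v in enumerate(row) if v > threshold]
    if not ys:
        return None
    return min(xs), min(ys), max(xs), max(ys)
```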

An eighth example provides the method of any of the first to seventh examples, wherein the image depicts the machine-readable code attached to a surgical textile soiled with the substance having the dominant color.

A ninth example provides the method of any of the first to eighth examples, wherein the machine-readable code represents encoded information comprising at least one of a type of the surgical textile or an identifier of the surgical textile.

A tenth example provides the method of any of the first to ninth examples, further comprising: determining the encoded information represented by the machine-readable code by decoding the binarized region depicting the machine-readable code (e.g., by decoding the binarized region of the image that depicts the machine-readable code).
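Decoding a real symbology such as QR or Data Matrix requires a full decoder with finder-pattern handling and error correction; the toy function below only illustrates the shape of the tenth example's decode step, reading a binarized module grid as bits (black module = 1, an assumed convention) and returning the encoded integer.

```python
def decode_bits(binary_region):
    """Toy decoder: read a binarized module grid row-major as bits
    (black module, value 0, contributes a 1 bit) and return the
    resulting integer. A real symbology decoder is far more involved."""
    value = 0
    for row in binary_region:
        for v in row:
            value = (value << 1) | (1 if v == 0 else 0)
    return value

# Modules black-white / white-black encode the bits 1001, i.e., 9.
identifier = decode_bits([[0, 255], [255, 0]])  # 9
```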

An eleventh example provides the method according to the tenth example, further comprising:

the textile counter is incremented exponentially in response to determining the encoded information represented by the machine-readable code.

A twelfth example provides a system (e.g., a computer system) comprising:

one or more processors; and

a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

A thirteenth example provides the system according to the twelfth example, further comprising an optical sensor configured to capture an image depicting the machine-readable code, the machine-readable code being at least partially obscured by the substance having the dominant color.

A fourteenth example provides the system of the twelfth or thirteenth example, wherein the dominant color of the substance that at least partially obscures the machine-readable code is substantially red.

A fifteenth example provides the system of any of the twelfth to fourteenth examples, wherein:

the image is a color image; and

the adjustment of the color space of the image comprises converting the color space of the color image into a grayscale representation based on the dominant color of the substance.

A sixteenth example provides the system of any of the twelfth to fifteenth examples, wherein binarizing at least the region of the image includes color thresholding a histogram of at least the region of the image.

A seventeenth example provides the system of any of the twelfth to sixteenth examples, wherein the operations further comprise: locating a region depicting the machine-readable code in the adjusted version of the image.

An eighteenth example provides the system of the seventeenth example, wherein locating the region depicting the machine-readable code in the adjusted version of the image comprises performing at least one of: corner detection of the adjusted version of the image or edge detection of the adjusted version of the image.

A nineteenth example provides the system of any of the twelfth to eighteenth examples, wherein the image depicts the machine-readable code attached to a surgical textile soiled with the substance having the dominant color.

A twentieth example provides the system of any of the twelfth to nineteenth examples, wherein the machine-readable code represents encoded information comprising at least one of a type of the surgical textile or an identifier of the surgical textile.

A twenty-first example provides the system of any of the twelfth to twentieth examples, wherein the operations further comprise: determining the encoded information represented by the machine-readable code by decoding the binarized region depicting the machine-readable code (e.g., by decoding the binarized region of the image that depicts the machine-readable code).

A twenty-second example provides the system of any of the twelfth to twenty-first examples, wherein the operations further comprise: incrementing a textile counter in response to determining the encoded information represented by the machine-readable code.

A twenty-third example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising:

accessing an image depicting a machine-readable code at least partially obscured in the image by a substance having a dominant color;

generating an adjusted version of the image by adjusting a color space of the image based on the dominant color of the substance at least partially occluding the machine-readable code; and

binarizing at least one region of the adjusted version of the image, the region depicting the machine-readable code.

A twenty-fourth example provides a carrier medium carrying machine-readable instructions for controlling a machine to perform operations (e.g., method operations) performed in any of the examples described above.
