Reading test cards using a mobile device
Abstract: This technique, "Reading test cards using a mobile device," was created by G·纳胡姆, N·布勒尔, A·奥夫德, and R·莫尼茨 on 2018-05-21. Its main content is as follows: an input image is received from a mobile device. A portion of the input image is determined to correspond to a test card, and an image transformation is applied to that portion. The image transformation corrects the portion of the input image. Based on the corrected image, a particular test for which the test card includes a result is identified, along with the result of that test.
1. A computing device for interpreting test cards, the device comprising:
a template matching subsystem configured to receive an input image from a mobile device and determine that a portion of the input image corresponds to a test card;
an image processing subsystem operatively connected to the template matching subsystem, the image processing subsystem configured to apply an image transformation to correct the portion of the input image corresponding to the test card;
a test identification subsystem configured to identify a particular test for which the test card includes a result based on the corrected portion of the input image; and
a result recognition subsystem configured to recognize the result of the particular test based on the corrected portion of the input image.
2. The computing device of claim 1, wherein, to determine that the portion of the input image corresponds to the test card, the template matching subsystem is configured to:
identify a plurality of potential features in the input image;
compare the plurality of potential features to a template to determine a relevance score; and
in response to the relevance score exceeding a threshold, determine that the portion of the input image corresponds to the test card.
3. The computing device of claim 1, wherein, to determine that the portion of the input image corresponds to the test card, the template matching subsystem is configured to:
identify a plurality of potential features in the input image;
compare the plurality of potential features to a plurality of templates to generate a plurality of relevance scores, each template corresponding to a different class of test cards;
select a first template of the plurality of templates, the first template corresponding to a first class of test cards and having a highest relevance score; and
determine that the portion of the input image corresponds to the first class of test cards.
4. The computing device of claim 1, wherein the image transformation comprises at least one of: cropping the input image, applying a deskew to the input image, and resizing the input image.
5. The computing device of claim 1, wherein an angle between a focal axis of a camera that captures the input image and a normal to a plane of the test card may be in a range of zero degrees to forty-five degrees without substantially affecting reliability of the identified results.
6. The computing device of claim 1, wherein a distance between a camera that captures the input image and the test card may be in a range of five centimeters to thirty centimeters without substantially affecting reliability of the identified results.
7. The computing device of claim 1, wherein the input image is captured by a camera of the mobile device, the test card is physically separated from the camera when the image is captured, and the test card is not supported by an accessory connected to the mobile device.
8. The computing device of claim 1, wherein the test identification subsystem identifies the particular test by providing at least a portion of the corrected image to a support vector machine that has been trained to identify the particular test from a plurality of possible tests.
9. The computing device of claim 1, wherein the result recognition subsystem recognizes the result of the particular test by providing at least a portion of the corrected image and an identifier of the particular test to a support vector machine that has been trained to recognize the test result from a plurality of possible results.
10. A method of interpreting a test card, the method comprising:
receiving an input image from a mobile device;
determining that a portion of the input image corresponds to a test card;
applying an image transformation to the portion of the input image, the image transformation correcting the portion of the input image;
identifying a particular test for which the test card includes a result based on the corrected portion of the input image; and
identifying the result of the particular test based on the corrected portion of the input image.
11. The method of claim 10, wherein determining that the portion of the input image corresponds to the test card comprises:
identifying a plurality of potential features in the input image;
comparing the plurality of potential features to a template to determine a relevance score; and
in response to the relevance score exceeding a threshold, determining that the portion of the input image corresponds to the test card.
12. The method of claim 10, wherein determining that the portion of the input image corresponds to the test card comprises:
identifying a plurality of potential features in the input image;
comparing the plurality of potential features to a plurality of templates to generate a plurality of relevance scores, each template corresponding to a different class of test cards;
selecting a first template of the plurality of templates, the first template corresponding to a first class of test cards and having a highest relevance score; and
determining that the portion of the input image corresponds to the first class of test cards.
13. The method of claim 10, wherein the image transformation comprises at least one of: cropping the input image, applying a deskew to the input image, and resizing the input image.
14. The method of claim 10, wherein an angle between a focal axis of a camera capturing the input image and a normal to a plane of the test card may be in a range of zero degrees to forty-five degrees without substantially affecting reliability of the identified results.
15. The method of claim 10, wherein a distance between a camera capturing the input image and the test card may be in a range of five centimeters to thirty centimeters without substantially affecting reliability of the identified results.
16. The method of claim 10, wherein the input image is captured by a camera of the mobile device, the test card is physically separated from the camera when the image is captured, and the test card is not supported by an accessory connected to the mobile device.
17. The method of claim 10, wherein identifying the particular test comprises providing at least a portion of the corrected image to a support vector machine that has been trained to identify the particular test from a plurality of possible tests.
18. The method of claim 10, wherein identifying the result of the particular test comprises providing at least a portion of the corrected image and an identifier of the particular test to a support vector machine that has been trained to recognize the test result from a plurality of possible results.
19. A system for reading a test card, the system comprising:
a mobile device having a camera and a display, the camera configured to acquire an image including a test card, the test card physically separated from the camera when the image is captured and not supported by an accessory connected to the mobile device, and the display configured to display a test result determined from the image; and
a diagnostic server communicatively coupled to the mobile device and configured to:
receive the image from the mobile device;
apply an image transformation to the image, the image transformation correcting the image;
determine the test result based on the corrected image; and
send the test result to the mobile device.
20. The system of claim 19, wherein the diagnostic server determines the test results by using a support vector machine to identify a particular test from a plurality of possible tests and to identify the results from a plurality of possible results of the particular test.
1. Field of the invention
The subject matter described herein relates generally to diagnostic testing, and in particular to reading the results of such testing using a camera of a mobile device.
Background
Disclosure of Invention
According to various embodiments, the above and other problems are solved by a computing device and method for reading a test card using an image captured by a mobile device. In one embodiment, a computing device reads a test card by obtaining an input image including the test card from a mobile device. The computing device determines whether the image (or a portion thereof) includes a test card and, if so, applies an image transformation to correct the input image (e.g., to correct for skew, size differences between images, etc.). The computing device also identifies, based on the corrected input image, a particular test for which the test card includes a result, as well as the result of that test. In one embodiment of the method, the test card is read substantially as described above with respect to the computing device. Embodiments of a system including both a mobile device and a diagnostic server operating together to read a test card are also disclosed.
Drawings
FIG. 1 is a high-level block diagram of a system suitable for reading test cards using a mobile device according to one embodiment.
FIG. 2 is a high-level block diagram illustrating an example of a computer used in the system of FIG. 1, according to one embodiment.
FIG. 3 is a high-level block diagram illustrating one embodiment of the mobile device shown in FIG. 1.
FIG. 4 is a high-level block diagram illustrating one embodiment of the diagnostic server shown in FIG. 1.
FIG. 5 is a flow diagram illustrating a method of providing test results using a camera of a mobile device, according to one embodiment.
FIG. 6 is a flow diagram illustrating a method of identifying a portion of an image including a test card according to one embodiment.
FIG. 7 is a flow diagram illustrating a method of applying an image transformation to assist in test result identification, according to one embodiment.
Detailed Description
Mobile devices have become almost ubiquitous. Most people now carry at least one mobile device most of the time in a bag or pocket. The computing power of these devices has also increased, expanding the range of functionality they can provide. Thus, in many contexts, mobile devices are underutilized resources. One such context is the reading of medical test cards. Enabling medical professionals to easily obtain results using their own mobile devices may reduce the time required to obtain results. It may also provide results in scenarios where a dedicated test card reader cannot be used due to failure or lack of proximity. For example, a mobile device may be used to read a test card at a patient's home or at the site of a medical emergency, which may be miles away from the nearest hospital equipped with a dedicated reader.
The figures and the following description depict certain embodiments by way of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described. Reference will now be made to several embodiments, examples of which are illustrated in the accompanying drawings. It should be noted that wherever possible, similar or identical reference numbers are used in the figures to refer to similar or identical functionality.
Overview of the System
FIG. 1 illustrates one embodiment of a system suitable for reading test cards using a mobile device. In various embodiments, the system includes a mobile device and a diagnostic server that operate together to read a test card.
In one embodiment, a user captures an image of a test card using a camera of the mobile device, and the image is provided to the diagnostic server for analysis.
Referring back to FIG. 1, once the image is analyzed, the test result is returned to the mobile device for presentation to the user.
FIG. 2 is a high-level block diagram illustrating one embodiment of a computer 200 suitable for use in the system of FIG. 1.
The storage device 208 includes one or more non-transitory computer-readable storage media, such as a hard disk drive, a compact disk read-only memory (CD-ROM), a DVD, or a solid state storage device. The memory 206 holds instructions and data used by the processor 202. Pointing device 214 is used in conjunction with keyboard 210 to input data into computer system 200. Graphics adapter 213 displays images and other information on display device 218. In some embodiments, the display device 218 includes touch screen functionality for receiving user inputs and selections. Network adapter 216 couples computer system 200 to a network.
The computer 200 is adapted to execute computer program modules to provide the functionality described herein. As used herein, the term "module" refers to computer program instructions or other logic for providing the specified functionality. Accordingly, a module may be implemented in hardware, firmware, or software, or a combination thereof. In one embodiment, program modules formed from executable computer program instructions are stored on the storage device 208, loaded into the memory 206, and executed by the processor 202.
Example System
FIG. 3 illustrates one embodiment of the mobile device 120 shown in FIG. 1. In the illustrated embodiment, the mobile device 120 includes a camera 310, a display 330, and a local data store 340.
The camera 310 captures images of the surroundings of the mobile device 120, including images of test cards to be read.
Display 330 presents information to the user, such as instructions on how to obtain an appropriate image of the test card and the results obtained by analyzing that image. In one embodiment, display 330 is a touch screen. The test card reader application presents a user interface on the display 330 for acquiring images. For example, display 330 may present instructions telling the user to take a picture of the test card. The user then taps a control to open a camera interface that displays a preview of what the camera 310 is currently capturing. Upon selection of another control (or a second selection of the same control), an image is captured.
In one embodiment, once the image is captured, it is presented to the user for review on display 330. The display 330 also includes a pair of controls, one for submitting the image for analysis and the other for discarding the image and capturing another. If the user selects the submit control, the image is sent to the diagnostic server for analysis.
Local data store 340 includes one or more computer-readable storage media (e.g., hard disk drive, flash memory, etc.) that store software and data used as part of the test card reading process. In one embodiment, the local data store 340 stores the test card reader application, the images captured by the camera 310, and the test results received from the diagnostic server.
FIG. 4 illustrates one embodiment of the diagnostic server shown in FIG. 1. In the illustrated embodiment, the diagnostic server includes a template matching subsystem 410, an image processing subsystem 420, a test identification subsystem 430, a result identification subsystem 440, a template storage 450, and a results store 460.
The template matching subsystem 410 compares the captured image to one or more templates to determine if a test card is present. In various embodiments, a template identifies features common to a particular class of test cards. For example, one type of gel card may include six adjacent sample reservoirs with various text boxes underneath (e.g., identifying the particular test corresponding to each reservoir and its result). Thus, for a view along the normal to the plane of the test card (i.e., from the "front"), the corresponding template may identify each of these features and their relative positions.
In one embodiment, the template matching subsystem 410 receives an image (e.g., an image captured by the camera 310 of the mobile device 120) and an indication of which type of test card is expected to be present. The indication may be hard coded (for systems designed for a single type of test card) or received with the image (e.g., based on a user selection). The template matching subsystem 410 applies a feature recognition algorithm to generate match data (e.g., an indication of regions of interest in the image that are predicted to correspond to features in the template, and a probability that the image includes the expected class of test card). Examples of such algorithms include the Scale-Invariant Feature Transform (SIFT), Fast Retina Keypoint (FREAK), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB). A common feature of these algorithms is that they attempt to account for variations in the scale and alignment of features in the image when identifying known features (in this case, features of the test card class).
In other embodiments, the template matching subsystem 410 compares the image to a plurality of templates, each template corresponding to a different class of test cards (e.g., there may be a template for each of the TOX/SEE™ urine test panel, the GENEIUS™ HIV test kit, and the ID-SYSTEM™ gel cards). In one such embodiment, the template matching subsystem 410 generates a degree of match for each template and selects the test card class corresponding to the closest match. In another embodiment, at least in some instances (e.g., where two or more templates yield degrees of match within a threshold of each other), the template matching subsystem 410 presents a list of possible matches to the user for selection.
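The class-selection logic described above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the class names, scores, and ambiguity margin are made-up stand-ins:

```python
def select_card_class(scores, ambiguity_margin=0.05):
    """Pick the test-card class with the highest relevance score.

    Returns the winning class name, or a list of candidate names when the
    top scores fall within `ambiguity_margin` of each other (mirroring the
    embodiment that asks the user to choose among close matches).

    scores: dict mapping class name -> relevance score.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_score = ranked[0]
    if len(ranked) > 1 and best_score - ranked[1][1] < ambiguity_margin:
        # Too close to call automatically: return all near-ties for the user.
        return [name for name, s in ranked if best_score - s < ambiguity_margin]
    return best_name
```

A clear winner is returned directly, while near-ties fall back to user selection, matching the two behaviors described in the paragraph above.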
The image processing subsystem 420 receives images and corresponding match data (or a subset thereof) that indicate regions of interest that are predicted to correspond to features defined in the template. In various embodiments, image processing subsystem 420 determines one or more transforms to apply to the image based on the matching data. Examples of such transformations include cropping, brightness adjustment, contrast adjustment, resizing, rotation, and skew correction. By applying these transformations, the image processing subsystem 420 produces a corrected image that approximates the look of a photograph of the test card taken under assumed ideal conditions (e.g., at a fixed distance, at uniform illumination, perfect focus, and at a camera axis aligned with the normal to the plane of the test card). For example, if a test card category includes four identical boxes on a vertical line, the appearance of these boxes in the image may be used to estimate the required rotation (based on the angle between the box center and the image edge), skew correction (based on the angle between the sides of the box created by the perspective effect), and resizing (based on the size of the box).
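As a toy illustration of the rotation estimate mentioned above (boxes expected on a vertical line), the angle of the line through two box centers relative to the image's vertical axis gives the rotation to undo. The coordinates and the axis convention here are assumptions for the sketch:

```python
import math

def rotation_from_centers(center_top, center_bottom):
    """Angle (in degrees) of the line through two box centers relative to
    the image's vertical axis. Rotating the image by the negative of this
    angle aligns the boxes vertically, as the template expects."""
    dx = center_bottom[0] - center_top[0]
    dy = center_bottom[1] - center_top[1]
    return math.degrees(math.atan2(dx, dy))
```

A perfectly vertical pair of centers yields zero degrees, so no rotation is applied.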
In some embodiments, iterative algorithms, such as random sample consensus (RANSAC), are used to identify the combination of geometric transformations that best accounts for differences between feature locations in the template image and corresponding locations in the input image. One advantage of RANSAC is that it is robust against outliers. Thus, the feature set generated by the template matching subsystem 410 need not be perfect to produce a reliable fit. In one embodiment, the output of the algorithm includes a metric indicative of the quality of the corrected image (e.g., how likely it is that the original test card is reproduced with sufficient precision to yield an accurate result). If the quality of the corrected image is below a threshold, the image is rejected and the user is notified of the analysis failure (e.g., enabling the user to submit a new image). Alternatively, a notification may be sent to the mobile device causing the camera 310 to automatically capture a new image. The process may loop until an image of sufficient quality has been received or an exit condition is reached (such as a certain number of failed attempts, the user selecting a "stop" option, or the image having a quality metric below a second threshold, indicating that the user has stopped aiming the camera at the test card).
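A minimal, self-contained RANSAC-style sketch shows why the approach tolerates outliers such as false feature matches. For brevity it fits only a 1-D scale-and-offset model (image_x = s·template_x + t) rather than a full geometric transform, and the point sets are made-up:

```python
import random

def ransac_scale_offset(template_pts, image_pts, iters=200, tol=2.0, seed=0):
    """Toy RANSAC: fit image_x = s * template_x + t from correspondences
    that may include outliers (false feature matches)."""
    rng = random.Random(seed)
    n = len(template_pts)
    best_model, best_inliers = None, []
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)  # minimal sample for a 2-parameter model
        if template_pts[i] == template_pts[j]:
            continue  # degenerate sample; cannot determine a scale
        s = (image_pts[i] - image_pts[j]) / (template_pts[i] - template_pts[j])
        t = image_pts[i] - s * template_pts[i]
        inliers = [k for k in range(n)
                   if abs(s * template_pts[k] + t - image_pts[k]) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (s, t), inliers
    # The inlier fraction doubles as a quality metric for the fit, analogous
    # to the corrected-image quality metric described above.
    return best_model, len(best_inliers) / n
```

Even with one grossly wrong correspondence, the sampled consensus still recovers the true scale and offset, and the low inlier fraction flags the outlier.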
In some embodiments, the image transformation does not change the base pixel values. Instead, the image is rotated, resized, and/or skew-corrected by altering the geometric coordinates of the pixels. For example, if a given test card has two lines that are one inch apart and the input image has two lines that are determined to match those features but are separated by a number of pixels corresponding to one-half inch, the image processing subsystem may alter the coordinates of the pixels to double the spacing between the lines. Thus, the corrected image will have two lines one inch apart, as expected from the template. Those skilled in the art will understand how the coordinates of the pixels can be changed to implement various image transformations.
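The coordinate-doubling example above might look like the following sketch; the half-inch/one-inch spacings are the hypothetical values from the text, expressed in pixels:

```python
import numpy as np

def rescale_coordinates(points, template_spacing, measured_spacing):
    """Scale pixel coordinates so that the measured spacing between matched
    features becomes the spacing the template expects. Pixel values are
    untouched; only the geometric coordinates change."""
    factor = template_spacing / measured_spacing
    return np.asarray(points, dtype=float) * factor
```

With lines measured 50 pixels apart where the template expects 100, every coordinate is scaled by two, doubling the spacing as described.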
Assuming that the corrected image is of sufficient quality, the test identification subsystem 430 analyzes it to identify which test (or tests) the card includes results for. In one embodiment, the test identification subsystem 430 uses a support vector machine (e.g., LIBSVM or the Open Source Computer Vision Library (OpenCV)) for this purpose. A support vector machine is a machine learning model that has been trained under human supervision to classify input images. For example, if a first region of a test card may contain any one of a set of test identifiers (e.g., a string, a symbol, etc.), the support vector machine (once trained) can distinguish between tests based on which identifier is found in the first region. In this example, the portion of the corrected image corresponding to the first region of the test card (rather than the entire corrected image) may be passed to the support vector machine, which then determines which test was performed. Those skilled in the art will recognize that a variety of markings may be used by the support vector machine to determine which tests the card includes results for.
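A linear SVM's decision function reduces to a dot product with learned weights. The sketch below mimics that one-vs-rest decision with hand-picked stand-in weights; these are not trained values and not the actual model from this disclosure (which would come from LIBSVM or OpenCV training):

```python
import numpy as np

# Hypothetical weight vectors standing in for a trained SVM; in practice
# these come from supervised training on labeled identifier regions.
WEIGHTS = {"test_A": np.array([1.0, -0.5, 0.2]),
           "test_B": np.array([-0.8, 0.9, 0.1])}
BIASES = {"test_A": 0.0, "test_B": 0.1}

def identify_test(region_features):
    """One-vs-rest linear classification of the test-identifier region:
    the class with the largest decision value w . x + b wins."""
    x = np.asarray(region_features, dtype=float)
    scores = {name: float(w @ x) + BIASES[name] for name, w in WEIGHTS.items()}
    return max(scores, key=scores.get)
```

Only the identifier-region features are passed in, mirroring the embodiment that crops the corrected image to the first region rather than classifying the whole card.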
The result identification subsystem 440 identifies the results of the one or more tests identified by the test identification subsystem 430. In one embodiment, because the test cards in a class have a substantially uniform format and the image has been corrected, the location of the test identifier also provides the location of the corresponding result. For example, if a test card includes adjacent boxes for a test identifier and its result, the result of a particular identified test (determined by test identification subsystem 430) may be found by analyzing the adjacent box. A support vector machine may again be used to determine the result by analyzing the corresponding portion of the corrected image.
Those skilled in the art will appreciate that different tests will have different result markings. Thus, once a particular test has been identified (e.g., by test identification subsystem 430), this information may be used to assist in the analysis performed by result identification subsystem 440. For example, the result area of the test card may include one shape for negative results (e.g., a single line, cross, open circle, etc.) and a different shape for positive results (e.g., a pair of lines, cross, closed circle, etc.). Similarly, for tests that produce a numeric output, the result region may contain a number (e.g., a cholesterol level, an antibody count, etc.). Thus, the result identification subsystem 440 need only consider those results that the identified test may produce.
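One way to exploit the identified test when reading results is a per-test vocabulary of possible markings, so only the outcomes that test can produce are considered. The test names and markings below are hypothetical examples, not from the disclosure:

```python
# Hypothetical result vocabularies: once the particular test is known, only
# the markings that test can produce need to be considered.
RESULT_MARKS = {
    "antibody_screen": {"single_line": "negative", "double_line": "positive"},
    "cholesterol": None,  # numeric result: parse the recognized digits instead
}

def read_result(test_name, recognized_mark):
    """Map a recognized marking to a result, using the identified test to
    restrict the set of possible outcomes."""
    vocab = RESULT_MARKS[test_name]
    if vocab is None:
        return float(recognized_mark)  # numeric tests yield a number
    return vocab[recognized_mark]
```

Restricting the vocabulary this way both simplifies recognition and rules out results the identified test cannot produce.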
In some embodiments, the result identification subsystem 440 also generates a degree of certainty for the identified results. In one such embodiment, if the certainty is below a threshold, the result is discarded. Additionally or alternatively, the results are returned to the mobile device 120 along with the degree of certainty for review by the user.
Template storage 450 includes one or more computer-readable storage media that store the templates used by template matching subsystem 410. Because each template is used independently to identify a particular class of test cards, the system can be extended for use with new classes of test cards by adding corresponding templates to the template store 450. In one embodiment, template store 450 is a hard drive within the diagnostic server.
The results store 460 includes one or more computer-readable storage media that store results generated by the image processing subsystem 420 (e.g., processed portions of an image), results generated by the result identification subsystem 440 (e.g., results of diagnostic tests that are added to a patient's file), or both. In one embodiment, results store 460 is a hard drive within the diagnostic server.
Example method
FIG. 5 illustrates one embodiment of a method of providing test results using a camera of a mobile device.
In the embodiment shown in FIG. 5, the method begins with the diagnostic server receiving an input image captured by a camera of the mobile device.
Referring back to FIG. 5, the diagnostic server determines that a portion of the input image corresponds to a test card and applies an image transformation to correct that portion.
Referring again to FIG. 5, the diagnostic server identifies 540 the particular test for which the test card includes a result, based on the corrected portion of the image.
Once a particular test has been identified 540, the diagnostic server recognizes the result of that test based on the corrected portion of the image.
The diagnostic server then sends the test result to the mobile device for display to the user.
FIG. 6 illustrates one embodiment of a method of identifying a portion of an image that includes a test card.
In the embodiment shown in FIG. 6, the method begins with the template matching subsystem 410 obtaining a template corresponding to the class of test card expected in the image.
The template matching subsystem 410 locates 620 features in the input image that may correspond to features defined in the template. For example, if the template indicates that the test card class includes a series of boxes for the test identifier and results, the template matching subsystem 410 may identify any quadrilateral in the input image as a potentially matching feature. In various embodiments, the template matching subsystem 410 extracts a feature vector for each point in the input image and compares it to the feature vectors that identify the template's regions or points of interest. If a feature vector extracted from the input image matches a feature vector from the template, the corresponding location in the input image is tentatively determined to be an instance of the region or point of interest defined by the template. Thus, the template matching subsystem 410 generates a set of pairs of image locations and corresponding regions or points defined in the template. In one embodiment, the analysis is performed on spatial derivatives of the input image. This effectively identifies the boundaries between regions of different color or intensity (e.g., edges of objects). The analysis is therefore robust to variations in brightness, color balance, and the like in the input image, since it focuses on the contours of the depicted objects rather than their raw pixel values.
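The robustness of derivative-based matching to brightness changes can be seen directly in a tiny example. Finite differences stand in here for whatever derivative operator an implementation actually uses:

```python
import numpy as np

def horizontal_gradient(image):
    """Finite-difference spatial derivative along x. A uniform brightness
    offset is the same for adjacent pixels and therefore cancels, which is
    why matching on derivatives is robust to illumination changes."""
    img = np.asarray(image, dtype=float)
    return img[:, 1:] - img[:, :-1]
```

Adding a constant offset to every pixel (a globally brighter photo) leaves the gradient, and thus the detected edges, unchanged.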
The template matching subsystem 410 compares the located features to the template to determine a relevance score indicating the probability that the image includes a test card of the corresponding class.
For example, if a template includes three uniformly sized boxes adjacent to each other, and the only box-like features in the image form a triangle and are separated by large distances, the probability of a true match (and thus the relevance score) is low. In contrast, if the image includes three aligned box-like features whose only differences from the template are non-uniform sizes and trapezoidal rather than rectangular shapes, the relevance score will be high (as this likely corresponds to the test card being at an angle to the camera, which is easy to resolve). In one embodiment, the relevance score is determined by applying the RANSAC algorithm using the parameters of possible image transformations (e.g., resizing, skewing, and rotation) as variables. Because RANSAC is robust to outliers, the algorithm converges to a set of transform parameters with a high probability of matching even if several pairs of identified image locations and regions/points of interest are false positives. For example, even where a match between a single feature in the template and a portion of the input image may be uncertain, the relevance score for the entire set may still be high enough to justify relying on the algorithm when reading the test card. One skilled in the art can recognize other ways to determine a relevance score.
In various embodiments, regardless of the particular manner of determination, if the relevance score exceeds a threshold, the template matching subsystem 410 identifies 640 the corresponding portion of the image as including a test card. For example, in one embodiment, a high-pass filtered version of the image is used to determine the relevance score. Thus, only edges contribute to the relevance score, while homogeneous regions are ignored. As a result, the relevance score drops rapidly toward zero when the input image and template are misaligned. Thus, a relatively low relevance score (e.g., 0.25) may be used as a threshold and still reliably distinguish between a match and a mismatch. In some embodiments, indicia unrelated to the particular outcome (e.g., a manufacturer or provider logo) are used as an additional validation check. Once a potential match is identified (e.g., the relevance score exceeds a threshold), the logo is sought at the location where it would appear if the match were genuine. The secondary relevance score for the logo may be determined by comparing the area of the image where the logo is expected to be found to a logo template, in a manner similar to the way the input image is compared to the template of the entire test card. If the logo is found at the expected location, the match is confirmed. If the logo is not found at the expected location, the potential match is rejected. One skilled in the art can recognize other ways in which the presence of a logo in a desired location can be determined.
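The scoring and logo check can be sketched as follows, with normalized cross-correlation of edge signals standing in for the actual relevance computation; the 0.25 threshold is the example value from the text:

```python
import numpy as np

def relevance_score(image_edges, template_edges):
    """Normalized cross-correlation of high-pass (edge) signals. Because
    homogeneous regions contribute nothing, the score falls toward zero
    quickly when misaligned, so even a low threshold separates a match
    from a non-match."""
    a = np.asarray(image_edges, dtype=float).ravel()
    b = np.asarray(template_edges, dtype=float).ravel()
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0

def confirmed_match(card_score, logo_score, threshold=0.25):
    """Secondary validation: a candidate match is kept only if the logo is
    also found at its expected location."""
    return card_score > threshold and logo_score > threshold
```

Identical edge profiles score 1.0, fully misaligned ones score 0.0, and a candidate whose logo check fails is rejected even when the card-level score is high.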
In other embodiments, the degree of match is presented to the user (e.g., by sending a message back to the mobile device 120), who may request analysis or provide a new image. In further embodiments, other methods are used. For example, a match above a first threshold may be automatically accepted, while a match between the first threshold and a second threshold may be presented to the user for confirmation, while a match below the second threshold is automatically rejected and the user is informed that no match was found.
FIG. 7 illustrates an embodiment of a method of applying an image transformation to assist in test result identification.
In the embodiment shown in FIG. 7, the image processing subsystem 420 crops the input image to the portion identified as corresponding to the test card and resizes that portion to a standard size.
In various embodiments, the cropping and resizing of the input image enable the test card to be analyzed at a standard scale regardless of the distance at which the image was captured. In one embodiment, distances between the camera and the test card in a range of five centimeters to thirty centimeters can be accommodated without substantially affecting the reliability of the results.
The image processing subsystem 420 determines 730 an angular difference between a focal axis of a camera used to capture the image and a normal to the plane of the test card. In one embodiment, the image processing subsystem 420 compares the potential features identified in the image to the test card template to determine 730 an angular difference. For example, the degree to which parallel or perpendicular lines on the test card converge in the image can be used to determine the angular difference. As another example, the relative sizes of features on the test card and the sizes of those features appearing in the image may also be used to determine 730 the angular difference.
Having determined 730 the angular difference, image processing subsystem 420 applies 740 a deskew to the image. In some embodiments, image processing subsystem 420 determines a skew correction amount to be applied 740 to compensate for the angular difference. In other words, after the skew is corrected, the image will look similar to what it would look if the angular difference were zero (i.e., if the camera was aimed directly at the test card). In one embodiment, image processing subsystem 420 may correct for angular differences of up to forty-five degrees without substantially compromising the reliability of the results read from the test card. In another embodiment, angular differences of up to twenty degrees may be corrected without a substantial effect on reliability. In other embodiments, other ranges of angular differences may be corrected without substantially affecting reliability.
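The geometry behind estimating the angular difference from relative feature sizes can be illustrated with foreshortening: a card tilted by theta appears shortened by cos(theta) along the tilt direction. The lengths below are hypothetical; a real implementation would derive them from matched features:

```python
import math

def viewing_angle_and_correction(apparent_length, true_length):
    """Foreshortening sketch: invert cos(theta) shortening to recover both
    the angular difference and the stretch factor that undoes it."""
    ratio = min(apparent_length / true_length, 1.0)  # guard against noise > 1
    theta = math.degrees(math.acos(ratio))
    return theta, 1.0 / ratio
```

A feature measured at 80% of its template length implies roughly a 37-degree tilt (within the correctable range described above) and a 1.25x stretch along that axis to compensate.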
The image processing subsystem 420 also determines 750 the illumination level based on the image. In one embodiment, the image processing subsystem calculates the average intensity of each pixel in the image to determine 750 the overall illumination. The image processing subsystem 420 then applies 760 the brightness correction to normalize the image to a standard brightness. In a related embodiment, the image processing subsystem 420 also applies contrast adjustments to the image. For example, in low illumination situations, increasing the contrast of the image may help to distinguish the feature of interest from the background of the test card.
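A minimal sketch of the brightness-normalization step described above; the target mean of 128 is an assumed standard for 8-bit images, not a value specified in the disclosure:

```python
import numpy as np

def normalize_brightness(image, target_mean=128.0):
    """Scale intensities so the image's mean matches a standard brightness,
    then clip to the valid 8-bit range."""
    img = np.asarray(image, dtype=float)
    mean = img.mean()
    if mean == 0:
        return img  # avoid dividing by zero for an all-black image
    return np.clip(img * (target_mean / mean), 0.0, 255.0)
```

A dim image (mean 64) is scaled up to the standard brightness, after which a contrast adjustment could further separate features from the card background.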
Thus, the embodiment of the method shown in FIG. 7 produces a corrected image approximating one captured under ideal conditions, improving the reliability with which tests and their results are identified.
Other considerations
Some portions of the above description describe embodiments in terms of algorithmic processes or operations. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. Although these operations are described functionally, computationally, or logically, they are understood to be implemented by computer programs comprising instructions executed by a processor or equivalent circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to an arrangement of functional operations as a subsystem, without loss of generality.
Any reference to "one embodiment" or "an embodiment" as used herein means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
Some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. It should be understood that these terms are not intended as synonyms for each other. For example, some embodiments may be described using the term "connected" to indicate that two or more elements are in direct physical or electrical contact with each other. In another example, some embodiments may be described using the term "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context.
As used herein, the terms "comprises," "comprising," "includes," "including," "contains," "has," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Furthermore, the use of "a" or "an" is used to describe elements and components of embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.
After reading this disclosure, those skilled in the art will appreciate additional alternative structural and functional designs for systems and processes for reading test cards using mobile devices. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the subject matter described is not limited to the precise construction and components disclosed herein and that various modifications, changes, and variations which will be apparent to those skilled in the art may be made in the arrangement, operation, and details of the methods and apparatus disclosed herein. The scope of the invention is limited only by the appended claims.