Fresh agricultural product identification system of retail checkout terminal


Reading note: This disclosure, "Fresh agricultural product identification system of retail checkout terminal," was designed and created by Marcel Herz and Christopher Sampson on 2018-12-20. Its main content includes: Systems and methods are disclosed, including: starting from a first number of images, generating a second number of images by digitally operating on the first number of images, extracting features from the second number of images, and generating a classification model by training a neural network on the second number of images, wherein the classification model provides a percent likelihood of classification of an image; embedding the classification model in a processor and receiving an image for classification, wherein the processor is in communication with a POS system, the processor executing the classification model to provide an output of a percent likelihood of classification of the image to the POS system.

1. A method of image classification, comprising:

in the preprocessing, starting with a first number of images, generating a second number of images by performing digital operations on the first number of images, extracting features from the second number of images, and generating a classification model by training a neural network on the second number of images, wherein the classification model provides a percent likelihood of classification of the images;

embedding the classification model in a processor; and

receiving an image for classification, wherein the processor is in communication with a POS system, the processor running the classification model to provide an output of a percent likelihood of classification of the image to the POS system.

2. The method of claim 1, wherein pre-processing an image for classification comprises capturing the image, extracting the features, and applying the extracted features to the neural network of the classification model to generate a percent likelihood of classification of the image.

3. The method of claim 1, wherein in preprocessing, a pre-trained Convolutional Neural Network (CNN) is trained on a large uncorrelated or independent data set and used as a feature detector.

4. The method of claim 1, wherein the neural network comprises a fully connected neural network.

5. The method of claim 3, wherein the feature extraction comprises:

a. the pre-trained CNN;

b. a color space histogram;

c. texture features generated by numerical feature vectors; and

d. dominant color segmentation.

6. The method of claim 1, wherein the POS system receives a formatted communication of an output through the classification model, the formatted output comprising a protocol for providing the POS system with a score of the percent likelihood of a category of the image.

7. A method of a system external to a point of sale (POS) system, wherein the external system includes a processor and captures an image and runs a classification model embedded in the processor, the classification model providing as output a score of a percentage likelihood of a category of the image, and the external system generates as output a formatted communication comprising a protocol to the POS system, wherein the POS system receives the formatted communication of the output through a classification model of the external system.

8. The method of claim 7, wherein upon preprocessing, the model embedded in a processor starts with a first number of images, generates a second number of images by enhancing the first number of images, extracts features from the second number of images, and generates the classification model by processing the second number of images via a neural network to provide a percent likelihood of classification of the images.

9. The method of claim 8, wherein, at the time of preprocessing, the neural network comprises a pre-trained Convolutional Neural Network (CNN) trained on a large set of uncorrelated or independent data used as shape and edge detectors.

10. The method of claim 8, wherein the neural network comprises a fully connected neural network.

11. The method of claim 9, wherein the feature extraction comprises:

a. the pre-trained CNN;

b. a color space histogram;

c. texture features generated by numerical feature vectors; and

d. dominant color segmentation.

12. A method for classifying a product, comprising:

populating a first number of images, generating a second number of images by enhancing the first number of images, and performing feature extraction from the second number of images, wherein the feature extraction includes running a pre-trained Convolutional Neural Network (CNN) as a high-level edge and shape identifier, and then generating a classification model by processing the second number of images via the neural network, wherein the classification model provides a percent likelihood of classification of the images.

13. The method of claim 12, wherein generating the classification model further comprises preprocessing feature extraction of the second number of images.

14. The method of claim 12, wherein the feature extraction comprises:

a. the pre-trained CNN;

b. a color space histogram;

c. texture features generated by numerical feature vectors; and

d. dominant color segmentation.

15. The method of claim 12, wherein the neural network comprises a fully connected neural network.

16. The method of claim 12, wherein the classification model is embedded in a processor that is external to and in communication with a POS system.

17. The method of claim 16, wherein the POS system receives a formatted communication of an output through the classification model, the formatted output comprising a protocol for providing the POS system with a score of the percent likelihood of the category of the image.

18. A method for expanding an image dataset, comprising:

in preprocessing, starting with a first number of images, segmenting the first number of images, generating a second number of images by performing digital operations on the first number of images, extracting features from the second number of images, and processing the second number of images via a neural network, thereby generating a classification model for deployment, wherein the segmentation of the images is not performed at the time of deployment.

19. The method of claim 18, wherein upon deployment, the image is captured, the features are extracted, and the extracted features are applied to the neural network of the classification model to generate a percent likelihood of classification of the image.

20. The method of claim 18, wherein, at preprocessing, the feature extraction includes a pre-trained Convolutional Neural Network (CNN) trained on a large set of uncorrelated or independent data used as shape and edge detectors.

21. The method of claim 18, wherein the neural network comprises a fully connected neural network.

22. The method of claim 20, wherein the feature extraction comprises:

a. the pre-trained CNN;

b. a color space histogram;

c. texture features generated by numerical feature vectors; and

d. dominant color segmentation.

23. The method of claim 18, wherein for deployment, the classification model is embedded in a processor that is external to and in communication with a POS system.

24. The method of claim 23, wherein the POS system receives a formatted communication of an output through the classification model, the formatted output comprising a protocol for providing the POS system with a score of a percentage likelihood of a category of the image.

25. A method of image classification, comprising:

generating, starting from a first number of images, a second number of images by performing a digital operation on the first number of images, extracting features from the second number of images according to:

a. a pre-trained Convolutional Neural Network (CNN) trained on a large set of uncorrelated or independent data used as feature detectors;

b. a color space histogram;

c. texture features generated by the numerical feature vectors; and

d. dominant color segmentation;

and training a neural network on the extracted features to generate a classification model, wherein the classification model provides a percent likelihood of classification of the image.

26. The method of claim 25, further comprising:

embedding the classification model in a processor; and

receiving an image for classification, wherein the processor is in communication with a POS system, the processor running the classification model to provide an output of a percent likelihood of classification of the image to the POS system.

27. The method of claim 26, wherein the POS system receives a formatted communication of an output through the classification model, the formatted output comprising a protocol for providing the POS system with a score of the percent likelihood of a category of the image.

Technical Field

The present invention relates to a retail checkout terminal fresh produce identification system and, more particularly, to a system employing machine learning that uses a fresh produce learning set to visually classify the types of fresh produce presented in use, from image data captured at the terminal. Furthermore, the present machine learning system is trained in a particular manner to address limitations inherent in the retail fresh produce environment. Although the present system and method are widely applicable to different types of retail checkout terminals, they will be described hereinafter primarily with reference to self-checkout terminals. It should be understood, however, that the invention is not necessarily limited to this particular application within the scope of the objects of the embodiments provided.

Background

Self-checkout is becoming increasingly common today, where shoppers are able to scan items and make payments essentially autonomously.

While bar codes can be scanned on packaged goods, for fresh produce (such as fresh fruits, vegetables, etc.), the user is required to make a selection from the screen display.

However, this method is inaccurate because fresh produce items are often misclassified unintentionally or fraudulently.

The present invention seeks to provide a system and method that will overcome or substantially ameliorate at least some of the disadvantages of the prior art, or at least provide an alternative.

It will be understood that, if any prior art information is referred to herein, this reference does not constitute an admission that the information forms part of the common general knowledge in the art in Australia or in any other country.

Disclosure of Invention

A retail checkout terminal fresh produce identification system for visual identification of fresh produce is provided herein, wherein the system is trained using machine learning. The system may be integrated with a conventional checkout POS system so as to be able to output one or more predicted fresh produce types to the checkout system for display on a screen for selection by the shopper.

The imaging components of the system may include a mechanical fixture comprising illumination (typically LED illumination), optionally a suitable homogeneous background, and a visible spectrum camera.

The camera captures an image of the presented fresh produce in order to classify the produce type.

In an embodiment, the system employs supervised machine learning through neural network optimization.

As will be described in further detail below, the present system is trained in a particular manner to address certain limitations inherent in the fresh produce identification system, while maintaining a desired or suitable detection accuracy.

In particular, utilizing large data sets may increase detection accuracy, addressing the problem of uncontrolled environments across numerous retailers. Problematically, however, such large data sets may not be available, particularly for retailers who stock relatively few fresh produce commodities.

Also, even where large data sets are available, alternative solutions cannot generalize performance to scenarios for which data has not yet been collected. The described method solves this problem by specifying how data is collected and expanded so that a model can be generated that generalizes to the wide variety of environments arising in POS applications.

Furthermore, the present system is ideally suited to minimizing computational requirements in terms of processing and storage, allowing the system to be constructed at low cost. The system may also accommodate image resolution limitations (as low as 740x480 pixels).

In addition, brightness and illumination color fluctuations at the checkout terminal may affect imaging of the fresh produce items presented.

Specifically, in one preferred embodiment, the present system captures and utilizes two feature vectors extracted from fresh produce image data, a color histogram feature vector and Haralick texture features, which are combined to form a full feature vector. Given the inherent limitations of retail produce identification systems, it was discovered during experimentation that the combination of these two feature vectors, and the manner of their use, may provide sufficient accuracy and other advantages.

In an embodiment, the histogram is band-divided into individual bands of increased width to reduce the length of the color histogram feature vector, allowing a smaller learning data set to be used to train a high-performance model. This is an important consideration for small and medium-sized fresh produce suppliers, for whom it is impractical to collect a large number of training images. In this way, performance may be improved on small fixed-size data sets by reducing the features available to the training model. In an embodiment, the number of bands may be tuned to optimize the accuracy of the neural network.

Furthermore, in an embodiment, 14 Haralick texture features are used, comprising: angular second moment, contrast, correlation, sum of squares: variance, inverse difference moment, sum average, sum variance, sum entropy, entropy, difference variance, difference entropy, information measure of correlation 1, information measure of correlation 2, and maximal correlation coefficient.

Likewise, a sub-selection of these texture features may be made when the accuracy of the neural network is again optimized to account for sample set and computation constraints.

Neural networks are trained using a fresh produce learning set, which, as described above, may be limited in number for a particular food trader. For example, the sample set training data may be only those fresh produce items that are typically stocked by a particular retail location.

A full feature vector is computed for each image and then used to optimize the neural network, including neural weights and structures.

Once trained, the system is deployed to capture images of unknown fresh produce items, presented at the checkout terminal, that fall within the categories of the learning data set. The deployed system similarly generates full feature vectors from the collected images and generates predictions using the trained neural model. The prediction is then passed to a checkout system for processing using a defined communication protocol.

Thus, according to this arrangement, the consumer need not select from fresh produce items on the screen, but may simply place fresh produce items in front of the camera for identification. In embodiments, where fresh produce items cannot be determined to some degree of accuracy, the deployed system may transmit a plurality of potential categories to a checkout system, which may then present a sub-selection of fresh produce items on a screen for selection by a consumer.

It should be noted that once deployed onto the deployment system, the optimized neural network model does not contain specific information about the digital signature/color histogram for each category of fresh produce. In this way, the deployment system does not require large memory or significant computing power to identify the fresh produce category, giving the advantage of reduced computation and storage, and hence reduced cost.

EP 0685814 A2 (D1) discloses an agricultural product identification system. According to D1, the processed image is compared with a reference image, wherein the object is identified when a match occurs. In contrast, the present system can avoid deploying reference images to the deployed system, needing only to provide a trained neural network model, thereby reducing the computational storage of the deployed system.

Furthermore, while D1 does address image features including color and texture, D1 does not utilize a full feature vector consisting of a combination of color and texture feature vectors, as the present system does.

Furthermore, D1 does not seek to reduce computational requirements, and therefore does not seek to band divide the color histogram as the present system does, let alone optimize the number and width of bands to optimize accuracy to address the learning set limitations. Furthermore, D1 does not disclose a sub-selection of texture features to further address this limitation.

As such, in view of the foregoing, various embodiments are disclosed herein. Disclosed are a method and system for image classification, the method and system including: in the preprocessing, starting from a first number of images, generating a second number of images by performing digital operations on the first number of images, extracting features from the second number of images, and generating a classification model by training a neural network on the second number of images, wherein the classification model provides a percent likelihood of classification of the images; embedding the classification model in a processor, receiving an image for classification, wherein the processor is in communication with a POS system, the processor executing the classification model to provide an output of a percent likelihood of classification of the image to the POS system.

According to one embodiment, there is provided a retail checkout terminal fresh produce identification method and system, the method and system comprising: at least one visible spectrum camera; a processor in operative communication with the visible spectrum camera, and a memory device for storing digital data, the memory device in operative communication with the processor across a system bus; and a checkout system interface, wherein, in use: the system is trainable, wherein: the processor is configured to receive fresh produce image data from a fresh produce learning set using the visible spectrum camera; the memory device comprises a feature vector generation controller configured to generate a full feature vector for each fresh produce image, the full feature vector comprising a combination of: a color histogram feature vector; a texture feature vector; a dominant color segmentation vector; and a pre-trained convolutional neural network; the memory device comprises a neural network optimization controller for optimizing a neural network model; the neural network optimization controller is configured to optimize the neural network model using the full feature vectors; and the optimized neural network model is deployed into the system; and the system is deployable to predict fresh produce classifications, wherein: the processor is configured to receive image data from the visible spectrum camera; the feature vector generation controller is configured to compute a full feature vector, comprising a color histogram feature vector and a texture feature vector, for the image data and input the full feature vector into a neural network optimized with the neural network model to output a fresh produce classification prediction; and the system outputs the fresh produce classification prediction via the checkout system interface.

In another embodiment, a method and system for image classification is disclosed, the method and system comprising: generating, starting from a first number of images, a second number of images by performing a digital operation on said first number of images, features being extracted from said second number of images according to:

a. a pre-trained Convolutional Neural Network (CNN) trained on a large data set used as a feature detector;

b. a color space histogram;

c. texture features generated by the numerical feature vectors; and

d. dominant color segmentation;

and training a neural network on the extracted features to generate a classification model, wherein the classification model provides a percent likelihood of classification of the image.

Also disclosed are methods and systems for classifying products, comprising: populating a first number of images, generating a second number of images by digitally operating on the first number of images, and performing feature extraction from the second number of images, wherein the feature extraction includes running a pre-trained Convolutional Neural Network (CNN) as a feature extractor, and then generating a classification model by processing the second number of images via the neural network, wherein the classification model provides a percent likelihood of classification of the images.

Further, a method of a system external to a point of sale (POS) system is disclosed, wherein the external system includes a processor and captures an image and runs a classification model embedded in the processor, the classification model providing as output a score of a percentage likelihood of a category of the image, and the external system generates as output a formatted communication comprising a protocol to the POS system, wherein the POS system receives the formatted communication of the output through the classification model of the external system.

Furthermore, a method and system for expanding an image data set is disclosed, the method and system comprising: in preprocessing, starting with a first number of images, segmenting the first number of images, generating a second number of images by performing digital operations on the first number of images, extracting features from the second number of images, and processing the second number of images via a neural network, thereby generating a classification model for deployment, wherein the segmentation of the images is not performed at the time of deployment.

Other features, including the color histogram feature vector, may be normalized to a certain scale.

The scale may be between 0 and 1.

The feature vector generation controller may be configured to divide the color histogram feature vector bands into discrete bands.

The neural network optimization controller may be configured to optimize the number of discrete bands.

The discrete bands may comprise between 5 and 100 bands.

The discrete bands may comprise 10 bands.

The texture feature vector may include a plurality of texture features.

The texture features may include at least a subset of: angular second moment, contrast, correlation, sum of squares: variance, inverse differential moment, sum mean, sum variance, sum entropy, difference variance, difference entropy, information correlation measure 1, information correlation measure 2, maximum correlation coefficient.

The neural network optimization controller is configured to select a subset of the plurality of texture features for optimizing the accuracy of the neural network.

The subset of the plurality of texture features may include between 8 and 12 texture features.

The subset may include 10 texture features.

The neural network optimization controller may be configured to optimize the number of neurons of the hidden layer.

The number of neurons may be between 100 and 120.

The number of neurons may be 116.

Other aspects of the invention are also disclosed.

Drawings

Although there may be any other form which may fall within the scope of the present invention, preferred embodiments of the present disclosure will now be described, by way of example only, with reference to the accompanying drawings, in which:

fig. 1 illustrates a retail checkout terminal fresh produce identification system 100 according to one embodiment;

FIGS. 2-6 illustrate exemplary color histogram vectors and associated band-split color histograms for different types of fruit;

fig. 7 shows exemplary Haralick feature vectors for the same fruit of FIGS. 2-6;

FIG. 8 illustrates test results of hidden layer optimization;

FIG. 9 shows test results for color band optimization;

FIG. 10 shows test results of texture feature optimization; and

Fig. 11 shows the test result detection accuracy.

FIG. 12 depicts pre-processing a first number of images to generate a second number of images.

FIG. 13 depicts a feature extraction and learning process for a second number of images to generate a classification model based on the second number of images.

FIG. 14 depicts a deployed classification model for use with a POS system.

Detailed Description of the Preferred Embodiments

Fig. 1 shows a retail checkout terminal fresh produce identification system 100 that is trained to predict fresh produce classifications for self-service terminals.

The system 102 includes a processor 105 for processing digital data. In operative communication with processor 105 across the system bus is a memory device 106. The memory device 106 is configured to store digital data, including computer program code instructions and associated data. Thus, in use, the processor 105 retrieves these computer program code instructions and associated data from the memory 106 for interpretation and execution thereon.

In the embodiment shown in FIG. 1, the computer program code instructions of the memory device 106 have been shown as logically divided into various computer program code controllers, which will be described in further detail below.

In use, the system 102 is trained using the fresh produce learning set 101.

In this regard, the system 102 may operate as a training system and a deployed system 114 for the respective operations, which may include shared or separate components. As described above, the deployed system 114 may take the form of a low-cost computing device with low processing and storage requirements.

The system 102 may include a mechanical fixture 115 for capturing image data. In this regard, the fixture 115 may include the visible spectrum camera 104 and associated illumination, as well as a suitable homogeneous surface background (not shown), for optimizing the image capture process. As such, during training, fresh produce items are placed in front of the visible spectrum camera 104 to capture image data, or alternatively image data is loaded from an image database.

During the training process, the processor 105 stores, in the memory 106, a set of image data 107 captured by the camera 104 from the fresh produce learning set 101. In embodiments, a particular food merchant may provide a sample set of each produce type to the system for learning.

The memory 106 may include an image cropping and segmentation controller 108 for cropping and segmenting the portions of the image containing fresh produce from the homogeneous background. In an embodiment, the image segmentation may employ Otsu's algorithm.

The image cropping and segmentation controller 108 separates the fresh produce commodity image from the homogeneous background for generating a full feature vector containing only fresh produce commodity on a preferably black background. This process minimizes any background interference and reduces the training required for the neural network model, allowing a smaller learning data set 101 to be used for good prediction performance.
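As an illustration only, the following minimal Python sketch shows how Otsu's algorithm might be used for this cropping-and-segmentation step. It assumes a bright homogeneous background and uses scikit-image; the patent does not specify an implementation, and the threshold polarity may need flipping for dark backgrounds.

```python
import numpy as np
from skimage import color, filters

def segment_produce(rgb):
    """Mask a produce item out of a bright homogeneous background (Otsu)."""
    gray = color.rgb2gray(rgb)            # float grayscale in [0, 1]
    t = filters.threshold_otsu(gray)      # global Otsu threshold
    mask = gray < t                       # assumes item is darker than background
    out = rgb.copy()
    out[~mask] = 0                        # produce on a black background
    return out, mask
```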

Thereafter, the luminance correction controller 109 may adjust the luminance of the image data.

As part of the data collection and prediction process, the luminance may be corrected by normalizing the average grayscale luminance of the RGB data image. The RGB image can be converted to equivalent gray scale using standard color to gray scale conversion. The average gray scale image brightness set point, typically half the dynamic range, is selected to normalize all RGB images to have approximately equal average gray scale brightness.

The luminance correction may comprise the following calculation steps:

Let $I_{xyz}$ denote an $x \times y$ pixel image, where $z = \{r, g, b\}$ denotes each color component of an RGB image.

Let $I_g$ denote the $x \times y$ pixel grayscale representation of $I_{xyz}$, and let $\bar{I}_g$ denote the average brightness of $I_g$.

Let $I_{sp}$ be the average grayscale brightness set point.

The luminance-corrected image $I_{LC}$ is then obtained by subtracting the brightness offset $\bar{I}_g - I_{sp}$ from each color component:

$$I_{LC} = I_{xyz} - (\bar{I}_g - I_{sp})$$
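A minimal sketch of this correction, assuming float RGB images in [0, 1] and the additive (offset) reading of the formula above:

```python
import numpy as np
from skimage import color

def correct_luminance(rgb, setpoint=0.5):
    """Shift all channels so mean grayscale brightness hits the set point.

    rgb: float array (H, W, 3) in [0, 1]; setpoint: half the dynamic range.
    """
    gray = color.rgb2gray(rgb)              # I_g
    offset = gray.mean() - setpoint         # offset = mean(I_g) - I_sp
    return np.clip(rgb - offset, 0.0, 1.0)  # I_LC = I_xyz - offset
```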

the memory device 106 may further comprise a feature vector generation controller configured to generate a full feature vector 110 comprising a combination of a color histogram feature vector 111 and a Haralick texture feature vector 112.

Color histogram

The color histogram of each fresh produce commodity is used as a feature of a digital signature vector for the fresh produce commodity. A set of color histograms and texture features are used to train the neural network model 113.

Fig. 2-6 illustrate exemplary RGB color histograms for different types of apples.

A color histogram may be created by taking an RGB image, using all possible color intensity values as x-axis bins, and collecting the frequency of occurrence of each intensity.

The color histogram may be normalized by scaling the maximum frequency of the band-split color histogram to 1. All other histogram values are linearly reduced by the same scaling factor so that all color histogram values are between 0 and 1.

In a preferred embodiment, the histogram bin width is increased during the "band partitioning" process. This process reduces the color histogram feature vector length, allowing a smaller learning data set to be used to train a high performance model.

This is an important feature for small and medium-sized fresh produce suppliers, for whom it is impractical to collect a large number of images. Performance can be improved on small fixed-size data sets by reducing the features available to train the model. The band partitioning process is performed by reducing the number of bins in the color histogram. The bins of the full color histogram may be assigned sequentially and distributed evenly to each larger bin, and the frequency of each larger bin may be calculated by averaging the frequencies of the smaller bins assigned to it. The result is a band-partitioned color histogram as shown in figs. 2B to 4B.

In an embodiment, the number/width of the bins may be optimized when optimizing the neural network.

The calculation of the full color histogram and the band-partitioned color histogram may proceed as follows.

Full histogram:

Let $I_{xyz}$ denote an $x \times y$ pixel image, where $z = \{r, g, b\}$ denotes each color component of an RGB image.

The Iverson bracket is defined herein as $[P] = 1$ if the proposition $P$ is true, and $0$ otherwise.

Where $n$ is the color depth, the histogram bins are calculated as:

$$r_i = \sum_x \sum_y [I_{xyr} = i], \quad g_i = \sum_x \sum_y [I_{xyg} = i], \quad b_i = \sum_x \sum_y [I_{xyb} = i], \quad i = 0, 1, \ldots, n-1$$

The three components of the histogram vector are then:

$$F_R = (r_0, r_1, \ldots, r_{n-1}), \quad F_G = (g_0, g_1, \ldots, g_{n-1}), \quad F_B = (b_0, b_1, \ldots, b_{n-1})$$

Let the maximum-normalized color histogram vectors $\hat{F}_R$, $\hat{F}_G$, $\hat{F}_B$ be obtained by dividing each component by the largest bin frequency across all three channels, so that all values lie between 0 and 1. The full histogram vector is then constructed as the concatenation:

$$F = (\hat{F}_R, \hat{F}_G, \hat{F}_B)$$

Histogram with band partitioning:

Where $b$ is the number of bands, the bandwidth is calculated as $m = \lceil n/b \rceil$.

The partitioned histogram bins are computed by averaging the $m$ full-histogram bins assigned to each band:

$$\bar{r}_j = \frac{1}{m} \sum_{i=jm}^{(j+1)m-1} r_i, \quad j = 0, 1, \ldots, b-1$$

and likewise for $\bar{g}_j$ and $\bar{b}_j$. Note: if $n$ is not an integer multiple of $b$, the last band $j = b-1$ is the average of the remaining $(m-1)$ or fewer components.

The three components of the partitioned histogram vector are then:

$$B_R = (\bar{r}_0, \bar{r}_1, \ldots, \bar{r}_{b-1}), \quad B_G = (\bar{g}_0, \bar{g}_1, \ldots, \bar{g}_{b-1}), \quad B_B = (\bar{b}_0, \bar{b}_1, \ldots, \bar{b}_{b-1})$$

The partitioned histogram vector is then constructed as the concatenation $B = (B_R, B_G, B_B)$, which may likewise be max-normalized.
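For illustration, a NumPy sketch of both histogram computations under the reconstruction above (the zero-padding rule for the last band is one reasonable reading of the note on non-integer multiples):

```python
import numpy as np

def full_histogram(rgb_u8, n=256):
    """Concatenated per-channel histogram, max-normalized (length 3n)."""
    chans = [np.bincount(rgb_u8[..., c].ravel(), minlength=n) for c in range(3)]
    f = np.concatenate(chans).astype(float)
    return f / f.max()

def banded_histogram(rgb_u8, bands=10, n=256):
    """Band-partitioned histogram: average adjacent bins into `bands` groups."""
    m = int(np.ceil(n / bands))                  # bandwidth
    out = []
    for c in range(3):
        h = np.bincount(rgb_u8[..., c].ravel(), minlength=n).astype(float)
        h = np.pad(h, (0, m * bands - n))        # last band may cover fewer bins
        out.append(h.reshape(bands, m).mean(axis=1))
    b = np.concatenate(out)
    return b / b.max()
```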

in an embodiment, 14 textural features (e.g., Harlick textural features) are used, the textural features including: angular second moment, contrast, correlation, sum of squares: variance, inverse differential moment, sum mean, sum variance, sum entropy, difference variance, difference entropy, information correlation measure 1, information correlation measure 2, maximum correlation coefficient. Mathematical calculation of these features is a common state of the art and is presented herein.

These 14 texture features are combined into a vector and used as a set of features for training a neural network predictor. In particular, fig. 7 shows the texture feature vectors collected for each of the apple varieties described above.
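As a sketch only: scikit-image's gray-level co-occurrence matrix utilities expose a subset of Haralick-style texture properties, which could stand in for the texture feature vector (computing the full set of 14 features would require a dedicated implementation or another library):

```python
import numpy as np
from skimage import color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

def texture_vector(rgb):
    """GLCM-derived texture features, averaged over four directions."""
    gray = img_as_ubyte(color.rgb2gray(rgb))
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["ASM", "contrast", "correlation", "homogeneity", "energy"]
    return np.array([graycoprops(glcm, p).mean() for p in props])
```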

The full feature vectors 110 are then used to optimize the neural network model 113.

The predictions output by the system 100 may be sent to the checkout kiosk 119 via the interface for on-screen display of the predicted goods on the screen 120. Such predictions may be cross-referenced with the produce database 122 for checkout purposes. As described above, in the event that the commodity cannot be predicted to some degree of accuracy, a sub-selection interface 121 may be presented on the screen 120, including possible candidates from the set of stocked commodities for selection by the user.

Neural network optimization-exemplary test results

During optimization, the following parameters may be varied: the number of neural network layers, the number of neural network nodes in each layer, the number of color histogram bands, and the number of texture features.

The system may automatically optimize these parameters to optimize the detection accuracy of the neural network model 113.

For example, for the exemplary test results provided below, 2866 images were used across 6 classes for the color histograms and texture feature vectors provided in figs. 2-7, the classes comprising:

1. Apple - Granny Smith

2. Apple - Pink Lady

3. Apple - Red Delicious

4. Apple - Royal Gala

5. Orange - Emperor

6. Orange - Navel

During training, as described in further detail below, the system optimizes the neural network model 113 to comprise a single hidden layer having 116 hidden layer neurons, utilizing all texture features and a band-partitioned color histogram with 10 bands.

Figures 8-10 provide various performance graphs to illustrate the optimization process.

For each of the various model configurations, 22 models were developed using random selection of training, validation, and test sets. The performance was then checked using the test set, where the following were calculated and plotted across the 22 runs:

a. the mean;

b. the 95% confidence interval of the mean; and

c. the minimum value.

The selection of the optimal neural network model configuration is based on these parameters, where the best performing model has the largest mean, the smallest confidence interval, and a minimum value near the lower confidence bound. Model performance was compared on this basis. Selecting a model in this way may provide a solution for finding the best performance on small datasets. A typical solution to improve model performance is to increase the size of the data set; however, as mentioned above, particularly for the fresh produce market, only a limited fresh produce learning set 101 is available. Thus, for the present application, it is not possible to optimize accuracy using the conventional method of increasing the size of the learning set.
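A sketch of this selection procedure, assuming scikit-learn; the split sizes and training setup here are illustrative, as the text does not specify them:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def evaluate_config(X, y, hidden_neurons, runs=22):
    """Train `runs` models on random splits; report mean, 95% CI, and minimum."""
    scores = []
    for seed in range(runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=seed)
        clf = MLPClassifier(hidden_layer_sizes=(hidden_neurons,),
                            max_iter=500, random_state=seed)
        clf.fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    s = np.asarray(scores)
    ci95 = 1.96 * s.std(ddof=1) / np.sqrt(runs)
    return s.mean(), ci95, s.min()

# Compare candidate configurations on the same feature matrix, e.g.:
# for h in (50, 100, 116, 140): print(h, evaluate_config(X, y, h))
```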

Hidden layer optimization

Fig. 8 shows hidden layer optimization where, as can be seen, using approximately 140 hidden neurons yields widely varying model performance, although the average performance is quite consistent, peaking at approximately 78% with 116 hidden neurons.

Color band optimization

Fig. 9 further illustrates why color band optimization is important for achieving optimal performance on the small fresh produce learning set 101. Specifically, when all 256 color bands were used, performance peaked at 116 hidden layer neurons with an average of about 78%. With 10 color bands, performance peaked at about 87%.

Texture feature optimization

Fig. 10 shows optimization of the number of texture features.

While the average performance of the models with 5, 10, and 14 texture features is about 87%, the model with 10 texture features may yield more consistent performance and is therefore the best choice for this particular application.

Fig. 11 shows the final performance of the system 100 according to this exemplary test.

As discussed above, various embodiments of methods and systems for image classification are disclosed. An embodiment utilizing two feature vectors extracted from fresh produce image data, a color histogram feature vector and Haralick texture features combined to form a full feature vector, is discussed in detail above. That is, the above discloses a method and system in which the feature vector generation controller uses two feature extraction processes. Alternatively, a feature vector generation controller is disclosed that processes an image dataset according to the following feature extraction processes:

a. a pre-trained Convolutional Neural Network (CNN) trained on large uncorrelated and/or independent data sets acting as feature detectors;

b. a color space histogram;

c. texture features generated by the numerical feature vectors; and

d. dominant color segmentation.

Thus, a feature vector generation controller that effects execution of the classification model may be embedded in the processor utilized at deployment.

As previously discussed, the deployment system 102, including a camera to capture images at the deployment location, may take the form of a low-cost computing device with low processing and storage requirements. The deployment system 102 may be installed external to the POS system or may be integrated into the POS system. In order for the presently described systems and methods to be meaningful in a business environment, whether external or integrated, the percent likelihood of a category of an image should be provided quickly, reliably, and inexpensively. Current POS systems may work with the presently described systems and methods externally. Thus, the presently described systems and methods are designed to communicate the percent likelihood of a category of an image to the POS system using the disclosed protocol.

As discussed above, in order to train a neural network on an input image, a learning set is provided. In fig. 1, digital operations including image cropping and segmentation 108 and brightness correction 109 are performed on the learning set. Additional details of the training system are provided in fig. 12, where the image dataset is a "second number of images" generated as a result of digitally operating on the "first number of images".

FIG. 12 depicts pre-processing a first number of images to generate a second number of images. The first number of images may be, for example, 100 images of one type of fruit provided at steps 150 and 152. The first number of images may also be processed by removing the background from the product image 154 to generate a masked product image 156. The images 152 and 156 are then subjected to enhancement techniques 158, including flipping, rotating, cropping, scaling, and skewing, as well as enhancement to address illumination and product variations, including illumination color and intensity transformations and color variation noise. This enhancement process can expand the original image set by more than a factor of 10. This expanded set of product images may be combined, by randomly or non-randomly superimposing the images onto a background set comprising empty scenes in which the classification model is to be deployed, for scene simulation 160 to generate a second, expanded number of product images 162. The background set may contain more than 10 scenes and may include various lighting conditions to provide robustness in addition to the enhancement. To maximize performance across the intended deployment environment, the background set may itself be enhanced as described above, including lighting and color variations. The background set may be an exhaustive representation of all environments expected in deployment. Starting from an initial first number of images, e.g., 100 images, product image expansion and scene simulation can generate more than 10,000 images (which may be segmented or non-segmented) with sufficient variation to train the neural network, making it robust to illumination, background variations, and natural product variations.
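A simplified Pillow-based sketch of such an expansion pipeline (the step numerals 152/156/158/160 follow FIG. 12; the parameter ranges are illustrative assumptions, not from the source):

```python
import random
from PIL import Image, ImageEnhance, ImageOps

def expand_images(product, mask, backgrounds, n_out=100):
    """Geometric/photometric augmentation (158), then scene simulation (160).

    product: RGB PIL image (152); mask: "L"-mode PIL mask (156);
    backgrounds: list of empty-scene RGB PIL images larger than `product`.
    """
    out = []
    for _ in range(n_out):
        im, m = product, mask
        if random.random() < 0.5:                              # horizontal flip
            im, m = ImageOps.mirror(im), ImageOps.mirror(m)
        angle = random.uniform(-180, 180)                      # rotation
        im, m = im.rotate(angle), m.rotate(angle)
        im = ImageEnhance.Brightness(im).enhance(random.uniform(0.6, 1.4))
        im = ImageEnhance.Color(im).enhance(random.uniform(0.7, 1.3))
        bg = random.choice(backgrounds).copy()                 # scene simulation
        x = random.randint(0, bg.width - im.width)
        y = random.randint(0, bg.height - im.height)
        bg.paste(im, (x, y), m)                                # composite via mask
        out.append(bg)
    return out
```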

As described above, other types of images may also be processed to simulate conditions without an expensive data collection process. For example, bag simulation may be performed by blending bag texture images with the masked produce images to provide robust classification performance when the product is placed in a translucent bag. Further, hand simulation may be performed using, for example, hand images combined with the masks. The same process can be used for goods that are not agricultural products (such as bulk goods). The benefit is that non-barcoded goods can be handled at the deployment site as quickly as if they carried a barcode.

Turning now to fig. 13, fig. 13 depicts a feature extraction and learning process performed on the second number of images to generate a classification model based thereon. The feature extraction process may include high-level feature extraction with a pre-trained Convolutional Neural Network (CNN) 180a. The CNN may be pre-trained on a large dataset (e.g., millions of images) and then truncated to deliver general feature extraction/detection. It is beneficial to choose a low-compute architecture with high performance (e.g., the MobileNet architecture). This approach achieves higher performance because the pre-trained CNN is able to identify high-level features in various scenarios. In conjunction with more general features, such as color and texture, an extraordinary level of performance and versatility can be achieved. Note that this approach contrasts with the state of the art in product identification, which typically trains a CNN architecture explicitly on the available data sets. While such architectures have proven to work well in a variety of applications, training these models without more than roughly 1,000 images per class has proven difficult, and the resulting models do not generalize well.
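A minimal Keras sketch of this idea, using a truncated ImageNet-pretrained MobileNet with global average pooling as a frozen feature detector (the source names MobileNet as one example; the input size and pooling choice here are assumptions):

```python
import tensorflow as tf

# Truncated MobileNet: convolutional base only, global-average-pooled output.
cnn = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
cnn.trainable = False                          # used purely as a feature detector

def cnn_features(batch):
    """batch: float array (N, 224, 224, 3) in [0, 255] -> (N, 1024) features."""
    x = tf.keras.applications.mobilenet.preprocess_input(batch)
    return cnn(x, training=False).numpy()
```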

The generated second, expanded number of product images 162 is received so that a number of features can be extracted. As discussed above, feature extraction may include: the pre-trained Convolutional Neural Network (CNN) 180b, trained on a large data set independent of the first set of images and used as the feature detector; a color space histogram 182, such as an R, G, B color histogram (where the color bands may be optimized); texture features generated by the numerical feature vector 184, such as Haralick texture features; and a dominant color segmentation 186, such as dominant color segments obtained using K-means color segmentation.

The fully connected feedforward neural network 188 is trained on the features extracted from the input images 162. The feedforward neural network may generate a score for each class, yielding a classification model 190 that may run on a feature vector generation controller to predict images received at the deployment location. The classification model 190 may be embedded as a feature vector generation controller and incorporated into the inexpensive processor 105 of FIG. 1. Consistent with the commercial aspects of the presently disclosed systems and processes, a benefit of running a classification model arrived at by the described process is that it requires little processing power, outputs quickly at the deployment location, and does not require storage of images or data signatures.
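A sketch of such a fully connected head over the concatenated feature vector (the 116-neuron single hidden layer follows the optimization results reported above; the loss and optimizer are assumptions):

```python
import tensorflow as tf

def build_classifier(n_features, n_classes, hidden=116):
    """Fully connected feedforward network producing a per-class score."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(hidden, activation="relu"),        # hidden layer
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # class scores
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```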

Turning to fig. 14, fig. 14 depicts a deployed classification model for use with a POS system. That is, the disclosed methods and systems may include a system 194 external to a point of sale (POS) system 196 at a deployment location, where the external system includes a processor 105, captures an image of, for example, an unknown fresh produce item 200 using a visible spectrum camera 198, and runs a classification model embedded in the processor 105, the classification model providing as output a score of the percent likelihood of the category of the image. Image feature extraction at POS deployment utilizes the same parameters and configuration as used in training. Alternatively, the deployed feature extraction may include variations, such as the non-segmented operation discussed below. The training data is stored in the cloud rather than locally; only the small trained classification model, a fraction of the size of the training data, is embedded in the processor 105, thereby deploying the feature vector generation controller. For example, the training data for 100 classes may be greater than 40 GB, while the deployed model and code base are less than 150 MB.

The external system 194 may generate, as output, a formatted communication comprising a protocol to the POS system, wherein the POS system 196 receives the formatted output communication through the classification model of the external system. In preprocessing, and starting with the first number of images, segmentation of the first number of images may be performed, while segmentation of images received from the visible spectrum camera 198 is not performed at the time of deployment. Alternatively, the visible spectrum camera 198 may be a 3-D camera, such that segmentation at deployment is not performed by image processing but is instead implemented by a depth threshold. Various adjustments may be made to limit the amount of processing required at deployment so that processing occurs quickly. The present system and method are intended to operate quickly and to keep the hardware of the vision system 194 inexpensive.

As noted, segmentation (extracting only the masked produce item in the foreground, without background) may impact processing efficiency at the deployment location. As mentioned, prediction may be run on non-segmented or segmented images. For segmented images, no background simulation is required. The segmentation robustness depends on the method: in threshold background subtraction, a model of the background (e.g., Gaussian, KNN) is created using one or more images, and the produce image is compared against it to create a mask; alternatively, depth information is obtained using stereo imaging and a mask is created based on the known background depth. For non-segmented operation, scene simulation may be used to teach the system to identify produce in various environments.
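An OpenCV sketch of the background-subtraction variants named above (the KNN subtractor is shown; cv2.createBackgroundSubtractorMOG2 is the Gaussian-mixture alternative, and the median filter is an illustrative cleanup step):

```python
import cv2

def build_subtractor(background_frames):
    """Model the empty scene from one or more background images."""
    sub = cv2.createBackgroundSubtractorKNN(detectShadows=False)
    for frame in background_frames:
        sub.apply(frame, learningRate=1.0)       # learn the background model
    return sub

def product_mask(sub, frame):
    """Foreground mask for a frame containing the presented item."""
    fg = sub.apply(frame, learningRate=0.0)      # compare only; don't update
    return cv2.medianBlur(fg, 5)                 # suppress speckle noise
```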

In another embodiment, a deployed system may combine multiple perspectives to increase the statistical variance of the features. Multiple cameras can be implemented by combining (stitching) the images into a single image and running the prediction process discussed previously. Mirrors can be used to increase the viewing angles and increase variation within the same stitching process. Illumination variation may be compensated using automatic exposure adjustment or HDR-enabled cameras. The camera may be calibrated in real time using a laser in the scanner, external illumination, or a dedicated calibration illumination/laser. The disclosed systems and methods may be implemented to fall back to manual selection of non-barcoded merchandise only when a sufficient prediction score is not achieved.

The algorithmic instructions for feature extraction (including the pre-trained CNN), along with the trained neural network, may be deployed on an external system that communicates with the POS system (such as a low-cost single-board computer), or directly onto the POS machine, where communication is facilitated internally.

As shown in FIG. 14, communication with the POS system may be accomplished through an HTTP server that sends JSON documents providing the percent likelihood of classification of the image of the item 200. An Ethernet connection may be used, but Wi-Fi, serial, or other digital data transfer mechanisms are equally applicable.
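A minimal sketch of such a server using only the Python standard library. Table 1 below specifies only a single "prediction" list ordered by "score"; the "label" field name and the route are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_predictions(labels, scores):
    """Build the JSON document: one "prediction" list, sorted by score."""
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return {"prediction": [{"label": l, "score": round(s, 4)} for l, s in ranked]}

class PredictionHandler(BaseHTTPRequestHandler):
    latest = format_predictions([], [])          # updated by the classifier loop

    def do_GET(self):
        body = json.dumps(type(self).latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("0.0.0.0", 8080), PredictionHandler).serve_forever()
```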

The disclosed systems and methods may provide various prediction options: object detection using threshold evaluation of the masked image; triggering by external input (a fresh produce button press, or scale stability); continuous prediction, so that a result is always available to the external system when needed; and/or constant prediction used to assess whether produce is present, triggering the external system when sufficient certainty is reached. Certain classifications may be excluded when they are not active in the POS system. A minimum top score may be enforced, such that one or more results may be displayed, with an optional cutoff score provided for ranking the results.

The disclosed system includes a convenient way to communicate with the POS through a defined protocol. The predictions made by the device may be stored in JavaScript Object Notation (JSON) files, allowing easy integration into most programming languages. The JSON file may be provided periodically, or when the POS requests it, via an HTTP server or serial link. Other standard data structures that allow the following information to be formatted, such as XML, may be used.

Table 1: JSON structure

■ The response contains a single JSON list "prediction".

■ The list is ordered from largest to smallest by "score".

Table 2: message type

The disclosed systems and methods provide a scalable solution for identifying non-barcoded merchandise (such as fresh produce and goods purchased in bulk) at a POS system using a camera. The disclosed solution allows new goods to be added. The disclosed systems and methods beneficially implement a general solution by learning high-level feature relationships that can be taught to account for lighting, background variations, and seasonal variations. The disclosed systems and methods avoid expensive hardware and are therefore scalable due to low implementation costs. High internet bandwidth and server costs, which may prohibit the use of cloud API services, are also avoided.

The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.
