System, apparatus and method for item location, inventory creation, routing, imaging and detection

Document No.: 230846 | Publication date: 2021-11-09

Reading note: This technology, "System, apparatus and method for item location, inventory creation, routing, imaging and detection," was designed and created by D. P. Stout, E. M. Baird, and T. Moore on 2019-09-04. Its main content includes: A system is provided for implementing in-store route planning of user-generated shopping lists using existing store cameras, artificial intelligence, and machine learning. The system uses pixel buffer comparisons of items imaged in real time against a database of machine-learned images. The system also provides item identification and detection through machine learning to improve the shopper experience. The system and method also include drone assistance schemes and radio-signal item and biological detection to improve accuracy. Other features to improve guidance and accuracy include landmark navigation and masking to improve the accuracy of item identification and detection. The system may be a self-contained kiosk.

1. A kiosk for checkout at a store, the kiosk comprising:

a body mounted with both a first camera and a second camera, both in communication with a processor;

the first camera is configured to detect and authenticate a user; and

the second camera is configured to detect at least one user-selected product;

wherein the processor compares a pre-existing database of images with the data collected by the second camera using machine learning to accurately detect the user-selected product; and

wherein the processor generates an inventory of items detected by the second camera.

2. The kiosk of claim 1 wherein the first camera is a biometric camera.

3. The kiosk of claim 1 wherein the user is automatically debited when the first camera or the second camera detects that the user is walking away.

4. The kiosk of claim 1 wherein the user initiates checkout on the kiosk.

5. The kiosk of claim 1 wherein the kiosk further comprises a display screen.

6. The kiosk of claim 1 wherein the kiosk further comprises a third camera configured to view a user cart.

7. The kiosk of claim 6 wherein the third camera is directed to view the contents of a shopping cart.

8. The kiosk of claim 6 wherein the third camera is in communication with the processor, the processor configured to visually or audibly notify if an item is left in the user cart.

9. A system for processing an order, the system comprising:

a processor;

a first camera in communication with the processor, the first camera and the processor configured to detect that a user is placing an order;

an order input interface configured to accept an order, the order input interface in communication with the processor; and

a second camera in communication with the first camera and the processor, the second camera spaced apart from the first camera;

wherein the order entered in the order input interface is marked incomplete until the user and the ordered item are detected in the same pixel buffer by the first camera or the second camera.

10. The system of claim 9, wherein the first camera and/or the second camera is a biometric camera.

11. The system of claim 9, wherein payment is completed after the user places an order.

12. The system of claim 9, wherein payment is completed when the user walks away from the first camera or the second camera.

13. A kiosk for checkout at a store, the kiosk comprising:

a body mounted with a camera and a display, the camera and the display in communication with a processor;

the camera is configured to detect and authenticate a user; and

the processor is configured to generate targeted marketing material for display to the user on the display using data associated with the user.

14. The kiosk of claim 13 wherein the material displayed to the user is an advertisement based on a user's shopping history.

15. The kiosk of claim 13 wherein the user is able to interact with the display by using gestures, voice commands, applications, and/or physical interactions.

Technical Field

The present specification relates generally to systems and methods for assisting with item positioning (such as shopping or warehouse item positioning and tracking), detection, and route planning, and more particularly to systems and methods for determining the fastest and most efficient route through a store or warehouse using user-generated shopping lists, in-store SKU databases, machine learning, drone assistance, and image recognition and detection to facilitate faster and more efficient checkout and/or recognition processes.

Background

Shopping lists are well known in the art. People typically create shopping lists on their mobile devices or using traditional paper. Mobile applications are increasingly being used for online shopping and item retrieval information. Specifically, it is known in the art to visit a store's website to determine whether a particular store has an item in inventory and the price of each item. However, these previously known shopping methods are outdated and generally do not provide the most accurate, up-to-date information. A common problem is that users check item availability online to confirm that a product is available, only to find upon arriving at the store that the product is not available. These existing systems are not updated in real time and do not use the specific physical store's database location information, making them inconvenient for the user.

Accordingly, there is a need for alternative and efficient shopping systems and methods for creating checklists, confirming availability, and generating route planning information that is easily accessible to users. Furthermore, there is a need to streamline the checkout process when using the system and method.

The use of various methods such as infrared, sonar, etc. to detect the physical presence of organisms (humans, animals, etc.) and objects (any inanimate object) is well known in the art. However, these methods are unable to determine additional biophysical information and often cannot detect the presence of something if it is located behind another physical object.

Accordingly, there is a need for improved methods and systems for accurately detecting the wavelength, frequency, and general presence of living beings and things.

Disclosure of Invention

A system for implementing in-store routing of a user-generated shopping list, the system comprising: a global positioning system that determines a location of the user to determine a particular store in which the user is currently located; a processor connected to a SKU database of the particular store, the SKU database including product price, product location, and product availability information, wherein the processor is configured to determine a most efficient route based on location information from only the SKU database of the particular store, each product being classified by aisle and/or location; and mapping location information on a predetermined store layout map using mapping points to determine a fastest route between the mapping points, wherein each mapping point specifies a particular product location. In some embodiments, the processor connects to the SKU database before the user enters a store location. The processor may be configured to display to the user, on a user display screen, product availability information from the global positioning system and the SKU database for the products in the shopping list. One option is to provide the fastest route as a list of directions that includes product location information. In some embodiments, the provided route is optimized for efficiency, the system is configured to create multiple lists, and/or the user may select the list from which to start shopping. Further, in some embodiments, the SKU database information is sent to the user in real time.

In another embodiment, there is provided a system for allowing a user to check out at a store, the system comprising: a mobile application that allows a user to generate a shopping list; a kiosk in wireless communication with the mobile application, the kiosk having at least one camera in communication with a database of learned images, the database being continuously updated based on previous transactions within the store; and a processor that compares physically present items to the database of images, the processor comparing the physically present items to items in the shopping list on the user's mobile application and comparing items within a shopping cart to the shopping list to confirm the items in the shopping cart and/or a corresponding payment total. In some embodiments, the mobile application allows wireless payment. In some embodiments, the kiosk includes a scale configured to weigh a variable-weight item, the scale configured to send weight information to the mobile application. The kiosk may include a plurality of cameras. The kiosk may communicate with a camera that is physically spaced apart from the kiosk itself. Further, the kiosk may communicate with existing security cameras in the store. In some embodiments, the camera is a depth-sensing camera.

In another embodiment, there is provided a system for authenticating items collected by a user in a store, the system comprising: at least one camera in a shopping location of a store, the camera capable of viewing a user and the user's surroundings within a frame of the shopping location, the camera configured to detect, in real time, a training item picked up by the user, the camera detecting only the training item, wherein the training item is a product having data stored in an image database corresponding to the store; and a processor in communication with the image database, the processor using machine learning to compare the training item in real time with the image database to determine a particular product. In some embodiments, data collected from the training items in real time is stored in the image database for future use, thereby improving the accuracy of the system. Further, one option is to use pixel buffer comparison to authenticate the training item within the frame. In some embodiments, the processor determines an identifier and a confidence score. The system may also include a mobile application used by the user to generate a shopping list, the processor in communication with the mobile application to confirm that the correct product was added to the shopping list by comparing data collected in real time with the user's shopping list. The system may include a mobile application configured to generate a shopping list, the processor in communication with the mobile application to add a particular product to the shopping list.

In another embodiment, a system and method for authenticating items during a checkout process is provided, the system and method comprising: an image database containing a plurality of database images, the image database comprising a plurality of images of individual items; at least one camera configured to capture a checkout image of an item during the checkout process to authenticate the item; and an authentication system configured to compare the checkout image with the database images using a pixel buffer comparison to authenticate the item, wherein the system is configured to store the checkout image in the image database such that the checkout image becomes a database image, thereby growing the image database to improve accuracy and precision through machine learning.

In another embodiment, a system and method for using artificial intelligence to identify items in a store with a mobile device camera, the system and method comprising: an image database containing a plurality of database images, the image database comprising a plurality of images of individual items; a mobile device camera configured to view images in real time; a processor in communication with the image database, the processor configured to compare the images to the plurality of database images in real time to determine the item at which the user is pointing the mobile device camera; and a system configured to allow a user to interact with the recognized item within the application, wherein the interactive functionality includes taps, voice, clicks, and/or gestures.

In another embodiment, a bag configured to weigh an item, the bag comprising: a holding portion configured to hold a product selected by a user while shopping; a weighing device configured to weigh an item placed in the holding portion; an accelerometer configured to detect rotation or movement to prevent erroneous readings; and wherein the bag and corresponding system are configured to zero the weight reading after the user weighs a first item, thereby allowing the user to weigh a second item in the same bag without removing the first item from the bag. The bag may also include a processor. In some embodiments, the processor communicates with the mobile application to send weight information related to the product. The processor may be in wireless communication with a mobile application. The bag may also include storage capacity to store weight information relating to the variable-weight items contained within the bag. In some embodiments, the bag may also include a sensor for connecting and communicating with a checkout station at a store.
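By way of illustration only, the tare-and-weigh cycle described above may be sketched in Python as follows. The class name, method names, reading units, and stability threshold are assumptions made for the sketch, not the actual bag firmware or device API.

```python
# A minimal sketch, assuming hypothetical scale and accelerometer readings.
class GravityBag:
    def __init__(self):
        self.tare_offset = 0.0   # running zero point of the scale, in grams
        self.item_weights = []   # stored weights for each item added to the bag

    def is_stable(self, accel_reading_g):
        """Reject readings while the bag is rotating or moving (accelerometer)."""
        return abs(accel_reading_g) < 0.05   # illustrative stability threshold

    def weigh_new_item(self, scale_reading, accel_reading_g):
        """Weigh the newest item without removing earlier items from the bag."""
        if not self.is_stable(accel_reading_g):
            return None   # ignore erroneous readings taken during movement
        item_weight = scale_reading - self.tare_offset
        self.item_weights.append(item_weight)
        self.tare_offset = scale_reading   # zero the reading for the next item
        return item_weight   # e.g. sent wirelessly to the mobile application
```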

In another embodiment, a drone assistance system configured to assist a user in a store while shopping, the drone assistance system comprising: a drone in communication with a mobile device of a user, the drone having a light-projecting feature, the drone in communication with a processor, the processor having the particular location of an item in the store, and the drone configured to receive the item location from the processor after the user requests the item location, the drone configured to find the item within the store and illuminate the individual item to indicate to the user the exact location of the item. In some embodiments, the drone includes an onboard camera and a corresponding processor. In some embodiments, the processor is configured to perform object detection and identification through the camera. Further, the processor may be configured to perform facial recognition through the camera. In some embodiments, the camera and the processor are configured to track the user in the store.

In another embodiment, a drone system includes: a shelving unit having an uppermost shelf; and a wireless charging pad configured to charge a drone, the wireless charging pad located on the uppermost shelf. In some embodiments, the shelving unit is configured to hold items in a store. The shelving unit may be arranged to form an aisle in a store, and a power cable may extend through the shelving unit to power the wireless charging pad.

In yet another embodiment, a system for imaging an item, the system comprising: a radio transmitter configured to transmit a signal projected onto a desired object or area, the signal traveling through a solid surface and creating a return signal when the signal interacts with the surface or object, the return signal being received by a radio receiver; and a processor configured to interpret the signal received by the radio receiver to determine whether there are additional items behind the object that cannot be viewed by a conventional camera. In some embodiments, the system includes at least one camera, and data from the camera and signals received from the radio receiver are combined to provide a composite image or accounting of all items within the area.

In another embodiment, a detection system includes: an artificial intelligence program in communication with a processor and a radio transmitter configured to transmit a signal onto an object or living being; and a radio receiver configured to receive echoes of the transmitted signal, the processor configured to receive and interpret data received from the radio receiver, the processor in communication with the artificial intelligence program to interpret data from the echoes, and the processor configured to determine a confidence threshold, wherein if the confidence threshold is met, the processor outputs the data in a predetermined desired format, and if the confidence threshold is not met, the radio transmitter transmits a new signal to enhance the data. In some embodiments, the radio transmitter transmits a signal through an object to detect another object that is not visible to a camera. In some embodiments, the signal is sent to a human or animal to detect biomagnetism, body frequency, and/or body wavelength. In some embodiments, the signal is sent to a human or animal to detect bodily function parameters, including neural commands. In some embodiments, the detected biomagnetism, body frequency, and/or body wavelength is collected as data to interpret the frequency of a wave and correlate that frequency with a particular body state. In some embodiments, the particular body state is a mood, a disease, and/or a disorder. In some embodiments, the transmitted signal is a Wi-Fi signal. In some embodiments, the predetermined desired format is a visual representation of the detected object.
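A minimal sketch of the confidence-threshold loop described above follows. The helper functions and threshold value are hypothetical placeholders, since this description does not specify a radio hardware interface.

```python
# transmit_signal(), receive_echo(), interpret(), and format_output() are
# hypothetical placeholders; the description does not specify a hardware API.
CONFIDENCE_THRESHOLD = 0.90  # illustrative value

def detect(target, max_attempts=5):
    accumulated_echoes = []
    for _ in range(max_attempts):
        transmit_signal(target)                    # radio transmitter sends a signal
        accumulated_echoes.append(receive_echo())  # radio receiver collects the echo
        result, confidence = interpret(accumulated_echoes)  # AI interpretation
        if confidence >= CONFIDENCE_THRESHOLD:
            # Threshold met: output in the predetermined desired format,
            # e.g. a visual representation of the detected object.
            return format_output(result)
        # Threshold not met: loop again, transmitting a new signal to enhance
        # the accumulated data.
    return None  # give up after the allotted attempts
```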

In yet another embodiment, a system for landmark navigation within a store, the system comprising: a mobile application on a mobile device, the mobile application having a user interface configured to allow a user to select a landmark navigation option; a processor configured to plan a route based on the user's shopping list or based on a particular object being purchased by the user; and a route matrix that evaluates distances between existing navigation points, wherein if a distance meets or exceeds a landmark requirement threshold, the system inserts a landmark navigation point between the qualifying navigation steps, thereby notifying the user that a particular product is located near a particular landmark. In some embodiments, the location of the landmark is used to provide a visual map to the user. In some embodiments, data relating to the particular location of an existing landmark in the store is retrieved from at least one in-store camera.

In another embodiment, a system for improving image processing, the system comprising: at least one depth-sensing camera that finds a focal point within a frame; and a processor configured to retrieve depth data from the corresponding region of the frame in which the focal point is located, the processor configured to use the depth data to determine a background contained within the frame, wherein the processor then places a binary mask over the background to occlude unnecessary images within the field of view of the camera, thereby improving accuracy. In some embodiments, the mask visually occludes all images within the frame except the focal point. In some embodiments, data is acquired only from unobstructed focal points within the frame to minimize noise within the frame.

There is provided a kiosk for checkout at a store, wherein the kiosk comprises a body mounted with both a first camera and a second camera, both in communication with a processor, the first camera configured to detect and authenticate a user, and the second camera configured to detect at least one user-selected product, wherein the processor compares a pre-existing database of images with data collected by the second camera using machine learning to accurately detect the user-selected product, and wherein the processor generates an inventory of items detected by the second camera. In some embodiments, the first camera is a biometric camera. In some embodiments, the user is automatically debited when the first camera or the second camera detects that the user is walking away, or the user initiates checkout on the kiosk. In some embodiments, the kiosk further includes a display screen. The kiosk may also include a third camera configured to view the user's cart, with the third camera directed to view the contents of the shopping cart. The third camera may be in communication with the processor, the processor configured to visually or audibly notify if an item is left in the user's cart.

There is provided a system for processing an order, the system having: a processor; a first camera in communication with the processor, the first camera and the processor configured to detect that a user is placing an order; an order input interface configured to accept an order, the order input interface in communication with the processor; and a second camera in communication with the first camera and the processor, the second camera being spaced apart from the first camera, wherein the order entered in the order input interface is marked incomplete until the user and the ordered item are detected by either the first camera or the second camera in the same pixel buffer. The first camera and/or the second camera may be a biometric camera. In some embodiments, payment is completed after the user places an order. In other embodiments, payment is completed when the user walks away from the first camera or the second camera.

A kiosk for checkout at a store, the kiosk comprising a body mounted with a camera and a display, the camera and the display in communication with a processor, the camera configured to detect and authenticate a user, and the processor configured to use data associated with the user to generate targeted marketing material for display to the user on the display. In some embodiments, the material displayed to the user is an advertisement based on the user's shopping history. In some embodiments, the user is able to interact with the display by using gestures, voice commands, applications, and/or physical interactions.

Drawings

The embodiments set forth in the drawings are illustrative and exemplary in nature and are not intended to limit the subject matter defined by the claims. The following detailed description of illustrative embodiments can be understood when read in conjunction with the following drawings, where like structure is indicated with like reference numerals and in which:

FIG. 1 depicts a flowchart detailing the high-level steps taken by a system in accordance with one or more embodiments shown and described herein;

FIG. 2 depicts an exemplary diagram illustrating an aisle priority system in accordance with one or more embodiments shown and described herein;

FIG. 3 depicts an exemplary store layout and corresponding route according to one or more embodiments shown and described herein;

FIG. 4 depicts a flowchart detailing the store SKU connections and processing undertaken by the system according to one or more embodiments shown and described herein;

FIG. 5 depicts a screenshot of a center display in accordance with one or more embodiments shown and described herein;

FIG. 6 depicts a listing selection display screen shot in accordance with one or more embodiments shown and described herein;

FIG. 7 depicts a checklist display screen shot in accordance with one or more embodiments shown and described herein;

FIG. 8 depicts a route display screen shot in accordance with one or more embodiments shown and described herein;

FIG. 9 depicts a schematic diagram of a checkout device and system for use in checkout of a user upon leaving a store in accordance with one or more embodiments shown and described herein;

FIG. 10 depicts an exemplary embodiment of a store inventory in accordance with one or more embodiments shown and described herein;

FIG. 11 depicts an exemplary performance graph based on the image catalog as shown in FIG. 10 in accordance with one or more embodiments shown and described herein;

FIG. 12 depicts an exemplary improved store inventory, according to one or more embodiments shown and described herein;

FIG. 13 depicts an exemplary performance graph based on the improved image catalog as shown in FIG. 12 in accordance with one or more embodiments shown and described herein;

FIG. 14 depicts a flowchart depicting an item identification Intelligent System ("IRIS") according to one or more embodiments shown and described herein;

FIG. 15 depicts an exemplary screenshot of an AI feature with the camera pointed at an item in accordance with one or more embodiments shown and described herein;

FIG. 16 is an exemplary model of IRIS in use according to one or more embodiments shown and described herein;

FIG. 17 is a flow diagram of an overall system in accordance with one or more embodiments shown and described herein;

FIG. 18 is a schematic illustration of a system as broadly disclosed herein, in accordance with one or more embodiments shown and described herein;

FIG. 19 is an exemplary photograph and graphical representation of a system showing the location of a product as viewed by a mobile application highlighting the product location to enable a user to easily locate the product in accordance with one or more embodiments shown and described herein;

FIG. 20 is a schematic diagram of a gravity bag having a weight measuring device, an accelerometer, a pressure sensor, and the like, according to one or more embodiments shown and described herein;

FIG. 21 depicts an illustrative embodiment of a gravity bag in use according to one or more embodiments shown and described herein;

FIG. 22 is a generalized depiction of a drone shopping assistant according to one or more embodiments shown and described herein;

FIG. 23 depicts, from a side view, an exemplary model of a drone charging rack in accordance with one or more embodiments shown and described herein;

FIG. 24 depicts an overall exemplary diagram of a user's use of a drone assistance system in which a drone moves from a drone charging station along a flight path to a user location to illuminate a particular product for which the user has requested assisted location, wherein the illumination is a light projection from the drone to the particular product, in accordance with one or more embodiments shown and described herein;

FIG. 25 depicts a flowchart depicting a process of a radio transmitter detection system according to one or more embodiments shown and described herein;

FIG. 26 depicts a side-by-side comparison of a camera capturing a raw feed and capturing a feed with an active binary mask in accordance with one or more embodiments shown and described herein;

FIG. 27 depicts a schematic diagram of a graphically displayed landmark navigation system in accordance with one or more embodiments shown and described herein;

FIG. 28 depicts a flowchart for embodying a landmark navigation system in accordance with one or more embodiments shown and described herein;

FIG. 29 depicts a kiosk and a corresponding flow diagram for the kiosk according to one or more embodiments shown and described herein;

FIG. 30 depicts an exemplary kiosk according to one or more embodiments shown and described herein;

FIG. 31 depicts an exemplary kiosk according to one or more embodiments shown and described herein; and

FIG. 32 depicts an exemplary checkout system and corresponding flow diagram in accordance with one or more embodiments shown and described herein.

Detailed Description

The system, apparatus and method of the present application have two parts. The first part relates to systems and methods for creating checklists and optimizing routes within stores using a mobile application. The second part relates to checkout systems and devices configured to streamline the checkout process while the user is using the aforementioned mobile application.

The system and method of the present specification provides one-stop shopping for people shopping in stores, particularly grocery stores. From a high level overview, the system and method includes a checklist creation system, a connection to a Global Positioning System (GPS) to determine specific store location information, a connection to the SKU system of the specific store to provide real-time availability, location and price information, and a route planning function to provide the most efficient and fastest route throughout the store using information retrieved only from the SKU database of product information for the specific physical store. The route planning system obtains items on a list generated by a user. Upon activation, the system will determine the fastest route that the user can take throughout the store based solely on the SKU information specific to the particular store in which the user is located.

FIG. 1 discloses a flow chart of how the system connects items on a user-generated grocery list to a store SKU system, generally through the use of a mobile application. The system 100 enables a user to create one or more item lists, such as grocery lists, within the mobile application. A user may have multiple lists and is thus able to manage multiple lists, share lists with group members, or create a particular list for a particular event.

At a first step 102, the system communicates with the GPS to determine the exact geographic location of the user. The system references the geographic location to determine the particular store in which the user is currently located. Once the geographic location is established, the system references data from that store's SKU system at step 104. The store SKU system stores information such as product pricing, availability, and location within the store.

The system of the present specification is particularly advantageous in that it connects directly to the SKU system of a particular store. The SKU system provides live (also referred to as real-time) and fully accurate data regarding the price, availability, and location of items in that particular store. Similar systems fail to provide real-time and accurate data regarding product availability, pricing, and location because those systems are not directly connected to the SKU system of a particular store.
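For illustration only, the following Python sketch shows one hypothetical shape that such a store-specific SKU record and lookup might take. The field names, registry structure, and helper methods are assumptions for the sketch and do not describe any particular store's actual SKU system.

```python
from dataclasses import dataclass

@dataclass
class SkuRecord:
    sku: str        # store-specific SKU number
    name: str       # item name
    price: float    # live price in this particular store
    aisle: str      # location name / aisle name within this store
    quantity: int   # real-time inventory availability

def resolve_store(gps_fix, store_registry):
    """Match the user's GPS position to the specific store they are in."""
    return min(store_registry, key=lambda store: store.distance_to(gps_fix))

def bind_list_to_skus(shopping_list, sku_db):
    """Bind each list item to the live SKU record of this specific store."""
    return {item: sku_db[item] for item in shopping_list if item in sku_db}
```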

At step 106, the user is prompted to select, from a plurality of lists within the mobile application, the list that they will use to make a purchase. After the SKU system interacts with the user-selected list, the system notifies the user at step 108 if something on the list is not available. This availability determination occurs in real time and may occur even before the user enters the store at step 110. This data shows whether the items on the user's list are in stock and the aisle locations where those items are currently found in the store where the user is located. In addition, a payment system 114 (such as a mobile payment service within the mobile application) may also be provided and made available to the user.

The system then determines the most efficient route based on the user's list. At step 112, a route is calculated based on the locations of the items on the user's shopping list. As discussed in further detail with respect to FIGS. 2-4, the aisle priority system uses data from the SKU system of the store. SKU-specific location information within the specific store is communicated to the application.

The system and application then generate an aisle priority matrix, such as that shown in FIG. 2, based on the exact location of each particular product. Referring to FIGS. 2 and 3, items on the user's shopping list are directly associated with SKU numbers. Each SKU number in the entire store is associated with a location name or aisle name. The matrix shown in FIG. 2 sorts the items on the user's shopping list by SKU and hence by aisle name. The items are then grouped by aisle name. In this embodiment, all items located in, for example, aisle 1 are placed in sequence within the matrix as shown in FIG. 2. This sorting of items into groups by aisle name provides the data used to calculate a particular route customized for the user.

In the example shown in fig. 2, the user has a list of four items including milk, bread, grain, and soda. After the system determines which store the user is actually at based on GPS information, each item on the user's list is bound to a particular SKU number, such as shown in matrix 205. The SKU number for each item on the user list is associated with a particular location (also known as an aisle name). Each item on the user list is also associated with a quantity (also referred to as inventory availability).

The data from matrix 205 is then passed to the aisle priority system as shown by matrix 207. The location information of the matrix as shown at 205 is compared to other items on the list in the aisle priority matrix 207. The products on the user list are then organized by aisle name and grouped together by aisle name.

By way of example, as shown in the present matrix 207, bread is classified in the bakery, grain is classified in aisle 3, milk is classified in aisle 8, and soda is classified in aisle 5. The groupings by product location as shown in matrix 207 are then compared to a store layout 200 such as that shown in FIG. 3. Each store location will have a separate store layout map that will specify the user's route based on aisle priorities such as that shown in matrix 207.

Referring to FIG. 3, the layout 200 specifies that the user should start at entry 202 and follow route 206 as specified by the aisle priority system (if most efficient). Continuing the example shown in FIG. 2, the user would first stop at the bakery 208 and then continue through the remaining aisles 210; in this particular embodiment, the user starts at the bakery 208, proceeds to aisle 3, then proceeds to aisle 8, and finally passes through aisle 5. The user is then directed to checkout 214 and through exit 204. More generally, the aisle priority may specify that the user first obtains items at the bakery 208 and then continues through aisles 1-4. The user then proceeds to the deli 212 and passes through aisles 8, 7, 6, and 5, wherein items in aisle 9 may be picked up between any of aisles 7, 6, or 5. It should be noted that the examples shown in FIGS. 2 and 3 are merely exemplary and are not intended to limit the scope of this description.
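To make the aisle priority computation concrete, the following minimal Python sketch reproduces the four-item example of FIGS. 2 and 3. The SKU numbers and the aisle walk order are illustrative assumptions standing in for matrix 205 and the store layout map 200.

```python
# Matrix 205 (illustrative): item -> (SKU number, aisle name, quantity)
sku_matrix = {
    "bread": ("SKU-0001", "bakery", 12),
    "grain": ("SKU-0002", "aisle 3", 30),
    "milk":  ("SKU-0003", "aisle 8", 18),
    "soda":  ("SKU-0004", "aisle 5", 24),
}

# Walk order taken from the store layout map (hypothetical for this store).
aisle_order = ["bakery", "aisle 3", "aisle 8", "aisle 5"]

def aisle_priority(items):
    """Group list items by aisle name, then sort groups by the walk order."""
    groups = {}
    for item in items:
        _, aisle, _ = sku_matrix[item]
        groups.setdefault(aisle, []).append(item)
    return sorted(groups.items(), key=lambda kv: aisle_order.index(kv[0]))

route = aisle_priority(["milk", "bread", "grain", "soda"])
# -> [('bakery', ['bread']), ('aisle 3', ['grain']),
#     ('aisle 8', ['milk']), ('aisle 5', ['soda'])]
```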

FIG. 4 further illustrates SKU and list communications between the store database and processor and the user's mobile device. The flow chart 300 recites the steps of first obtaining inventory items having SKU numbers at step 302. The numbers are used at step 304 to retrieve store product information, including in-store location, from the SKU database for the particular store. The system then enters the information for each listed item into the aisle priority routing system at step 306. Finally, the system uses the aisle priority system described above to construct the most efficient route to pick up each item and depart in the shortest possible time and travel distance, as shown at step 308.

FIGS. 5-8 illustrate exemplary application visuals to be displayed on a user's mobile device. The progression of FIGS. 5 through 8 shows a high-level flow of the screens displayed to the user. When the application is opened, an opening screen first greets the user. A login screen is then displayed so that the user can enter login information and log into the system to use any of the aforementioned functions of the overall system. The center screen 406 allows the user to connect to grouping, list, or route information for a particular store. Screen 408 displays the single or multiple checklists created by the user, screen 410 displays the items contained on a checklist, and screen 412 displays a routing function and overview to most efficiently collect the items on the user's checklist.

The center screen 406, such as that shown in FIG. 5, is where the user can access and view all of the functionality contained within the system and application. On the center screen 406, the user may view notifications regarding sales, adjustments to grocery lists, out-of-stock items, and the like. The user may also search for items with search function button 428, change their settings, access their lists, access their groupings, and enable grouping functionality at button 424. Button 420 allows the user to change the grouping settings, such as who can obtain and access a list. Button 422 allows the user to view a list, such as that shown in list screen 408. Notifications are displayed in a notification window as shown at reference numeral 426. Settings information is available at reference numeral 429.

FIG. 6 shows the list screen 408. The checklist screen 408 enables the user to create a new checklist at reference numeral 440, view the weekly checklist 442, or view additional specific checklists such as indicated at reference numerals 444, 446, 448, etc. A search function 428 is also displayed on the list screen 408.

FIG. 7 shows an enumerated listing of the items on a user's grocery list 450. Item names and quantities are listed on the user list 450. A search function 428 and settings information 429 are also displayed on the list display screen 410. The display screen 410 is displayed once the user selects a list on the list screen 408. The user can scroll through the list and make changes as desired. The listing shows the quantity or weight (if applicable) and detailed name-brand-specific information. The listing may also include a health rating that scores and rates each product based on its nutritional information. The user may then begin route planning by pressing the start route planning button 452. By selecting the start route planning button 452, the system opens a new display 412, which shows the most efficient route through the store based on the user's shopping list.

FIG. 8 shows the route planning function display screen 412. The routing screen 412 displays local store information 460 and a list name 462. The location information for each item or group of items is shown at step 470. The route planning function described herein distinguishes the present function and system from the prior art through its aisle priority system. The route planning system of the present specification is an aisle-by-aisle guide for walking through a store to ensure shoppers save time while shopping and avoid wandering the store in search of particular items. The route planning function goes well beyond a simple aisle-to-aisle walk-through: it uses GPS and/or geographic location to connect directly with the store and communicate the user's specific location to the system. This information identifies the particular store in which the user is located. Once the GPS system determines the particular store in which the user is located, the GPS system is no longer needed, because the location information for each particular product is sent to the system based only on the SKU numbers within that particular store.

A payment button 472 may also be provided in any of the above screens. In some embodiments, the user may pay for each item as it is placed in the cart, or may pay at checkout using a mobile device payment system. A system may also be provided at checkout that uses an RFID system and/or cameras to verify purchases made in the store, as will be discussed below.

The above-described mobile application and system allow a user to create a list, which then generates the most efficient route for the shopper based on the store's data and using the aisle priority system (all as discussed above and in the figures). The checkout kiosk is used in conjunction with the mobile application so that the checkout process is completely seamless. The application permits the user's use of Apple Pay® (or the like) within the application. In this embodiment, the mobile application accepts payment when the user is at the checkout kiosk (also referred to as a bagging station).

Each kiosk includes a plurality of stereoscopic cameras that are programmed by linking a computer to the cameras. When the computers in each bagging station are linked, they detect the shopping items based on the hue-saturation-value (HSV) of each item and detailed RGB and size information for the item. If the camera's robotic vision does not detect an item that is on the shopper's list, an alert or notification will be created for both the user and the store. Similarly, if there are additional items in the user's cart or bag that are not on the user's shopping list, a warning/notification will be issued to both the user and the store. This prevents theft and accidental overcharging. The camera communicates with the application, and the application communicates with the camera. It is purely a system of communication and inventory.

As discussed previously, each store capable of operating with the mobile application includes at least one of the kiosks described herein. An exemplary kiosk is shown in FIG. 9. The kiosk 500 includes a body or housing 502. The housing 502 includes a plurality of stereo cameras (or depth-sensing cameras) 504, all of which are connected to a computer or other processor. The kiosk 500 also includes a light ring 508 or other similar flashing device that enables the system to more easily define and view the contents A, B within the basket portion 512 of a cart 510. The kiosk 500 also includes a wireless scale 506 for weighing produce or other bulk food products.

Each kiosk 500 will contain a camera and a host computer in communication with the application via Wi-Fi, cellular, and/or another wireless connection. The camera will be programmed to know the details of each item in the store. Such detailed information includes: SKU, item name, HSV, RGB, and 3D dimensions. The camera 504 will use a series of advanced algorithms to properly detect and identify the items in the shopper's cart and bag, including while items are transferred from the cart to the bag. The camera 504 and camera system will communicate with the application to determine the expiration dates of the shopper's items.

By way of example, suppose there are two items on the user's list: bananas and apples. If the camera system detects a cereal box in the cart 510, the system will automatically notify the shopper and allow the shopper to remove the product from the cart or add the item to the shopping list so the total shopping price can be adjusted. If the user ignores the error, the kiosk 500 will illuminate its lights to alert store employees to the presence of an item detection error.

The cameras use a series of advanced algorithms that communicate with the user's grocery list within the mobile application. The first operation used by the camera is robotic vision through Python and OpenCV. This allows the camera to track objects based on their HSV, RGB, and size. The camera system uses a number of target tracking algorithms. One algorithm, known as KCF (Kernelized Correlation Filter), runs in real time and is used for high-quality product detection and product isolation. Another algorithm is TLD (Tracking, Learning and Detection), used for searching for specific items that are relevant only to the list. These detection algorithms and methods are used in conjunction with the camera and may be used separately or together as the system requires and permits.
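As a hedged sketch only, the following Python/OpenCV fragment illustrates the kind of HSV-based detection and KCF tracking described above. The HSV bounds are placeholders, and the exact tracker constructor varies across OpenCV builds (some expose it under cv2.legacy).

```python
import cv2
import numpy as np

def detect_by_hsv(frame_bgr, hsv_lo, hsv_hi, min_area=500):
    """Find candidate items whose hue/saturation/value match a known product."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Return one (x, y, w, h) bounding box per sufficiently large region.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]

def start_kcf_tracker(frame_bgr, box):
    """Lock onto one detected item and track it frame-to-frame in real time."""
    tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create() on some builds
    tracker.init(frame_bgr, box)
    return tracker
```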

The KCF algorithm is programmed to look up all items in the store and catalog the items it detects when a new user approaches. Its cataloged items should match exactly with the active user shopping list. If an item is found that is not on the list, the KCF system will send an alert/notification.

The TLD algorithm is programmed to look for items only on the user's grocery list. If the list item is missing from the shopping cart, the TLD system will send an alert/notification. The system provides real-time grocery product detection and deep learning to prevent shoplifting and product hiding in bags or carts.
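A minimal sketch of the two-way alert logic these passes enable is shown below. The function is an illustrative assumption about how detected items might be compared with the active list, not the actual store implementation.

```python
def compare_to_list(detected_items, shopping_list):
    """Raise alerts in both mismatch directions, per the KCF and TLD passes."""
    detected, expected = set(detected_items), set(shopping_list)
    alerts = []
    for item in detected - expected:   # KCF pass: item found but not on list
        alerts.append(f"ALERT: '{item}' is in the cart but not on the list")
    for item in expected - detected:   # TLD pass: list item missing from cart
        alerts.append(f"ALERT: '{item}' is on the list but was not detected")
    return alerts
```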

In some embodiments, the system as described herein utilizes machine learning to eliminate item theft and accidental overcharging for customers. Fig. 10 through 13 illustrate and describe the present system for improving system performance by utilizing machine learning. The system discussed and illustrated herein creates a seamless checkout process.

In some embodiments, the verification and detection process for confirming the identity of an item involves the use of machine learning models (including, but not limited to, image classifiers, object detectors, and pixel buffer comparisons). A machine learning model, such as an image classifier or an object detector, takes an input image and runs an algorithm to determine the likelihood that the image, or an object within the image, matches a training item within the model. The model then outputs one or more identifiers, each with a confidence score. For the output to be considered reliable, the confidence score needs to reach a desired threshold. Notably, while the model is running, it will continue to output identifiers and confidence scores at a rate of several times per second, even if there are no training items in the image frames. However, a well-trained model will never assign a high confidence score to images that do not contain training items. Thus, setting a high confidence threshold ensures high accuracy.

A second aspect of the above-described verification method relates to pixel buffer comparison. A single image, frame, or machine learning model output from a given image may be held in what is defined as a buffer and maintained for future use. As the model runs, previous model outputs that have reached the confidence threshold are placed into a buffer and/or moved through a series of buffers. Maintaining the outputs within these buffers allows the current model output to be compared with previous model outputs. This output comparison is beneficial because it provides parameters for certain actions and further enhances the accuracy of the output.

By way of example, the system begins with no training items within the camera frame, where the machine learning model attempts to determine whether any training items are present in the image. With no training items in the frame, the model outputs the identifier of the most likely item, or the closest match, with an associated confidence score. Provided the model is well trained, such outputs receive only low confidence scores and do not reach the confidence threshold considered reliable. A training item then enters the camera field of view. The model begins recognizing the training item based on its training and algorithm and outputs a higher confidence score for the appropriate identifier. The confidence score meets or exceeds the confidence threshold required for the program to take further action. The output is then placed into a buffer. The model then again outputs a high confidence score for the same item. Recall that the model creates multiple outputs per second, which means that a single item remaining in the frame will likely be recognized multiple times. After a subsequent model output is created that meets the required confidence threshold, the new output is compared to the previously output results and certain parameters are consulted to perform an action. If the previous output identifier is the same as the new output identifier, the system may consider both outputs to be the result of the same item still within the camera frame. If the newly output identifier differs from the previously output identifier, the system is thereby informed that a new item has entered the camera frame.
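The buffered comparison described above can be sketched as follows. The threshold value and class structure are illustrative assumptions; the model may be any classifier or object detector that emits (identifier, confidence) pairs several times per second.

```python
CONFIDENCE_THRESHOLD = 0.95  # illustrative "reliable output" threshold

class OutputBuffer:
    def __init__(self):
        self.last_identifier = None  # most recent reliable output held in buffer

    def process(self, identifier, confidence):
        """Gate each model output by confidence, then compare with the buffer."""
        if confidence < CONFIDENCE_THRESHOLD:
            return "unreliable"                # likely no training item in frame
        if identifier == self.last_identifier:
            return "same item still in frame"  # repeated output for one item
        self.last_identifier = identifier      # buffer the new reliable output
        return "new item entered frame"
```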

The present checkout system and item authentication process shown and described herein rely on machine learning. Machine learning uses algorithms to collect item data, giving the system insight and the ability to predict or recall items or objects based on the collected data. The longer the system runs, the more data it collects, and therefore the higher its prediction/accuracy rate becomes. In other words, the system continues to collect images and other data to continually improve the accuracy of the system and the detection of products.

The present system uses machine learning, whereby catalogs are created to catalog the items in a store. The system is a fully operational grocery item detection system that performs with 100% accuracy and 100% recall. Using high-quality 360° item photographs, the camera system has full knowledge of the items that the user has taken from the shelves and the items in the shopper's cart.

The item authentication system is directly connected to applications, systems, and software such as those described herein. The kiosk (also referred to as a bagging station) as described and illustrated herein has built-in cameras. The system may also utilize cameras already installed in the store facility to provide further accuracy to the overall system and additional angles when taking pictures.

The kiosk will use machine learning, so there is no need to manually enter item color identification into the system data. Conventional methods typically utilize only color for item authentication. The present system utilizes actual pictures, thereby improving the accuracy of item authentication. The machine learning system studies and learns every detail of each item. A high level of item authentication accuracy is thereby achieved.

FIG. 10 depicts an exemplary embodiment of a store inventory in accordance with one or more embodiments shown and described herein. In this embodiment, as shown in FIG. 10, fewer images (typically 5 to 6) of each item are cataloged. By way of example, and reviewing FIG. 11, there are 5 cataloged images of Fruit Loops cereal. FIG. 11 depicts an exemplary performance graph based on the image catalog shown in FIG. 10 in accordance with one or more embodiments shown and described herein. Based on the performance graph, in the Fruit Loops example, an accuracy of 88.9% is achieved with a recall of 100%. However, in the Cinnamon Toast example shown, 100% accuracy and recall is achieved with 10 images cataloged. In full operation, and after a certain time, hundreds of photos are cataloged, eliminating errors. The more the system is used, the better it performs.

FIG. 12 depicts an exemplary improved store inventory, wherein a greater number of images is used, in accordance with one or more embodiments shown and described herein. As shown, in these embodiments, 15 images are cataloged. 100% accuracy is achieved by having high-quality images and data; without high-quality data, accuracy may be imperfect. The number of images alone is not a guarantee of higher accuracy; photos that reveal details under different lighting conditions and camera angles are key to success. FIG. 13 depicts an exemplary performance graph based on the improved image catalog shown in FIG. 12 in accordance with one or more embodiments shown and described herein. As shown in this figure, in some embodiments, 100% accuracy is obtained when 15 (or more) images are used.

FIGS. 14 and 15 depict artificial intelligence features included within the application to enhance functionality for the user. An artificial intelligence feature within the application, as shown in FIG. 14, communicates with the camera on the user's mobile device. When the system is granted access to the mobile device camera, the system processes the live camera image and precisely identifies the item at which the user is pointing. The system then allows the user to interact with the identified item within the application. The types of interactions permitted include taps, voice recognition, voice activation, clicks, and/or gestures.

The systems and corresponding functions within the systems described above and shown in FIGS. 14-16 refer to specific items that have been trained and programmed into a store-specific inventory and/or store-specific model. The system 1000 as described herein and as shown in FIGS. 14-16 facilitates higher efficiency and faster reaction times. Faster reaction times and efficiencies may be provided due to the store-specific catalogs.

By way of example, if a user wants to add an item to their grocery list, they begin the process by accessing the application 1002 on their mobile device 1004. The user selects the program and turns on the system in the live view screen 1006. When the user points the camera at an actual, tangible product 1008 (e.g., a specific branded product), the system 1010 identifies the details of the item. This information is extracted from the store-specific catalog 1012 (based on the information contained in the database) and the data 1014 for the product. The system then processes the information and makes it accessible to the user within the application. Thus, no bar code is required. The system processes and collects the information and displays the product information 1016 to the user without any bar code, relying entirely on image data and product database photo information.

Similarly, bananas are depicted in the example shown in FIG. 15. The user selects the program and turns on the system in the live view screen 1002. When the user points the camera at a live product (e.g., bananas), the system identifies the item's details 1016 and automatically identifies the item, specifically the bananas. This information is extracted from the store-specific catalog of products 1008 (based on the information contained in the database). The system then processes this information and makes it accessible to users active within the application. Again, no barcode is required, and user interaction is enabled.

Such an item identification intelligent system ("IRIS") 1020 communicates with both the system 1010 and the cloud 1022. The cloud 1022 is configured to store and collect additional data using cameras or other vision 1024 from both the store 1026 and the user's mobile device and application 1002. This collection of information, data, and images is gathered by IRIS 1020 and the system 1010 for implementation into the catalog 1012, which includes the products and data 1014. This collection of data through machine learning improves the accuracy of the overall IRIS system exponentially by amassing large amounts of information, data, and images for comparison with live products, such as shown in FIG. 15.

FIG. 16 depicts multiple sources of data collection using the IRIS system 1020. The IRIS system 1020 uses its data 1014, already in the store 1026 database, to collect data from the store 1026. This information includes SKU information, images, pricing information, color images, product data, and any other applicable information needed for the operation of IRIS 1020.

The system 1020 is further in communication with the kiosk 500, the kiosk 500 including the vision of a camera 1024. The camera 1024 of the kiosk 500 collects information when the user checks out. This information is transmitted back to IRIS 1020 and stored. All information is stored on the hard disk drive 1052, such as shown in FIG. 17.

The system 1020 further communicates with the user's device and the mobile application 1002. The mobile device includes vision such as a camera 1024 or the like. As the user collects the information and data, the information and data is transmitted back to the system 1020 and then stored.

The system 1020 also communicates with the store 1026, which collects data using a camera system 1024. The store camera system, using cameras 1024, collects images while the system is in use, such as when determining whether a user has removed an item from a shelf or other display in the store. These images, collected while detecting item removal by the user, are gathered by the cameras 1024, communicated to the system 1020, and then stored.

FIG. 17 discloses and depicts the overall and general operation of the IRIS system 1020. The system 1020 communicates with a data store 1052. Data in the data store 1052 is collected in the manner discussed above. Data 1014 may be collected from the user 1050. Further, to use the system, the user 1050 performs a face scan 1051 to confirm the identity of the user 1050. Such scanning is required to operate the mobile application and the automated checkout and inventory creation described above. In some embodiments, the face scan 1051 is performed using the mobile application 1002.

FIG. 18 depicts a schematic diagram of a user 1050 shopping in a store using the present system and IRIS 1020. In this figure, the user 1050 logs into their account using a face scan, and the system connects to the user's shopping list. In this embodiment, the user has ten items on the list, including but not limited to cake mix and birthday candles. The user is guided through the store by the aisle priority system to easily find the cake mix and birthday candles. Further, the user in this embodiment is utilizing IRIS, i.e., the present system 1020, so that items are readily identified when the user removes them from the shelf or display.

In some embodiments, and as shown in FIG. 26, an additional masking process may be undertaken to improve visual detection. At a kiosk or other checkout area, a system is provided that uses depth-sensing hardware, cameras, and software. The system uses a biometric focus mask that creates a clear focal subject for the machine learning framework to retrieve data from images (as shown in the side-by-side comparison in FIG. 26). This keeps high-traffic areas highly accurate because a mask is generated around, and focuses only on, the intended active shopper. The system searches for the assigned focus within a given frame/image. In some embodiments, the focus is the face of the user. The system then retrieves depth data from the corresponding region of the image in which the user's face is located. Using this depth data, the system determines which regions constitute the background of the image, i.e., the parts not intended to be considered for classification. Once the background has been calculated, the system places a binary mask over the background area of the image (depicted on the right side of FIG. 26). After the masking is applied, a machine learning model processes the image. The benefit of applying the binary mask before processing is that it effectively removes any background "noise" that might interfere with obtaining accurate output from the machine learning model.
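The following is a minimal sketch of the depth-based binary masking step, assuming a depth map already aligned with the RGB frame and a face box supplied by an upstream detector; the depth tolerance is an illustrative assumption.

```python
# Illustrative sketch of the biometric focus mask described above.
import numpy as np

def mask_background(rgb: np.ndarray, depth: np.ndarray,
                    face_box: tuple[int, int, int, int],
                    tolerance_m: float = 0.75) -> np.ndarray:
    """Zero out pixels whose depth differs from the shopper's by more than
    `tolerance_m`, leaving a clean subject for the ML classifier."""
    x, y, w, h = face_box                       # face found by a detector
    subject_depth = np.median(depth[y:y+h, x:x+w])
    background = np.abs(depth - subject_depth) > tolerance_m
    masked = rgb.copy()
    masked[background] = 0                      # binary mask over background
    return masked                               # pass this to the ML model
```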

In another aspect of the present description, a landmark navigation system is provided to assist the user in easily locating products based on known landmarks within the store (such as a sushi stand or a cookie advertisement). Diagrams relating to the landmark navigation system are shown in FIGS. 27 and 28. The landmark navigation system is designed to navigate users by using points of interest in the retail space, landmarks, marketing stalls, departments, or other distinguishable or prominent features to assist users in navigating through the store. The system is intended to supplement, not replace, the primary SKU-based navigation system. It is particularly useful in large stores, where desired items may be located at great distances from one another. The landmarks are implemented as navigation points along the user's route to help the user advance through the route without becoming disoriented or lost.

By way of example and referring to FIG. 27, a store layout having various regions is shown. Store 2000 includes a deli 2002, a restroom 2004, a sushi stand 2006, a soda advertisement 2008, a pharmacy 2011, an entrance 2022, and an exit 2024. A plurality of checkout kiosks 2020 are also provided. A cookie advertisement 2012 and a clothing section 2010 are provided adjacent to aisles 2014 and 2016. With the landmark system activated, instead of merely receiving an instruction to proceed to aisle 12, the navigation may read "proceed past the pharmacy and the soda advertisement, then reach aisle 12." In this case, the pharmacy and the soda advertisement are included as landmarks, or navigation points, to assist the user in reaching their destination, i.e., aisle 12. While the user may have been able to navigate to the destination without the landmarks, many will find them a desirable aid.

The landmark navigation system disclosed above operates as shown in FIG. 28. As a first step, the user activates a landmark navigation request (the landmark system) on a device having a user interface. The system then processes the active route, or initiates route planning based on the user's list or the item the user is searching for. A route matrix evaluates the distance between existing navigation points. If a distance meets or exceeds the threshold at which a landmark is required, the system inserts a landmark navigation point between the qualifying navigation steps. The route then proceeds as usual.
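A minimal sketch of the landmark insertion step is given below. The threshold value, coordinate representation, and nearest-to-midpoint selection rule are assumptions for illustration; the patented route matrix may weigh landmarks differently.

```python
# Hedged sketch of inserting landmark navigation points into a route.
from dataclasses import dataclass

@dataclass
class NavPoint:
    name: str
    x: float
    y: float
    is_landmark: bool = False

LANDMARK_THRESHOLD = 30.0  # assumed leg length (e.g., meters) requiring a landmark

def distance(a: NavPoint, b: NavPoint) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def insert_landmarks(route: list[NavPoint],
                     landmarks: list[NavPoint]) -> list[NavPoint]:
    """Between any two route steps farther apart than the threshold,
    insert the landmark nearest the leg's midpoint as a waypoint."""
    result = [route[0]]
    for prev, nxt in zip(route, route[1:]):
        if distance(prev, nxt) >= LANDMARK_THRESHOLD and landmarks:
            mid = NavPoint("mid", (prev.x + nxt.x) / 2, (prev.y + nxt.y) / 2)
            result.append(min(landmarks, key=lambda lm: distance(lm, mid)))
        result.append(nxt)
    return result
```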

FIGS. 20-21 illustrate a gravity bag 1100 of the present description. The gravity bag is a solution for variable-cost items (e.g., products measured by weight to determine cost). The gravity bag 1100 shown here is connected to the present system and allows the user to weigh a product (or other variable-weight item), leave the item in the bag, and continue weighing new items. After the user has finished weighing the current item, the bag is processed and zeroed so that the user can continue to use the bag with the previous items still inside.

The gravity bag 1100 as shown in FIG. 20 includes a bag portion 1112, which is a standard bag made of polyester, cotton, nylon, or any other suitable material, configured to hold variable-weight items. The bag also includes a load sensor 1114, which tracks the weight of the bag as items are added by the user. The gravity bag 1100 also includes a chip 1116, which measures the weight and sends a wireless signal to a device. In this embodiment, the signal (or another similar wireless signal) is sent to the user's mobile device via the mobile application. Information regarding the weight of the item most recently added to the gravity bag 1100 is sent to the mobile application. The gravity bag 1100 also includes an external pressure sensor 1118. The pressure sensor 1118 may also be a magnet for checkout-station detection. An accelerometer 1120 is used to prevent misreadings if the bag is turned, twisted, and/or rotated. In addition, a battery 1122 (or multiple batteries) powers the various components discussed herein to operate the gravity bag 1100.

By way of example, suppose there are three onions on the user's list. The user grabs the onions and the bag weighs them. Onions are priced at $1.03 per pound. The gravity bag reads 15 ounces of onions and then (after the user indicates to the application that weighing is complete) saves the data in the mobile application. The bag is then zeroed at 15 ounces. This process is repeated so that more weight-based items can be added, weighed, and priced correctly. The structure of the bag is shown in FIGS. 20-21.
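A minimal sketch of the weigh-price-tare cycle follows, matching the onion example above; the class and method names are illustrative, and reading the load sensor over the wireless link is left abstract.

```python
# Minimal sketch of the gravity bag's weigh, price, and zero (tare) cycle.
OUNCES_PER_POUND = 16

class GravityBag:
    def __init__(self):
        self.tare_oz = 0.0          # weight already accounted for

    def read_sensor_oz(self) -> float:
        raise NotImplementedError   # would read load sensor 1114 via chip 1116

    def weigh_new_item(self, price_per_lb: float) -> float:
        """Price only the weight added since the last tare, then re-zero
        so previous items can stay in the bag."""
        gross_oz = self.read_sensor_oz()
        net_oz = gross_oz - self.tare_oz
        self.tare_oz = gross_oz     # zero the bag with items left inside
        return round(net_oz / OUNCES_PER_POUND * price_per_lb, 2)

# Example from the text: 15 oz of onions at $1.03/lb
# 15 / 16 * 1.03 = $0.97 charged, and the bag is tared at 15 ounces.
```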

FIGS. 22-24 relate to the drone assistance system of the present description. The drone system 1200 is configured to work directly with the mobile applications described herein. In this embodiment, the drone 1202 is a mini-drone configured to charge wirelessly on a charging pad 1208 atop a shelf unit 1204 having a plurality of shelves 1206. A wire 1210 connects the wireless charging pad 1208 to a power source. In some embodiments, the charging pad 1208 is located at the top of the aisle shelf. This placement allows the drone 1202 to charge in the exact area where the shopper is located, so that if the shopper stays in the aisle for an extended period, the user's dedicated drone may land and receive a quick charge before resuming its pairing with the user.

Upon activation of a user request for drone guidance, the user device sends a navigation pulse signaling the user's location to the dispatched drone. The activated drone then pairs with the user through the user's connected device, facial recognition of the user, or a combination of both. The drone 1202 utilizes a live video feed from an onboard camera to detect, identify, and track the user.

In some embodiments, the drone is equipped with a light projection 1218 capable of tracing an image, color, or outline onto a surface spaced from the drone (such as shown at reference numeral 1216). In the embodiment shown in FIG. 24, the drone projects light onto a single product 1214 on a shelf 1206 within the aisle. When the user makes a request through the mobile application, the drone locates products that the user cannot find and highlights the specific item the user is looking for with the light projection.

Pairing the onboard camera of the drone assistance system with computer vision software allows the drone to perform object detection and recognition as well as facial recognition. These functions, in combination with AI programming, enable the drone to guide the user to the requested item and indicate its precise location (such as by the item highlighting described above). The AI programming also manages the drone charging process by recognizing low power levels, directing the drone back to the charging pad, and assigning a replacement drone to continue serving the user.
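A rough sketch of the charging handoff logic is shown below; the battery threshold and the drone/dispatcher interfaces are assumptions made for illustration only.

```python
# Hypothetical sketch of the AI-directed charging handoff described above.
LOW_BATTERY = 0.20  # assumed 20% threshold for returning to the pad

def manage_drone(drone, dispatcher) -> None:
    """If the paired drone runs low, hand the user off to a replacement
    drone so guidance is uninterrupted, then send the drone to charge."""
    if drone.battery_level() < LOW_BATTERY:
        replacement = dispatcher.nearest_available(drone.user_location())
        replacement.pair(drone.current_user())   # device or face pairing
        drone.return_to_pad()                    # wireless charging pad 1208
```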

It should also be noted that the present system and method are intended for use with a user's mobile device (such as a cellular telephone). In this specification, the system is intended for use both in the user's home and in the store; this does not preclude incorporation into a kiosk system or into a personal-device system in which the store owns the personal device.

Another element of the system is an object detection system, branded AURA™, which utilizes radio frequency to determine items that are not visible to a conventional camera. The present system includes the use of wireless radio frequency or Wi-Fi to detect items that are invisible to a conventional camera or other detection component. Using a radio transmitter, a series of signals is transmitted and projected onto a desired object or area (such as in a store or warehouse). These signals travel through solid surfaces, and when they interact with a surface or object they produce a return signal that is received by a radio receiver (essentially a miniature radar). These signals are sent through a shelving unit, for example, to determine whether additional items are present behind those items visible to the camera. The signals are classified and converted into usable data describing the object. By using the camera in an appropriate context within the system, the system detects whether there are more items behind the object visible to the camera. The signals rebounding from the objects are used to estimate the objects' dimensions and quantity, and they provide the system with the ability to give an accurate position and triangulation of the detected items.

The object detection system uses an artificial intelligence ("AI") program. The AI program communicates with the radio transmitter. The radio transmitter transmits signals in a space such as a warehouse. The signals are configured to pass through solid objects, such as a vertical surface on a shelf. The radio waves then return to a radio receiver. The radio receiver includes a processor configured to determine whether an object is located behind a solid surface (such as a vertical surface on a shelf) or behind an existing product on the shelf. The signal data is received by the detection system, and the AI program interprets the data to determine whether an object is present. Decision making and detection are performed by the detection system. The system then evaluates a confidence threshold; if the threshold is met, the detection system outputs the data in a specified format (such as graphically, by tracing, or by some other signal such as an audible signal), and then repeats the same process. If the required threshold is not met, the detection system transmits a new RF signal to augment the data set and determine whether an object is present. The detection system establishes parameters for the next transmitted RF signal, and the radio transmitter transmits it. The process then continues as with the previous signal transmission and reception.
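A sketch of this transmit-receive-classify loop follows, under stated assumptions: the `radio` and `model` objects are hypothetical interfaces, and the confidence threshold and pass limit are illustrative values, not figures from the specification.

```python
# Sketch of the AI-driven RF detection loop with a confidence threshold.
CONFIDENCE_THRESHOLD = 0.85  # assumed required confidence

def detect_hidden_items(radio, model, params, max_passes: int = 5):
    """Sweep repeatedly until the AI's confidence meets the threshold,
    establishing new RF parameters on each failed pass."""
    for _ in range(max_passes):
        returns = radio.transmit_and_receive(params)   # RF through shelving
        detection = model.interpret(returns)           # objects + confidence
        if detection.confidence >= CONFIDENCE_THRESHOLD:
            return detection        # output position, size, and count
        params = params.widened()   # new parameters to augment the data set
    return None                     # no confident detection this cycle
```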

The system also communicates with existing consumer technologies as described above and is capable of detecting customer location and triangulation values in real time.

By way of example, upon setup the signal transmitter begins transmitting signals throughout a room or designated area (such as a shopping area or warehouse). As a signal travels and comes into contact with an object, it continues to travel through the object while also generating a rebound signal, referred to as a return signal, which travels to the receiver. The velocity and number of return signals inform the system of a number of metrics, including the size, relative position, and number of objects. When this data is paired with camera imaging, the system is able to detect objects that are invisible to the camera (i.e., hidden behind products the conventional camera can see).

It should be noted that any of the above-described systems (artificial intelligence, camera, radio frequency, and the like) may be applied to any area where it is desired to locate, track, visualize, account for, image, detect, and/or inspect any item within a store, warehouse, production facility, or the like.

In another aspect of the present description, Wi-Fi is used to detect biomagnetism, body frequencies, body wavelengths, and the like. The system relies on a miniature radar. Using a radio transmitter, a series of signals is transmitted and projected onto a desired object or area (such as described above). These signals travel through solid surfaces, and when they interact with a surface or object they generate a return signal that is received by the radio receiver. This aspect of the application focuses on biomagnetic radio frequency and electroencephalography to set parameters for bodily functions and neural commands without wearable technology. The system is tethered to the user's body and uses AI commands and Wi-Fi signals so that the user's designated signal (AURA™) is isolated to the user's body area alone.

The system performs in a manner similar to the human eye and its interpretation of light in the ganglion cells in response to light frequency and amplitude. The Wi-Fi signal interprets the frequency of the waves to associate it with a particular biological state (mood, disease, disorder, or other physical state). Just as the frequency of light determines hue and the amplitude of the frequency determines brightness, the eye responds in a manner corresponding to the light waves, since the pupil acts as a window to the ganglion cells and many other elements.

The present system is designed to use radio frequencies to monitor and communicate with ganglion cells. The system serves as a window into these cells. The ganglia act as relay stations for neural signals, in the manner of connection points: a plexus begins and ends with ganglion cells. The ganglia serve as response points in the nervous system, which is why they are the first neurons in the retina to respond with action potentials, and why the present methods and systems use them in combination.

The present system further uses Wi-Fi as a form of electromagnetic frequency to communicate with ganglion cells and plexuses within the human body. Using Wi-Fi, the present system detects functionally autonomous brain wavelengths and the associated motor skills (such as gross body movement and fine body movement). The system serves as a bridge to the nervous system, relaying brain commands to ganglia that have been disconnected due to spinal cord injury or a nervous system disorder or condition.

Using AI, the present system takes a language understood by the body and programs that language within the Wi-Fi signal to allow a seamless communication flow between the brain and the nervous system relays. This allows programmed Wi-Fi (AURA™) to be used, without surgical implants or wearable devices, as a means of connecting the body and a computer to address spinal cord injuries and other somatic and autonomic nervous system problems.

FIG. 29 depicts a kiosk and a corresponding flow diagram of the kiosk according to one or more embodiments shown and described herein. Kiosk 1300 includes a body 1302 and a plurality of cameras 1306, 1308, 1310, all mounted on the body 1302. The kiosk 1300 also includes a display 1304 mounted to be viewable by the user. The processor 1312 is contained within the kiosk 1300, is accessible via a cloud or other server, or is a separate processor or system. In the embodiment shown in FIG. 29, the camera 1306 is a biometric camera configured to scan a user's face or collect other biometric data related to the user, including but not limited to retinal scan information, facial scan information, infrared information, and the like.

The processor 1312 communicates with the plurality of cameras 1306, 1308, 1310, the display 1304, and any other hardware necessary to complete the transaction. As shown in the flow chart of FIG. 29, the system starts when the user approaches the kiosk. One of the cameras (most commonly the biometric camera 1306) detects the user. If biometric features are used, the user may be detected by facial scanning, retinal scanning, or other means. The facial scan or other biometric data is used to connect the user to a user-created account within the system. The user-created account may contain payment information, shopping history information, favorites, dislikes, biographical information, or any other information typically associated with a standard store account. The user-created account may also be associated with a user photograph or other connected image. Once the user is detected, the user data may be used to generate user-specific marketing advertisements and/or experiences to display to the user. The processor determines what content (such as advertisements) should be displayed and displays the material to the user on the kiosk. The user can then interact with the advertisement on the display using gestures, voice commands, applications, and/or physical kiosk display interactions. The processor may display information, such as advertisements or products, customized in view of the user-created account. Customized advertisements or products may also be displayed based on what is currently present in the user's shopping cart in the current checkout transaction.

In some embodiments, the cameras 1308, 1310 continue to detect items within the user's shopping cart. The system uses machine learning and/or compares the camera data against a database available to the processor 1312 to determine the items located in the cart. The processor is configured to collect data to improve the accuracy of the system. The processor then generates a manifest as each item is detected. The manifest may be displayed on the display 1304 of the kiosk. The user then proceeds with checkout via a user-initiated action (such as a button) or when any of the cameras 1306, 1308, 1310 detects that the user is walking away. Either of these actions results in the user-created account being automatically debited. The transaction is then complete.
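A hedged sketch of this kiosk flow (detect, identify, itemize, auto-debit on walk-away) follows. All of the camera, account, and display interfaces are hypothetical stand-ins for the hardware in FIG. 29, not a definitive implementation.

```python
# Illustrative sketch of the FIG. 29 kiosk checkout flow.
def run_checkout(biometric_cam, item_cams, accounts, display) -> None:
    face = biometric_cam.await_face()            # camera 1306 detects the user
    account = accounts.match(face)               # face scan -> user account
    display.show(account.targeted_content())     # personalized ads/products

    manifest = []
    while not (display.checkout_pressed() or biometric_cam.user_walked_away()):
        for cam in item_cams:                    # cameras 1308, 1310
            for item in cam.detect_items():      # ML + database comparison
                if item not in manifest:
                    manifest.append(item)
                    display.show_manifest(manifest)

    account.debit(sum(item.price for item in manifest))  # automatic debit
```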

Referring now to FIGS. 30-31, a checkout kiosk 1400 is provided in which cart cameras 1408, 1410 are utilized to ensure that all items contained within the cart 1450 are billed. The kiosk 1400 generally includes a body 1402 having a plurality of cameras 1404, 1406 and the cart cameras 1408, 1410. Along with the display 1420, bagging station/rack/belt locations 1412, 1414 may also be provided. The cameras 1408, 1410 are pointed and angled downward toward the cart 1450 to detect whether any items remain in the cart 1450. If one or more items remain in the cart 1450, the processor audibly and/or visually notifies the user and/or the store that items are still in the cart 1450, to prevent theft. For example, if the camera 1408 detects an item in the cart 1450, the processor may illuminate a portion of the kiosk 1400 and/or emit a loud noise to alert of a potential theft.
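A minimal sketch of the leftover-item check might look like the following; the alert methods are assumptions consistent with the visual and audible notifications described above.

```python
# Illustrative check for items left in the cart (cameras 1408, 1410).
def check_cart(cart_cam, kiosk) -> None:
    leftovers = cart_cam.detect_items()          # downward-angled cart view
    if leftovers:
        kiosk.illuminate()                       # visual alert on the kiosk
        kiosk.play_alert_tone()                  # audible alert
        kiosk.notify_store(leftovers)            # flag a potential theft
```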

Referring now to FIG. 32, a checkout system 1500 having a first camera 1502 and a second camera 1506 is provided. An order entry interface 1504 is also provided. The first camera and/or the second camera may be a biometric camera 1508. The processor 1510 communicates with the first camera 1502, the second camera 1506, and the order entry interface 1504. The system within the processor 1510 is initiated when a camera detects the user using biometric data and/or determines that the user is placing an order. The user is identified using the mobile application, biometric features, and/or an account login. Payment may be completed after the user places an order with an employee, either by direct input or by verbal communication. Alternatively, payment is completed after the user picks up their item. The order is marked as incomplete until the user and the item are detected in the same pixel buffer (this may be done using either or both of the first and second cameras). Then, category verification, product verification, and/or item authentication is performed by any one of the cameras. Category verification occurs when the processor determines that the user has picked up an item from a general category of goods, such as a beverage of a given size. Product verification and/or item authentication is performed using the machine learning and data comparison system previously described. Once the pick-up is confirmed, the order is completed or payment is completed. The order and/or payment may also be completed if any of the cameras detects that the user is walking away.
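The order-completion rule lends itself to a short sketch: the order stays incomplete until the user and the ordered item appear together in one camera's pixel buffer. The detector interfaces below are hypothetical.

```python
# Minimal sketch of the same-pixel-buffer order-completion rule.
def order_complete(order, cameras) -> bool:
    for cam in cameras:                          # first and/or second camera
        frame = cam.pixel_buffer()
        if (frame.contains_person(order.user_id)
                and frame.contains_item(order.item_sku)):
            return True                          # verified pick-up
    return False                                 # order remains incomplete
```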

It should be noted that the terms "substantially" and "about" may be utilized herein to represent the inherent degree of uncertainty that may be attributed to any quantitative comparison, value, measurement, or other representation.

These terms are also utilized herein to represent the quantitative representation that may vary from a stated reference without resulting in a change in the basic function of the subject matter at issue.

Although specific embodiments have been illustrated and described herein, it should be understood that various other changes and modifications can be made without departing from the spirit and scope of the claimed subject matter.

Moreover, although various aspects of the claimed subject matter are described herein, these aspects need not be used in combination. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of the claimed subject matter.
