On-site operation auxiliary system

Document No.: 119017    Publication date: 2021-10-19

Reading note: This technology, 现场作业辅助系统 (On-site operation auxiliary system), was designed and created by 仲村柄真人, 植田良一, and 胜又大介 on 2020-03-27. Its main content is as follows. The invention provides a mechanism capable of appropriately assisting field work such as agriculture. The on-site work support system includes a worker terminal 1 worn or carried by a worker W1 and a server 2. The worker terminal 1 acquires first data including a first image obtained by photographing, with a camera 6, a work object 3 including a crop in the field of view of the worker W1. The worker terminal 1 or the computer system including the server 2 takes the first data as input, recognizes the state of the work object 3 based on second data reflecting learning of a second image of the work object 3, and acquires third data for assisting the work based on the recognition result. As output for assisting the worker W1 with the work based on the third data, the worker terminal 1 performs output conveying that the work object 3 is present in the first image corresponding to the field of view.

1. An on-site work support system for supporting an on-site work including agriculture, comprising an operator terminal worn or carried by an operator,

the operator terminal acquires first data including a first image of a work object including a crop in a field of view of the operator captured by a camera,

the operator terminal or a computer system connected to the operator terminal receives the first data as input, recognizes a state of the work object based on second data reflecting learning of a second image of the work object, and acquires third data for assisting the work based on a recognition result,

the operator terminal performs, as output for assisting the operator with the work based on the third data, output including a notification that the work object is present in the first image corresponding to the field of view.

2. The on-site work assistance system according to claim 1,

the operator terminal performs the output based on a part of the data selected from the third data by the operator terminal or the computer system in accordance with a work instruction from an instructor that designates a state of the work object.

3. The on-site work assistance system according to claim 1,

the operator terminal or the computer system selects all or a part of the third data based on a setting or an instruction, and when a part of the third data is selected, the operator terminal performs output conveying that a part of the work object is present in the first image corresponding to the field of view.

4. The on-site work assistance system according to claim 1,

the operator terminal performs, as the output, an output for assisting screening of the state of an object to be harvested as the work object at the time of the work of harvesting the crop, and an output for assisting screening of the state of an object to be shipped as the work object at the time of the work of shipping the crop.

5. The on-site work assistance system according to claim 1,

as the output, the operator terminal performs an output indicating a state of the work object including a type, a maturity, a grade, or an actual size of the crop.

6. The on-site work assistance system according to claim 1,

the output includes at least one of displaying a third image based on the third data on a display surface of the operator terminal, outputting a sound based on the third data, and outputting a vibration or light emission based on the third data.

7. The on-site work assistance system according to claim 1,

the output includes displaying a third image based on the third data on a display surface of the operator terminal,

in the display of the third image, the difference in the state of the work object is expressed by a difference in at least one of color, shape, size, and position.

8. The on-site work assistance system according to claim 1,

the operator terminal detects a distance between the operator or the operator terminal and the work object,

the output is an output whose kind differs depending on the distance.

9. The on-site work assistance system according to claim 1,

the operator terminal detects a direction in which the work object is located as viewed from the operator or the operator terminal,

the output is an output whose kind differs depending on the direction.

10. The on-site work assistance system according to claim 1,

the second image includes a reference image related to at least one of a category, a color, a shape, a real size, a maturity, and a grade of the crop,

the second data is updated based on processing including image analysis or machine learning using the second image.

11. The on-site work assistance system according to claim 1,

the on-site work support system includes a prediction support function configured by at least one of the operator terminal and the computer system,

the prediction support function predicts an amount related to the harvest or shipment of the crop at the site at a future date and time based on the first data or the second data, and outputs a prediction result.

12. The on-site work assistance system according to claim 1,

the on-site work support system has a pest and disease identification support function configured by at least one of the operator terminal and the computer system,

the pest and disease identification support function identifies a state of pests and diseases of the crop based on the first data or the second data, and outputs information for supporting a response at the site, including spraying of a pesticide, corresponding to the state of the pests and diseases.

13. The on-site work assistance system according to claim 1,

the operator terminal detects a direction of a line of sight of the operator, selects a part of the third data in accordance with the direction of the line of sight, and performs output conveying that a part of the work object is present in the first image corresponding to the field of view.

14. The on-site work assistance system according to claim 1,

the on-site work support system includes the computer system connected to the operator terminal by communication,

the operator terminal transmits the first data to the computer system,

the computer system receives the first data as input, performs processing including image analysis or machine learning based on the second data, and transmits the third data, including a recognition result regarding the state of the work object, to the operator terminal.

15. The on-site work assistance system according to claim 1,

the operator terminal or the computer system grasps, as information based on the first data, the state of the crop at each position in the site and the actual results of harvesting or shipment of the crop in the site, and outputs the information.

16. The on-site work assistance system according to claim 1,

the operator terminal displays, on a display screen of the operator terminal, a map indicating a state of the crop at each position in the site based on the first data and the third data.

17. The on-site work assistance system according to claim 1,

the on-site work support system includes a performance detection function configured by at least one of the operator terminal and the computer system,

the performance detection function detects, based on the first data, a target object to be harvested or shipped and an operator object corresponding to a part of a body of the operator, calculates a distance between the target object and the operator object in the first image, determines an operation of harvesting or shipping the target object by the operator based on a state of the distance, counts the number of harvests or shipments of the target object based on the operation, and stores and outputs the number as a performance.

18. The on-site work assistance system according to claim 17,

the performance detection function determines that the harvesting or shipment operation has been performed when, in the time series of the first images, a state in which the distance is within a distance threshold continues for a first time period or longer and is then followed by a state in which the target object is not detected in the first image that continues for a second time period or longer.

19. The on-site work assistance system according to claim 1,

the on-site work support system includes a performance detection function configured by at least one of the operator terminal and the computer system,

the performance detection function determines, based on input of the first image and on image analysis or machine learning, a specific operation related to harvesting or shipment of an object to be harvested or shipped performed by an operator object corresponding to a part of the body of the operator, counts the number of harvests or shipments of the object based on the specific operation, and stores and outputs the counted number as a performance.

20. The on-site work assistance system according to claim 1,

the on-site work support system includes a performance detection function configured by at least one of the operator terminal and the computer system,

the performance detection function determines a change in the relative positional relationship between an object to be harvested or shipped and another recognized object in the time series of the first images, counts the number of harvests or shipments of the object based on the change, and stores and outputs the counted number as a performance.

Technical Field

The present invention relates to a technology of an information processing system or the like, and particularly to a technology for assisting a field work including agriculture.

Background

In agricultural settings, for example in fields (such as vinyl greenhouses) where crops (sometimes referred to as objects, work objects, and the like) are cultivated, a worker performs work such as harvesting crops such as tomatoes and shipping the harvested crops. Social problems in agriculture and other fields include a shortage of workers, insufficient skill and experience of workers, heavy workloads, and poor work efficiency. Therefore, mechanisms for assisting field work such as agriculture using IT technology including AI (artificial intelligence) are being studied.

In recent years, technologies of Head Mounted Displays (HMDs) including smart glasses have been developed in addition to smart phones, smart watches, and the like. Therefore, a mechanism for assisting the field work using a device (smart device) such as an HMD is also being studied.

As an example of the above-mentioned related art, Japanese Patent No. 6267841 (Patent Document 1) is cited. Patent Document 1 describes a wearable terminal display system or the like that displays the harvest timing of crops on a display panel of a wearable terminal.

Patent Document 1: Japanese Patent No. 6267841

Disclosure of Invention

In agriculture and other fields, when the person who performs harvesting or shipment work is not an experienced worker but an inexperienced or insufficiently experienced one, it is sometimes difficult to determine which crops should be harvested or shipped, and the work can be difficult. Such a worker bears a heavy workload and works with poor efficiency. There are also problems such as labor shortages, a lack of experienced workers, and the cost of training inexperienced workers. The present invention provides a mechanism capable of appropriately assisting field work such as agriculture by using IT technology including AI and smart devices.

A representative embodiment of the present invention has the following configuration. An on-site work support system according to an embodiment is a system for supporting on-site work including agriculture, and includes an operator terminal worn or carried by an operator. The operator terminal acquires first data including a first image, captured by a camera, of a work object including a crop in the field of view of the operator; the operator terminal or a computer system connected to the operator terminal receives the first data as input, recognizes a state of the work object based on second data reflecting learning of a second image of the work object, and acquires third data for assisting the work based on the recognition result; and the operator terminal performs, as output for assisting the operator with the work based on the third data, output including a notification that the work object is present in the first image corresponding to the field of view.

According to the representative embodiment of the present invention, it is possible to appropriately assist field work such as agriculture using IT technology including AI and smart devices, for example to reduce the cost and improve the efficiency of agricultural operations, and to make harvesting or shipment work easy even for inexperienced persons.

Drawings

Fig. 1 is a diagram showing a configuration of an on-site work support system according to embodiment 1 of the present invention.

Fig. 2 is a diagram showing a functional block configuration of the operator terminal in embodiment 1.

Fig. 3 is a diagram showing an example of a system configuration including cooperation with a host system in embodiment 1.

Fig. 4 is a diagram showing an outline of processing of the job assisting function in embodiment 1.

Fig. 5 is a diagram showing an outline of processing of the prediction support function in embodiment 1.

Fig. 6 is a diagram showing an outline of processing of the pest and disease identification support function in embodiment 1.

Fig. 7 is a diagram showing an outline of processing of the job assisting function in the modification of embodiment 1.

Fig. 8 is a diagram showing a predetermined example of maturity in embodiment 1.

Fig. 9 is a diagram showing an example of color samples of tomatoes in embodiment 1.

Fig. 10 is a diagram showing an example of a model sample of a cucumber in embodiment 1.

Fig. 11 is a diagram showing a configuration related to the AI function in embodiment 1.

Fig. 12 is a diagram showing an example of the recognition result in embodiment 1.

Fig. 13 is a diagram showing a first example of the harvest work assist display in embodiment 1.

Fig. 14 is a diagram showing a second example of the harvest work assist display in embodiment 1.

Fig. 15 is a diagram showing a third example of the harvest work assist display in embodiment 1.

Fig. 16 is a diagram showing a fourth example of the harvest work assist display in embodiment 1.

Fig. 17 is a diagram showing a fifth example of the harvest work assist display in embodiment 1.

Fig. 18 is a diagram showing a sixth example of the harvest work assist display in embodiment 1.

Fig. 19 is a diagram showing an example of an image at the time of shipment work support in embodiment 1.

Fig. 20 is a diagram showing a first example of the shipment work support display in embodiment 1.

Fig. 21 is a diagram showing a second example of the shipment work support display in embodiment 1.

Fig. 22 is a diagram showing a configuration example of a field in a plan view in embodiment 1.

Fig. 23 is a diagram showing a configuration example of a top view of a field in a modification of embodiment 1.

Fig. 24 is a diagram showing a configuration of an actual result detection unit in the field work support system according to embodiment 2 of the present invention.

Fig. 25 is a diagram showing a first example of an image in embodiment 2.

Fig. 26 is a diagram showing a second example of an image in embodiment 2.

Fig. 27 is a diagram showing a third example of an image in embodiment 2.

Fig. 28 is a diagram showing a fourth example of the image in embodiment 2.

Fig. 29 is a diagram showing a fifth example of the image in embodiment 2.

Fig. 30 is a diagram showing the structure of a portion to be determined in the modification of embodiment 2.

Fig. 31 is a schematic diagram of the relative positional relationship in the modification of embodiment 2.

Detailed Description

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

(embodiment mode 1)

An on-site work support system according to embodiment 1 of the present invention will be described with reference to fig. 1 to 23.

[ field work support system ]

The on-site work support system according to embodiment 1 mainly includes a worker terminal 1 and a server 2 (a computer system including the server 2). The worker W1 and the instructor W2 are persons who use the system. The worker W1 uses the worker terminal 1 as a smart device (in other words, a portable information device). The worker terminal 1 is a portable terminal 1A or a wearable terminal 1B used by the on-site worker W1; the worker W1 carries the portable terminal 1A or wears the wearable terminal 1B, and may use one or both of them. The portable terminal 1A is, for example, a smartphone or a tablet terminal. The wearable terminal 1B is, for example, a smart watch or an HMD including smart glasses. The worker terminal 1 has a work assisting function 41, which outputs work assistance information to the worker W1 in cooperation with the functions of the server 2.

The worker W1 is a person who performs agricultural work such as harvesting or shipment of crops in a field such as a farm field (for example, a vinyl greenhouse). The worker W1 may be a foreign worker or a student, and may also be an experienced person. The instructor W2 is a person who gives instructions related to the agricultural work to the worker W1, and is, for example, an employee of an agricultural operator or of JA (an agricultural cooperative). The work object 3 is a crop and is the object of the assisted work; specific examples are tomatoes and cucumbers.

JA conducts business related to agricultural operations, such as guidance, management, support, and purchasing. JA wishes to grasp the state (actual results, predictions, and the like) of harvest or shipment of each farmer's agricultural products as accurately as possible. Conventionally, however, doing so has required considerable time, effort, and cost for JA.

The present system assists the work performed by the worker W1 using the smart device (worker terminal 1) of the worker W1. The worker W1 views the work object 3 through the worker terminal 1, and the worker terminal 1 captures an image including the work object 3. The system acquires this image from the worker terminal 1 and uses it for work assistance: it determines the state of the work object 3 in real time from the image and provides work assistance such as work guidance. The present system outputs the work assistance information to the worker W1 through the worker terminal 1. The output is not limited to image display, and includes output based on sound, vibration, or light.

The system can be used at least during harvesting operations or shipping operations involving screening of crops, particularly vegetables and fruits. The screening is, for example, judgment or screening of maturity, grade, actual size, shape, state of disease and insect pest of the crop. The output work support information includes at least harvest support information (in other words, harvest target instruction information, harvest target discrimination support information, and the like) such as a distinction as to whether the crop visible in the field of view of the worker W1 can be harvested or whether the crop should be harvested.

The present system can be used easily, simply by the worker W1 carrying or wearing the worker terminal 1. Even when the worker W1 is not an experienced person but an inexperienced one (a beginner, a person with a low skill level, or the like), the work assistance makes it easy to perform harvesting or shipment work. Through this work assistance, the system can efficiently convey the skills, experience, and knowledge of experienced persons to inexperienced persons and thereby serve as training, allowing inexperienced persons to improve their work skills.

The system is configured so that the AI function 20 learns, implements, and reflects the skills and experience of experienced persons involved in the work, and provides them to the worker W1 as work assistance. The present system includes the AI function 20 and the like in a computer system, such as the server 2, cooperating with the operator terminal 1. The AI function 20 includes functions such as image analysis and machine learning; as one example, machine learning such as deep learning is used. The AI function 20 performs machine learning using image data of the work object 3 as input. The image may be an image of a crop actually captured by a camera of the worker terminal 1, or may be an image such as a color sample described later. The AI function 20 updates the learning model used for recognition by learning from the input images.

The AI function 20 recognizes the state of maturity of the crop or the like reflected in the image from the input image, and outputs the recognition result. The operator terminal 1 outputs work assistance information, for example, harvesting work assistance information to the operator W1 using the recognition result of the AI function. The operator terminal 1 displays an image showing the object to be harvested on the display screen 5, for example. The operator W1 can easily perform operations such as harvesting based on the operation support information.

A portable terminal 1A such as a smartphone includes a display surface 5 such as a touch panel, a camera 6, and the like. The camera 6 has an internal camera 61 or an external camera 62. The operator W1 uses the camera 6 to photograph the work object 3. The display surface 5 of the mobile terminal 1A displays the work assistance information and the like. The portable terminal 1A outputs a sound corresponding to the work support information from the speaker, and controls vibration or light corresponding to the work support information.

The wearable terminal 1B, such as an HMD, includes a display surface 5, a camera 6, and the like. The camera 6 includes a camera for line-of-sight detection, a camera serving as a distance measuring sensor, and the like. In the case of an HMD or the like, a controller 9 that communicates with the main body is also attached to the wearable terminal 1B; the worker W1 can hold the controller 9 in a hand to operate it. The display surface 5 may be of a transmissive type or a non-transmissive type (VR type). In accordance with the range of the user's field of view, the display surface 5 of the HMD displays an image based on AR or the like (sometimes referred to as a virtual image) corresponding to the work assistance information, superimposed on the actual image of the work object 3 in real space.

The computer system of the service provider, including the server 2, a DB, PCs, and the like, includes a management function 40 and the AI function 20. The management function 40 registers and manages information on a plurality of workers W1 and instructors W2 as users, a plurality of fields, and the like. For example, when a certain agricultural operator has a plurality of workers W1 and a plurality of fields, the management function 40 manages the information collectively. The present system can be installed in, for example, a server 2 of a data center or a cloud computing system on a communication network. The worker terminal 1 communicates and cooperates with the server 2 and the like via the communication network. The server 2 and the like manage, integrate, and share the data of each user, and assist each site and each worker W1. Processing is shared between the worker terminal 1 and the computer system including the server 2; various ways of sharing are possible, and embodiment 1 shows one example. In embodiment 1, the computer system is responsible for the processing of the AI function 20, which has a relatively large computational load.

On the basis of the image obtained through the operator terminal 1, the present system performs not only work assistance for harvesting or shipment but also other assistance described later. As one such form of assistance, the system detects and responds to the state of pests and diseases of crops, providing pest and disease identification assistance information, pesticide spraying judgment information, and the like. As another, the system assists in predicting the harvest (or shipment) of crops, providing prediction information such as a harvest forecast.

[ operator terminal 1]

Fig. 2 shows a functional block configuration of the operator terminal 1. In this example, a case is shown where the operator terminal 1 is an HMD including smart glasses. The operator terminal 1 includes a processor 101, a memory 102, a display device 50, a camera 6, a sensor 7, a communication device 80, a microphone 81, a speaker 82, an operation button 83, and a battery 84, which are connected to each other via a bus or the like. The operator terminal 1 also has an associated controller 9 and communicates with the controller 9 via the communication device 80.

The processor 101 controls the whole and each part of the operator terminal 1. The operator terminal 1 includes an imaging unit 11, an object recognition unit 12, an object selection unit 13, a display control unit 14, a sound notification unit 15, a vibration notification unit 16, a light notification unit 17, and the like as a processing unit configured by hardware and software including a processor 101.

The memory 102 stores data or information processed by the processor 101. The memory 102 stores a control program 110, an application program 120, setting information 130, captured image data 140, virtual image data (in other words, job assistance data) 150, and the like. The control program 110 is a program for realizing the work assisting function 41 and the like. The application programs 120 are various programs originally provided in the HMD. The setting information 130 is system setting information or user setting information. The captured image data 140 is data of an image captured by the camera 6. The virtual image data 150 is data for displaying an image of the job assistance information on the display surface 5.

The display device 50 is, for example, a projection type display device, and projects a display image onto a lens surface constituting the display surface 5. A device other than a projection type display device may also be used. In the case of the portable terminal 1A, the display device 50 is a touch panel or the like. The camera 6 includes one or more cameras that capture images in the front direction of the field of view of the operator W1 as the user, and includes a camera constituting a line-of-sight detection sensor and a camera constituting a distance measuring sensor. Examples of the sensor 7 include a known GPS receiver, a geomagnetic sensor, an acceleration sensor, and a gyro sensor. The operator terminal 1 detects the position, direction, acceleration, and the like of the operator terminal 1 or the operator W1 using the detection information of the sensor 7, and uses this information for control.

The communication device 80 corresponds to various communication interfaces and performs wireless communication with the server 2, near field communication with the controller 9, and the like. The microphone 81, which may include a plurality of microphones, is a sound input device that inputs and records sound. The speaker 82, which may include a plurality of speakers, is a sound output device that outputs sound. The operation button 83 receives input operations by the operator W1. The battery 84 supplies electric power to each part based on charging.

[ example of system configuration and each function ]

Fig. 3 shows an example of a system configuration including cooperation between the on-site work support system of embodiment 1 in fig. 1 and host systems, and an outline of the configuration of each function of the host systems. The system configuration example of fig. 3 includes an instruction system 201, a prediction system 202, and a growth management system 203 in addition to the worker terminal 1 and the server 2. The configuration contents of the instruction system 201, the prediction system 202, and the growth management system 203 are not particularly limited. The configuration example of fig. 3 shows a case where the instruction system 201, the prediction system 202, and the growth management system 203 are installed in the computer system of the service provider including the server 2; in other words, the computer system incorporates the instruction system 201 and the like. Alternatively, a host system such as the instruction system 201 may be externally connected to the computer system including the server 2 by communication.

The instruction system 201 includes a harvest instruction system or a shipment instruction system. The instruction system 201 receives input of a work instruction from the instructor W2. The work instruction is, for example, an instruction for a harvesting job accompanied by a designation of maturity or grade. Based on the work instruction from the instructor W2, the instruction system 201 configures a job instruction and provides it to the work assisting function 41. The job instruction includes display target information for the work assistance information. For example, when a maturity is designated, the display target information is information indicating the harvest targets corresponding to the designated maturity.

In this system, a person other than the on-site worker W1 can act as the instructor W2 and give a work instruction to the system (for example, the server 2 or the worker terminal 1), and work assistance information (or work instruction information) matching the work instruction can be generated and output. The work assistance information (or work instruction information) is, for example, information that directly designates the harvest targets. The work instruction from the instructor W2 is, for example, information that indirectly expresses the harvest targets, such as a designation of maturity; when a certain maturity is designated, all crops in a state corresponding to that maturity are treated as harvest targets. Such work instructions can similarly be applied when a grade, an actual size, or the like is designated at the time of shipment. In this system, the instruction system 201 can indicate the harvest targets and the like as work assistance in accordance with the work instruction from the instructor W2, so that the worker W1 is less likely to be confused about what to harvest at the site.

In more detail, the work assisting function 41 includes a harvesting work assisting function 41A and a shipment work assisting function 41B. The harvesting work assisting function 41A is a function that assists output at the time of harvesting work, and the shipment work assisting function 41B is a function that assists output at the time of shipment work. The work assisting function 41 outputs work assistance information based on the work instruction information from the instruction system 201; for example, it displays a work assistance image on the display screen 5. The image represents the work target and assists screening. The work assisting function 41 can also grasp the position of the operator terminal 1 and the like.

The prediction system 202 is a system that predicts a harvest prediction amount, a shipment prediction amount, or the like, and includes a harvest prediction system and a shipment prediction system. The operator terminal 1 has a prediction support function 42, which cooperates with the prediction system 202. Using information such as the recognition result processed by the work assisting function 41, the prediction support function 42 transmits information such as the number of objects at the current time (for example, the number per maturity level) to the prediction system 202. The prediction system 202 uses this information to predict a harvest prediction amount or the like and outputs prediction information as the prediction result, providing it to the instructor W2 or a system of JA or the like. The prediction system 202 may also return the prediction information to the operator terminal 1, and the prediction support function 42 may output the prediction information, such as a harvest forecast, on the display screen 5.

The prediction support function 42 may also grasp the actual results of harvesting by performing predetermined processing using information such as the number of objects (recognition amount) at the current time in the recognition results. Since the number of objects is known from the work of harvesting or shipping the crops performed with the work assistance, the system can measure the actual results of harvesting or shipment as information. The present system may record this actual result information and output it to the host system or the instructor W2.

The growth management system 203 is a system for managing the growth of crops in a field. The growth management system 203 includes a pest judgment system and a pesticide spray judgment system. The operator terminal 1 has a pest identification support function 43. The pest identification support function 43 cooperates with the growth management system 203. The pest identification support function 43 transmits information on the state of a pest of the object to the growth management system 203 using information such as the recognition result processed by the work support function 41. The growth management system 203 determines a response such as pesticide spraying, fertilizer application, removal, and the like using the information, and outputs response information. The response information includes, for example, pesticide spray instruction information. The growth management system 203 outputs the response information to a system such as the instructor W2 or JA. The growth management system 203 may respond the response information to the operator terminal 1. The operator terminal 1 outputs the operation support information indicating the state of the pest to the display screen 5. Further, the operator terminal 1 outputs work assistance information such as an instruction to spray agricultural chemicals to the display screen 5 based on the response information from the growth management system 203. The learning model of machine learning of the AI function 20 can recognize (estimate) the position, maturity, grade, and state of the plant diseases and insect pests together. In other ways, the AI function 20 may use another machine-learned learning model for each maturity or pest status.

[ work assisting function ]

Fig. 4 shows an outline of processing of the work support function 41 in the configuration of cooperation of the operator terminal 1 and the server 2. Along with the operation of the operator W1, the image pickup unit 11 of the operator terminal 1 picks up an image of the work object 3 by using the camera 6 to obtain an image (corresponding image data). The image includes a still image and a moving image. The imaging unit 11 stores the image as the captured image data 140. The photographing is a photo photographing based on visible light. The operator terminal 1 obtains information such as date and time, position, and direction by using not only the image but also the sensor 7 and the like at the time of shooting. The position is, for example, position information (for example, latitude and longitude) that can be acquired by the GPS receiver in positioning, but is not limited to this, and a position acquired by another positioning means may be used. The direction corresponds to the front direction of the operator W1, the front direction of the operator terminal 1, the shooting direction of the camera 6, and the like, and can be measured by a geomagnetic sensor or a line-of-sight detection sensor, for example.

The object recognition unit 12 receives the image from the imaging unit 11, performs recognition processing on the state of the work object 3 using the AI function 20, and obtains a recognition result. The image data (input image) input to the object recognition unit 12 is associated with information such as the ID, date, position, and direction of the image. The object recognition unit 12 transmits the input image to the AI function 20 together with the request. The AI function 20 includes a module for image analysis and machine learning configured in a computer system. Machine learning involves learning models, for example, using deep learning. The AI function 20 performs recognition processing based on the input image and outputs a recognition result. The recognition result includes information on the state of each object in the image, such as the position and maturity of the object. The AI function 20 transmits a response including the recognition result to the object recognition unit 12 of the operator terminal 1. The object recognition unit 12 stores the recognition result and transmits the recognition result to the object selection unit 13.
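
To make the flow concrete, the following minimal sketch models the imaging unit 11 handing an image frame and its metadata to the object recognition unit 12, which obtains per-object results from the AI function 20. The record fields, class names, and the stubbed detector call are illustrative assumptions only; the patent does not fix a concrete interface.

```python
# Illustrative sketch only: field names and the stubbed model call are assumptions,
# not an interface defined by the patent.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FrameMetadata:
    frame_id: str
    date_time: str                   # e.g. "1/5/2019 10:30"
    position: Tuple[float, float]    # e.g. (latitude, longitude) from the GPS receiver
    direction_deg: float             # shooting direction from the geomagnetic sensor


@dataclass
class Detection:
    object_id: str                   # e.g. "001"
    category: str                    # e.g. "A (tomato)"
    maturity: int                    # 1 (ripest) .. 6 (least ripe), per fig. 8
    bbox: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels


def run_ai_recognition(image, meta: FrameMetadata) -> List[Detection]:
    """Stand-in for the AI function 20 (image analysis / machine learning)."""
    # A real system would run a trained detector here; this stub returns a fixed
    # result resembling recognition result 112 in fig. 11.
    return [Detection("001", "A (tomato)", 3, (274, 70, 690, 448))]


def object_recognition_unit(image, meta: FrameMetadata) -> List[Detection]:
    """Object recognition unit 12: forward the frame to the AI function, return its result."""
    return run_ai_recognition(image, meta)
```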

The object selection unit 13 selects a part of the information as the display object based on the recognition result of the object recognition unit 12. In other words, the object selection unit 13 extracts, limits, refines, or filters a part of the information based on the recognition result. The object selection unit 13 selects whether to use all the information of the recognition results or whether to use a part of the information. The object selection unit 13 can select the object using, for example, the maturity and the rank. The object selection unit 13 can perform selection according to user setting or user instruction, for example. The selected part of the information is, for example, harvest instruction information for only the harvest target object, and the harvest instruction information includes, for example, an image indicating the harvest target object and does not include an image indicating a non-harvest target object.

When a job instruction is given from the instruction system 201, the object selection unit 13 performs selection based on that instruction; for example, it selects from the recognition result so as to notify only the objects corresponding to the maturity of the harvest target designated by the instruction. The instruction system 201 provides the object selection unit 13 with harvest target information corresponding to the work instruction from the instructor W2. The work instruction (information on the harvest target) is, for example, harvest instruction information designating the maturity of the harvest target, such as "the harvest target is a tomato having a maturity of 3 or higher"; another example is "please harvest tomatoes with a maturity of 1". The maturity designated by the instructor W2 or the instruction system 201 may be determined by any mechanism; in one example, the maturity or the like may be determined based on shipping plans, order information, and the like, taking transport distances and the like into consideration.

In the case of the mobile terminal 1A, the display control unit 14 draws an image (virtual image) indicating the position, maturity, and the like of the selection result from the object selection unit 13, within the image obtained by the imaging unit 11. In the case of the wearable terminal 1B, the display control unit 14 superimposes and displays an image (virtual image) indicating the position, maturity, and the like of the selection result on the display screen 5.

The processing unit cooperating with the display control unit 14 includes an audio notification unit 15, a vibration notification unit 16, and a light notification unit 17. The sound notification unit 15 outputs a sound for assisting the work from the speaker 82. The vibration notification unit 16 outputs vibration for work assistance. The light notification unit 17 controls light emission for work assistance.

[ prediction support function ]

Fig. 5 shows an outline of processing of the prediction support function 42 in the configuration in which the operator terminal 1 and the server 2 cooperate. In particular, the case where harvest amount prediction assistance is performed will be described. The operator terminal 1 includes a target field selection unit 21, a recognition result counting unit 22, and a recognition result transmission unit 23 in addition to the above-described imaging unit 11 and the like. The target field selection unit 21 selects, before the work, a field (corresponding region) from which data for harvest amount prediction is to be collected, referring to the field information in the DB250 of the server 2; the target field selection unit 21 may also be provided in the server 2. The DB250 collects data for a plurality of agricultural operators, a plurality of fields, and the like, and stores field information including a list of fields.

The imaging unit 11 and the object recognition unit 12 have the same configuration as described above. The object recognition unit 12 cooperates with the AI function 20 based on the input image and receives a recognition result, which is an output from the AI function 20. The recognition result includes information such as the position, number, maturity, and the like of the object in the image. The recognition result counting unit 22 receives the recognition result from the object recognition unit 12, and counts the number of objects in the recognition result as a number divided by maturity. The recognition result transmitting unit 23 receives the statistical result information from the recognition result counting unit 22 and the target field information from the target field selecting unit 21, creates transmission information in which these pieces of information are combined, and transmits the transmission information to the prediction system 202 of the server 2.
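
As a rough illustration of the counting and combining steps described above, the sketch below assumes recognition results represented as simple records with a maturity field; the field names and the message layout are assumptions, not a format defined by the system.

```python
# Illustrative sketch of the recognition result counting unit 22 and the
# recognition result transmitting unit 23. Field names are assumptions.
from collections import Counter


def count_by_maturity(detections):
    """Count recognized objects per maturity level (1 = ripest .. 6)."""
    return Counter(d["maturity"] for d in detections)


def build_transmission_info(field_id, date_time, detections):
    """Combine target field, date/time and per-maturity counts for the prediction system 202."""
    return {
        "date_time": date_time,
        "field": field_id,
        "counts_by_maturity": dict(count_by_maturity(detections)),
    }


# Example matching "1/5/2019, field A, maturity 1: 10, maturity 2: 100, ...":
detections = [{"maturity": 1}] * 10 + [{"maturity": 2}] * 100
print(build_transmission_info("field A", "1/5/2019", detections))
# {'date_time': '1/5/2019', 'field': 'field A', 'counts_by_maturity': {1: 10, 2: 100}}
```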

The transmission information is information for prediction, and includes, for example, the current date and time, the target field, and the measured number of objects (recognition amount) classified by maturity, such as "1/5/2019, field A, maturity 1: 10, maturity 2: 100, ...". The prediction system 202 accepts the transmitted information, integrates it into the DB, and predicts the harvest amount in the target field at a future date and time. The prediction time unit can be, for example, 1 day, 1 week, or 1 month. The prediction result information includes, for example, the future date and time, the target field, and the predicted harvest amount classified by maturity, such as "12/1/2019, field A, maturity 1: 15, maturity 2: 150, ...". The prediction system 202 predicts the harvest amount at the future date and time using, for example, the numbers in the time-series history of transmission information, the weather at the current time, and the weather forecast. The mechanism and logic of the prediction processing are not particularly limited.

The prediction system 202 provides information of the prediction result to the host system, the instructor W2, the worker W1, or the like. For example, a person such as JA or a higher-level system can easily obtain prediction information such as a harvest prediction amount. Thus, JA and the like can reduce the number of conventional field surveys and improve the prediction accuracy. JA and the like can grasp the state (actual results, prediction, etc.) of harvest or shipment of each agricultural product of each agricultural operator with as high accuracy as possible.

[ pest and disease identification support function ]

Fig. 6 shows an outline of processing of the pest identification support function 43 in the configuration in which the operator terminal 1 and the server 2 cooperate. The operator terminal 1 includes a target field selection unit 21 and a recognition result transmission unit 23 in addition to the above-described imaging unit 11 and the like. In this configuration, the object recognition unit 12 and the AI function 20 have a recognition function relating to the state of pests in addition to the functions described above. The object recognition unit 12 receives the image, date and time, position, direction, and other information from the imaging unit 11 and inputs these pieces of information to the AI function 20. The AI function 20 recognizes the state of the object, such as the presence or absence and the type of a pest, in addition to the position of the object, and returns the recognition result to the object recognition unit 12. The recognition result includes information such as the date and time, the target field, the position (the position of the operator W1 and the operator terminal 1), the presence or absence or the type of the pest, and the position of the pest corresponding to the position of the target object in the image. Pests are not limited to those occurring on the vegetable or fruit itself and may occur on the stem, leaf, or the like; the recognition result also includes information in such cases. A crop that is overripe and rotten is to be removed, and this state can also be recognized by the AI function 20.

The recognition result transmitting unit 23 receives the recognition result from the object recognition unit 12, creates predetermined transmission information including the target field, the position (the position of the worker W1 and the worker terminal 1), the recognition result information regarding the state of the pest, and the like, and transmits it to the growth management system 203 of the server 2. The growth management system 203 of the server 2 creates response support information for responding to the state of the pests based on the transmission information, and transmits the response support information to the instructor W2 or the host system; it may also return the response support information to the worker terminal 1. The response is not limited to pesticide spraying and may be fertilization, removal, or the like. The response support information includes pesticide spray instruction information, which includes information designating a pesticide spray position, such as a position or an area in the target field to be sprayed with pesticide, and information designating the type and amount of pesticide to be sprayed.

In this system, a person such as the instructor W2 or JA personnel, or the worker W1, can grasp the parts of the field where a response such as pesticide spraying is required, and omissions in the response can be prevented. The instructor W2 or the like can limit the positions and the amount of pesticide sprayed, respond efficiently at low cost, and improve the quality of the crops. The response information may also include an instruction to thin out crops. For example, a plurality of fruits may form on a certain stem, with pests occurring on some of them; the growth management system 203 determines crops to be thinned (removed) in consideration of such states and provides this determination as response information.

[ work assisting function: modification ]

Fig. 7 shows a configuration of a modification of the work assisting function 41 of fig. 4. In this modification, the object selection unit 13 is provided in the server 2. The AI function 20 of the server 2 receives an image and the like from the object recognition unit 12 as input, performs recognition processing, and supplies the recognition result to the object selection unit 13. The object selection unit 13 selects all or a part of the recognition result from the AI function 20 based on the job instruction information from the instruction system 201, and transmits the selection result to the object recognition unit 12. A similar effect can be obtained with this configuration. The selection by the object selection unit 13 may also be configured by user settings.

[ work assistance information output mode ]

In the field work support system according to embodiment 1, the smart device serving as the operator terminal 1 can display the work assistance information on the display screen 5 based on the recognition result of the AI function 20 described above and convey it to the operator W1. In this case, the display of the work assistance is devised so that the worker W1 can easily understand it and can easily make judgments such as whether to harvest. The display control unit 14 of the operator terminal 1 controls the display mode when displaying an image representing the harvest targets or the like on the display screen 5. Using the object selection unit 13, the operator terminal 1 can, for example, highlight only the harvest targets within the range of the display screen 5 corresponding to the field of view of the operator W1. The worker terminal 1 or the server 2 uses the object selection unit 13 to select (filter) only a part of the information from the work assistance information that is a candidate for output. Furthermore, when a harvest target is present near the operator W1 and the operator terminal 1 (for example, within a predetermined distance range), the operator terminal 1 can give a notification or work guidance by image display, sound output, vibration, light output, or the like.

The worker W1 works on crops, cultivation soil, implements, and the like in a field such as a vinyl greenhouse. The work may involve, for example, holding a crop with both hands to harvest it. The worker W1 therefore basically uses both hands for long periods, and it is desirable that the hands not be freed for other tasks (for example, operating IT equipment) as much as possible. The present system can display a work assistance image on the display surface 5 of the portable terminal 1A, but in that case the worker W1 must hold the portable terminal 1A in a hand and look at the display surface 5. Therefore, the output method in the present system is not limited to displaying an image on a physical display surface 5, and other means may be provided. The present system can display the work assistance information superimposed on the field of view of the worker W1 using a mechanism such as AR of the wearable terminal 1B; in this case the worker W1 can keep the hands free and can work easily. The present system is not limited to such display means, and can also convey the work assistance information using sound, vibration, light, or the like even when the hands of the worker W1 are occupied.

[ maturity ]

The crop serving as the work object 3 may have maturity, grade, size (actual size), and the like specified for each category; a grade is a quality-related classification. Fig. 8 shows an example of the specification of maturity in the case where the crop is tomatoes. In this example, the maturity ranges from 1 to 6, with 1 being the highest (ripest) and 6 the lowest. As an example of a maturity threshold corresponding to a work instruction, a case where the threshold is "maturity 3 or higher" is shown. In this case, maturity levels 1 to 3 satisfy the threshold and therefore become, for example, harvest targets, while maturity levels 4 to 6 do not satisfy the threshold and therefore become non-harvest targets. The maturity (and the grade described later) can be specified differently according to the type of crop and the region (for example, prefecture). Any maturity level from 1 to 6 can be designated as the maturity to be harvested; the maturity of the harvest target is determined in consideration of, for example, transport distance, use, and demand. A crop with a maturity of 5 or 6 may also be harvested.

[ color sample ]

Fig. 9 shows an example of an image of a color sample (in other words, a maturity reference) in the case where the crop is tomatoes. The example is an actual image in which six tomatoes of maturity 1 to 6 are arranged side by side on a sheet of paper so that they can be compared. The AI function 20 performs machine learning for recognition in advance with such color sample images as input (in other words, teaching data). Thus, the learning model of the AI function 20 can perform recognition regarding the maturity of tomatoes. For the field concerned, color samples and model samples (references for shape and the like) of the objects such as crops are prepared in advance and can be used for machine learning.

[ grade ]

Fig. 10 shows an example of the specification of grades in the case where the crop is cucumbers. Sizes (actual sizes) are specified in the same way. In this example, A, B, and C are the grades specified in the model sample. An individual whose shape approximates a straight line corresponds to grade A. An individual is rated grade B when its shape is bent to some extent (grade B1) or when its tip droops (grade B2). An individual whose shape is deformed further than grade B corresponds to grade C.

The AI function 20 takes images of sample individuals of each grade as input in advance and performs machine learning with respect to the grade. Thus, the learning model of the AI function 20 can perform recognition regarding the grade of cucumbers. Likewise, the AI function 20 can recognize the actual size of a crop based on machine learning, and may also calculate the actual size of an object using the size of the object in the image and the distance detected by the distance measuring sensor.
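
The text does not spell out how the in-image size and the measured distance are combined; one common approach is the pinhole camera relation, sketched below under the assumption that the camera's focal length in pixels is known from calibration. The numeric values are illustrative only.

```python
# Hedged sketch: estimate the real-world size of an object from its pixel size
# and the distance reported by a range sensor, using the standard pinhole
# relation  real_size ~ pixel_size * distance / focal_length_px.
def estimate_actual_size_cm(pixel_size: float, distance_cm: float,
                            focal_length_px: float) -> float:
    return pixel_size * distance_cm / focal_length_px


# Example: an object spanning 416 px, 50 cm from the camera, focal length 1000 px (assumed values)
print(round(estimate_actual_size_cm(416, 50.0, 1000.0), 1))  # -> 20.8 (cm)
```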

[ AI function ]

The AI function 20 is described further with reference to fig. 11 and the like. Fig. 11 is an explanatory diagram of the AI function 20. The operator terminal 1 acquires an image including a crop as the work object 3 through the imaging unit 11. The image 111 shows an example of an input image, in which, for example, 3 tomatoes appear. The image 111 is associated with information such as an ID, date, time, position, and direction. The operator terminal 1 inputs the input image 111 to the object recognition unit 12, which transmits data such as the input image 111 to the AI function 20. Note that the AI function 20 may be included in the object recognition unit 12. The object recognition unit 12 or the AI function 20 takes data including the image as input, performs recognition processing on the objects, and outputs data including the recognition result.

The recognition result 112 in the output data includes information on the ID, the type, the maturity, the position, or the area of the object in the image 111. The category is an estimated value of the classification of agricultural products such as tomatoes and cucumbers. The recognition result 112 of fig. 11 shows an example of information related to the individual 111a, for example. The ID of the object corresponding to the individual 111a is 001, the category is a (tomato), and the maturity is 3. The position of the object is L1. The position L1 of the object is represented by position coordinates or the like. The region corresponding to the position of the object may be represented by a rectangular or circular region. In the case of a circular area, the area may be represented by coordinates (cx, cy) of the center point, a radius (r), or the like, and in the case of an elliptical area, the area may be represented by an ellipticity or the like. In the case where the region is a rectangular region, for example, the region may be represented by 2-point coordinates { (x1, y1), (x2, y2) } of upper left and lower right, or may be represented by a center point coordinate, a width, a height, and the like.

In the case of pest and disease identification assistance, the output recognition result includes information such as the position and type of the pest. The output data is not limited to maturity, and may also include information such as the grade and actual size of the crop. The maturity or grade is obtained as a recognition result based on the aforementioned samples.

The details of the recognition processing by the object recognition unit 12 and the AI function 20 are as follows. The AI function 20 receives an image (a still image, a moving image, streaming video from a camera, or the like) as input. When the format of the input image data is a moving image, the AI function 20 sequentially cuts out the moving image one frame at a time and inputs each frame as an image frame (i.e., a still image) to be subjected to recognition processing. The AI function 20 recognizes the type, position, maturity, and the like of the object in the input image frame, and outputs the recognition processing result.
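A minimal sketch of this frame-by-frame handling, using the OpenCV library purely for illustration; the recognize() call is a hypothetical placeholder standing in for the recognition processing of the AI function 20:

```python
import cv2


def recognize(frame):
    """Placeholder for the recognition processing of the AI function 20."""
    return []  # would return a list of per-object recognition results


def process_video(path: str):
    """Cut a moving image into individual frames and run recognition on each."""
    capture = cv2.VideoCapture(path)
    results = []
    while True:
        ok, frame = capture.read()   # read one image frame (still image)
        if not ok:
            break                    # end of the stream
        results.append(recognize(frame))
    capture.release()
    return results
```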

Fig. 12 shows a specific configuration example of the output of the recognition result of the AI function 20. The input image frame corresponding to the image 111 of fig. 11 has, for example, 1280 pixels in the horizontal direction (x) and 720 pixels in the vertical direction (y). The position coordinates (x, y) take the upper-left pixel of the image frame as the origin (0, 0). In the example of the image 111, for example, 3 tomatoes (fruits) are displayed as objects (shown as objects OB1, OB2, and OB3). Fig. 12 shows, for example, the recognition result of the position of object OB1. The position of the object OB1 is indicated by the corresponding rectangular region. The rectangular region is represented by 2 points, namely the position coordinates (x1, y1) of the pixel p1 at the upper-left vertex and the position coordinates (x2, y2) of the pixel p2 at the lower-right vertex. For example, the point p1 is (x1, y1) = (274, 70) and the point p2 is (x2, y2) = (690, 448). The recognition result for the type of object OB1 is "A (tomato)" and the recognition result for the maturity is "3". When the rectangular region is instead defined by a center point, a width, and a height, for example, the position coordinate of the center point of object OB1 is (482, 259), the width is 416, and the height is 378.
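As a small worked example of the two region representations described above (a sketch only; the tuple layout is an assumption, not the actual data format of the system):

```python
def corners_to_center(x1: int, y1: int, x2: int, y2: int):
    """Convert a rectangle given by upper-left/lower-right corners
    to (center_x, center_y, width, height)."""
    width = x2 - x1
    height = y2 - y1
    return (x1 + width // 2, y1 + height // 2, width, height)


# Object OB1 from fig. 12: p1 = (274, 70), p2 = (690, 448)
print(corners_to_center(274, 70, 690, 448))
# -> (482, 259, 416, 378), matching the center/width/height values in the text
```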

The object recognition unit 12 can similarly recognize a plurality of objects in the image and can output the recognition result information of the plurality of objects collectively. The output format of the recognition result can be, for example, a table such as the table 113. The table 113 stores the recognition result information of one object per row and has columns such as object ID, type, maturity, and position coordinates { x1, y1, x2, y2 }.

In the case of harvest work assistance, the present system receives a designated work instruction including the maturity of the harvest target from the instruction system 201. In this case, the object selection unit 13 selects, from the recognition result, a part of the information as the information of the harvest targets in accordance with the designated maturity. The data to be displayed is thereby narrowed down. For example, when the work instruction designates "maturity of 3 or more", the object selection unit 13 selects a part of the recognition result data of the table 113 and obtains the selection result shown in the table 114. In the selection result of the table 114, only the data of the tomato in the first row of the table 113 (object ID 001) is extracted.
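A minimal sketch of this selection step, assuming that maturity is expressed on the 1-to-6 color-sample scale used in this description (a smaller number meaning riper, so "maturity of 3 or more" is modelled as level <= 3); the record layout and the values of rows 002 and 003 are made up purely to show the filtering:

```python
from typing import Dict, List


def select_harvest_targets(recognition_rows: List[Dict], max_level: int) -> List[Dict]:
    """Select objects at least as ripe as the designated color-sample level."""
    return [row for row in recognition_rows if row["maturity"] <= max_level]


# Rows modelled loosely after table 113
table_113 = [
    {"id": "001", "type": "A (tomato)", "maturity": 3, "box": (274, 70, 690, 448)},
    {"id": "002", "type": "A (tomato)", "maturity": 5, "box": (700, 120, 900, 300)},
    {"id": "003", "type": "A (tomato)", "maturity": 6, "box": (150, 400, 300, 560)},
]

table_114 = select_harvest_targets(table_113, max_level=3)
# -> only the row with object ID 001 remains, as in the selection example
```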

[ working auxiliary display (1) ]

An example of output in the case where the output mode of the work assistance information is display will be described below. Fig. 13 shows a display example of the harvesting work assistance information on the display surface 5 of the operator terminal 1. This example corresponds to the example of the image 111 in fig. 11. The image 131 is an image of the field captured at one location and shows a plurality of tomatoes as an example of the crop (including not only the fruits but also stems and leaves). The operator terminal 1 superimposes an image 132 indicating the harvest target on the corresponding object in the image 131. An example of this image 132 is a red frame image surrounding the region of the object OB1. The display control unit 14 of the operator terminal 1 controls the color, shape, size, and line thickness of the frame image (for example, the image 132) so that they differ depending on the maturity (for example, maturity 3) of the object (for example, the object OB1). The operator terminal 1 also varies the shape of the frame image, such as a rectangle, a circle (including an ellipse), or a triangle, according to the shape of the object. The shape of the frame image preferably matches the contour of the object as closely as possible, but is not limited thereto, and may be a rectangle, a circle, or the like that encloses the object. In this example, the shape of the frame image is circular in accordance with the type of the object, namely tomato. Within the image 131, no frame image is displayed on tomatoes that are not harvest targets. The operator W1 can focus on the image 132 indicating the harvest target and can easily harvest the corresponding individual.

[ working auxiliary display (2) ]

Fig. 14 shows another display example in which the harvesting work assistance information is displayed on the display surface 5. In this example, the image 141 shows a plurality of individual tomatoes. Based on the object information of the recognition result (and the corresponding selection result), the operator terminal 1 superimposes a frame image representing each individual object on the display surface 5. As the frame images, solid-line circular frame images (e.g., images g1, g2) indicate a maturity of 1 and a harvest target, and are displayed, for example, with thick red frame lines. The dashed-line circular frame image (e.g., image g3) indicates a maturity of 2 and a non-harvest target. The dotted-line circular frame images (e.g., images g4, g5) indicate a maturity of 3 and a non-harvest target, and are displayed, for example, with thin yellow lines. In this example, the maturity number is displayed for each object, but this display may be omitted.

The display content of the harvesting work assistance information can be changed according to an instruction or setting of the user (the instructor W2, etc.). In this example, auxiliary display is set for objects (tomatoes) with a maturity of 3 or more (levels 1 to 3), and no frame image or the like is displayed for objects with a maturity of less than 3 (levels 4 to 6). In this example, the display is set so that maturities 1, 2, and 3 are distinguished, and different frame images are displayed according to the maturity. In this example, when "harvest at maturity 1" is received as a harvest work instruction from the instructor W2, frame images (images g1 and g2) showing the harvest targets are displayed for the objects with maturity 1. In this example, an explanatory image indicating that the frame images (image g1, etc.) denote harvest targets of maturity 1 is also displayed on the display surface 5.

By viewing the harvest assistance display, the operator W1 can easily recognize which objects should be harvested. When the operator W1 focuses on a solid-line frame image in the field of view, that frame image shows a harvest target (or a harvest instruction), and therefore the operation of harvesting the object can be performed easily.

[ working auxiliary display (3) ]

Fig. 15 shows another display example of the harvesting work assistance information on the display surface 5. The image 151 in fig. 15 shows an example in which information on all objects of the recognition result (and the corresponding selection result) is displayed. A plurality of individual tomatoes (for example, individuals T1, T2, T3, etc.) are displayed in the image 151. In the example of the image 151, cultivation soil (ridges), aisles, support bars, wires, tape, covers, and the like are shown in addition to the crop. This example shows a case where the image is captured diagonally to the left from the position of the operator W1 and the operator terminal 1 on an aisle along the ridges. The image shows tomato fruits and stems on the near side, a ridge and cover below them toward the back, an adjacent aisle behind them, and an adjacent ridge and cover still further back.

The solid-line circular frame images (e.g., frame images G1, G2) are images showing harvest targets, and in this example show objects corresponding to a maturity of 3 or more (maturity 1, 2, 3). The dotted-line circular frame image (for example, frame image G3) is an image showing a non-harvest target, and in this example shows an object with a maturity of less than 3 (maturity 4, 5, 6). In this example, the thickness of the frame line of the frame image is changed according to the distance between the operator W1 and the object: the operator terminal 1 displays an object with a thicker frame line as the distance becomes smaller, that is, as the object becomes closer. By viewing the harvest assistance images, the operator W1 can easily recognize roughly where objects exist in the space of the field of view and how their maturities are distributed.

In this example, all information is displayed, and therefore frame images are displayed for both harvest targets and non-harvest targets in the image 151. The frame images are displayed, for example, in different colors depending on whether the object is a harvest target or not. For example, harvest targets are given red solid-line frame images and non-harvest targets are given yellow dashed-line frame images. The frame image may also be given a different color depending on the maturity of the object. For example, when there are 6 maturity levels from 1 to 6, a color may be assigned to each maturity level, or a color may be assigned to each designated range of maturities. For example, when 6 colors are provided, red, orange, yellow, green, blue, and gray may be used. For example, when 3 colors corresponding to 3 ranges are provided, red, yellow, and green may be set: using maturity thresholds, red is assigned to the first range when the maturity is 1 or 2, yellow to the second range when the maturity is 3 or 4, and green to the third range when the maturity is 5 or 6. The operator W1 harvests the individuals to which red frame images are attached, and recognizes that individuals with yellow frame images should not be harvested because they are not yet ripe enough.
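A minimal sketch of the range-based color assignment described above; the three ranges and color names follow the example in the text, while the function name and scale orientation (1 = ripest) are illustrative assumptions:

```python
def maturity_to_color(level: int) -> str:
    """Map a maturity level (1 = ripest ... 6 = least ripe) to a frame color
    using the 3-range example: red for 1-2, yellow for 3-4, green for 5-6."""
    if level <= 2:
        return "red"
    if level <= 4:
        return "yellow"
    return "green"


# Example: an object with maturity 3 would get a yellow frame in this scheme
print(maturity_to_color(3))
```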

The frame size of a frame image is displayed so as to match the size of the object (fruit) in the image. A frame image that is small in the image corresponds to a fruit whose growth size is small or a fruit located far from the position of the operator W1. The operator W1 can perform the harvesting operation while paying attention to the objects in order, starting from those indicated by large frame images. The operator terminal 1 can also choose not to display objects or frame images whose size in the image is smaller than a threshold value. For example, in the image 151, a plurality of fruits appearing in the back face the adjacent aisle, and the operator cannot immediately harvest them from the aisle where the operator is currently located. In this case, the frame image may be omitted even if the fruit (the corresponding object) is a harvest target. This reduces the amount of displayed information and reduces the load on the operator W1.

As another display control example, the color of the frame image may be matched to the color associated with each maturity level in the color sample.

[ working auxiliary display (4) ]

Fig. 16 shows another display example of the harvesting work assistance information on the display surface 5. The image 161 shows an example in which only the information on the part of the objects selected by the object selection unit 13 from the recognition result is displayed. In the image 161, solid-line circular red frame images (for example, frame images G1 and G2) are displayed only on the harvest targets. The operator W1 can focus on the harvest targets shown by the frame images and can easily perform the harvesting operation.

As described later, when a target object has a pest or disease, a predetermined frame image showing the pest or disease state is displayed. For example, when the individual Tx1 has a pest, a frame image Gx1 showing the pest state is displayed. The frame image Gx1 is, for example, a purple dash-dot line.

The operator terminal 1 may vary the form of the frame image of an object in the image according to the distance from the position of the operator W1 and the operator terminal 1 to the position of the object. The operator terminal 1 can detect the distance from the viewpoint of the operator W1 and the position of the operator terminal 1 to the position of the object by using, for example, image analysis processing or a distance measuring sensor. The operator terminal 1 controls the color, shape, size, and the like of the frame image of the object using this distance information.

In the examples of the image 151 and the image 161, the thickness of the frame line of the frame image differs depending on the distance. For example, the operator terminal 1 makes the frame line thicker and more conspicuous as the distance decreases, that is, as the object gets closer to the operator W1. Thus, the operator W1 can focus on, and harvest or judge, the objects closest to the operator W1 in order.
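A minimal sketch of this distance-dependent line-thickness control; the thickness values and distance breakpoints are illustrative assumptions, not values from the description:

```python
def frame_line_thickness(distance_m: float) -> int:
    """Return a frame-line thickness in pixels: closer objects get thicker,
    more conspicuous frame lines."""
    if distance_m < 0.5:
        return 6   # within arm's reach: thickest line
    if distance_m < 1.5:
        return 4
    if distance_m < 3.0:
        return 2
    return 1       # far objects: thinnest line


# Example: an object 0.8 m away is drawn with a 4-pixel frame line
print(frame_line_thickness(0.8))
```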

[ working auxiliary display (5) ]

Fig. 17 shows another display example of the harvesting work assistance information on the display surface 5. In the image 171, the image representing a harvest target is not a frame image but an image in which a number is connected to the object by an arrow line. In the image 171, each numbered image is displayed at a position that is somewhat away from the position of the object and does not overlap other objects. The numbered images may also be placed at the edge of the display surface 5. The numbered images are shown here, for example, as circular frame images, and the color of the circular frame line or the thickness of the line may be changed according to, for example, the maturity. The number is not limited to the maturity, and may indicate the order of harvesting; for example, objects may be numbered 1, 2, ... in order of closeness to the operator W1.

[ working auxiliary display (6) ]

Fig. 18 shows another display example of the harvesting work assistance information on the display surface 5. Using the line-of-sight information detected by the line-of-sight detection sensor, the operator terminal 1 displays object information only within a partial range (for example, the range 182) centered on the point (for example, the point E1) at the destination of the line-of-sight direction. In this example, frame images representing the harvest targets and non-harvest targets are displayed within the range 182. The image indicating the range 182 itself may or may not be displayed. The range 182 is not limited to a rectangle and may be an ellipse or the like.

A first example of performing this display control is as follows. The operator terminal 1 detects the line-of-sight direction of the operator W1 using the line-of-sight detection sensor. The operator terminal 1 calculates the position of the point in the image at the destination of the detected line-of-sight direction, and sets a range of a predetermined size around the position of that point. The operator terminal 1 sets this range (for example, the range 182) as the detection region for the recognition processing. The object recognition unit 12 and the AI function 20 perform the recognition processing only on this detection region of the image data. In this case, the data to be processed is reduced, so the amount of calculation in the recognition processing can be reduced.

The second example is as follows. The operator terminal 1 sets a range (for example, the range 182) of a predetermined size centered on the position of the point in the image at the destination of the detected line-of-sight direction. The object recognition unit 12 and the AI function 20 perform the recognition processing on the entire area of the image data. The object selection unit 13 of the operator terminal 1 then displays only the part of the recognition result information that falls within the range 182.
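A minimal sketch of the second example, filtering recognition results to a gaze-centered range; the data layout and the fixed range size are illustrative assumptions:

```python
from typing import Dict, List, Tuple


def in_gaze_range(box: Tuple[int, int, int, int],
                  gaze: Tuple[int, int],
                  half_width: int = 200, half_height: int = 150) -> bool:
    """Return True if the box center lies inside a rectangle of the given
    half-size centered on the gaze point (e.g., point E1)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    gx, gy = gaze
    return abs(cx - gx) <= half_width and abs(cy - gy) <= half_height


def filter_by_gaze(results: List[Dict], gaze: Tuple[int, int]) -> List[Dict]:
    """Keep only recognition results whose region falls within the gaze range 182."""
    return [r for r in results if in_gaze_range(r["box"], gaze)]
```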

[ case of shipment work ]

Fig. 19 is an explanatory view of the shipment work assistance. Fig. 19 shows an image 191 of an example in which a plurality of cucumber individuals harvested by the operator W1 are arranged on a table at shipment time. In this example, the image 191 includes the cucumber individuals K1 to K5. At shipment time, the operator W1 sorts the individuals by grade, actual size, and the like, and puts them into boxes, packages, and the like for each group. The operator W1 uses the shipment work assistance function particularly when performing shipment work that includes such screening. The operator terminal 1 takes such an image 191 as input, and the object recognition unit 12 and the AI function 20 recognize it. The AI function 20 takes the image 191 as input and outputs the grade or actual size of each object as the recognition result. The display control unit 14 of the operator terminal 1 displays the grade or actual size information of each object in the image as shipment work assistance information based on the recognition result (and the corresponding selection result) of the object recognition unit 12.

Fig. 20 shows an example in which the shipment work assistance information is superimposed on the image 191 of fig. 19 on the display surface 5. In this example, as the shipment work assistance information, images 192 showing the grade and actual size are displayed at positions near the objects in the image 191, for example just below them. Each image 192 is, for example, a text image. In this example, the 5 individuals K1 to K5 have the characters "S", "M", "L", "B", and "C" in order from the left. In this example, the grade-A products are displayed with text images indicating their actual sizes: for example, S represents small, M medium, and L large. These displays can be selected to show only the grade, only the actual size, or both. By viewing the shipment work assistance information, the operator W1 can easily recognize the grade and actual size of each individual, and can easily perform shipment work including the screening of individuals by grade and actual size.

Fig. 21 shows another display example related to the shipment work assistance. In this example, the shipment work assistance information is shown as a frame image set for each individual. For example, rectangular red frame images are displayed for the individuals K1 to K3, each frame image representing grade A. Similarly to the work instruction described for the harvest work assistance, the work assistance output at shipment time can be performed in accordance with a shipment work instruction from the instructor W2. For example, the instructor W2 instructs shipment (or screening) of grade A as the shipment work instruction. The object selection unit 13 of the operator terminal 1 selects the part of the information corresponding to grade A from the recognition result based on the work instruction, and displays the corresponding shipment work assistance information on the display surface 5. In this example, frame images showing shipment targets (or shipment instructions) are displayed for the individuals K1 to K3 that match grade A. An explanatory image indicating that the frame images denote shipment targets (grade A) is also displayed on the display surface 5.

As another example of display control, when a plurality of harvest targets or shipment targets are adjacent to one another in the image, a frame image or the like that groups the plurality of objects into one group may be displayed.

In the case of shipment work, the harvest results or shipment results can be counted based on the recognition result obtained from an input image such as that of fig. 19. The actual results information grasped in this way can be provided to a higher-level system and the like.

[ work assistance-sound ]

In the following, an example of output in the case where the output mode of the work assistance information is sound is described. When at least one harvest target exists in the image based on the selection result information, the operator terminal 1 outputs a predetermined sound (for example, a chime) indicating that a harvest target is present. The operator terminal 1 may also directly output speech such as "this is a harvest target" or "a harvest target is present" using a speech synthesis function. The operator terminal 1 may control the output sound so that it differs depending on the position of the harvest target in the image, or on its direction or distance from the operator W1. For example, the image is roughly divided into several regions: the vicinity of the center, and the left, right, upper, and lower sides relative to the center. The operator terminal 1 changes the sound depending on the region of the image in which the object appears (which corresponds to the positional and directional relationship between the operator W1 and the operator terminal 1 on one side and the object on the other). For example, one chime may be output when the position of the object in the image is close to the center and a different chime when it is far from the center. Alternatively, control that changes the sound volume, such as increasing the volume as the operator approaches the object, may be used. The operator terminal 1 may also announce the position of the harvest target based on the positional relationship between the operator W1 and the object, for example with speech such as "there is a harvest target on the right side".
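A minimal sketch of the region-dependent sound selection; the region boundaries and sound names are illustrative assumptions, and actual playback is omitted:

```python
def select_notification_sound(box_center, frame_size=(1280, 720)) -> str:
    """Pick a notification sound name depending on where the harvest target
    appears in the image: near the center, or toward the left/right/top/bottom."""
    cx, cy = box_center
    w, h = frame_size
    dx, dy = cx - w / 2, cy - h / 2
    if abs(dx) < w * 0.10 and abs(dy) < h * 0.10:
        return "chime_center"            # object roughly in the center of the view
    if abs(dx) >= abs(dy):
        return "chime_right" if dx > 0 else "chime_left"
    return "chime_down" if dy > 0 else "chime_up"


# Example: object OB1 centered at (482, 259) in a 1280x720 frame
print(select_notification_sound((482, 259)))   # -> "chime_left"
```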

The operator terminal 1 may also control the sound output from the speaker 82 (particularly a multi-speaker or stereo configuration including a plurality of speakers) so that it differs depending on the direction of the object as seen from the operator W1. For example, when an object is present on the right side as seen from the operator W1, the sound is played from the right speaker, and when an object is present on the left side, from the left speaker.

When the output is sound only, without display, one control example is to output, for a crop located in the direction of the camera 6 of the operator terminal 1 (roughly corresponding to the direction in which the head of the operator W1 is facing), a sound indicating whether or not that crop is a harvest target. Alternatively, the operator terminal 1 may determine, based on the image of the camera 6, a state in which the operator W1 has extended a hand toward the object, a state in which the object is being held by the hand, or a state in which the operator W1 is close to the object, and then output a sound indicating the harvest target.

[ work assistance-vibration, light ]

Next, an example of output in the case where the output mode of the work assistance information is vibration or light will be described. Based on the selection result information, the operator terminal 1 outputs a predetermined vibration or predetermined light indicating that at least one harvest target exists in the image. The operator terminal 1 may control the type or intensity of vibration, the type or intensity of light emission, and the like so that they differ depending on the position or direction of the harvest target or its distance from the operator. The operator terminal 1 may, for example, change the vibration state or light emission state depending on whether the operator W1 is approaching or moving away from the object. The type of vibration or light emission may be specified, for example, by an on/off pattern. In the case of light emission, the emission duration, blinking, amount of light, and the like can also be used for distinction. When the field is dark, conveying information by light is also effective.

The output modes of display, sound, vibration, light, and the like may be used in combination. When a speaker device or a light emitting device is installed in the field, sound, vibration, light, or the like may be output from that device in cooperation with the operator terminal 1, instead of from the operator terminal 1 itself. A laser pointer device or the like can be used as the light emitting device, and may also be installed within the field. The light emitting device may emit light such as a laser beam toward the harvest target to indicate the object, and the operator W1 can recognize the harvest target from the light.

[ field ]

Fig. 22 shows the structure of a map of a certain field (field A) viewed from above. In relation to the pest and disease identification assistance function 43 described above, the operator terminal 1 may display on the display surface 5 an image showing an area where a pest or disease is present or an image showing an area where pesticide should be sprayed. In particular, the operator terminal 1 can display a simple map of the field as shown in fig. 22 on the display surface 5 and display on the map images showing the pest-affected areas and the areas to be sprayed with pesticide. The area 221 shows an example of a pesticide spraying area corresponding to the area of the objects in which pests were detected. The information on the area 221 is included in the coping assistance information of fig. 6 described above.

The positions W1, W2, and W3 show examples of positions where the operator W1 is located. The directions d1, d2, and d3 show examples of the shooting direction of the camera 6. The work assistance can be performed at any position and in any direction.

As another way of using the system, the operator W1 first walks along each aisle of the field and captures images of the crops on each ridge with the camera 6. The operator terminal 1 and the server 2 can then aggregate the acquired data, perform the recognition processing, and create a map showing the state of the crops in the field based on the recognition results. The map describes the positions and maturities of the harvest targets, the pest and disease states, and the like.

[ Effect and the like ]

As described above, the field work assistance system of embodiment 1 provides the following effects. First, the work assistance function 41 for harvesting or shipment can give the operator W1 concrete and easily understandable harvest or shipment instructions. Therefore, even when the operator W1 is inexperienced, judgments such as the screening at harvesting or shipment can be made easily, approaching the skill and experience level of an experienced person. The operator W1 can work easily based on the work assistance output, so the work load is reduced and the work is performed efficiently. The work assistance output can also contribute to improving the skill of inexperienced workers. Since work such as harvesting can be performed even by inexperienced workers, the agricultural operator can achieve cost reduction and higher efficiency of the agricultural operation.

The prediction assistance function 42 can achieve cost reduction, improvement in prediction accuracy, and the like. Conventionally, in JA and similar organizations, predicting the yield and the like requires much time, effort, and cost, such as visiting and interviewing each farmer. According to the prediction assistance function 42, prediction information such as the predicted harvest amount can be provided based on data collection and processing over communication, so the time, effort, and cost conventionally required can be reduced. According to the prediction assistance function 42, data can be collected in a short time and the prediction can be based on the actual growth state of the objects in the field, so the prediction accuracy can be improved.

The pest and disease identification assistance function 43 can reduce costs and prevent omissions in the work. By combining the recognition result concerning the pest and disease state with positional information, the pest and disease identification assistance function 43 can efficiently provide coping assistance such as pesticide spraying. With this assistance, the agricultural operator can spray a predetermined amount of pesticide at predetermined positions in the field, can prevent omissions in pesticide spraying, and can reduce pesticide cost and spraying man-hours, realizing cost reduction and higher efficiency.

The maturity or grade preferred when harvesting or shipping crops varies depending on the transport distance, the number of days required for shipment, and the like. According to this system, the work assistance output according to maturity and the like can be performed in accordance with the work instruction of the instructor W2. In agriculture, the state of a field changes every day depending on the type of crop and each individual plant. This system can perform work assistance that takes such characteristics of the crop into account.

In a system such as the prior art example of patent document 1, the operator on site needs to judge for themselves whether or not a crop should be harvested, so an inexperienced person may find such judgments difficult. According to the system of embodiment 1, even an inexperienced operator W1 can easily judge which crops should be harvested.

[ modified examples ]

The following modifications of the system of embodiment 1 are possible. The operator terminal 1 can use the AI function 20 to determine which objects face the aisle where the operator W1 is currently located, and can select the work assistance information displayed on the display surface 5 accordingly. As a result, frame images are displayed on the display surface 5 only for objects that the operator W1 can harvest from the current aisle; frame images of harvest targets facing an adjacent aisle, for example, are not displayed. The operator W1 can therefore perform the harvesting operation efficiently.

Fig. 23 shows the structure of a map of a certain field viewed from above in a modification. In this field, camera devices 500 (for example, 4 cameras) are installed at predetermined fixed positions. Speaker devices and the like, not shown, are also installed alongside the camera devices 500. The operator terminal 1 of the operator W1 cooperates with each camera device 500 and each speaker device. Each camera device 500 captures images in a predetermined direction and transmits them to the operator terminal 1 or the server 2. The operator terminal 1 or the server 2 recognizes the objects using these images in the same manner as described above and outputs work assistance information based on the recognition result. For example, the harvesting work assistance information is displayed on the display surface 5 of the operator terminal 1 of the operator W1 located at the position W1.

The harvesting work assistance information in this case may be information that notifies or guides the operator W1 at the position W1 to the positions where the harvest targets are located. The positions indicated by asterisks show the locations of the harvest targets. The operator terminal 1 may display images indicating the positions of the individual harvest targets on the map of the field. As described above, the notification of the positions of the harvest targets and the like can be output by various means such as display, sound, vibration, and light. When a plurality of devices (for example, a plurality of speaker devices) are installed in the field, the position of the object can be conveyed to the operator W1 by controlling which device outputs, according to the positions of those devices. As described above, the imaging means (the camera 6 or the camera device 500) may thus be separated from the recognition means (the AI function 20).

The camera device 500 may also capture images that include the operator W1 as a subject. Using the camera 6 or the camera device 500, the operator terminal 1 can detect the crop and the hand of the operator W1 in the image, determine operations such as harvesting, and measure them as actual results. For example, when the operator W1 picks and harvests an object recognized in the image, the object no longer appears in the image, from which it can be estimated that the object has been harvested.

The system of embodiment 1 is configured to include the server 2, but is not limited thereto and may be realized by the operator terminal 1 alone. The detection means for the position of the operator W1 is not limited to a GPS receiver; a beacon, an RFID tag, an indoor positioning system, or the like may be applied. The camera for acquiring images may be a camera provided on a helmet, work clothes, or the like worn by the operator W1. The operator terminal 1 may have dust-proofing, water-proofing, heat resistance, heat dissipation, and other functions suited to agricultural work. The image display and the audio output are preferably designed to be common and independent of the language of each country. The AI function 20 may perform the recognition processing in consideration of the lighting conditions on site (e.g., weather, morning, evening, etc.). The system may include only a part of the work assistance function 41, the prediction assistance function 42, and the pest and disease identification assistance function 43 of embodiment 1, and only a part of the functions may be made available according to user settings.

(embodiment 2)

An on-site work assistance system according to embodiment 2 of the present invention will be described with reference to fig. 24 to 31. The basic configuration of embodiment 2 is the same as that of embodiment 1, and the following describes the components of embodiment 2 that differ from embodiment 1. In embodiment 2, a function (sometimes described as an actual results detection function) is added to embodiment 1. The actual results detection function estimates and counts the number of objects at harvesting or shipment based on image recognition, and grasps this number as the actual results. The part that performs the work assistance output in embodiment 2 is the same as in embodiment 1. The following description covers this function for the case of harvesting, but the same function can also be applied to shipment.

The prediction assistance function 42 and the prediction system 202 in fig. 3 described above predict the predicted harvest amount and the like, but in order to improve the accuracy of the prediction it is effective to grasp and use the actual harvest amount, for example the actual number of harvests. Therefore, in embodiment 2, as shown in fig. 24, an actual results detection unit 30 (in particular a harvest detection unit) corresponding to the actual results detection function is added to the system of embodiment 1. The actual results detection unit 30 estimates and counts the number of harvested objects (for example, the number of harvests) during work such as harvesting performed by the operator W1 (fig. 1), based on the recognition result 301 of the image from the object recognition unit 12 (fig. 4 or fig. 5) and the image 302 from the imaging unit 11. The actual results detection unit 30 stores and outputs the counting result, that is, information including the number of harvests, as the harvest actual results 306. The prediction assistance function 42 and the prediction system 202 described above can perform prediction processing, such as of the predicted harvest amount, using the harvest actual results 306 detected by the actual results detection unit 30. The system may output information such as the harvest actual results 306 grasped by this function to the instructor W2 or the host system in fig. 3, or may output it via the operator terminal 1 (for example, displaying the number of harvests on the display surface 5).

Thus, the field work assistance system of embodiment 2 can efficiently measure the number of objects along with the work of harvesting or shipping the crops, in association with the work assistance output (for example, fig. 13), and can grasp this as the actual results. Compared with conventional on-site methods of measuring the number of harvests, using the functions of this system involves a small workload and enables accurate measurement. Conventionally, when grasping the harvest results in the field, the workload and cost often mean that measurement stops at an approximate count of the harvested produce; typical measuring methods include weighing a box that collects many harvested items and counting the number of items per box. In contrast, according to the function of this system, the number of harvests and the like can be measured automatically along with operations such as harvesting performed in accordance with the work assistance output. Even in a system without the prediction assistance function 42 and the prediction system 202, grasping the actual results by means of the actual results detection function is useful.

[ actual performance detection function ]

Fig. 24 shows the configuration of the actual results detection unit 30, which is the characteristic part of the field work assistance system of embodiment 2. The actual results detection unit 30 is installed in at least one of the operator terminal 1 and the server 2 of fig. 1. The actual results detection unit 30 may be installed as a part of the prediction assistance function 42 and the prediction system 202 of fig. 3 and 5, or as an independent actual results detection function and actual results detection system that cooperates with them. In the present example, the actual results detection unit 30 is realized by program processing or the like in the server 2. The actual results detection unit 30 includes a harvest detection unit used at harvesting and a shipment detection unit used at shipment; fig. 24 shows the case of the harvest detection unit.

The actual results detection unit 30 receives the recognition result 301 (recognition result information) output from the object recognition unit 12 in fig. 4 or 5 and the image 302 output from the imaging unit 11. The image 302 and the recognition result 301 correspond to the same time point. The actual results detection unit 30 may also receive and use the work instruction 310 (work instruction information), including information on the harvest target, output from the instruction system 201, the object selection unit 13, or the like in fig. 4. As shown in fig. 12, the recognition result 301 includes information on each object in the image. As described above, the work instruction information 310 includes information on the harvest target, for example "the harvest target is tomatoes with a maturity of 3 or more", in other words information for selecting and narrowing down the targets. The selection result information output from the object selection unit 13 (fig. 4 or 7) may also be used; in this case, as in fig. 12 and the like, the selection result information indicates the objects already selected according to the harvesting work instruction and the like.

The actual results detection unit 30 shown in fig. 24 includes, as more detailed processing units realized by program processing or the like, a harvest object detection unit 31, an operator detection unit 32, a distance calculation unit 33, a harvest determination unit 34, and a condition setting unit 35. The actual results detection unit 30 repeats the same processing for each image at each time point in the time series (the images output by the imaging unit 11 in fig. 4).

The outline of the processing flow performed by the actual results detection unit 30 is as follows. First, in the first step, the harvest object detection unit 31 uses the information of the recognition result 301 and the work instruction 310 to detect whether a harvest object has been recognized in the image, and if so, detects the harvest object. The harvest object here is the work object 3 to be harvested by the operator W1 in accordance with the work instruction and the work assistance output. The number of harvests is counted for each type of harvested crop (for example, tomatoes). When there are a plurality of harvest objects in the image, the harvest object detection unit 31 detects each of them. The harvest object detection unit 31 outputs harvest object information 303 as the detection result information. The harvest object information 303 includes the ID of each harvest object and its position information in the image.

Furthermore, the harvest object detection unit 31 may narrow down the detected harvest objects by using the harvest target information of the work instruction 310 (or the selection result information from the object selection unit 13) in addition to the recognition result 301. For example, when the harvest target information designates "tomatoes with a maturity of 3 or more", the harvest object detection unit 31 detects, among the objects in the image, the harvest objects that match this designation. When such detailed information from the work instruction information 310 or the like is used, the accuracy and efficiency of the actual results detection can be expected to improve further.

In the second step, the operator detection unit 32 uses the input image 302 to detect whether a body part of the operator W1 such as a hand (described as an operator object) is included in the image 302 at the same time point as the recognition result 301 of the first step, and if so, detects the operator object. The operator detection unit 32 detects the operator object based on image analysis or machine learning, as in the AI function 20 (fig. 4). The operator object to be detected is not limited to a hand or an arm; it may be a work glove, a tool such as scissors, or a machine used in work such as harvesting. The operator objects can be specified in advance in the program or the like constituting the actual results detection unit 30. The operator detection unit 32 outputs operator object information 304 as the detection result. The operator object information 304 includes the ID of the operator object and its position information in the image. As a modification, the AI function 20 (fig. 4) may be given the function of detecting the operator object from the image, and the recognition result 301 from the object recognition unit 12 may include the recognition result information of the operator object; in this case, the operator detection unit 32 may detect the operator object based on the input recognition result 301.

When the harvest object is detected in the first step and the operator object is detected in the second step, in the third step the distance calculation unit 33 calculates the distance DST between the harvest object and the operator object in the image using the input harvest object information 303 and operator object information 304 (fig. 25 and the like described later). The distance DST is a parameter for determining the harvesting operation and the harvested state, based on the distance or overlap between the harvest object and the operator object. The distance calculation unit 33 outputs distance information 305 including the calculated distance DST.

In the fourth step, the harvest determination unit 34 determines, based on the input distance information 305 and predetermined conditions, whether the harvest object has been harvested (described as "harvested") by the operator object such as a hand, and counts the number of harvests based on the determination result. The harvest determination unit 34 uses conditions such as thresholds preset by the condition setting unit 35 when performing the determination processing. In this example, the thresholds are a distance threshold TD, a time threshold TT1 (first time threshold), and a time threshold TT2 (second time threshold). The thresholds of the conditions may be changed according to the determination processing method and may be user-settable.

The fourth step includes the following steps A, B, C, and D in more detail. First, in step A, the harvest determination unit 34 determines whether the distance DST between the harvest object and the operator object is equal to or less than the predetermined distance threshold TD (DST ≤ TD). In other words, this determines whether the operator object such as a hand has come sufficiently close to the harvest object.

When the distance DST is equal to or less than the distance threshold TD in step A, the harvest determination unit 34 further determines in step B whether the state in which the distance DST is small continues for the predetermined time threshold TT1 or more (T1 ≥ TT1, where the time T1 corresponds, for example, to a number of image frames). This determination corresponds to determining whether the harvest object is being held by the operator object such as a hand. Requiring a certain duration (TT1) eliminates cases where the hand and the object merely overlap momentarily.

When the state continues for the time threshold TT1 or more in step B, the harvest determination unit 34 further determines in step C whether the harvest object is no longer recognized in the time-series images and whether this unrecognized state continues for the second time threshold TT2 or more (T2 ≥ TT2, where the time T2 corresponds, for example, to a number of image frames). This determination exploits the fact that, when the harvesting operation is performed, the harvest object moves out of the image together with the operator object such as the hand and is therefore no longer recognized.

When the unrecognized state of the harvest object continues for the predetermined time or more in step C, the harvest determination unit 34 determines that the harvest object has been harvested ("harvested"). In step D, the harvest determination unit 34 increments the number-of-harvests counter by 1. The harvest determination unit 34 stores and outputs the actual results information 306 including the number of harvests counted up to that point. Thereafter, the processing returns to the first step and is repeated in the same manner.

The period during which the actual results detection unit 30 counts the number of harvests can be, for example, a period whose start time and end time are designated by the user, or a period defined by an operation that turns the function on and off.

As described above, the actual results detection unit 30 determines the harvesting operation and counts the number of harvests by judging the distance or overlap between the harvest object and the operator object such as a hand in the image. In short, in the fourth step of this determination processing method, the harvest determination unit 34 determines that the harvest object has been harvested when the distance DST between the harvest object and the operator object is equal to or less than a predetermined value, that state continues for a predetermined time or more, and thereafter the state in which the harvest object is not recognized continues for a predetermined time or more.
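A minimal sketch of the per-object counting logic of steps A to D, assuming frame-by-frame input and using illustrative threshold values for TD, TT1, and TT2 (expressed here in pixels and frames); the class and method names are not part of the described system:

```python
import math


class HarvestCounter:
    """Count harvests for one harvest object following steps A-D:
    A) hand close enough (DST <= TD), B) closeness lasts >= TT1 frames,
    C) object then unrecognized for >= TT2 frames, D) increment the count."""

    def __init__(self, td_px=80.0, tt1_frames=15, tt2_frames=30):
        self.td = td_px            # distance threshold TD (pixels)
        self.tt1 = tt1_frames      # first time threshold TT1
        self.tt2 = tt2_frames      # second time threshold TT2
        self.close_frames = 0      # T1: consecutive frames with DST <= TD
        self.missing_frames = 0    # T2: consecutive frames without the object
        self.held = False          # step B satisfied
        self.harvest_count = 0

    def update(self, object_pos, hand_pos):
        """Process one frame; positions are (x, y) tuples, or None when not detected."""
        if object_pos is not None:
            self.missing_frames = 0
            if hand_pos is not None:
                dst = math.dist(object_pos, hand_pos)            # distance DST (step A)
                self.close_frames = self.close_frames + 1 if dst <= self.td else 0
                if self.close_frames >= self.tt1:                # step B
                    self.held = True
            else:
                self.close_frames = 0
        elif self.held:
            self.missing_frames += 1                             # step C
            if self.missing_frames >= self.tt2:
                self.harvest_count += 1                          # step D
                self.held = False
                self.close_frames = self.missing_frames = 0
        return self.harvest_count
```

In a full implementation, one such counter would presumably be kept per detected harvest object ID, and the per-object counts aggregated into the harvest actual results 306.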

As described above, according to the actual results detection function of embodiment 2, the number of objects can be measured efficiently and grasped as actual results along with the work of harvesting or shipping the crops, in association with the work assistance output. Furthermore, the actual results detection function makes it possible to collate, and determine differences between, the information of work instructions such as harvest instructions or of the work assistance output and the actual results information such as the number of harvests. Based on such a comparison, it is possible to grasp, for example, whether the operator W1 actually performed the harvesting operation on the harvest targets indicated by the work assistance output.

[ example of image ]

Hereinafter, a specific example of the processing for the harvest determination will be described using the image examples shown in fig. 25 to 29. Fig. 25 shows a specific example of the detection results of the harvest object and the operator object for an image at a certain time point. In the image, the harvest object 251 and the operator object 252 have been detected. Here, the position of the harvest object is defined as a position within the two-dimensional image (x on the horizontal axis, y on the vertical axis), namely the center position of the object region. The position Pt of the harvest object 251 is (tx, ty). The region of the harvest object 251 based on the recognition result 301 is shown by a rectangular frame. The operator object 252 is the right hand of the operator W1 and is schematically illustrated as a transparent region for the sake of explanation. Similarly, the position of the operator object is defined as the center position of its region in the two-dimensional image. The position Pw of the operator object 252 is (wx, wy). The region of the operator object 252 based on the image 302 is shown by a dotted-line frame. An image may contain no harvest object or may contain a plurality of harvest objects. In this example, one harvest object 251 corresponding to "tomatoes with a maturity of 3 or more" is included in the image and detected.

The operator W1 is assumed to extend the right hand to pick the harvest object 251 as the harvesting operation corresponding to the work assistance output (for example, fig. 13). Such a hand is one example of the operator object to be detected, and the operator detection unit 32 may detect it from its shape, color, and the like. The detection target is not limited to this: when the operator W1 wears a work glove, the work glove may be the detection target, and when the operator W1 holds and uses a tool such as scissors or operates a work machine, the tool or machine may be the detection target. For example, a predetermined mark may be attached to the work glove or the like in advance and used as the detection target; the operator detection unit 32 may then detect the mark as the operator object. The mark may be a code such as a barcode. Using these means, the operator object can be expected to be detected more easily from within the image.

A plurality of operator objects, such as the left hand and right hand of the operator W1, may be detected in one image. In this case, the system of embodiment 2 may detect each operator object and apply the same processing to each of them. For example, when both hands are included in the image, the actual results detection unit 30 may perform the determination using the hand whose distance DST to the harvest object is smaller.

The distance DST (in particular DST1) in fig. 25 is the distance between the position Pt (tx, ty) of the harvest object 251 in the image and the position Pw (wx, wy) of the operator object 252. The distance DST can be calculated, for example, as the Euclidean distance in the image: DST = √((tx − wx)² + (ty − wy)²).

Fig. 26 is an example of an image at a time point later than that of fig. 25, and in particular shows a case where the head and field of view of the operator W1 are almost fixed. In this image, the position Pt of the harvest object 251 has not changed, but the position Pw of the operator object 252 has come closer to Pt. The distance DST becomes smaller, here DST2 (DST2 < DST1), and DST2 is within the distance threshold TD. For comparison, an example of the distance threshold TD is also illustrated. The distance threshold TD can be set in consideration of, for example, the standard size of the harvest object or of a hand. In this example, a part of the operator object 252 overlaps the harvest object 251 on the side closer to the operator W1 in the depth direction (z) of the image.

Fig. 27 is an example of an image at a still later time point, and in particular shows a state where the operator object 252, i.e., the hand, almost completely overlaps the harvest object 251 and the distance DST (here DST3) is close to 0. This state corresponds, for example, to the state in which the operator W1 has grasped the harvest object 251 by hand. In this state, the harvest object 251 may no longer be recognized because it is hidden by the operator object 252 such as the hand. When the harvest object 251 is on the near side of the hand, it is still likely to be recognized; when it is hidden behind the operator object 252 such as the hand, it may not be recognized.

Fig. 28 further shows an example of an image in a state where the harvest object 251 and the operator object 252 such as the hand have moved within the image from the state of fig. 27 as a result of the harvesting operation of the operator W1. The operator W1 moves the hand holding the harvest object 251 toward the near side (toward the lower right in the image).

Fig. 29 further shows an example of an image in a state where, as a result of the harvesting operation of the operator W1, neither the harvest object 251 nor the operator object 252 is detected in the image. When the state in which the harvest object 251 is not detected in the image, as in fig. 28 or 29, continues for the predetermined time or longer, it is determined that the harvest has been performed, as described above.

[ modification-determination of harvesting behavior ]

Fig. 30 shows an example of an image used when the harvest determination is performed in a modification. The harvest determination by the harvest determination unit 34 in fig. 24 is not limited to the method based on the distance DST described above. In this modification, the harvest determination unit 34 determines that the object has been harvested based on the recognition of the harvesting action, which is a specific motion of the operator's body. The specific motion is defined in advance as a detection target in the operator detection unit 32 or the AI function 20. For example, when the harvest object is a tomato, the operator W1 holds the tomato by wrapping it with a hand or a work glove during the harvesting operation. In the image example of fig. 30, the tomato that is the harvest object 251 is held so as to be enclosed by the operator object 252 such as a work glove, and the harvest object 251 itself is not recognized. In this modification, a discriminator 34a is created in advance by machine learning, image analysis, or the like that takes images as input, so that the image portion corresponding to such a specific motion can be identified and discriminated from the image. For example, a plurality of images 320 captured during the harvesting operation are input to the discriminator 34a as teaching information for machine learning.

In the configuration of fig. 30, the performance detection unit 30 does not need the distance calculation unit 33 of fig. 24. The harvest determination unit 34 of fig. 30 includes the discriminator 34a capable of discriminating the harvesting operation. The harvest determination unit 34 receives an image 302 as in the illustrated example, and detects the harvesting operation, i.e., the specific motion of the operator object 252, in the image 302 by means of the discriminator 34a. The harvest determination unit 34 determines that the object has been harvested when the harvesting operation is detected in the image 302, or when, in addition, a condition is satisfied such as the state in which the harvest object 251 cannot be recognized due to the harvesting operation continuing for a predetermined time or longer. When this method is used, the subject learned or discriminated as the harvesting operation may be a single hand (right or left), both hands, or a work tool; each of these can be learned and discriminated from the images provided as teaching data.
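One possible realization of such a discriminator 34a is a binary image classifier that flags frames containing the specific harvesting motion (for example, a gloved hand enclosing a tomato). The sketch below is only an illustration of this idea; the network architecture, class labels, and decision threshold are assumptions for the example, and the actual discriminator 34a may be built differently.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical discriminator in the spirit of 34a: a binary classifier over frames.
# Architecture, labels and threshold are illustrative assumptions.

def build_discriminator() -> nn.Module:
    model = models.resnet18(weights=None)            # a pretrained backbone could also be used
    model.fc = nn.Linear(model.fc.in_features, 2)    # classes: {harvesting motion, other}
    return model

@torch.no_grad()
def is_harvesting_motion(model: nn.Module, frame: torch.Tensor, threshold: float = 0.8) -> bool:
    # frame: preprocessed image tensor of shape (3, H, W)
    model.eval()
    logits = model(frame.unsqueeze(0))               # add a batch dimension
    prob = torch.softmax(logits, dim=1)[0, 0]        # probability of "harvesting motion"
    return float(prob) >= threshold
```

In use, the per-frame decision would be combined with the time condition described above (for example, requiring the harvesting motion or the disappearance of the harvest object 251 to persist for a predetermined number of frames) before counting a harvest.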

[ Modification - relative positional relationship ]

As another modification, fig. 31 shows an example of processing related to the harvest determination performed by the harvest determination unit 34 of fig. 24. In this modification, the relative positional relationships between a plurality of objects within the image are used. In the image of fig. 31 (A), as an example, the harvest object A, the non-harvest objects B and C, and the operator object D (the left hand) are recognized. The recognition is not limited to these objects; other arbitrary objects may also be recognized from the image based on image analysis or the like. The positions of the four objects are shown by the points A, B, C and D. Focusing on the harvest object A, its relative positional relationship with each of the other objects (B, C, D) is shown by broken lines. For example, the harvest object A (point A) has a relative positional relationship with the non-harvest object B (point B) that includes the distance Dab.

Fig. 31 (B) shows a state where, starting from the state of (A), the operator W1 grips the harvest object A with the hand (operator object D) and moves it. Here, the head position and the field of view of the operator W1 are assumed to be substantially stationary between the images (A) and (B). The position of the harvest object A in the image has moved; the movement is indicated by a dotted arrow. With this movement, the relative positional relationship between the harvest object A and each of the other objects changes, for example the distance Dab to the non-harvest object B. To determine the harvesting of the harvest object A, the harvest determination unit 34 determines whether the harvest object A has moved sufficiently far from its original position (for example, point A). For example, the harvest determination unit 34 determines that the object has been harvested when the change in the distance Dab between the harvest object A (point A) and the non-harvest object B (point B), i.e., the difference between before and after the movement, is equal to or greater than a predetermined threshold value. If only a single relative positional relationship such as the distance Dab is used for the determination, the non-harvest object B itself may have moved; therefore, also determining another relative positional relationship such as the distance Dac makes the determination more reliable.

As described above, the harvest determination unit 34 can determine that the harvest object A has been harvested (i.e., detect the corresponding harvesting operation) by determining the changes in the relative positional relationships between the objects in the image, and can count the number of harvested objects. When the harvest object A cannot be recognized in the image, the position of the hand (operator object D) gripping the harvest object A may be used instead for this determination. When the head of the operator W1 moves over time, the content of the image at each time point also changes, and the position of each object in the image changes accordingly. Even in this case, since relative positional relationships between objects are used, the harvest can be determined in the same manner as described above.
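The relative-position check of fig. 31 can be outlined roughly as follows. This is only a sketch under simplifying assumptions: object positions are treated as 2D points already obtained by recognition, and the function names, the change threshold, and the requirement that at least two relative distances change are assumptions for the example.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

# Illustrative outline of the relative-position modification (fig. 31): harvesting of
# object A is inferred when its distances to several reference objects (B, C, ...)
# change by more than a threshold between two frames. Names and values are assumptions.

def relative_distances(target: Point, references: Dict[str, Point]) -> Dict[str, float]:
    # e.g. for harvest object A: {"B": Dab, "C": Dac}
    return {name: math.dist(target, pos) for name, pos in references.items()}

def harvested(before: Dict[str, float], after: Dict[str, float],
              change_threshold: float = 60.0, min_agreeing: int = 2) -> bool:
    # Require at least `min_agreeing` relative distances to change by the threshold or
    # more, so that movement of a single reference object (e.g. B) is not mistaken
    # for a harvest of A.
    changed = [name for name in before
               if name in after and abs(after[name] - before[name]) >= change_threshold]
    return len(changed) >= min_agreeing
```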

[ Modification - work support information ]

As a modification of embodiment 2, the performance detection unit 30 may similarly perform the determination of harvesting or the like by using the work support information 4 output from the output control unit including the display control unit 14 and the like of fig. 4. For example, the performance detection unit 30 may determine that the object has been harvested based on the distance in the image between the work support information 4 (for example, the image 132 of fig. 13, such as a frame surrounding a harvest object) and the operator object.
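As a rough illustration of this idea, the check below treats the displayed work support information 4 as a rectangular region on the screen and tests whether the operator object (hand) has entered that region; the rectangular representation, the margin, and the function name are assumptions for the example.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in image pixels

# Hypothetical check for the work-support-information modification: the operator
# object (hand) entering the screen region where the work support information 4
# (e.g. a frame such as image 132 surrounding a harvest object) is drawn.

def hand_inside_assist_frame(hand_pos: Tuple[float, float], assist_box: Box,
                             margin: float = 10.0) -> bool:
    x, y = hand_pos
    x0, y0, x1, y1 = assist_box
    return (x0 - margin) <= x <= (x1 + margin) and (y0 - margin) <= y <= (y1 + margin)
```

If this check holds and the harvest object then remains undetected for a predetermined time, the same counting logic as in the earlier sketch could be applied.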

The present invention has been specifically described above based on the embodiments, but it is not limited to these embodiments, and various modifications can be made without departing from the scope of the present invention.

Description of the reference numerals

1 … operator terminal (intelligent device), 1a … portable terminal, 1B … wearable terminal, 2 … server, 3 … work object (object, crop), 4 … work auxiliary information, 5 … display screen, 6 … camera, 61 … built-in camera, 62 … external camera, 40 … management function, 41 … work auxiliary function, 42 … AI function, W1 … operator, W2 … indicator.
