Method and device for monitoring industrial process steps

Document No.: 348168  Publication date: 2021-12-03

Reading note: This technology, "Method and device for monitoring industrial process steps", was designed and created by Thomas Neumann, Daniel Masek and Florian Weiss on 2020-02-26. Main content: The invention relates to a method for monitoring an industrial process step of an industrial process by means of a monitoring system, wherein the method comprises the following steps: providing a machine learning system of the monitoring system, which machine learning system contains, by means of at least one machine-trained decision algorithm, a correlation between digital image data as input data and a process state of an industrial process step to be monitored as output data; recording digital image data by means of at least one image sensor of at least one image recording unit of the monitoring system; determining at least one current process state of the industrial process step by a decision algorithm of the machine learning system, namely by generating, on the basis of the trained decision algorithm, at least one current process state of the industrial process step as output data of the machine learning system from the recorded digital image data as input data of the machine learning system; and monitoring the industrial process step by generating a visual, acoustic and/or tactile output in relation to the at least one determined current process state by means of the output unit.

1. A method for monitoring industrial process steps of an industrial process by means of a monitoring system (1), wherein the method comprises the steps of:

-providing a machine learning system of the monitoring system (1), which machine learning system contains, by means of at least one machine-trained decision algorithm (131, 311), correlations between digital image data as input data (D110, D120) and process states of industrial process steps to be monitored as output data (D131, D311);

-recording the digital image data (D110, D120) by means of at least one image sensor (110, 120) of at least one image recording unit of the monitoring system (1);

-determining at least one current process state (D131, D311) of the industrial process step by the decision algorithm (131, 311) of the machine learning system by: generating at least one current process state of an industrial process step from the recorded digital image data as input data (D110, D120) of the machine learning system as output data (D131, D311) of the machine learning system based on a trained decision algorithm (131, 311); and

-monitoring the industrial process step by generating a visual, acoustic and/or tactile output in relation to at least one determined current process state (D131, D311) by means of an output unit (140).

2. The method according to claim 1,

characterized in that

the machine learning system comprises an artificial neural network as a decision algorithm (131, 311).

3. The method according to claim 1 or 2,

characterized in that

the digital image data (D110, D120) are recorded by means of at least one digital image sensor (110, 120) of an image recording unit arranged on at least one mobile device (100) carried by a person participating in the industrial process step, and are transmitted to the machine learning system.

4. The method according to any one of the preceding claims,

characterized in that

learning, by a training module (312) of the machine learning system, one or more parameters (D312) of the decision algorithm (131, 311) based on the recorded digital image data (D110, D120) in a training mode, and/or determining, by the decision algorithm (131, 311) of the machine learning system, at least one current process state (D131, D311) of the industrial process step in a production mode.

5. The method according to any one of the preceding claims,

characterized in that

at least one current process state (D131) of the industrial process step is determined by a decision algorithm (131) executed on at least one mobile device (100) carried by a person participating in the industrial process step.

6. The method according to claim 5,

characterized in that

transmitting the recorded digital image data (D110, D120) to a data processing facility (300) accessible in a network (200), wherein one or more parameters (D312) of the decision algorithm (131) are learned by a training module (312) of the machine learning system executing on the data processing facility (300) based on the recorded digital image data (D110, D120), and subsequently the parameters (D312) of the decision algorithm (131) are transmitted by the data processing facility (300) onto a mobile device (100) carried by a person and are a basis for the decision algorithm (131).

7. The method according to any one of the preceding claims,

characterized in that

transmitting the recorded digital image data (D110, D120) to a data processing facility (300) accessible in a network (200), wherein the at least one current process state (D131, D311) of the industrial process step is determined by a decision algorithm (311) executed on the data processing facility (300), wherein subsequently in relation to the determined current process state of the industrial process step the output unit (140) is manipulated by the data processing facility (300) to produce a visual, acoustic and/or haptic output.

8. The method according to claim 7,

characterized in that

learning, by a training module (312) of the machine learning system executing on the data processing facility (300), one or more parameters (D312) of the decision algorithm (311) based on the recorded digital image data.

9. The method according to any one of the preceding claims,

characterized in that

a plurality of decision algorithms are stored on the data processing facility (300), which have been learned or are learned independently of one another, wherein one decision algorithm is selected from the plurality of decision algorithms in relation to selection criteria and/or optimization criteria, wherein the selected decision algorithm serves as a basis for determining the current process state.

10. A monitoring system (1) for monitoring an industrial process step of an industrial process, wherein the monitoring system (1) has the following:

-at least one image recording unit having at least one digital image sensor (110, 120) for recording digital image data (D110, D120);

-a machine learning system with at least one machine trained decision algorithm (131, 311) containing correlations between digital image data as input data (D110, D120) of the machine learning system and process states of industrial process steps to be monitored as output data (D131, D311) of the machine learning system;

-at least one computing unit (130, 310) for determining at least one current process state of the industrial process step by means of a decision algorithm (131, 311) executable on the computing unit (130, 310) in such a way that: generating at least one current process state of the industrial process step from the recorded digital image data as input data (D110, D120) of the machine learning system based on a trained decision algorithm as output data (D131, D311) of the machine learning system; and

an output unit (140) which is set up to generate a visual, acoustic and/or haptic output for the person in relation to the at least one determined current process state (D131, D311).

11. Monitoring system (1) according to claim 10,

characterized in that

the machine learning system includes an artificial neural network as a decision algorithm.

12. Monitoring system (1) according to claim 10 or 11,

characterized in that

the monitoring system (1) has at least one mobile device (100) which is designed to be carried by at least one person and on which at least one digital image sensor (110, 120) of the image recording unit can be arranged, so that digital image data (D110, D120) can be recorded, wherein the mobile device (100) is set up to transmit the recorded digital image data (D110, D120) to the machine learning system.

13. Monitoring system (1) according to one of the claims 10 to 12,

characterized in that

the monitoring system (1) has a training mode in which one or more parameters (D312) of the decision algorithm (131, 311) are learned by the training module (312) of the machine learning system on the basis of the recorded digital image data (D110, D120), and/or the monitoring system (1) has a production mode in which the at least one current process state (D131, D311) of the industrial process step is determined by the decision algorithm (131, 311) of the machine learning system.

14. Monitoring system (1) according to one of the claims 10 to 13,

characterized in that

the monitoring system (1) has a mobile device (100) having a computing unit (130, 310), which can be carried by a person participating in the industrial process step, wherein the mobile device is set up to determine at least one current process state of the industrial process step by means of a decision algorithm executed on the computing unit (130, 310).

15. Monitoring system (1) according to claim 14,

characterized in that

the monitoring system (1) has a data processing facility (300) accessible in a network (200), which is set up to receive digital image data recorded by the image recording unit, to learn one or more parameters (D312) of the decision algorithm based on the received digital image data by a training module (312) of the machine learning system executing on the data processing facility (300), and to subsequently transmit the parameters of the decision algorithm by the data processing facility (300) to a mobile device carried by a person.

16. Monitoring system (1) according to one of the claims 10 to 15,

characterized in that

the monitoring system (1) has a data processing facility (300) accessible in a network (200), which is set up to receive digital image data recorded by the image recording unit, to determine at least one current process state of the industrial process step by means of a decision algorithm executed on the data processing facility (300), and to manipulate the output unit (140) in relation to the determined current process state of the industrial process step to produce a visual, acoustic and/or haptic output.

17. Monitoring system (1) according to claim 16,

characterized in that

the data processing facility (300) is furthermore set up to learn one or more parameters of the decision algorithm on the basis of the received digital image data by means of a training module (312) of the machine learning system executing on the data processing facility (300) and to use these parameters as a basis for the decision algorithm.

18. Monitoring system (1) according to one of the claims 10 to 17,

characterized in that

the monitoring system (1) is designed to carry out the method according to one of claims 1 to 9.

Technical Field

The invention relates to a method for monitoring an industrial process step of an industrial process by means of a monitoring system. The invention also relates to a monitoring system therefor.

Background

In industrial production, partially manual process steps are still required today, which have to be carried out by a person. Within the scope of quality assurance, manual process steps are required which must be carried out actively by a person in order to check the product with respect to its predetermined features and, if necessary, to archive the checks.

Even in the case of sub-processes in production which still require manual process steps to be carried out by a specialist, it is desirable to check or monitor the manually executed process steps for their correctness in the sense of quality assurance. Errors during the manually executed process steps of the entire industrial process can cause facility shutdowns or damage to the facility in the subsequent automated sub-processes, which requires additional maintenance and assembly time. Furthermore, incorrectly executed process steps are often not discovered until the final quality assurance, which causes a large waste of resources.

From EP 1183578 B1 a device is known which describes an augmented reality system with a mobile device for context-dependently fading in installation instructions.

A method for contextually supporting interactions by means of augmented reality technology is known from EP 1157316 B1. In order to support optimization, in particular during system construction, it is proposed that the specific operating situation be automatically detected and statistically analyzed during commissioning up to maintenance of the automation-technology-controlled system and process.

Networked augmented reality systems are known from US 2002/0010734 A1, which are composed of one or more local sites and one or more remote sites. The remote sites may provide resources that are not available at the local sites, such as databases, high-performance computers, and the like.

From US 6,463,438 B1 a neural-network-based image recognition system is known for detecting cancer cells and classifying tissue cells as normal or abnormal.

Disclosure of Invention

The object of the present invention is to provide an improved method and an improved device, by means of which manual process steps of an industrial process can be monitored with respect to quality assurance.

This object is achieved according to the invention by means of a method according to claim 1 and a corresponding monitoring system according to claim 10. Advantageous embodiments are to be found in the respective dependent claims.

According to claim 1, a method for monitoring an industrial process step of an industrial process by means of a monitoring system is proposed, wherein a machine learning system of the monitoring system is first provided. The machine learning system has at least one machine-trained decision algorithm containing correlations between digital image data as input data and process states of industrial process steps as output data. The machine learning system is thus provided by means of at least one decision algorithm for which digital image data have been learned as input data with respect to their respective process states, so that, on the basis of the principle of learned generalization, the respective process state can be derived and determined from the learned correlations by inputting digital image data.

In order to monitor industrial process steps, in particular process steps which are carried out manually by a person, digital image data are now recorded continuously by means of at least one image sensor of at least one image recording unit. The digital image sensor can be worn by a person on the body and therefore records digital image data, in particular in the line of sight or in the operating range of the person. It can be provided that a plurality of persons participate in the process steps to be carried out, wherein several of these persons can be equipped with an image recording unit. However, it is also conceivable for the line of sight and/or the operating range of one or more persons to be recorded by means of at least one static image recording unit and a corresponding image sensor.

The digital image data recorded by the at least one image recording unit are transmitted via a wired or wireless connection to the machine learning system with its at least one decision algorithm, wherein the learned process state is determined as output data from the digital image data provided as input data to the decision algorithm. Based on the determined process state, the output unit is then actuated such that a visual, acoustic and/or haptic output is presented to a person, in particular a person participating in the process.
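The record → decide → output loop described above can be sketched as follows. This is a minimal Python illustration only; all function names, the state labels and the brightness heuristic are hypothetical stand-ins and not part of the claimed system:

```python
import random

# Hypothetical process states the decision algorithm can output.
STATES = ["ok", "faulty"]

def record_image_data():
    """Stand-in for the image sensor: returns a flat grayscale frame."""
    return [random.random() for _ in range(16)]

def decision_algorithm(image_data):
    """Stand-in for the trained decision algorithm: maps recorded image
    data to one of the learned process states."""
    brightness = sum(image_data) / len(image_data)
    return "ok" if brightness < 0.7 else "faulty"

def actuate_output(state):
    """Stand-in for the output unit: returns the warning to emit, if any."""
    if state == "faulty":
        return "visual/acoustic/haptic warning"
    return None

def monitoring_step():
    frame = record_image_data()
    state = decision_algorithm(frame)
    return state, actuate_output(state)

state, output = monitoring_step()
```

In a real system the loop would run continuously; here a single step suffices to show the data flow from sensor to output unit.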

For example, it is conceivable that, upon an identified process state which characterizes an erroneous state of a process step, a corresponding visual, acoustic and/or haptic warning is output to the person in order to make the person aware of the erroneous process progress.

It is thereby possible, when a process error occurs during the execution of an in particular manual process step, to indicate the faulty process progress to the person immediately, so that this faulty process progress does not propagate further through the entire industrial process, where it could cause greater damage. Rather, the invention allows errors in the execution of manual process steps to be identified already as they occur and to be indicated to the relevant personnel. In addition, in the context of manual quality assurance, the quality assurance process steps can be improved and carried out more efficiently by automatically identifying faulty components for the person responsible for the quality assurance. Furthermore, by means of the invention, manually executed process steps can be archived, whereby filing obligations in the execution of safety-relevant process steps can be met.

The machine learning system with its decision algorithm can be executed, for example, on a computing unit, wherein the computing unit together with the digital image sensor can be arranged in a mobile device and carried by the relevant person. It is also conceivable that the computing unit with the decision algorithm is a component of a larger data processing facility to which the image recording device or the digital image sensor is connected wirelessly or by wire. Of course, a mixture of the two variants, i.e. a central and a decentralized provision of the decision algorithm, is also conceivable.

In one embodiment, the decision algorithm of the machine learning system is an artificial neural network that receives digital image data (in a processed or unprocessed state) as input data via respective input neurons and generates outputs at respective output neurons of the artificial neural network, wherein the outputs are representative of a process state of the industrial sub-process. Because the artificial neural network, with its weighted connections, can be trained in a training method such that it generalizes the learning data, the currently recorded image data can be provided as input data to the artificial neural network, so that the recorded image data can be associated with the respective process state on the basis of what has been learned.
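As an illustration of how input neurons can receive image data and output neurons can represent process states, the following is a minimal sketch of one dense layer followed by softmax. The weights, the two states and the tiny four-pixel input are purely illustrative assumptions, not values from the patent:

```python
import math

def forward(image_vector, weights, biases):
    """One dense layer plus softmax: input neurons receive the (flattened)
    digital image data, output neurons stand for process states."""
    logits = [
        sum(w * x for w, x in zip(row, image_vector)) + b
        for row, b in zip(weights, biases)
    ]
    m = max(logits)                      # subtract max for numeric stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative weights for 4 input pixels and 2 process states.
weights = [[0.5, -0.2, 0.1, 0.0],   # output neuron for "ok"
           [-0.5, 0.2, -0.1, 0.0]]  # output neuron for "faulty"
biases = [0.1, -0.1]

probs = forward([1.0, 0.0, 0.5, 0.2], weights, biases)
state = ["ok", "faulty"][probs.index(max(probs))]
```

A trained network would of course have many layers and learned weights; the point here is only the mapping from image vector to a process-state decision.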

In one embodiment, the digital image data is recorded by at least one mobile device, wherein the mobile device is carried by a person participating in the industrial process step, and wherein one or more digital image sensors are provided on the mobile device. The image data recorded by the mobile device is then transmitted to the machine learning system with its at least one decision algorithm.

Such a mobile device may, for example, comprise or be a glasses construction wearable by a person, wherein at least one image sensor is provided on the wearable glasses construction. The image data is then recorded by means of the glasses construction worn by the person and transmitted to the machine learning system with its decision algorithm. The digital image sensor is arranged on the glasses construction such that it records the line of sight of the person when the glasses construction is worn as glasses by the person. Since the head is usually oriented in the direction of the line of sight, the operating range or section of the person is preferably also recorded when the person looks in the respective direction. Such a mobile device with a glasses construction may be, for example, VR glasses (virtual reality) or AR glasses (augmented reality).

The glasses construction can be connected to the above-described computing unit or can itself have such a computing unit. It is conceivable here for the glasses construction to have a communication module in order to communicate with the computing unit when the computing unit with the knowledge database of the machine learning algorithm is arranged at a remote location. Such a communication module may be wireless or wired and operate according to a corresponding communication standard, such as Bluetooth, Ethernet, WLAN, etc. The communication module can be used to transmit the image data and/or the current process state already determined by means of the decision algorithm.

The output unit for visual, acoustic and/or haptic output is provided on the glasses construction, so that the output unit can generate a corresponding visual, acoustic and/or haptic output for the person. In a corresponding augmented reality system with glasses, it is conceivable to project a corresponding visual indication into the region of the person's line of sight in order to convey the process state determined by the machine learning system as a corresponding output to the person. If, for example, the position of the glasses construction in space and its orientation are known, then in addition to a purely visual output, a location-accurate output can also be made, i.e. the surroundings of the person that can be perceived by the human eye are visually augmented by the respective indication, so that the indication appears directly at the respective object in the surroundings of the person.

It is contemplated that the acoustic output is in the form of speech output, tones, or other acoustic indications. Haptic output is contemplated, for example in the form of vibrations or the like.

The digital image sensor may be, for example, a 2D image sensor for recording 2D image data. In this case, a single digital image sensor is generally sufficient. However, it is also conceivable that the digital image sensor is a 3D image sensor for recording digital 3D image data. Corresponding combinations of 2D and 3D image data are also contemplated. The 2D image information or the 3D image information is then provided as input data to at least one decision algorithm of the machine learning system in order to obtain a process state as output data. By means of the 3D image data, or the combination of 2D and 3D image data, a significantly higher output accuracy is achieved. Thus, in connection with the 3D image data, or the combination of 2D and 3D image data, corresponding (additional) parameters of the physical object, such as size and ratio values, may be detected and taken into account when determining the current process state. Furthermore, additional depth information can be determined with the aid of the 3D image data in the context of the present invention and taken into account when determining the current process state.

The 3D image data can also be used to scan and measure objects and/or to measure distances to them, and this information can be taken into account when determining the current process state. This improves the method, since further information, for example for identifying faulty components, is detected and evaluated, and the process steps for quality assurance are thus improved.

The 3D image sensor may be, for example, a so-called time-of-flight camera. There are other, known image sensors that may be used in the context of the present invention.

Furthermore, it is conceivable that parameters which can be derived directly or indirectly from the 3D image data, such as size, relationship, distance, etc., are at least partially learned together with the image data. In an advantageous embodiment, the decision algorithm therefore includes not only the correlation between the image data and the process state, but additionally also the correlation between the process parameters derived from the 3D image data, or from a combination of the 2D and 3D image data, and the process state. The recognition accuracy can thereby be improved.
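One possible way to combine 2D image features with parameters derived from 3D (depth) data into a single input vector for the decision algorithm could look like the following sketch. The depth map, the size proxy and all names are hypothetical illustrations, not specifics of the method:

```python
def derive_parameters(depth_map):
    """Derive illustrative process parameters from 3D (depth) image data:
    the mean distance to the scene and a rough object-size estimate."""
    flat = [d for row in depth_map for d in row]
    mean_distance = sum(flat) / len(flat)
    # Count pixels closer than the mean as "object" pixels (size proxy).
    object_size = sum(1 for d in flat if d < mean_distance)
    return [mean_distance, object_size]

def build_input(image_features, depth_map):
    """Input vector for the decision algorithm: 2D image features plus
    parameters derived from the 3D image data."""
    return image_features + derive_parameters(depth_map)

# Toy 3x3 depth map (distances in metres) and two 2D image features.
depth = [[1.0, 1.0, 0.4],
         [1.0, 0.5, 0.4],
         [1.0, 1.0, 1.0]]
x = build_input([0.2, 0.8], depth)
```

The decision algorithm would then be trained on such combined vectors, so that the correlations between derived process parameters and process states are learned together with the image correlations.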

However, a smartphone or a tablet computer, for example, is also conceivable as a mobile device with an image sensor. In addition to the image recording unit, the mobile device can also comprise an output unit, so that a person carrying the mobile device can also perceive a corresponding output of the output unit via the mobile device.

The monitoring system can be designed such that in the training mode at least one decision algorithm of the machine learning system learns from the recorded digital image data. In this case, it is conceivable that the decision algorithm of the machine learning system is first trained in a training mode and only then operated in a production mode. However, a combination of training mode and production mode is also conceivable, so that not only is the process state continuously determined as output data by the decision algorithm of the machine learning system, but the decision algorithm (and the knowledge base stored therein) also continues to be trained (for example in the form of a suitable learning method). It is thereby possible to develop the decision algorithm continuously in order to improve the output performance accordingly.

In this case, in a first possible alternative, the decision algorithm of the machine learning system can run as a single instance on the computing unit, so that the production mode and any training mode are executed on the basis of the same knowledge, i.e. by means of the same decision algorithm. In a further alternative, however, it is also conceivable that at least one decision algorithm runs on two separate computing units or is present in one computing unit as at least two instances, wherein the production mode is executed on a first instance of the decision algorithm while the training mode is executed on a second instance. Thus, in production mode, the decision algorithm remains unchanged, while the second instance of the decision algorithm continues to evolve. The second alternative is particularly advantageous if the machine learning system with the decision algorithm is executed on a mobile computing unit. Since a mobile computing unit does not usually provide the computing power for a complex training mode, it is possible to execute only the production mode on the mobile computing unit, while the knowledge database is further learned on a second, remotely located computing unit (e.g. a server facility).
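The two-instance split described above (a frozen production instance whose parameters are later replaced by those of a separately evolving training instance) might be sketched as follows. A single threshold parameter stands in for the decision algorithm; the class, the update rule and the numbers are toy assumptions:

```python
class DecisionAlgorithm:
    """Toy stand-in for the decision algorithm: one learnable threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def predict(self, brightness):
        """Production mode: map a scalar image feature to a process state."""
        return "faulty" if brightness > self.threshold else "ok"

    def train_step(self, brightness, label):
        """Training mode: nudge the threshold toward the labelled sample."""
        if label == "faulty" and brightness <= self.threshold:
            self.threshold -= 0.1
        elif label == "ok" and brightness > self.threshold:
            self.threshold += 0.1

# First instance runs unchanged in production mode (e.g. on the mobile device),
production = DecisionAlgorithm(threshold=0.5)
# while a second instance continues to learn (e.g. on a remote server).
training = DecisionAlgorithm(threshold=0.5)

before = production.predict(0.45)            # frozen threshold still says "ok"
training.train_step(0.45, "faulty")          # only the training instance moves
production.threshold = training.threshold    # later: learned parameters adopted
after = production.predict(0.45)             # updated threshold says "faulty"
```

The key property is that `production` never changes during operation; new knowledge only arrives when its parameters are explicitly replaced.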

It is hereby advantageous that one or more parameters of the decision algorithm are learned by a training module of the machine learning system based on the recorded digital image data in the training mode and/or that at least one current process state of the industrial process step is determined by the decision algorithm of the machine learning system in the production mode.

In a further advantageous embodiment, the at least one current process state of the industrial process step is determined by a decision algorithm executed on at least one mobile device, wherein the mobile device is carried by a person participating in the industrial process step. It is conceivable here that there are also a plurality of mobile devices on which the respective decision algorithm of the machine learning system is executed in each case, so that the respective current process state is determined at each mobile device by means of the executed decision algorithm.

It is conceivable here to transmit the recorded digital image data to a data processing facility accessible in the network, wherein one or more parameters of the decision algorithm are learned on the basis of the recorded digital image data by a training module of the machine learning system executed on the data processing facility, and subsequently the parameters of the decision algorithm are transmitted by the data processing facility to a mobile device carried by the person and serve as a basis for the decision algorithm.

It is thereby possible to continuously train the decision algorithm with the recorded digital image data and subsequently transmit the parameters of the trained decision algorithm to the respective mobile device at regular intervals, in order to continuously improve the basis or knowledge base for the decision algorithm. If the mobile device does not have the computational power required to train the parameters of the decision algorithm on the newly recorded image data, it is advantageous to perform the production mode and the training mode on different devices. Large server facilities are particularly well suited for training such decision algorithms.
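The server-side training with subsequent parameter transfer to the mobile device could, under these assumptions, be sketched as a simple train/serialize/load cycle. The toy "training" (the learned parameter is just a mean) and the JSON payload format are illustrative choices, not prescribed by the method:

```python
import json

def train_on_server(image_batches):
    """Hypothetical server-side training: the 'learned' parameter is simply
    the mean brightness of all recorded frames (a toy stand-in)."""
    flat = [x for batch in image_batches for x in batch]
    return {"threshold": sum(flat) / len(flat)}

def serialize_parameters(params):
    """Parameters as they might be transmitted by the data processing facility."""
    return json.dumps(params)

def load_parameters(payload):
    """Mobile-device side: adopt the received parameters as the new basis
    for the local decision algorithm."""
    return json.loads(payload)

params = train_on_server([[0.2, 0.4], [0.6, 0.8]])
payload = serialize_parameters(params)
local = load_parameters(payload)
```

In practice the payload would contain the full weight set of the trained network and be pushed to the devices at regular intervals.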

Alternatively or additionally, it is also conceivable that the recorded digital image data are transmitted to a data processing facility accessible in a network, wherein at least one current process state of the industrial process step is determined by a decision algorithm executed on the data processing facility, and wherein subsequently an output unit is operated by the data processing facility in relation to the determined current process state of the industrial process step to generate a visual, acoustic and/or haptic output. It may be provided that one or more parameters of the decision algorithm are learned on the basis of the recorded digital image data by a training module of the machine learning system that is executed on the data processing facility. The output unit can be controlled either directly by the data processing facility or indirectly via one or more connected mobile devices.

In this embodiment, the production mode and possibly the training mode are therefore carried out on a data processing facility accessible in the network, so that the mobile device only transmits the image data of the image sensor; provided the output unit is arranged on the mobile device, the result concerning the current process state is transmitted back to the mobile device.

It is conceivable here that for each mobile device a dedicated decision algorithm exists on the data processing facility, which is learned in the training mode. The data processing facility can then be set up to combine the decision algorithms with one another in order to improve and further optimize the result. However, it is also conceivable that only a single decision algorithm exists on the data processing facility for a plurality of mobile devices, which decision algorithm is trained in the training mode with input from a plurality of different mobile devices.

If a plurality of decision algorithms are present at the data processing facility, they can also learn independently of one another, so that the best-trained decision algorithm can subsequently be selected. The selection can be performed according to various criteria, such as recognition quality, simplicity of the knowledge structure, etc.

In this context, it is therefore particularly advantageous if a decision algorithm present at the data processing facility is selected from a plurality of decision algorithms trained independently of one another in relation to a selection criterion and/or an optimization criterion. Such criteria can be, for example, recognition quality, simplicity of the knowledge structure, or characteristics of the mobile device on which the decision algorithm is to be executed.
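A selection step of this kind could be sketched as follows. The candidates, their figures and the weighting of recognition quality against simplicity of the knowledge structure are invented for illustration only; the disclosure does not prescribe a concrete scoring function.

```python
# Sketch: pick the best-trained decision algorithm by a weighted combination
# of recognition quality and simplicity of the knowledge structure.

candidates = [
    {"name": "algo_a", "recognition_quality": 0.91, "parameter_count": 120_000},
    {"name": "algo_b", "recognition_quality": 0.89, "parameter_count": 8_000},
    {"name": "algo_c", "recognition_quality": 0.93, "parameter_count": 900_000},
]

def score(algo, quality_weight=0.8, simplicity_weight=0.2):
    # Simplicity grows as the knowledge structure shrinks.
    simplicity = 1.0 / (1.0 + algo["parameter_count"] / 10_000)
    return quality_weight * algo["recognition_quality"] + simplicity_weight * simplicity

best = max(candidates, key=score)
print(best["name"])  # → algo_b (nearly as accurate, far simpler structure)
```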

The selected decision algorithm is then used to determine the current process state. This can be achieved, for example, by: the image data are transmitted to a data processing unit and are used there as input data as a basis for a selected decision algorithm. However, this can also be achieved as follows: the decision algorithm is transmitted to the relevant mobile device and applied there.

Hereby, an efficient selection of a decision algorithm that optimally matches the current situation can be achieved. Thus, for example, the decision algorithm can be selected such that it optimally matches the mobile device. If the mobile device is a resource-constrained or resource-weak device (with reduced performance relative to other mobile devices), a decision algorithm can be selected that optimally matches the resource conditions present in the mobile device. This can mean, for example, that the decision algorithm is less computationally intensive and can thus be executed well on the mobile device, albeit at the cost of reduced accuracy, speed or efficiency. This can be achieved, for example, by a simplified knowledge structure of the decision algorithm. This of course also applies to the monitoring system as a whole.
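A resource-aware selection of this kind could, for example, first filter the candidates by an assumed memory budget of the mobile device and then take the most accurate remaining algorithm. All figures and names below are invented for illustration.

```python
# Sketch: match a decision algorithm to the resource conditions of a device.

algorithms = [
    {"name": "full_net",    "memory_mb": 512, "accuracy": 0.95},
    {"name": "reduced_net", "memory_mb": 64,  "accuracy": 0.90},
    {"name": "tiny_net",    "memory_mb": 8,   "accuracy": 0.82},
]

def select_for_device(algorithms, device_memory_mb):
    # Feasible = executable within the device's resource conditions.
    feasible = [a for a in algorithms if a["memory_mb"] <= device_memory_mb]
    return max(feasible, key=lambda a: a["accuracy"])

print(select_for_device(algorithms, device_memory_mb=100)["name"])  # → reduced_net
```

The trade-off described above is visible directly: the resource-weak device receives `reduced_net`, accepting lower accuracy for executability.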

However, it is also conceivable that the production mode is executed on the mobile devices themselves, so that each mobile device has its own decision algorithm. The parameters of the decision algorithm trained at the data processing facility are then transmitted to all (or selected) mobile devices, in order in this way to combine the differently trained decision algorithms on the mobile devices.
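The source leaves open how the trained decision algorithms are combined. One conceivable sketch, in the spirit of federated parameter averaging (an assumption, not stated in the source), is that the facility averages the parameter vectors received from the devices and redistributes the result:

```python
# Assumed combination scheme: weighted average of per-device parameter vectors.

def combine_parameters(parameter_sets, weights=None):
    n = len(parameter_sets)
    weights = weights or [1.0 / n] * n  # equal weighting by default
    length = len(parameter_sets[0])
    return [
        sum(w * params[i] for w, params in zip(weights, parameter_sets))
        for i in range(length)
    ]

device_a = [2.0, 8.0, -1.0]  # parameters learned on mobile device A
device_b = [4.0, 6.0, 1.0]   # parameters learned on mobile device B
print(combine_parameters([device_a, device_b]))  # → [3.0, 7.0, 0.0]
```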

The object is also achieved by a monitoring system according to claim 9, wherein the monitoring system has the following:

-at least one image recording unit having at least one digital image sensor for recording digital image data;

-a machine learning system with at least one machine-trained decision algorithm containing correlations between digital image data as input data of the machine learning system and process states of industrial process steps to be monitored as output data of the machine learning system;

-at least one computing unit for determining at least one current process state of the industrial process step by means of a decision algorithm executable on the computing unit, in that the trained decision algorithm generates a current process state of the industrial process step, as output data of the machine learning system, from the recorded digital image data as input data of the machine learning system; and

-an output unit which is set up to generate a visual, acoustic and/or haptic output in relation to the at least one determined current process state.
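The interplay of the four components listed above can be sketched end to end in a few lines. The threshold classifier stands in for a machine-trained decision algorithm; all class names and data are illustrative assumptions.

```python
# Minimal end-to-end sketch: image recording unit → machine learning system
# (decision algorithm on the computing unit) → output unit.

class ImageRecordingUnit:
    def record(self):
        return [0.1, 0.9, 0.8]  # stand-in digital image data

class MachineLearningSystem:
    def __init__(self, decision_algorithm):
        self.decision_algorithm = decision_algorithm

    def current_state(self, image_data):
        return self.decision_algorithm(image_data)

class OutputUnit:
    def emit(self, state):
        return f"VISUAL: process state = {state}"  # could also be acoustic/haptic

def monitor_once(recorder, ml_system, output_unit):
    image_data = recorder.record()               # role of the computing unit:
    state = ml_system.current_state(image_data)  # run the decision algorithm
    return output_unit.emit(state)               # and actuate the output unit

ml = MachineLearningSystem(lambda img: "ok" if max(img) > 0.5 else "fault")
print(monitor_once(ImageRecordingUnit(), ml, OutputUnit()))
```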

Advantageous embodiments of the monitoring system are derived from the corresponding dependent claims.

It can therefore be provided that the machine learning system is or comprises an artificial neural network as a decision algorithm.

It can furthermore be provided that the monitoring system has at least one mobile device which is designed to be carried by at least one person and on which at least one digital image sensor of the image recording unit is arranged, so that digital image data can be recorded, wherein the mobile device is designed to transmit the recorded digital image data to the machine learning system.

Furthermore, it can be provided that the monitoring system has a training mode in which one or more parameters of the decision algorithm are learned by a training module of the machine learning system on the basis of the recorded digital image data, and/or that the monitoring system has a production mode in which at least one current process state of the industrial process step is determined by a decision algorithm of the machine learning system.
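The two modes can be illustrated with a deliberately simplified sketch, in which the "one or more parameters of the decision algorithm" are reduced to a single learned threshold. This is an assumption for illustration only; the disclosure does not prescribe this form of parameter.

```python
# Sketch: training mode learns a parameter from labelled image data;
# production mode then determines the current process state with it.

def train_threshold(samples):
    """Training mode: learn a threshold separating the two process states."""
    ok = [sum(img) / len(img) for img, state in samples if state == "ok"]
    fault = [sum(img) / len(img) for img, state in samples if state == "fault"]
    return (min(ok) + max(fault)) / 2  # learned parameter

def production_mode(img, threshold):
    """Production mode: determine the current process state."""
    return "ok" if sum(img) / len(img) >= threshold else "fault"

labelled = [([0.9, 0.8], "ok"), ([0.7, 0.9], "ok"), ([0.2, 0.1], "fault")]
theta = train_threshold(labelled)           # training mode
print(production_mode([0.85, 0.9], theta))  # production mode → ok
```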

It can furthermore be provided that the monitoring system has a mobile device with a computing unit, which can be carried by a person participating in the industrial process step, wherein the mobile device is set up to determine at least one current process state of the industrial process step by means of a decision algorithm executed on the computing unit.

It can furthermore be provided that the monitoring system has a data processing facility accessible in the network, which is set up to receive the digital image data recorded by the image recording unit, to learn one or more parameters of the decision algorithm on the basis of the received digital image data by means of a training module of the machine learning system executed on the data processing facility, and to subsequently transmit the parameters of the decision algorithm from the data processing facility to a mobile device carried by the person.

It can furthermore be provided that the monitoring system has a data processing facility accessible in the network, which is set up to receive the digital image data recorded by the image recording unit, to determine at least one current process state of the industrial process step by means of a decision algorithm executed at the data processing facility, and to actuate the output unit in relation to the determined current process state of the industrial process step in order to generate a visual, acoustic and/or haptic output.

In this case, it can be provided that the data processing facility is additionally set up to learn one or more parameters of the decision algorithm on the basis of the received digital image data by means of a training module of the machine learning system, which training module is executed on the data processing facility, and to use the learned parameters as a basis for the decision algorithm.

In addition, it can generally be provided that more than one decision algorithm exists, in particular one decision algorithm for the training mode or training module and one for the production mode or production module. In this case, each mobile device can have its own decision algorithm in the training mode and in the production mode. However, it is also conceivable for a specific group of mobile devices to have its own decision algorithm, which is learned jointly by the mobile devices of the group in the training mode. The parameters of the decision algorithm trained for the group are then transmitted only to the mobile devices of that group.

Drawings

The invention is explained in detail by way of example on the basis of the accompanying drawings. The figures show:

FIG. 1 shows a schematic diagram of a monitoring system;

FIG. 2 shows a schematic diagram of a mobile device;

fig. 3 shows a schematic diagram of a data processing facility.

Detailed Description

Fig. 1 schematically shows, in a greatly simplified illustration, the individual components of a monitoring system 1 by means of which manual industrial process steps (not shown) of an industrial process are to be monitored. In the embodiment of fig. 1, the monitoring system 1 has an augmented reality system 100 in the form of a mobile device, which has at least two image sensors 110 and 120. The first image sensor 110 is a 2D image sensor for recording digital 2D image data, while the second image sensor 120 is a 3D image sensor for recording digital 3D image data.

The digital image data recorded by the image sensors 110 and 120 are then provided to a first computing unit 130, which in turn actuates an output unit 140 of the augmented reality system 100 on the basis of its calculations. The output unit 140 is designed for visual, acoustic and/or haptic output to the person.

The image sensors 110 and 120 and the output unit 140 need not necessarily be integral parts of the mobile device. It is also conceivable for distributed components to be linked to the computing unit 130 merely via the mobile device. However, an integrated solution is also contemplated and preferred, in which the mobile device, e.g. AR glasses or VR glasses, comprises the image sensors 110 and 120 and the output unit 140.

It is therefore advantageous if the image sensors 110 and 120 and the output unit 140 are themselves components of a spectacle arrangement worn by the person concerned. The first computing unit 130 can also be a component of the glasses, as a result of which a very compact design can be achieved. However, it is also conceivable for the computing unit 130 to be worn on the body of the person concerned in the form of a mobile device and to be connected to the glasses wirelessly and/or by wire.

The monitoring system 1 furthermore has a data processing facility 300, which is connected via a network 200 to the mobile device 100 or the augmented reality system 100. The data processing facility 300 has a second computing unit 310, which is set up to determine the current process state. Thus, the second computing unit 310 of the data processing facility 300 can, for example, execute a training module by means of which a decision algorithm is trained. It is also conceivable that the second computing unit 310 executes a production module, by means of which the current process state is determined on the basis of a decision algorithm.

Furthermore, a configuration unit 400, which may contain, inter alia, information about the classification of images, is accessible to the data processing facility 300 via the network 200. This is relevant, for example, when the recorded image data, whether 2D image data or 3D image data, have already been analyzed and, if necessary, classified in advance.

Fig. 2 schematically shows the augmented reality system 100 with the first computing unit 130 and the data transmitted in the various embodiments. First, the first computing unit 130 obtains 2D image data D110 from the 2D image sensor 110. Furthermore, the first computing unit 130 obtains 3D image data D120 from the 3D image sensor 120. Of course, it is also conceivable that only the 2D image data D110 or only the 3D image data D120 are provided to the first computing unit 130.

In a first embodiment, the image data D110 and/or the image data D120 are provided to a first decision module 131 of the first computing unit 130 of the augmented reality system 100, wherein the first decision module is configured to execute a decision algorithm, for example in the form of a neural network. The decision algorithm of the first decision module 131 is a component of the machine learning system and contains a correlation between the digital image data as input data on the one hand and the process state of the industrial process step to be monitored as output data on the other hand. The decision algorithm of the first decision module 131 is fed with the image data D110 and/or D120 as input data and thus determines the current process state D131 as output data. The current process state D131 is locally generated decision data, produced by the first decision module 131 by means of the decision algorithm executed on the first computing unit. The current process state D131 thus determined is then transmitted via the interface of the first computing unit 130 to the output unit 140, where a corresponding acoustic, visual and/or haptic output can be made. The output unit 140 can be designed such that it generates a corresponding output directly as a function of the determined current process state D131. However, it is also conceivable for an output unit 140 without further intelligence of its own to be actuated correspondingly on the basis of the current process state D131.
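The local data flow of this first embodiment (D110/D120 → decision module 131 → D131 → output unit 140) can be sketched as follows. The threshold decision and the dictionary output are invented stand-ins for the trained decision algorithm and the real visual/haptic output.

```python
# Sketch of the local data flow, with names mirroring the reference numerals.

def decision_module_131(d110, d120=None):
    """Decision algorithm on the first computing unit 130 (stand-in)."""
    data = d110 + (d120 or [])
    return "step_complete" if sum(data) > 1.0 else "step_running"  # D131

def output_unit_140(d131):
    """Generates the visual/haptic output from the current process state."""
    return {"visual": f"state: {d131}", "haptic": d131 == "step_complete"}

d131 = decision_module_131([0.4, 0.5], [0.3])  # 2D data D110, 3D data D120
print(output_unit_140(d131))  # → {'visual': 'state: step_complete', 'haptic': True}
```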

In this embodiment, the augmented reality system 100 operates, with respect to the production mode, autonomously from any server system that may be present; the decision algorithm may, but need not, be further trained. It is conceivable here that the training mode is also executed by the first decision module in order to further train the decision algorithm present there. The training mode and the production mode are then jointly executed by the first computing unit 130.

In a further embodiment, it is, however, also conceivable to transmit the image data D110 and D120 via the network 200 to the data processing facility 300 already known from fig. 1 and the second computing unit 310 present there. Depending on which function the data processing facility 300 performs, the result returned to the first computing unit 130 of the augmented reality system 100 may be a remotely determined current process state D311 or parameters D312 of a further-trained decision algorithm. It is also conceivable that both data sets D311, D312 are provided to the first computing unit 130.

If parameters D312 of a decision algorithm further trained by the data processing facility 300 are provided via the network 200, these parameters D312 are supplied to the first decision module 131. The decision algorithm present there is then supplemented, expanded or replaced by the parameters D312, so that the production mode of the first decision module 131 is based on a decision algorithm trained at the data processing facility. In parallel, the image data D110 and D120 are of course also supplied to the first decision module 131 in order to determine the current process state D131 locally by means of the first computing unit 130. The basis of the decision module 131 is thus continuously refined by the remotely trained decision algorithm, whereby the recognition rate can be improved.
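One conceivable realization of "supplemented, expanded or replaced" is that remotely trained parameters D312 overwrite matching local parameters and add new ones. The dictionary-of-weights representation below is an assumption for illustration; the disclosure does not specify the parameter format.

```python
# Sketch: merging remotely trained parameters D312 into the local module.

class DecisionModule:
    def __init__(self, parameters):
        self.parameters = dict(parameters)

    def apply_remote_parameters(self, d312):
        # Remote values overwrite local ones; new keys expand the model.
        self.parameters.update(d312)

local = DecisionModule({"w0": 0.1, "w1": 0.5})
local.apply_remote_parameters({"w1": 0.45, "w2": -0.2})  # D312 from the facility
print(local.parameters)  # → {'w0': 0.1, 'w1': 0.45, 'w2': -0.2}
```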

However, it is also conceivable that, alternatively or in parallel, the current process state is determined by the data processing facility 300 in the production mode of the second computing unit 310 and subsequently provided to the first computing unit 130. If the current process state is determined solely by the data processing facility 300, it is subsequently transmitted as data D311 to the output unit 140. However, if the corresponding current process state D131 is also determined at the same time by the first computing unit 130 and the decision module 131 contained therein, two process states can be provided to the output unit, which can then generate a corresponding output from the two process states (local: D131, remote: D311).
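How the output unit reconciles the two process states is left open in the source; one conceivable policy, purely as an assumption, lets the remote state D311 confirm or override the locally determined state D131:

```python
# Assumed reconciliation policy for the local (D131) and remote (D311) states.

def combined_output(d131_local, d311_remote=None):
    if d311_remote is None:
        return f"state: {d131_local}"
    if d131_local == d311_remote:
        return f"state: {d131_local} (confirmed remotely)"
    return f"state: {d311_remote} (remote overrides local '{d131_local}')"

print(combined_output("ok", "ok"))     # → state: ok (confirmed remotely)
print(combined_output("ok", "fault"))  # → state: fault (remote overrides local 'ok')
```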

Fig. 3 illustrates in schematic detail the data flow of the second computing unit 310 of the data processing facility 300. As already mentioned with reference to fig. 2, the image data D110 and D120 are transmitted to the second computing unit 310 via the network. The second computing unit 310 may have a second decision module 311 and/or a training module 312, wherein both modules, if present, are provided with the respective image data D110 and D120.

The second decision module 311 has one or more decision algorithms which contain correlations between the digital image data D110, D120 as input data and the process state D311 as output data. The output data D311 in the form of the current process state is then transmitted back again via the network to the augmented reality system 100 (see fig. 2).

Furthermore, the second computing unit 310 may have a training module 312, which likewise obtains the image data D110 and D120. The parameters of the decision algorithm are then learned in a corresponding learning method by means of the training module and, if appropriate, provided to the decision module 311 in the form of parameter data D312. The newly learned parameters D312 of the decision algorithm can also be provided again to the augmented reality system 100 via the network by the training module 312.

The transmission of the learned parameters D312 to the augmented reality system 100 takes place at discrete times, which need not be permanently preset. It is also conceivable for the parameters D312 of the decision algorithm to be transmitted to more than one augmented reality system connected to the data processing facility 300.

List of reference numerals:

1 monitoring system

100 mobile device/augmented reality system

110 2D image sensor

120 3D image sensor

130 first calculation unit

131 first decision module

140 output unit

200 network

300 data processing facility

310 second calculation unit

311 second decision module

312 training module

400 configuration unit

D110 2D image data

D120 3D image data

D131 locally determined current process state

D311 remotely determined Current Process State

Parameters of D312 decision algorithm

D400 configuration data
