Method, device and equipment for calibrating a modelica model

Document No.: 191156 · Publication date: 2021-11-02

Reading note: this technique, "method, device and equipment for calibrating a modelica model", was designed and created by 刘宇超, 周凡利, 陈立平 and 刘奇 on 2021-07-13. Abstract: The application discloses a method, a device and equipment for calibrating a modelica model. The method of modelica model calibration comprises: selecting a target system; establishing a modelica model of the target system; performing training computation on the modelica model by using a neural network algorithm; and outputting the output value of the trained modelica model. The application calibrates the modelica model through a neural network model, improving the efficiency of modelica model calibration.

1. A method of modelica model calibration, comprising:

selecting a target system;

establishing a modelica model of the target system;

performing training computation on the modelica model by using a neural network algorithm;

and outputting the output value of the trained modelica model.

2. The method of modelica model calibration of claim 1, wherein building a neural network computational model from the modelica model comprises:

splitting the target system to obtain a plurality of sequentially connected subsystems;

splitting the modelica model to obtain a plurality of modelica submodels which are sequentially connected, wherein each submodel corresponds to one subsystem;

for any two adjacent submodels, the output of the previous submodel is used as the input of the next submodel;

each modelica sub-model is taken as a node of the neural network.

3. The method of modelica model calibration of claim 1, wherein training the modelica model comprises:

taking an ideal output value of the modelica model as a target value; adjusting the output value of each modelica sub-model; and stopping training of the modelica model when the error between the output value of the modelica model and the ideal output value falls within a predetermined error range.

4. The method of modelica model calibration of claim 1, wherein said neural network computational model is a BP neural network computational model.

5. The method of modelica model calibration of claim 1, wherein selecting a target system comprises:

determining whether the target system can be split into a plurality of sequentially cascaded subsystems,

in any two adjacent subsystems in the plurality of subsystems, the output of the previous subsystem is used as the input of the next subsystem;

and if so, determining that the target system can be modeled.

6. The method of modelica model calibration of claim 1, wherein building a modelica model of the target system comprises:

splitting the target system to obtain a plurality of sequentially connected subsystems;

establishing a modelica model of each subsystem;

and cascading and combining the modelica models of the subsystems to obtain the modelica model of the target system.

7. The method of modelica model calibration of claim 3, wherein, after training of the modelica model is stopped, the method further comprises: storing the output value of each modelica sub-model.

8. An apparatus for modelica model calibration, comprising:

a selection module for selecting a target system;

the modeling module is used for establishing a modelica model of the target system;

the processing module is used for performing training computation on the modelica model by using a neural network algorithm, and outputting the output value of the trained modelica model.

9. A modelica model calibration apparatus, comprising: at least one processor and at least one memory, wherein the memory is configured to store one or more program instructions, and the processor is configured to execute the one or more program instructions to perform the method of any one of claims 1-7.

10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any one of claims 1-7.

Technical Field

The application relates to the technical field of software engineering, in particular to a method, a device and equipment for calibrating a modelica model.

Background

When a modelica model is used for simulation calculation, some modelica models comprise many submodels, and the error of each submodel accumulates, so the overall modelica model can differ greatly from the actual value. For example, if the reliability obtained from one sub-model simulation is 99%, then with 100 such models cascaded the reliability of the whole model is only about 36% (0.99 to the power of 100), and even with only 10 models it is only about 90% (0.99 to the power of 10). Thus, in the operation and maintenance stage of complex products, after long simulations the accumulated error can render the system unusable. In the prior art, the input and output of each modelica sub-model can only be adjusted manually in order to adjust the output of the overall modelica model, and such manual adjustment is low in both efficiency and accuracy.
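The compounding of per-submodel reliability described above can be checked with a short computation (plain arithmetic for illustration, not part of the patented method):

```python
# Overall reliability of a cascade of submodels is the product of the
# individual reliabilities, so small per-submodel errors compound quickly.
def cascade_reliability(per_model: float, n_models: int) -> float:
    """Overall reliability of n cascaded submodels, each with the same
    independent reliability."""
    return per_model ** n_models

print(round(cascade_reliability(0.99, 10), 3))   # 0.904
print(round(cascade_reliability(0.99, 100), 3))  # 0.366
```

This reproduces the figures in the background section: ten 99%-reliable submodels give about 90% overall, one hundred give only about 36.6%.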

Disclosure of Invention

The present application is directed to a method, apparatus and device for modelica model calibration to solve the above problems.

To achieve the above object, according to one aspect of the present application, there is provided a method of modelica model calibration, including: selecting a target system;

establishing a modelica model of the target system;

performing training computation on the modelica model by using a neural network algorithm;

and outputting the output value of the trained modelica model.

In one embodiment, building a neural network computational model from the modelica model comprises:

splitting the target system to obtain a plurality of sequentially connected subsystems;

splitting the modelica model to obtain a plurality of modelica submodels which are sequentially connected, wherein each submodel corresponds to one subsystem;

for any two adjacent submodels, the output of the previous submodel is used as the input of the next submodel;

each modelica sub-model is taken as a node of the neural network.

In one embodiment, training the modelica model includes:

taking an ideal output value of the modelica model as a target value; adjusting the output value of each modelica sub-model; and stopping training of the modelica model when the error between the output value of the modelica model and the ideal output value falls within a predetermined error range.

In one embodiment, the neural network computational model is a BP neural network computational model.

In one embodiment, selecting a target system includes:

determining whether the target system can be split into a plurality of sequentially cascaded subsystems,

in any two adjacent subsystems in the plurality of subsystems, the output of the previous subsystem is used as the input of the next subsystem;

and if so, determining that the target system can be modeled.

In one embodiment, establishing a modelica model of the target system comprises:

splitting the target system to obtain a plurality of sequentially connected subsystems;

establishing a modelica model of each subsystem;

and cascading and combining the modelica models of the subsystems to obtain the modelica model of the target system.

In one embodiment, after training of the modelica model is stopped, the method further comprises: storing the output value of each modelica sub-model.

To achieve the above object, according to another aspect of the present application, there is provided an apparatus for modelica model calibration, including:

a selection module for selecting a target system;

the modeling module is used for establishing a modelica model of the target system;

the processing module is used for performing training computation on the modelica model by using a neural network algorithm;

and outputting the output value of the trained modelica model.

In one embodiment, the modeling module is further configured to split the target system to obtain a plurality of sequentially connected subsystems;

splitting the modelica model to obtain a plurality of modelica submodels which are sequentially connected, wherein each submodel corresponds to one subsystem;

for any two adjacent submodels, the output of the previous submodel is used as the input of the next submodel;

each modelica sub-model is taken as a node of the neural network.

In one embodiment, the processing module is further configured to:

taking an ideal output value of the modelica model as a target value; adjusting the output value of each modelica sub-model; and stopping training of the modelica model when the error between the output value of the modelica model and the ideal output value falls within a predetermined error range.

In one embodiment, the neural network computational model is a BP neural network computational model.

In one embodiment, the selection module is further configured to:

determining whether the target system can be split into a plurality of sequentially cascaded subsystems,

in any two adjacent subsystems in the plurality of subsystems, the output of the previous subsystem is used as the input of the next subsystem;

and if so, determining that the target system can be modeled.

In one embodiment, the modeling module is further configured to:

splitting the target system to obtain a plurality of sequentially connected subsystems;

establishing a modelica model of each subsystem;

and cascading and combining the modelica models of the subsystems to obtain the modelica model of the target system.

In one embodiment, the processing module is further configured to: after training of the modelica model is stopped, store the output value of each modelica sub-model.

In order to achieve the above object, according to a third aspect of the present application, there is provided an electronic device, comprising at least one processor and at least one memory, wherein the memory is configured to store one or more program instructions, and the processor is configured to execute the one or more program instructions to perform any of the steps above.

According to a fourth aspect of the present application, there is provided a computer readable storage medium having embodied therein one or more program instructions for performing the steps of any of the above.

According to the technical solution of the present application, a neural network computation model is established according to the modelica model of the target system, and the output value of the trained modelica model is output. Calibrating the modelica model through the neural network model improves the efficiency of modelica model calibration.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the present application and are not intended to limit the application. In the drawings:

FIG. 1 is a flow chart of a method of modelica model calibration according to an embodiment of the present application;

FIG. 2 is a schematic diagram of a modelica model calibration apparatus according to an embodiment of the present application;

fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given in the present application without any inventive step, shall fall within the scope of protection of the present application.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be used. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.

The application provides a method for calibrating a modelica model; a flow chart of the method is shown in fig. 1. The method comprises the following steps:

step S102, selecting a target system;

specifically, an identifier is set for each system.

For example, when the system is stored, different identifiers may be set according to the structure of the system.

If a system's subsystems form a cascade structure, the identifier is set to 1; otherwise, it is set to 0.
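As a sketch of this identifier rule (the function name and data layout here are illustrative assumptions, not from the source), a stored system could be tagged 1 when its subsystems form a single chain and 0 otherwise:

```python
# Tag a stored system with a structure identifier: 1 if its subsystems form
# a cascade (each subsystem's output feeds exactly the next one's input),
# 0 otherwise.
CASCADE, NON_CASCADE = 1, 0

def structure_identifier(subsystems, links):
    """subsystems: ordered subsystem names; links: (source, target) connections."""
    expected = list(zip(subsystems, subsystems[1:]))  # the chain A->B->C->...
    return CASCADE if sorted(links) == sorted(expected) else NON_CASCADE

print(structure_identifier(["A", "B", "C"], [("A", "B"), ("B", "C")]))  # 1
print(structure_identifier(["A", "B", "C"], [("A", "C")]))              # 0
```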

Step S104, establishing a modelica model of the target system;

Step S106, performing training computation on the modelica model by using a neural network algorithm;

Step S108, outputting the output value of the trained modelica model.

Specifically, after modelica modeling is completed, the modelica model resembles a net woven together with Modelica's inner/outer constructs, similar to a two-layer BP neural network.

An inner parameter, d_inner, is added between the modelica sub-models. This parameter does not participate in the simulation; it is used only to correct the output value of each modelica sub-model, by superposing the product of d_inner and a weight on the original simulation result when the outer value is output. Here d_inner plays the role of the error that back-propagates in a BP neural network.

The weight parameters between the modelica sub-models can be updated randomly or on the basis of the back-propagated error. For example, if the simulation result is larger than the measured value, the weight is increased; if it is smaller, the weight is decreased.

Training is carried out on the simulation data and the measured data, either for a certain amount of data or for a certain number of backward error-adjustment passes. Training stops once the output result agrees sufficiently with the measured values, yielding a trained model; when the output result matches the actual situation, calibration of the model is complete.

It is worth emphasizing that, after training of the modelica model is stopped, the output value of each modelica sub-model, as well as the output value of the overall modelica model, is stored promptly.
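The correction mechanism described above can be sketched as follows (a minimal illustration; the function name and numeric values are assumptions, only d_inner and the weight come from the text):

```python
# Each modelica sub-model's outer (output) value is corrected by superposing
# weight * d_inner on the raw simulation result; d_inner itself does not
# participate in the simulation.
def corrected_output(raw_output, weight, d_inner):
    return raw_output + weight * d_inner

# Toy numbers: a raw simulation result of 1.00 corrected slightly downward.
out = corrected_output(raw_output=1.00, weight=0.5, d_inner=-0.04)
print(round(out, 2))  # 0.98
```

With d_inner set to zero (or weight zero), the submodel's original simulation result passes through unchanged, matching the statement that the parameter only superposes a correction at output time.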

In one embodiment, when a neural network computation model is established according to the modelica model, the target system is split to obtain a plurality of sequentially connected subsystems;

splitting the modelica model to obtain a plurality of modelica submodels which are sequentially connected, wherein each submodel corresponds to one subsystem;

for any two adjacent submodels, the output of the previous submodel is used as the input of the next submodel;

each modelica sub-model is taken as a node of the neural network.
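The cascade of sub-models in this embodiment can be sketched as a simple chain of callables, each node consuming the previous node's output (the helper name and the toy functions are illustrative assumptions):

```python
from typing import Callable

def run_cascade(submodels: list[Callable[[float], float]], x: float) -> float:
    """Feed the output of each submodel into the next, as in the cascade above."""
    for submodel in submodels:
        x = submodel(x)  # previous output becomes next input
    return x

# Three toy stand-ins for modelica sub-models.
chain = [lambda u: 2 * u, lambda u: u + 1, lambda u: u * u]
print(run_cascade(chain, 3.0))  # 49.0
```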

In one embodiment, the modelica model is trained with the ideal output value of the modelica model as the target value; the output value of each modelica sub-model is adjusted; and training of the modelica model stops when the error between the output value of the modelica model and the ideal output value falls within a predetermined error range.
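A minimal sketch of this stopping rule (the function names, learning rate, and toy submodel are assumptions, not from the source):

```python
def calibrate(simulate, ideal, tol=1e-3, lr=0.5, max_iters=10_000):
    """Adjust a correction term until the corrected output is within the
    predetermined error range `tol` of the ideal output value."""
    correction = 0.0
    for _ in range(max_iters):
        error = (simulate() + correction) - ideal
        if abs(error) <= tol:    # error within the predetermined range: stop
            break
        correction -= lr * error  # nudge the corrected output toward the ideal
    return correction

# Toy submodel whose raw output (0.92) drifts from the ideal value (1.00).
c = calibrate(simulate=lambda: 0.92, ideal=1.00)
print(abs(0.92 + c - 1.00) <= 1e-3)  # True
```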

In one embodiment, the neural network computational model is a BP neural network computational model.

Specifically, the BP neural network has the following characteristics:

the basic idea of the BP neural network consists of two parts: input sample forward propagation and output result, error backward propagation updating network weight

During forward propagation of sample data, the input sample is passed in at the input layer, processed by the hidden layer, and passed on to the output layer. If the actual output of the output layer does not match the expected output, the stage of updating the network weights by backward error propagation begins.

Backward propagation of the error updates the network weights: the output error is passed back layer by layer, from the output layer through the hidden layer to the input layer, and the error is apportioned to each neural unit in each layer.

The cycle of forward signal propagation and backward error propagation, with its accompanying weight adjustment, is repeated; this continual adjustment of the weights is the learning process of the network. The process continues until the error of the network output is reduced to an acceptable level, or until a predetermined number of learning cycles is reached.
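The forward/backward cycle described above can be illustrated with a self-contained toy BP network (the 1-2-1 layout, sample points, and learning rate are illustrative assumptions, not values from the source):

```python
# A 1-input, 2-hidden-unit, 1-output BP network fitting three samples of
# y = 1 - x: forward propagation produces an output, and the output error
# is back-propagated to adjust the weights, cycle after cycle.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w1 = [random.uniform(-1, 1) for _ in range(2)]  # input -> hidden weights
b1 = [0.0, 0.0]                                 # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(2)]  # hidden -> output weights
b2 = 0.0                                        # output bias
lr = 0.1
data = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]

def forward(x):
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(2)]
    return sum(w2[i] * h[i] for i in range(2)) + b2, h

def total_error():
    return sum((forward(x)[0] - t) ** 2 for x, t in data)

before = total_error()
for _ in range(3000):
    for x, t in data:
        y, h = forward(x)          # forward propagation: input -> hidden -> output
        err = y - t                # output error
        b2 -= lr * err
        for i in range(2):         # backward propagation: apportion the error
            grad_h = err * w2[i] * h[i] * (1 - h[i])
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h

after = total_error()
print(after < before)  # the learning cycle reduces the network's output error
```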

In one embodiment, when a target system is selected, it is determined whether the target system can be split into a plurality of sequentially cascaded subsystems;

in any two adjacent subsystems among the plurality of subsystems, the output of the previous subsystem is used as the input of the next subsystem; and if so, it is determined that the target system can be modeled.

Specifically, a plurality of target systems are stored in the system, but not every target system can adopt the method of the present application; only target systems satisfying the specific structure can use it.

In a specific implementation, identifiers may be set for the plurality of target systems stored in the system, and whether each target system meets the condition is determined respectively. When a system is stored for the first time, it is specified whether the system comprises a plurality of cascaded subsystems; if so, it is marked with a special identifier.

In one embodiment, when a modelica model of the target system is established, the target system is split to obtain a plurality of sequentially connected subsystems;

establishing a modelica model of each subsystem;

and cascading and combining the modelica models of the subsystems to obtain the modelica model of the target system.

It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than here.

In a second aspect, the present application also proposes a modelica model calibration apparatus, see the schematic structural diagram of a modelica model calibration apparatus shown in fig. 2; the device includes:

a selection module 21 for selecting a target system;

a modeling module 22 for establishing a modelica model of the target system;

the processing module 23 is configured to perform training calculation on the modelica model by using a neural network algorithm; and outputting the trained output values of the modelica model.

In one embodiment, the modeling module 22 is further configured to split the target system to obtain a plurality of sequentially connected subsystems;

splitting the modelica model to obtain a plurality of modelica submodels which are sequentially connected, wherein each submodel corresponds to one subsystem;

for any two adjacent submodels, the output of the previous submodel is used as the input of the next submodel;

each modelica sub-model is taken as a node of the neural network.

In one embodiment, the processing module 23 is further configured to: take an ideal output value of the modelica model as a target value; adjust the output value of each modelica sub-model; and stop training of the modelica model when the error between the output value of the modelica model and the ideal output value falls within a predetermined error range.

In one embodiment, the neural network computational model is a BP neural network computational model.

In one embodiment, the selection module 21 is further configured to: determining whether the target system can be split into a plurality of sequentially cascaded subsystems,

in any two adjacent subsystems in the plurality of subsystems, the output of the previous subsystem is used as the input of the next subsystem;

and if so, determining that the target system can be modeled.

In one embodiment, the modeling module 22 is further configured to: splitting the target system to obtain a plurality of sequentially connected subsystems;

establishing a modelica model of each subsystem;

and cascading and combining the modelica models of the subsystems to obtain the modelica model of the target system.

In one embodiment, the processing module 23 is further configured to: after training of the modelica model is stopped, store the output value of each modelica sub-model.

According to a third aspect of the present application, there is provided an electronic device; see the schematic structural diagram of the electronic device shown in fig. 3. The device comprises at least one processor 31 and at least one memory 32; the memory 32 is configured to store one or more program instructions; and the processor 31 is configured to execute the one or more program instructions to perform any one of the above methods.

In a fourth aspect, the present application also proposes a computer-readable storage medium having one or more program instructions embodied therein for performing the method of any one of the above.

The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or may be implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a random access memory, a flash memory, a read only memory, a programmable read only memory or an electrically erasable programmable memory, a register, etc. storage media well known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with the hardware.

The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.

The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.

The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).

The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.

Those skilled in the art will appreciate that the functionality described in the present invention can be implemented in a combination of hardware and software in one or more of the examples described above. When software is applied, the corresponding functionality may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.

It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed over a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; or they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
