Intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control system and method

Document No.: 1870135  Publication date: 2021-11-23

Note: "Intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control system and method" was designed and created by 郭景华, 王班, 王靖瑶, 肖宝平 and 何智飞 on 2021-09-29. Its main content is as follows: an intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control system and method relate to the intelligent safety and automatic driving of automobiles. The system comprises a data module, a data perception module and a multi-system dynamic coordination control system. The intelligent network-connected hybrid electric vehicle obtains vehicle state information through on-board sensors and sends it to the multi-system dynamic coordination control module; the multi-system dynamic coordination control module solves the optimal engine-power and motor-power distribution scheme according to the obtained vehicle state information so as to improve the fuel economy of the vehicle; a vehicle dynamics model that accurately represents the multi-process coupling of the intelligent network-connected hybrid electric vehicle is established as the actuator of the multi-system dynamic coordination control algorithm, executing the executable control signals output by the multi-system dynamic coordination controller to update the vehicle state. The method effectively solves problems such as contradictory multi-objective vehicle-control requirements, obtains better calculation results and improves the calculation speed.

1. The intelligent network connection hybrid electric vehicle multi-system dynamic coordination control system is characterized by comprising a data module, a data sensing module and a multi-system dynamic coordination control system; the multi-system dynamic coordination control system consists of a simulation environment, a reward function and a multi-system dynamic coordination control module which are connected in sequence; the data module consists of classical driving cycle working condition data and real vehicle acquisition data and is used as a training set and a testing set of deep reinforcement learning; the data sensing module is used for acquiring current speed, acceleration and battery electric quantity information of the vehicle by using the vehicle-mounted sensor and sending the acquired vehicle state information to the multi-system dynamic coordination control system; the data module is used for performing off-line training on data, namely performing model pre-training by using prior knowledge and empirical data to obtain a better training model; the training model is used for performing energy output coordination control among the engine, the motor and the power battery according to the vehicle information obtained by the data sensing module, outputting a control signal to a simulation environment to obtain a control signal which can be actually executed by the vehicle and is applied to the vehicle, so that multi-system dynamic coordination control of the intelligent network-connected hybrid electric vehicle is realized.

2. The intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control method based on deep reinforcement learning is characterized by comprising the following steps of:

1) the intelligent network connection hybrid electric vehicle obtains vehicle state information through a vehicle-mounted sensor and sends the vehicle state information to the multi-system dynamic coordination control module;

2) the multi-system dynamic coordination control module is used for solving an optimal engine power and motor power distribution scheme according to the obtained vehicle state information so as to improve the fuel economy of the vehicle;

3) and establishing a vehicle dynamic model capable of accurately representing the multi-process coupling of the intelligent network-connected hybrid electric vehicle, and taking the vehicle dynamic model as an execution mechanism of the multi-system dynamic coordination control algorithm of the intelligent network-connected hybrid electric vehicle to execute the executable control signal output by the multi-system dynamic coordination controller so as to update the vehicle state.

3. The method for multi-system dynamic coordination control of the intelligent networked hybrid electric vehicle based on deep reinforcement learning as claimed in claim 2, wherein in step 1), the vehicle state information comprises the speed, acceleration and battery power information of the vehicle.

4. The intelligent networked hybrid electric vehicle multi-system dynamic coordination control method based on deep reinforcement learning as claimed in claim 2, wherein in step 2), the optimal engine-power and motor-power distribution scheme is solved to improve the fuel economy of the vehicle as follows: firstly, according to the dynamic characteristics of the engine and the motor, the engine is selected as the main control object, and expert knowledge consisting of the optimal working curve of the engine and the battery characteristics is embedded into the deep reinforcement learning algorithm; by utilizing the optimal operating-point curve of the engine, the control quantity and its dimension are reduced, the search range of the algorithm is narrowed, the calculation burden is reduced, and the calculation speed of the algorithm is improved; then, the influence of each vehicle state quantity on the action value of the deep reinforcement learning algorithm is analyzed, and the lower-layer multi-system dynamic coordination control principle based on the deep reinforcement learning algorithm is clarified; and finally, a lower-layer multi-system dynamic coordination controller based on a deep deterministic policy gradient (DDPG) algorithm with ant colony intelligent optimization is designed.

5. The method for controlling the multi-system dynamic coordination of the intelligent networked hybrid electric vehicle based on deep reinforcement learning as claimed in claim 4, wherein the specific method for designing the lower-layer multi-system dynamic coordination controller based on the deep deterministic policy gradient (DDPG) algorithm with ant colony intelligent optimization is as follows:

(1) designing input and output variables of a multi-system dynamic coordination controller;

(2) designing an algorithm simulation environment: the simulation environment of the algorithm mainly has the functions of obtaining an optimal engine and motor power distribution scheme through calculation, judging the working mode of a vehicle power system according to vehicle charge-discharge conditions determined by a battery characteristic diagram and other judgment conditions, converting the instruction into an actually controllable instruction of a vehicle dynamic model, sending the actually controllable instruction to each power component of the vehicle, feeding the execution result of each power component back to a lower-layer multi-system dynamic coordination controller, and using a reward function for calculating a reward value so as to guide the training of a network model;

(3) the DDPG algorithm based on ant colony intelligent optimization takes the input vehicle speed, acceleration and battery charge information, outputs the engine power, performs power distribution for the intelligent network-connected hybrid electric vehicle power system, and applies the output signal to the simulation environment to obtain a reward that guides the next round of training.

6. The method for controlling multi-system dynamic coordination of an intelligent networked hybrid electric vehicle based on deep reinforcement learning as claimed in claim 5, wherein in step (2), the reward function calculates a reward value according to the vehicle state information transmitted by the simulation environment and transmits the reward value to the multi-system dynamic coordination controller to guide the evolution of the training model; the reward function is a function of the battery charge change and the instantaneous fuel consumption, specifically as follows:

the reward function consists of two parts: the first part represents the difference in battery charge between the current moment and the initial moment, i.e., the consumed battery charge; the second part represents the fuel consumed from the initial moment to the current moment, obtained from the instantaneous fuel consumption rate of the vehicle; α and β are constant factors, and through parameter adjustment the deep-reinforcement-learning-based multi-system dynamic coordination control strategy can keep a certain balance between fuel economy and battery charge maintenance.

7. The method for multi-system dynamic coordination control of the intelligent networked hybrid electric vehicle based on deep reinforcement learning as claimed in claim 2, wherein in step 3), the vehicle dynamics model comprises an engine model, a motor model, a battery model, a power distribution mechanism model and a brake model.

Technical Field

The invention relates to intelligent safety and automatic driving of an automobile, in particular to an intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control system and method.

Background Art

Energy-saving, safe and comfortable vehicles are a theme of current world and automobile-industry development. An intelligent network-connected hybrid electric vehicle combines an intelligent vehicle with vehicle networking: the engine and the motor are the power sources of the vehicle, and a planetary gear set is its power distribution component. Through the mutual coupling of the planetary gears, power coordination control of the two power sources can be realized by changing the rotating speeds of the motor and the generator, so that the energy of the power system is reasonably distributed and the energy-saving potential of the vehicle is exploited to the maximum extent.

The main tasks of intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control are as follows. First, different power distribution schemes for the engine, motor and generator can be designed flexibly: even under the same required power, different distribution schemes place the operating points of the engine and the motor in different working areas, so their working efficiencies differ and the efficiency of the whole vehicle is affected; the most appropriate power distribution scheme is therefore selected according to the working condition of the vehicle and the characteristic parameters of the engine, motor and battery to improve the fuel economy of the vehicle. Second, considering queue stability and riding comfort, the power output of each power component is coordinated so as to ensure the stability and continuity of the power output during switching between different working modes. Document [1] (Liu Jian. Energy management strategy of a single-axle parallel hybrid electric vehicle based on dynamic programming [D]. Beijing Institute of Technology, 2019.) studies the energy management control strategy of a single-axle parallel hybrid electric vehicle using dynamic programming; to solve the problem that the DP algorithm cannot meet real-time requirements, the DP control results are taken as training samples and a neural network is trained so that the energy management control strategy becomes real-time. Although this exploits the neural network's ability to approximate any continuous function arbitrarily closely, the potential of the neural network is not fully exerted. Document [2] (Zhou W, Zhang N, Zhai H. Enhanced Battery Power Configuration in MPC-based HEV Energy Management: A Two-phase Dual-model Approach [J]. IEEE Transactions on Transportation Electrification, 2021, PP(99):1-1.) combines the battery power characteristic region with the prediction horizon of model predictive control and solves the optimal energy problem using a forward dynamic programming algorithm; it can obtain the optimal solution, but cannot meet the real-time requirements of the algorithm.

Disclosure of Invention

The invention aims to solve the above problems in the prior art by providing an intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control system which effectively resolves the contradictions among multiple vehicle-control objectives and obtains the optimal expected acceleration by continuously optimizing the acceleration increment through a model predictive control algorithm.

The invention also aims to provide an intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control method which can obtain better calculation results, improve the calculation speed and fully exert the energy-saving potential of the hybrid electric vehicle.

The intelligent network connection hybrid electric vehicle multi-system dynamic coordination control system comprises a data module, a data sensing module and a multi-system dynamic coordination control system; the multi-system dynamic coordination control system consists of a simulation environment, a reward function and a multi-system dynamic coordination control module which are connected in sequence; the data module consists of classical driving cycle working condition data and real vehicle acquisition data and is used as a training set and a testing set of deep reinforcement learning; the data sensing module acquires information such as the current speed, the acceleration and the battery power of the vehicle by using the vehicle-mounted sensor and sends the acquired vehicle state information to the multi-system dynamic coordination control system; the method comprises the steps of performing offline training by using data module data, namely performing model pre-training by using priori knowledge and empirical data to obtain a better training model, performing online optimization by using the established vehicle model, and obtaining a better training result by using the self-learning capability of deep reinforcement learning; the training model with a better training result is obtained through offline training, energy output coordination control among the engine, the motor and the power battery is carried out according to the vehicle information obtained by the data sensing module, and a control signal is output to a simulation environment so as to obtain a control signal which can be actually executed by the vehicle and act on the vehicle, so that the dynamic coordination control of the intelligent network-connected hybrid electric vehicle multi-system is realized.

The intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control method based on deep reinforcement learning comprises the following steps:

1) the intelligent network connection hybrid electric vehicle obtains vehicle state information through a vehicle-mounted sensor and sends the vehicle state information to the multi-system dynamic coordination control module;

2) the multi-system dynamic coordination control module is used for solving an optimal engine power and motor power distribution scheme according to the obtained vehicle state information so as to improve the fuel economy of the vehicle;

3) and establishing a vehicle dynamic model capable of accurately representing the multi-process coupling of the intelligent network-connected hybrid electric vehicle, and taking the vehicle dynamic model as an execution mechanism of the multi-system dynamic coordination control algorithm of the intelligent network-connected hybrid electric vehicle to execute the executable control signal output by the multi-system dynamic coordination controller so as to update the vehicle state.

In step 1), the vehicle state information includes vehicle speed, acceleration, battery power information, and the like.

In step 2), the optimal engine-power and motor-power distribution scheme is solved to improve the fuel economy of the vehicle. Firstly, according to the dynamic characteristics of the engine and the motor, the engine is selected as the main control object, and expert knowledge consisting of the optimal working curve of the engine and the battery characteristics is embedded into the deep reinforcement learning algorithm; by utilizing the optimal operating-point curve of the engine, the control quantity and its dimension are reduced, the search range of the algorithm is narrowed, the calculation burden is reduced, and the calculation speed of the algorithm is improved. Then, the influence of each vehicle state quantity on the action value of the deep reinforcement learning algorithm is analyzed, and the lower-layer multi-system dynamic coordination control principle based on deep reinforcement learning is clarified. Finally, a lower-layer multi-system dynamic coordination controller based on the Deep Deterministic Policy Gradient (DDPG) algorithm with ant colony intelligent optimization is designed.
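The role of the optimal operating-point curve described above, reducing the control problem to a single engine-power command, can be sketched as a lookup that maps a desired engine power to a (speed, torque) setpoint on the curve. The curve points and linear interpolation below are illustrative assumptions, not data from the patent:

```python
from bisect import bisect_left

def engine_setpoint(p_desired, curve):
    """Map a desired engine power to (speed_rpm, torque_Nm) on the optimal
    operating curve by linear interpolation.

    `curve` is a list of (power_W, speed_rpm, torque_Nm) tuples sorted by
    power; the values used here are illustrative placeholders."""
    powers = [p for p, _, _ in curve]
    i = bisect_left(powers, p_desired)
    if i == 0:                      # below the curve: clamp to first point
        return curve[0][1], curve[0][2]
    if i == len(powers):            # above the curve: clamp to last point
        return curve[-1][1], curve[-1][2]
    p0, n0, t0 = curve[i - 1]
    p1, n1, t1 = curve[i]
    w = (p_desired - p0) / (p1 - p0)
    return n0 + w * (n1 - n0), t0 + w * (t1 - t0)
```

Because the setpoint is fully determined by the requested power, the learning algorithm only has to output one scalar, which is the dimension reduction the text describes.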

The specific method for designing a lower-layer multi-system dynamic coordination controller based on the ant colony intelligent optimization Deep Deterministic Policy Gradient (DDPG) algorithm may be as follows:

(1) designing input and output variables of a multi-system dynamic coordination controller;

(2) designing an algorithm simulation environment: the main functions of the simulation environment are to obtain the optimal engine and motor power distribution scheme through calculation, to judge the working mode of the vehicle power system according to the vehicle charge-discharge conditions determined by the battery characteristic map and other judgment conditions, to convert the instruction into an actually controllable instruction for the vehicle dynamics model and send it to each power component of the vehicle, and to feed the execution results of each power component back to the lower-layer multi-system dynamic coordination controller, with the reward function calculating a reward value to guide the training of the network model;

in step (2), the reward function calculates a reward value according to the vehicle state information transmitted by the simulation environment and transmits the reward value to the multi-system dynamic coordination controller to guide the evolution of the training model; the reward function is a function of the battery charge change and the instantaneous fuel consumption, specifically as follows:

the reward function consists of two parts: the first part represents the difference in battery charge between the current moment and the initial moment, i.e., the consumed battery charge; the second part represents the fuel consumed from the initial moment to the current moment, obtained from the instantaneous fuel consumption rate of the vehicle. α and β are constant factors, and through parameter adjustment the deep-reinforcement-learning-based multi-system dynamic coordination control strategy can keep a certain balance between fuel economy and battery charge maintenance.

(3) The DDPG algorithm based on ant colony intelligent optimization takes the input vehicle speed, acceleration and battery charge information, outputs the engine power, performs power distribution for the intelligent network-connected hybrid electric vehicle power system, and applies the output signal to the simulation environment to obtain a reward that guides the next round of training.

In step 3), the vehicle dynamics model includes an engine model, a motor model, a battery model, a power distribution mechanism model, and a brake model.

The invention provides an intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control strategy based on deep reinforcement learning, which uses prior knowledge and empirical data for model pre-training to obtain a better training model, performs self-learning online optimization, continuously optimizes to obtain better calculation results, increases the calculation speed and fully exploits the energy-saving potential of the hybrid electric vehicle. The upper-layer controller establishes a corresponding objective function for the multi-objective optimization problem of vehicle safety, comfort and economy, effectively resolving the contradictions among the multiple vehicle-control objectives, and continuously optimizes the acceleration increment with a model predictive control algorithm to obtain the optimal expected acceleration. The lower-layer controller solves the optimal engine-power and motor-power distribution scheme according to the optimal expected acceleration solved by the upper-layer controller and the current vehicle state information, so as to improve the fuel economy of the vehicle. According to the dynamic characteristics of the engine and the motor, the engine is selected as the main control object, expert knowledge consisting of the optimal working curve of the engine and the battery characteristics is embedded into the deep reinforcement learning algorithm, and by utilizing the optimal operating-point curve of the engine, the control quantity and its dimension are reduced, the search range of the algorithm is narrowed, the calculation burden is reduced and the calculation speed of the algorithm is improved.
A deep Q-network (DQN) algorithm, which performs well in discrete action spaces, is adopted to design the lower-layer multi-system dynamic coordination controller based on the DQN algorithm, so as to improve the stability and convergence speed of the algorithm.

Drawings

FIG. 1 is a multi-system dynamic coordination control framework.

Fig. 2 is a neural network structure.

Detailed Description

The following examples will further illustrate the present invention with reference to the accompanying drawings.

As shown in fig. 1, the intelligent networked hybrid electric vehicle multi-system dynamic coordination control system based on deep reinforcement learning comprises a data module, a data sensing module and a multi-system dynamic coordination control system; the multi-system dynamic coordination control system consists of a simulation environment, a reward function and a multi-system dynamic coordination control module which are connected in sequence; the data module consists of classical driving cycle working condition data and real vehicle acquisition data and is used as a training set and a testing set of deep reinforcement learning; the data sensing module acquires information such as the current speed, the acceleration and the battery power of the vehicle by using the vehicle-mounted sensor and sends the acquired vehicle state information to the multi-system dynamic coordination control system; the method comprises the steps of performing offline training by using data module data, namely performing model pre-training by using priori knowledge and empirical data to obtain a better training model, performing online optimization by using the established vehicle model, and obtaining a better training result by using the self-learning capability of deep reinforcement learning; the training model with a better training result is obtained through offline training, energy output coordination control among the engine, the motor and the power battery is carried out according to the vehicle information obtained by the data sensing module, and a control signal is output to a simulation environment so as to obtain a control signal which can be actually executed by the vehicle and act on the vehicle, so that the dynamic coordination control of the intelligent network-connected hybrid electric vehicle multi-system is realized.

The invention discloses an intelligent network connection hybrid electric vehicle multi-system dynamic coordination control method based on deep reinforcement learning, which comprises the following steps:

A. data module

The data module comprises three parts: classical driving-cycle working-condition data, natural driving data and self-vehicle collected data, which serve as the training-set and test-set data of the deep reinforcement learning algorithm. The data are normalized and then transmitted to the multi-system dynamic coordination control system, and offline training of deep reinforcement learning is carried out to obtain a better training model.

B. Data perception module

Using the on-board sensors, the current speed v(t), acceleration a(t) and battery charge SOC(t) of the intelligent network-connected hybrid electric vehicle are obtained; the data are normalized and then transmitted to the multi-system dynamic coordination control module for online optimization of deep reinforcement learning.
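The normalization formula itself is not reproduced in this text; a common choice, assumed here for illustration, is min-max scaling of each sensed quantity into [0, 1]:

```python
def min_max_normalize(x, x_min, x_max):
    """Min-max scaling: map a raw sensor reading into [0, 1].

    The sensor ranges are assumptions for illustration; the patent does not
    reproduce its normalization formula."""
    return (x - x_min) / (x_max - x_min)

# Example: a speed of 15 m/s, assuming the sensor range is 0-40 m/s.
v_norm = min_max_normalize(15.0, 0.0, 40.0)  # → 0.375
```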

C. multi-system dynamic coordination control module

Firstly, a simulation environment is established: a model that accurately represents how the actual vehicle executes different action instructions after receiving a multi-system dynamic coordination control algorithm instruction. This model serves as the simulation environment of the deep reinforcement learning algorithm, as shown in FIG. 1; its inputs are the required power of the whole vehicle and the expected engine power, and its output is the vehicle state. The specific process is as follows: (1) according to the optimal operating-point curve of the engine, solve for the engine speed and engine torque at which engine efficiency is highest for the current expected engine power; (2) determine the charging and discharging conditions of the battery using the battery characteristic curve; (3) when the required power of the whole vehicle is small and the battery charge is in the low-resistance region, the engine is turned off and only the motor drives; when the battery charge is too low, the engine works along the optimal operating region and the surplus energy charges the battery; (4) when the required power of the whole vehicle is large, the engine and the motor drive jointly; (5) when the vehicle is decelerating and the battery charge is sufficient, only mechanical braking is applied to avoid overcharging the battery; if the battery charge is insufficient, the regenerative braking mode is activated and motor braking is used as much as possible, and when the required braking torque exceeds the maximum braking torque the motor can provide, motor braking and mechanical braking are combined.
The vehicle states after different actions are then calculated according to the different working modes and transmitted to the reward function to calculate the reward value.
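The working-mode selection rules (3)-(5) above can be sketched as a simple decision function. The mode names and thresholds below are illustrative assumptions, not values from the patent:

```python
def select_mode(p_demand, soc, decelerating,
                p_low=10e3, soc_low=0.3, soc_high=0.7):
    """Choose a powertrain operating mode from power demand (W), battery
    state of charge and braking state.

    The thresholds p_low, soc_low and soc_high and the mode names are
    illustrative placeholders, not values from the patent."""
    if decelerating:
        # Sufficient charge: mechanical braking only, to avoid overcharge;
        # otherwise prefer regenerative (motor) braking.
        return "mechanical_brake" if soc >= soc_high else "regen_brake"
    if p_demand <= p_low:
        if soc <= soc_low:
            # Low SOC: run the engine on its optimal curve, charge the battery.
            return "engine_charge"
        # Low demand, healthy SOC: electric-only drive, engine off.
        return "ev_only"
    # High demand: engine and motor drive jointly.
    return "hybrid_drive"
```

In the patent, these mode decisions are made inside the simulation environment, which then converts the chosen mode into executable commands for each power component.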

Then, a reward function is designed. The reward function calculates a reward value according to the vehicle state information transmitted by the simulation environment, and transmits the reward value to the multi-system dynamic coordination controller to guide the evolution of the training model. The reward function is a function of the battery charge change and the instantaneous fuel consumption, specifically as follows:

the reward function consists of two parts: the first part represents the battery-charge difference between the current time t and the initial time t0, i.e., the consumed battery charge; the second part represents the fuel consumed from the initial time to the current time, obtained from the instantaneous fuel consumption rate of the vehicle. α and β are constant factors, and through parameter adjustment the deep-reinforcement-learning-based multi-system dynamic coordination control strategy can keep a certain balance between fuel economy and battery charge maintenance.
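The reward formula itself is not reproduced in this text; a plausible instantiation of the two-part structure described above, with assumed signs and units, is:

```python
def reward(soc_t, soc_t0, fuel_used_kg, alpha=1.0, beta=1.0):
    """Two-part reward: penalize battery charge consumed since the initial
    time t0 and cumulative fuel burned since t0.

    The overall negative sign and the default alpha/beta weights are
    assumptions; the patent only states that the two terms are combined
    with constant factors."""
    soc_consumed = soc_t0 - soc_t           # consumed battery charge
    return -(alpha * soc_consumed + beta * fuel_used_kg)
```

With this form, an agent maximizing reward is driven toward both low fuel use and battery-charge maintenance, with alpha/beta setting the trade-off as described.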

And finally, designing the multi-system dynamic coordination controller based on deep reinforcement learning, wherein the design mainly comprises control principle design and neural network design.

The agent of the deep reinforcement learning algorithm calculates the action value corresponding to each action from the acquired vehicle state. If the algorithm is exploring, it randomly selects an action from the action library; otherwise, it selects the action with the maximum action value and applies it to the simulation environment.
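The exploration/exploitation rule above is essentially epsilon-greedy action selection; a minimal sketch, with an assumed discrete action library, is:

```python
import random

def select_action(action_values, epsilon, rng=random):
    """Epsilon-greedy selection: explore with probability epsilon, otherwise
    pick the action with the maximum action value.

    `action_values` maps each candidate action to its estimated value; the
    action names used in the example are illustrative."""
    actions = list(action_values)
    if rng.random() < epsilon:
        return rng.choice(actions)              # exploration: random action
    return max(actions, key=action_values.get)  # exploitation: best value
```

For example, with exploration disabled (`epsilon=0.0`) the agent always picks the highest-valued action.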

When the battery state of charge is at a low level, the deep reinforcement learning algorithm is more inclined to greatly increase the engine power. When the battery state of charge is at a high level, however, the deep reinforcement learning algorithm tends to shut down the engine rather than continue increasing the engine power, thereby keeping the battery in the low-resistance operating region.

When the vehicle speed is low, the required power of the whole vehicle is low. If the battery charge is also low (in the low-charge region) and the engine still ran according to the required power of the whole vehicle, the engine would be in a low-efficiency state, which does not meet the requirement of low vehicle fuel consumption. The agent therefore tends to select the action with a larger action value, so that the engine works in a high-efficiency operating region and the surplus engine energy drives the motor in reverse to charge the vehicle battery.

When the acceleration is small and the vehicle speed is unchanged, the agent tends to select not changing the engine power, so as to keep the existing vehicle state; at higher accelerations, the agent tends to substantially increase the engine power to provide sufficient power for vehicle acceleration.

Experience-pool sampling is random; when all experiences have the same priority, some are sampled repeatedly while others are never sampled, so the learning process may fall into a local optimum or learn poorly. A prioritized-replay sampling strategy is therefore adopted to improve the network convergence speed and training effect. Prioritized Experience Replay assigns each experience in the experience pool a priority, and experience selection favors experiences with higher priority. Suppose an experience j in the pool has priority pj; then the probability Pj with which it is selected is:

Pj = pj / Σk pk
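Sampling with probability proportional to priority can be sketched as follows (the priority exponent of the original Prioritized Experience Replay method is taken as 1 here for simplicity):

```python
import random

def sample_prioritized(priorities, batch_size, rng=random):
    """Sample experience indices with probability proportional to priority.

    `priorities` holds one non-negative priority per stored experience;
    indices are drawn with replacement, as is usual for replay batches."""
    total = sum(priorities)
    weights = [p / total for p in priorities]
    # random.choices draws with replacement according to the given weights.
    return rng.choices(range(len(priorities)), weights=weights, k=batch_size)
```

A production implementation would typically use a sum-tree for O(log n) sampling; the linear version above only illustrates the selection probability.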

an Actor network and criticic network structure of the DDPG algorithm is shown in FIG. 2, wherein the Actor network and criticic network have five-layer network structures, an input layer, an output layer and three hidden layers, the Actor network inputs vehicle speed, acceleration and battery power information after data normalization and outputs action values corresponding to the selected actions, the hidden layer 1 comprises 200 neurons, the hidden layer 2 comprises 100 neurons, and the hidden layer 3 comprises 50 neurons; the Critic network inputs the vehicle speed, the acceleration, the battery power and the action after data normalization, outputs the value Q, the hidden layer 1 comprises 200 neurons, the hidden layer 2 comprises 100 neurons, and the hidden layer 3 comprises 50 neurons.

The Q network of the Critic adopts the prioritized replay mechanism: a batch of experience pairs is selected from the sampling pool, the loss value is calculated, and the Q-network parameters are updated by minimizing the loss function:

L(wQ) = (1/N) Σ_{j=1..N} (yj − Q(Sj, Aj | wQ))²    (4)

wherein yj = Rj + γQ′(S′j, μ′(S′j | wμ′) | wQ′), N is the number of sampled experience pairs, and wQ and wQ′ are the Q-network parameters and target-Q-network parameters, respectively.
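The TD target yj and the mean-squared loss over a sampled batch can be sketched as follows, with the networks stood in by plain callables (an assumption for illustration):

```python
def critic_loss(batch, q, q_target, mu_target, gamma=0.99):
    """Mean-squared TD error for the Critic over a sampled batch.

    `q`, `q_target` and `mu_target` are callables standing in for the
    Q network, target Q network and target policy network; `batch` holds
    (state, action, reward, next_state) tuples. gamma is an assumed
    discount factor."""
    n = len(batch)
    loss = 0.0
    for (s, a, r, s_next) in batch:
        # TD target: y_j = R_j + gamma * Q'(S', mu'(S'))
        y = r + gamma * q_target(s_next, mu_target(s_next))
        loss += (y - q(s, a)) ** 2
    return loss / n
```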

The policy network of the Actor updates its parameters using the policy gradient of the Q network; the policy gradient is expressed as:

∇_{wμ} J ≈ (1/N) Σ_{j=1..N} ∇_A Q(Sj, A | wQ)|_{A=μ(Sj|wμ)} ∇_{wμ} μ(Sj | wμ)    (5)

wherein wμ and wμ′ are the policy-network parameters and target-policy-network parameters, respectively.

Combining the ant colony intelligent optimization algorithm with the DDPG model-parameter update mechanism, the policy-gradient function (5) used to update the Actor online-network weights and the mean-square-error loss function (4) used to update the Critic online-network weights are taken as fitness functions, and the ant colony intelligent optimization algorithm optimizes the weight parameters of the Actor and Critic online networks in the DDPG model at each time step.

In order to achieve better convergence, the DDPG algorithm adopts the ReLU function as the activation function of the neural networks; meanwhile, to limit the action output to a bounded range, the tanh function is selected as the activation function of the network output layer. Therefore, the activation functions of the Actor network's input and hidden layers are ReLU, the activation function of its output layer is tanh, and the output layer is a fully connected layer; the activation functions of the Critic network's input, hidden and output layers are all ReLU.
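The Actor's activation scheme above (ReLU hidden layers, tanh output) can be sketched with a minimal forward pass; the layer sizes in the example are illustrative, not the 200/100/50 layout of FIG. 2:

```python
import math

def relu(x):
    """Rectified linear unit."""
    return max(0.0, x)

def actor_forward(state, weights):
    """Minimal dense forward pass illustrating the Actor's activation
    scheme: ReLU on every hidden layer, tanh on the output layer so the
    action stays in (-1, 1).

    `weights` is a list of per-layer weight matrices (lists of rows);
    biases are omitted for brevity."""
    h = state
    for w in weights[:-1]:          # hidden layers: ReLU
        h = [relu(sum(wi * xi for wi, xi in zip(row, h))) for row in w]
    out_w = weights[-1]             # output layer: tanh, fully connected
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, h))) for row in out_w]
```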

D. Execution module

And establishing a vehicle dynamic model capable of accurately representing the multi-process coupling of the intelligent network-connected hybrid electric vehicle, wherein the vehicle dynamic model comprises an engine model, a motor model, a battery model, a power distribution mechanism model and a braking model, and is used as an execution mechanism of the intelligent network-connected hybrid electric vehicle multi-system dynamic coordination control algorithm, and an executable control signal output by the multi-system dynamic coordination controller is executed to update the vehicle state.

The above description is further detailed in connection with the preferred embodiments of the present invention, and it is not intended to limit the practice of the invention to these descriptions. It will be apparent to those skilled in the art that various modifications, additions, substitutions, and the like can be made without departing from the spirit of the invention.
