CGRU (Convolutional Gated Recurrent Unit)-based radar echo nowcasting method with strong space-time characteristics

Document No. 905300, published 2021-02-26. Original language: Chinese.

Reading note: This technique, a CGRU-based radar echo nowcasting method with strong space-time characteristics, was designed and created by 陈苏婷, 张松, 张闯, 陈耀登, and 杨春 on 2020-12-17. Abstract: The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which comprises the following steps: (1) acquire continuous radar echo images for weather nowcasting, preprocess them, and construct tensor data with unified time and space dimensions; (2) construct and train a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU encoding-prediction network model; (3) input the tensor data of the continuous radar echo image sequence from step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result. The proposed 3DCNN-CGRU network model strengthens the transmission of space-time features, captures and learns the space-time correlations of continuous radar echo images more effectively, and solves the problems of easily lost space-time information and low prediction accuracy.

1. A CGRU-based radar echo nowcasting method with strong space-time characteristics, characterized by comprising the following steps:

(1) acquiring a continuous radar echo image sequence for weather nowcasting, and preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time and space dimensions;

wherein the tensor data is a three-dimensional tensor X ∈ R^{T×W×H}, where R denotes the set of real numbers, T is the time dimension, and W, H are the row and column spatial dimensions, respectively;

the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N, where t denotes time and N denotes the length of the radar echo image sequence;

(2) constructing and training a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU network model;

(3) inputting the tensor data of the continuous radar echo image sequence for weather nowcasting from step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result.

2. The method as claimed in claim 1, wherein the 3DCNN-CGRU network model is composed of a coding network and a prediction network;

the coding network consists of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the continuous radar echo image sequence;

the 3DCNN is used for extracting local short-term space-time motion features of the continuous radar echo image sequence; the three-layer CGRU network is used for learning the global long-term space-time feature dependencies of the continuous radar echo image sequence and compressing the learned space-time features of the radar echo motion into a hidden state;

the prediction network consists of a three-layer CGRU network and a 3DCNN network; it takes the output of the encoder as input, reconstructs the image in reverse from the current echo feature information, generates a future echo image sequence, and thereby obtains the weather nowcasting result.

3. The method as claimed in claim 2, wherein the 3DCNN network is calculated as follows:

v_{ij}^{TWH} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} v_{(i-1)m}^{(W+p)(H+q)(T+r)} )

where v_{ij}^{TWH} denotes the output of the (T, W, H) unit of the j-th radar echo feature map in the i-th layer of the 3DCNN; T denotes the time dimension; W, H are the row and column spatial dimensions, respectively; f denotes a nonlinear activation function; b_{ij} denotes the bias of the j-th radar echo feature map in the i-th layer of the 3DCNN; w_{ijm}^{pqr} denotes the weight of the convolution kernel connected to the m-th feature map of layer (i-1); p, q, r denote the offsets of the convolution operation from the (T, W, H) position; v_{(i-1)m}^{(W+p)(H+q)(T+r)} denotes the output of the (W+p, H+q, T+r) unit of the m-th radar echo feature map in layer (i-1); and P_i, Q_i, R_i denote the sizes of the three dimensions of the convolution kernel.

4. The method as claimed in claim 3, wherein the CGRU network structure is obtained by adjusting the standard GRU network structure so that the state-to-state transformations change from multiplication operations to convolution operations; the CGRU network structure is computed as follows:

Z_t = σ(W_xz * X_t + W_hz * H_{t-1})

R_t = σ(W_xr * X_t + W_hr * H_{t-1})

H'_t = f(W_xh * X_t + W_hh * (R_t ∘ H_{t-1}))

H_t = (1 - Z_t) ∘ H_{t-1} + Z_t ∘ H'_t

where Z_t denotes the update gate in the CGRU network structure; R_t denotes the reset gate; X_t denotes the radar echo map input at time t; H_t denotes the hidden-layer output at time t, and H_{t-1} that at time t-1; W_xz denotes the input-to-update-gate weights in the CGRU network; W_hz the hidden-layer-to-update-gate weights; W_xr the input-to-reset-gate weights; W_hr the hidden-layer-to-reset-gate weights; H'_t denotes the memory content of the hidden layer at time t; f denotes a nonlinear activation function; W_xh denotes the input-to-hidden-layer weights; W_hh the hidden-to-hidden weights; * denotes the convolution operation; ∘ denotes the Hadamard product, i.e. element-wise multiplication of matrices; the gate structures control how each unit screens the radar space-time information; and σ is the Sigmoid activation, S(x) = (1 + e^{-x})^{-1}, which constrains the gate values in the model to the range [0, 1].

5. The method as claimed in claim 4, wherein the step (2) of constructing and training the 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model comprises:

(2.1) acquiring continuous historical radar echo image sequences using a sliding window spanning a first continuous time series and a second continuous time series, wherein the first continuous time series is temporally contiguous with the second continuous time series;

(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;

(2.3) establishing a 3DCNN-CGRU network training model; inputting the tensor data of the historical radar echo images into the model and performing iterative prediction; computing the difference between the live data of the radar echo images in the continuous time series and the model's predicted output; and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, which indicates that training is complete and yields the 3DCNN-CGRU network model.

6. The CGRU-based radar echo nowcasting method with strong space-time characteristics according to claim 5, wherein the prediction output data obtained by training on the echo images of each time frame of the first continuous time series corresponds to the live data of the echo images of each time frame of the second continuous time series; the iterative prediction iterates over the radar echo images of each time frame of the second continuous time series.

7. The CGRU-based radar echo nowcasting method with strong space-time characteristics according to claim 5, wherein the loss function of the 3DCNN-CGRU network training model in step (2.3) is the pixel-level mean square error (MSE) of the radar echo images:

MSE = (1/N) Σ_{n=1}^{N} Σ_A Σ_B ( Y_{n,A,B} - Ŷ_{n,A,B} )²

where MSE denotes the loss function value; Y denotes the real live data; Ŷ denotes the model prediction output data; N is the length of the radar echo image sequence; n is the frame index; and A, B denote the abscissa and ordinate of the radar echo image, respectively.

Technical Field

The invention relates to the technical field of meteorological observation, in particular to a CGRU-based radar echo nowcasting method with strong space-time characteristics.

Background

The goal of radar echo nowcasting is to make timely and accurate predictions of the weather conditions in a local area over a relatively short future period (e.g., 0-2 hours). The technology is now widely applied to daily travel, agricultural production, flight safety, and other areas; it brings convenience to people and aids disaster prevention and mitigation. With ongoing climate change and accelerating urbanization, atmospheric conditions are becoming increasingly complex and meteorological disasters occur frequently, bringing many negative impacts and uncertain risks to people's lives and work; if climate disasters can be effectively predicted and guarded against, losses can be greatly reduced.

The methods currently used for radar echo prediction are mainly cross-correlation and optical-flow-based methods, which have been shown to be effective for extrapolating future radar echo maps. Both of these conventional approaches have unavoidable drawbacks: when the echo changes rapidly, the Lagrangian conservation condition is not satisfied and prediction quality degrades quickly, and traditional radar echo nowcasting methods still fall short in short-term prediction accuracy and in fully exploiting massive radar echo image data. Compared with traditional radar echo forecasting methods, deep learning can mine and analyze big data more deeply and improve model prediction precision. As an emerging big-data-driven technology, deep learning, especially the recurrent neural network (RNN) and the long short-term memory network (LSTM), offers new solutions to the radar echo prediction task: by fully exploiting massive collected radar echo map data, a network model can be trained more effectively and future echo trends predicted more accurately. Although an ordinary LSTM network can address meteorological time-series problems to some extent, radar echo prediction has strong space-time correlation between successive frames, and the space-time information at one moment determines the prediction at the next; the ordinary LSTM model does not account for this correlation, so space-time information is easily lost, prediction accuracy drops, and speed cannot be guaranteed.

Disclosure of Invention

Purpose of the invention: To address the above problems, the invention provides a CGRU-based radar echo nowcasting method with strong space-time characteristics.

Technical scheme: To achieve the purpose of the invention, the technical scheme adopted is as follows: a CGRU-based radar echo nowcasting method with strong space-time characteristics, comprising the following specific steps:

(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of the meteorological data; then preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time and space dimensions; processing the data in three dimensions yields tensor data with complete space-time characteristics;

wherein the tensor data is a three-dimensional tensor X ∈ R^{T×W×H}, where R denotes the set of real numbers, T is the time dimension, and W, H are the row and column spatial dimensions, respectively;

the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N, where t denotes time and N denotes the length of the radar echo image sequence;
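The tensor construction described above can be sketched as follows. This is an illustrative numpy sketch, not code from the patent; the helper name `build_tensor` and the frame sizes are hypothetical:

```python
import numpy as np

def build_tensor(frames):
    """Stack N single-frame radar echo images (each a W x H array) into a
    three-dimensional tensor X in R^{T x W x H}, time dimension first."""
    return np.stack(frames, axis=0)

# Example: a sequence of N = 4 frames on a 64 x 64 spatial grid.
frames = [np.random.rand(64, 64).astype(np.float32) for _ in range(4)]
X = build_tensor(frames)
print(X.shape)  # (4, 64, 64): T x W x H
```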

(2) constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, specifically comprising the following steps:

(2.1) acquiring continuous historical radar echo image sequences using a sliding window spanning a first continuous time series and a second continuous time series, wherein the first continuous time series is temporally contiguous with the second continuous time series;

(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;
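Steps (2.1)-(2.2) amount to sliding a window over the historical sequence and splitting each window into a training-input part and a live (ground-truth) part. A minimal sketch, with illustrative names, window lengths, and stride:

```python
import numpy as np

def sliding_windows(sequence, in_len, out_len, stride=1):
    """Slide a window of length in_len + out_len over a historical radar
    echo sequence. The first in_len frames form the training input (first
    continuous time series); the next out_len frames form the live data
    (second continuous time series)."""
    pairs = []
    total = in_len + out_len
    for start in range(0, len(sequence) - total + 1, stride):
        inputs = sequence[start:start + in_len]
        targets = sequence[start + in_len:start + total]
        pairs.append((inputs, targets))
    return pairs

# 20 dummy frames split into windows of 5 input + 5 target frames.
seq = [np.zeros((64, 64)) for _ in range(20)]
pairs = sliding_windows(seq, in_len=5, out_len=5)
print(len(pairs))  # 11 overlapping (input, target) pairs at stride 1
```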

(2.3) establishing a 3DCNN-CGRU network training model; inputting the tensor data of the historical radar echo images into the model and performing iterative prediction; computing the difference between the live data of the radar echo images in the continuous time series and the model's predicted output; and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, which indicates that training is complete and yields the 3DCNN-CGRU network model;

wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean square error MSE of the continuous radar echo image sequence:

MSE = (1/N) Σ_{n=1}^{N} Σ_A Σ_B ( Y_{n,A,B} - Ŷ_{n,A,B} )²

where MSE denotes the loss function value; Y denotes the real live data; Ŷ denotes the model prediction output data; N is the length of the continuous time sequence; n is the frame index; and A, B denote the abscissa and ordinate of the radar echo image, respectively.
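The pixel-level MSE loss can be computed as follows. This is an illustrative sketch; averaging over the N frames while summing over the A x B pixel grid is one reading of the formula and is an assumption here:

```python
import numpy as np

def pixel_mse(Y, Y_hat):
    """Pixel-level MSE between the live echo sequence Y and the model
    prediction Y_hat, both of shape (N, A, B): squared errors summed
    over every pixel, averaged over the N frames."""
    return np.sum((Y - Y_hat) ** 2) / Y.shape[0]

Y = np.ones((2, 4, 4))      # two 4x4 "live" frames of all ones
Y_hat = np.zeros((2, 4, 4))  # a prediction of all zeros
print(pixel_mse(Y, Y_hat))  # 16.0: 4*4 unit errors per frame, averaged over 2 frames
```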

(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;

Further, the prediction output data obtained by training on the echo images of each time frame of the first continuous time series corresponds to the live data of the echo images of each time frame of the second continuous time series; the iterative prediction iterates over the radar echo images of each time frame of the second continuous time series.

Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;

furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;

the 3DCNN is used for extracting local short-term space-time motion features of the continuous radar echo image sequence; the three-layer CGRU network is used for learning the global long-term space-time feature dependencies of the continuous radar echo image sequence and compressing the learned space-time features of the radar echo motion into a hidden state;

Further, the prediction network consists of a three-layer CGRU network and a 3DCNN network; it takes the output of the encoder as input, reconstructs the image in reverse from the current echo feature information, generates a future echo image sequence, and thereby obtains the weather nowcasting result.

Furthermore, convolutional neural networks are particularly suited to image data thanks to feature mapping, local connectivity, weight sharing, and similar properties. A conventional 2DCNN has strong feature extraction capability for single images, but when processing continuous echo images it ignores the influence of the connection between consecutive multi-frame images on prediction, so information about the motion trend between features is easily lost and the moving-image prediction problem cannot be solved. The invention therefore replaces the traditional 2DCNN with a constructed 3DCNN, computed as follows:

v_{ij}^{TWH} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} v_{(i-1)m}^{(W+p)(H+q)(T+r)} )

where v_{ij}^{TWH} denotes the output of the (T, W, H) unit of the j-th radar echo feature map in the i-th layer of the 3DCNN; T denotes the time dimension; W, H are the row and column spatial dimensions, respectively; f denotes a nonlinear activation function; b_{ij} denotes the bias of the j-th radar echo feature map in the i-th layer of the 3DCNN; w_{ijm}^{pqr} denotes the weight of the convolution kernel connected to the m-th feature map of layer (i-1); p, q, r denote the offsets of the convolution operation from the (T, W, H) position; v_{(i-1)m}^{(W+p)(H+q)(T+r)} denotes the output of the (W+p, H+q, T+r) unit of the m-th radar echo feature map in layer (i-1); and P_i, Q_i, R_i denote the sizes of the three dimensions of the convolution kernel;
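The per-unit 3D convolution sum above can be illustrated with a direct (unoptimised) loop. The function name, the kernel memory layout, and the choice of ReLU for the activation f are assumptions for illustration only:

```python
import numpy as np

def conv3d_unit(prev_maps, kernels, bias, t, w, h):
    """Value of one (t, w, h) unit of an output feature map of a 3D
    convolution: sum over the previous layer's feature maps m and the
    kernel offsets (p, q, r), plus a bias, through f = ReLU.

    prev_maps: array (M, T, W, H), feature maps of layer i-1
    kernels:   array (M, R, P, Q), one kernel slice per previous map
    """
    M, R, P, Q = kernels.shape
    acc = bias
    for m in range(M):
        for r in range(R):
            for p in range(P):
                for q in range(Q):
                    acc += kernels[m, r, p, q] * prev_maps[m, t + r, w + p, h + q]
    return max(acc, 0.0)  # f = ReLU (illustrative choice)

prev = np.ones((1, 4, 5, 5))    # one previous map, T=4, W=H=5
k = np.full((1, 2, 3, 3), 0.1)  # a 2x3x3 kernel, all weights 0.1
print(round(conv3d_unit(prev, k, bias=0.0, t=0, w=0, h=0), 6))  # 1.8 = 18 taps * 0.1
```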

Furthermore, the invention provides a CGRU network structure which, by adjusting the standard GRU network structure, changes the state-to-state transformation from a multiplication operation to a convolution operation, so that it can both establish temporal relationships and describe spatial characteristics, effectively solving the problem of spatial information loss during time-series transmission.

Each CGRU network unit takes as input the temporal and spatial outputs of the 3DCNN network, and its structure is computed as follows:

Z_t = σ(W_xz * X_t + W_hz * H_{t-1})

R_t = σ(W_xr * X_t + W_hr * H_{t-1})

H'_t = f(W_xh * X_t + W_hh * (R_t ∘ H_{t-1}))

H_t = (1 - Z_t) ∘ H_{t-1} + Z_t ∘ H'_t

where Z_t denotes the update gate in the CGRU network structure; R_t denotes the reset gate; X_t denotes the radar echo map input at time t; H_t denotes the hidden-layer output at time t, and H_{t-1} that at time t-1; W_xz denotes the input-to-update-gate weights in the CGRU network; W_hz the hidden-layer-to-update-gate weights; W_xr the input-to-reset-gate weights; W_hr the hidden-layer-to-reset-gate weights; H'_t denotes the memory content of the hidden layer at time t; f denotes a nonlinear activation function; W_xh denotes the input-to-hidden-layer weights; W_hh the hidden-to-hidden weights; * denotes the convolution operation; ∘ denotes the Hadamard product, i.e. element-wise multiplication of matrices; the gate structures control how each unit screens the radar space-time information; and σ is the Sigmoid activation, S(x) = (1 + e^{-x})^{-1}, which constrains the gate values in the model to the range [0, 1];
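One CGRU update step can be sketched in numpy as follows. The 3x3 kernel size, the single-channel echo map, and tanh as the candidate activation f are illustrative assumptions, not choices stated in the patent:

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2D cross-correlation: a minimal stand-in for the
    * (convolution) operator in the CGRU equations."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgru_step(X_t, H_prev, W):
    """One CGRU update for a single-channel echo map, following the
    equations above; W is a dict of 3x3 kernels (hypothetical shapes)."""
    Z = sigmoid(conv2d_same(X_t, W['xz']) + conv2d_same(H_prev, W['hz']))
    R = sigmoid(conv2d_same(X_t, W['xr']) + conv2d_same(H_prev, W['hr']))
    H_cand = np.tanh(conv2d_same(X_t, W['xh']) + conv2d_same(R * H_prev, W['hh']))
    return (1 - Z) * H_prev + Z * H_cand  # Hadamard-gated blend of old and new state

rng = np.random.default_rng(0)
W = {k: rng.normal(0, 0.1, (3, 3)) for k in ('xz', 'hz', 'xr', 'hr', 'xh', 'hh')}
H = cgru_step(rng.normal(size=(8, 8)), np.zeros((8, 8)), W)
print(H.shape)  # (8, 8): the hidden state keeps the spatial grid
```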

Furthermore, the invention applies a batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid, improving network convergence speed and alleviating overfitting; this markedly enhances the model's space-time feature learning capability, gives the model stronger feature expression for multi-frame radar echo maps, and improves prediction accuracy.
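The BN-plus-ReLU combination mentioned above can be sketched as follows; the normalization axis (over the batch) and the shapes are illustrative assumptions:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over a batch of feature maps (axis 0);
    gamma/beta are the learnable scale and shift parameters."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def relu(x):
    return np.maximum(x, 0.0)

# A batch of 32 feature maps, 8x8, with a shifted and scaled distribution.
x = np.random.default_rng(1).normal(2.0, 3.0, size=(32, 8, 8))
y = relu(batch_norm(x))
print(abs(batch_norm(x).mean()) < 1e-6, (y >= 0).all())  # True True
```

BN keeps each spatial position zero-mean and unit-variance across the batch, which is what speeds up convergence; ReLU then passes only non-negative activations.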

Further, the radar echo image data in the training and prediction process are all constructed as a three-dimensional tensor X ∈ R^{T×W×H};

where R denotes the set of real numbers, T is the time dimension, and W, H are the row and column spatial dimensions, respectively; each single echo image is converted into a vector over multi-frame time dimensions on the spatial grid, and consecutive images are stacked in sequence to form the three-dimensional structure.

Has the advantages that: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:

the invention provides a deep learning method of a 3DCNN-CGRU coding prediction structure for the first time aiming at a radar echo proximity prediction task. Aiming at a 3DCNN-CGRU network structure, the dimension of echo image input data needs to be reconstructed first, and the time dimension and the space dimension of the data are respectively constructed; in the processes of space-time feature extraction and motion information learning, input and output are three-dimensional tensors, and conversion between states is three-dimensional tensor convolution operation, so that radar echo data have uniform dimensionality, all time and space characteristics are reserved, and radar echoes in the region are more comprehensively and accurately forecasted; the 3DCNN provided by the invention is firstly used for extracting local short-term space-time characteristics, so that spatial characteristic confusion caused by directly utilizing a CGRU network for learning is avoided, meanwhile, the CGRU structure can more fully learn the global long-term motion trend of forward and backward radar echoes, network parameters are reduced, and the convergence speed is accelerated; the method improves the fuzzy condition of the predicted echo image, solves the problems of easy loss of space-time information and low prediction precision, has obviously better overall performance than other radar echo adjacent prediction methods under various rainfall threshold values, has more accurate predicted future echo image, and fully proves the effectiveness of the method.

Drawings

FIG. 1 is a flow chart of the radar echo nowcasting method with strong space-time characteristics based on the 3DCNN-CGRU network;

fig. 2 is a diagram of a CGRU network structure.

Detailed Description

The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.

The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which specifically comprises the following steps:

(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of the meteorological data; then preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time and space dimensions; processing the data in three dimensions yields tensor data with complete space-time characteristics;

wherein the tensor data is a three-dimensional tensor X ∈ R^{T×W×H}, where R denotes the set of real numbers, T is the time dimension, and W, H are the row and column spatial dimensions, respectively;

the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N, where t denotes time and N denotes the length of the radar echo image sequence;

(2) constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, specifically comprising the following steps:

(2.1) acquiring continuous historical radar echo image sequences using a sliding window spanning a first continuous time series and a second continuous time series, wherein the first continuous time series is temporally contiguous with the second continuous time series;

(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;

(2.3) establishing a 3DCNN-CGRU network training model; inputting the tensor data of the historical radar echo images into the model and performing iterative prediction; computing the difference between the live data of the radar echo images in the continuous time series and the model's predicted output; and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, which indicates that training is complete and yields the 3DCNN-CGRU network model;

wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean square error MSE of the continuous radar echo image sequence:

MSE = (1/N) Σ_{n=1}^{N} Σ_A Σ_B ( Y_{n,A,B} - Ŷ_{n,A,B} )²

where MSE denotes the loss function value; Y denotes the real live data; Ŷ denotes the model prediction output data; N is the length of the continuous time sequence; n is the frame index; and A, B denote the abscissa and ordinate of the radar echo image, respectively.

(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;

Further, the prediction output data obtained by training on the echo images of each time frame of the first continuous time series corresponds to the live data of the echo images of each time frame of the second continuous time series; the iterative prediction iterates over the radar echo images of each time frame of the second continuous time series.

Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;

furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;

the 3DCNN is used for extracting local short-term space-time motion features of the continuous radar echo image sequence; the three-layer CGRU network is used for learning the global long-term space-time feature dependencies of the continuous radar echo image sequence and compressing the learned space-time features of the radar echo motion into a hidden state;

Further, the prediction network consists of a three-layer CGRU network and a 3DCNN network; it takes the output of the encoder as input, reconstructs the image in reverse from the current echo feature information, generates a future echo image sequence, and thereby obtains the weather nowcasting result.

Furthermore, convolutional neural networks are particularly suited to image data thanks to feature mapping, local connectivity, weight sharing, and similar properties. A conventional 2DCNN has strong feature extraction capability for single images, but when processing continuous echo images it ignores the influence of the connection between consecutive multi-frame images on prediction, so information about the motion trend between features is easily lost and the moving-image prediction problem cannot be solved. The invention therefore replaces the traditional 2DCNN with a constructed 3DCNN, computed as follows:

v_{ij}^{TWH} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} v_{(i-1)m}^{(W+p)(H+q)(T+r)} )

where v_{ij}^{TWH} denotes the output of the (T, W, H) unit of the j-th radar echo feature map in the i-th layer of the 3DCNN; T denotes the time dimension; W, H are the row and column spatial dimensions, respectively; f denotes a nonlinear activation function; b_{ij} denotes the bias of the j-th radar echo feature map in the i-th layer of the 3DCNN; w_{ijm}^{pqr} denotes the weight of the convolution kernel connected to the m-th feature map of layer (i-1); p, q, r denote the offsets of the convolution operation from the (T, W, H) position; v_{(i-1)m}^{(W+p)(H+q)(T+r)} denotes the output of the (W+p, H+q, T+r) unit of the m-th radar echo feature map in layer (i-1); and P_i, Q_i, R_i denote the sizes of the three dimensions of the convolution kernel;

Furthermore, the invention provides a CGRU network structure which, by adjusting the standard GRU network structure, changes the state-to-state transformation from a multiplication operation to a convolution operation, so that it can both establish temporal relationships and describe spatial characteristics, effectively solving the problem of spatial information loss during time-series transmission.

Each CGRU network unit takes as input the temporal and spatial outputs of the 3DCNN network, and its structure is computed as follows:

Z_t = σ(W_xz * X_t + W_hz * H_{t-1})

R_t = σ(W_xr * X_t + W_hr * H_{t-1})

H'_t = f(W_xh * X_t + W_hh * (R_t ∘ H_{t-1}))

H_t = (1 - Z_t) ∘ H_{t-1} + Z_t ∘ H'_t

where Z_t denotes the update gate in the CGRU network structure; R_t denotes the reset gate; X_t denotes the radar echo map input at time t; H_t denotes the hidden-layer output at time t, and H_{t-1} that at time t-1; W_xz denotes the input-to-update-gate weights in the CGRU network; W_hz the hidden-layer-to-update-gate weights; W_xr the input-to-reset-gate weights; W_hr the hidden-layer-to-reset-gate weights; H'_t denotes the memory content of the hidden layer at time t; f denotes a nonlinear activation function; W_xh denotes the input-to-hidden-layer weights; W_hh the hidden-to-hidden weights; * denotes the convolution operation; ∘ denotes the Hadamard product, i.e. element-wise multiplication of matrices; the gate structures control how each unit screens the radar space-time information; and σ is the Sigmoid activation, S(x) = (1 + e^{-x})^{-1}, which constrains the gate values in the model to the range [0, 1];

Furthermore, the invention applies a batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid, improving network convergence speed and alleviating overfitting; this markedly enhances the model's space-time feature learning capability, gives the model stronger feature expression for multi-frame radar echo maps, and improves prediction accuracy.

Further, the radar echo image data in the training and prediction process are all constructed as a three-dimensional tensor X ∈ R^{T×W×H};

where R denotes the set of real numbers, T is the time dimension, and W, H are the row and column spatial dimensions, respectively; each single echo image is converted into a vector over multi-frame time dimensions on the spatial grid, and consecutive images are stacked in sequence to form the three-dimensional structure.
