Method for embedding and locating watermarks based on the audio transform domain

Document No.: 381820    Published: 2021-12-10

Reading note: This technology, "Method for embedding and locating watermarks based on the audio transform domain", was designed and created by 李平 and 蒋升 on 2021-09-14. Main content: The invention discloses a method for embedding and locating watermarks based on the audio transform domain, belonging to the field of audio digital watermarking. The embedding method comprises: S1: framing the original audio, then sampling and DCT-transforming each frame to obtain the corresponding sub-bands and their DCT coefficients; S2: performing VAD detection and frame splicing on the 6 kHz-7 kHz high-frequency sub-band within a speech frame to obtain speech segments, where the VAD detection and frame splicing comprise obtaining the sub-band's speech energy spectrum, generating FBANK features of the speech signal, and splicing frames into speech data; S3: generating watermark bits in the speech segment from the watermark information and synchronization information; S4: embedding a watermark at each watermark bit in the speech segment to obtain a watermarked signal; S5: applying an IDCT transform to the watermarked signal to obtain the watermarked audio signal. The watermark added by the invention has excellent transparency, with no obvious perceptual distortion.

1. A method for embedding a watermark based on the audio transform domain, characterized by comprising the following steps:

S1: framing the original audio, then sampling and DCT-transforming each frame to obtain the corresponding sub-bands and their DCT coefficients;

S2: performing VAD detection and frame splicing on the 6 kHz-7 kHz high-frequency sub-band within a speech frame to obtain speech segments; wherein the VAD detection and frame splicing comprise: obtaining the speech energy spectrum of the sub-band, generating FBANK features of the speech signal, and splicing frames to form speech data;

S3: in the speech segment, generating watermark bits from the watermark information and the synchronization information;

S4: embedding a watermark at each watermark bit in the speech segment to obtain a watermarked signal;

S5: applying an IDCT transform to the watermarked signal to obtain the watermarked audio signal.

2. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein step S2 further comprises: judging whether the speech data is a speech segment; if so, proceeding to step S3, and if not, selecting the next frame for VAD detection.

3. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein step S2 specifically comprises the following steps:

S201: applying a Hanning window to the high-frequency sub-band, then performing an FFT (fast Fourier transform) to obtain the speech energy spectrum;

S202: passing the speech energy spectrum through a bank of Mel-scale triangular filters and taking the logarithm to generate the FBANK features of the speech signal, then selecting the 2nd to 6th frequency bands of the speech energy spectrum according to the FBANK features;

S203: taking the frame corresponding to the 2nd to 6th frequency bands as the current frame, and splicing its preceding 5 frames and following 5 frames with it to form 11 frames of speech data;

S204: feeding the spliced speech data into the fully connected layers to obtain the speech segment.

4. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein step S3 specifically comprises: in the speech segment, spreading the spectrum according to the binary digits of the watermark, using a noise generator to generate M linearly uncorrelated AWGN sequences as watermark bits from the watermark information and synchronization information, the spacing between adjacent sequence vectors being equal to the number of sampling points of the high-frequency sub-band.

5. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein in step S4 a synchronization code, a watermark, a synchronization code, and a watermark are embedded in sequence in the first 4 frames of each speech segment.

6. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein step S4 comprises:

embedding the watermark at the corresponding sampling points in the 6 kHz-7 kHz high-frequency sub-band; the embedding satisfies the following conditions: when the current watermark value is greater than 0, if the mean of the current sub-band's sampling points is smaller than that of the previous frame, then FrameDCT = DCT / Var_Dct_value × Pre_Dct_value; otherwise, no operation is performed; when the watermark value is less than or equal to 0, no operation is performed;

wherein FrameDCT is the watermark value embedded over the current sub-band range; DCT is the sampling-point value of the current sub-band; Var_Dct_value is the mean of the sampling points of the current sub-band; and Pre_Dct_value is the mean of the sampling points of the previous sub-band.

7. The method for embedding a watermark based on the audio transform domain according to claim 1, wherein the IDCT transform in step S5 is the inverse of the DCT transform in step S1.

8. A method for locating a watermark based on the audio transform domain, characterized by comprising the following steps:

S11: framing the watermarked audio, then sampling and DCT-transforming each frame to obtain the frequency-domain sub-band corresponding to each sampling point;

S12: performing VAD detection and frame splicing on all frequency bands within a speech frame to obtain speech segments; wherein the VAD detection and frame splicing comprise: obtaining the speech energy spectrum of the sub-band, generating FBANK features of the speech signal, and splicing frames to form speech data;

S13: detecting whether the synchronization code in the speech segment matches the preset synchronization code to judge whether the frame is a synchronization frame; if it is a synchronization frame, proceeding to step S14; if not, returning to step S12 and selecting the next speech frame for judgment;

S14: identifying the watermark segment based on the synchronization code, then obtaining the position of the current watermark by computing the DCT coefficients.

9. The method for locating a watermark based on the audio transform domain according to claim 8, wherein step S12 specifically comprises the following steps:

S1201: applying a Hanning window to all frequency bands within a speech frame, then performing an FFT (fast Fourier transform) to obtain the speech energy spectrum;

S1202: passing the speech energy spectrum through a bank of Mel-scale triangular filters and taking the logarithm to generate the FBANK features of the speech signal, then selecting the 2nd to 6th frequency bands of the speech energy spectrum according to the FBANK features;

S1203: taking the frame corresponding to the 2nd to 6th frequency bands as the current frame, and splicing its preceding 5 frames and following 5 frames with it to form 11 frames of speech data;

S1204: feeding the spliced speech data into the fully connected layers; if the output is 0, the spliced speech is judged not to be a speech segment; if the output is 1, the spliced speech is judged to be a speech segment.

10. The method for locating a watermark based on the audio transform domain according to claim 8, wherein the correlation between the DCT coefficients and the pseudo-random noise blocks in step S14 is calculated as follows:

R_Sg = S(W) · G(W)

wherein S(W) denotes the matrix of frequency-domain DCT coefficients of the watermarked signal, G(W) denotes the matrix of pseudo-random noise blocks, and multiplying the two matrices yields the cross-correlation vector R_Sg.

Technical Field

The invention belongs to the field of audio digital watermarking, and particularly relates to a method for embedding and locating watermarks based on the audio transform domain.

Background

Digital watermarking is an information-hiding technique. An audio digital watermarking algorithm embeds a digital watermark into an audio file (such as wav, mp3, or avi) through a watermark embedding algorithm, without significantly affecting the original sound quality of the file, or at least without the change being audible to the human ear. Conversely, the audio digital watermark can be extracted intact from the host audio file through a watermark extraction algorithm; the embedded and extracted watermark is what is called the audio digital watermark.

Compared with image watermarking, embedding watermarks in digital audio signals is technically more difficult, mainly because the human auditory system is more sensitive than the visual system. The human auditory system is particularly sensitive to additive noise, so if an additive rule is used to embed a watermark in the time domain, it is difficult to reach a reasonable compromise between the robustness and imperceptibility of the watermark. Although the dynamic range of the auditory system is large, it is still possible to embed a watermark in an audio signal by exploiting its other characteristics, for example the masking effect of the auditory system or its insensitivity to absolute phase. The masking properties of the auditory system demonstrate the feasibility of adding watermarks to an audio signal.

Audio watermarks can basically be divided into two broad categories: time domain and transform domain. The idea of the time domain is to superimpose the watermark directly on the time-domain signal, as in the LSB method, the echo method, the splicing method, and the pitch-extraction method. The idea of the transform domain, which is faster and more convenient computationally, is to transform the time-domain signal into another domain, embed the watermark there, and then inverse-transform back to the original domain, using for example the FFT, DCT, DWT, or SVD. However, the original DCT watermarking algorithm adds too much pseudo-random noise, which is noticeable to human hearing, and the watermark is easily attacked.

In view of the above, the present invention is particularly proposed.

Disclosure of Invention

The invention aims to provide a method for embedding and locating watermarks based on the audio transform domain, in which the added watermark has excellent transparency and no obvious perceptual distortion.

In order to achieve the above object, the present invention provides a method for embedding a watermark based on the audio transform domain, comprising the following steps:

S1: framing the original audio, then sampling and DCT-transforming each frame to obtain the corresponding sub-bands and their DCT coefficients;

S2: performing VAD detection and frame splicing on the 6 kHz-7 kHz high-frequency sub-band within a speech frame to obtain speech segments; wherein the VAD detection and frame splicing comprise: obtaining the speech energy spectrum of the sub-band, generating FBANK features of the speech signal, and splicing frames to form speech data;

S3: in the speech segment, generating watermark bits from the watermark information and the synchronization information;

S4: embedding a watermark at each watermark bit in the speech segment to obtain a watermarked signal;

S5: applying an IDCT transform to the watermarked signal to obtain the watermarked audio signal.

Further, step S2 also comprises the following step: judging whether the speech data is a speech segment; if so, proceeding to step S3, and if not, selecting the next frame for VAD detection.

Further, step S2 specifically comprises the following steps:

S201: applying a Hanning window to the high-frequency sub-band, then performing an FFT (fast Fourier transform) to obtain the speech energy spectrum;

S202: passing the speech energy spectrum through a bank of Mel-scale triangular filters and taking the logarithm to generate the FBANK features of the speech signal, then selecting the 2nd to 6th frequency bands of the speech energy spectrum according to the FBANK features;

S203: taking the frame corresponding to the 2nd to 6th frequency bands as the current frame, and splicing its preceding 5 frames and following 5 frames with it to form 11 frames of speech data;

S204: feeding the spliced speech data into the fully connected layers to obtain the speech segment.

Further, step S3 specifically comprises: in the speech segment, spreading the spectrum according to the binary digits of the watermark, using a noise generator to generate M linearly uncorrelated AWGN sequences as watermark bits from the watermark information and synchronization information, the spacing between adjacent sequence vectors being equal to the number of sampling points of the high-frequency sub-band.

Further, in step S4, a synchronization code, a watermark, a synchronization code, and a watermark are embedded in sequence in the first 4 frames of each speech segment.

Further, step S4 comprises:

embedding the watermark at the corresponding sampling points in the 6 kHz-7 kHz high-frequency sub-band; the embedding satisfies the following conditions: when the current watermark value is greater than 0, if the mean of the current sub-band's sampling points is smaller than that of the previous frame, then FrameDCT = DCT / Var_Dct_value × Pre_Dct_value; otherwise, no operation is performed; when the watermark value is less than or equal to 0, no operation is performed;

wherein FrameDCT is the watermark value embedded over the current sub-band range; DCT is the sampling-point value of the current sub-band; Var_Dct_value is the mean of the sampling points of the current sub-band; and Pre_Dct_value is the mean of the sampling points of the previous sub-band.

Further, the IDCT transform in step S5 is the inverse of the DCT transform in step S1.

The invention also provides a method for locating a watermark based on the audio transform domain, comprising the following steps:

S11: framing the watermarked audio, then sampling and DCT-transforming each frame to obtain the frequency-domain sub-band corresponding to each sampling point;

S12: performing VAD detection and frame splicing on all frequency bands within a speech frame to obtain speech segments; wherein the VAD detection and frame splicing comprise: obtaining the speech energy spectrum of the sub-band, generating FBANK features of the speech signal, and splicing frames to form speech data;

S13: detecting whether the synchronization code in the speech segment matches the preset synchronization code to judge whether the frame is a synchronization frame; if it is a synchronization frame, proceeding to step S14; if not, returning to step S12 and selecting the next speech frame for judgment;

S14: identifying the watermark segment based on the synchronization code, then obtaining the position of the current watermark by computing the DCT coefficients.

Further, step S12 specifically comprises the following steps:

S1201: applying a Hanning window to all frequency bands within a speech frame, then performing an FFT (fast Fourier transform) to obtain the speech energy spectrum;

S1202: passing the speech energy spectrum through a bank of Mel-scale triangular filters and taking the logarithm to generate the FBANK features of the speech signal, then selecting the 2nd to 6th frequency bands of the speech energy spectrum according to the FBANK features;

S1203: taking the frame corresponding to the 2nd to 6th frequency bands as the current frame, and splicing its preceding 5 frames and following 5 frames with it to form 11 frames of speech data;

S1204: feeding the spliced speech data into the fully connected layers; if the output is 0, the spliced speech is judged not to be a speech segment; if the output is 1, the spliced speech is judged to be a speech segment.

Further, in step S14, the correlation between the DCT coefficients and the pseudo-random noise blocks is calculated as follows:

R_Sg = S(W) · G(W)

wherein S(W) denotes the matrix of frequency-domain DCT coefficients of the watermarked signal, G(W) denotes the matrix of pseudo-random noise blocks, and multiplying the two matrices yields the cross-correlation vector R_Sg.

Compared with prior-art algorithms, the method for embedding and locating watermarks based on the audio transform domain provided by the invention performs the VAD operation over the full band, so that the watermark is recovered more accurately; and, for the screened sub-bands that satisfy the conditions, the stepwise embedding rule of the watermarking system adds a smaller amount of pseudo-random-code noise, so the watermark is more transparent and less perceptible.

Drawings

Fig. 1 is a flowchart of the method for embedding a watermark based on the audio transform domain in this embodiment.

Fig. 2 is a flowchart of the method for locating a watermark based on the audio transform domain in this embodiment.

Fig. 3 is a diagram illustrating the Hanning window function used in this embodiment.

Fig. 4 is a schematic diagram of the fully connected layers used in this embodiment.

Detailed Description

To help those skilled in the art better understand the scheme of the present invention, it is described in further detail below with reference to specific embodiments.

As shown in fig. 1, one embodiment of the present invention is a method for embedding a watermark based on the audio transform domain. It selects a specific frequency band for the VAD operation, splices the preceding 5 frames and following 5 frames to form 11 frames of speech data, and embeds the watermark at sampling points within a specified bandwidth, realizing a new watermark embedding method.

The method for embedding the watermark specifically comprises the following steps:

s1: the original audio is framed, and each frame is subjected to sampling and DCT (Discrete Cosine Transform) to obtain a frequency domain sub-band corresponding to each sampling point and a DCT coefficient of each frame.

The DCT coefficients are calculated as follows:

C(0) = (1/√N) Σ_{x=0}^{N-1} y(x)

C(i) = √(2/N) Σ_{x=0}^{N-1} y(x) cos[(2x+1)iπ / (2N)], i = 1, 2, 3, …, N-1

wherein C(0) denotes the 0th DCT coefficient; N is the number of signal sampling points (N = 1024 in the present invention); C(i) denotes the ith DCT coefficient; and y(x) denotes the original signal.
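As a minimal illustrative sketch (not the patent's implementation), step S1 can be reproduced with SciPy's orthonormal DCT-II, which matches the C(0)…C(N-1) definition above; the function and variable names here are hypothetical:

```python
import numpy as np
from scipy.fftpack import dct

FRAME_LEN = 1024  # N, the number of sampling points per frame given above

def frame_and_dct(audio: np.ndarray) -> np.ndarray:
    """Split the audio into non-overlapping N-sample frames and DCT each frame."""
    n_frames = len(audio) // FRAME_LEN
    frames = audio[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
    # Orthonormal DCT-II reproduces the C(i) coefficients defined above.
    return dct(frames, type=2, norm="ortho", axis=1)
```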

S2: and performing VAD (Voice Activity Detection) and framing operation on a high-frequency sub-band range specified by DCT (discrete cosine transform) in a frame of Voice to acquire Voice segments. The purpose of this step is to identify and eliminate long periods of silence from the speech signal stream. Wherein the frequency division of the high frequency sub-bands specified by the DCT ranges from 6kHz to 7 kHZ.

In this step, the VAD detection and frame splicing comprise: obtaining the speech energy spectrum of the sub-band, generating FBANK features of the speech signal, and splicing frames to form speech data; then judging whether the speech data is a speech segment, and if so, proceeding to step S3, otherwise selecting the next frame for VAD detection.

Specifically, the step S2 specifically includes the following steps:

s201: and performing Hanning window operation on a high-frequency sub-band (6kHz-7kHZ) designated by DCT (discrete cosine Transform), and then performing FFT (Fast Fourier Transform) to obtain a voice energy spectrum.

The hanning window function used in this step is shown in fig. 3.

In this step, windowing means multiplying by the Hanning window function, and the Fourier expansion follows the windowing. Windowing the audio frames has the following advantages: the signal becomes more continuous overall, avoiding the Gibbs effect; and after windowing, the originally aperiodic speech signal exhibits some of the characteristics of a periodic function.

S202: the voice energy spectrum is passed through a set of Mel-scale triangular filter banks and then logarithmized, FBANK features of the voice signal are generated, and 2 nd to 6 th frequency bands of the voice energy spectrum are selected according to the FBANK features.

S203: and selecting the corresponding frames of 2 to 6 frequency bands as the current frame, and splicing the first 5 frames and the last 5 frames of the current frame to form 11 frames of voice data.

S204: inputting the voice data after splicing into the full connection layer, and if the output result of the full connection layer is 0, judging whether the voice after splicing is a voice section; and if the output result of the full connection layer is 1, judging that the speech after frame splicing is a speech segment.

As shown in fig. 4, the spliced data passes through the fully connected layers: the first layer has 128 nodes, the second 128 nodes, the third 64 nodes, the fourth 64 nodes, the fifth 32 nodes, the sixth 32 nodes, and the seventh 2 nodes; the final output label is 0 or 1.
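The following sketch shows the frame splicing of S203 and a forward pass through the seven-layer network of fig. 4. The ReLU activation, the random placeholder weights, and the function names are assumptions; in practice the network would be trained and its parameters loaded:

```python
import numpy as np

LAYERS = [128, 128, 64, 64, 32, 32, 2]  # node counts per layer, per fig. 4

def splice(feat_frames: np.ndarray, t: int) -> np.ndarray:
    """Stack frames t-5 .. t+5 (11 frames, edges repeated) into one vector."""
    idx = np.clip(np.arange(t - 5, t + 6), 0, len(feat_frames) - 1)
    return feat_frames[idx].ravel()

def random_weights(in_dim: int, seed: int = 0):
    """Placeholder weights; a real system would load trained parameters."""
    dims = [in_dim] + LAYERS
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((o, i)) * 0.01, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def vad_label(x: np.ndarray, weights) -> int:
    """Forward pass; returns 1 for a speech segment, 0 otherwise (S204)."""
    for i, (W, b) in enumerate(weights):
        x = W @ x + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers (assumed)
    return int(np.argmax(x))
```

With the five FBANK bands per frame from S202, the spliced input has 11 × 5 = 55 dimensions.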

S3: in the speech section, watermark bits are generated according to the synchronization code information and the watermark information.

A synchronization code, a watermark, a synchronization code, and a watermark are embedded in sequence in the first 4 frames of each speech segment; the embedded information is structured as follows:

Synchronization code | Watermark | Synchronization code | Watermark

When the watermark is extracted, the watermark contained in the current data can be located accurately through the synchronization code information, which is distinct from the watermark information.

Specifically, in the speech segment, the spectrum is spread according to the binary digits of the watermark, and M linearly uncorrelated sequence vectors (with values in [-1, 1]) are generated as watermark bits from the watermark information and synchronization information; the spacing between adjacent sequence vectors equals the number of sampling points of the DCT-specified high-frequency sub-band (6 kHz-7 kHz).
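A minimal sketch of this step is given below. The claims describe the sequences as AWGN while the description mentions vectors in [-1, 1]; this sketch uses zero-mean Gaussian noise, and the seed-as-key convention is an assumption:

```python
import numpy as np

def make_watermark_bits(M: int, L: int, seed: int = 42) -> np.ndarray:
    """Generate M (approximately) linearly uncorrelated AWGN sequences of
    length L, one per watermark/sync bit value; the detector must use the
    same seed, so the seed acts as the watermark key."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((M, L))
```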

S4: and embedding a watermark into each watermark bit in the voice section to obtain a signal with the watermark.

Specifically, the watermark is embedded at the corresponding sampling points in the 6 kHz-7 kHz high-frequency sub-band; the embedding satisfies the following conditions: when the current watermark value is greater than 0, if the mean of the current sub-band's sampling points is smaller than that of the previous frame, then FrameDCT = DCT / Var_Dct_value × Pre_Dct_value; otherwise, no operation is performed; when the watermark value is less than or equal to 0, no operation is performed.

Wherein FrameDCT is the watermark value embedded over the current sub-band range; DCT is the sampling-point value of the current sub-band; Var_Dct_value is the mean of the sampling points of the current sub-band; and Pre_Dct_value is the mean of the sampling points of the previous sub-band.
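A sketch of this embedding rule, with names mirroring the variables above (the function signature and array shapes are illustrative assumptions):

```python
import numpy as np

def embed_bit(dct_vals: np.ndarray, pre_dct_value: float,
              watermark_val: float) -> np.ndarray:
    """Apply the S4 rule to the 6-7 kHz DCT samples of one frame."""
    var_dct_value = dct_vals.mean()  # mean of the current sub-band's samples
    if watermark_val > 0 and var_dct_value < pre_dct_value:
        # FrameDCT = DCT / Var_Dct_value * Pre_Dct_value
        return dct_vals / var_dct_value * pre_dct_value
    return dct_vals.copy()           # otherwise: no operation
```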

S5: the watermarked signal is subjected to an IDCT transform (inverse discrete cosine transform) to obtain a watermarked audio signal.

The IDCT transform is computed as follows:

y(x) = (1/√N) C(0) + √(2/N) Σ_{i=1}^{N-1} C(i) cos[(2x+1)iπ / (2N)], x = 0, 1, …, N-1

This is the inverse of the DCT transform in step S1, and the result obtained is the audio signal with the watermark embedded.

With the above method for embedding a watermark based on the audio transform domain, the VAD operation performed on the specified sub-band screens out a suitable sub-band for watermarking, and when the watermark is embedded in step S4, only a small amount of pseudo-random-code noise is added, making the watermark more transparent.

Furthermore, as shown in fig. 2, another embodiment of the present invention is a method for locating a watermark based on the audio transform domain, which reads the watermark by performing the VAD operation over the full band and splicing frames to form 11 frames of speech data.

The method for positioning the watermark specifically comprises the following steps:

s11: and framing the audio with the watermark, and sampling and DCT (discrete cosine transformation) each sub-frame to obtain a frequency domain sub-band corresponding to each sampling point.

S12: and performing VAD detection and frame splicing operation on all frequency bands in a frame of voice to obtain voice sections. Wherein the VAD detection and framing operation comprises: acquiring a speech energy spectrum of a sub-band, generating FBANK characteristics of a speech signal, and framing to form speech data.

Specifically, the VAD detection operation includes the steps of:

s1201: all frequency bands in a frame of voice are subjected to a hanning window operation, and then subjected to FFT (Fast Fourier Transform) to obtain a voice energy spectrum.

The hanning window function used in this step is shown in fig. 3.

In this step, windowing means multiplying by the Hanning window function, and the Fourier expansion follows the windowing. Windowing the audio frames has the following advantages: the signal becomes more continuous overall, avoiding the Gibbs effect; and after windowing, the originally aperiodic speech signal exhibits some of the characteristics of a periodic function.

S1202: the voice energy spectrum is passed through a set of Mel-scale triangular filter banks and then logarithmized to generate FBANK features of the voice signal, and 2 nd to 6 th frequency bands of the voice energy spectrum are selected according to the FBANK features.

S1203: and taking the corresponding frames of 2 to 6 frequency bands as the current frame, and splicing the first 5 frames and the last 5 frames of the current frame to form 11 frames of voice data.

S1204: inputting the voice data after splicing into the full connection layer, and if the output result of the full connection layer is 0, judging whether the voice after splicing is a voice section; and if the output result of the full connection layer is 1, judging that the speech after frame splicing is a speech segment.

As shown in fig. 4, the spliced data passes through the fully connected layers: the first layer has 128 nodes, the second 128 nodes, the third 64 nodes, the fourth 64 nodes, the fifth 32 nodes, the sixth 32 nodes, and the seventh 2 nodes; the final output label is 0 or 1.

S13: whether the synchronization code in the speech segment is identical with the set synchronization code is detected, if so, the position of the synchronization frame is found, the frame is determined to be a synchronization frame, and step S41 is performed, and if not, the frame is not a synchronization frame, the process returns to step S12, and the next frame of speech is selected for determination.

S14: and identifying the watermark section based on the synchronous code, and further acquiring the position of the current watermark by calculating the DCT coefficient.

Specifically, the watermark segment is identified based on the synchronization code; the correlation between the DCT coefficients of the watermark segment's corresponding sub-band and the pseudo-random noise blocks is then calculated, and the vector index of the maximum correlation is extracted as the bit of the current watermark.

The DCT coefficients are calculated as follows:

C(0) = (1/√N) Σ_{x=0}^{N-1} y(x)

C(i) = √(2/N) Σ_{x=0}^{N-1} y(x) cos[(2x+1)iπ / (2N)], i = 1, 2, 3, …, N-1

wherein C(0) denotes the 0th DCT coefficient; N is the number of signal sampling points; C(i) denotes the ith DCT coefficient; and y(x) denotes the original signal.

The correlation between the DCT coefficients and the pseudo-random noise blocks is calculated as follows:

R_Sg = S(W) · G(W)

wherein S(W) denotes the matrix of frequency-domain DCT coefficients of the watermarked signal, G(W) denotes the matrix of pseudo-random noise blocks, and multiplying the two matrices yields the cross-correlation vector R_Sg.
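A sketch of this detector follows; it correlates the recovered sub-band DCT coefficients against each stored noise sequence (from the generation sketch above) and returns the index of the largest correlation as the recovered bit. The function names are illustrative:

```python
import numpy as np

def detect_bit(subband_dct: np.ndarray, pn_sequences: np.ndarray) -> int:
    """pn_sequences has shape (M, L); subband_dct has length L."""
    r_sg = pn_sequences @ subband_dct  # cross-correlation vector R_Sg = S(W) . G(W)
    return int(np.argmax(r_sg))        # vector index of the maximum correlation
```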

With the above method for locating a watermark based on the audio transform domain, the VAD operation is performed over the full band to identify speech segments, so speech segments can be quickly screened out and the watermark identified.

The inventive concept has been explained in detail herein using specific examples, which serve only to aid understanding of the core concepts of the invention. It should be understood that any obvious modifications, equivalents, and other improvements made by those skilled in the art without departing from the spirit of the present invention fall within the scope of the present invention.
