Attention-based multi-scale convolution speech emotion recognition method and device

Document No.: 1965060    Publication date: 2021-12-14

Reading note: This technology, "Attention-based multi-scale convolution speech emotion recognition method and device" (一种基于注意力的多尺度卷积语音情感识别方法及装置), was designed and created by 唐小煜, 陈嘉仪, 程慧慧 and 郑梦云 on 2021-08-11. Its main content is as follows: The invention relates to an attention-based multi-scale convolution speech emotion recognition method and device. The method comprises: constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a spatial attention layer, a second full connection layer and a softmax classifier; and inputting a spectrogram corresponding to the speech to be recognized into the trained speech emotion recognition model to obtain an emotion classification result of the speech to be recognized. The method embeds two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism in a deep learning neural network, enhancing useful information and suppressing information that is useless for the current task, so that the recognition result is more accurate.

1. A multi-scale convolution speech emotion recognition method based on attention is characterized by comprising the following steps:

constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a spatial attention layer, a second full connection layer and a softmax classifier;

inputting a spectrogram corresponding to a voice to be recognized into the trained voice emotion recognition model, so that the first convolution neural network layer extracts low-level voice features from the spectrogram to obtain a first feature map;

dividing the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map;

inputting the processing results of the two parallel channels into the first full-connection layer, and performing feature fusion processing to obtain a second feature map;

inputting the second feature map into the spatial attention layer, and performing attention weighting processing;

inputting the output result of the space attention layer into the second full-connection layer, and performing feature dimension reduction processing;

and inputting the output result of the second full connection layer into a softmax classifier to obtain an emotion classification result of the speech to be recognized.

2. The method of claim 1, wherein the step of segmenting the first feature map into two sub-feature maps comprises:

setting a parameter α, dividing the feature map along the channel dimension in the ratio (1-α):α, and performing average-pooling downsampling on the feature map portion with proportion α;

before the processing results of the two parallel channels are input into the feature fusion layer, the method further comprises the following steps:

and performing up-sampling on the feature map portion with proportion α.

3. The attention-based multi-scale convolution speech emotion recognition method of claim 2, characterized in that:

α = 0.8.

4. The attention-based multi-scale convolution speech emotion recognition method of claim 1, wherein the attention-weighting processing on the sub-feature map includes:

inputting the sub-feature map into the attention layer to obtain a new feature map;

and multiplying the new feature map with the original sub-feature map to obtain an attention-weighted feature map.

5. The attention-based multi-scale convolution speech emotion recognition method of claim 1, characterized in that:

the second convolutional neural network layer comprises two 5 × 5 convolutional layers, and a 2 × 2 max pooling layer is connected after the two convolutional layers.

6. The attention-based multi-scale convolution speech emotion recognition method of claim 1, characterized in that:

the first convolutional neural network layer includes a 5 x 5 convolutional layer and a batch normalization layer.

7. The attention-based multi-scale convolution speech emotion recognition method of claim 1, wherein the step of inputting the second feature map into the spatial attention layer and performing attention weighting processing includes:

calculating the attention score of the second feature map:

calculating, by using a softmax information selection mechanism, the probability α_i of selecting the i-th input information given a task-related query vector q and an input X.

8. The attention-based multi-scale convolution speech emotion recognition method of claim 1, characterized in that:

the second fully-connected layer comprises a Dropout layer.

9. An attention-based multi-scale convolution speech emotion recognition device, comprising:

the model construction module is used for constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a space attention layer, a second full connection layer and a softmax classifier;

the feature map extraction module is used for inputting a spectrogram corresponding to a voice to be recognized into the trained voice emotion recognition model, so that the first convolution neural network layer extracts low-level voice features from the spectrogram to obtain a first feature map;

the feature map segmentation module is used for segmenting the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map;

the feature map fusion module is used for inputting the processing results of the two parallel channels into the first full-connection layer to perform feature fusion processing to obtain a second feature map;

the attention weighting module is used for inputting the second feature map into the spatial attention layer and carrying out attention weighting processing;

the characteristic dimension reduction module is used for inputting the output result of the space attention layer into the second full-connection layer and performing characteristic dimension reduction processing;

and the emotion classification result output module is used for inputting the output result of the second full connection layer into the softmax classifier to obtain the emotion classification result of the voice to be recognized.

Technical Field

The invention relates to the field of speech emotion recognition, in particular to a multi-scale convolution speech emotion recognition method based on attention.

Background

Speech emotion recognition can help a machine understand a user's intention and improve the user experience in interactive application scenarios. With the in-depth development of human-computer interaction, speech emotion recognition (SER) has attracted wide attention from researchers, and how to extract features that can effectively distinguish emotional states is one of the difficulties of current research. Therefore, feature generation and fusion is one of the key steps of speech emotion recognition, that is, the original speech features are fed into a feature extractor to generate emotion-related information.

In recent years, deep learning algorithms have been widely used to generate highly abstract emotion-related feature representations, among which convolutional neural networks (CNNs) have become a mainstay of research. However, CNNs often perform convolution operations over redundant information during feature extraction, which wastes computation and storage space. To further improve the representation capability of the network, attention mechanisms have recently been widely applied to the feature fusion of different branches. The channel attention mechanism has great advantages in improving the performance of deep convolutional neural networks. To address the problem of information imbalance among feature channels, neural network models with multi-channel convolution have been introduced. Hu et al. propose a novel feature recalibration strategy in "Squeeze-and-Excitation Networks" that explicitly models the interdependencies between the channels of convolutional features to perform fusion between the feature channels.

Subsequently, some studies improved the SE module by capturing more complex channel dependencies or by incorporating additional spatial attention. SENet only considers that the importance of pixels may vary across channels; Woo et al. propose a simple but effective CBAM model in "CBAM: Convolutional Block Attention Module", which considers both the importance of pixels in different channels and the importance of pixels at different positions within the same channel. Although these methods achieve high accuracy, they often suffer from high model complexity and a large amount of computation.

To solve this problem, Wang et al. proposed the ECA module in "ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks", demonstrating that avoiding dimensionality reduction and performing appropriate cross-channel interaction can significantly reduce the complexity of the model while maintaining performance.

However, none of the above schemes can solve the information imbalance of each feature channel itself in feature extraction.

Disclosure of Invention

Based on this, the invention aims to provide an attention-based multi-scale convolution speech emotion recognition method, which improves the CBAM (Convolutional Block Attention Module) model that concatenates channel attention and spatial attention, embeds two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism in a deep learning neural network, enhances useful information and suppresses information useless for the current task, and helps the deep model capture more emotion-related information and find salient emotion regions.

In a first aspect, the invention provides an attention-based multi-scale convolution speech emotion recognition method, which comprises the following steps:

constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a spatial attention layer, a second full connection layer and a softmax classifier;

inputting a spectrogram corresponding to a voice to be recognized into the trained voice emotion recognition model, so that the first convolution neural network layer extracts low-level voice features from the spectrogram to obtain a first feature map;

dividing the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map;

inputting the processing results of the two parallel channels into the first full-connection layer, and performing feature fusion processing to obtain a second feature map;

inputting the second feature map into the spatial attention layer, and performing attention weighting processing;

inputting the output result of the space attention layer into the second full-connection layer, and performing feature dimension reduction processing;

and inputting the output result of the second full connection layer into a softmax classifier to obtain an emotion classification result of the speech to be recognized.

Further, dividing the first feature map into two sub-feature maps includes:

setting a parameter α, dividing the feature map along the channel dimension in the ratio (1-α):α, and performing average-pooling downsampling on the feature map portion with proportion α;

before the processing results of the two parallel channels are input into the feature fusion layer, the method further comprises the following steps:

and performing up-sampling processing on the feature map with the occupation ratio of alpha.

Further, α is 0.8.

Further, the attention weighting processing is carried out on the sub-feature map, and the processing comprises the following steps:

inputting the sub-feature map into the attention layer to obtain a new feature map;

and multiplying the new feature map with the original sub-feature map to obtain an attention-weighted feature map.

Further, the second convolutional neural network layer comprises two convolutional layers of 5 × 5, and a maximum pooling layer of 2 × 2 is connected behind both convolutional layers.

Further, the first convolutional neural network layer includes a 5 × 5 convolutional layer and a batch normalization layer.

Further, inputting the second feature map into the spatial attention layer, and performing attention weighting processing, including:

calculating the attention score of the second feature map:

calculating, by using a softmax information selection mechanism, the probability α_i of selecting the i-th input information given a task-related query vector q and an input X.

Further, a Dropout layer is included in the second fully-connected layer.

In a second aspect, the present invention further provides an attention-based multi-scale convolution speech emotion recognition apparatus, which includes:

the model construction module is used for constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a space attention layer, a second full connection layer and a softmax classifier;

the feature map extraction module is used for inputting a spectrogram corresponding to a voice to be recognized into the trained voice emotion recognition model, so that the first convolution neural network layer extracts low-level voice features from the spectrogram to obtain a first feature map;

the feature map segmentation module is used for segmenting the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map;

the feature map fusion module is used for inputting the processing results of the two parallel channels into the first full-connection layer to perform feature fusion processing to obtain a second feature map;

the attention weighting module is used for inputting the second feature map into the spatial attention layer and carrying out attention weighting processing;

the characteristic dimension reduction module is used for inputting the output result of the space attention layer into the second full-connection layer and performing characteristic dimension reduction processing;

and the emotion classification result output module is used for inputting the output result of the second full connection layer into the softmax classifier to obtain the emotion classification result of the voice to be recognized.

According to the attention-based multi-scale convolution speech emotion recognition method and device, a multi-scale convolutional neural network improved on the basis of the traditional CNN is used; without adding extra computation, it fully considers the information characteristics of feature maps at different scales and enlarges the receptive field of the convolution kernel, so that efficient emotional feature extraction is performed and the performance of SER is improved. Meanwhile, the CBAM model that concatenates channel attention and spatial attention is improved: two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism are embedded in the deep learning neural network, which enhances useful information, suppresses information useless for the current task, and helps the deep learning model capture more emotion-related information and find salient emotion regions.

For a better understanding and practice, the invention is described in detail below with reference to the accompanying drawings.

Drawings

FIG. 1 is a flowchart of a method for recognizing emotion of multi-scale convolution speech based on attention according to the present invention;

FIG. 2 is a schematic diagram of a speech emotion recognition model used in one embodiment of the present invention;

FIG. 3 is a flow chart illustrating the division of a first feature map into two sub-feature maps according to one embodiment of the present invention;

FIG. 4 is a schematic structural diagram of an attention-based multi-scale convolution speech emotion recognition apparatus according to the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.

The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims. In the description of the present application, it is to be understood that the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not necessarily used to describe a particular order or sequence, nor are they to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.

Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

To solve the problems in the background art, the present invention provides an attention-based multi-scale convolution speech emotion recognition method, as shown in fig. 1, the method includes the following steps:

s1: and constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a spatial attention layer, a second full connection layer and a softmax classifier.

The speech emotion recognition model used by the invention is based on the improvement of the CNN network and the CBAM model.

Convolutional Neural Networks (CNNs) are a class of feedforward neural networks that involve convolution computations and have a deep structure, and they are among the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure; they are therefore also called "Shift-Invariant Artificial Neural Networks" (SIANN).

The Convolutional Block Attention Module (CBAM) is an attention module for convolutional networks that combines spatial and channel attention. Compared with SENet's attention mechanism, which focuses only on channels, it can achieve better results.

As shown in FIG. 2, the speech emotion recognition model used in the invention embeds two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism in a deep learning neural network, which enhances useful information, suppresses information useless for the current task, and helps the deep model capture more emotion-related information and find salient emotion regions.
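For illustration only, the following is a minimal PyTorch sketch of how such a model could be assembled. The input size (a single-channel 128 × 128 spectrogram), the number of emotion classes (4), the number of base channels (32), the attention reduction ratio, and the use of a 1 × 1 convolution for the feature-fusion stage (so that the fused map keeps a spatial layout for the subsequent spatial attention layer) are all assumptions made for this sketch and are not values taken from the disclosure.

```python
# Minimal illustrative sketch (not the disclosed configuration): assumed input is a
# 1 x 128 x 128 spectrogram, 4 emotion classes, 32 base channels, alpha = 0.8.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel attention for one branch (SE-style squeeze and excitation)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc1 = nn.Linear(channels, hidden)
        self.fc2 = nn.Linear(hidden, channels)

    def forward(self, x):                                     # x: (B, C, H, W)
        z = x.mean(dim=(2, 3))                                # squeeze: channel statistics
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(z))))      # excitation: gating weights
        return x * s.view(x.size(0), -1, 1, 1)                # attention-weighted map


class ConvBlock(nn.Module):
    """Second CNN layer of a branch: two 5x5 convolutions followed by 2x2 max pooling."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2))

    def forward(self, x):
        return self.net(x)


class SpatialAttention(nn.Module):
    """Spatial attention over the fused map: softmax-normalized score per position."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)    # attention scoring

    def forward(self, x):                                     # x: (B, C, H, W)
        b, _, h, w = x.shape
        attn = torch.softmax(self.score(x).view(b, 1, -1), dim=-1).view(b, 1, h, w)
        return x * attn


class MultiScaleAttentionSER(nn.Module):
    def __init__(self, n_classes=4, base=32, alpha=0.8):
        super().__init__()
        # First CNN layer: 5x5 convolution + batch normalization.
        self.stem = nn.Sequential(
            nn.Conv2d(1, base, 5, padding=2), nn.BatchNorm2d(base), nn.ReLU())
        self.c_hi = round(base * (1 - alpha))                 # (1-alpha) share, full resolution
        c_lo = base - self.c_hi                                # alpha share, to be downsampled
        self.branch_hi = nn.Sequential(ChannelAttention(self.c_hi), ConvBlock(self.c_hi))
        self.branch_lo = nn.Sequential(ChannelAttention(c_lo), ConvBlock(c_lo))
        self.fuse = nn.Conv2d(base, base, 1)                   # fusion stage (assumed 1x1 conv)
        self.spatial = SpatialAttention(base)
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(0.5),
                                  nn.LazyLinear(n_classes))    # second FC layer + Dropout

    def forward(self, x):                                      # x: (B, 1, H, W) spectrogram
        x = self.stem(x)
        x_hi, x_lo = x[:, :self.c_hi], x[:, self.c_hi:]        # split channels (1-alpha):alpha
        x_lo = F.avg_pool2d(x_lo, 2)                           # downsample the alpha branch
        y_hi, y_lo = self.branch_hi(x_hi), self.branch_lo(x_lo)
        y_lo = F.interpolate(y_lo, size=y_hi.shape[2:])        # upsample back before fusion
        y = self.spatial(self.fuse(torch.cat([y_hi, y_lo], dim=1)))
        return torch.softmax(self.head(y), dim=-1)             # emotion class probabilities


model = MultiScaleAttentionSER()
print(model(torch.randn(2, 1, 128, 128)).shape)                # torch.Size([2, 4])
```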

S2: inputting a spectrogram corresponding to the speech to be recognized into the trained speech emotion recognition model, so that the first convolution neural network layer extracts low-level speech features from the spectrogram to obtain a first feature map.

The spectrogram is a two-dimensional image obtained by framing the original speech signal, applying a fast Fourier transform to each frame, stacking the frames over time, and representing the amplitude values as gray shades.
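As a rough sketch of this construction, the following numpy snippet frames a waveform, applies a fast Fourier transform to each frame, and stacks the log-magnitudes into a two-dimensional array; the frame length, hop size and FFT size are illustrative assumptions.

```python
# Illustrative spectrogram construction (assumed: 400-sample frames, hop 160, 512-point FFT).
import numpy as np

def spectrogram(signal, frame_len=400, hop=160, n_fft=512):
    # Slice the waveform into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # FFT of each frame; keep the magnitude and map it to a log (dB-like) scale.
    mag = np.abs(np.fft.rfft(frames, n=n_fft))          # (n_frames, n_fft // 2 + 1)
    return 20.0 * np.log10(mag + 1e-10).T               # (freq_bins, n_frames)

spec = spectrogram(np.random.randn(16000))               # e.g. 1 s of audio at 16 kHz
print(spec.shape)                                         # (257, 98)
```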

Preferably, the first convolutional neural network layer includes a 5 × 5 convolutional layer and a batch normalization layer.

The convolution formula is as follows:

$$u_c = \mathbf{v}_c * \mathbf{X} = \sum_{s=1}^{C'} \mathbf{v}_c^{s} * \mathbf{x}^{s}$$

where $*$ denotes convolution, the convolution kernels are $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_C]$, with $\mathbf{v}_c$ representing the parameters of the c-th convolution kernel, and the output is $\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_C]$. $\mathbf{X}$ is the input of the convolutional layer, and $\mathbf{v}_c^{s}$ is a two-dimensional spatial kernel, i.e. a single channel of $\mathbf{v}_c$ acting on the corresponding channel of $\mathbf{X}$.
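The following short PyTorch check illustrates the formula numerically: the c-th output channel of a multi-channel convolution equals the sum, over input channels, of two-dimensional convolutions with the corresponding single-channel kernels. The tensor sizes are arbitrary illustrative choices.

```python
# Numerical illustration of the formula above: output channel u_0 of a multi-channel
# convolution equals the sum over input channels s of 2-D convolutions with v_0^s.
import torch
import torch.nn.functional as F

X = torch.randn(1, 3, 8, 8)                 # input with C' = 3 channels
V = torch.randn(4, 3, 5, 5)                 # C = 4 kernels, each with 3 spatial slices
U = F.conv2d(X, V, padding=2)               # full multi-channel convolution
u0 = sum(F.conv2d(X[:, s:s + 1], V[0:1, s:s + 1], padding=2) for s in range(3))
print(torch.allclose(U[:, 0:1], u0, atol=1e-5))   # True
```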

S3: and dividing the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map.

Preferably, as shown in fig. 3, the dividing the first feature map into two sub-feature maps includes:

setting a parameter α, dividing the feature map along the channel dimension in the ratio (1-α):α, and performing average-pooling downsampling on the feature map portion with proportion α.

Regarding the setting of the parameter α, comparison experiments show that the larger α is, the more useful information the model can capture and the better its performance; however, the model also takes longer as α increases. Thus, in a preferred embodiment, α is set to 0.8.
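A small PyTorch sketch of this split is shown below, assuming a 32-channel first feature map and α = 0.8; the channel count and spatial size are illustrative only.

```python
# Illustrative split of the first feature map (assumed 32 channels) in the ratio
# (1 - alpha):alpha, with average-pooling downsampling of the alpha-share branch.
import torch
import torch.nn.functional as F

alpha = 0.8
x = torch.randn(1, 32, 64, 64)                  # first feature map (B, C, H, W)
c_hi = round(32 * (1 - alpha))                  # channels kept at full resolution
x_hi, x_lo = x[:, :c_hi], x[:, c_hi:]           # (1, 6, 64, 64) and (1, 26, 64, 64)
x_lo = F.avg_pool2d(x_lo, kernel_size=2)        # downsampled branch: (1, 26, 32, 32)
print(x_hi.shape, x_lo.shape)
```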

The two sub-feature maps are then fed into the two parallel channels respectively, so that the attention layer in each parallel channel performs attention weighting on its sub-feature map, and the second convolutional neural network layer performs low-level feature extraction on the attention-weighted sub-feature map.

Preferably, the attention weighting processing is performed on the sub-feature map, and includes:

inputting the sub-feature map into the attention layer to obtain a new feature map;

and multiplying the new feature map with the original sub-feature map to obtain an attention-weighted feature map.

In a particular embodiment, the attention weighting process includes:

s301: by generating channel-level statistics using the global average pool, the calculation formula for the c-th element of the statistics z generated by the spatial dimension H × W contracting the output U in step S2 is as follows:

wherein u isc(i, j) is the output of the c-th element in step S2.

S302: using a simple gating mechanism involving sigmoid activation, the formula is calculated as follows:

$$s = F_{ex}(z, \mathbf{W}) = \sigma\big(g(z, \mathbf{W})\big) = \sigma\big(W_2\,\delta(W_1 z)\big)$$

where $\delta$ denotes the ReLU activation function: the full connection layer operation $W_1 z$ is applied to the output z of the previous step, followed by ReLU, then the full connection layer operation $W_2\,\delta(W_1 z)$, followed by a sigmoid activation function to obtain s. $W_1$ and $W_2$ are the parameters of the two full connection layers, and together these operations constitute the gating mechanism.

S303: and (3) performing re-weighting operation, namely taking the output of the previous step as the importance of each characteristic channel, and then weighting the previous characteristic channel by channel through multiplication to finish the re-calibration of the original characteristic in the channel dimension. As shown in the following equation:

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $\tilde{x}_c$ is the final output of the attention module, and $F_{scale}(u_c, s_c)$ denotes the channel-wise multiplication between the scalar $s_c$ and the feature map $u_c$.
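The three steps S301 to S303 can be sketched functionally as follows; the channel count, spatial size and the shapes of W1 and W2 are illustrative assumptions.

```python
# Functional sketch of S301-S303 for one branch; shapes and weights are illustrative.
import torch
import torch.nn.functional as F

u = torch.randn(2, 26, 32, 32)                  # output U of step S2 for this branch
W1 = torch.randn(6, 26)                         # first full connection layer of the gate
W2 = torch.randn(26, 6)                         # second full connection layer of the gate

z = u.mean(dim=(2, 3))                          # S301: z_c = (1 / (H*W)) * sum_ij u_c(i, j)
s = torch.sigmoid(F.linear(F.relu(F.linear(z, W1)), W2))   # S302: sigma(W2 * delta(W1 z))
x_tilde = u * s.view(2, 26, 1, 1)               # S303: x~_c = s_c * u_c (channel-wise)
print(x_tilde.shape)                            # torch.Size([2, 26, 32, 32])
```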

Preferably, the second convolutional neural network layer comprises two 5 × 5 convolutional layers for further extracting low-level features; after the two convolutional layers, a 2 × 2 max pooling layer is connected, which retains only the strongest feature within each pooling window and discards the rest.

Then, before inputting the processing results of the two parallel channels into the feature fusion layer, the method further includes:

and performing up-sampling processing, namely deconvolution processing on the feature map with the proportion of alpha to restore the original size of the feature map.

S4: and inputting the processing results of the two parallel channels into the first full-connection layer, and performing feature fusion processing to obtain a second feature map.

S5: and inputting the second feature map into the spatial attention layer to carry out attention weighting processing. And establishing a global interdependence relation and reducing spatial redundant information.

In a particular embodiment, given a task-related query vector q, the attention variable z ∈ [1, N] denotes the index of the selected information, i.e. z = i indicates that the i-th input information is selected. For convenience of calculation, the softmax information selection mechanism is chosen. The probability $\alpha_i$ of selecting the i-th input information given q and X is calculated as shown in the following equation:

$$\alpha_i = p(z = i \mid X, q) = \mathrm{softmax}\big(s(x_i, q)\big) = \frac{\exp\big(s(x_i, q)\big)}{\sum_{j=1}^{N} \exp\big(s(x_j, q)\big)}$$

where the output $\alpha_i$ is the attention distribution and $s(x_i, q)$ is the attention scoring function.
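A small PyTorch sketch of this attention weighting is given below. Since the scoring function s(x_i, q) is not further specified here, a scaled dot product between each spatial position and the query vector is assumed purely for illustration.

```python
# Illustrative softmax attention over the N = H*W positions of the second feature map;
# the scoring function s(x_i, q) is assumed here to be a scaled dot product.
import torch

B, C, H, W = 1, 32, 8, 8
feat = torch.randn(B, C, H, W)                  # second feature map
x = feat.flatten(2).transpose(1, 2)             # N input vectors x_i, shape (B, N, C)
q = torch.randn(B, C)                           # task-related query vector
scores = (x @ q.unsqueeze(-1)).squeeze(-1) / C ** 0.5   # s(x_i, q), shape (B, N)
alpha = torch.softmax(scores, dim=-1)           # attention distribution alpha_i
weighted = feat * alpha.view(B, 1, H, W)        # spatially re-weighted feature map
print(round(alpha.sum().item(), 4), weighted.shape)      # 1.0 torch.Size([1, 32, 8, 8])
```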

S6: and inputting the output result of the space attention layer into the second full-connection layer for feature dimension reduction processing.

Preferably, a Dropout layer is added to the second full connection layer to reduce the feature parameters and avoid overfitting of the model.
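A minimal sketch of steps S6 and S7 (the second full connection layer with Dropout, followed by the softmax classifier) is shown below; the feature-map size, dropout rate and number of emotion classes are illustrative assumptions.

```python
# Illustrative S6-S7 head: flatten, full connection layer with Dropout, softmax classifier.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Flatten(), nn.Dropout(0.5), nn.Linear(32 * 8 * 8, 4))
attended = torch.randn(1, 32, 8, 8)             # output of the spatial attention layer
probs = torch.softmax(head(attended), dim=-1)   # emotion class probabilities
print(probs.shape, round(probs.sum().item(), 4))  # torch.Size([1, 4]) 1.0
```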

S7: and inputting the output result of the second full connection layer into a softmax classifier to obtain an emotion classification result of the speech to be recognized.

In a second aspect, corresponding to the foregoing method, the present invention further provides an attention-based multi-scale convolution speech emotion recognition apparatus, as shown in fig. 4, the apparatus includes:

the model construction module is used for constructing a speech emotion recognition model, wherein the model comprises a first convolutional neural network layer, two parallel channels consisting of an attention layer and a second convolutional neural network layer, a first full connection layer, a space attention layer, a second full connection layer and a softmax classifier;

the feature map extraction module is used for inputting a spectrogram corresponding to a voice to be recognized into the trained voice emotion recognition model, so that the first convolution neural network layer extracts low-level voice features from the spectrogram to obtain a first feature map;

the feature map segmentation module is used for segmenting the first feature map into two sub-feature maps, and feeding the two sub-feature maps into two parallel channels respectively, so that each parallel channel performs parallel attention weighting processing and low-level feature extraction processing on one sub-feature map;

the feature map fusion module is used for inputting the processing results of the two parallel channels into the first full-connection layer to perform feature fusion processing to obtain a second feature map;

the attention weighting module is used for inputting the second feature map into the spatial attention layer and carrying out attention weighting processing;

the characteristic dimension reduction module is used for inputting the output result of the space attention layer into the second full-connection layer and performing characteristic dimension reduction processing;

and the emotion classification result output module is used for inputting the output result of the second full connection layer into the softmax classifier to obtain the emotion classification result of the voice to be recognized.

Compared with the CNN network and the CBAM model, the attention-based multi-scale convolution model fully considers the information characteristics of feature maps at different scales and enlarges the receptive field of the convolution kernel without adding extra computation, thereby performing efficient emotional feature extraction. Therefore, it can significantly improve speech emotion recognition accuracy. Convolutional neural networks (CNNs) have been applied to tasks in various fields of artificial intelligence since being proposed by Yann LeCun in 1998, and have achieved success in the field of speech emotion recognition. With local perception, CNNs can model the local structural information of spectral features, and their weight sharing and pooling techniques give them strong generality and robustness. The CBAM model proposed by Woo et al. in 2018 concatenates a channel attention mechanism and a spatial attention mechanism, taking multiple aspects of feature information into account. On the basis of using a CNN as the convolutional backbone, the model proposed by the invention adds two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism in view of the imbalance of information across different feature channels.

The comparison of the accuracy of speech emotion recognition using different models is shown in the following table:

TABLE 1 weighted average of model accuracy

Model      CASIA (WA%)
CNN 62.77
CBAM 90.87
Proposed 94.07

On the CASIA speech dataset, the weighted average accuracy of the model provided by the invention is far higher than that of the CNN and CBAM models; in particular, it is 3.2 percentage points higher than the 90.87% of the CBAM model. This shows that the proposed model has the ability to filter redundant information and mine deep features, and can significantly improve the accuracy of speech emotion recognition.

The invention provides an attention-based multi-scale convolution speech emotion recognition method and device, and discloses a multi-scale convolutional neural network improved on the basis of the traditional CNN. Without adding extra computation, it fully considers the information characteristics of feature maps at different scales, enlarges the receptive field of the convolution kernel, and performs efficient emotional feature extraction, thereby improving the performance of speech emotion recognition (SER). Meanwhile, the CBAM model that concatenates channel attention and spatial attention is improved: two parallel channel attention mechanisms and a channel-fusing spatial attention mechanism are embedded in the deep learning neural network, enhancing useful information and suppressing information useless for the current task, which helps the deep learning model capture more emotion-related information and find salient emotion regions.

The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention.
