Sound abnormality detection method, sound abnormality detection device, computer device, and storage medium

Document No.: 70700 · Published: 2021-10-01

Reading note: the technical solution "Sound abnormality detection method, sound abnormality detection device, computer device, and storage medium" was designed and created by Si Shijing and Wang Jianzong on 2021-06-30. Main content: the present application relates to intelligent decision-making technology, and in particular to classification models, and provides a sound anomaly detection method, apparatus, computer device, and storage medium. The method comprises: acquiring audio data to be detected; segmenting the audio data to be detected to obtain a plurality of audio sequences; performing feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data; classifying the plurality of audio feature data and determining a category identifier corresponding to each of the plurality of audio feature data; determining target audio feature data among the plurality of audio feature data according to the category identifiers corresponding to the plurality of audio feature data; and outputting abnormality alarm information according to the target audio feature data. The present application also relates to blockchain technology: the obtained abnormality alarm information can be stored in a blockchain.

1. A method for detecting a sound abnormality, the method comprising:

acquiring audio data to be detected;

segmenting the audio data to be detected to obtain a plurality of audio sequences;

performing feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data;

classifying the plurality of audio feature data, and determining a category identifier corresponding to each of the plurality of audio feature data;

determining target audio feature data among the plurality of audio feature data according to the category identifiers corresponding to the plurality of audio feature data;

and outputting abnormality alarm information according to the target audio feature data.

2. The method of claim 1, wherein the performing feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data comprises:

performing acoustic feature extraction on the audio sequence to obtain acoustic features of the audio sequence;

inputting the acoustic features into a contrastive learning model to obtain the audio feature data of the audio sequence, wherein the contrastive learning model is configured to similarly encode acoustic features corresponding to the normal state based on contrastive learning.

3. The method according to claim 2, wherein the performing acoustic feature extraction on the audio sequence to obtain the acoustic features of the audio sequence comprises:

performing Fourier transform on the audio sequence to obtain a Fourier spectrum of the audio sequence;

filtering the Fourier spectrum of the audio sequence through a Mel filter to obtain a Mel spectrum of the audio sequence;

and performing cepstrum analysis on the Mel spectrum of the audio sequence to obtain the acoustic features of the audio sequence.

4. The sound abnormality detection method according to any one of claims 2-3, characterized in that the method further comprises:

acquiring training data, wherein the training data comprises a plurality of audio sequences in a normal state;

randomly enhancing the audio sequence to obtain a plurality of enhancement data corresponding to the audio sequence;

obtaining a training model, wherein the training model comprises a twin network, the twin network comprises a first encoder, a second encoder and a prediction head, the first encoder and the second encoder share the same encoder network parameters, and the prediction head comprises a multilayer perceptron network;

inputting one enhancement data of the plurality of enhancement data into the first encoder to obtain an output vector of the first encoder, and inputting another enhancement data into the second encoder to obtain an output vector of the second encoder;

inputting the output vector of the first encoder into the prediction head to obtain first prediction data;

inputting the output vector of the second encoder into the prediction head to obtain second prediction data;

determining a symmetric loss according to the similarity between the first prediction data and the output vector of the second encoder, and the similarity between the second prediction data and the output vector of the first encoder;

adjusting network parameters of the twin network through gradient back-propagation according to the symmetric loss;

and if the training model converges, determining the contrastive learning model according to the first encoder and/or the second encoder.

5. The method of claim 4, wherein the randomly enhancing the audio sequence to obtain a plurality of enhancement data corresponding to the audio sequence comprises:

performing acoustic feature extraction on the audio sequence to obtain acoustic features of the audio sequence;

obtaining a plurality of enhancement data for each of the audio sequences by adding random noise to the acoustic features and/or randomly changing values of partial data of the acoustic features.

6. The method according to any one of claims 1 to 3, wherein the classifying the plurality of audio feature data and determining the category identifier corresponding to each of the plurality of audio feature data comprises:

acquiring a plurality of audio feature data corresponding to the normal state as a normal feature sample set;

adjusting parameters of a clustering algorithm with an adaptive number of cluster categories until the clustering algorithm clusters all samples of the normal feature sample set into a single category;

determining an audio feature subset to be clustered, wherein the audio feature subset comprises all samples of the normal feature sample set and at least part of the samples of a to-be-detected feature sample set, the to-be-detected feature sample set comprises the plurality of audio feature data of the audio data to be detected, and the number of samples from the normal feature sample set exceeds half of the total number of audio feature data in the audio feature subset;

clustering the audio feature subset through the clustering algorithm;

determining, according to the number of audio feature data corresponding to each cluster category in the clustering result, a cluster category whose count satisfies a preset condition, wherein a cluster category whose count satisfies the preset condition is either of the following: the cluster category with the fewest corresponding audio feature data among all cluster categories, or a cluster category whose number of corresponding audio feature data is lower than a preset threshold;

and determining the category identifier of the samples of the to-be-detected feature sample set that fall within a cluster category whose count satisfies the preset condition as an abnormality category identifier.

7. The method according to claim 6, wherein the determining the target audio feature data in the plurality of audio feature data according to the class identifier corresponding to each of the plurality of audio feature data comprises:

and determining the samples of the to-be-detected feature sample set corresponding to the abnormality category identifier as the target audio feature data.

8. A sound abnormality detection apparatus, characterized in that the apparatus comprises:

a data acquisition module, configured to acquire audio data to be detected;

a segmentation processing module, configured to segment the audio data to be detected to obtain a plurality of audio sequences;

a feature extraction module, configured to perform feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data;

a data classification module, configured to classify the plurality of audio feature data and determine a category identifier corresponding to each of the plurality of audio feature data;

a target determination module, configured to determine target audio feature data among the plurality of audio feature data according to the category identifiers corresponding to the plurality of audio feature data;

and an alarm issuing module, configured to output abnormality alarm information according to the target audio feature data.

9. A computer device, wherein the computer device comprises a memory and a processor;

the memory for storing a computer program;

the processor is configured to execute the computer program and, when executing the computer program, implement the sound anomaly detection method according to any one of claims 1 to 7.

10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the sound abnormality detection method according to any one of claims 1 to 7.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a sound anomaly detection method and apparatus, a computer device, and a storage medium.

Background

At present, monitoring systems are mainly implemented based on video, which has limitations in practical applications: blind areas exist when the line of sight is blocked, and video is easily affected by factors such as lighting and severe weather. Abnormal events are usually accompanied by abnormal sounds, which can effectively reflect major accidents and critical situations, and which have the advantages of low complexity, easy acquisition, and freedom from spatial limitations. Sound signals carry a rich amount of information; anomaly detection based on sound signals therefore has unique advantages in the many situations where vision, smell, or touch are unsuitable.

Disclosure of Invention

The present application provides a sound anomaly detection method and apparatus, a computer device, and a storage medium, which can classify extracted audio features to determine abnormal data according to category identifiers, and then issue abnormality alarm information according to the abnormal data, so that relevant personnel can learn of the abnormal condition.

In a first aspect, the present application provides a sound abnormality detection method, including:

acquiring audio data to be detected;

segmenting the audio data to be detected to obtain a plurality of audio sequences;

performing feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data;

classifying the plurality of audio feature data, and determining a category identifier corresponding to each of the plurality of audio feature data;

determining target audio feature data among the plurality of audio feature data according to the category identifiers corresponding to the plurality of audio feature data;

and outputting abnormality alarm information according to the target audio feature data.

In a second aspect, the present application provides a sound abnormality detection apparatus, comprising:

a data acquisition module, configured to acquire audio data to be detected;

a segmentation processing module, configured to segment the audio data to be detected to obtain a plurality of audio sequences;

a feature extraction module, configured to perform feature extraction on the plurality of audio sequences to obtain a plurality of audio feature data;

a data classification module, configured to classify the plurality of audio feature data and determine a category identifier corresponding to each of the plurality of audio feature data;

a target determination module, configured to determine target audio feature data among the plurality of audio feature data according to the category identifiers corresponding to the plurality of audio feature data;

and an alarm issuing module, configured to output abnormality alarm information according to the target audio feature data.

In a third aspect, the present application provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the sound abnormality detection method when executing the computer program.

In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the sound abnormality detection method.

The present application discloses a sound abnormality detection method, a sound abnormality detection device, a computer device, and a storage medium. Audio data to be detected are acquired; the audio data to be detected are segmented to obtain a plurality of audio sequences; feature extraction is performed on the plurality of audio sequences to obtain a plurality of audio feature data; the plurality of audio feature data are classified, and a category identifier corresponding to each of them is determined; target audio feature data are determined among the plurality of audio feature data according to the category identifiers; and abnormality alarm information is output according to the target audio feature data. Classification based on extracted audio features is thus realized, so that abnormal data are determined according to category identifiers, improving the accuracy of sound abnormality detection.

Drawings

In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.

Fig. 1 is a schematic flowchart of a sound anomaly detection method according to an embodiment of the present application;

Fig. 2 is a schematic diagram of a twin network according to an embodiment of the present application;

Fig. 3 is a schematic block diagram of a sound abnormality detection apparatus according to an embodiment of the present application;

Fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

The flow diagrams depicted in the figures are merely illustrative; they need not include all of the elements and operations/steps, nor must the steps be performed in the order depicted. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation. In addition, although functional modules are divided in the apparatus diagrams, in some cases the division may differ from that shown.

The embodiments of the present application provide a sound anomaly detection method and apparatus, a computer device, and a computer-readable storage medium, for improving the accuracy of sound anomaly detection based on a contrastive learning model. Current monitoring systems are mainly video-based, and their monitoring range is limited; for example, a blind area exists when the camera's line of sight is blocked. The sound anomaly detection method of the embodiments of the present application can classify audio features extracted from audio sequences, determine abnormal data according to category identifiers, and issue abnormality alarm information according to the abnormal data, so that relevant personnel can learn of the abnormality. This compensates for the limitations of video monitoring and improves the accuracy of sound anomaly detection.

The sound abnormality detection method can be applied to a server or to a terminal. The terminal may be an electronic device such as a mobile phone, tablet computer, notebook computer, or desktop computer; the server may be, for example, an independent server or a server cluster. For ease of understanding, the following embodiments are described in detail taking a sound abnormality detection method applied to a server as an example.

Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.

Referring to fig. 1, fig. 1 is a schematic flow chart of a sound anomaly detection method according to an embodiment of the present application.

As shown in fig. 1, the sound abnormality detection method may include the following steps S110 to S160.

And step S110, acquiring audio data to be detected.

For example, the audio data may be a sound signal acquired in real time, or may be a sound signal stored in a storage space after being acquired.

For example, the audio data may be obtained directly, e.g., an audio signal captured from a recording device, or it may be separated from a mixed signal that contains audio, such as a video signal.

Specifically, in this embodiment the audio data is a time-domain audio signal; in other implementations, the audio data may also be a frequency-domain audio signal.

And step S120, carrying out segmentation processing on the audio data to be detected to obtain a plurality of audio sequences.

Illustratively, the audio data to be detected is divided into frames according to a preset frame length, and all audio sequences have the same frame length. A unified frame length facilitates subsequent data processing.

Illustratively, the frame shift between two adjacent audio sequences of the audio data does not exceed the preset frame length, ensuring that every frame of the audio data is covered by at least one audio sequence.
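To make the segmentation concrete, here is a minimal Python sketch; the function and parameter names (`frame_audio`, `frame_len`, `hop`) are illustrative and not part of the embodiment:

```python
def frame_audio(samples, frame_len, hop):
    """Split a 1-D list of samples into overlapping frames.

    hop must not exceed frame_len, so every sample appears in at
    least one frame, as the embodiment requires.
    """
    assert 0 < hop <= frame_len
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop
    return frames

frames = frame_audio(list(range(10)), frame_len=4, hop=2)
# frames: [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With `hop` equal to `frame_len` the frames abut exactly; with a smaller `hop` they overlap, which is the usual choice for audio framing.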

And step S130, performing feature extraction processing on the plurality of audio sequences to obtain a plurality of audio feature data.

Illustratively, step S130 specifically includes steps S131-S132.

S131, extracting acoustic features of the audio sequence to obtain the acoustic features of the audio sequence;

Through acoustic feature extraction, signals in the audio sequence that are irrelevant to the acoustic features can be greatly compressed, improving the accuracy of subsequent detection.

Illustratively, in this embodiment, step S131 specifically includes steps S131a-S131c.

S131a, performing Fourier transform on the audio sequence to obtain a Fourier spectrum of the audio sequence;

converting the audio sequence in the time domain into the Fourier spectrum in the frequency domain by Fourier transformation.

S131b, filtering the Fourier spectrum of the audio sequence through a Mel filter to obtain a Mel spectrum of the audio sequence;

A Mel filter bank is a set of filters designed according to the center frequencies and bandwidths of the human auditory filter bank; acoustic features related to human hearing can be extracted through the Mel filters.

S131c, performing cepstrum analysis on the Mel frequency spectrum of the audio sequence to obtain the acoustic features of the audio sequence.

Illustratively, the logarithm of the Mel spectrum is taken, and an inverse Fourier transform is then applied to the logarithm to obtain the acoustic features.
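The log-then-inverse-transform step can be sketched as follows. A type-II DCT is used here as a common real-valued stand-in for the inverse Fourier step of cepstral analysis (a simplification; `cepstral_coeffs` is a hypothetical helper, not named in the embodiment):

```python
import math

def cepstral_coeffs(mel_spectrum, n_coeffs):
    """Cepstral analysis sketch: log of the mel spectrum followed by
    a type-II DCT. A small constant avoids log(0)."""
    log_mel = [math.log(v + 1e-10) for v in mel_spectrum]
    n = len(log_mel)
    coeffs = []
    for k in range(n_coeffs):
        # DCT-II basis: cos(pi * k * (i + 0.5) / n)
        c = sum(log_mel[i] * math.cos(math.pi * k * (i + 0.5) / n)
                for i in range(n))
        coeffs.append(c)
    return coeffs
```

For a flat mel spectrum the zeroth coefficient carries all the energy and the higher coefficients vanish, which is a quick sanity check on the transform.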

In other embodiments, acoustic feature extraction methods such as extracting fundamental frequency features, extracting formant features, or extracting acoustic features according to a deep learning model may also be used according to actual detection needs.

S132, inputting the acoustic features of the audio sequence into the contrastive learning model to obtain the audio feature data of the audio sequence, wherein the contrastive learning model is used for performing feature analysis on the acoustic features through contrastive learning.

Illustratively, in contrastive learning, representations are learned by comparing input samples: the similarity between positive sample pairs is maximized, while the similarity between positive and negative samples is minimized. Through contrastive learning, the model can capture higher-level latent features.

Illustratively, the sound anomaly detection method provided by the present application further includes obtaining the trained contrastive learning model through steps S100-S108.

S100, acquiring training data, wherein the training data comprises a plurality of audio sequences in a normal state;

illustratively, the training data is acquired in a scenario that is the same as, or similar to, the scenario in which the data to be detected is acquired.

S101, randomly enhancing the audio sequence to obtain a plurality of enhanced data corresponding to the audio sequence.

Illustratively, step S101 specifically includes steps S101a-S101b.

S101a, extracting acoustic features of the audio sequence to obtain the acoustic features of the audio sequence;

illustratively, step S101a may be implemented with reference to the acoustic feature extraction method in steps S131a-S131c.

S101b, adding random noise to the acoustic features and/or randomly changing the values of part of the acoustic feature data to obtain a plurality of enhancement data for each audio sequence.

Since the enhancement is random, there are typically differences between the several enhancement data corresponding to the same audio sequence.
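A minimal sketch of one such random enhancement, assuming the acoustic features are a flat list of floats (the names `augment`, `noise_std`, and `mask_prob`, and the default magnitudes, are illustrative assumptions, not from the embodiment):

```python
import random

def augment(features, noise_std=0.01, mask_prob=0.1, rng=None):
    """One random enhancement of an acoustic-feature vector:
    Gaussian noise is added to every value, and a random fraction
    of entries is overwritten with a new random value."""
    rng = rng or random.Random()
    out = []
    for v in features:
        v = v + rng.gauss(0.0, noise_std)   # random noise
        if rng.random() < mask_prob:
            v = rng.uniform(-1.0, 1.0)      # randomly changed value
        out.append(v)
    return out
```

Calling `augment` twice on the same features generally produces two different enhancement data, which is exactly the property the training procedure relies on.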

S102, obtaining a training model, wherein the training model comprises a twin network, the twin network comprises a first encoder, a second encoder and a prediction head, the first encoder and the second encoder share the same encoder network parameters, and the prediction head comprises a multilayer perceptron network;

the twin network structure model is used as a training model for learning, and has the advantages of simple structure, no need of introducing negative samples, and no need of having large batch size (batch size) for training samples. Of course, in specific implementation, other models for contrast learning, such as BYOL and Simclr, may be used as the training model.

Illustratively, the encoder includes a backbone network for feature embedding and a projection head for transforming the output vectors of the backbone network; the projection head is a multilayer perceptron (MLP) network.

And S103, inputting one enhancement data in the plurality of enhancement data into the first encoder to obtain an output vector of the first encoder, and inputting the other enhancement data into the second encoder to obtain an output vector of the second encoder.

For example, enhancement data X1 is input to the first encoder, yielding the output vector Z1 of the first encoder; enhancement data X2 is input to the second encoder, yielding the output vector Z2 of the second encoder.

And S104, inputting the output vector of the first encoder into the prediction head to acquire first prediction data.

For example, Z1 is input to the prediction head to acquire first prediction data P1.

And S105, inputting the output vector of the second encoder into the prediction head to acquire second prediction data.

For example, Z2 is input to the prediction head to acquire second prediction data P2.

S106, determining a symmetric loss according to the similarity between the first prediction data and the output vector of the second encoder, and the similarity between the second prediction data and the output vector of the first encoder.

Illustratively, the symmetric loss is computed as:

L = ½ · D(P1, Z2) + ½ · D(P2, Z1)

wherein D(P1, Z2) is the negative cosine similarity between the first prediction data P1 and the output vector Z2 of the second encoder, and D(P2, Z1) is the negative cosine similarity between the second prediction data P2 and the output vector Z1 of the first encoder.
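In plain Python, the negative cosine similarity D and the symmetric loss described above can be sketched as follows (illustrative; the stop-gradient behavior of step S107 is not modeled, since no autograd framework is used here):

```python
import math

def neg_cos(p, z):
    """Negative cosine similarity D(p, z) between two vectors."""
    dot = sum(a * b for a, b in zip(p, z))
    norm = (math.sqrt(sum(a * a for a in p))
            * math.sqrt(sum(b * b for b in z)))
    return -dot / norm

def symmetric_loss(p1, z2, p2, z1):
    """L = 0.5 * D(P1, Z2) + 0.5 * D(P2, Z1)."""
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)
```

When each prediction aligns perfectly with the other branch's output vector, both D terms equal -1 and the loss reaches its minimum of -1, which is the value training drives toward.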

And S107, adjusting the network parameters of the twin network through gradient back propagation according to the symmetry loss.

Illustratively, the symmetric losses corresponding to a batch of the audio sequences are summed to obtain a total symmetric loss.

illustratively, for the loss term obtained from the negative cosine similarity between the first prediction data P1 and the output vector Z2 of the second encoder, gradient back-propagation is performed through the first encoder and the prediction head, but is stopped at the second encoder; for the loss term obtained from the negative cosine similarity between the second prediction data P2 and the output vector Z1 of the first encoder, gradient back-propagation is performed through the second encoder and the prediction head, but is stopped at the first encoder.

And S108, if the training model converges, determining the contrastive learning model according to the first encoder and/or the second encoder.

Illustratively, steps S103 to S107 are iterated; the training model converges if the total symmetric loss is smaller than a preset threshold, if the change in the network parameters between two iterations is smaller than a preset threshold, or if the number of iterations exceeds a preset maximum.

For example, if the training model converges, the backbone network of the first encoder or of the second encoder is determined as the contrastive learning model.

Through training, the contrastive learning model learns the similarity between audio sequences in the normal state. In the resulting audio feature data, the acoustic features of normal-state audio sequences are encoded similarly, which reduces the differences among normal-state audio sequences and makes the differences between abnormal-state and normal-state audio sequences more pronounced, thereby improving the accuracy of sound anomaly detection.

Step S140, classifying the plurality of audio feature data, and determining a category identifier corresponding to each of the plurality of audio feature data.

In an embodiment, step S140 is implemented by a clustering algorithm with an adaptive number of cluster categories, and specifically includes steps S140a-S140f.

S140a, obtaining a plurality of audio feature data corresponding to the normal state to serve as a normal feature sample set.

For example, a plurality of consecutive audio sequences may be obtained in a case where the state is always normal, and with reference to step S130, feature extraction processing may be performed on each of the audio sequences to obtain samples of the normal feature sample set.

S140b, adjusting parameters of the clustering algorithm with an adaptive number of cluster categories until the clustering algorithm clusters all samples of the normal feature sample set into a single category.

Clustering is an unsupervised classification method: the clustering algorithm needs no prior classification knowledge to determine a clustering scheme, but groups similar objects into one class through analysis.

An adaptive number of cluster categories means that the clustering algorithm derives the number of categories in the clustering result adaptively during the clustering process; the number is not directly preset.

For example, the clustering algorithm with an adaptive number of cluster categories may be the iterative self-organizing data analysis algorithm (ISODATA). ISODATA has merging and splitting mechanisms: when the distance between the cluster centers of two categories is smaller than a preset distance threshold, the merging mechanism merges the two categories into one; when the standard deviation of a category is larger than a preset standard-deviation threshold, the splitting mechanism divides that category into two. The number of categories is thus automatically adjusted and optimized through iteration, finally yielding a satisfactory classification result.

For example, the normal feature sample set is clustered by the iterative self-organizing data analysis algorithm; if all samples of the normal feature sample set are not clustered into a single category, the control parameters of the algorithm, for example the distance threshold and/or the standard-deviation threshold, are adjusted repeatedly until all samples of the normal feature sample set fall into one category.
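The merging and splitting mechanisms can be illustrated with a toy one-dimensional pass. This is a simplified sketch, not the full ISODATA algorithm, and names such as `merge_split_pass` are hypothetical:

```python
import statistics

def merge_split_pass(clusters, dist_threshold, std_threshold):
    """One ISODATA-style pass over 1-D clusters (lists of floats):
    merge clusters whose means are closer than dist_threshold,
    then split any cluster whose standard deviation exceeds
    std_threshold."""
    # Merge pass: walk clusters in order of their means.
    clusters = sorted(clusters, key=statistics.mean)
    merged = [clusters[0]]
    for c in clusters[1:]:
        if abs(statistics.mean(c) - statistics.mean(merged[-1])) < dist_threshold:
            merged[-1] = merged[-1] + c
        else:
            merged.append(c)
    # Split pass: divide wide clusters at their mean.
    result = []
    for c in merged:
        if len(c) > 1 and statistics.pstdev(c) > std_threshold:
            mid = statistics.mean(c)
            result.append([x for x in c if x < mid])
            result.append([x for x in c if x >= mid])
        else:
            result.append(c)
    return result
```

Repeating such passes until the cluster assignment stabilizes is how the number of categories adjusts itself; tightening `dist_threshold` and loosening `std_threshold` pushes the normal feature sample set toward a single category, as step S140b requires.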

S140c, determining an audio feature subset to be clustered, wherein the audio feature subset comprises all samples of the normal feature sample set and at least part of the samples of the to-be-detected feature sample set; the to-be-detected feature sample set comprises the plurality of audio feature data of the audio data to be detected, and the number of samples from the normal feature sample set exceeds half of the total number of audio feature data in the audio feature subset.

For example, if the audio feature subset contains 50 samples from the normal feature sample set, the number of samples from the to-be-detected feature sample set in the audio feature subset must not exceed 50. In practical applications, a ratio between the two kinds of samples may also be set: for example, if the number of to-be-detected samples in the audio feature subset must not exceed ten percent of the number of normal samples, then at most 5 to-be-detected samples are allowed.

S140d, clustering the audio feature subset through the clustering algorithm;

Because the contrastive learning model has learned the similarity of sound signals in the normal state, the obtained audio feature data encode normal-state sound signals similarly, highlighting abnormal-state sound signals. Therefore, under ideal conditions, the samples of the to-be-detected feature sample set that correspond to the normal state are clustered into one category together with all samples of the normal feature sample set, while the samples corresponding to the abnormal state are clustered into other categories.

S140e, determining, according to the number of audio feature data corresponding to each cluster category in the clustering result, the cluster categories whose number meets a preset condition, wherein such a cluster category includes either of the following: the cluster category with the least corresponding audio feature data among all cluster categories, or any cluster category whose corresponding audio feature data count is lower than a preset threshold.

For example, suppose the audio feature subset contains 50 samples from the normal feature sample set and 5 samples from the to-be-detected feature sample set, and clustering yields 3 cluster categories: the first with 50 audio feature data, the second with 3, and the third with 2. The third cluster category has the least audio feature data, so it can be determined to be a cluster category whose number meets the preset condition. Alternatively, if the preset threshold is 5, the second cluster category, whose count of 3 is below the threshold, can likewise be determined to be a cluster category whose number meets the preset condition.
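The selection rule of step S140e can be sketched as below; the helper name is an assumption, and note that under the threshold variant every cluster below the threshold qualifies:

```python
from collections import Counter

def flag_clusters(labels, threshold=None):
    """Return cluster ids whose member count meets the preset condition:
    all clusters below `threshold` if one is given, else the smallest
    cluster(s)."""
    counts = Counter(labels)
    if threshold is not None:
        return [c for c, n in counts.items() if n < threshold]
    smallest = min(counts.values())
    return [c for c, n in counts.items() if n == smallest]

labels = [0] * 50 + [1] * 3 + [2] * 2  # 50 / 3 / 2, as in the example above
least = flag_clusters(labels)           # [2] — least-populated category
below = sorted(flag_clusters(labels, 5))  # [1, 2] — counts below the threshold
```

Because the normal samples dominate the subset by construction, the flagged small clusters can only contain a minority of samples, which is what makes them candidates for the abnormal state.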

S140f, in the cluster categories whose number meets the preset condition, determining the category identifiers of the samples belonging to the to-be-detected feature sample set as abnormal category identifiers.

Illustratively, steps S140c-S140f are performed multiple times until all samples of the to-be-detected feature sample set have participated in the clustering.

In another embodiment, the audio feature data of the audio sequence is categorized directly by a classification algorithm.

Classification is a supervised method: it requires prior category knowledge to determine the classification scheme, for example, learning that knowledge from labeled data so as to predict the labels of unknown data and classify the data accordingly.

Illustratively, the audio feature data of the audio sequence may be classified by a K-nearest neighbor (KNN) classification algorithm.

The K-nearest neighbor method determines the category of a sample to be classified according to the categories of its nearest prior samples. First, prior samples are obtained; for example, a group of audio feature data in the normal state forms a prior sample set of the normal category, and several groups of audio feature data in abnormal states form prior sample sets of the corresponding abnormal categories. Then the distance from the audio feature data of the audio data to be detected to each prior sample is calculated, and the K prior samples with the smallest distances are determined. Finally, the categories of these K prior samples are compared, the audio feature data of the audio data to be detected is assigned by majority vote to the category with the highest proportion, and its category identifier is determined from the prior samples of that category.
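A minimal sketch of this K-nearest neighbor vote; the category names, feature dimensions, and Euclidean distance choice are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def knn_label(x, prior_samples, prior_labels, k=5):
    """Classify feature vector x by majority vote among the k nearest
    prior samples (Euclidean distance)."""
    dists = np.linalg.norm(prior_samples - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(prior_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.1, size=(30, 4))    # prior samples, normal state
abnormal = rng.normal(3.0, 0.1, size=(30, 4))  # prior samples, one abnormal state
samples = np.vstack([normal, abnormal])
labels = ["normal"] * 30 + ["bearing_fault"] * 30  # hypothetical category ids
pred = knn_label(np.full(4, 3.1), samples, labels)
```

In practice the prior sample sets would hold contrastive-learning features rather than raw vectors, and k would be tuned on held-out data.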

S150, determining target audio characteristic data in the audio characteristic data according to the category identification corresponding to the audio characteristic data.

For example, in the embodiment in which step S140 is implemented by the clustering algorithm with cluster-category-number adaptation, the samples of the to-be-detected feature sample set corresponding to the abnormal category identifier are determined as the target audio feature data.

In another embodiment, in which step S140 is directly implemented by a classification algorithm, the category identifiers corresponding to abnormal states are determined as target category identifiers, and the audio feature data of the audio data to be detected corresponding to the target category identifiers are determined as the target audio feature data.

S160, outputting abnormal alarm information according to the target audio feature data.

In an exemplary embodiment in which step S140 is implemented by the clustering algorithm with cluster-category-number adaptation, the audio data to be detected is obtained in real time; once target audio feature data appears, this indicates that a sound signal in an abnormal state has occurred, and the abnormal warning information is issued to remind relevant personnel to discover the abnormality in time.

In another embodiment, in which step S140 is implemented directly through a classification algorithm, the category information of the abnormality in the abnormality warning information is issued according to the category identifier corresponding to the target audio feature data, so that relevant personnel can know what kind of abnormality has occurred.

Illustratively, the abnormality warning information is issued according to the audio sequence corresponding to the target audio feature data. For example, the time at which the abnormality occurred is included in the warning information according to the time information of the audio sequence, and the location of the abnormality is included according to the device information of the audio sequence.
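Composing the warning message from the audio sequence's metadata might look like the following; every field name and value here is hypothetical:

```python
def build_alert(category, timestamp, device_id):
    """Compose an anomaly alert message from the category identifier and
    the audio sequence's time and device metadata (all fields hypothetical)."""
    return (f"[ALERT] anomaly '{category}' detected at {timestamp} "
            f"on device {device_id}")

msg = build_alert("bearing_fault", "2021-06-30 08:15:00", "pump-03")
```

The resulting message string is what would be pushed to the relevant personnel or, as described below, stored in a blockchain node.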

In some embodiments, the exception alert information may be stored in a blockchain node. A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.

Referring to fig. 3, fig. 3 is a schematic diagram of a sound anomaly detection apparatus according to an embodiment of the present application, where the sound anomaly detection apparatus may be configured in a server or a terminal, and is used to execute the sound anomaly detection method.

As shown in fig. 3, the sound abnormality detection apparatus includes: the system comprises a data acquisition module 110, a segmentation processing module 120, a feature extraction module 130, a data classification module 140, a target determination module 150 and an alarm issuing module 160.

The data acquisition module 110 is configured to acquire audio data to be detected;

the segmentation processing module 120 is configured to perform segmentation processing on the audio data to be detected to obtain a plurality of audio sequences;

a feature extraction module 130, configured to perform feature extraction processing on the multiple audio sequences to obtain multiple audio feature data;

a data classifying module 140, configured to classify the plurality of audio feature data, and determine a category identifier corresponding to each of the plurality of audio feature data;

a target determining module 150, configured to determine target audio feature data in the multiple audio feature data according to the category identifier corresponding to each of the multiple audio feature data;

and the alarm issuing module 160 is configured to output abnormal alarm information according to the target audio characteristic data.

Illustratively, the feature extraction module 130 includes an acoustic feature extraction sub-module and a contrastive learning sub-module.

The acoustic feature extraction sub-module is used for extracting acoustic features of the audio sequence to obtain the acoustic features of the audio sequence;

and the contrastive learning sub-module is used for inputting the acoustic features of the audio sequence into a contrastive learning model to obtain the audio feature data of the audio sequence, the contrastive learning model being used for performing feature analysis on the acoustic features through contrastive learning.

Illustratively, the acoustic feature extraction submodule includes a fourier transform unit, a mel filtering unit and a cepstrum unit.

A Fourier transform unit for performing a Fourier transform on the audio sequence to obtain a Fourier spectrum of the audio sequence.

And the Mel filtering unit is used for filtering the Fourier spectrum of the audio sequence through a Mel filter so as to obtain the Mel spectrum of the audio sequence.

A cepstrum unit for performing cepstrum analysis on the mel frequency spectrum of the audio sequence to obtain the acoustic features of the audio sequence.
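The Fourier -> mel filtering -> cepstrum pipeline carried out by these three units can be sketched in plain NumPy. The filter-bank size, frame length, windowing, and DCT normalization below are assumptions; production code would typically use a dedicated audio library:

```python
import numpy as np

def mfcc_frame(frame, sr, n_mels=26, n_ceps=13):
    """Acoustic features for one audio frame: Fourier spectrum ->
    triangular mel filter bank -> log -> DCT (cepstral analysis)."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    # triangular mel filter bank between 0 Hz and sr/2
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    hz_pts = 700 * (10 ** (np.linspace(0, mel_max, n_mels + 2) / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    log_mel = np.log(fbank @ spectrum + 1e-10)
    # DCT-II of the log mel spectrum gives the cepstral coefficients
    basis = np.cos(np.pi / n_mels
                   * (np.arange(n_mels) + 0.5)[None, :]
                   * np.arange(n_ceps)[:, None])
    return basis @ log_mel

sr = 16000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone
feats = mfcc_frame(frame, sr)        # one 13-dimensional acoustic feature vector
```

Each audio sequence would be framed and passed through this pipeline frame by frame before being fed to the contrastive learning model.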

Illustratively, the sound anomaly detection device further comprises a model training device.

The model training device comprises a training data acquisition unit, an enhancement unit, a training model acquisition unit, a coding unit, a first prediction unit, a second prediction unit, a loss determination unit, a model parameter adjustment unit and a model determination unit.

The training data acquisition unit is used for acquiring training data, the training data including a plurality of audio sequences in a normal state;

the enhancement unit is used for carrying out random enhancement on the audio sequence to obtain a plurality of enhancement data corresponding to the audio sequence;

illustratively, the enhancement unit includes an acoustic feature extraction enhancement sub-module and an enhancement processing sub-module.

The acoustic feature extraction enhancement sub-module is used for extracting acoustic features of the audio sequence to obtain the acoustic features of the audio sequence;

and the enhancement processing sub-module is used for acquiring a plurality of enhancement data of each audio sequence by adding random noise in the acoustic features and/or randomly changing the values of partial data of the acoustic features.

The training model obtaining unit is used for obtaining a training model, wherein the training model comprises a twin network, the twin network comprises a first encoder, a second encoder, and a prediction head, the first encoder and the second encoder share the same encoder network parameters, and the prediction head comprises a multilayer perceptron network;

an encoding unit configured to input one enhancement data of the plurality of enhancement data into the first encoder to obtain an output vector of the first encoder, and input another enhancement data into the second encoder to obtain an output vector of the second encoder;

a first prediction unit configured to input an output vector of the first encoder to the prediction head to obtain first prediction data;

a second prediction unit configured to input an output vector of the second encoder to the prediction head to obtain second prediction data;

a loss determining unit configured to determine a symmetry loss according to a similarity between the first prediction data and an output vector of the second encoder and a similarity between the second prediction data and an output vector of the first encoder;

the model parameter adjusting unit is used for adjusting the network parameters of the twin network through gradient back propagation according to the symmetry loss;

and the model determining unit is used for determining the contrastive learning model according to the first encoder and/or the second encoder if the training model converges.
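The symmetric loss computed by the loss determining unit can be sketched with NumPy, following the SimSiam-style negative-cosine formulation. The single linear encoder and prediction head below stand in for the real networks, and the stop-gradient applied to the encoder branches is noted only in comments; all of this is an illustrative assumption:

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity; in the real training loop z is treated
    as a constant (stop-gradient) so gradients flow only through p."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

def symmetry_loss(p1, z1, p2, z2):
    """Symmetric loss: average of -cos(p1, z2) and -cos(p2, z1), matching
    the two cross-branch similarities described above."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 4))   # shared encoder weights (first == second encoder)
W_pred = rng.normal(size=(4, 4))  # prediction head (a single linear layer here)

x1, x2 = rng.normal(size=8), rng.normal(size=8)  # two augmentations of one sequence
z1, z2 = W_enc.T @ x1, W_enc.T @ x2              # encoder output vectors
p1, p2 = W_pred @ z1, W_pred @ z2                # prediction head outputs
loss = symmetry_loss(p1, z1, p2, z2)             # bounded in [-1, 1]
```

Minimizing this loss pulls the two augmented views of the same normal-state sequence together, which is why the learned features later make abnormal-state signals stand out.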

Illustratively, the data classification module 140 includes a normal sample acquiring unit, an algorithm optimizing unit, a to-be-clustered data determining unit, a clustering unit, a quantity judging unit, and an identifier determining unit.

A normal sample acquiring unit, configured to acquire a plurality of audio feature data corresponding to a normal state as a normal feature sample set;

the algorithm optimizing unit is used for adjusting parameters of the clustering algorithm with cluster-category-number adaptation until the clustering algorithm clusters all samples of the normal feature sample set into one class;

the to-be-clustered data determining unit is used for determining an audio feature subset to be clustered, wherein the audio feature subset includes all samples of the normal feature sample set and at least part of the samples of a to-be-detected feature sample set, the to-be-detected feature sample set includes the plurality of audio feature data of the audio data to be detected, and the number of samples from the normal feature sample set exceeds half of the total number of audio feature data in the audio feature subset;

the clustering unit is used for clustering the audio feature subsets through the clustering algorithm;

the quantity judging unit is used for determining, according to the number of audio feature data corresponding to each cluster category in the clustering result, the cluster categories whose number meets a preset condition, wherein such a cluster category includes either of the following: the cluster category with the least corresponding audio feature data among all cluster categories, or any cluster category whose corresponding audio feature data count is lower than a preset threshold;

and the identifier determining unit is used for determining, in the cluster categories whose number meets the preset condition, the category identifiers of the samples belonging to the to-be-detected feature sample set as abnormal category identifiers.

Illustratively, the target determining module 150 includes a target audio feature determining sub-module, which is configured to determine the samples of the to-be-detected feature sample set corresponding to the abnormal category identifier as the target audio feature data.

It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus, the modules and the units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

The methods, apparatus, and devices of the present application are operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.

The above-described methods and apparatuses may be implemented, for example, in the form of a computer program that can be run on a computer device as shown in fig. 4.

Referring to fig. 4, fig. 4 is a schematic diagram of a computer device according to an embodiment of the present disclosure. The computer device may be a server or a terminal.

As shown in fig. 4, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.

The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any of the sound anomaly detection methods.

The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.

The internal memory provides an environment for running the computer program in the non-volatile storage medium; when the computer program is executed by the processor, the processor performs any one of the sound abnormality detection methods.

The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the configuration of the computer apparatus is merely a block diagram of a portion of the configuration associated with aspects of the present application and is not intended to limit the computer apparatus to which aspects of the present application may be applied, and that a particular computer apparatus may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.

It should be understood that the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

Wherein, in some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of: acquiring audio data to be detected; carrying out segmentation processing on the audio data to be detected to obtain a plurality of audio sequences; carrying out feature extraction processing on the plurality of audio sequences to obtain a plurality of audio feature data; classifying the audio characteristic data, and determining a category identifier corresponding to each of the audio characteristic data; determining target audio characteristic data in the plurality of audio characteristic data according to the category identification corresponding to the plurality of audio characteristic data; and outputting abnormal alarm information according to the target audio characteristic data.

Illustratively, when performing feature extraction processing on the plurality of audio sequences to obtain the plurality of audio feature data, the processor is configured to implement: performing acoustic feature extraction on the audio sequence to obtain acoustic features of the audio sequence; and inputting the acoustic features into a contrastive learning model to obtain the audio feature data of the audio sequence, wherein the contrastive learning model is used for performing feature analysis on the acoustic features through contrastive learning.

Illustratively, when performing acoustic feature extraction on the audio sequence to obtain the acoustic features of the audio sequence, the processor is configured to: Fourier-transform the audio sequence to obtain a Fourier spectrum of the audio sequence; filter the Fourier spectrum of the audio sequence through a Mel filter to obtain the Mel spectrum of the audio sequence; and perform cepstrum analysis on the Mel spectrum of the audio sequence to obtain the acoustic features of the audio sequence.

Illustratively, the processor is further configured to implement: acquiring training data, wherein the training data comprises a plurality of audio sequences in a normal state; randomly enhancing the audio sequence to obtain a plurality of enhanced data corresponding to the audio sequence; obtaining a training model, wherein the training model comprises a twin network, the twin network comprises a first encoder, a second encoder, and a prediction head, the first encoder and the second encoder share the same encoder network parameters, and the prediction head comprises a multilayer perceptron network; inputting one enhancement data of the plurality of enhancement data into the first encoder to obtain an output vector of the first encoder, and inputting another enhancement data into the second encoder to obtain an output vector of the second encoder; inputting the output vector of the first encoder into the prediction head to obtain first prediction data; inputting the output vector of the second encoder into the prediction head to obtain second prediction data; determining a symmetry loss according to the similarity between the first prediction data and the output vector of the second encoder and the similarity between the second prediction data and the output vector of the first encoder; adjusting network parameters of the twin network by gradient back propagation according to the symmetry loss; and if the training model converges, determining the contrastive learning model according to the first encoder and/or the second encoder.

Illustratively, the processor is configured to implement random enhancement on the audio sequence, and when obtaining a plurality of enhancement data corresponding to the audio sequence, implement: performing acoustic feature extraction on the audio sequence to obtain acoustic features of the audio sequence; obtaining a plurality of enhancement data for each of the audio sequences by adding random noise to the acoustic features and/or randomly changing values of partial data of the acoustic features.

Illustratively, when classifying the plurality of audio feature data and determining the category identifier corresponding to each of the plurality of audio feature data, the processor is configured to implement: acquiring a plurality of audio feature data corresponding to the normal state as a normal feature sample set; adjusting parameters of the clustering algorithm with cluster-category-number adaptation until the clustering algorithm clusters all samples of the normal feature sample set into one class; determining an audio feature subset to be clustered, wherein the audio feature subset includes all samples of the normal feature sample set and at least part of the samples of a to-be-detected feature sample set, the to-be-detected feature sample set includes the plurality of audio feature data of the audio data to be detected, and the number of samples from the normal feature sample set exceeds half of the total number of audio feature data in the audio feature subset; clustering the audio feature subset through the clustering algorithm; determining, according to the number of audio feature data corresponding to each cluster category in the clustering result, the cluster categories whose number meets a preset condition, wherein such a cluster category includes either of the following: the cluster category with the least corresponding audio feature data among all cluster categories, or any cluster category whose corresponding audio feature data count is lower than a preset threshold; and, in the cluster categories whose number meets the preset condition, determining the category identifiers of the samples belonging to the to-be-detected feature sample set as abnormal category identifiers.

Illustratively, when determining the target audio feature data in the plurality of audio feature data according to the category identifier corresponding to each of the plurality of audio feature data, the processor is configured to implement: determining the samples of the to-be-detected feature sample set corresponding to the abnormal category identifier as the target audio feature data.

From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) that includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the embodiments or parts of the embodiments of the present application, such as:

a computer-readable storage medium, which stores a computer program, where the computer program includes program instructions, and the processor executes the program instructions to implement any sound abnormality detection method provided in this application.

The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.

While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto; those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed herein, and such modifications or substitutions shall fall within the protection scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
