Secure utterance storage

Document No. 1510446 · Published 2020-02-07

This technology, "Secure utterance storage," was created on 2018-06-13 by W·F·H·克鲁斯, P·特克, and P·托马斯.

Abstract: Techniques for the secure storage of utterances are disclosed. A computing device captures audio of a person speaking an utterance. The utterance is provided to a speech-to-text (STT) service, which translates the utterance into text. The STT service may also identify various speaker-specific attributes of the utterance. The text and attributes are provided to a text-to-speech (TTS) service, which creates speech from the text and a subset of the attributes. The speech is stored in a data store whose security is lower than that required to store the original utterance, and the original utterance may then be discarded. The STT service may also translate the speech generated by the TTS service into text; the text generated from the speech is then compared with the text generated from the original utterance. If the texts do not match, the original utterance may be retained.

1. A processor to:

cause a speech-to-text service to perform speech recognition on first audio data to identify one or more first words and a plurality of attributes in the first audio data;

cause a text-to-speech service to generate second audio data using at least the one or more first words and a subset of the plurality of attributes; and

cause only the second audio data to be stored.

2. The processor of claim 1, wherein the plurality of attributes are indicative of personally identifiable information of a speaker.

3. The processor of claim 2, wherein the subset of the plurality of attributes includes attributes that do not indicate the personally identifiable information of the speaker.

4. The processor of any one of claims 1, 2, or 3, further to provide a user interface for defining the subset of the plurality of attributes.

5. The processor of any one of claims 1, 2, 3, or 4, further to provide a user interface for specifying whether the first audio data is to be stored or deleted.

6. The processor of any one of claims 1, 2, 3, 4, or 5, further to:

cause speech recognition to be performed on the second audio data to identify one or more second words;

compare the one or more first words to the one or more second words; and

in response to determining that the one or more first words are the same as the one or more second words, cause the first audio data to be discarded and cause the second audio data to be stored.

7. The processor of claim 6, further to, in response to determining that the one or more first words are different from the one or more second words, cause the first audio data to be stored and cause the second audio data to be discarded.

8. The processor of any of claims 1, 2, 3, 4, 5, 6, or 7, wherein the first audio data comprises a first utterance of the one or more first words and the second audio data comprises a second utterance of the one or more first words.

9. A computer-implemented method, comprising:

causing speech recognition to be performed on first audio data to identify one or more first words and a plurality of attributes in the first audio data;

causing second audio data to be generated using at least the one or more first words and a subset of the plurality of attributes; and

causing only the second audio data to be stored.

10. The computer-implemented method of claim 9, wherein the plurality of attributes indicate personally identifiable information of a speaker.

11. The computer-implemented method of claim 10, wherein the subset of the plurality of attributes includes attributes that do not indicate the personally identifiable information of the speaker.

12. The computer-implemented method of any of claims 9, 10, or 11, further comprising providing a user interface for defining the subset of the plurality of attributes.

13. The computer-implemented method of any of claims 9, 10, 11, or 12, further comprising:

performing speech recognition on the second audio data to identify one or more second words;

comparing the one or more first words with the one or more second words; and

in response to determining that the one or more first words are the same as the one or more second words, discarding the first audio data and storing only the second audio data.

14. The computer-implemented method of claim 13, further comprising, in response to determining that the one or more first words are different from the one or more second words, storing the first audio data and discarding the second audio data.

15. The computer-implemented method of any of claims 9, 10, 11, 12, 13, or 14, wherein the first audio data comprises a first utterance of the one or more first words and the second audio data comprises a second utterance of the one or more first words.

Background

Voice-driven computing systems are widely used. These systems typically receive a recording of a user's voice, often referred to as an "utterance." Voice recognition may be applied to an utterance to determine whether the user has requested that a command be performed, has requested information, or has requested another type of action. In many systems, the original recording of the utterance is saved for future use. Because such recordings may have attributes from which personally identifiable information (e.g., the user's age or gender) can be derived, they are typically stored at a high security level. Storing recordings at a high security level, however, can consume significant computing resources, such as processor cycles, memory, and storage space.

With respect to these and other considerations, the disclosure set forth herein is presented.

Drawings

FIG. 1 is a system architecture diagram illustrating aspects of the configuration and operation of a web service for securely storing utterances, according to one embodiment;

FIG. 2 is a flow diagram showing a routine that illustrates further aspects of the network service of FIG. 1 for securely storing utterances, according to one embodiment;

FIG. 3 is a system architecture diagram illustrating another example embodiment of the network service of FIG. 1 for securely storing utterances;

FIG. 4 is a flow diagram showing a routine that illustrates further aspects of the network service of FIG. 3 for securely storing utterances, according to one embodiment;

FIG. 5 is a system architecture diagram illustrating aspects of the configuration and operation of a processor configured for securely storing utterances, according to one embodiment;

FIG. 6 is a computing system diagram illustrating a configuration of a distributed execution environment that may be used to implement aspects of the technology disclosed herein;

FIG. 7 is a computing system diagram illustrating aspects of a configuration of a data center that may be used to implement aspects of the technology disclosed herein; and

FIG. 8 is a computer architecture diagram showing an illustrative computer hardware architecture for implementing a computing device that may be used to implement aspects of the various techniques presented herein.

Detailed Description

The following detailed description is directed to techniques for securely storing utterances, such as utterances recorded by a voice-driven computing device. Using the disclosed techniques, utterances can be stored in a secure manner while using fewer storage resources than were previously required to securely store such utterances. Savings can thus be realized in the utilization of various types of computing resources, such as, but not limited to, processor cycles, memory usage, and mass storage usage. Power consumption savings may also be realized, because computing resources can be utilized more efficiently using the techniques disclosed herein. The disclosed technology may also provide other technical benefits not specifically identified herein.

To provide the disclosed functionality, in one embodiment, the recorded utterance is provided to a speech-to-text ("STT") service that recognizes the words in the utterance in order to translate the utterance into text. The STT service may also recognize various speaker-specific attributes of the utterance, such as, but not limited to, attributes from which personally identifiable information ("PII") of the speaker may be derived (e.g., attributes indicating the age or gender of the speaker).

The text (i.e., the recognized words) and the utterance attributes are provided to a text-to-speech ("TTS") service, which creates speech from the text and a subset of the attributes recognized by the STT service, thereby removing at least some of the attributes from which PII could be derived. The speech created by the TTS service is then stored in a data store that is less secure, and that therefore requires fewer computing resources, than the data store that would be required to store the original utterance containing the PII. The original utterance is then discarded, for example by securely deleting it or otherwise removing it from storage. In this manner, the utterance can be stored in a secure manner (i.e., with no PII or with limited PII included) while using fewer computing resources than would be needed to securely store the original utterance including the PII.
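This pipeline can be summarized in code. The following minimal Python sketch assumes hypothetical stt, tts, and data_store clients and an illustrative attribute denylist; the patent does not define these interfaces:

# Attributes from which PII may be derived (illustrative names only).
PII_ATTRIBUTES = {"age", "gender", "location"}

def store_utterance_securely(utterance_audio: bytes, stt, tts, data_store) -> bytes:
    """Replace a raw utterance with a synthesized, PII-stripped equivalent."""
    # Recognize the words and the speaker-specific attributes (STT service).
    text, attributes = stt.recognize(utterance_audio)

    # Keep only the attributes that do not convey PII.
    safe_attributes = {k: v for k, v in attributes.items() if k not in PII_ATTRIBUTES}

    # Re-create the speech from the text and the safe subset (TTS service).
    speech = tts.synthesize(text, safe_attributes)

    # Only the synthesized speech reaches the lower-security data store;
    # the raw utterance is never written to persistent storage.
    data_store.put(speech)
    return speech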

In one embodiment, the STT service may also translate the speech generated by the TTS service back into text. A comparison may then be made between the text generated by the STT service from that speech and the text generated by the STT service from the original utterance. If the texts match, the original utterance can be discarded and the text can be stored. If the texts do not match, the original utterance may be retained and the speech generated by the TTS service may be discarded. The text recognized in the original utterance may also be stored. Additional details regarding the various components and processes briefly described above for securely storing utterances will be presented below with respect to FIGS. 1-8.

It should be appreciated that the subject matter presented herein may be implemented as a computer process, a computer-controlled apparatus, a computing system, or as an article of manufacture such as a computer-readable storage medium. While the subject matter described herein is presented in the general context of program modules that execute on one or more computing devices, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.

Those skilled in the art will also appreciate that aspects of the subject matter described herein may be practiced with or in conjunction with other computer system configurations beyond those described herein, including the following: multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, hand-held computers, personal digital assistants, electronic readers, mobile telephone devices, tablet computing devices, dedicated hardware devices, network appliances, and the like. As mentioned briefly above, the embodiments described herein may be practiced in distributed computing environments where tasks may be performed by remote computing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments or examples. The drawings herein are not drawn to scale. Like numbers refer to like elements throughout the several figures (which may be referred to herein as a "FIG." or "FIGS.").

FIG. 1 is a system architecture diagram illustrating aspects of the configuration and operation of a web service 100 for securely storing utterances, according to one embodiment. In the embodiment shown in FIG. 1, the utterance storage service 100 (which may be referred to herein as the "network service 100") implements a processing pipeline for securely storing utterances. In one embodiment, the web service 100 executes in a distributed execution environment 102.

As will be described in detail below, the distributed execution environment 102 provides computing resources for executing various components and network services (such as, but not limited to, the network service 100). The computing resources provided by the distributed execution environment 102 may include various types of computing resources, such as data processing resources, data storage resources, networking resources, data communication resources, network services, and so forth. Additional details regarding the configuration and operation of the illustrative distributed execution environment 102 will be provided below with respect to FIGS. 6-8.

In the embodiment shown in FIG. 1, the network service 100 implements a processing pipeline that includes a speech-to-text ("STT") service 112, a text-to-speech ("TTS") service 118, and a storage service 124. In various embodiments disclosed herein, the web service 100 also operates in conjunction with a computing device 106. The computing device 106 is a computing device that can connect to the distributed execution environment 102 over a suitable network connection and perform the functions described below. For example, and without limitation, the computing device 106 may be implemented as a desktop computer, laptop computer, tablet computer, network appliance, e-reader, smartphone, video game console, set-top box, voice-driven computer, or other type of computing device. The computing device 106 is equipped with audio input devices, such as an audio interface and a microphone, through which it can record digital audio data (also referred to herein as an "utterance 104"), such as speech 110 spoken by a user 108 (also referred to herein as a "speaker 108") of the computing device 106.

The speech 110 uttered by the user 108 may include words (e.g., "search for 'new car prices'" or "play songs by U2"). The speech 110 also has various attributes. Attributes include, but are not limited to, the volume of the speech 110, the pitch of the speech 110, the cadence or speed of the speech 110, the intonation or emotion conveyed by the speech 110, hesitations in the speech 110, and/or other attributes. Personally identifiable information ("PII") of the user 108 may be derived from some attributes of the speech 110. For example, and without limitation, the age, gender, or location of the speaker 108 may be derived from attributes of the speech 110. Because the speech 110 may contain PII, recordings of the speech 110 are typically stored in a highly secure environment. As described above, however, storing such recordings at a high security level may require significant computing resources, such as processor cycles, memory, and storage space.
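One illustrative way to represent these attributes is a record that flags which of them can convey PII; the names and values below are hypothetical, since the patent does not prescribe an encoding:

# Illustrative attribute record for a single utterance.
UTTERANCE_ATTRIBUTES = {
    "volume":     {"value": "loud",    "conveys_pii": False},
    "pitch":      {"value": "high",    "conveys_pii": False},
    "cadence":    {"value": "fast",    "conveys_pii": False},
    "intonation": {"value": "excited", "conveys_pii": False},
    "hesitation": {"value": "none",    "conveys_pii": False},
    "age":        {"value": "adult",   "conveys_pii": True},
    "gender":     {"value": "female",  "conveys_pii": True},
    "location":   {"value": "unknown", "conveys_pii": True},
}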

To address this concern, and possibly other considerations, the mechanisms described in detail below remove some or all of the attributes of the speech 110 that indicate PII or from which PII may be derived. The resulting utterance, free of PII, can then be stored in a less secure location, thereby requiring fewer computing resources than previous solutions. Additional details regarding this process are provided below.

To provide the disclosed functionality, the computing device 106 records the utterance 104 that contains the speech 110 of the user 108. The computing device 106 then provides the utterance 104 to the utterance storage service 100. Utterance storage service 100 then coordinates the secure storage of utterances 104.

To securely store the utterance 104, the utterance storage service 100 first provides the utterance to the STT service 112. The STT service 112 is a web service or other type of component that can utilize voice recognition to recognize words in the utterance 104. The STT service 112 may also identify attributes of the speech 110 in the utterance 104, such as the attributes described above (e.g., volume, pitch, cadence, intonation, etc.).

The STT service 112 provides text 114 describing the words recognized in the utterance 104 to a TTS service 118. The STT service also provides the TTS service 118 with data identifying the attributes 116 detected in the utterance 104. The TTS service 118 is a web service or other type of component configured to generate human speech 120 (which may be referred to herein as "audio 120," "audio data 120," or "TTS-generated audio 120") from a text input (e.g., the text 114 identifying the words recognized in the utterance 104). The human voice used to speak the text 114 may be an actual human voice or a computer-simulated human voice.

The TTS service 118 may also utilize some or all of the attributes 116 to generate the audio 120. For example, and without limitation, the attributes 116 may be used to generate speech 120 having the same volume level as the original utterance 104. Similarly, the attributes 116 may be used to generate speech 120 having the same pitch, cadence, intonation, hesitations, and/or other attributes as the original utterance 104. The attributes 116 may also be used to generate speech 120 spoken by a voice having the same gender or approximate age as the speaker 108 of the original utterance 104. In other embodiments, other attributes 116 may be utilized to generate the audio data 120.
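One concrete way a TTS engine could consume such attributes is SSML, whose prosody element carries volume, pitch, and rate. The mapping below is a sketch under that assumption; the patent does not specify SSML or these value scales:

from xml.sax.saxutils import escape

def to_ssml(text: str, attributes: dict) -> str:
    """Render recognized text and a few non-PII prosody attributes as SSML."""
    volume = attributes.get("volume", "medium")  # e.g., "soft", "loud"
    pitch = attributes.get("pitch", "medium")    # e.g., "low", "high"
    rate = attributes.get("cadence", "medium")   # e.g., "slow", "fast"
    return (f'<speak><prosody volume="{volume}" pitch="{pitch}" '
            f'rate="{rate}">{escape(text)}</prosody></speak>')

print(to_ssml("play songs by U2", {"volume": "loud", "cadence": "fast"}))
# <speak><prosody volume="loud" pitch="medium" rate="fast">play songs by U2</prosody></speak>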

In some implementations, the TTS service 118 utilizes only a subset of the attributes 116 of the original utterance 104 to generate the speech 120. In one embodiment, the subset of the attributes 116 used to generate the speech 120 includes only those attributes 116 that do not indicate PII, from which PII cannot be derived, and that do not convey PII in any other way. In this manner, the attributes 116 of the utterance 104 that convey PII can be kept out of the speech 120 generated by the TTS service 118.
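A sketch of this subset selection, written as an allowlist so that any attribute not explicitly marked safe is dropped (the attribute names are assumptions, not taken from the patent):

# Attributes considered safe to reproduce in the synthesized speech 120.
SAFE_ATTRIBUTES = {"volume", "pitch", "cadence", "intonation", "hesitation"}

def select_safe_subset(attributes: dict) -> dict:
    """Drop every attribute not on the allowlist before calling the TTS service."""
    return {name: value for name, value in attributes.items()
            if name in SAFE_ATTRIBUTES}

An allowlist fails closed: a newly introduced attribute that might convey PII is excluded by default until it is deliberately marked safe.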

Because the attributes 116 conveying PII are removed from the speech 120, the speech 120 can be stored in a location (e.g., a data store 122 provided by a storage service 124) that is less secure than the storage required for the original utterance 104 containing the PII. Because the speech 120 includes limited or no attributes 116 that convey PII, the speech 120 may also be exposed to other network services or components for use in various ways (e.g., for training the STT service 112). In some embodiments, the original utterance 104 may also be discarded. For example, and without limitation, the utterance 104 may be securely deleted, removed, or otherwise not stored on non-volatile storage. If the utterance 104 is retained, it is stored in a location having a higher security level than the security level used to store the speech 120.

In some embodiments, the STT service 112 and/or the TTS service 118 may be configured to remove PII from the text 114. For example, and without limitation, the services may be configured to remove social security numbers, credit card numbers, and/or other types of PII from the text 114. In this way, PII contained in the utterance 104 will not be reflected in the stored speech 120.

As shown in FIG. 1, attributes 116 and text 114 may also be stored in data store 122. In some embodiments, the attributes 116 and text 114 are stored separately from each other and from the corresponding audio 120, thereby making it more difficult for an unauthorized user to create speech that includes all of the recognized attributes 116 of the utterance 104. In some embodiments, access to attributes 116 may be restricted based on the permission level of the user requesting access to attributes 116. This restriction may be applied per attribute 116. For example, a first level of permission may be required to access attributes 116 that do not convey PII (e.g., volume or pitch), while another higher level of permission may be required to access attributes 116 that convey PII (e.g., age or gender).
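A minimal sketch of such per-attribute access control, assuming two hypothetical permission levels (the patent describes the levels only abstractly):

PUBLIC, RESTRICTED = 1, 2  # hypothetical permission levels, low to high

# Reading a PII-conveying attribute requires the higher level.
REQUIRED_LEVEL = {
    "volume": PUBLIC, "pitch": PUBLIC, "cadence": PUBLIC,
    "age": RESTRICTED, "gender": RESTRICTED,
}

def read_attribute(name: str, value: str, user_level: int) -> str:
    """Return an attribute value only if the requester is sufficiently privileged."""
    required = REQUIRED_LEVEL.get(name, RESTRICTED)  # unknown attributes fail closed
    if user_level < required:
        raise PermissionError(f"permission level {required} required for {name!r}")
    return value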

In some configurations, the utterance storage service 100 or another component may provide a user interface ("UI") 126 to the user 108 or to an administrative user of the computing device 106. Through the UI 126, the user 108 can specify a configuration 128 for the operation of the utterance storage service 100. For example, and without limitation, the user 108 can utilize the UI 126 to define those attributes 116 that the STT service 112 is to recognize in the utterance 104. In this manner, the user 108 can limit the attributes 116 of the utterance 104 that are recognized by the STT service 112 and provided to the TTS service 118. For example, the user 108 may specify that the STT service 112 is not to recognize gender, age, or other attributes that convey PII.

In some embodiments, user 108 may also utilize UI 126 to define those attributes 116 to be utilized by TTS service 118 when generating audio 120. In this manner, the user 108 can restrict the attributes 116 of the utterance 104 contained in the speech 120 that was generated by the TTS service 118 and stored in the data store 122. For example, the user 108 may define a subset of the attributes 116 that will not be reflected in the speech 120 (e.g., gender, age, or other type of attribute that conveys PII). However, other attributes 116 of the original utterance 104 can be used to generate the speech 120.

As another example, if attributes 116 indicate that speaker 108 is a child, user 108 may utilize UI 126 to indicate that no attributes 116 will be utilized in generating speech 120. Thus, through the UI 126, the user 108 may specify those attributes 116 that are (or are not) to be recognized in the utterance 104, as well as those attributes 116 that are (or are not) to be used to generate speech 120 from the text 114 of the utterance 104. The UI 126 may also provide functionality for allowing the user 108 to specify whether the utterance 104 is to be stored or deleted and, if so, under what conditions.
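The configuration 128 collected through the UI 126 might be captured in a record like the following; the field names are hypothetical, since the patent only enumerates the choices a user can make:

from dataclasses import dataclass, field

@dataclass
class UtteranceStorageConfig:
    # Attributes 116 the STT service 112 is to recognize at all.
    recognize: set = field(default_factory=lambda: {"volume", "pitch", "cadence"})
    # Attributes 116 the TTS service 118 may reproduce in the speech 120.
    synthesize: set = field(default_factory=lambda: {"volume", "pitch", "cadence"})
    # Whether the original utterance 104 is to be stored rather than deleted.
    retain_original: bool = False

# Example: recognize age so that a child speaker can be detected, but never
# allow it to shape the synthesized speech.
config = UtteranceStorageConfig(recognize={"volume", "pitch", "cadence", "age"})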

It is to be appreciated that, in various embodiments, the above-described process can be performed synchronously as the utterance 104 is received, or asynchronously in an offline batch mode. Additional details regarding the various network services described above will be provided below with respect to FIG. 2.

FIG. 2 is a flow diagram illustrating a routine 200 that shows further aspects of the network service 100 of FIG. 1 for securely storing utterances 104, according to one embodiment. It should be understood that the logical operations described herein with respect to FIG. 2, and the other figures, may be implemented (1) as a series of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within a processor (such as the processor 500 described below with reference to FIG. 5).

The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special-purpose digital logic, and in any combination thereof. It should also be appreciated that more or fewer operations may be performed than shown in the figures and described herein. These operations may also be performed in parallel, or in a different order than described herein. Some or all of these operations may also be performed by components other than those specifically identified.

The routine 200 begins at operation 202, where a UI 126 may be provided to allow the user 108 to configure aspects of the operation of the utterance storage service 100. The configuration 128 may be received at operation 204 through the UI 126 before or during processing of the utterance. For example, and as described above, the user 108 can utilize the UI 126 to define those attributes 116 that the STT service 112 is to recognize (or not recognize) in the utterance 104. User 108 may also utilize UI 126 to define those attributes 116 that TTS service 118 is to use (or not use) when generating audio 120. Other types of configuration options for utterance storage service 100 can also be configured in UI 126.

From operation 204, the routine 200 proceeds to operation 206. At operation 206, the utterance storage service 100 determines whether the utterance 104 has been received from the computing device 106. If the utterance 104 has been received, the routine 200 proceeds from operation 206 to operation 208. At operation 208, the utterance storage service 100 provides the received utterance 104 to the STT service 112. The routine 200 then proceeds from operation 208 to operation 210.

At operation 210, the STT service 112 performs speech recognition on the utterance 104 to identify words contained therein. The STT service 112 may then generate text 114 containing the recognized words. At operation 210, the STT service 112 may also identify attributes 116 of the speech in the utterance 104. As described above, the configuration 128 may specify those attributes 116 that the STT service 112 is to recognize (or not recognize). From operation 210, the routine 200 proceeds to operation 212.

At operation 212, in some embodiments, the utterance 104 is discarded. The utterance 104 can be deleted or otherwise removed from persistent storage. In some embodiments, the UI 126 may be used to configure whether the utterance 104 is to be discarded. The routine 200 then proceeds from operation 212 to operation 214.

At operation 214, PII and/or other types of sensitive or undesirable information may be removed from the text 114. In some embodiments, the user 108 may utilize the UI 126 to specify the types of material to be removed from the text 114. In some embodiments, material is not removed from the text 114 but is instead encrypted, tokenized, or otherwise rendered unreadable and unable to be spoken. In these embodiments, the material may be accessed at a future time if necessary, for example by decrypting it. From operation 214, the routine 200 proceeds to operation 216.
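A sketch of operation 214 using regular expressions for the two PII types named above; the patterns are illustrative, and a production system would likely pair them with a dedicated PII detector:

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g., 123-45-6789
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13- to 16-digit card numbers

def redact(text: str, token: str = "[REDACTED]") -> str:
    """Replace matched PII spans so the TTS service can never speak them."""
    return CARD.sub(token, SSN.sub(token, text))

print(redact("card 4111 1111 1111 1111, SSN 123-45-6789"))
# card [REDACTED], SSN [REDACTED]

Where the material must remain recoverable, the substitution could instead store an encrypted or tokenized form, matching the alternative described above.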

At operation 216, the text 114 and/or attributes 116 may be stored, such as on the data store 122. As described above, text 114 and attributes 116 may be stored separately from each other and from corresponding speech 120. Attributes 116 may be stored in a location having a higher security level than speech 120. Access to the attributes 116 may also be allowed based on the access rights of the requesting user.

From operation 216, the routine 200 proceeds to operation 218, where the STT service 112 provides the text 114 and attributes 116 to the TTS service 118. TTS service 118 then generates speech 120 using text 114 and zero or more attributes 116 at operation 220. As described above, user 108 may define a configuration 128 that specifies which attributes 116 (if any) are to be used to generate speech 120. In this manner, user 108 may define a subset of attributes 116 (e.g., those that do not convey PII) that will be used to generate speech 120.

From operation 220, the routine 200 proceeds to operation 222, where the TTS service 118 stores the speech 120 on, for example, the data store 122 provided by the storage service 124. As described above, the speech 120 may be stored at a lower security level than that used to store the utterance 104, due to the removal of some or all of the attributes 116 from the speech 120. From operation 222, the routine 200 proceeds to operation 224, where it ends. Other utterances 104 may be processed in the manner described above.

As also described above, in some embodiments, speech 120, text 114, and/or attributes 116 may be exposed to other web services. For example, the PII-removed text 114 and corresponding speech 120 may be exposed to the STT service 112 for training purposes.

FIG. 3 is a system architecture diagram illustrating another example embodiment of the network service 100 of FIG. 1 for securely storing utterances. In this embodiment, the utterance storage service 100 or another component is configured to provide the speech 120, generated by the TTS service 118 from the text 114A, to the STT service 112. In turn, the STT service 112 performs speech recognition on the speech 120 to identify the words (i.e., text 114B) contained therein.

The utterance storage service 100 or another component then compares the text 114A with the text 114B. If the texts 114A and 114B match, the original utterance 104 can be discarded. In addition, in some configurations the speech 120 may also be stored, based on user preferences.

If the texts 114A and 114B do not match, the utterance 104 can be stored and the speech 120 can be discarded. Additionally, the text 114A generated from the utterance 104 may also be stored. Additional details regarding this process will be provided below with respect to FIG. 4.
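A sketch of this verification loop, reusing the hypothetical stt, tts, and data_store clients from the earlier sketch; the whitespace and case normalization is an implementation choice that the patent does not mandate:

def verify_and_store(utterance_audio: bytes, stt, tts, data_store) -> None:
    """Keep the synthesized speech only if it round-trips through the STT service."""
    text_a, attributes = stt.recognize(utterance_audio)  # text 114A
    speech = tts.synthesize(text_a, attributes)          # speech 120 (a safe subset in practice)
    text_b, _ = stt.recognize(speech)                    # text 114B

    def normalized(text: str) -> str:
        return " ".join(text.lower().split())

    if normalized(text_a) == normalized(text_b):
        data_store.put(speech)                   # utterance 104 may be discarded
    else:
        data_store.put_secure(utterance_audio)   # retain the original utterance
        data_store.put_text(text_a)              # text 114A may also be stored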

FIG. 4 is a flow diagram illustrating a routine 400 that shows further aspects of the network service 100 of FIG. 3 for securely storing utterances, according to one embodiment. The routine 400 begins at operation 402, where the speech 120 is generated in the manner described above with respect to FIGS. 1 and 2. The routine 400 then continues to operation 404, where the speech 120 is provided to the STT service 112.

From operation 404, the routine 400 proceeds to operation 406, where the STT service 112 converts the TTS-generated audio 120 to text 114B. The routine 400 then proceeds to operation 408, where the utterance storage service 100 or another component compares the text 114B to the text 114A generated from the original utterance 104. The routine 400 then proceeds from operation 408 to operation 410.

At operation 410, the utterance storage service 100 or another component determines whether the text 114A and the text 114B are identical (i.e., match). If the text 114A and the text 114B match, the routine 400 proceeds from operation 410 to operation 412, where the original utterance 104 is discarded (e.g., deleted from persistent storage). The routine 400 then continues to operation 414, where the speech 120 may be stored. The routine 400 then proceeds from operation 414 to operation 424, where it ends.

If, at operation 410, the utterance storage service 100 or another component determines that the text 114A and the text 114B do not match, the routine 400 proceeds from operation 410 to operation 416, where the utterance storage service 100 or another component determines whether the speech 120 is to be stored. In some embodiments, the UI 126 may be used to define a configuration 128 that indicates whether the speech 120 is to be stored when the text 114A does not match the text 114B. If the speech 120 is not to be stored, the routine 400 proceeds from operation 418 to operation 424, where it ends.

If the speech 120 is to be stored, the routine 400 proceeds from operation 418 to operation 420, where the speech 120 is stored, such as in the data store 122. The routine 400 then continues from operation 420 to operation 422, where the text 114A may also be stored in accordance with the user preferences. The routine 400 then proceeds from operation 422 to operation 424, where it ends.

FIG. 5 is a system architecture diagram illustrating aspects of the configuration and operation of a processor configured to securely store utterances, according to one embodiment. As shown in FIG. 5, the processor 500 includes a decoder circuit 502 configured to decode instructions to securely store the utterance 104 in the manner described above.

The instructions decoded by the decoder circuit may be general-purpose instructions or function-specific instructions that receive the speech attributes 116 and the text 114 of the utterance 104 from the STT service 112. When the decoder circuit 502 is configured to decode function-specific instructions, the decoder circuit 502 may also be specific to those instructions in order to decode the particular instruction fields defined by the x86, ARM, MIPS, or other architectures, such as an opcode, one or more data fields (e.g., immediate values or addresses of data), and so forth. In other embodiments, other processor configurations may be utilized to implement the functions described above. The decoder circuit 502 may output the audio 120 and store the audio 120 in the appropriate data store 122 in the manner described above.

FIG. 6 is a system and network diagram illustrating aspects of a distributed execution environment 602 that can provide computing resources for implementing the various techniques disclosed herein, including, but not limited to, the distributed execution environment 102, the utterance storage service 100, the STT service 112, and the TTS service 118, each of which was described in detail above. The computing resources provided by the distributed execution environment 602 may include various types of computing resources, such as data processing resources, data storage resources, networking resources, data communication resources, network services, and so forth.

Each type of computing resource provided by the distributed execution environment 602 may be general or may be capable of being used in many specific configurations. For example, a data processing resource may be capable of being used in many different configurations as a physical computer or virtual machine instance. The virtual machine instances may be configured to execute application programs, including web servers, application servers, media servers, database servers, some or all of the services described herein, and/or other types of programs. The data storage resources may include file storage, block storage, and the like. The distributed execution environment 602 may also be configured to provide other types of resources and network services.

In one embodiment, the computing resources provided by the distributed execution environment 602 are implemented by one or more data centers 604A-604N (which may be referred to herein in the singular as "data center 604" or in the plural as "data centers 604"). A data center is a facility for housing and operating computer systems and related components, and typically includes redundant and backup power, communications, cooling, and security systems. The data centers 604 may also be located in different geographic locations. One illustrative configuration of a data center 604 that implements some of the techniques disclosed herein will be described below with respect to FIG. 7.

Users of the distributed execution environment 602, such as users of the computing device 106, may access the various resources provided by the distributed execution environment 602 over a network 606, which may be a wide area communication network ("WAN"), such as the Internet, an intranet, or an Internet service provider ("ISP") network, or a combination of such networks. For example, and without limitation, a user computing device 608 may be used to access the distributed execution environment 602 through the network 606. It should be understood that a local area network ("LAN"), the Internet, or any other networking topology known in the art that connects the data centers 604 to remote computers may also be utilized. It will also be appreciated that combinations of such networks may be utilized.

FIG. 7 is a computing system diagram illustrating one configuration of a data center 604 that may be used to implement the techniques disclosed herein. The example data center 604 shown in FIG. 7 may include a plurality of server computers 702A-702F (which may be referred to herein in the singular as "server computer 702" or in the plural as "server computers 702") for providing computing resources 708A-708E.

The server computers 702 may be standard tower, rack-mount, or blade server computers suitably configured to provide the computing resources 708. As described above, the computing resources 708 may be data processing resources, such as virtual machine instances or hardware computing systems, data storage resources, database resources, networking resources, and the like. Some of the server computers 702 may also be configured to execute a resource manager 704 capable of instantiating and/or managing the computing resources 708. For example, in the case of virtual machine instances, the resource manager 704 may be a virtual machine monitor or another type of program configured to allow multiple virtual machine instances to execute on a single server computer 702. The server computers 702 in the data center 604 may also be configured to provide network services and other types of services that support the provision of the elements of the utterance storage service 100, the STT service 112, and the TTS service 118, as well as the related functions described herein.

The data center 604 shown in FIG. 7 also includes a server computer 702F that may execute some or all of the software components described above. For example, and without limitation, the server computer 702F may be configured to execute software components for providing the utterance storage service 100. The server computer 702F may also be configured to execute other components and/or to store data that provides some or all of the functionality described herein. In this regard, it should be understood that the software components shown in FIG. 7 as executing on the server computer 702F may, in various embodiments, execute on many other physical or virtual servers in the data center 604.

In the example data center 604 shown in FIG. 7, a suitable LAN 706 is also used to interconnect the server computers 702A-702F. The LAN 706 is also connected to the network 606 shown in FIG. 6. It should be appreciated that the configurations and network topologies described herein have been greatly simplified and that many more computing systems, software components, networks, and networking devices may be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described herein.

Suitable load-balancing devices or other types of network infrastructure components may also be utilized to balance the load between each of the data centers 604A-604N, between each of the server computers 702A-702F in each data center 604, and, potentially, between the computing resources 708 in each data center 604. It should also be understood that the configuration of the data center 604 described with reference to FIG. 7 is merely illustrative and that other implementations may be utilized.

FIG. 8 shows an example computer architecture for a computer 800 capable of executing program components for implementing the various aspects of the functionality described herein. The computer architecture shown in FIG. 8 illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and may be utilized to execute any of the software components presented herein. For example, the computer architecture shown in FIG. 8 may be utilized to execute software components for providing the utterance storage service 100, as well as the related functions described herein. The computer architecture shown in FIG. 8 may also be utilized to implement the computing device 106.

The computer 800 includes a substrate 802 or "motherboard" that is a printed circuit board to which various components or devices may be connected via a system bus or other electrical communication path. In an illustrative embodiment, one or more central processing units ("CPUs") 804 operate in conjunction with a chipset 806. CPU 804 may be a standard programmable processor that performs arithmetic and logical operations necessary for the operation of computer 800.

The CPU 804 performs operations by transitioning from one discrete physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements may generally include electronic circuits, such as flip-flops, that maintain one of two binary states, and electronic circuits, such as logic gates, that provide an output state based on the logical combination of the states of one or more other switching elements. These basic switching elements may be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.

The chipset 806 provides an interface between the CPU 804 and the remaining components and devices on the substrate 802. The chipset 806 may provide an interface to a RAM 808 used as main memory in the computer 800. The chipset 806 may also provide an interface to a computer-readable storage medium, such as read-only memory ("ROM") 810 or non-volatile RAM ("NVRAM"), for storing basic routines that help to start the computer 800 and transfer information between various components and devices. The ROM 810 or NVRAM may also store other software components necessary for the operation of the computer 800 according to embodiments described herein.

The computer 800 may operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network 706. The chipset 806 may include functionality for providing network connectivity through a NIC 812, such as a gigabit Ethernet adapter. The NIC 812 is capable of connecting the computer 800 to other computing devices over the network 706. It should be appreciated that multiple NICs 812 may be present in the computer 800, connecting the computer to other types of networks and remote computer systems.

The computer 800 may be connected to a mass storage device 818 that provides non-volatile storage for the computer. The mass storage device 818 may store an operating system 820, programs 822, and data, which have been described in greater detail herein. The mass storage device 818 may be connected to the computer 800 through a storage controller 814 connected to the chipset 806. The mass storage device 818 may consist of one or more physical storage units. The storage controller 814 may interface with the physical storage units through a serial attached SCSI ("SAS") interface, a serial advanced technology attachment ("SATA") interface, a Fibre Channel ("FC") interface, or another type of interface for physically connecting and transferring data between computers and physical storage units.

The computer 800 may store data on the mass storage device 818 by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units, whether the mass storage device 818 is characterized as primary or secondary storage, and the like.

For example, the computer 800 may store information to the mass storage device 818 by issuing instructions through the storage controller 814 to change the magnetic properties of a particular location within the disk drive unit, the reflective or refractive properties of a particular location in the optical storage unit, or the electrical properties of a particular capacitor, transistor, or other discrete component in the solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer 800 may also read information from the mass storage device 818 by detecting the physical state or characteristics of one or more particular locations within the physical storage unit.

In addition to the mass storage device 818 described above, the computer 800 may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. Those skilled in the art will appreciate that computer-readable storage media can be any available media that provide for the non-transitory storage of data and that can be accessed by the computer 800.

By way of example, and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media include, but are not limited to, RAM, ROM, erasable programmable ROM ("EPROM"), electrically erasable programmable ROM ("EEPROM"), flash memory or other solid-state memory technology, compact disc ROM ("CD-ROM"), digital versatile disks ("DVD"), high-definition DVD ("HD-DVD"), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion.

As mentioned briefly above, the mass storage device 818 may store an operating system 820 for controlling the operation of the computer 800. In one embodiment, operating system 820 is a LINUX operating system. In another embodiment, operating system 820 is the WINDOWS SERVER operating system from MICROSOFT CORPORATION. In other embodiments, a UNIX operating system or one of its variants may be used as operating system 820. It should be understood that other operating systems may be used. The mass storage device 818 may store other systems or applications and data that are utilized by the computer 800.

In one embodiment, the mass storage device 818 or other computer-readable storage medium is encoded with computer-executable instructions that, when loaded into the computer 800 and executed, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. As described above, these computer-executable instructions transform the computer 800 by specifying how the CPU 804 transitions between states. According to one embodiment, the computer 800 may access a computer-readable storage medium that stores computer-executable instructions that, when executed by the computer 800, perform various processes described herein. Computer 800 may also include a computer-readable storage medium for performing any other computer-implemented operations described herein.

The computer 800 may also include one or more input/output controllers 816 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or another type of input device. Similarly, an input/output controller 816 may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or another type of output device. It should be understood that the computer 800 may not include all of the components shown in FIG. 8, may include other components that are not explicitly shown in FIG. 8, or may utilize an architecture completely different from that shown in FIG. 8.

One or more embodiments disclosed herein may include a processor to cause a speech-to-text service to perform speech recognition on first audio data to identify one or more first words and a plurality of attributes in the first audio data, to cause a text-to-speech service to generate second audio data using at least the one or more first words and a subset of the plurality of attributes, and to cause only the second audio data to be stored.

Optionally, in one or more embodiments disclosed herein, the plurality of attributes may indicate personally identifiable information of a speaker. Optionally, in one or more embodiments disclosed herein, the subset of the plurality of attributes may include attributes that are not indicative of the personally identifiable information of the speaker. Optionally, in one or more embodiments disclosed herein, the processor may further provide a user interface for defining the subset of the plurality of attributes. Optionally, in one or more embodiments disclosed herein, the processor may further cause a user interface to be provided for specifying whether the first audio data is to be stored or deleted. Optionally, in one or more embodiments disclosed herein, the processor may further cause speech recognition to be performed on the second audio data to identify one or more second words; cause the one or more first words to be compared to the one or more second words; and, in response to determining that the one or more first words are the same as the one or more second words, cause the first audio data to be discarded and the second audio data to be stored. Optionally, in one or more embodiments disclosed herein, in response to determining that the one or more first words are different from the one or more second words, the processor may further cause the first audio data to be stored and the second audio data to be discarded. Optionally, in one or more embodiments disclosed herein, the first audio data may include a first utterance of the one or more first words, and the second audio data may include a second utterance of the one or more first words.

One or more embodiments disclosed herein may cause speech recognition to be performed on first audio data to identify one or more first words and a plurality of attributes in the first audio data, cause second audio data to be generated using at least the one or more first words and a subset of the plurality of attributes, and cause only the second audio data to be stored.

Optionally, in one or more embodiments disclosed herein, the plurality of attributes may indicate personally identifiable information of a speaker. Optionally, in one or more embodiments disclosed herein, the subset of the plurality of attributes may include attributes that are not indicative of the personally identifiable information of the speaker. Optionally, one or more embodiments disclosed herein may provide a user interface for defining the subset of the plurality of attributes. Optionally, one or more embodiments disclosed herein may perform speech recognition on the second audio data to identify one or more second words, compare the one or more first words to the one or more second words, and, in response to determining that the one or more first words are the same as the one or more second words, discard the first audio data and store only the second audio data. Optionally, in response to determining that the one or more first words are different from the one or more second words, one or more embodiments disclosed herein may store the first audio data and discard the second audio data. Optionally, in one or more embodiments disclosed herein, the first audio data may include a first utterance of the one or more first words, and the second audio data may include a second utterance of the one or more first words.

It should be appreciated that techniques for securely storing utterances have been disclosed herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological acts, and computer-readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims.

The above-described subject matter is provided by way of illustration only and should not be construed as limiting. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the present invention, which is set forth in the following claims.
