Apparatus and method for recognizing speech and text
Note: This technology, "Apparatus and method for recognizing speech and text", was devised by 沙布霍吉特·查科拉达 on 2014-07-04. Its main content is as follows: an apparatus and method for recognizing speech and text, the method comprising: receiving speech including a plurality of languages as an input; recognizing a first speech of the speech by using a speech recognition algorithm matching a preset primary language; identifying the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language; determining a type of the non-primary language based on context information; recognizing a second speech of the non-primary language by applying a speech recognition algorithm matching the determined type of the non-primary language to the second speech; and outputting a result of recognizing the speech based on a result of recognizing the first speech and a result of recognizing the second speech.
1. A method of recognizing speech, the method comprising:
receiving speech including a plurality of languages as an input (S101);
recognizing a first speech of the speech by using a speech recognition algorithm matching a preset primary language;
identifying the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language (S103);
determining a type of the non-primary language based on context information (S105);
recognizing a second speech of the non-primary language by applying a speech recognition algorithm matching the determined type of the non-primary language to the second speech (S107); and
outputting a result of recognizing the speech based on a result of recognizing the first speech and a result of recognizing the second speech.
2. The method of claim 1, wherein the context information comprises at least one of:
whether a name of a country exists in the speech;
information about where the apparatus for recognizing speech is located;
dialogue history information;
an updated non-primary language database.
3. The method of claim 2, wherein determining the type of the non-primary language based on the context information comprises:
when a name of a country exists in the speech, determining that the language matching the name of the country is the non-primary language if the language matching the name of the country is not the primary language (S803); or
determining, based on measured Global Positioning System (GPS) coordinates, that the non-primary language is a language matching characteristics of at least one of a country and a place where the apparatus for recognizing speech is located (S805).
4. The method of claim 2, wherein determining the type of the non-primary language based on the context information comprises:
determining whether a dialogue history in which the non-primary language and a name of a country coexist has been previously stored; and
when the dialogue history includes the non-primary language and the name of the country, determining that the non-primary language is a language matching the name of the country.
5. The method of claim 2, wherein determining the type of the non-primary language based on the context information comprises:
determining whether a history of identifying the non-primary language exists in a non-primary language database; and
when the history of identifying the non-primary language exists in the non-primary language database, immediately identifying the non-primary language based on a result of identification corresponding to the history of identifying the non-primary language.
6. The method of claim 1, wherein the step of identifying a preset primary language and a non-primary language different from the preset primary language comprises:
segmenting the speech in units of phonemes;
determining a similarity between at least one segmented phoneme and a word of the primary language by matching the at least one segmented phoneme to a database of primary language phonemes; and
identifying that the at least one segmented phoneme whose determined similarity is less than a preset threshold belongs to the non-primary language.
7. The method of claim 1, further comprising: updating at least one of the context information and the speech recognition algorithm matching the non-primary language by reflecting a result of identifying the non-primary language therein;
displaying the result of identifying the non-primary language;
receiving as input user feedback indicating whether the result of identifying the non-primary language is appropriate or inappropriate;
when the user feedback indicates that the result of identifying the non-primary language is appropriate, updating the non-primary language database by reflecting the result of identifying the non-primary language in the non-primary language database;
when the user feedback indicates that the result of identifying the non-primary language is inappropriate, excluding the result of identifying the non-primary language; and
re-identifying the non-primary language and outputting the re-identified non-primary language.
8. An apparatus for recognizing speech, the apparatus comprising:
a microphone (362) configured to receive as input speech comprising a plurality of languages including a preset primary language and a non-primary language different from the preset primary language;
a storage unit (375) configured to store a speech recognition algorithm matching the preset primary language and a speech recognition algorithm matching the non-primary language;
a controller (310) configured to recognize a first speech of the speech by using the speech recognition algorithm matching the preset primary language, identify the preset primary language included in the plurality of languages and the non-primary language different from the preset primary language, determine a type of the non-primary language based on context information, and recognize a second speech of the non-primary language by applying the speech recognition algorithm matching the determined type of the non-primary language to the second speech; and
a display unit (390) configured to output a result of recognizing the speech based on a result of recognizing the first speech and a result of recognizing the second speech.
9. The apparatus of claim 8, wherein the context information comprises at least one of:
whether a name of a country exists in the speech;
information about where the apparatus for recognizing speech is located;
dialogue history information;
an updated non-primary language database.
10. The apparatus of claim 9, further comprising: a Global Positioning System (GPS) module (355) configured to measure GPS coordinates at which the apparatus for recognizing speech is located and to output the measured GPS coordinates,
wherein the controller (310) is configured to determine, when a name of a country is present in the speech, that the language matching the name of the country is the non-primary language if the language matching the name of the country is not the primary language; or
wherein the controller (310) is configured to determine, based on the measured GPS coordinates, that the non-primary language is a language matching characteristics of at least one of a country and a place in which the apparatus for recognizing speech is located.
11. The apparatus of claim 9, wherein the storage unit (375) is configured to pre-store a dialogue history in which the non-primary language and a name of a country coexist, and the controller (310) is configured to determine that the non-primary language is a language matching the name of the country when the dialogue history includes the non-primary language and the name of the country.
12. The apparatus according to claim 9, wherein the storage unit (375) is configured to store a non-primary language database, and the controller (310) is configured to immediately identify the non-primary language based on a result of identification corresponding to a history of identifying the non-primary language when the history of identifying the non-primary language exists in the non-primary language database.
13. The apparatus of claim 8, wherein the storage unit (375) is configured to store a database of phonemes of the primary language, and the controller (310) is configured to segment the speech in units of phonemes, to determine a similarity between at least one segmented phoneme and a word of the primary language by matching the at least one segmented phoneme to the database of phonemes of the primary language, and to identify that the at least one segmented phoneme whose determined similarity is less than a preset threshold belongs to the non-primary language.
14. A method of recognizing text, the method comprising:
receiving as input text including characters of a plurality of languages (S1001);
recognizing a first text of the text by using a text recognition algorithm matching a preset primary language (S1003);
identifying a preset primary language included in the plurality of languages and a non-primary language different from the preset primary language (S1005);
determining a type of the non-primary language based on context information (S1007);
recognizing a second text of the text in the non-primary language by applying a text recognition algorithm corresponding to the determined type of the non-primary language to the second text (S1009);
converting a result of recognizing the text, based on a result of recognizing the first text and a result of recognizing the second text, into speech; and
outputting the converted speech (S1011).
15. An apparatus for recognizing text, the apparatus comprising:
an input unit (360) configured to receive as input text comprising characters of a plurality of languages;
a controller (310) configured to recognize a first text of the text by using a text recognition algorithm matching a preset primary language; identify the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language; determine a type of the non-primary language based on context information; recognize a second text of the text in the non-primary language by applying a text recognition algorithm corresponding to the determined type of the non-primary language to the second text; and convert a result of recognizing the text, based on a result of recognizing the first text and a result of recognizing the second text, into speech; and
an output unit (360) configured to output the converted speech.
Technical Field
The present disclosure relates generally to an apparatus and method for recognizing speech, and more particularly, to an apparatus and method for recognizing speech including a plurality of languages.
Background
With advances in transportation and communications, exposure to foreign-language speech has grown rapidly. However, those who are not fluent in a foreign language may have difficulty understanding it. In this regard, methods for recognizing foreign-language speech and converting the recognition result into text have been developed and have made progress.
In a method for recognizing speech according to the related art, input speech is divided in units of phonemes, and each of the divided phonemes is then compared with a database to determine which text or word the speech matches.
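The related-art pipeline described above — segment the input into phonemes, then look phoneme runs up in a word database — can be sketched as follows. This is an illustrative sketch only; the database entries, phoneme symbols, and greedy longest-match strategy are invented for the example and are not taken from the patent.

```python
# Hypothetical phoneme-sequence-to-word database (illustrative entries only).
PHONEME_DB = {
    ("h", "aw"): "how",
    ("w", "a", "z"): "was",
    ("y", "o", "r"): "your",
}

def recognize(phonemes):
    """Greedily match the longest phoneme run found in the database."""
    words, start = [], 0
    while start < len(phonemes):
        match = None
        # Try the longest candidate run first, shrinking toward one phoneme.
        for end in range(len(phonemes), start, -1):
            candidate = tuple(phonemes[start:end])
            if candidate in PHONEME_DB:
                match = (PHONEME_DB[candidate], end)
                break
        if match is None:
            start += 1          # no word matches: skip this phoneme
        else:
            words.append(match[0])
            start = match[1]
    return words

print(recognize(["h", "aw", "w", "a", "z"]))  # ['how', 'was']
```

In practice the comparison is acoustic and probabilistic rather than an exact table lookup, but the control flow — divide, compare against a database, emit the matching word — is the same.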
Meanwhile, since people of one country increasingly live in foreign countries, it is necessary to process speech that contains a plurality of languages simultaneously. For example, people in a particular country may mainly use the language of that country while mixing in words from a language used in another country. Therefore, it is desirable to develop an apparatus and method for recognizing speech including a plurality of languages.
For example, in a related-art method for recognizing speech including a plurality of languages, a word or common conversational sentence shared across the languages of various countries is defined as a code, and the languages of the various countries are respectively mapped to the code. This method is disadvantageous in that the amount of computation increases rapidly, because unless the type of language different from the language set as the primary language is specified, the mapping must be performed against all languages.
In particular, a method for recognizing speech that cannot operate in real time has low utility, and thus reducing the amount of computation is one of the important requirements for developing such technology.
Therefore, when recognizing speech that includes a primary language and a non-primary language, there is a need for an apparatus and method capable of quickly identifying the type of the non-primary language different from the primary language and of recognizing the multilingual speech in real time.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Disclosure of Invention
To address the above-discussed drawbacks, a primary object is to provide an apparatus and method capable of quickly identifying the type of a non-primary language different from a primary language and of recognizing speech including a plurality of languages in real time, when recognizing speech that includes a primary language and a non-primary language.
According to an aspect of the present disclosure, a method of recognizing speech is provided. The method includes: receiving speech including a plurality of languages as an input; recognizing a first speech of the speech by using a speech recognition algorithm matching a preset primary language; identifying the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language; determining a type of the non-primary language based on context information; recognizing a second speech of the non-primary language by applying a speech recognition algorithm matching the determined type of the non-primary language to the second speech; and outputting a result of recognizing the speech based on a result of recognizing the first speech and a result of recognizing the second speech.
According to another aspect of the present disclosure, an apparatus for recognizing speech is provided. The apparatus includes: a microphone that receives as input speech of a plurality of languages including a preset primary language and a non-primary language different from the preset primary language; a storage unit that stores a speech recognition algorithm matching the preset primary language and a speech recognition algorithm matching the non-primary language; a controller that recognizes a first speech of the speech by using the speech recognition algorithm matching the preset primary language, identifies the preset primary language included in the plurality of languages and the non-primary language different from the preset primary language, determines a type of the non-primary language based on context information, and recognizes a second speech of the non-primary language by applying the speech recognition algorithm matching the determined type of the non-primary language to the second speech; and a display unit that outputs a result of recognizing the speech based on a result of recognizing the first speech and a result of recognizing the second speech.
According to another aspect of the present disclosure, a method of recognizing text is provided. The method includes: receiving as input text comprising characters of a plurality of languages; recognizing a first text of the text by using a text recognition algorithm matching a preset primary language; identifying the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language; determining a type of the non-primary language based on context information; recognizing a second text of the text in the non-primary language by applying a text recognition algorithm matching the determined type of the non-primary language to the second text; converting a result of recognizing the text, based on a result of recognizing the first text and a result of recognizing the second text, into speech; and outputting the converted speech.
According to another aspect of the present disclosure, an apparatus for recognizing text is provided. The apparatus includes: an input unit that receives as input text including characters of a plurality of languages; a controller that recognizes a first text of the text by using a text recognition algorithm matching a preset primary language, identifies the preset primary language included in the plurality of languages and a non-primary language different from the preset primary language, determines a type of the non-primary language based on context information, recognizes a second text of the non-primary language by applying a text recognition algorithm matching the determined type of the non-primary language to the second text, and converts a result of recognizing the text, based on a result of recognizing the first text and a result of recognizing the second text, into speech; and an output unit that outputs the converted speech.
Before undertaking the detailed description below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation; the term "or" is inclusive, meaning and/or; the phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system, or part thereof that controls at least one operation, where such a device may be implemented in hardware, firmware, or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior as well as future uses of such defined words and phrases.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like reference numbers represent like parts:
FIG. 1 is a flowchart illustrating a method for recognizing speech according to an embodiment of the present disclosure;
FIGS. 2A to 2D are conceptual diagrams illustrating a process for recognizing speech including a primary language and a non-primary language according to an embodiment of the present disclosure;
FIGS. 3A and 3B are block diagrams each illustrating a configuration of an apparatus for recognizing speech according to various embodiments of the present disclosure;
FIG. 4 is a flowchart illustrating a method for recognizing speech according to an embodiment of the present disclosure;
FIG. 5 is a flowchart detailing a method for recognizing speech according to an embodiment of the present disclosure;
FIGS. 6A to 6F are conceptual diagrams illustrating an apparatus for recognizing speech in a method for recognizing speech according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a method for recognizing speech according to another embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a process for determining a type of a non-primary language based on pieces of context information in a method for recognizing speech according to an embodiment of the present disclosure;
FIG. 9 is a flowchart illustrating a method for recognizing speech containing multiple languages according to an embodiment of the present disclosure;
FIG. 10 is a flowchart illustrating a text-to-speech (TTS) method according to an embodiment of the present disclosure.
Throughout the drawings, it should be noted that the same reference numerals are used to designate the same or similar elements, features and structures.
Detailed Description
Figures 1 through 10, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged electronic device.
FIG. 1 is a flowchart illustrating a method for recognizing speech according to an embodiment of the present disclosure. In addition, FIGS. 2A to 2D are conceptual diagrams illustrating a method for recognizing speech according to an embodiment of the present disclosure. The method for recognizing speech illustrated in FIG. 1 will be described in more detail below with reference to FIGS. 2A to 2D.
Referring to FIG. 1, an apparatus for recognizing speech receives speech including a plurality of languages as an input at step S101. The plurality of languages may include a primary language and a non-primary language. Here, the primary language may be the language occupying the larger part of the input speech. In contrast, the non-primary language is of a type different from the primary language and may be a language occupying a smaller portion of the input speech. Because the primary language accounts for a large portion of the input speech, it is set in the apparatus for recognizing speech as the type of language to be recognized. In other words, a language that has been previously set for recognition in the apparatus may be referred to as a "primary language", and a language that has not been previously set as a language to be recognized may be referred to as a "non-primary language".
In this example, consider a case where a user who mainly speaks a first language also speaks a second language intermittently. Since the user mainly speaks the first language, the first language is set in the apparatus for recognizing speech as the language to be recognized. In contrast, the second language is spoken relatively intermittently and is thus not set as a language to be recognized. Accordingly, the first language is the primary language and the second language is a non-primary language.
Fig. 2A is a conceptual diagram illustrating speech including a primary language and a non-primary language according to an embodiment of the disclosure.
FIG. 2A shows an input of the expression "How was your Chuseok holiday?"
Referring back to FIG. 1, the apparatus for recognizing speech recognizes a non-primary language from the input speech.
Alternatively, the apparatus for recognizing speech may divide the input speech in units of phonemes.
Those skilled in the art will readily appreciate that the technical idea of the present disclosure is not limited by the type of method for recognizing an acoustic model or a language model.
The apparatus for recognizing speech matches each of the segmented phonemes with a phoneme database, thereby determining a similarity therebetween. For example, the apparatus recognizes a matching relationship between individual phonemes, or a set of phonemes, and the words stored in the database.
Referring to FIG. 2B, the apparatus for recognizing speech recognizes a matching relationship between the first phoneme P1 and the primary language phoneme database, and determines the corresponding similarity.
FIG. 2C is a conceptual diagram illustrating a process for recognizing the phonemes "w", "a", and "z" as the word "was". Referring to FIG. 2C, the apparatus for recognizing speech detects the matching result and similarity of the segmented phoneme "w", detects the matching results and similarities of the phonemes "w" and "a", and detects the matching results and similarities of the three phonemes "w", "a", and "z". As shown in FIG. 2C, the apparatus recognizes that the similarity between the three phonemes "w", "a", and "z" and the word "was" of the primary language database is equal to or greater than a preset threshold, and accordingly recognizes the three phonemes as the word "was".
Meanwhile, FIG. 2D is a conceptual diagram illustrating a process of recognizing "Chuseok". Referring to FIG. 2D, the apparatus for recognizing speech detects the matching result and similarity of the segmented phoneme "ch"; of the phonemes "ch" and "u"; of the phonemes "ch", "u", and "s"; of the phonemes "ch", "u", "s", and "aw"; and of the phonemes "ch", "u", "s", "aw", and "k". In this regard, the apparatus may recognize that no word matches any of these phoneme sets. Alternatively, the apparatus may recognize that each similarity is smaller than the preset threshold. Accordingly, the apparatus determines that a word corresponding to "Chuseok" does not exist in the primary language database. The process described above may be termed a "confidence measure".
In this regard, the apparatus for recognizing speech may perform a separation operation. In the embodiment of the present disclosure shown in FIGS. 2C and 2D, the apparatus recognizes that the three phonemes "w", "a", and "z" correspond to the word "was", and that no words correspond to the phonemes "ch", "u", "s", "aw", and "k". Accordingly, the apparatus recognizes that the three phonemes "w", "a", and "z" belong to English, and that the phonemes "ch", "u", "s", "aw", and "k" belong to a language other than English. The apparatus determines that the phonemes "ch", "u", "s", "aw", and "k" belong to a non-primary language, determines that the remaining phonemes belong to the primary language, separates the remaining phonemes from the phonemes "ch", "u", "s", "aw", and "k", and thereby isolates the non-primary language.
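The separation operation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the per-phoneme similarity scores and the threshold are invented stand-ins for real acoustic-model confidences.

```python
THRESHOLD = 0.8  # hypothetical confidence threshold

def similarity_to_primary(phoneme):
    # Stand-in for a real acoustic/language-model score against the
    # primary-language (English) database; values invented for illustration.
    scores = {"w": 0.9, "a": 0.95, "z": 0.85,
              "ch": 0.3, "u": 0.4, "s": 0.5, "aw": 0.2, "k": 0.35}
    return scores.get(phoneme, 0.0)

def separate(phonemes):
    """Split phonemes into primary-language and non-primary-language runs."""
    primary, non_primary = [], []
    for p in phonemes:
        if similarity_to_primary(p) >= THRESHOLD:
            primary.append(p)
        else:
            non_primary.append(p)
    return primary, non_primary

primary, non_primary = separate(["w", "a", "z", "ch", "u", "s", "aw", "k"])
print(primary)      # ['w', 'a', 'z']
print(non_primary)  # ['ch', 'u', 's', 'aw', 'k']
```

The non-primary run is then handed to a second recognizer once its language type has been determined, as described below.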
Meanwhile, the above-described processing can be similarly applied to a case where speech including three or more different languages is received. In this example, an apparatus for recognizing speech receives as input speech including a first language, a second language, and a third language. The apparatus for recognizing speech sets the first language as a main language. The apparatus for recognizing speech determines a similarity of phonemes or phoneme sets based on a speech recognition algorithm matching the first language. The apparatus for recognizing speech determines that a phoneme or a set of phonemes having a similarity smaller than a first threshold belongs to a language different from the first language. In addition, the apparatus for recognizing speech determines a similarity of the phonemes or phoneme sets based on a speech recognition algorithm matching the second language. The apparatus for recognizing speech determines that the phonemes or the phoneme sets having a similarity smaller than a second threshold belong to a language different from the second language. In addition, the apparatus for recognizing speech recognizes the remaining speech by using a speech recognition algorithm matched to the third language. As described above, the apparatus for recognizing speech recognizes speech including a plurality of languages.
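The cascaded handling of three or more languages described above can be sketched as follows: each language's recognizer keeps the phonemes it scores at or above its threshold and hands the remainder to the next recognizer. The scoring functions, thresholds, and language names are invented for illustration.

```python
def cascade(phonemes, recognizers):
    """recognizers: list of (language, score_fn, threshold), tried in order."""
    results, remaining = {}, list(phonemes)
    for language, score_fn, threshold in recognizers:
        # Each recognizer claims the phonemes it is confident about.
        results[language] = [p for p in remaining if score_fn(p) >= threshold]
        remaining = [p for p in remaining if score_fn(p) < threshold]
    results["unmatched"] = remaining  # left for a final fallback recognizer
    return results

# Toy scorers: each "recognizer" knows only its own phoneme inventory.
first = lambda p: 1.0 if p in {"w", "a", "z"} else 0.0
second = lambda p: 1.0 if p in {"ch", "u"} else 0.0
out = cascade(["w", "ch", "u", "s", "a"],
              [("first", first, 0.5), ("second", second, 0.5)])
print(out)  # {'first': ['w', 'a'], 'second': ['ch', 'u'], 'unmatched': ['s']}
```

The key property matching the text is that phonemes below the first threshold are not discarded but re-scored against the second language, and so on through the third.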
Referring back to FIG. 1, the apparatus for recognizing speech determines the type of the non-primary language based on context information at step S105. In the embodiment of the present disclosure shown in FIG. 1, the context information includes at least one of: whether the name of a country exists in the speech, information on where the apparatus for recognizing speech is located, dialogue history information, and an updated non-primary language database. For example, when it is determined that the apparatus is located in Korea, the apparatus determines the type of the non-primary language to be Korean. The process of determining the type of non-primary language that matches each piece of context information is described in more detail below.
Meanwhile, the apparatus for recognizing speech may display the determined type of the non-primary language. The user may review the type of non-primary language and may enter positive or negative feedback regarding the identified type. The apparatus may then determine the type of the non-primary language in response to the input feedback. For example, when the user enters positive feedback, the apparatus may finalize the determined type of non-primary language. Conversely, when the user inputs negative feedback, the apparatus may determine the type of the non-primary language to be another language. Alternatively, the apparatus may provide a user interface (UI) that presents a list of other languages and allows the user to select the type of the non-primary language.
In step S107, the apparatus for recognizing speech recognizes the primary language by using the preset language recognition algorithm, and recognizes the non-primary language by using an algorithm for recognizing speech of the determined type of language. For example, the apparatus applies an algorithm for recognizing Korean speech to the phonemes "ch", "u", "s", "aw", and "k", thereby recognizing that these phonemes correspond to the word "Chuseok".
Because the apparatus for recognizing speech determines the type of the non-primary language based on context information, the amount of computation that would otherwise be required to match the non-primary language against all languages can be greatly reduced.
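The context-based determination of steps S105 and S107 can be sketched as the following lookup, tried in a plausible priority order. All mappings and key names here are hypothetical; the patent does not fix a priority order among the context cues.

```python
def determine_non_primary_type(context):
    """Pick a non-primary language type from context cues, if any apply."""
    country_to_lang = {"Korea": "Korean", "Japan": "Japanese"}  # illustrative
    # 1. A country name spoken in the utterance itself.
    if context.get("country_in_speech") in country_to_lang:
        return country_to_lang[context["country_in_speech"]]
    # 2. Where the apparatus is located (e.g., from GPS coordinates).
    if context.get("device_country") in country_to_lang:
        return country_to_lang[context["device_country"]]
    # 3. A previously stored dialogue history naming the language.
    if context.get("history_language"):
        return context["history_language"]
    return None  # no cue: fall back to matching against all languages

print(determine_non_primary_type({"device_country": "Korea"}))  # Korean
```

When every cue fails, the method degrades to the related-art exhaustive matching; the context cues exist precisely to avoid that expensive path.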
FIG. 3A is a block diagram illustrating a configuration of an apparatus for recognizing speech according to an embodiment of the present disclosure.
Referring to fig. 3A, the
The
The
In addition, the
The
FIG. 3B is a more detailed block diagram illustrating a configuration of an apparatus for recognizing speech according to an embodiment of the present disclosure.
Referring to fig. 3B, the apparatus for recognizing
According to an embodiment of the present disclosure, the apparatus for recognizing
According to an embodiment of the present disclosure, the
The
According to an embodiment of the present disclosure, the
According to an embodiment of the present disclosure, the input/output module 360 includes at least one
The
The
The
The
According to a variant embodiment of the present disclosure, the apparatus for recognizing
The
According to an embodiment of the present disclosure, the
The
The
The input/output module 360 includes at least one input/output device, such as at least one of a plurality of
The
The
The
The
The
The
The sensor module 370 includes at least one sensor for detecting a state of the
The
The term "storage unit" may refer to any one of or a combination of the
The
The
According to an embodiment of the present disclosure, the touch is not limited to the touch of the body of the user or the input member capable of touching on the
According to an embodiment of the present disclosure, the
Meanwhile, the second touch panel 390b may measure a touch or proximity of an input means (such as a stylus pen). For example, the second touch panel 390b may be implemented in an electromagnetic radiation (EMR) measurement scheme.
The
Meanwhile, the
Fig. 4 is a flowchart illustrating a method for recognizing speech according to an embodiment of the present disclosure.
Referring to fig. 4, the apparatus for recognizing speech determines the type of a non-primary language based on context information at step S401. In step S403, the apparatus for recognizing speech recognizes speech of the non-primary language by using a speech recognition algorithm matched with the determined type of the non-primary language. Meanwhile, the apparatus for recognizing a voice updates at least one of context information and a voice recognition algorithm by using a result of recognizing the voice at step S405.
For example, as described above, in the embodiment of the present disclosure as illustrated in fig. 1 and fig. 2A to 2D, the phonemes "ch", "u", "s", "aw", and "k" are recognized as the Korean word "Chuseok", and the non-primary language database is updated. When the phonemes "ch", "u", "s", "aw" and "k" are received as input, the apparatus for recognizing speech immediately applies an algorithm for recognizing Korean speech to the received phonemes. Alternatively, when these phonemes are received as input, the apparatus for recognizing speech may immediately recognize them as the Korean word "Chuseok". As described above, embodiments of the present disclosure may provide a method for recognizing speech that reflects the characteristics of each user. In this example, consider a case in which an American unfamiliar with Korean pronounces "Chuseok" not as the phonemes "ch", "u", "s", "aw" and "k", but as the phonemes "ch", "u", "s", "o" and "k". In this case, the apparatus for recognizing speech recognizes the phonemes "ch", "u", "s", "o" and "k" as "Chuseok" in the method described with reference to fig. 1, and the correspondence between the phonemes "ch", "u", "s", "o" and "k" and "Chuseok" is used to update the non-primary language database. Thereafter, even when the same user inputs the phonemes "ch", "u", "s", "o", and "k" again, the apparatus for recognizing speech may immediately apply the algorithm for recognizing Korean speech to those phonemes, or may immediately recognize them as the Korean word "Chuseok". Therefore, in the method for recognizing speech according to the embodiment of the present disclosure, the apparatus for recognizing speech quickly determines that each user's distinct pronunciation belongs to the non-primary language and recognizes each user's distinct pronunciation.
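The per-user non-primary language database described above can be sketched as a simple keyed store. The class and field names below are illustrative assumptions; the point is that once a user's pronunciation has been recognized, later inputs from the same user can bypass language-type determination entirely.

```python
# Sketch of a per-user non-primary-language database: each entry maps a
# (user, phoneme sequence) pair to the previously recognized (language, word).

class NonPrimaryLanguageDB:
    def __init__(self):
        # (user, phoneme tuple) -> (language, word)
        self._entries = {}

    def update(self, user, phonemes, language, word):
        self._entries[(user, tuple(phonemes))] = (language, word)

    def lookup(self, user, phonemes):
        # Returns the cached recognition, or None if this user's
        # pronunciation has not been seen before.
        return self._entries.get((user, tuple(phonemes)))

db = NonPrimaryLanguageDB()
# Standard pronunciation learned for one user ...
db.update("user_a", ["ch", "u", "s", "aw", "k"], "korean", "Chuseok")
# ... and a different, user-specific pronunciation for another.
db.update("user_b", ["ch", "u", "s", "o", "k"], "korean", "Chuseok")

print(db.lookup("user_b", ["ch", "u", "s", "o", "k"]))  # ('korean', 'Chuseok')
```

Keying entries by user is what lets the apparatus handle each user's distinct pronunciation of the same non-primary word.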
Fig. 5 is a flowchart illustrating in detail a method for recognizing a voice according to an embodiment of the present disclosure. An embodiment of the present disclosure as shown in fig. 5 will be described in more detail below with reference to fig. 6A to 6F. Fig. 6A to 6F are conceptual views of an apparatus for recognizing voice illustrating a method for recognizing voice according to an embodiment of the present disclosure.
Referring to fig. 5, the apparatus for recognizing a speech determines a type of a non-primary language based on context information at step S501. In step S503, the apparatus for recognizing speech recognizes speech of the non-primary language by using a speech recognition algorithm matched with the determined type of the non-primary language.
In step S511, the apparatus for recognizing a speech receives a speech including a plurality of languages as an input, and outputs a result of recognizing the speech.
For example, as shown in fig. 6A, the apparatus for recognizing
Subsequently, as shown in fig. 6B, the apparatus for recognizing speech displays text corresponding to the input speech on the display unit. Referring to fig. 6B, the apparatus for recognizing speech displays the recognition result "How was your two sun holiday?". As shown in fig. 6B, the apparatus for recognizing
In step S513, the apparatus for recognizing speech receives user feedback on the recognition result as input. Here, the user feedback may indicate whether the result of the recognition is appropriate or inappropriate. In response to the erroneous recognition result as shown in fig. 6B, the user inputs user feedback indicating that the result of the recognition is not appropriate. For example, as shown in fig. 6C, the user inputs a
Alternatively, the user may indicate only the misrecognized portion. For example, the user may enter a drag gesture on the portion of the screen displaying "two sun". The apparatus for recognizing speech then recognizes that an error occurred in recognizing the phonemes "ch", "u", "s", "aw" and "k" that match "two sun", on which the user feedback has been input.
The apparatus for recognizing speech updates at least one of context information and a speech recognition algorithm based on the input user feedback at step S515. For example, in fig. 6C, the apparatus for recognizing speech updates at least one of context information and a speech recognition algorithm based on information about an error in recognizing the phonemes "ch", "u", "s", "aw", and "k" as the english word "two sun".
Meanwhile, the apparatus for recognizing speech may re-recognize the input speech and may display the re-recognition result, as shown in fig. 6D, for example. Referring to fig. 6D, the apparatus for recognizing speech displays the recognition result "How was your Chuseok holiday?". Referring to fig. 6E, the user may input user feedback that the result of the recognition is appropriate by touching the display unit once, as denoted by
The apparatus for recognizing speech updates at least one of the context information and the speech recognition algorithm based on the information that recognizing the phonemes "ch", "u", "s", "aw", and "k" as the Korean word "Chuseok" is appropriate. Therefore, when the phonemes "ch", "u", "s", "aw" and "k" are thereafter received as input, the apparatus for recognizing speech immediately recognizes them as the Korean word "Chuseok", excluding the English word "two sun" from the result of recognition, and provides the Korean word "Chuseok" recognized from the phonemes "ch", "u", "s", "aw" and "k".
As described above, the apparatus for recognizing speech updates the non-primary language database based on the result of recognizing the speech. The apparatus for recognizing speech may update the non-primary language database based on user feedback. Alternatively, the apparatus for recognizing speech may update the non-primary language database without user feedback.
Fig. 7 is a flowchart illustrating a method for recognizing speech according to another embodiment of the present disclosure.
In step S701, the apparatus for recognizing speech determines the type of the non-primary language based on the context information and recognizes the input speech. In step S703, the apparatus for recognizing speech outputs the result of the recognition. In step S705, the apparatus for recognizing speech receives as input user feedback indicating whether the result of the recognition is appropriate or inappropriate. When the user feedback indicates that the result of the recognition is appropriate (yes in step S707), the apparatus for recognizing speech updates the non-primary language database based on the relevant recognition result in step S711. When the user feedback indicates that the result of the recognition is not appropriate (no in step S707), the apparatus for recognizing speech excludes the relevant recognition result from the non-primary language database and re-recognizes the input speech in step S709.
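The feedback branch of steps S707 to S711 can be sketched as follows. The database layout and the `re_recognize` callback are illustrative assumptions, not part of the disclosed apparatus.

```python
# Sketch of steps S707-S711: keep a recognition result in the non-primary
# language database on positive feedback, or exclude it and re-recognize
# on negative feedback.

def handle_feedback(db, phonemes, result, positive, re_recognize):
    key = tuple(phonemes)
    if positive:
        db[key] = result          # S711: update the database
        return result
    db.pop(key, None)             # S709: exclude the rejected result ...
    return re_recognize(phonemes, excluded=result)  # ... and try again

db = {}
phonemes = ["ch", "u", "s", "aw", "k"]
# Hypothetical re-recognition callback that returns the corrected result.
retry = lambda p, excluded: ("korean", "Chuseok")

# Negative feedback on the wrong English result triggers re-recognition.
print(handle_feedback(db, phonemes, ("english", "two sun"), False, retry))
# Positive feedback on the corrected result stores it for later reuse.
print(handle_feedback(db, phonemes, ("korean", "Chuseok"), True, retry))
```

After the positive-feedback call, the database holds the corrected entry, so the same phonemes can later be recognized without re-running language detection.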
The above-described process can implement a method for recognizing speech that reflects the characteristics of each user. The apparatus for recognizing speech may configure the method for recognizing speech differently for each user. Alternatively, the apparatus for recognizing speech may transmit a method for recognizing speech having the characteristics of a specific user to another apparatus for recognizing speech. Therefore, even when the specific user uses another apparatus for recognizing speech, the specific user can use the method for recognizing speech having his or her characteristics without any change. Alternatively, the apparatus for recognizing speech may receive a method for recognizing speech having the characteristics of a specific user from the outside and may use it. In other words, the method for recognizing speech may transmit the non-primary language database to the outside, or may receive the non-primary language database from the outside.
Fig. 8 is a flowchart illustrating a process of determining a type of a non-primary language based on pieces of context information in a method for recognizing speech according to an embodiment of the present disclosure. Hereinafter, a process for determining the type of the non-primary language based on pieces of context information will be described with reference to fig. 8.
In step S801, the apparatus for recognizing speech recognizes a primary language and a non-primary language from the input speech.
In step S803, the apparatus for recognizing speech determines whether the name of a country is mentioned in one sentence. When the apparatus for recognizing voice determines that the name of the country is mentioned (yes in step S803), the apparatus for recognizing voice may determine that the language of the relevant country is a non-primary language type in step S811.
In this example, assume that the user says "Are there direct flights from Incheon, South Korea to Reykjavik, Iceland?" to the apparatus for recognizing speech. In addition, the apparatus for recognizing speech sets English as the primary language. The apparatus for recognizing speech recognizes that "Incheon" and "Reykjavik" belong to non-primary languages. Meanwhile, the apparatus for recognizing speech determines that the country name "South Korea" and the country name "Iceland" are mentioned in the input speech. Accordingly, the apparatus for recognizing speech determines that the type of the non-primary language to which "Incheon", adjacent to "South Korea", belongs is Korean, and determines that the type of the non-primary language to which "Reykjavik", adjacent to "Iceland", belongs is Icelandic.

In step S805, the apparatus for recognizing speech determines whether Global Positioning System (GPS) coordinates or location information exists. When the GPS coordinates or the location information exists (yes in step S805), the apparatus for recognizing speech determines the type of the non-primary language based on the GPS coordinates or the location information in step S811. Meanwhile, the terminal device may transmit the input speech to a server, and the server may then recognize the input speech. In this example, the server may receive the GPS coordinates of the terminal device. In addition, the server may determine the type of the non-primary language based on the received GPS coordinates of the terminal device.
For example, the apparatus for recognizing speech identifies the country in which it is located by using GPS coordinates. When the language of the identified country is not set as the primary language, the apparatus for recognizing speech determines the language of the identified country to be a non-primary language. In addition, the apparatus for recognizing speech determines where it is located by using the characteristics of that place. For example, when the place where the apparatus for recognizing speech is located is a French restaurant, the apparatus for recognizing speech determines that the non-primary language is French. The apparatus for recognizing speech may recognize location information by using GPS coordinates, or may recognize location information based on Wi-Fi channel characteristics, identifiers, etc., recognized by the
In step S807, the apparatus for recognizing a speech determines whether the dialogue history includes a language indicator. When the dialogue history includes the language indicator (yes in step S807), the apparatus for recognizing speech determines the type of the non-primary language based on the dialogue history in step S811. More specifically, the apparatus for recognizing speech determines whether the dialog history includes a name of a specific country. When the language matching the name of the specific country is not the primary language, the apparatus for recognizing voice determines that the language matching the name of the specific country corresponds to the type of the non-primary language.
For example, a user has a conversation with the apparatus for recognizing speech, and the apparatus for recognizing speech provides an output matching the speech that the user has input. Specifically, the user may input the question "Which city will host the Winter Olympics in 2018?". The apparatus for recognizing speech recognizes the input speech and analyzes its meaning. The apparatus for recognizing speech provides an output matching the speech that has been input by the user, and may provide, for example, the output "Pyeongchang, the Republic of Korea". The apparatus for recognizing speech may provide "Pyeongchang, the Republic of Korea" displayed in the form of text. Alternatively, the apparatus for recognizing speech may provide "Pyeongchang, the Republic of Korea" in the form of speech based on TTS. The apparatus for recognizing speech stores the dialogue history between itself and the user. Specifically, the apparatus for recognizing speech stores one sentence including the words "the Republic of Korea" and "Pyeongchang", and stores the phonemes corresponding to "the Republic of Korea" and the phonemes corresponding to "Pyeongchang".
Thereafter, when the apparatus for recognizing a speech receives as an input a speech having a phoneme corresponding to "Pyeongchang" as a phoneme of a non-primary language, the apparatus for recognizing a speech determines that a type of the non-primary language to which "Pyeongchang" belongs is korean, and applies an algorithm for recognizing a korean speech to "Pyeongchang".
In step S809, the apparatus for recognizing speech determines whether an updated non-primary language database exists. When the updated non-primary language database exists (yes in step S809), the apparatus for recognizing speech determines the type of the non-primary language based on the updated non-primary language database in step S811. More specifically, the apparatus for recognizing speech determines whether a history of recognizing a non-primary language exists in a non-primary language database. When there is a history of recognizing the non-primary language in the non-primary language database, the apparatus for recognizing a speech immediately recognizes the non-primary language based on a result of the recognition corresponding to the history, and outputs the result of recognizing the non-primary language. In contrast, when the updated non-primary language database does not exist (no in step S809), the apparatus for recognizing speech decodes the speech in a single language (i.e., the primary language) in step S813.
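The cascade of context checks in fig. 8 (S803, S805, S807, S809, then the S813 fallback) can be sketched as a simple priority chain. The argument names are illustrative assumptions; in practice each source would itself be a lookup (country-name table, GPS reverse geocoding, dialogue history, non-primary language database).

```python
# Sketch of the Fig. 8 cascade: consult each context source in order and
# use the first one that yields a language; if none does, fall back to
# decoding the speech in the primary language only (S813).

def determine_non_primary_type(country_name_language=None,   # S803
                               gps_language=None,            # S805
                               dialog_history_language=None, # S807
                               db_language=None,             # S809
                               primary_language="english"):
    for candidate in (country_name_language, gps_language,
                      dialog_history_language, db_language):
        if candidate is not None:
            return candidate
    return primary_language  # S813: single-language decoding

print(determine_non_primary_type(country_name_language="korean"))  # korean
print(determine_non_primary_type(gps_language="french"))           # french
print(determine_non_primary_type())                                # english
```

The ordering encodes the priority implied by the flowchart: an explicit country name in the sentence outranks GPS, which outranks dialogue history, which outranks the stored database.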
A method for determining the type of the non-primary language based on the updated non-primary language database has been described with reference to fig. 5. As described above, the method for recognizing speech according to the embodiment of the present disclosure may be implemented as a speech recognition method that reflects the characteristics of each user. In particular, the method for recognizing speech according to an embodiment of the present disclosure may be implemented as a speech recognition method that reflects each user's characteristics based on at least one of the user's voice, grammar/language pattern, and behavior pattern.
For example, each user's voice relates to that user's speech characteristics and is modeled by a separate phoneme model, or by the occurrence probability of each phoneme, for that user's utterances. In addition, the grammar/language pattern is determined by analyzing the grammar of the finally decoded text. Additionally, the behavior pattern may relate to the manner in which each user mixes multiple languages when speaking.
As described above, the method for recognizing speech according to the embodiment of the present disclosure detects the type of non-primary language based on pieces of context information.
Fig. 9 is a flowchart illustrating a method for recognizing speech containing multiple languages according to an embodiment of the present disclosure.
Referring to fig. 9, in step S901, the apparatus for recognizing speech receives a speech including a plurality of languages as an input. In the embodiment of the present disclosure as shown in fig. 9, the speech includes a first language and a second language. The apparatus for recognizing speech sets the first language as the primary language. In step S903, the apparatus for recognizing speech recognizes a first speech of the input speech by using a first speech recognition algorithm, which is the speech recognition algorithm matching the first language that has been set as the primary language.
In step S905, the apparatus for recognizing speech identifies the primary language and the non-primary language based on the result of recognizing the first speech. For example, as described above, the apparatus for recognizing speech determines that each phoneme whose similarity, as computed by the first speech recognition algorithm, is smaller than a preset threshold belongs to the non-primary language.
The apparatus for recognizing speech determines the type of the non-primary language, for example, determines that the type of the non-primary language is the second language, at step S907. At step S909, the apparatus for recognizing speech recognizes a second speech of the input speech by using a second speech recognition algorithm matched with the second language. In step S911, the apparatus for recognizing speech outputs a result of recognizing the speech based on the result of recognizing the first speech and the result of recognizing the second speech.
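The two-pass flow of fig. 9 can be sketched as follows: score each phoneme with the first (primary) language's recognizer, group consecutive low-similarity phonemes into non-primary spans (S905), and re-recognize those spans with the second language's recognizer (S909). The similarity scores, threshold value, and lexicon below are illustrative assumptions.

```python
# Sketch of the Fig. 9 flow: primary-language pass, threshold-based
# segmentation, then a second pass over the non-primary spans only.

THRESHOLD = 0.5

def recognize_mixed(phonemes, primary_score, second_recognizer):
    output, span = [], []
    for p in phonemes:
        if primary_score(p) < THRESHOLD:   # S905: likely non-primary phoneme
            span.append(p)
        else:
            if span:
                output.append(second_recognizer(span))  # S909: second pass
                span = []
            output.append(p)               # S903: keep primary-language result
    if span:
        output.append(second_recognizer(span))
    return output

# Illustrative second-language lexicon and per-phoneme similarity score.
korean = {("ch", "u", "s", "aw", "k"): "Chuseok"}
score = lambda p: 0.1 if p in {"ch", "u", "s", "aw", "k"} else 0.9
second = lambda span: korean.get(tuple(span), "?")

print(recognize_mixed(["how", "was", "your", "ch", "u", "s", "aw", "k",
                       "holiday"], score, second))
```

The final output merges the first-pass result for the primary-language portions with the second-pass result for the non-primary span, as in step S911.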
Fig. 10 is a flowchart illustrating a text-to-speech (TTS) method according to an embodiment of the present disclosure.
In step S1001, the apparatus for recognizing text receives text including characters of a plurality of languages as input. In an embodiment of the present disclosure as shown in fig. 10, the text includes characters in a first language and characters in a second language. The apparatus for recognizing text sets the first language as a main language. In step S1003, the apparatus for recognizing a text recognizes a first text of the input text by using a first text recognition algorithm, wherein the first text recognition algorithm is a text recognition algorithm matching the first language which has been set as a main language.
In step S1005, the apparatus for recognizing text identifies the primary language and the non-primary language based on the result of recognizing the first text. For example, the apparatus for recognizing text determines that each character whose similarity, as computed by the first text recognition algorithm, is smaller than a preset threshold belongs to the non-primary language.
The apparatus for recognizing text determines the type of the non-primary language, for example, determines that the type of the non-primary language is the second language, in step S1007. The apparatus for recognizing text determines the type of the non-primary language similarly to the method for recognizing speech described above. For example, the apparatus for recognizing text may determine the type of the non-primary language based on whether the input text includes the name of a particular country, whether the text history includes language indicators, context information, and/or GPS/location information.
In step S1009, the apparatus for recognizing text recognizes the second text of the input text by using the second text recognition algorithm matched with the second language. In step S1011, the apparatus for recognizing text outputs a result of recognizing text based on the result of recognizing the first text and the result of recognizing the second text. Specifically, the apparatus for recognizing text outputs a result of recognizing the first text and a result of recognizing the second text in the form of voice.
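For text, the character-level language split of steps S1003 to S1007 can be sketched by partitioning the input into per-language runs, each of which would then be routed to the matching recognition or synthesis engine. The choice of English as primary and Korean (detected via the Hangul syllables Unicode block) as non-primary is an illustrative assumption.

```python
# Sketch of S1003-S1007 for text: mark each run of characters as primary
# (assumed Latin/English here) or non-primary (assumed Korean, detected by
# the Hangul Syllables block U+AC00..U+D7A3).

def split_by_script(text):
    runs, current, current_lang = [], [], None
    for ch in text:
        lang = "korean" if "\uac00" <= ch <= "\ud7a3" else "english"
        if lang != current_lang and current:
            runs.append((current_lang, "".join(current)))
            current = []
        current_lang = lang
        current.append(ch)
    if current:
        runs.append((current_lang, "".join(current)))
    return runs

print(split_by_script("Happy 추석 holiday"))
```

Each `(language, run)` pair could then be passed to the text recognition algorithm matched with that language, and the per-run results concatenated and output as speech, as in steps S1009 to S1011.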
It is understood that embodiments of the present disclosure may be implemented in software, hardware, or a combination thereof. Any such software may be stored, for example, in volatile or non-volatile storage (such as ROM), memory (such as RAM, memory chips, storage devices, or memory ICs), or recordable optical or magnetic media (such as CDs, DVDs, magnetic disks, or magnetic tape), regardless of whether it is erasable or re-recordable. In addition, it will be understood that the exemplary embodiments of the present disclosure may be implemented by a computer or a portable terminal including a control unit and a memory, wherein the memory is an example of a machine-readable storage medium suitable for storing one or more programs including instructions for implementing the exemplary embodiments of the present disclosure. Accordingly, the present disclosure includes a program of code for implementing the apparatus and method described in the claims of this specification, and a machine-readable (computer-readable, etc.) storage medium storing the program. Further, the program as described above may be electronically transferred through any medium (such as a communication signal transmitted through a wired or wireless connection), and the present disclosure appropriately includes equivalents of such media.
Further, the apparatus may receive the program from a program providing device connected to the apparatus wirelessly or through a cable, and store the received program. The program providing apparatus may include a program including instructions for executing an exemplary embodiment of the present disclosure, a memory storing information and the like required for the exemplary embodiment of the present disclosure, a communication unit performing wired or wireless communication with an electronic apparatus, and a control unit transmitting the corresponding program to a transmitting/receiving apparatus in response to a request from the electronic apparatus or automatically.
Although the present disclosure has been described with reference to exemplary embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.