Editing support program, editing support method, and editing support apparatus


Reading note: this technique, "Editing support program, editing support method, and editing support apparatus", was devised by 三小田聪 and 滨田祐介 on 2019-03-15. Its main content is as follows: the editing support program causes a computer to execute processing of displaying, on a display unit, information indicating a speaker recognized for a sentence generated based on voice recognition, in association with a section of the sentence corresponding to the recognized speaker; when a first editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent sections become the same as a result of the first editing process, displaying the two or more adjacent sections in a combined state on the display unit; and, when a start point of a section to which a second editing process of editing the recognition result of the speaker is to be applied is specified within a specific section of the combined two or more sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, applying the second editing process to the section from the specified start point to the position corresponding to that start point.

1. An editing support program that causes a computer to execute processing of:

displaying, on a display unit, information indicating a speaker recognized for a sentence generated based on voice recognition, in association with a section of the sentence corresponding to the recognized speaker,

when a first editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent sections become the same as a result of the first editing process, displaying the two or more adjacent sections in a combined state on the display unit, and

when a start point of a section to which a second editing process of editing the recognition result of the speaker is to be applied is specified within a specific section of the combined two or more sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, applying the second editing process to the section from the specified start point to the position corresponding to the start point of that one of the two or more sections.

2. The editing support program according to claim 1, wherein

when the first editing process occurs and the speakers of the two or more adjacent sections become the same as a result of the first editing process, the first editing process is applied to the two or more adjacent sections, and the two or more adjacent sections are displayed on the display unit in a combined state.

3. The editing support program according to claim 1 or 2, wherein

a first editing screen requesting the first editing process and a second editing screen requesting the second editing process are displayed on the display unit,

the first editing process is applied to the two or more adjacent sections in accordance with an instruction given on the first editing screen, and the second editing process is applied to the section from the specified start point to the position corresponding to the start point of any one of the two or more sections in accordance with an instruction given on the second editing screen.

4. The editing support program according to claim 3, wherein

the first editing screen and the second editing screen each include information indicating speakers that are candidates for the editing, and the information indicating the speakers is arranged in a priority order corresponding to at least one of an utterance order and an utterance amount of the speakers.

5. The editing support program according to any one of claims 1 to 4, wherein

when the first editing process occurs in the middle of a section corresponding to the speaker, and as a result of the first editing process the speakers of the two or more adjacent sections before the middle of the section become the same and the speakers of the two or more adjacent sections after the middle of the section become the same, the two or more adjacent sections before the middle of the section are displayed in a combined state on the display unit, and then the two or more adjacent sections after the middle of the section are displayed in a combined state on the display unit.

6. The editing support program according to any one of claims 1 to 5, causing the computer to further execute processing of:

generating the sentence from the voice of the speaker by the voice recognition, and

recognizing the speaker for the generated sentence based on the voice of the speaker and a trained model in which features of the speaker's voice have been learned.

7. The editing support program according to any one of claims 1 to 6, causing the computer to further execute processing of:

storing, in a storage unit, the specified start point and the position corresponding to the start point of any one of the two or more sections, wherein

the second editing process is applied, with reference to the storage unit, to the section from the specified start point to the position corresponding to the start point of that one of the two or more sections.

8. An editing support method that causes a computer to execute processing of:

displaying, on a display unit, information indicating a speaker recognized for a sentence generated based on voice recognition, in association with a section of the sentence corresponding to the recognized speaker,

when a first editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent sections become the same as a result of the first editing process, displaying the two or more adjacent sections in a combined state on the display unit, and

when a start point of a section to which a second editing process of editing the recognition result of the speaker is to be applied is specified within a specific section of the combined two or more sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, applying the second editing process to the section from the specified start point to the position corresponding to the start point of that one of the two or more sections.

9. An editing support apparatus comprising a processing unit that executes processing of:

displaying, on a display unit, information indicating a speaker recognized for a sentence generated based on voice recognition, in association with a section of the sentence corresponding to the recognized speaker,

when a first editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent sections become the same as a result of the first editing process, displaying the two or more adjacent sections in a combined state on the display unit, and

when a start point of a section to which a second editing process of editing the recognition result of the speaker is to be applied is specified within a specific section of the combined two or more sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, applying the second editing process to the section from the specified start point to the position corresponding to the start point of that one of the two or more sections.

10. The editing support apparatus according to claim 9, wherein

when the first editing process occurs and the speakers of the two or more adjacent sections become the same as a result of the first editing process, the processing unit applies the first editing process to the two or more adjacent sections and displays the two or more adjacent sections in a combined state on the display unit.

11. The editing support apparatus according to claim 9 or 10, wherein

the processing unit displays, on the display unit, a first editing screen requesting the first editing process and a second editing screen requesting the second editing process, applies the first editing process to the two or more adjacent sections in accordance with an instruction given on the first editing screen, and applies the second editing process to the section from the specified start point to the position corresponding to the start point of any one of the two or more sections in accordance with an instruction given on the second editing screen.

12. The editing support apparatus according to claim 11, wherein

the first editing screen and the second editing screen each include information indicating speakers that are candidates for the editing, and the processing unit arranges the information indicating the speakers in a priority order corresponding to at least one of an utterance order and an utterance amount of the speakers.

13. The editing support apparatus according to any one of claims 9 to 12, wherein

when the first editing process occurs in the middle of a section corresponding to the speaker, and as a result of the first editing process the speakers of the two or more adjacent sections before the middle of the section become the same and the speakers of the two or more adjacent sections after the middle of the section become the same, the processing unit displays the two or more adjacent sections before the middle of the section in a combined state on the display unit and then displays the two or more adjacent sections after the middle of the section in a combined state on the display unit.

14. The editing support apparatus according to any one of claims 9 to 13, wherein

the processing unit generates the sentence from the voice of the speaker by the voice recognition, and recognizes the speaker for the generated sentence based on the voice of the speaker and a trained model in which features of the speaker's voice have been learned.

15. The editing support apparatus according to any one of claims 9 to 14, wherein

the processing unit stores, in a storage unit, the specified start point and the position corresponding to the start point of any one of the two or more sections, and applies the second editing process, with reference to the storage unit, to the section from the specified start point to the position corresponding to the start point of that one of the two or more sections.

Technical Field

The invention relates to an editing support program, an editing support method, and an editing support apparatus.

Background

A technique is known in which audio data including utterances of a plurality of speakers is reproduced, a user transcribes the utterances of each speaker into text, and a speaker name indicating the speaker is set for each utterance. It is also known to classify voice data according to voice characteristics and to assign arbitrary speaker identification information to each piece of classified voice data (see, for example, Patent Document 1).

Documents of the prior art

Patent document

Patent Document 1: Japanese Laid-open Patent Publication No. 2014-38132

Disclosure of Invention

Problems to be solved by the invention

However, the speaker identification information obtained from voice features may vary depending on the physical condition of the speaker or the like, and as a result the speaker identification information may indicate the wrong speaker. In that case, there is a problem in that the user must spend time editing the speaker identification information.

Accordingly, in one aspect, an object is to improve the convenience of the process of editing the recognition result of a speaker.

Means for solving the problems

According to one aspect, an editing support program causes a computer to execute processing of: displaying, on a display unit, information indicating a speaker recognized for a sentence generated based on voice recognition, in association with a section of the sentence corresponding to the recognized speaker; when a first editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent sections become the same as a result of the first editing process, displaying the two or more adjacent sections in a combined state on the display unit; and, when a start point of a section to which a second editing process of editing the recognition result of the speaker is to be applied is specified within a specific section of the combined two or more sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, applying the second editing process to the section from the specified start point to the position corresponding to that start point.

Effects of the invention

The convenience of the editing process for the recognition result of the speaker can be improved.

Drawings

Fig. 1 shows an example of a terminal device.

Fig. 2 shows an example of a hardware configuration of the terminal device.

Fig. 3 is an example of a block diagram of a terminal device.

Fig. 4 is a flowchart (1) showing an example of the operation of the terminal device.

Fig. 5 is a flowchart (2) showing an example of the operation of the terminal device.

Fig. 6 is an example of a portal screen.

Fig. 7 shows an example of speech data.

Fig. 8 shows an example of sentence data before update according to embodiment 1.

Fig. 9 is an example of the editing support screen.

Fig. 10(a) to 10(c) are diagrams (1) for explaining an example of the editing operation of the embodiment.

Fig. 11 is a diagram for explaining an example of updating sentence data.

Fig. 12(a) to 12(c) are diagrams (2) for explaining an example of the editing operation of the embodiment.

Fig. 13 shows an example of data at the start of division.

Fig. 14(a) and 14(b) are diagrams (3) for explaining an example of the editing operation of the embodiment.

Fig. 15 is a diagram for explaining another example of updating sentence data.

Fig. 16(a) and 16(b) are diagrams for explaining an example of editing operation in the comparative example.

Fig. 17(a) shows an example of sentence data before update according to embodiment 2.

Fig. 17(b) shows an example of the updated sentence data according to embodiment 2.

Fig. 18 shows an example of the editing support system.

Detailed Description

Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings.

(embodiment 1)

Fig. 1 shows an example of a terminal device 100. The terminal device 100 is an example of an editing support device. In Fig. 1, a personal computer (PC) is shown as an example of the terminal device 100, but the terminal device 100 may instead be a smart device such as a tablet terminal. The terminal device 100 has a keyboard and pointing device (hereinafter simply referred to as the keyboard) 100F and a display 100G. The display 100G may be a liquid crystal display or an organic electro-luminescence (EL) display.

The display 100G displays various screens. For example, the display 100G displays the editing support screen 10, details of which will be described later. The editing support screen 10 is a screen for supporting editing of a speaker recognized in a sentence generated by voice recognition. The speaker recognition may be performed by Artificial Intelligence (AI), or may be performed by using a predetermined voice model defined in advance without using AI.

The user of the terminal device 100 checks the speaker candidates displayed on the editing support screen 10 and operates the keyboard 100F to select any one of the speaker candidates. The terminal device 100 then replaces the speaker recognized by the AI or the like with the selected speaker candidate. Therefore, the user can easily edit the speaker by using the editing support screen 10. In the present embodiment, a creator of conference minutes is described as an example of the user, but the user is not limited to such a creator. For example, the user may be a producer of broadcast captions, a person responsible for recorded audio at a call center, or the like.

Next, the hardware configuration of the terminal device 100 will be described with reference to fig. 2.

Fig. 2 shows an example of the hardware configuration of the terminal device 100. As shown in Fig. 2, the terminal device 100 includes at least a Central Processing Unit (CPU) 100A as a hardware processor, a Random Access Memory (RAM) 100B, a Read Only Memory (ROM) 100C, and a network I/F (interface) 100D. As described above, the terminal device 100 also includes the keyboard 100F and the display 100G.

The terminal device 100 may include at least one of a Hard Disk Drive (HDD) 100E, an input/output I/F 100H, a drive device 100I, and a short-range wireless communication circuit 100J as necessary. The CPU 100A through the short-range wireless communication circuit 100J are connected to one another through an internal bus 100K. That is, the terminal device 100 can be implemented by a computer. Instead of the CPU 100A, a Micro Processing Unit (MPU) may be used as the hardware processor.

The input/output I/F 100H is connected to a semiconductor memory 730. Examples of the semiconductor memory 730 include a Universal Serial Bus (USB) memory and a flash memory. The input/output I/F 100H reads programs or data stored in the semiconductor memory 730 and includes, for example, a USB port. A portable recording medium 740 is inserted into the drive device 100I. Examples of the portable recording medium 740 include removable disks such as a Compact Disc (CD)-ROM and a Digital Versatile Disc (DVD). The drive device 100I reads programs or data stored in the portable recording medium 740. The short-range wireless communication circuit 100J is an electric or electronic circuit that realizes short-range wireless communication such as Wi-Fi (registered trademark) or Bluetooth (registered trademark). An antenna 100J' is connected to the short-range wireless communication circuit 100J. The short-range wireless communication circuit 100J may be replaced with a CPU that realizes the communication function. The network I/F 100D includes, for example, a Local Area Network (LAN) port.

The CPU 100A temporarily stores programs stored in the ROM 100C or the HDD 100E in the RAM 100B. The CPU 100A also temporarily stores a program stored in the portable recording medium 740 in the RAM 100B. By executing the stored programs, the CPU 100A implements the various functions described later and executes the various processes described later. The programs may correspond to the flowcharts described later.

Next, the functional structure of the terminal device 100 will be explained with reference to fig. 3.

Fig. 3 is an example of a block diagram of the terminal device 100, showing the main functional parts of the terminal device 100. As shown in Fig. 3, the terminal device 100 includes a storage unit 110, a processing unit 120, an input unit 130, and a display unit 140. The storage unit 110 can be realized by the RAM 100B and the HDD 100E. The processing unit 120 can be realized by the CPU 100A described above. The input unit 130 can be realized by the keyboard 100F described above. The display unit 140 can be realized by the display 100G described above. The storage unit 110, the processing unit 120, the input unit 130, and the display unit 140 are connected to one another.

Here, the storage unit 110 includes, as components, an audio storage unit 111, a dictionary storage unit 112, a sentence storage unit 113, a model storage unit 114, and a point storage unit 115. The processing unit 120 includes, as components, a first display control unit 121, a voice recognition unit 122, a sentence generation unit 123, and a speaker recognition unit 124. The processing unit 120 further includes, as components, an audio reproducing unit 125, a speaker editing unit 126, a point management unit 127, and a second display control unit 128.

Each component of the processing unit 120 accesses at least one of the components of the storage unit 110 and executes various processes. For example, when detecting an instruction to reproduce audio data, the audio reproducing unit 125 accesses the audio storage unit 111, acquires the audio data stored there, and reproduces it. The other components will be described in detail in the description of the operation of the terminal device 100.

Next, the operation of the terminal device 100 will be described with reference to fig. 4 to 15.

First, as shown in Fig. 4, the first display control unit 121 displays a portal screen (step S101). More specifically, when detecting an instruction to display the portal screen output from the input unit 130, the first display control unit 121 displays the portal screen on the display unit 140. Thus, as shown in Fig. 6, the display unit 140 displays the portal screen 20. The portal screen 20 includes a first registration button 21, a second registration button 22, a third registration button 23, and a plurality of fourth registration buttons 24.

The first registration button 21 is a button for registering audio data of a conference. When registering the audio data of a conference, the user prepares audio data of the conference, recorded in advance, in the terminal device 100. When the user presses the first registration button 21 with the pointer Pt, the first display control unit 121 detects the pressing of the first registration button 21 and stores the audio data of the conference prepared in the terminal device 100 in the audio storage unit 111.

The second registration button 22 is a button for registering material data relating to conference materials. When registering material data, the user prepares material data of the conference in the terminal device 100 in advance. When the user presses the second registration button 22 with the pointer Pt, the first display control unit 121 detects the pressing of the second registration button 22 and displays the material data prepared in the terminal device 100 in a first display area 20A in the portal screen 20.

The third registration button 23 is a button for registering conference participants. When registering the participants of the conference, the user presses the third registration button 23 with the pointer Pt. When the user presses the third registration button 23, the first display control unit 121 detects the pressing of the third registration button 23 and displays on the display unit 140 a registration screen (not shown) for registering the participants of the conference as speakers. When the user inputs a speaker (specifically, information indicating the name of the speaker) of the conference on the registration screen, the first display control unit 121 displays participant data including the input speaker in a second display area 20B in the portal screen 20. The first display control unit 121 then generates a speaker ID and stores it in the model storage unit 114 in association with the input speaker. The speaker ID is information for identifying the speaker. Thus, the model storage unit 114 stores the speaker ID and the speaker in association with each other.

Each of the fourth registration buttons 24 is a button for registering voice data of a speaker. When registering the voice data of a speaker, the user prepares various voice data of the speaker, recorded in advance, in the terminal device 100. A microphone may be connected to the terminal device 100, and voice data acquired from the microphone may be used. When the user presses the fourth registration button 24 for the speaker to be registered with the pointer Pt, the first display control unit 121 detects the pressing of the fourth registration button 24 and outputs the voice data prepared in the terminal device 100 to the speaker recognition unit 124.

The speaker recognition unit 124 generates a trained model obtained by machine learning of the voice features of the speaker based on the voice data of the speaker output from the first display control unit 121. The speaker recognition unit 124 stores the generated trained model in the model storage unit 114 in association with the speaker ID of the speaker corresponding to the learned voice data. Thus, as shown in Fig. 7, the model storage unit 114 stores speaker data in which the speaker ID, the speaker, and the trained model are associated with one another. After the model storage unit 114 stores the speaker data, the first display control unit 121 displays a registration mark RM in the participant data of the registered speaker. The registration mark RM is a mark indicating that a trained model is stored in the model storage unit 114, that is, that the voice data of the speaker has been registered.

Returning to Fig. 4, when the processing of step S101 is completed, the voice recognition unit 122 next executes voice recognition (step S102). For example, the voice recognition unit 122 refers to the audio storage unit 111 and determines whether the audio storage unit 111 stores audio data of a conference. When it determines that the audio storage unit 111 stores audio data of a conference, the voice recognition unit 122 performs voice recognition on that audio data to generate character string data. More specifically, the voice recognition unit 122 determines a plurality of characters from the voices of the speakers included in the audio data of the conference, arranges the determined characters in time series, and generates character string data by assigning a character ID and a time code to each character. When the voice recognition unit 122 generates the character string data, it outputs the generated character string data to the sentence generation unit 123. The voice recognition unit 122 includes a plurality of voice recognition engines and generates character string data corresponding to each of the voice recognition engines. An example of a voice recognition engine is AmiVoice (registered trademark).
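As a rough illustration only (the record and field names below are assumptions, not taken from the patent), the character string data described here could be modeled as a time-ordered list of characters, each carrying a character ID and a time code:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RecognizedChar:
    char_id: int    # identifier assigned to each recognized character
    char: str       # the character determined from the speakers' voices
    time_code: str  # utterance time of the character, e.g. "00:00:01.200"

def build_char_string(raw: List[Tuple[str, str]]) -> List[RecognizedChar]:
    """Arrange (character, time code) pairs in time series and assign character IDs."""
    ordered = sorted(raw, key=lambda item: item[1])
    return [RecognizedChar(char_id=i + 1, char=c, time_code=t)
            for i, (c, t) in enumerate(ordered)]
```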

When the processing of step S102 is completed, the sentence generation unit 123 next generates sentence data (step S103). More specifically, upon receiving the character string data output from the voice recognition unit 122, the sentence generation unit 123 refers to the dictionary storage unit 112 and performs morphological analysis on the character string data. The dictionary storage unit 112 stores a morpheme dictionary in which various words are registered; for example, the morpheme dictionary stores words such as [ yes ], [ sure ], [ data ], and [ question ]. Therefore, when the sentence generation unit 123 performs morphological analysis on the character string data with reference to the dictionary storage unit 112, sentence data is generated in which the character string data is divided into a plurality of word blocks. When the sentence generation unit 123 generates the sentence data, it stores the generated sentence data in the sentence storage unit 113 in association with identifiers in units of word blocks. Thus, the sentence storage unit 113 stores the sentence data.
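A minimal sketch of the division into word blocks, assuming a simple greedy longest-match lookup against the morpheme dictionary (an actual morphological analyzer is considerably more elaborate):

```python
def split_into_word_blocks(chars: str, morpheme_dict: set, max_len: int = 8) -> list:
    """Divide a character string into word blocks by greedy longest match against
    the morpheme dictionary; characters not found become single-character blocks."""
    blocks, i, block_id = [], 0, 1
    while i < len(chars):
        for length in range(min(max_len, len(chars) - i), 0, -1):
            candidate = chars[i:i + length]
            if candidate in morpheme_dict or length == 1:
                blocks.append({"id": block_id, "text": candidate})
                block_id += 1
                i += length
                break
    return blocks

# split_into_word_blocks("yessuredata", {"yes", "sure", "data"})
# -> [{'id': 1, 'text': 'yes'}, {'id': 2, 'text': 'sure'}, {'id': 3, 'text': 'data'}]
```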

When the processing of step S103 is completed, the speaker recognition unit 124 next recognizes the speakers (step S104). More specifically, the speaker recognition unit 124 refers to the model storage unit 114 and compares the trained models stored in the model storage unit 114 with the audio data of the conference stored in the audio storage unit 111. When, in this comparison, a voice portion corresponding to (for example, identical or similar to) a trained model is detected in the audio data of the conference, the speaker recognition unit 124 identifies the speaker ID associated with that trained model and the time code of the voice portion. In this way, the speaker recognition unit 124 recognizes the speaker of each voice portion included in the audio data of the conference. After identifying the speaker IDs and time codes, the speaker recognition unit 124 associates the identified speaker IDs with the sentence data stored in the sentence storage unit 113 based on the time codes. Thus, as shown in Fig. 8, the sentence storage unit 113 stores the sentence data associated with the speaker IDs.
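The association step can be pictured roughly as follows, assuming the comparison with the trained models has already produced (speaker_id, start, end) segments and that each sentence-data entry carries a numeric time code (all names here are illustrative):

```python
def assign_speakers(entries: list, recognized_segments: list) -> None:
    """Associate each sentence-data entry with the speaker whose recognized voice
    portion covers the entry's time code; both initial and current speaker IDs are
    set because no editing has happened yet."""
    for e in entries:
        for speaker_id, start, end in recognized_segments:
            if start <= e["time_code"] <= end:
                e["speaker_id_initial"] = speaker_id
                e["speaker_id_current"] = speaker_id
                break
```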

As shown in Fig. 8, the sentence data includes, as constituent elements, a character ID, a character, a word block, a time code, a speaker ID (initial), and a speaker ID (current). Specifically, the identifier of the word block is registered in the word block field. The speaker ID of the speaker first recognized by the speaker recognition unit 124 is registered in the speaker ID (initial) field, and the speaker ID after the speaker has been edited is registered in the speaker ID (current) field. Immediately after the speaker recognition unit 124 recognizes a speaker, the same speaker ID is registered in both the speaker ID (initial) and the speaker ID (current). The sentence storage unit 113 stores such sentence data. When the time code given to a character is the same as the immediately preceding time code, the time code following the immediately preceding one may be omitted.
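Following this description, one record of the sentence data of Fig. 8 could be sketched as below; the field names are illustrative, and only the speaker ID (current) changes when an edit is applied:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SentenceEntry:
    char_id: int                # identifier of the character
    char: str                   # the character itself
    word_block: int             # identifier of the word block containing the character
    time_code: Optional[str]    # may be omitted (None) when equal to the preceding one
    speaker_id_initial: int     # speaker first recognized by the speaker recognition unit
    speaker_id_current: int     # speaker after editing (equals initial right after recognition)

def edit_speaker(entries: list, char_ids: set, new_speaker_id: int) -> None:
    """Apply an editing process: update speaker_id_current of the given characters,
    leaving speaker_id_initial untouched."""
    for e in entries:
        if e.char_id in char_ids:
            e.speaker_id_current = new_speaker_id
```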

When the processing of step S104 is completed, the first display control unit 121 next displays the speakers and the speech sections (step S105). More specifically, when the processing of step S104 is completed, the first display control unit 121 ends the display of the portal screen 20 on the display unit 140 and displays the editing support screen 10 on the display unit 140. The first display control unit 121 displays each speaker and the speech section corresponding to that speaker in association with each other on the editing support screen 10.

Thus, as shown in Fig. 9, the display unit 140 displays the editing support screen 10. The editing support screen 10 includes a script area 11, a setting area 12, an editing area 13, a playback button 14, and the like. Based on the sentence data and the speaker data, the first display control unit 121 displays, in the editing area 13 of the editing support screen 10, each speaker and the speech section of the sentence corresponding to that speaker in association with each other.

In the script area 11, the time codes and the characters of the sentence data stored in the sentence storage unit 113 are displayed in association with each other. In particular, in the script field of the script area 11, the characters from the time code at which a speaker ID starts to the last time code at which that speaker ID continues without interruption are displayed joined together in time series. In the setting area 12, setting items relating to the playback format of the audio data, setting items relating to the output format of the sentence data in which the speakers have been edited, and the like are displayed.

As described above, the speakers and the speech sections are displayed in association with each other in the editing area 13. For example, in the editing area 13, the speaker [ miniascape ] is displayed in association with the speech section [ … … ですよね ]. Similarly, the speaker [ kumura ] is displayed in association with the corresponding speech section [ かにはいそ … について … ], and the speaker [ mountain field ] is displayed in association with the speech section [ … お いします ].

In the editing area 13, in addition to the speakers and the speech sections, a progress mark 16 and a switching point 17 are displayed. The progress mark 16 is a mark indicating the current playback position of the audio data. The switching point 17 is a point indicating a switch between word blocks (see Fig. 8); that is, the switching point 17 is displayed at a position between two word blocks where one word block switches to the next. In the present embodiment, one switching point 17 is displayed, but, for example, a plurality of switching points may be displayed, with one of them set as the current switching point 17 and given a color different from the remaining switching points. This allows the user to confirm at which position the word blocks switch.

The switching point 17 can be moved left and right in accordance with operations on the input unit 130. For example, when the user presses the cursor key indicating a right arrow, the first display control unit 121 moves the switching point 17 to the right; when the user presses the cursor key indicating a left arrow, the first display control unit 121 moves the switching point 17 to the left. When the switching point 17 is moved in only one direction (to the right), the key for moving the switching point 17 may be the space key. The key for moving the switching point 17 may be determined as appropriate by design, experiment, or the like.

When the processing of step S105 is completed, the audio reproducing unit 125 waits until a reproduction instruction is detected (step S106: NO). When the audio reproducing unit 125 detects a reproduction instruction (step S106: YES), it reproduces the audio data (step S107). More specifically, when the playback button 14 (see Fig. 9) is pressed with the pointer Pt, the audio reproducing unit 125 detects the instruction to reproduce the audio data and starts reproducing the audio data. When the reproduction of the audio data starts, the progress mark 16 (see Fig. 9) moves to the right in accordance with the reproduction speed of the audio data. The user reproduces the audio data of the conference, listens to it, and moves the switching point 17 to determine the position at which to edit the speaker.

When the processing of step S107 is completed, the first display control unit 121 waits until a start point is specified (step S108: NO). When a start point is specified (step S108: YES), the first display control unit 121 displays the first editing screen (step S109). More specifically, as shown in Fig. 10(a), the user first moves the switching point 17 and stops it at a predetermined position where the user desires to edit the speaker. When the user then presses, for example, the Enter key at that predetermined position, the first display control unit 121 determines that the predetermined position has been specified as the start point. When the start point is specified, the first display control unit 121 displays the first editing screen 30 superimposed on the editing area 13 as shown in Fig. 10(b). The first editing screen 30 is a screen for requesting the editing process from the user. Together with the display of the first editing screen 30, the first display control unit 121 identifies, within the speech section containing the start point, the partial speech section corresponding to the one or more word blocks located before the start point. In the present embodiment, the first display control unit 121 identifies the partial speech section corresponding to the single word block [ かに ]. The order of displaying the first editing screen 30 and identifying the partial speech section may be reversed.

When the processing of step S109 is completed, the speaker editing unit 126 waits until a selection instruction is detected (step S110: NO). When the speaker editing unit 126 detects a selection instruction (step S110: YES), it edits the speaker (step S111), as shown in Fig. 5. More specifically, as shown in Fig. 10(b), when the user operates the input unit 130 and selects any one of the speakers included in the first editing screen 30 with the pointer Pt, the speaker editing unit 126 detects the selection instruction. The user may instead select any one of the numerical values included in the first editing screen 30 with the number keys.

Here, the speakers included in the first editing screen 30 are arranged in a priority order corresponding to at least one of the utterance order and the utterance amount. For example, the speaker acting as chairperson of the conference tends to speak earlier than the other speakers and also to speak more. Therefore, on the first editing screen 30, speakers that are more likely to be selected are arranged toward the top, which reduces the effort of the speaker editing process.
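One possible way to realize such a priority order, assuming each candidate's first-utterance time and total utterance amount are already known (the data shape is an assumption):

```python
def order_candidates(speakers: list) -> list:
    """Order speaker candidates so that speakers who spoke earlier and spoke more come
    first, i.e. likely choices appear near the top of the editing screen.

    `speakers` is a list of dicts such as
    {"name": "A", "first_utterance": 12.0, "utterance_chars": 840}.
    """
    return sorted(speakers, key=lambda s: (s["first_utterance"], -s["utterance_chars"]))
```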

When the speaker editing unit 126 detects the selection instruction, it determines that the editing process has occurred, applies the editing process to the partial speech section identified by the first display control unit 121, and edits and displays the speaker of that partial speech section as the selected speaker. In the present embodiment, the speaker editing unit 126 applies the editing process to the partial speech section corresponding to the word block [ かに ] and edits and displays the speaker [ village ] of that speech section as the selected speaker [ village ]. Since this example involves no substantive change, the details are described later.

When the processing of step S111 is completed, the speaker editing unit 126 determines whether the speakers are the same (step S112). More specifically, the speaker editing unit 126 determines whether the edited speaker is the same as the speaker of the speech section located immediately before the speech section corresponding to the edited speaker's word block. In the present embodiment, the speaker editing unit 126 determines whether the edited speaker [ moku ] is the same as the speaker [ koda ] of the speech section located immediately before the partial speech section corresponding to the word block [ かに ] of the edited speaker [ moku ]. In this case, since the speaker [ moku ] is different from the speaker [ koda ], the speaker editing unit 126 determines that the speakers are different (step S112: NO).

When the speakers are different, the speaker editing unit 126 skips the processing of steps S113 and S114 and determines whether the processing after the start point is completed (step S115). When the speaker editing unit 126 determines that the processing after the start point is not completed (step S115: NO), the first display control unit 121 executes the processing of step S109 again, as shown in Fig. 4. That is, in the first execution of step S109, as shown in Fig. 10(b), within the speech section containing the start point specified by the switching point 17, the partial speech section corresponding to the one word block [ かに ] located before the start point was the target of the speaker editing process. However, within that speech section, the remaining speech section corresponding to the plurality of word blocks [ はいそ … について … ] located after the start point has not yet been the target of the speaker editing process. Therefore, the speaker editing unit 126 determines that the processing after the start point is not completed, and the first display control unit 121 again displays the first editing screen 30 superimposed on the editing area 13, as shown in Fig. 10(c). Together with the display of the first editing screen 30, the first display control unit 121 identifies, within the speech section containing the start point, the remaining speech section corresponding to the one or more word blocks located after the start point. In the present embodiment, the first display control unit 121 identifies the remaining speech section corresponding to the plurality of word blocks [ はいそ … について … ].

When the second execution of step S109 is completed and the speaker editing unit 126 detects a selection instruction in the processing of step S110, the speaker editing unit 126 edits the speaker in the processing of step S111 (see Fig. 5). More specifically, as shown in Fig. 10(c), when the user operates the input unit 130 again and selects any one of the speakers included in the first editing screen 30 with the pointer Pt, the speaker editing unit 126 detects the selection instruction. Upon detecting the selection instruction, the speaker editing unit 126 accesses the sentence storage unit 113 and, as shown in Fig. 11, updates the speaker ID (current) of the speaker corresponding to the identified word blocks to the speaker ID of the edited speaker. When the selection instruction is detected, the speaker editing unit 126 determines that the editing process has occurred, applies the editing process to the identified remaining speech section, and edits and displays the speaker of the remaining speech section as the selected speaker. In the present embodiment, the speaker editing unit 126 applies the editing process to the remaining speech section corresponding to the plurality of word blocks [ はいそ … について … ] and edits and displays the speaker [ cockaman ] of the remaining speech section as the selected speaker [ shantian ].

When the processing of step S111 is completed, the speaker editing unit 126 again determines whether the speakers are the same in the processing of step S112. In the present embodiment, the speaker editing unit 126 determines whether the edited speaker [ shantian ] of the remaining speech section corresponding to the plurality of word blocks [ はいそ … について … ] is the same as the speaker [ shantian ] of the speech section located immediately after that remaining speech section. Since the two speakers [ shantian ] are the same, the speaker editing unit 126 determines that the speakers are the same (step S112: YES).

If the speakers are the same, the speaker editing unit 126 displays the speech sections in a combined state (step S113). More specifically, the speaker editing unit 126 displays the speech sections of the two speakers that have become the same through editing in a combined state, and displays any one of the two speakers corresponding to the two speech sections before the combination in association with the combined speech section. Thus, as shown in Fig. 12(a), the speaker editing unit 126 combines the remaining speech section corresponding to the plurality of word blocks [ はいそ … について … ] with the following speech section [ … お いします ] and displays the two speech sections as one new combined speech section [ はいそ … について … お いします ], with one speaker displayed in correspondence with the combined speech section. In this way, the speaker is edited and the speech sections are combined. In particular, since the processing for the part after the start point is dynamically requested after the processing for the part before the start point specified by the switching point 17 is completed, the editing operations proceed in time series, and the effort of the editing work is reduced.
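The combining behaviour of step S113 can be pictured with the small sketch below: whenever adjacent sections end up with the same (current) speaker, they are shown as one section. The section representation is an assumption made for illustration.

```python
def combine_adjacent(sections: list) -> list:
    """Merge runs of adjacent speech sections whose speakers are identical.

    `sections` is a list of dicts like {"speaker": "A", "text": "..."}; a new list is
    returned in which same-speaker neighbours are joined and shown under one speaker."""
    combined = []
    for sec in sections:
        if combined and combined[-1]["speaker"] == sec["speaker"]:
            combined[-1]["text"] += sec["text"]   # join the displayed text
        else:
            combined.append(dict(sec))            # first section of a new run
    return combined
```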

When the processing of step S113 is completed, the point management unit 127 next saves the division start point (step S114). More specifically, the point management unit 127 determines, as division start point data, the division start point between the two speech sections that existed before the combination, and stores the division start point data in the point storage unit 115 together with the start point corresponding to the division start point and the end point of the combined speech section. Thus, the point storage unit 115 stores the division start point data.

In the present embodiment, as shown in Fig. 10(c), the division start point between the two speech sections preceding the combined speech section corresponds to the point P1 between the speech section [ かにはいそ … について … ] and the speech section [ … お いします ]. Therefore, as shown in Fig. 13, the point storage unit 115 stores, as division start point data, the identifier [08] of the last word block of the former speech section and the identifier [09] of the first word block of the latter speech section in association with each other. Together with the division start point data, the point storage unit 115 stores identifiers of word blocks from which the start point corresponding to the division start point and the end point of the combined speech section can be determined. For example, the point storage unit 115 stores the identifier [03] of the word block [ かに ] and the identifier [04] of the word block [ はい ] as the word blocks from which the start point can be determined, and stores the identifier [11] of the word block [ します ] and a predetermined identifier [ - ] as the identifiers of the word blocks from which the end point can be determined. Character IDs may be used instead of the word block identifiers in the same manner.
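Based on Fig. 13, the content of the point storage unit 115 after step S114 could be held roughly as follows; the word-block identifiers follow the example in the text, while the container itself is an assumption:

```python
# Division start point data saved in step S114: the division start point lies between
# word blocks [08] and [09], the specified start point between word blocks [03] and [04],
# and the end point of the combined section after word block [11] (no following block).
division_start_points = [
    {
        "start_point": ("03", "04"),
        "division_start_point": ("08", "09"),
        "end_point": ("11", "-"),
    }
]
```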

When the processing of step S114 is completed, the speaker editing unit 126 again determines, in the processing of step S115, whether the processing after the start point is completed. When the speaker editing unit 126 determines that the processing after the start point is completed (step S115: YES), the second display control unit 128 waits until another start point is specified (step S116: NO). When another start point is specified (step S116: YES), the second display control unit 128 displays the second editing screen (step S117). More specifically, as shown in Fig. 12(b), when the user moves the switching point 17, stops it at another position different from the previous position, and presses the Enter key, the second display control unit 128 determines that the other position has been specified as a start point. When the other start point is specified, the second display control unit 128 displays the second editing screen 40 superimposed on the editing area 13 as shown in Fig. 12(c). The second editing screen 40 is a screen for requesting the editing process from the user, and the speakers included in the second editing screen 40 are arranged in the same manner as on the first editing screen 30. Together with the display of the second editing screen 40, the second display control unit 128 identifies, within the speech section containing the start point, the partial speech section corresponding to the one or more word blocks located before the start point. In the present embodiment, the second display control unit 128 identifies the partial speech section corresponding to the one word block [ はい ]. The order of displaying the second editing screen 40 and identifying the partial speech section may be reversed.

When the processing of step S117 is completed, the speaker editing unit 126 waits until a selection instruction is detected (step S118: NO). When the speaker editing unit 126 detects a selection instruction (step S118: YES), it edits the speaker (step S119). More specifically, as shown in Fig. 12(c), when the user operates the input unit 130 and selects any one of the speakers included in the second editing screen 40 with the pointer Pt, the speaker editing unit 126 detects the selection instruction. The user may instead select any one of the numerical values included in the second editing screen 40 with the number keys. When the speaker editing unit 126 detects the selection instruction, it determines that the editing process has occurred, applies the editing process to the identified partial speech section, and edits and displays the speaker of that partial speech section as the selected speaker. In the present embodiment, the speaker editing unit 126 applies the editing process to the partial speech section corresponding to the word block [ はい ] and edits and displays the speaker [ shantian ] of that partial speech section as the selected speaker [ shantian ]. Since this example involves no substantive change, the details are described later.

When the processing of step S119 is completed, the second display control unit 128 displays the second editing screen again (step S120). More specifically, as shown in Fig. 14(a), the second display control unit 128 again displays the second editing screen 40 superimposed on the editing area 13. Together with the re-display of the second editing screen 40, the second display control unit 128 identifies, as a specific speech section, the remaining speech section corresponding to the one or more word blocks located after the other start point within the speech section containing that start point. In the present embodiment, the second display control unit 128 identifies, as the specific speech section, the remaining speech section corresponding to the plurality of word blocks [ そ … について … ]. The order of re-displaying the second editing screen 40 and identifying the remaining speech section may be reversed.

When the processing of step S120 is completed, the speaker editing unit 126 waits until a selection instruction is detected (step S121: NO). When the speaker editing unit 126 detects a selection instruction (step S121: YES), the point management unit 127 determines whether there is a division start point (step S122). Specifically, the point management unit 127 refers to the point storage unit 115 and determines whether division start point data is stored in the point storage unit 115.

When the point management unit 127 determines that there is a division start point (step S122: YES), the speaker editing unit 126 edits the speaker up to the division start point (step S123) and ends the processing. More specifically, as shown in Fig. 14(a), when the user operates the input unit 130 and selects any one of the speakers included in the second editing screen 40 with the pointer Pt, the speaker editing unit 126 detects the selection instruction and accesses the sentence storage unit 113. Then, as shown in Fig. 15, the speaker editing unit 126 applies the editing process to the word blocks from the word block immediately after the other start point up to the word block immediately before the division start point, among the identified word blocks, and updates the speaker ID (current) of those word blocks to the speaker ID of the edited speaker.

When the speaker editing unit 126 detects the selection instruction, it determines that the editing process has occurred, applies the editing process to the specific speech section, and edits and displays the speaker of the specific speech section as the selected speaker. In the present embodiment, as shown in Fig. 14(b), the speaker editing unit 126 applies the editing process to the specific speech section corresponding to the plurality of word blocks [ そ … について … ] and edits and displays the speaker [ shantian ] of the specific speech section as the selected speaker [ woodcun ].
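Steps S122 and S123 can be summarized by the sketch below: the second editing process runs from the word block just after the newly specified start point and, if a stored division start point lies before the end of the combined section, stops there instead of running to the end (helper and field names are assumptions):

```python
def apply_second_edit(word_blocks: list, start_block_id: int, end_block_id: int,
                      division_start_points: list, new_speaker_id: int) -> None:
    """Edit the speaker of the word blocks after `start_block_id`, but stop at a stored
    division start point if one exists between the start point and `end_block_id`."""
    stop_at = end_block_id
    for p in division_start_points:
        last_before, _first_after = p["division_start_point"]
        if start_block_id < int(last_before) <= end_block_id:
            stop_at = int(last_before)   # do not edit past the pre-combination boundary
            break
    for wb in word_blocks:
        if start_block_id < wb["id"] <= stop_at:
            wb["speaker_id_current"] = new_speaker_id
```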

On the other hand, when the point management unit 127 determines that there is no division start point (step S122: NO), the speaker editing unit 126 skips the processing of step S123 and ends the processing. When there is no division start point, the speaker editing unit 126 may perform error processing before ending the processing.

Fig. 16(a) and 16(b) are diagrams for explaining a comparative example; in particular, they correspond to Fig. 14(a) and 14(b). As described above, in embodiment 1, the point management unit 127 stores and manages division start point data in the point storage unit 115. If, for example, the division start point data were not managed, then, as shown in Fig. 16(a), when the user operates the input unit 130 and selects any one of the speakers included in the second editing screen 40 with the pointer Pt, the speaker editing unit 126 detects the selection instruction and edits and displays the speaker of the remaining speech section corresponding to all of the word blocks identified by the second display control unit 128 as the selected speaker. In the comparative example, as shown in Fig. 16(b), the speaker editing unit 126 edits and displays the speaker [ shanda ] of the remaining speech section corresponding to all of the plurality of word blocks [ そ … について … お いします ] as the selected speaker [ cockamaur ]. As a result, the plurality of word blocks [ … お いします ], whose speaker was not erroneous, are also edited, and an additional editing operation for that part is newly required of the user. According to embodiment 1, however, such a wasteful editing operation does not occur. That is, according to embodiment 1, the convenience of the editing process for the recognition result of the speaker is improved compared with the comparative example.

As described above, according to embodiment 1, the terminal device 100 includes the processing unit 120, and the processing unit 120 includes the first display control unit 121, the speaker editing unit 126, and the second display control unit 128. The first display control unit 121 displays, on the display unit 140, information indicating a speaker recognized for sentence data generated based on voice recognition, in association with the speech section of the sentence data corresponding to the recognized speaker. When an editing process of editing the recognition result of the speaker occurs and the speakers of two or more adjacent speech sections become the same as a result of the editing process, the speaker editing unit 126 displays the two or more adjacent speech sections in a combined state on the display unit 140. When a start point of a speech section to which an editing process of editing the recognition result of the speaker is to be applied is specified within a specific speech section of the combined two or more speech sections, and a position corresponding to the start point of any one of the two or more sections before combination exists between the specified start point and the end point of the combined two or more sections, the second display control unit 128 applies the editing process to the speech section from the specified start point to that position. This can improve the convenience of the editing process for the recognition result of the speaker.

In particular, when speaker recognition uses a trained model or a predetermined voice model and a speaker utters only a short word block, the characteristics of the speaker's voice may not be sufficiently discriminated and the speaker may not be recognized with high accuracy. A short word block is, for example, a word block corresponding to only a few characters, such as [ はい ]. When the speaker cannot be recognized accurately, the terminal device 100 may display an erroneous recognition result. Even in such a case, the present embodiment can improve the convenience of the process of editing the recognition result of the speaker.

(embodiment 2)

Next, embodiment 2 of the present invention will be described with reference to fig. 17. Fig. 17(a) shows an example of sentence data before update according to embodiment 2. Fig. 17(b) shows an example of the updated sentence data according to embodiment 2. In embodiment 1, the speaker editing unit 126 edits the speaker in units of one or more word blocks, but may edit the speaker in units of characters included in the word blocks. In this case, the switching point 17 may be moved in units of characters.

For example, as shown in Fig. 17(a), of the character [ poun ] and the character [ open ], which share the same word block identifier [09], the speaker editing unit 126 updates the speaker ID (current) of the character [ poun ] from the speaker ID [03] to the speaker ID [04], which identifies a speaker [ xiangchu ] not shown, as shown in Fig. 17(b). In this case, the speaker editing unit 126 divides the word block and newly assigns identifiers to the word blocks from that point onward. Specifically, as shown in Fig. 17(b), the speaker editing unit 126 assigns a new identifier [10] in place of the identifier [09] to the word block of the character [ open ], and the subsequent identifiers are shifted in the same manner. The speaker editing unit 126 can estimate the utterance time of the new word block based on the utterance time of the original word block. For example, the speaker editing unit 126 may estimate the utterance time of the new word block as the utterance time of the original word block plus the number of characters multiplied by several milliseconds.
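A sketch of this character-level edit, under the assumptions stated above (the characters before the split point receive the newly selected speaker, the characters from the split point on are moved into a new word block, later block identifiers are shifted by one, and the new block's utterance time is estimated from the original time plus a few milliseconds per character); the data shape and parameter names are illustrative:

```python
def split_word_block(entries: list, block_id: int, split_char_id: int,
                     new_speaker_id: int, ms_per_char: int = 5) -> None:
    """Edit the speaker in units of characters by splitting word block `block_id`
    at `split_char_id` and renumbering the blocks that follow."""
    for e in entries:
        if e["word_block"] > block_id:
            e["word_block"] += 1                       # shift identifiers of later blocks
    block = [e for e in entries if e["word_block"] == block_id]
    kept = [e for e in block if e["char_id"] < split_char_id]
    moved = [e for e in block if e["char_id"] >= split_char_id]
    for e in kept:
        e["speaker_id_current"] = new_speaker_id       # characters whose speaker is edited
    base_ms = min(e["time_code_ms"] for e in block)
    for e in moved:
        e["word_block"] = block_id + 1                 # newly created word block
        e["time_code_ms"] = base_ms + len(kept) * ms_per_char   # rough estimate
```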

As described above, according to embodiment 2, even when a speaker is edited in units of characters, the convenience of the editing process for the recognition result of the speaker can be improved.

(embodiment 3)

Next, embodiment 3 of the present invention will be described with reference to fig. 18. Fig. 18 is an example of the editing support system ST. Note that the same components as those of the terminal device 100 shown in fig. 3 are denoted by the same reference numerals, and description thereof is omitted.

The editing support system ST includes a terminal device 100 and a server device 200. The terminal device 100 and the server device 200 are connected via a communication network NW. Examples of the communication network NW include a Local Area Network (LAN) and the Internet.

As shown in Fig. 18, the terminal device 100 includes an input unit 130, a display unit 140, and a communication unit 150. On the other hand, the server device 200 includes a storage unit 110, a processing unit 120, and a communication unit 160. Each of the two communication units 150 and 160 can be realized by the network I/F 100D or the short-range wireless communication circuit 100J. In this way, the server device 200, instead of the terminal device 100, may include the storage unit 110 and the processing unit 120 described in embodiment 1. That is, the server device 200 may be the editing support apparatus.

In this case, by operating the input unit 130 of the terminal device 100, the audio data of the conference is stored in the storage unit 110 (more specifically, the audio storage unit 111) via the two communication units 150 and 160. Likewise, by operating the input unit 130, the voice data of the speakers is input to the processing unit 120 (more specifically, the speaker recognition unit 124) via the two communication units 150 and 160.

The processing unit 120 accesses the storage unit 110, acquires the audio data of the conference, and generates sentence data by performing on it the various processes described in embodiment 1. The processing unit 120 also generates a trained model obtained by machine learning of the voice features of a speaker based on the input voice data of the speaker, and then recognizes the speakers based on the audio data of the conference and the trained models. The processing unit 120 outputs, as a processing result, screen information of the editing support screen 10 in which the recognized speakers and the speech sections corresponding to the speakers are displayed in association with each other, to the communication unit 160. The communication unit 160 transmits the processing result to the communication unit 150, and upon receiving the processing result, the communication unit 150 outputs the screen information to the display unit 140. Thus, the display unit 140 displays the editing support screen 10.

As described above, the terminal device 100 may not include the storage unit 110 and the processing unit 120, and the server device 200 may include the storage unit 110 and the processing unit 120. The server device 200 may include the storage unit 110, and another server device (not shown) connected to the communication network NW may include the processing unit 120. Such a structure may also be used as an editing support system. Even in this embodiment, the convenience of the editing process of the recognition result of the speaker can be improved.

While preferred embodiments of the present invention have been described in detail, the present invention is not limited to these specific embodiments, and various modifications and changes can be made within the scope of the present invention described in the claims. For example, in the above embodiments, the case where the first editing screen 30 is displayed continuously and dynamically has been described. Alternatively, the switching point 17 may be moved with the cursor keys, and the first editing screen 30 may be displayed each time the Enter key is pressed. Such control may also be applied to the second editing screen 40. If no participant data is registered, an identification character or identification symbol may be used instead of the speaker as the recognition result.

Description of the symbols

100 terminal device

110 storage unit

115 point storage unit

120 processing unit

121 first display control unit

122 voice recognition unit

123 sentence generation unit

124 speaker recognition unit

125 audio reproducing unit

126 speaker editing unit

127 point management unit

128 second display control unit

130 input unit

140 display unit
