Method for converting voice into text based on video communication

Document No.: 1579903  Publication date: 2020-01-31

Reading note: this technology, "A method for converting voice into text based on video communication", was designed and created by Shen Mengchao, Qiu Hao, Wen Zhiping, He Zhiming, and Shen Dehuan on 2019-10-29. Its main content is as follows. The invention discloses a method for converting voice into text based on video communication. Aimed at a general video conference system, it comprises the following steps: (1) at the data acquisition end, after the audio data and video data are captured, they are sent to an encoder for encoding; meanwhile a fixed duration of audio data is retained for text recognition processing, and once the two are integrated they are sent together to the media server; (2) the media server forwards the received audio and video data packets to the clients and at the same time persists them; (3) after receiving the audio and video data, the client sends them to a decoder for decoding, then plays the audio and renders the video, rendering the received text information onto the video and displaying it in the area designated by the user. The beneficial effects of the invention are: video conference users get a more intuitive experience, the tolerance for listening errors is improved, and the content of the video conference is recorded in textual form.

1. A method for converting voice into text based on video communication, characterized in that, for a video conference system, the method comprises the following steps:

(1) at a data acquisition end, after audio data and video data are acquired, they are sent to an encoder for encoding; meanwhile, a fixed duration of audio data is retained for text recognition processing, and after the two are integrated, the result is sent to a media server;

(2) the media server forwards the received audio and video data packets to the client, and at the same time stores the packets persistently;

(3) after receiving the audio and video data, the client sends them to a decoder for decoding, then plays the audio and renders the video, rendering the received text information onto the video and displaying it in an area designated by the user.

2. The method of claim 1, wherein in step (1), since the data volume of the text is small, it need not pass through the encoder; the text can be encoded directly in a text encoding format and then added, in timestamp order, to the data frames produced by the audio and video encoding, and after the integration is complete it is sent to the media server.

3. The method of claim 1 or 2, wherein in step (1), the integration process is as follows: the speech is segmented and recognized, and continuous speech is output as segmented text frames; the content of a text frame includes the start timestamp of the text segment, the end timestamp of the text segment, and the recognized text content; the segmented, timestamped text frames are transmitted immediately, and the priority of the text packets is raised, so that the text delay is reduced.

4. The method of claim 1, wherein in step (2), when the media server receives the audio, the video, and the text converted from the audio, it persists the text to the database according to rules and forwards the audio, video, and text to the client.

5. The method of claim 4, wherein the text and the video are synthesized as required and the result is recorded; the synthesizing process comprises: when the server records the video, it first waits for the arrival of a text frame and aligns the text frame with the video frames, one text frame corresponding to multiple video frames; the text content of the text frame is rendered onto each of its corresponding video frames; after the text frame ends, the video is saved, and the next text frame is aligned with the video frames and rendered.

6. The method of claim 1, wherein in step (3), the audio is played through the system speakers; the video is rendered with OpenGL ES or another rendering tool onto a canvas, and the received text information is simultaneously rendered onto the same canvas according to its timestamps, so that the video and the text are combined; after the combination is complete, the buffers are swapped and the result is displayed in the area designated by the user.

7. The method of claim 6, wherein in step (3), the video-and-text synthesis process comprises: aligning text frames with video frames, one text frame corresponding to multiple video frames; rendering the text content of each text frame onto its corresponding video frames; after a text frame ends, the video can be saved and the next text frame is aligned with the video frames and rendered. Because the client displays the received pictures in real time and the speech-to-text conversion has a certain delay, a text frame will arrive later than the video frames with the same timestamp; this requires the media server to shorten the interval between text frames as much as possible, so that the client displays the text frames on the video frames in sequence.

8. The method of claim 1, wherein in step (3), if the client wants to view the text of the video conference, the client requests the server interface to retrieve the text record of the video conference.

Technical Field

The invention relates to the technical field of video communication, in particular to a voice-to-text method based on video communication.

Background

However, under special conditions, such as on a subway, on a bus, or outdoors, where the environment is noisy, the experience of an audio-video call is sometimes degraded because the other party's voice cannot be heard clearly.

The video and audio of modern audio-video communication can be saved. In meeting scenarios, for example, an important speech by a leader is saved during the video communication; when a certain point of it needs to be reviewed later (or certain data reviewed and confirmed), reviewing the whole video is inefficient.

Disclosure of Invention

In order to overcome the defects in the prior art, the invention provides a voice-to-text method based on video communication for improving the fault tolerance rate.

In order to achieve the purpose, the invention adopts the following technical scheme:

A method for converting voice into text based on video communication, aimed at a common video conference system, comprises the following steps:

(1) at a data acquisition end, after audio data and video data are acquired, they are sent to an encoder for encoding; meanwhile, a fixed duration of audio data is retained for text recognition processing, and after the two are integrated, the result is sent to a media server;

(2) the media server forwards the received audio and video data packets to the client, and at the same time stores the packets persistently;

(3) after receiving the audio and video data, the client sends them to a decoder for decoding, then plays the audio and renders the video, rendering the received text information onto the video and displaying it in an area designated by the user.
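The three steps above can be sketched as a minimal pipeline. This is an illustration only: the function names, the dictionary layout of the packet, and the stub encoder, recognizer, and decoder are all assumptions, standing in for the real codecs and the speech recognizer the patent leaves unspecified.

```python
# Minimal sketch of the three-stage pipeline: capture end -> media
# server -> client. Encoding/decoding and recognition are stand-in
# stubs; only the data flow follows the steps described above.

def capture_end(audio, video, recognize):
    """Step (1): encode A/V, attach text recognized from the retained
    audio, and return the integrated packet for sending."""
    return {
        "audio": f"enc({audio})",   # stands in for real audio encoding
        "video": f"enc({video})",   # stands in for real video encoding
        "text": recognize(audio),   # recognition on the retained audio
    }

def media_server(packet, storage):
    """Step (2): persist the packet and forward it unchanged."""
    storage.append(packet)
    return packet

def client_end(packet):
    """Step (3): decode the video, then combine it with the text."""
    frame = packet["video"].replace("enc(", "dec(")  # stub decoding
    return f"{frame} + subtitle[{packet['text']}]"

storage = []
pkt = capture_end("hello", "frame0", recognize=lambda a: a.upper())
display = client_end(media_server(pkt, storage))
```

The client never has to wait on the media server for any transformation: the server only persists and forwards, which matches the patent's intent of keeping the server lightweight.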

The invention focuses on the problem of expressing voice data in video communication. Sound is strongly affected by the external environment: even a little noise can make speech hard to hear, forcing the other party to speak again. To solve this problem, the invention recognizes the audio as text, forwards the recognized text through the server, and synthesizes the picture at the client, so the media server does not do much extra work. Since the client receives audio, video, and text messages at the same time, the diversified data types give video conference users a more intuitive experience, and the text subtitles that accompany the audio improve the fault tolerance rate (if the sound is not heard clearly, the subtitles make up for it).

Preferably, in step (1), since the data volume of the text is small, it need not pass through the encoder; the text can be encoded directly in a text encoding format and then added, in timestamp order, to the data frames produced by the audio and video encoding, and after the integration is complete it is sent to the media server.

Preferably, in step (1), the integration process is as follows: the speech is segmented and recognized, and continuous speech is output as segmented text frames. The content of a text frame includes the start timestamp of the text segment, the end timestamp of the text segment, and the recognized text content. The segmented, timestamped text frames are transmitted immediately, and the priority of the text packets is raised, so that the text delay can be reduced.
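One way to realize the segmentation described above, purely illustrative since the patent fixes no algorithm, is to group word-level recognizer output into frames wherever the pause between consecutive words exceeds a threshold. The `gap_ms` threshold and the `(text, start, end)` tuple layout are assumptions.

```python
def segment(words, gap_ms=700):
    """Group (text, start_ms, end_ms) word tuples into text frames.

    A new frame starts whenever the silence between consecutive words
    exceeds gap_ms. Each frame carries the start/end timestamps and
    the recognized content, matching the frame fields described above.
    """
    frames = []
    for text, start, end in words:
        if frames and start - frames[-1]["end"] <= gap_ms:
            frames[-1]["end"] = end               # extend current frame
            frames[-1]["body"] += " " + text
        else:
            frames.append({"start": start, "end": end, "body": text})
    return frames

words = [("this", 0, 200), ("is", 250, 400),      # same utterance
         ("next", 2000, 2300)]                     # after a long pause
frames = segment(words)
```

Emitting each frame as soon as its closing pause is detected is what allows the immediate, high-priority transmission the text mentions.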

Preferably, in step (2), when the media server receives the audio, the video, and the text converted from the audio, it persists the text to the database according to rules and forwards the audio, video, and text to the client.

Preferably, in step (2), the text and the video are synthesized as required and then recorded and stored in the database. The synthesizing process comprises the following steps: when the server records the video, it first waits for the arrival of a text frame and aligns the text frame with the video frames, one text frame corresponding to multiple video frames; the text content of the text frame is rendered onto each of its corresponding video frames; after the text frame ends, the video is saved, and the next text frame is aligned with the video frames and rendered.
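The alignment step can be sketched as follows. This is an assumption about the data layout, not the patent's exact implementation: each video frame timestamp is matched to the text frame whose [start, end] interval contains it, so one text frame covers several video frames.

```python
def align(video_ts, text_frames):
    """Map each video-frame timestamp to the body of the text frame
    covering it, or None if no text frame spans that instant."""
    out = []
    for ts in video_ts:
        body = None
        for f in text_frames:
            if f["start"] <= ts <= f["end"]:
                body = f["body"]
                break
        out.append((ts, body))
    return out

# one text frame spanning 0..1000 ms covers the first two video frames
text_frames = [{"start": 0, "end": 1000, "body": "hello"}]
aligned = align([0, 500, 1500], text_frames)
```

In a recorder, each `(timestamp, body)` pair would drive the subtitle rendering onto that video frame before the frame is written out.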

Preferably, in step (3), the audio is played through a system speaker; the video is rendered with OpenGL ES or another rendering tool onto a canvas, and the received text information is simultaneously rendered onto the same canvas according to its timestamps, so that the video and the text are synthesized; after the synthesis is finished, the buffers are swapped and the result is displayed in the area designated by the user.

Preferably, in step (3), the video-and-text synthesis process comprises: aligning text frames with video frames, one text frame corresponding to multiple video frames; rendering the text content of each text frame onto its corresponding video frames; after a text frame ends, the video can be saved and the next text frame is aligned with the video frames and rendered. Because the client displays the received pictures in real time and the speech-to-text conversion has a certain delay, a text frame will arrive later than the video frames with the same timestamp; this requires the media server to shorten the interval between text frames as much as possible, so that the client displays the text frames on the video frames in sequence.

Preferably, in step (3), if the client wants to view the text content of the video conference, it can request the server interface to retrieve the text record of the conference. The invention can record the content of the video conference in textual form; text is a more written and formal information carrier and is convenient to keep, and when the conference is over and its content must be consulted again, a text query is the most convenient and fastest way.

The invention has the beneficial effects that: it gives video conference users a more intuitive experience, and the text subtitles that accompany the audio improve the fault tolerance of listening; the content of the video conference can be recorded in textual form, which is convenient to keep as a more written and formal information carrier.

Drawings

FIG. 1 is a flow chart of a method of the present invention;

FIG. 2 is a process diagram of speech to text;

FIG. 3 is a process diagram of text synthesis at the media server.

Detailed Description

The invention is further described with reference to the drawings and the detailed description.

In the embodiment shown in FIG. 1, a method for converting voice into text based on video communication, for a general video conference system, specifically includes the following steps:

(1) at the data acquisition end, the audio data and video data are acquired and sent to an encoder for encoding; meanwhile a fixed duration of audio data is retained, here 3-5 s of audio (the duration is determined by the actual situation), for text recognition processing. Because the data volume of the text is small, it need not pass through the encoder; the text can be encoded directly in a text encoding format such as UTF-8 and then added, in timestamp order, to the data frames produced by the audio and video encoding, and after the integration is finished, sending to the media server begins;

as shown in FIG. 2, the integration process is as follows: the speech is segmented and recognized, that is, continuous speech is output as segmented text frames. The content of a text frame includes the start timestamp of the text segment, the end timestamp of the text segment, and the recognized text content; the format is, for example, 'start:1569307050000, end:1569307051000, body:this is the beginning of our meeting', where the 'start' field is the start timestamp of the text segment, the 'end' field is the end timestamp of the text segment, and the 'body' field is the recognized text content. The segmented, timestamped text frame is transmitted immediately, and the priority of the text packet is raised, so that the text delay can be reduced;
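A parser for the illustrated frame format might look like this. Hedged: the patent shows the format only by a single example, so the regular expression below assumes exactly the 'start:…, end:…, body:…' layout with integer millisecond timestamps.

```python
import re

# Assumed layout: "start:<ms>, end:<ms>, body:<free text>".
# re.S lets the body span newlines if the recognized text contains any.
FRAME_RE = re.compile(r"start:(\d+), end:(\d+), body:(.*)", re.S)

def parse_text_frame(raw):
    """Split a 'start:…, end:…, body:…' string into typed fields."""
    m = FRAME_RE.fullmatch(raw)
    if m is None:
        raise ValueError("not a text frame: " + raw)
    return {"start": int(m.group(1)), "end": int(m.group(2)),
            "body": m.group(3)}

frame = parse_text_frame(
    "start:1569307050000, end:1569307051000, "
    "body:this is the beginning of our meeting")
```

Putting `body` last keeps the format unambiguous even when the recognized text itself contains commas.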

(2) the media server forwards the received audio and video data packets to the client and at the same time stores them persistently. When the media server receives the audio, the video, and the text converted from the audio, it persists the text to the database according to rules, for convenient later query and use, and forwards the audio, video, and text to the client. It can also record the video and audio as required, or synthesize the text with the video before storing the result in the database. As shown in FIG. 3, the synthesis process is: when the server records the video, it first waits for the arrival of a text frame and aligns the text frame with the video frames, one text frame corresponding to multiple video frames; the text content of the text frame is rendered onto each of its corresponding video frames; after the text frame ends, the video can be saved, and the next text frame is aligned with the video frames and rendered;

(3) after receiving the audio and video data, the client sends them to a decoder for decoding, then plays the audio through the system speakers and renders the video with OpenGL ES or another rendering tool onto a canvas; the received text information is rendered onto the same canvas according to its timestamps, so that the video and the text are combined, and after the combination the buffers are swapped and the result is displayed in the area designated by the user. The synthesis aligns text frames with video frames: one text frame corresponds to multiple video frames, and the text content of each text frame is rendered onto its corresponding video frames; after a text frame ends, the next text frame is aligned with the video frames and rendered. Because the client displays the received pictures in real time and the speech-to-text conversion has a certain delay, a text frame will arrive later than the video frames with the same timestamp; this requires the media server to shorten the interval between text frames as much as possible, so that the client displays the text frames on the video frames in sequence. If the client wants to view the text content of the video conference, it can request the server interface to retrieve the text record of the conference.
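Because text frames lag the video frames they caption, the client can hold decoded video frames in a short buffer and release each one once its caption has arrived or a deadline passes. This is a sketch of that idea under assumed timings and data shapes, not a mechanism the patent mandates; `max_wait` and the `(timestamp, frame)` pairs are hypothetical.

```python
from collections import deque

def release(buffer, text_frames, now, max_wait=300):
    """Pop buffered (ts, frame) pairs that either have a caption
    available or have waited longer than max_wait milliseconds;
    stop at the first frame still worth waiting for."""
    shown = []
    while buffer:
        ts, frame = buffer[0]
        caption = next((f["body"] for f in text_frames
                        if f["start"] <= ts <= f["end"]), None)
        if caption is None and now - ts < max_wait:
            break                    # keep waiting for the text frame
        buffer.popleft()
        shown.append((frame, caption))
    return shown

buf = deque([(0, "v0"), (40, "v1")])
# the caption covering 0..100 ms arrives late; by now=120 it is here
shown = release(buf, [{"start": 0, "end": 100, "body": "hi"}], now=120)
```

A frame whose deadline expires is shown without a caption rather than stalling the live picture, which matches the real-time display requirement above.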

The invention focuses on the problem of expressing voice data in video communication. Sound is strongly affected by the external environment: even a little noise can make speech hard to hear, forcing the other party to speak again. To solve this problem, the invention recognizes the audio as text, forwards the recognized text through the server, and synthesizes the picture at the client, so the media server does not do much extra work. Since the client receives audio, video, and text messages at the same time, the diversified data types give video conference users a more intuitive experience, and the text subtitles that accompany the audio improve the fault tolerance rate (if the sound is not heard clearly, the subtitles make up for it). The invention can also record the content of the video conference in textual form; text is a more written and formal information carrier and is convenient to keep, and when the conference is over and its content must be consulted again, a text query is the most convenient and fastest way.
