Intelligent device and simultaneous interpretation method during video call

Document No.: 73243; publication date: 2021-10-01

Reading note: This invention, 一种智能设备及视频通话时的同声翻译方法 (Intelligent device and simultaneous interpretation method during video call), was created by 矫佩佩, 张玉, 孙菁, and 陈维强 on 2020-04-29. The invention relates to the smart-home field, and in particular to an intelligent device and a simultaneous interpretation method for video calls. The method comprises: receiving a user's operation instruction; generating a video interaction instruction based on the target interaction user and target language selected by the user; acquiring first video data and first audio data; receiving subtitle data, second audio data, and second video data sent by a cloud server; superimposing the subtitle data on the second video data; image-stitching the processed second video data with the first video data; and sending the resulting composite video data, together with the second audio data, to a display device. In this way, no large-scale processing equipment has to be deployed to convert audio data into text, and because the composite video data and the second audio data are dispatched synchronously, there is no delay between the audio and the subtitles, bringing simultaneous interpretation into everyday use.

1. A simultaneous interpretation method for video calls, comprising the following steps:

receiving an operation instruction from a user, controlling a display device to display a corresponding graphical user interface, generating a video interaction instruction based on the target interaction user and the target language selected by the user on that graphical user interface, and acquiring first video data and first audio data;

sending the first audio data, the first video data, and the video interaction instruction to a cloud server, thereby triggering the cloud server to obtain second video data and second audio data collected at the target interaction user side;

receiving, from the cloud server, subtitle data, the second audio data, and the second video data, wherein the subtitle data is obtained by translating the second audio data into the target language;

superimposing the subtitle data on the second video data, performing image stitching on the processed second video data and the first video data to generate composite video data, and sending the composite video data and the second audio data to the display device.

2. The method of claim 1, wherein, after receiving the user's operation instruction and controlling the display device to display the corresponding graphical user interface, and before generating the video interaction instruction based on the target interaction user and the target language selected by the user on that interface, the method further comprises:

determining the target language selected by the user on the graphical user interface presented on the display device, and determining that the target language exists in a preset valid-language list, wherein the valid-language list contains all languages that the cloud server can recognize and translate.

3. The method of claim 2, further comprising:

upon determining that the target language selected by the user on the graphical user interface presented on the display device is not contained in the preset valid-language list, generating a prompt message for selecting a target language;

and sending the prompt message to the display device for display, and waiting for the user to reselect a target language.

4. The method of any one of claims 1-3, wherein generating the video interaction instruction based on the target interaction user and the target language selected by the user comprises:

when the user chooses to enable the simultaneous interpretation function and selects a target interaction user, generating the video interaction instruction based at least on the ID information of the selected target interaction user and the target language selected when interpretation was enabled.

5. The method of claim 4, wherein, after triggering the cloud server to obtain the second video data and the second audio data collected at the target interaction user side, and before sending the composite video data and the second audio data to the display device, the method further comprises:

upon determining that only the second video data and the second audio data have been received from the cloud server, stitching the second video data and the first video data to generate the composite video data.

6. An electronic device, comprising:

a memory for storing executable instructions;

a processor for reading and executing the executable instructions stored in the memory to implement the simultaneous interpretation method for video calls of any one of claims 1 to 5.

7. A simultaneous interpretation method for video calls, comprising the following steps:

receiving and displaying a graphical user interface whose presentation is determined by a user's operation instruction;

receiving composite video data and audio data, displaying based on the composite video data, and playing based on the audio data, wherein the composite video data is generated by translating second audio data collected at a target interaction user side into a target language to produce subtitle data, superimposing the subtitle data on second video data collected at the target interaction user side, and image-stitching the superimposed second video data with locally collected first video data; and the audio data is the second audio data collected at the target interaction user side.

8. The method of claim 7, further comprising:

upon determining that a prompt message for selecting a target language has been received, displaying the prompt message and presenting to the user a graphical user interface for selecting a target language.

9. A display device, comprising:

a display for presenting a graphical user interface for video interaction and for displaying the video interaction data to be played;

a loudspeaker for playing the audio data of the target interaction user during video interaction;

a processor configured to perform:

receiving a graphical user interface that the intelligent device determines to present based on a user's operation instruction, and invoking the display to show it;

and receiving composite video data and audio data sent by the intelligent device, displaying on the display based on the composite video data, and playing through the loudspeaker based on the audio data, wherein the composite video data is generated by translating the second audio data collected at the target interaction user side into a target language to produce subtitle data, superimposing the subtitle data on the second video data collected at the target interaction user side, and image-stitching the superimposed second video data with the locally collected first video data.

10. The display device of claim 9, wherein the processor is further configured to:

upon determining that a prompt message for selecting a target language has been received from the intelligent device, display the prompt message on the display and present to the user a graphical user interface for selecting a target language.

Technical Field

The invention relates to the technical field of smart homes, and in particular to an intelligent device and a simultaneous interpretation method during video calls.

Background

With the development of smart televisions, they no longer serve only entertainment needs: people can now make video calls through them. As globalization advances, the parties to a video call may speak different languages, so removing communication barriers between languages has become increasingly important, and simultaneous interpretation technology has emerged to meet that need.

Existing simultaneous interpretation technology takes two forms. In the first, large-scale interpretation equipment is deployed on site at large international conferences, live broadcasts, and other public events to provide simultaneous interpretation during the event. In the second, interpretation exists as application software on terminal devices with relatively high computing power, converting speech into translated text.

However, both forms fall short. In the first case, large-scale equipment is expensive and difficult to deploy, so it cannot spread into daily use or serve video calls on a smart television. In the second case, once the terminal device obtains the speech data, translating it into text takes a certain processing time, so the audio the user hears and the text the user sees are out of sync; this delay significantly degrades the user experience and makes the approach unusable during a video call on a smart television.

Disclosure of Invention

The embodiments of the invention provide an intelligent device and a simultaneous interpretation method for video calls, to solve the prior-art problem that simultaneous interpretation cannot be applied to video interaction on a smart television while keeping the translated subtitle data synchronized with the audio data.

The embodiments of the invention provide the following specific technical solutions:

a simultaneous interpretation method during video call comprises the following steps:

receiving an operation instruction of a user, controlling display equipment to display a corresponding graphical user interface, generating a video interaction instruction based on a target interaction user selected by the user on the corresponding graphical user interface and a target language, and acquiring first video data and first audio data;

sending the first audio data, the first video data and the video interaction instruction to a cloud server, and triggering the cloud server to acquire second video data and second audio data acquired by the target interaction user side;

receiving subtitle data, the second audio data and the second video data sent by the cloud server, wherein the subtitle data is obtained after the second audio data is translated into a target language;

and overlaying the subtitle data on the second video data, performing image splicing processing on the processed second video data and the first video data to generate synthetic video data, and sending the synthetic video data and the second audio data to the display equipment.

Optionally, after receiving the user's operation instruction and controlling the display device to display the corresponding graphical user interface, and before generating the video interaction instruction based on the target interaction user and the target language selected by the user on that interface, the method further includes:

determining the target language selected by the user on the graphical user interface presented on the display device, and determining that the target language exists in a preset valid-language list, wherein the valid-language list contains all languages that the cloud server can recognize and translate.

Optionally, the method further comprises:

upon determining that the target language selected by the user on the graphical user interface presented on the display device is not contained in the preset valid-language list, generating a prompt message for selecting a target language;

and sending the prompt message to the display device for display, and waiting for the user to reselect a target language.

Optionally, generating the video interaction instruction based on the target interaction user and the target language selected by the user includes:

when the user chooses to enable the simultaneous interpretation function and selects a target interaction user, generating the video interaction instruction based at least on the ID information of the selected target interaction user and the target language selected when interpretation was enabled.

Optionally, after triggering the cloud server to obtain the second video data and the second audio data collected at the target interaction user side, and before sending the composite video data and the second audio data to the display device, the method further includes:

upon determining that only the second video data and the second audio data have been received from the cloud server, stitching the second video data and the first video data to generate the composite video data.

An electronic device, comprising:

a memory for storing executable instructions;

and a processor for reading and executing the executable instructions stored in the memory to implement the above simultaneous interpretation method for video calls.

A simultaneous interpretation method for video calls comprises the following steps:

receiving and displaying a graphical user interface whose presentation is determined by a user's operation instruction;

receiving composite video data and audio data, displaying based on the composite video data, and playing based on the audio data, wherein the composite video data is generated by translating second audio data collected at a target interaction user side into a target language to produce subtitle data, superimposing the subtitle data on second video data collected at the target interaction user side, and image-stitching the superimposed second video data with locally collected first video data; the audio data is the second audio data collected at the target interaction user side.

A display device, comprising:

a display for presenting a graphical user interface for video interaction and for displaying the video interaction data to be played;

a loudspeaker for playing the audio data of the target interaction user during video interaction;

a processor configured to perform:

receiving a graphical user interface that the intelligent device determines to present based on a user's operation instruction, and invoking the display to show it;

and receiving the composite video data and the audio data sent by the intelligent device, displaying on the display based on the composite video data, and playing through the loudspeaker based on the audio data, wherein the composite video data is generated by translating the second audio data collected at the target interaction user side into a target language to produce subtitle data, superimposing the subtitle data on the second video data collected at the target interaction user side, and image-stitching the superimposed second video data with the locally collected first video data.

The invention has the following beneficial effects:

In the disclosure, a user's operation instruction is received and the display device is controlled to display a corresponding graphical user interface; a video interaction instruction is generated based on the target interaction user and target language the user selects on that interface; first video data and first audio data are acquired and sent, together with the video interaction instruction, to a cloud server, triggering the cloud server to obtain the second video data and second audio data collected at the target interaction user side; subtitle data, the second audio data, and the second video data are then received from the cloud server, the subtitle data having been obtained by translating the second audio data into the target language; the subtitle data is superimposed on the second video data, the processed second video data and the first video data are image-stitched into composite video data, and the composite video data and the second audio data are sent to the display device.

In this way, the translated subtitle data is obtained from the cloud server, which greatly reduces the processing burden on local equipment: no large-scale processing equipment is needed to convert audio data into text. Because the composite video data and the second audio data are dispatched synchronously, there is no delay between the audio and the subtitles. On the one hand, video interaction is realized through the display device; on the other, simultaneous interpretation becomes usable in daily life.

Drawings

Fig. 1A is a schematic view of the operation scenario among the display device, the intelligent device, and the cloud server in an embodiment of the present disclosure;

Fig. 1B is a schematic diagram of the functional modules of the intelligent device in an embodiment of the present disclosure;

Fig. 2 is a schematic flow chart of simultaneous interpretation when the intelligent device implements a video call in an embodiment of the present disclosure;

Figs. 3A-3C are schematic diagrams of the intelligent device controlling the display device to present a graphical user interface in an embodiment of the present disclosure;

Fig. 3D is a schematic view of a video interaction interface presented on the display device in an embodiment of the present disclosure;

Fig. 4 is an interaction diagram of the display device, the intelligent device, and the cloud server implementing simultaneous interpretation of a video call in an embodiment of the present disclosure;

Fig. 5 is a schematic diagram of the logical structure of the intelligent device in an embodiment of the present disclosure;

Fig. 6 is a schematic diagram of the logical structure of the display device in an embodiment of the present disclosure.

Detailed Description

In order to make those skilled in the art better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.

To solve the prior-art problem that simultaneous interpretation cannot be applied to video calls on a smart television while keeping audio and video synchronized, and to ensure the interaction quality and the quality of translation into the target language during video on a display device, the present application provides an intelligent device and a simultaneous interpretation method for video calls.

Fig. 1A is a schematic diagram of the operation scenario among the smart device, the display device, and the cloud server. As shown in Fig. 1A, the control device 100 may communicate with the display device 200 in a wired or wireless manner, and the control device 100 may likewise communicate with the smart device 300 in a wired or wireless manner.

The smart device 300 is connected to the display device 200 by wire and transmits video and audio streams in the USB Video Class (UVC) protocol format or the Real Time Streaming Protocol (RTSP) format. The smart device 300 connects to the cloud server 400 over wireless communication, with video and audio streams transmitted in RTSP format; instructions between components inside the smart device 300 may be passed in socket format.
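
The patent names the inter-component instruction channel only as "socket format". As a rough illustration, the sketch below shows what JSON-over-TCP instruction passing between the smart device's components might look like; the port number and message fields are assumptions, not taken from the patent.

```python
import json
import socket

INSTRUCTION_PORT = 9500  # hypothetical port for the service-instruction channel

def send_instruction(host: str, instruction: dict) -> None:
    """Send one length-prefixed, JSON-encoded control instruction over TCP."""
    payload = json.dumps(instruction).encode("utf-8")
    with socket.create_connection((host, INSTRUCTION_PORT)) as conn:
        conn.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_instruction(conn: socket.socket) -> dict:
    """Read one length-prefixed JSON instruction from a connected socket."""
    length = int.from_bytes(conn.recv(4), "big")
    data = b""
    while len(data) < length:
        chunk = conn.recv(length - len(data))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        data += chunk
    return json.loads(data.decode("utf-8"))

# A video interaction instruction might carry the fields named in claim 4
# (field names here are illustrative):
send_instruction("127.0.0.1", {
    "type": "video_interaction",
    "target_user_id": "user_42",   # ID information of the target interaction user
    "target_language": "en",       # language selected when interpretation was enabled
})
```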

It should be noted that the connection relationships and processing among the display device 201, the smart device 301, and the cloud server 400 on the target interaction user side are the same as those among the devices on the side of the user who initiates the video interaction, and are not repeated here.

The control device 100 is configured, on the one hand, to control the display device 200; for example, the display device responds to channel up/down operations when the user presses the channel keys on the control device 100. On the other hand, the control device 100 is configured to control the smart device 300: the smart device 300 controls the display device 200 to display a graphical user interface and receives the selection or input operations the user makes on that interface.

The control device 100 may be a remote control 100A, communicating via infrared protocol, Bluetooth protocol, or other short-range methods, and controlling the smart device 300 wirelessly or by wire. Based on the graphical user interface presented on the display device 200, the user may input instructions through remote-control buttons, voice input, a control panel, and so on. For example, the user can input control instructions through the volume up/down keys, channel keys, up/down/left/right keys, voice input key, menu key, and power key on the remote control, thereby controlling the smart device 300.

The control device 100 may also be a terminal device such as a mobile terminal 100B, a tablet computer, or a notebook computer. For example, the smart device 300 can be controlled by an application running on the terminal device, which presents various controls to the user through an intuitive user interface (UI) on the terminal's screen.

For example, the mobile terminal 100B may install a software application paired with the smart device 300 and connect to it through a network communication protocol, enabling one-to-one control operation and data communication. The mobile terminal 100B can then establish a control instruction protocol with the smart device 300, so that operating the function keys or virtual buttons of the user interface on the mobile terminal 100B replicates the physical keys of the remote control 100A.

The display device 200 may be a liquid crystal display, an organic light-emitting display, or a projection device; its specific type, size, and resolution are not limited.

The smart device 300 communicates with the display device 200 in various ways; the connection may be wired, over a local area network (LAN), or over a wireless local area network (WLAN). Based on an operation instruction the user initiates through the control device 100, the smart device 300 acquires the locally collected first video data and first audio data, and obtains, via the cloud server 400, the second video data and second audio data collected at the target interaction user side, together with the subtitle data. It then superimposes the subtitle data on the second video data and stitches the processed second video data with the first video data.

The cloud server 400 connects to the smart device over a local area network (LAN) or a wireless local area network (WLAN). It receives the audio and video data sent by the smart device, translates the audio data into the target language, generates subtitle data, and delivers it to the relevant user.

In some embodiments of the present disclosure, one smart device 300 controls and manages one display device 200. The smart device 300 receives the video interaction selection the user initiates on the graphical user interface displayed by the display device 200 and generates a video interaction instruction; it receives the locally collected first audio data and first video data and sends them, together with the instruction, to the cloud server 400. The cloud server 400 obtains the second video data and second audio data collected at the target interaction user side and translates the second audio data into the target language to produce subtitle data. The smart device 300 then receives the subtitle data, the second audio data, and the second video data from the cloud server 400, superimposes the subtitle data onto the second video data, image-stitches the processed second video data with the first video data to generate composite video data, and sends the composite video data and the second audio data to the display device 200. In this way, the smart device 300 keeps the video and audio received by the display device 200 synchronized, avoids delay between them, and can effectively protect user privacy.

In other embodiments of the present disclosure, one smart device 300 can manage several mutually trusted display devices 200 within a given space; for example, the display devices 200 of the same home or the same company can share one smart device 300. The user can choose which display device 200 shows the video interaction picture. After receiving the user's operation on the graphical user interface displayed on that device, the smart device 300 generates the corresponding video interaction instruction, receives the locally collected first audio data and first video data, and sends them with the instruction to the cloud server 400. The cloud server 400 obtains the second video data and second audio data collected at the target interaction user side and translates the second audio data into the target language to produce subtitle data. The smart device 300 receives the subtitle data, the second video data, and the second audio data, superimposes the subtitles on the second video data, image-stitches the processed second video data with the first video data into composite video data, and sends the composite video data and the second audio data synchronously to the display device 200. Besides keeping audio and video synchronized, sharing one smart device among mutually trusted display devices effectively reduces configuration cost.

In some embodiments of the present disclosure, the smart device 300 includes a processor.

The processor is configured to perform the following:

receiving an operation instruction from a user, controlling the display device to display a corresponding graphical user interface, generating a video interaction instruction based on the target interaction user and the target language selected by the user on that graphical user interface, and acquiring first video data and first audio data;

sending the first audio data, the first video data, and the video interaction instruction to a cloud server, thereby triggering the cloud server to obtain second video data and second audio data collected at the target interaction user side;

receiving, from the cloud server, subtitle data, the second audio data, and the second video data, wherein the subtitle data is obtained by translating the second audio data into the target language;

superimposing the subtitle data on the second video data, performing image stitching on the processed second video data and the first video data to generate composite video data, and sending the composite video data and the second audio data to the display device.

It should be noted that the smart device 300 may obtain the first video data and the first audio data collected by an external camera 102 and microphone 103 by calling those devices.

In other embodiments of the present disclosure, the smart device 300 itself may be equipped with a movable camera 102 and a microphone 103. The camera 102 may be flexibly placed at the center of the top of the display device 200, at another position on the display device 200, or at any position away from the display device 200 where the user's video can be captured; it shoots the first video data generated by the user who initiates the video interaction. The microphone 103 may likewise be positioned wherever it best captures the user's audio; it collects the first audio data generated by that user.
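
As a concrete illustration of the capture step, here is a minimal sketch of grabbing first video and first audio data locally, assuming OpenCV (`opencv-python`) for the camera and PyAudio for the microphone; the device index, sample rate, and capture duration are assumptions.

```python
import cv2      # pip install opencv-python
import pyaudio  # pip install pyaudio

def capture_first_av(seconds: float = 1.0, rate: int = 16000):
    """Grab roughly `seconds` of local camera frames and raw microphone audio."""
    cam = cv2.VideoCapture(0)  # default camera index (assumption)
    pa = pyaudio.PyAudio()
    mic = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                  input=True, frames_per_buffer=1024)

    frames, audio_chunks = [], []
    # Each blocking mic.read(1024) lasts ~64 ms at 16 kHz; grab one frame per read.
    for _ in range(int(rate / 1024 * seconds)):
        ok, frame = cam.read()
        if ok:
            frames.append(frame)             # first video data (BGR frames)
        audio_chunks.append(mic.read(1024))  # first audio data (16-bit PCM)

    mic.stop_stream(); mic.close(); pa.terminate()
    cam.release()
    return frames, b"".join(audio_chunks)

first_video_data, first_audio_data = capture_first_av()
```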

In other embodiments of the present disclosure, referring to Fig. 1B, the components of the smart device 300 are divided into modules by function: a video interaction bearer module, a service instruction transceiver module, a service control module, an audio/video data transceiver module, an audio/video data acquisition module, a data processing module, an interface module, and a communication module.

The service instruction transceiver module receives the video interaction instruction the user initiates through selection and input operations on the graphical user interface displayed on the display device 200, and sends the corresponding control instructions to the display device 200;

the service control module directs the audio/video data transceiver module to obtain the first video data and first audio data collected by the audio/video data acquisition module, sends the processed composite video data and the second audio data to the display device 200 through the interface module, and directs the audio/video data transceiver module to send the first audio data and first video data and to receive the second video data, the second audio data, and the subtitle data;

the data processing module superimposes the subtitle data on the second video data and stitches the superimposed second video data with the first video data;

the audio/video data acquisition module locally collects the first audio data and first video data generated by the user who initiates the video interaction.
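
The module split above can be pictured as a set of cooperating classes, with the service control module orchestrating the rest. The skeleton below is purely illustrative; the class and method names are assumptions, not part of the patent.

```python
class AVCollector:
    """Audio/video data acquisition module: local camera and microphone."""
    def collect(self):
        ...  # returns (first_video_data, first_audio_data)

class AVTransceiver:
    """Audio/video data transceiver module: exchanges data with the cloud server."""
    def send(self, video, audio, instruction): ...
    def receive(self):
        ...  # returns (second_video, second_audio, subtitle_data)

class DataProcessor:
    """Data processing module: subtitle overlay and image stitching."""
    def overlay(self, video, subtitles): ...
    def stitch(self, processed_second, first): ...

class ServiceController:
    """Service control module: drives the pipeline end to end."""
    def __init__(self, collector, transceiver, processor, display_interface):
        self.collector, self.transceiver = collector, transceiver
        self.processor, self.display = processor, display_interface

    def run(self, instruction):
        first_video, first_audio = self.collector.collect()
        self.transceiver.send(first_video, first_audio, instruction)
        second_video, second_audio, subtitles = self.transceiver.receive()
        overlaid = self.processor.overlay(second_video, subtitles)
        composite = self.processor.stitch(overlaid, first_video)
        self.display.send(composite, second_audio)  # synchronized dispatch
```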

In some embodiments of the present disclosure, the interaction between the display device 200 and the smart device 300 is described below with reference to Fig. 2:

Step 201: the smart device receives the user's operation instruction, controls the display device to display the corresponding graphical user interface, and generates a video interaction instruction based on the target interaction user and the target language the user selects on that interface.

Applications that support video interaction between the user and other users are pre-installed on the smart device, which controls the display device to show their graphical user interfaces. Specifically, the smart device controls the display device to show an interface prompting the user to select a target interaction user for video interaction, and an interface prompting the user to enable the simultaneous interpretation function and select a target language.

Further, when the smart device determines that the user has completed the selection of the target interaction user and the target language on the corresponding interface and has confirmed the initiation of video interaction, it determines that the user has initiated a video interaction instruction.

In some embodiments of the present disclosure, in response to the selections the user makes on the graphical user interface displayed on the display device, the smart device determines that the user has enabled the simultaneous interpretation function, determines the selected target language, and determines the selected target interaction user. When the target language exists in a preset valid-language list, it generates a video interaction instruction based at least on the ID information of the selected target interaction user and the target language selected when interpretation was enabled; the valid-language list contains all languages the cloud server can recognize and translate.

In other embodiments of the present disclosure, when the smart device determines that the target language the user selected is not in the preset valid-language list, it generates a prompt message for selecting a target language, sends the prompt to the display device for display, and waits for the user to reselect. After the user has enabled simultaneous interpretation, selected a valid target language, and selected a target interaction user, the smart device generates the video interaction instruction based at least on the selected target interaction user's ID information and the reselected target language.
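
A minimal sketch of the validation logic in these two paragraphs might look as follows; the contents of the valid-language list and the prompt text are assumptions.

```python
# Languages the cloud server is assumed to recognize and translate (illustrative).
VALID_LANGUAGES = {"zh", "en", "ja", "ko", "fr", "de"}

def build_video_interaction_instruction(target_user_id: str,
                                        target_language: str) -> dict:
    """Return a video interaction instruction, or a re-selection prompt when the
    chosen language is not in the preset valid-language list."""
    if target_language not in VALID_LANGUAGES:
        # Sent to the display device; the smart device then waits for a reselect.
        return {"prompt": f"'{target_language}' is not supported; "
                          "please select another target language."}
    return {"type": "video_interaction",
            "target_user_id": target_user_id,
            "target_language": target_language}
```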

It should be noted that, after the smart device has the display device show the interface prompting the user to select a target interaction user, the user may either type in user identification information, such as identity (ID) information, to specify the target interaction user, or select one from the associated-user list presented on the display device. The associated-user list contains other users who already have a friend relationship with the user in the video-interaction application, as well as other users who have previously interacted with the user by video.

For example, referring to Figs. 3A-3C: on the initial graphical user interface presented by the display device under the smart device's control, the identifier of an application capable of video interaction is shown. Once the user clicks that identifier via the remote control, the display device presents the interface shown in Fig. 3B, where the user may either type in the target user's ID information or directly pick another user from the associated-user list to determine the target interaction user. The display device is then controlled to present the interface shown in Fig. 3C, prompting the user to enable the simultaneous interpretation function and select a target language. Once the user has enabled interpretation, selected the target language, and confirmed, the smart device determines that the user has initiated a video interaction instruction.

Step 202: the smart device acquires the first video data and the first audio data.

After the smart device determines that the user has initiated a video interaction instruction, it collects the user's first video data through a video acquisition device and the user's first audio data through an audio acquisition device; the video acquisition device includes, but is not limited to, a camera, and the audio acquisition device includes, but is not limited to, a microphone.

Step 203: the smart device sends the first audio data, the first video data, and the video interaction instruction to the cloud server, triggering the cloud server to obtain the second video data and second audio data collected at the target interaction user side.

After generating the video interaction instruction from the user's selections, the smart device sends it, together with the first audio data and first video data, to the cloud server. This triggers the cloud server to obtain, based on the target interaction user's ID information and the target language information carried in the instruction, the second audio data and second video data collected at the target interaction user side, and to translate the second audio data into the target language to produce subtitle data.

It should be noted that, after the smart device sends the video interaction instruction to the cloud server, the cloud server identifies the target interaction user from the ID information and issues a video interaction request to the smart device on the target interaction user's side; only after the target interaction user agrees to participate does the cloud server obtain the second audio data and second video data collected at that side and uploaded by that smart device.

Step 204: the smart device receives the subtitle data, the second audio data, and the second video data sent by the cloud server, the subtitle data being obtained by translating the second audio data into the target language.

After the smart device sends the first audio data, the first video data, and the video interaction instruction to the cloud server, the cloud server obtains the second video data and second audio data collected at the target interaction user side and uploaded by the smart device there. Once the cloud server finishes translating the second audio data into the target language and generating the subtitle data, the smart device receives the second audio data, the second video data, and the subtitle data from it.

In this way, the translation from audio data to subtitle data runs on the cloud server, which removes the need for the strong computing power that semantic analysis and translation demand; video interaction is achieved without deploying large-scale processing equipment locally.

Step 205: the smart device superimposes the subtitle data on the second video data, image-stitches the processed second video data with the first video data to generate composite video data, and sends the composite video data and the second audio data to the display device.

After receiving the second video data, the second audio data, and the subtitle data the cloud server generated from the second audio data and the target language, the smart device superimposes the subtitle data on the second video data. Because the subtitle data is generated from the second audio data, and the second video data and second audio data were collected synchronously, the second video data, the second audio data, and the subtitle data are all synchronized in time. The superimposition itself uses a standard data overlay technique; overlaying subtitles on video is a mature technology in the field and is not described further here.

Further, the second video data with the subtitles superimposed and the time-synchronized image frames of the first video data are stitched pairwise: each pair of frames is combined into a single image frame, and the composite video data is assembled from the resulting frames.
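
As a concrete sketch of this per-frame composition, the snippet below overlays one subtitle string on a frame of the second video and stitches it side by side with the time-matched first-video frame, using OpenCV and NumPy. The font, subtitle position, and side-by-side layout are assumptions the patent leaves open, and `cv2.putText` only renders basic Latin text, so a production system would use a full text renderer.

```python
import cv2
import numpy as np

def compose_frame(second_frame: np.ndarray, first_frame: np.ndarray,
                  subtitle: str) -> np.ndarray:
    """Overlay subtitle text on the remote frame, then stitch the remote and
    local frames into one side-by-side composite frame."""
    frame = second_frame.copy()
    h, w = frame.shape[:2]
    # Draw the subtitle near the bottom of the remote picture.
    cv2.putText(frame, subtitle, (int(w * 0.05), int(h * 0.92)),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    # Resize the local frame to the same height before horizontal stitching.
    scale = h / first_frame.shape[0]
    local = cv2.resize(first_frame, (int(first_frame.shape[1] * scale), h))
    return np.hstack([frame, local])
```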

Further, the resulting composite video data and the second audio data are sent synchronously to the display device through a preset audio/video interface; the display device displays based on the composite data and plays based on the second audio data.

For example, referring to Fig. 3D, after the smart device sends the composite video data and the second audio data to the display device, the display device presents the playback picture shown in Fig. 3D, showing in real time the video call between the user and the target interaction user with the subtitle data.

In this way, the smart device sends the composite video data through the preset video interface while synchronously sending the second audio data through the preset audio interface, guaranteeing that the video and audio played by the display device stay synchronized and avoiding the delay between them that would harm the user experience.
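
The synchronization described here can be pictured as pairing each composite frame with the audio chunk that carries the same capture timestamp before anything is handed to the audio/video interface. The sketch below is one way to express that pairing; the timestamp convention and the output queue are assumptions.

```python
import queue

av_output: "queue.Queue" = queue.Queue()  # stands in for the preset A/V interface

def dispatch_synchronized(video_frames, audio_chunks):
    """Pair composite frames and second-audio chunks by capture timestamp so the
    display device always receives a frame together with its matching audio."""
    audio_by_ts = {ts: chunk for ts, chunk in audio_chunks}
    for ts, frame in video_frames:
        av_output.put((ts, frame, audio_by_ts.get(ts)))

# Usage: (timestamp_ms, payload) pairs; 40 ms spacing corresponds to 25 fps.
dispatch_synchronized(
    video_frames=[(0, "composite_frame_0"), (40, "composite_frame_1")],
    audio_chunks=[(0, "audio_chunk_0"), (40, "audio_chunk_1")],
)
```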

Referring to Fig. 4, the simultaneous interpretation process of the present disclosure during a video call is described below:

Step 401: the smart device receives the user's operation instruction and controls the display device to display the corresponding graphical user interface.

The smart device 300 receives the user's operation instruction, determines the graphical user interface to present to the user, determines which interface to switch to based on the selections the user makes on the current interface, and controls the display device 200 to display it.

Step 402: the smart device generates a video interaction instruction based on the target interaction user and the target language the user selects on the corresponding graphical user interface.

Based on the user's operations, the smart device 300 controls the display device 200 to display the corresponding graphical user interface, determines the target interaction user the user selects there, and determines the target language the user selects when enabling the simultaneous interpretation function.

Further, in some embodiments of the present disclosure, when the smart device 300 determines that the target language exists in the preset valid-language list (that is, that the cloud server 400 can translate audio data into that language), it generates the video interaction instruction based at least on the target interaction user's ID information and the target language information.

In other embodiments of the present disclosure, when the smart device 300 determines that the selected target language does not exist in the preset valid-language list, it generates a prompt message for selecting a target language, sends it to the display device 200 for display, waits for the user to reselect, and then generates the video interaction instruction from the reselected valid target language and the target interaction user's ID information.

Step 403: the smart device obtains the locally collected first audio data and first video data.

After generating the video interaction instruction based on the target interaction user and target language the user selected, the smart device 300 calls the camera 102 to collect the first video data and the microphone 103 to collect the first audio data, receiving both as they are captured.

Step 404: the smart device sends the first audio data, the first video data, and the video interaction instruction to the cloud server.

After obtaining the locally collected first video data and first audio data, the smart device 300 sends them, together with the generated video interaction instruction, to the cloud server 400 over the local area network.

Step 405: the cloud server determines that the data and the instruction have been received successfully, and determines that the target interaction user agrees to the video interaction.

After determining that the video interaction instruction, the first video data, and the first audio data sent by the smart device 300 have been received successfully, the cloud server 400 sends a video interaction request to the corresponding target interaction user based on the target-interaction-user information carried in the instruction.

In some embodiments of the present disclosure, the cloud server 400 has the video interaction request displayed on the display device 201 of the target interaction user side via the smart device 301 on that side. After obtaining an indication that the target interaction user agrees to join the video interaction, it triggers the smart device 301 to have the microphone on that side collect the second audio data and the camera on that side collect the second video data, so that the smart device 301 receives both.

In other embodiments of the present disclosure, the cloud server 400 sends the video interaction request to the display device 201 via the smart device 301 on the target interaction user side. Upon determining that the target interaction user has refused, on the display device 201, to join the video interaction, it feeds the refusal directly back to the smart device 300, which forwards it to the display device 200 for display, and the smart device 300 is triggered to end the current video interaction.

Step 406: the cloud server receives the second video data and second audio data collected at the target interaction user side.

After determining that the video interaction instruction has been received successfully, that the target language and target-interaction-user information have been obtained, and that the target interaction user agrees to the video interaction, the cloud server 400 obtains the second audio data and second video data collected at the target interaction user side and reported by the smart device 301 there.

Step 407: the cloud server translates the second audio data into the target language and generates the subtitle data.

After obtaining the target language information and the second audio data collected at the target interaction user side, the cloud server 400 calls a voice translation package, translates the second audio data into the target language, and, once translation completes, generates the subtitle data corresponding to the second audio data.
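
On the server side, this step amounts to a speech-recognition pass followed by machine translation. The sketch below only shows the shape of that pipeline: the patent says no more than that a "voice translation package" is called, so `recognize` and `translate` are hypothetical stand-ins, not real APIs.

```python
from typing import Optional

def recognize(audio: bytes) -> str:
    """Hypothetical ASR call: raw second audio data in, transcript out."""
    raise NotImplementedError  # stands in for the cloud's speech engine

def translate(text: str, target_language: str) -> str:
    """Hypothetical MT call: transcript in, target-language text out."""
    raise NotImplementedError  # stands in for the cloud's translation engine

def generate_subtitle_data(second_audio: bytes,
                           target_language: str) -> Optional[str]:
    """Translate the remote user's audio into subtitle data; return None on
    failure, in which case only audio and video are forwarded (see step 408)."""
    try:
        transcript = recognize(second_audio)
        return translate(transcript, target_language)
    except Exception:
        return None
```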

Step 408: the cloud server sends the second audio data, the second video data, and the subtitle data to the smart device.

After the cloud server finishes translating the second audio data into subtitle data in the target language, it sends the resulting subtitle data, together with the second audio data and the second video data collected at the target interaction user side, to the smart device 300.

In some embodiments of the present disclosure, when the cloud server successfully translates the second audio data into the target language and obtains the subtitle data, it sends the subtitle data to the smart device 300 along with the second audio data and the second video data.

In other embodiments of the present disclosure, if the cloud server fails to translate the second audio data into the target language and cannot obtain subtitle data, it sends only the second audio data and the second video data to the smart device 300.

Step 409: the smart device superimposes the subtitle data on the second video data.

In some embodiments of the present disclosure, after receiving the subtitle data translated by the cloud server 400, the smart device 300 superimposes it on the second video data according to the correspondence between the subtitle data and the second audio data and the temporal synchronization between the second audio data and the second video data; the font size, font color, and text position of the subtitles are all flexibly adjustable.

In other embodiments of the present disclosure, when the smart device 300 does not receive subtitle data from the cloud server 400, it proceeds directly to the operation defined in step 410.

Step 410: the smart device image-stitches the superimposed second video data with the first video data to generate the composite video data.

In some embodiments of the present disclosure, after the smart device 300 finishes superimposing the subtitle data on the second video data, it stitches the subtitled second video data with the locally collected first video data, taking corresponding image frames as the unit of processing, to obtain the composite video data.

In other embodiments of the present disclosure, when the smart device 300 determines that only the second video data and the second audio data were received from the cloud server 400, it directly stitches the second video data and the first video data to generate the composite video data.

Step 411: the smart device sends the composite video data and the audio data to the display device.

After producing the composite video data, the smart device 300 sends it and the second audio data to the display device 200 at the same time, so that the display device 200 displays based on the composite video data through its display and plays the second audio data through its loudspeaker, keeping the two synchronized.

Based on the same inventive concept, an embodiment of the present disclosure provides a smart device, shown in Fig. 5, which includes at least an acquisition unit 501, a sending unit 502, a receiving unit 503, and a processing unit 504, wherein:

the acquisition unit 501 receives the user's operation instruction, controls the display device to display the corresponding graphical user interface, generates a video interaction instruction based on the target interaction user and the target language the user selects on that interface, and acquires the first video data and the first audio data;

the sending unit 502 sends the first audio data, the first video data, and the video interaction instruction to the cloud server, triggering the cloud server to obtain the second video data and second audio data collected at the target interaction user side;

the receiving unit 503 receives the subtitle data, the second audio data, and the second video data sent by the cloud server, the subtitle data being obtained by translating the second audio data into the target language;

the processing unit 504 superimposes the subtitle data on the second video data, image-stitches the processed second video data with the first video data to generate the composite video data, and sends the composite video data and the second audio data to the display device.

Based on the same inventive concept, an embodiment of the present disclosure provides a display device, shown in Fig. 6, which includes at least a receiving unit 601 and a display unit 602, wherein:

the receiving unit 601 receives and displays the graphical user interface determined to be presented based on the user's operation instruction;

the display unit 602 receives the composite video data and the audio data, displays based on the composite video data, and plays based on the audio data, wherein the composite video data is generated by translating the second audio data collected at the target interaction user side into the target language to produce subtitle data, superimposing the subtitle data on the second video data collected at the target interaction user side, and image-stitching the superimposed second video data with the locally collected first video data; the audio data is the second audio data collected at the target interaction user side.

Based on the same inventive concept, an embodiment of the present disclosure provides a storage medium; when the instructions in the storage medium are executed by a processor, the processor can perform any of the simultaneous interpretation methods for video calls implemented by the smart device in the flows above.

Based on the same inventive concept, an embodiment of the present disclosure provides a storage medium; when the instructions in the storage medium are executed by a processor, the processor can perform any of the simultaneous interpretation methods for video calls implemented by the display device in the flows above.

In the disclosure, a user's operation instruction is received and the display device is controlled to display a corresponding graphical user interface; a video interaction instruction is generated based on the target interaction user and target language the user selects on that interface; first video data and first audio data are acquired and sent, together with the video interaction instruction, to a cloud server, triggering the cloud server to obtain the second video data and second audio data collected at the target interaction user side; subtitle data, the second audio data, and the second video data are then received from the cloud server, the subtitle data having been obtained by translating the second audio data into the target language; the subtitle data is superimposed on the second video data, the processed second video data and the first video data are image-stitched into composite video data, and the composite video data and the second audio data are sent to the display device.

In this way, the translated subtitle data is obtained from the cloud server, which greatly reduces the processing burden on local equipment: no large-scale processing equipment is needed to convert audio data into text. Because the composite video data and the second audio data are dispatched synchronously, there is no delay between the audio and the subtitles. On the one hand, video interaction is realized through the display device; on the other, simultaneous interpretation becomes usable in daily life.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.
