Video special effect display method and device

Document No.: 142691  Publication date: 2021-10-22

Reading note: this technique, "Video special effect display method and device" (一种视频特效显示方法及设备), was designed and created by 王冉冉 on 2021-06-22. Its main content: the application relates to the field of augmented reality (AR) and provides a video special effect display method and device, wherein a first terminal, in response to a received target video playing request, acquires a target video and plays it; the first terminal detects at a set time interval whether a second terminal is accessed, the second terminal being provided with a transparent display screen through which a wearer of the second terminal watches the target video played by the first terminal; if the second terminal is accessed, the first terminal sends, in response to a detected preset trigger point, a control instruction to the second terminal, the preset trigger point indicating that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video, so that the second terminal acquires and plays, according to the identifier, the special effect information set at that moment. The target video and the special effect information are displayed by the first terminal and the second terminal respectively, thereby reducing video stuttering.

1. A video special effect display method, comprising:

the first terminal responds to the received target video playing request, acquires and plays the target video;

the first terminal detects whether a second terminal is accessed according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;

when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video and is used for enabling the second terminal to acquire and play, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point.

2. The method of claim 1, wherein the method further comprises:

when it is detected that the second terminal is not accessed, sending a special effect acquisition request to a server in response to the detected preset trigger point;

and receiving and playing special effect information which is sent by the server and set at the moment corresponding to the preset trigger point.

3. The method of claim 1, wherein after sending the control instruction to the second terminal, the method further comprises:

and if it is detected that the second terminal does not respond to the control instruction, displaying, to the wearer, prompt information indicating that the second terminal does not respond, wherein the prompt information includes an error type of the non-response, so that the wearer reconfigures the second terminal according to the error type to establish a communication connection with the first terminal.

4. The method according to any of claims 1-3, wherein said sending a control instruction to the second terminal in response to the detected preset trigger point comprises:

when the preset trigger point is detected, directly sending a control instruction to the second terminal at a first moment corresponding to the preset trigger point; or

when the preset trigger point is detected, determining a second moment according to the first moment corresponding to the preset trigger point, and sending the control instruction to the second terminal at the second moment, wherein the second moment is earlier than the first moment.

5. The method of claim 4, wherein the determining a second moment according to the first moment corresponding to the preset trigger point comprises:

acquiring a current playing moment corresponding to a currently played target video frame; and if the difference between the current playing moment and the first moment is smaller than a preset threshold, determining the current playing moment as the second moment; or

determining the second moment according to the first moment corresponding to the preset trigger point and a preset delayed playing duration.

6. The method according to any one of claims 1 to 3, wherein the preset trigger point is a tag set where the target video plays to a preset scene, or a tag marked at a predetermined playing time on a playing time axis of the target video.

7. The method of any of claims 1-3, wherein the second terminal is Augmented Reality (AR) glasses.

8. A video special effect display method, applied to a second terminal, comprising:

when a first terminal is accessed, receiving a control instruction sent by the first terminal when a preset trigger point is detected, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point, and the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request, and the second terminal is provided with a transparent display screen for enabling a wearer of the second terminal to watch the target video played by the first terminal;

and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.

9. A first terminal, comprising a display, a memory, and a controller, wherein:

the display is connected with the controller and is configured to display a target video;

the memory, coupled to the controller, configured to store computer program instructions;

the controller configured to perform the following operations in accordance with the computer program instructions:

responding to the received target video playing request, acquiring a target video and playing the target video;

detecting whether the second terminal is accessed according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;

when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video and is used for enabling the second terminal to acquire and play, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point.

10. A second terminal, comprising a display, a memory, and a processor, wherein:

the display is connected with the processor and is configured to render and display special effect information;

the memory, coupled to the processor, configured to store computer program instructions;

the processor configured to perform the following operations in accordance with the computer program instructions:

when a first terminal is accessed, receiving a control instruction sent by the first terminal when a preset trigger point is detected, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point, and the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request, and the second terminal is provided with a transparent display screen for enabling a wearer of the second terminal to watch the target video played by the first terminal;

and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.

Technical Field

The present disclosure relates to the field of Augmented Reality (AR) technologies, and in particular, to a method and an apparatus for displaying a video effect.

Background

Augmented Reality (AR) is a technology that skillfully fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking, intelligent interaction, and sensing, so that generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then applied to the real world, where the two kinds of information complement each other, realizing an "enhancement" of the real world.

AR display has wide applications in various industries, such as education and training, fire drills, virtual driving, real estate, and marketing, bringing users an immersive visual feast. By superimposing scene special effects on video content through AR technology, a strikingly novel visual experience is presented: a video overlaid with personalized special effects can prompt an entertaining physical response from the user, and the display remains highly stable even in extreme environments.

At present, the common AR special effect mode has the AR device both play the video and display the special effect. This places high demands on the processing performance of the AR device, and video stuttering easily occurs.

Disclosure of Invention

The embodiments of the application provide a video special effect display method and device, which are used to improve the performance of AR special effect display.

In a first aspect, an embodiment of the present application provides a video special effect display method, including:

the first terminal responds to the received target video playing request, acquires and plays the target video;

the first terminal detects whether the second terminal is accessed according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;

when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video and is used for enabling the second terminal to acquire and play, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point.
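As a rough illustration of the first-aspect flow (together with the server fallback of claim 2), the routing decision the first terminal makes at each detected trigger point can be sketched in Python. The function name and the message layout are illustrative assumptions, not a format specified by this application.

```python
def route_trigger_point(video_id, trigger_moment, second_terminal_accessed):
    """Decide how the special effect for one preset trigger point is handled.

    If the second terminal (e.g. AR glasses) is accessed, a control instruction
    carrying the target video's identifier is sent to it; otherwise the first
    terminal requests the special effect information from the server and plays
    the effect itself. The dict layout below is hypothetical.
    """
    if second_terminal_accessed:
        return ("second_terminal", {"type": "control_instruction",
                                    "video_id": video_id,
                                    "trigger_moment": trigger_moment})
    return ("server", {"type": "special_effect_request",
                       "video_id": video_id,
                       "trigger_moment": trigger_moment})
```

Note that in either branch the video itself keeps playing on the first terminal; only the effect rendering is routed, which is what keeps the per-device load low.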

In a second aspect, an embodiment of the present application provides a video special effect display method, including:

when a first terminal is accessed, receiving a control instruction sent by the first terminal when a preset trigger point is detected, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point, and the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request, and the second terminal is provided with a transparent display screen for enabling a wearer of the second terminal to watch the target video played by the first terminal;

and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.
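On the second terminal's side, the acquisition step above amounts to a lookup keyed by the identifier carried in the control instruction. A minimal sketch, assuming a local catalogue of effect information (the catalogue layout and all names are hypothetical, not defined by this application):

```python
# Hypothetical catalogue: video identifier -> {trigger moment (ms): effect info}.
EFFECT_CATALOGUE = {
    "video-001": {12000: "snowfall overlay", 47500: "fireworks overlay"},
}

def on_control_instruction(instruction, catalogue=EFFECT_CATALOGUE):
    """Resolve the special effect set at the moment corresponding to the
    preset trigger point, using the identifier the instruction carries.

    Returns the effect information to render, or None if nothing is set
    for that moment of that video.
    """
    effects = catalogue.get(instruction["video_id"], {})
    return effects.get(instruction["trigger_moment"])
```

In practice the catalogue could equally be fetched from the server by identifier when memory is limited; the lookup itself is unchanged.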

In a third aspect, an embodiment of the present application provides a first terminal, including a display, a memory, and a controller:

the display is connected with the controller and is configured to display a target video;

the memory, coupled to the controller, configured to store computer program instructions;

the controller configured to perform the following operations in accordance with the computer program instructions:

responding to the received target video playing request, acquiring a target video and playing the target video;

detecting whether a second terminal is accessed according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;

when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video and is used for enabling the second terminal to acquire and play, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point.

Optionally, the controller is further configured to:

when it is detected that the second terminal is not accessed, sending a special effect acquisition request to a server in response to the detected preset trigger point;

and receiving and playing special effect information which is sent by the server and set at the moment corresponding to the preset trigger point.

Optionally, the controller is further configured to:

and if it is detected that the second terminal does not respond to the control instruction, displaying, to the wearer, prompt information indicating that the second terminal does not respond, wherein the prompt information includes an error type of the non-response, so that the wearer reconfigures the second terminal according to the error type to establish a communication connection with the first terminal.

Optionally, the controller sends a control instruction to the second terminal in response to the detected preset trigger point, and is specifically configured to:

when the preset trigger point is detected, directly sending a control instruction to the second terminal at a first moment corresponding to the preset trigger point; or

when the preset trigger point is detected, determining a second moment according to the first moment corresponding to the preset trigger point, and sending the control instruction to the second terminal at the second moment.

Optionally, the determining, by the controller, of a second moment according to the first moment corresponding to the preset trigger point includes:

acquiring a current playing moment corresponding to a currently played target video frame; and if the difference between the current playing moment and the first moment is smaller than a preset threshold, determining the current playing moment as the second moment; or

determining the second moment according to the first moment corresponding to the preset trigger point and a preset delayed playing duration.
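The two options above can be sketched as a single function. Timestamps are in milliseconds; the threshold and delay defaults are illustrative assumptions, not values given by this application.

```python
def determine_second_moment(first_moment_ms, current_play_ms=None,
                            threshold_ms=500, delay_ms=200):
    """Pick the moment at which the control instruction is sent.

    Option 1: if the currently played frame's moment is within `threshold_ms`
    of the first moment, send at the current playing moment. Option 2 (the
    fallback): send `delay_ms` ahead of the first moment, giving the second
    terminal time to fetch the effect before it must be played.
    """
    if current_play_ms is not None and abs(first_moment_ms - current_play_ms) < threshold_ms:
        return current_play_ms
    return first_moment_ms - delay_ms  # earlier than the first moment
```

Sending slightly ahead of the trigger moment is the point of the design: the glasses can have the effect information ready when playback reaches the marked moment.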

Optionally, the preset trigger point is a tag set where the target video plays to a preset scene, or a tag marked at a preset playing time on the playing time axis of the target video.

Optionally, the second terminal is augmented reality AR glasses.

In a fourth aspect, an embodiment of the present application provides a second terminal, including a display, a memory, and a processor:

the display is connected with the processor and is configured to render and display special effect information;

the memory, coupled to the processor, configured to store computer program instructions;

the processor configured to perform the following operations in accordance with the computer program instructions:

when a first terminal is accessed, receiving a control instruction sent by the first terminal when a preset trigger point is detected, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the preset trigger point, and the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request, and the second terminal is provided with a transparent display screen for enabling a wearer of the second terminal to watch the target video played by the first terminal;

and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.

In a fifth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the video special effect display method provided by the embodiments of the present application.

In the above embodiments of the application, a wearer of the second terminal interacts with the first terminal while watching: the first terminal plays a target video selected by the wearer, and the wearer watches the target video played by the first terminal through the transparent display screen of the second terminal. The first terminal detects at a set time interval whether the second terminal is accessed. When the second terminal is detected to be accessed, a control instruction carrying the target video's identifier is sent to the second terminal in response to a detected preset trigger point, the preset trigger point indicating that a playing special effect is set at the moment, associated with the target video, corresponding to the trigger point. After receiving the control instruction, the second terminal acquires and plays, according to the identifier, the special effect information set at that moment. The target video is thus played by the first terminal while the special effect information is played by the second terminal, realizing a scheme in which special effect information is displayed superimposed on the target video. This brings the wearer an immersive visual experience and prompts entertaining physical responses to the special effect information; and because the target video and the special effect information are displayed independently by the first terminal and the second terminal respectively, both the demands on device performance and the occurrence of video stuttering are reduced.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.

Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;

fig. 2 is a diagram illustrating a hardware structure of a first terminal according to an embodiment of the present application;

fig. 3 is a block diagram schematically illustrating a second terminal provided in an embodiment of the present application;

FIG. 4 is a flow chart illustrating a video effects display method provided by an embodiment of the present application;

fig. 5 is a diagram illustrating an effect of displaying special effect information by a second terminal according to an embodiment of the present application;

fig. 6 is a diagram illustrating a relationship between different target videos and special effect information provided by an embodiment of the present application;

fig. 7a is a diagram illustrating an effect of displaying special effect information by a first terminal according to an embodiment of the present application;

fig. 7b is a diagram illustrating an effect of the first terminal displaying the prompt message according to an embodiment of the present application;

FIG. 8 is a schematic diagram illustrating a television and AR glasses interaction process provided by an embodiment of the present application;

fig. 9 is a flowchart illustrating a complete method for displaying video effects by using television and AR glasses according to an embodiment of the present application.

Detailed Description

To make the objects, embodiments, and advantages of the present application clearer, exemplary embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is to be understood that the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.

All other embodiments obtained by a person skilled in the art from the exemplary embodiments described herein without inventive effort are intended to fall within the scope of the appended claims. In addition, while the disclosure herein has been presented in terms of one or more exemplary examples, it should be appreciated that each aspect of the disclosure may also be implemented separately as a complete embodiment.

The terms "first", "second", "third", and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and are not necessarily meant to define a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.

Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.

Embodiments of the present application are described in detail below with reference to the accompanying drawings.

Fig. 1 schematically illustrates an application scenario provided in an embodiment of the present application. As shown in fig. 1, the first terminal 100 is configured to obtain a target video according to a received video playing request and play it, while a wearer of the second terminal 200 watches the target video played by the first terminal through the transparent display screen of the second terminal. The first terminal 100 detects at a set time interval whether the second terminal 200 is accessed. When it detects that the second terminal 200 is accessed, the first terminal 100 sends a control instruction to the second terminal 200, and the second terminal 200 plays, according to the received control instruction, the special effect corresponding to the target video scene played by the first terminal 100; the wearer thus experiences an immersive visual feast, and watching the video overlaid with a personalized special effect can prompt an entertaining physical response. When no second terminal 200 is detected to be accessed, the first terminal 100 plays the personalized special effect itself while playing the target video.

As shown in fig. 1, the server 300 is configured to store the processed target video and the special effect information, the first terminal 100 may acquire the target video and the special effect information from the server 300, and the second terminal may acquire the special effect information from the server 300.
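The role of server 300 described above can be sketched as a small in-memory store keyed by video identifier, queried by both terminals. This is a hypothetical stand-in for illustration only; the application does not specify the server's interface.

```python
class EffectServer:
    """In-memory stand-in for server 300: holds processed target videos and
    the special effect information set at each trigger moment."""

    def __init__(self):
        self._videos = {}   # video_id -> video payload
        self._effects = {}  # (video_id, trigger_moment) -> effect info

    def publish(self, video_id, video, effects):
        """Store a processed video and its {trigger moment: effect} map."""
        self._videos[video_id] = video
        for moment, effect in effects.items():
            self._effects[(video_id, moment)] = effect

    def get_video(self, video_id):
        """Fetched by the first terminal when a play request arrives."""
        return self._videos.get(video_id)

    def get_effect(self, video_id, moment):
        """Fetched by the second terminal (or, as a fallback, the first)."""
        return self._effects.get((video_id, moment))
```

Keying effects by (identifier, moment) is what lets the control instruction stay small: it only needs to carry the identifier, and the trigger moment is known to both sides.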

The first terminal 100 and the second terminal 200 may be connected via bluetooth, or may be connected via the same network.

It should be noted that, when the memories of the first terminal and the second terminal are large enough, the corresponding special effect information may be stored locally.

Taking a smart television as an example of the first terminal, fig. 2 exemplarily shows a structure diagram of the first terminal provided in the embodiment of the present application. As shown in fig. 2, the first terminal 100 includes at least one of a controller 250, a tuner demodulator 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.

In some embodiments, the display 275 includes a display screen assembly for presenting pictures and a driver assembly for driving image display; it receives image signals output from the controller and displays video content, images, and a menu manipulation interface.

In some embodiments, the display 275 is a projection display, in which case it may also include a projection device and a projection screen.

In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi chip, a bluetooth communication protocol chip, a wired ethernet communication protocol chip, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver.

In some embodiments, the first terminal 100 may transmit and receive control signals and data signals to and from external devices through the communicator 220.

In some embodiments, the user interface 265 may be used to receive control signals from external devices.

In some embodiments, the detector 230 includes a light receiver, an image collector, a temperature sensor, a sound collector, etc. for collecting signals of an external environment or interaction with the outside.

In some embodiments, the input/output interface 255 is configured to enable data transfer between the controller 250 and other external devices or other controllers 250, such as receiving video signal data, audio signal data, or command instruction data from an external device.

In some embodiments, the external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like. The plurality of interfaces may form a composite input/output interface.

In some embodiments, the tuner demodulator 210 is configured to receive broadcast television signals through wired or wireless reception, perform modulation and demodulation processing such as amplification, mixing, resonance, and the like, and demodulate audio and video signals from a plurality of wireless or wired broadcast television signals, where the audio and video signals may include television audio and video signals carried in a television channel frequency selected by a user and EPG data signals.

In some embodiments, the frequency points demodulated by the tuner demodulator 210 are controlled by the controller 250: the controller 250 can send out control signals according to the user's selection, so that the tuner demodulator 210 responds to the television signal frequency selected by the user and demodulates the television signal carried at that frequency.

In some embodiments, the controller 250 controls the operation of the first terminal and responds to user operations through various software control programs stored in the memory. The controller 250 may control the overall operation of the first terminal 100. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user command.

As shown in fig. 2, the controller 250 includes at least one of a Random Access Memory 251 (RAM), a Read-Only Memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a Graphics Processing Unit (GPU)), a Central Processing Unit 254 (CPU), a communication interface, and a communication bus 256 (Bus) that connects the respective components.

In some embodiments, RAM 251 is used to store temporary data for the operating system or other programs that are running.

In some embodiments, ROM 252 is used to store instructions for various system boots.

In some embodiments, the ROM 252 is used to store a Basic Input Output System (BIOS), which completes the power-on self-test of the system, the initialization of each functional module in the system, the drivers for basic system input/output, and the booting of the operating system.

In some embodiments, when the power-on signal is received, the first terminal 100 starts to power on, the CPU executes the system boot instruction in the ROM 252, and copies the temporary data of the operating system stored in the memory into the RAM 251 so as to start or run the operating system. After the start of the operating system is completed, the CPU copies the temporary data of the various application programs in the memory to the RAM 251, and then, the various application programs are started or run.

In some embodiments, the CPU 254 is used to execute operating system and application program instructions stored in the memory, and to run various application programs, data, and content according to the various interactive instructions received from the outside, so as to finally display and play various audio and video content.

In some example embodiments, the CPU 254 may comprise a plurality of processors, including a main processor and one or more sub-processors. The main processor performs some operations of the first terminal 100 in a pre-power-up mode and/or displays a picture in the normal mode; the one or more sub-processors handle operations in a standby mode or the like.

In some embodiments, the graphics processor 253 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It comprises an arithmetic unit, which performs operations by receiving the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the various objects produced by the arithmetic unit for display on the display.

In some embodiments, the video processor 270 is configured to receive an external video signal, and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and the like according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the first terminal 100.

In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.

In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, and amplification processes to obtain an audio signal that can be played in a speaker.

The power supply 290 provides power supply support for the first terminal 100 from the power input from the external power source under the control of the controller 250. The power supply 290 may include a built-in power supply circuit installed inside the first terminal 100, or may be a power supply interface installed on the outside of the first terminal 100 to supply external power to the first terminal 100.

The user interface 265 receives a user's input signal and transmits the received user input signal to the controller 250. The user input signal may be a remote controller signal received through an infrared receiver; various user control signals may also be received through the network communication module.

The memory 260 stores various software modules for driving the first terminal 100, including at least one of a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.

Taking the second terminal as AR glasses as an example, fig. 3 exemplarily shows a structure diagram of the second terminal provided in the embodiment of the present application. As shown in fig. 3, the second terminal 200 includes a left display lens 301 and a right display lens 302 through which the wearer can view video images. The camera 303 is used to collect images during the interaction process.

In some embodiments, the wearer may turn the AR glasses on or off via the switch 304 to control the connection with the external device.

As shown in fig. 3, the wearer may interact with the AR glasses through the touch area 305. For example, the user acquires special effect information to be displayed through the touch area.

Although not shown in fig. 3, the AR glasses further include chips such as a rendering engine, a memory, and a processor, which may be integrated on an integrated circuit board placed inside the AR glasses. The rendering engine, the memory, and the processor are connected through a bus. The rendering engine is configured to render and display the special effect information; the memory is configured to store computer program instructions; and the processor is configured to execute, according to the computer program instructions, the special effect display method on the second terminal side in the embodiment of the present application.

It should be noted that fig. 1-3 are only examples, and alternatively, the first terminal may be a display device with video playing and interaction functions, such as a smart phone, a notebook computer, a desktop computer, a tablet computer, and the like.

Based on the scenario shown in fig. 1, fig. 4 exemplarily shows a flowchart of a video special effect display method provided by an embodiment of the present application, and as shown in fig. 4, the flowchart is executed by a first terminal, and mainly includes the following steps:

S401: The first terminal responds to the received target video playing request, and acquires and plays the target video.

In this step, the target video playing request may be triggered by the user or sent by another external device. Taking user triggering as an example: the user selects a target video of interest through the touch screen or function keys of the first terminal and sends a target video playing request to the first terminal; the first terminal responds after receiving the request and sends a target video acquisition request to the server; the server, after receiving the acquisition request, sends the corresponding target video to the first terminal, which plays it.

In some embodiments, in order to improve video playing efficiency, after receiving a target video playing request, a first terminal queries whether a local video list includes a target video, loads the target video from the local video list and plays the target video when the local video list includes the target video, and acquires the target video from a server and plays the target video when the local video list does not include the target video.
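The cache-first lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and parameter names (`get_target_video`, `local_video_list`, `fetch_from_server`) are assumptions.

```python
# Sketch of the local-list-first video lookup (all names are illustrative).
def get_target_video(video_id, local_video_list, fetch_from_server):
    """Return the target video, preferring the local video list over the server."""
    if video_id in local_video_list:
        return local_video_list[video_id]   # found locally: no network round trip
    video = fetch_from_server(video_id)     # otherwise acquire it from the server
    local_video_list[video_id] = video      # cache it for subsequent requests
    return video
```

The design choice is the usual cache-aside pattern: the local list is consulted first, and a server fetch both serves the current request and populates the list.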

S402: the first terminal detects whether the second terminal is accessed according to the set time interval, and executes S403 when detecting that the second terminal is accessed, and executes S404 when detecting that the second terminal is not accessed.

In the step, the second terminal is provided with a transparent display screen, a wearer of the second terminal watches the target video played by the first terminal through the transparent display screen, and the second terminal can display virtual special effect information (such as characters, images, three-dimensional models, music and videos) in an overlapping manner with the real target video, so that the 'enhancement' of the real video picture is realized. Optionally, the second terminal is AR glasses.

The second terminal connects to the first terminal through Bluetooth, or both connect to the same network through WiFi. In S402, the first terminal detects at set intervals whether the second terminal has accessed, and determines the device that plays the special effect information according to the access state: when the second terminal is detected as accessed, the second terminal displays the special effect information; when it is detected as not accessed, the first terminal displays the special effect information.
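At each detection moment the decision reduces to choosing the playback device from the access state. A minimal sketch (the boolean probe result stands in for the Bluetooth/WiFi access check, which the patent does not specify):

```python
def choose_effect_player(second_terminal_accessed):
    """Select the device that plays the special effect for this detection round."""
    return "second_terminal" if second_terminal_accessed else "first_terminal"

def poll_access(probe_results):
    """Simulate the periodic check in S402: one decision per detection moment."""
    return [choose_effect_player(accessed) for accessed in probe_results]
```

In a real loop each probe would run after the set time interval elapses; here the results are supplied directly so the selection logic can be tested in isolation.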

S403: and the first terminal responds to the detected preset trigger point and sends a control instruction to the second terminal.

In the step, each target video is associated with at least one preset trigger point in advance, and the preset trigger point represents that a play special effect is set at the moment corresponding to the preset trigger point associated with the target video. The preset trigger point can be set according to actual requirements.

In an optional implementation, the preset trigger point is a label set when the target video plays to a preset scene. One target video may include several preset scenes, each of which may serve as the label of a preset trigger point; when the preset scene is played, the corresponding special effect information is acquired and played.

Take the target video "Calabash Brothers" as an example. The seven calabash babies have different skills: for example, the fourth baby can breathe fire and the fifth baby can spray water. Therefore, the scene in which the fourth baby uses his skill can be taken as one preset trigger point, to which flame special effect information is added, as shown in (a) of fig. 5; the scene in which the fifth baby uses his skill can be taken as another preset trigger point, to which water-spraying special effect information is added, as shown in (b) of fig. 5.

In another alternative embodiment, the preset trigger point is a tag marked at a predetermined play time on the play time axis of the target video.

For example, the duration of the target video is 30 seconds, a special effect is preset to be played when the target video is played to the 10 th second, a preset trigger point is set at the 10 th second on the playing time axis of the target video, and special effect information is added to the preset trigger point.

Each special effect information corresponds to a unique code, and the relationship between the preset trigger point associated with the target video and the special effect information is shown in table 1.

Table 1 correspondence between preset trigger points and special effect information

As can be seen from table 1, different preset trigger points associated with different target videos may correspond to the same special effect information. For example, the fire-breathing effect of the fourth baby in "Calabash Brothers", shown in (a) of fig. 6, can also be applied to the Red Boy breathing fire in "Journey to the West", as shown in (b) of fig. 6.
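The mapping that table 1 describes can be sketched as a lookup keyed by video identifier and trigger point. All IDs and effect codes below are invented for illustration; table 1's actual contents are not reproduced here.

```python
# Illustrative stand-in for Table 1: (video id, trigger point) -> effect code.
TRIGGER_EFFECTS = {
    ("gourd_video", 1001): "A1",   # fourth baby breathes fire -> flame effect
    ("gourd_video", 1002): "A2",   # fifth baby sprays water   -> water effect
    ("journey_west", 2001): "A1",  # Red Boy breathes fire     -> same flame effect reused
}

def effect_for(video_id, trigger_point):
    """Return the effect code set for this trigger point, or None if none is set."""
    return TRIGGER_EFFECTS.get((video_id, trigger_point))
```

Because the effect code, not the video, is the key's value, one piece of special effect information (here "A1") can be shared across trigger points in different videos, exactly as the paragraph above notes.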

Because the display technologies of the first terminal and the second terminal differ, the special effect information set at the moment corresponding to a preset trigger point associated with the target video differs in display effect between the two terminals. It should be noted that fig. 5 and fig. 6 show the special effect information corresponding to preset trigger points as displayed by the second terminal.

In S403, since the second terminal has accessed the first terminal, and the special effect information played by the second terminal has a stronger sense of reality and brings the user a better immersive experience, the first terminal controls the second terminal to play the special effect information. Specifically, the first terminal responds to the detected preset trigger point by sending a control instruction to the second terminal; the control instruction carries the identifier of the target video, and each target video's identifier is unique. The embodiment of the present application does not limit the type of the identifier, which includes but is not limited to a Uniform Resource Locator (URL) of the target video, an ID of the target video, and a video encoding of the target video. The target video is associated in advance with each preset trigger point, and each preset trigger point corresponds to one piece of special effect information, as shown in table 1. Because the control instruction carries the identifier of the target video, the second terminal can acquire, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point, and render and play it. The special effect information is thereby superimposed on the target video content, which enhances the visual impact, brings the user an immersive visual experience, and elicits interesting limb responses.
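A minimal sketch of the control-instruction exchange just described, assuming a JSON payload. The field names are invented for illustration; the patent does not specify a wire format, only that the instruction carries the target video's identifier.

```python
import json

def build_control_instruction(video_id, trigger_point):
    """First-terminal side: pack the target video's identifier into the instruction."""
    return json.dumps({"video_id": video_id, "trigger_point": trigger_point})

def handle_control_instruction(message, fetch_effect):
    """Second-terminal side: use the carried identifier to fetch the effect to render."""
    payload = json.loads(message)
    return fetch_effect(payload["video_id"], payload["trigger_point"])
```

In this sketch `fetch_effect` stands in for the second terminal's request to the server; the key point is that the identifier, not the effect data itself, travels in the instruction, keeping the control message small.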

For example, when the preset trigger point 1001 is detected, the first terminal sends a control instruction carrying the identifier "1" to the second terminal. After receiving the control instruction, the second terminal acquires the special effect information A1 corresponding to the preset trigger point 1001 from the server and plays it, so that the current video frame is enhanced and an "immersive" experience is brought to the wearer, who produces a corresponding limb response according to the displayed special effect information A1. For example, when the special effect information A1 is a flame, the wearer may naturally dodge the flame when viewing it.

In some scenes with low requirements on special-effect playing time, when the first terminal detects a preset trigger point, the first terminal directly sends a control instruction to the second terminal at a first time corresponding to the preset trigger point.

For example, assume that a preset trigger point is set at the 10 th second on the target video playing time axis, and when the target video is played to the 10 th second, the first terminal detects the preset trigger point and directly sends a control instruction to the second terminal.

In some scenes with high requirements on special-effect playing time, time delay of signaling transmission and special-effect acquisition needs to be considered, when the first terminal detects a preset trigger point, a control instruction can be sent to the second terminal before the time corresponding to the preset trigger point, so that the second terminal has enough time to acquire special-effect information from the server after receiving the control instruction.

For example, assuming that a preset trigger point is set at the 10 th second on the target video playing time axis, in order to ensure that special effect information can be played when the target video is played to the 10 th second, the first terminal may send a control instruction to the second terminal at the 9.5 th second, and as for how long ahead, the setting may be performed according to actual conditions.

In a specific implementation, the first terminal determines a second moment according to the first moment corresponding to the preset trigger point, and sends the control instruction to the second terminal at the second moment, where the second moment is earlier than the first moment.

In an optional implementation manner, a first terminal acquires a current time corresponding to a currently played target video frame; and comparing the current moment with a first moment corresponding to a preset trigger point, and if the difference value between the current moment and the first moment is smaller than a preset threshold value, determining the current moment as a second moment by the first terminal.

For example, the first moment corresponding to the preset trigger point 1001 is T, and the current moment is t. If T − t < Δt, t is determined as the second moment; if T − t ≥ Δt, the target video continues to play, the current moment t+1 corresponding to the next target video frame is obtained, and if T − (t+1) < Δt, t+1 is determined as the second moment.
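The threshold comparison above can be sketched as follows; the frame timestamps and the threshold value in the test are illustrative assumptions.

```python
def second_moment_by_threshold(frame_times, trigger_moment, threshold):
    """Return the first frame moment within `threshold` of the trigger moment.

    frame_times: moments of successively played frames (seconds, ascending);
    trigger_moment: the first moment T of the preset trigger point;
    threshold: the preset threshold delta-t.
    """
    for t in frame_times:
        if 0 < trigger_moment - t < threshold:
            return t   # this frame's moment becomes the second moment
    return None        # no suitable moment found before the trigger
```

The loop mirrors the per-frame check in the example: each played frame's moment is compared with T until the difference first drops below the threshold.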

In another optional implementation manner, the first terminal determines the second time according to a first time corresponding to a preset trigger point and a preset delay playing time. Specifically, the difference between the first time and the delayed playing time is determined as the second time. Wherein, the delay playing time can be set according to practical experience. In the embodiment of the present application, the delay play time is measured to be 30 milliseconds or 60 milliseconds according to experimental data.

For example, if the first moment corresponding to the preset trigger point 1001 is T and the preset delay playing time is Δt, then T − Δt = T′ is determined as the second moment.
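The delay-based variant is simple arithmetic; a sketch (the delay values follow the 30 ms / 60 ms figures mentioned above, and the unit conversion is an assumption for illustration):

```python
def second_moment_by_delay(trigger_moment_s, delay_ms):
    """Second moment = trigger moment minus the preset delayed playing time."""
    return trigger_moment_s - delay_ms / 1000.0  # convert ms lead time to seconds
```

Unlike the threshold variant, this one needs no per-frame comparison: the second moment is fixed as soon as the trigger point's first moment is known.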

It should be noted that, the embodiment of the present application does not make a limiting requirement on the special effect playing time, and the playing time of the special effect information may not be consistent with the time corresponding to the preset trigger point associated with the target video.

For example, if the moment corresponding to the preset trigger point is the 10th second and the first terminal sends the control instruction to the second terminal at the 9.5th second, then when the network speed is high, the second terminal may acquire the corresponding special effect information at the 9.9th second, and the special effect information is played when the target video plays to the 9.9th second.

For another example, with the same trigger point at the 10th second and the control instruction sent at the 9.5th second, when the network speed is low, the second terminal may acquire the corresponding special effect information only at the 10.1th second, and the special effect information is then played when the target video plays to the 10.1th second.

S404: the first terminal responds to the detected preset trigger point and sends a special effect acquisition request to the server.

In this step, since the second terminal is detected as not accessed, the first terminal plays the special effect information set at the moment corresponding to the preset trigger point associated with the target video. Specifically, when the first terminal detects a preset trigger point, it sends a special effect acquisition request carrying the identifier of the target video to the server, and the server returns, according to the identifier, the special effect information set at the moment corresponding to the detected preset trigger point.

S405: and the first terminal receives and plays special effect information which is sent by the server and is set at the moment corresponding to the preset trigger point.

In the step, after receiving the special effect information returned by the server, the first terminal plays the target video and simultaneously plays the acquired special effect information.

For example, for a prize-guessing segment in the target video, prizes are presented according to different prize types. After receiving the special effect information corresponding to the first-prize video frame, the first terminal plays that special effect information while playing the corresponding video frame, as shown in (a) of fig. 7a; likewise, after receiving the special effect information corresponding to the second-prize video frame, the first terminal plays that special effect information while playing the corresponding video frame, as shown in (b) of fig. 7a.

In some embodiments, the wearer of the second terminal may switch the target video played by the first terminal through human-computer interaction. In a specific implementation, the wearer switches the target video through the touch screen or function keys and sends a target video switching request; after receiving the request, the first terminal acquires and plays a new target video from the server, and controls the playing of the corresponding special effect information based on the preset trigger points associated with the new target video.

In some embodiments, a wearer of the second terminal may control access states of the first terminal and the second terminal through a touch area or an on-off key of the second terminal, thereby implementing switching of the special effect information playing device.

For example, during playback of the target video by the first terminal, the second terminal is detected as not accessed at a first detection moment. When a first preset trigger point associated with the target video is detected, the first terminal acquires the special effect information corresponding to that trigger point from the server and plays it at the corresponding moment. Within a subsequent detection interval, the wearer turns on the second terminal and connects it to the first terminal via Bluetooth or WiFi through the touch area. After the first terminal detects at a second detection moment that the second terminal has accessed, it sends a control instruction to the accessed second terminal when a second preset trigger point is detected; the second terminal then acquires the special effect information corresponding to the second preset trigger point from the server according to the received control instruction, and plays it at the moment corresponding to the second preset trigger point.

For another example, during playback of the target video by the first terminal, the second terminal is detected as accessed at a first detection moment. When a first preset trigger point associated with the target video is detected, a control instruction carrying the identifier of the target video is sent to the second terminal; the second terminal acquires the special effect information corresponding to the first preset trigger point from the server according to the identifier, and plays it at the corresponding moment. During playback, the second terminal becomes disconnected from the first terminal because the network or Bluetooth is interrupted or the switch is turned off. After a preset time interval, the first terminal detects at a second detection moment that the second terminal is not accessed; when a second preset trigger point is detected, it sends a special effect information acquisition request to the server, and plays the acquired special effect information at the moment corresponding to the second preset trigger point.

In other embodiments, when the first terminal detects that the second terminal has accessed but the second terminal does not respond to the control instruction for some reason (e.g., an IP address conflict or a domain name resolution error), the first terminal presents prompt information to the wearer of the second terminal indicating that the second terminal has not responded. The prompt information includes the error type, so that a configurator can reconfigure the second terminal according to the error type and re-establish the communication connection with the first terminal. After the connection is established, the second terminal acquires and plays the corresponding special effect information, so that the wearer views more vivid and realistic special effects.

It should be noted that, the display manner of the prompt message in the embodiment of the present application is not limited, for example, the user may be prompted in a voice broadcast manner that the second terminal is disconnected from the first terminal, or the prompt message is displayed in a prompt box in a display page of the first terminal.

Optionally, in order not to obscure the target video played by the first terminal, a prompt message may be displayed in the upper left corner of the display page, as shown in fig. 7 b.

In the above embodiments of the present application, the wearer of the second terminal performs human-computer interaction with the first terminal. The first terminal plays the target video selected by the wearer and detects at set intervals whether the second terminal has accessed. When the second terminal is detected as accessed, a control instruction is sent to it so that it acquires and plays the special effect information corresponding to the preset trigger point; the target video and the special effect information are thus displayed independently by the two terminals and superimposed, which lowers the device performance requirements, reduces video stuttering, and improves the user experience. When the second terminal is detected as not accessed, the first terminal acquires and plays the special effect information corresponding to the preset trigger point, so that personalized special effects can still be played when the second terminal is disconnected.

Taking the first terminal as a television and the second terminal as AR glasses as an example, fig. 8 exemplarily shows an interaction diagram of the television and the AR glasses provided by the embodiment of the present application. As shown in fig. 8, the television is used to play a target video. When the AR glasses and the television are connected through the same WiFi, the television sends a control signal to the AR glasses, the AR glasses acquire special effect information according to the received control signal, and the special effect information is played at a preset trigger point associated with the target video, so that the special effect of the target video played by the television is enhanced.

The complete interaction process of the television and the AR glasses is shown in fig. 9, and the process mainly includes the following steps:

S901: The television responds to a target video playing request triggered by a user, and sends a target video acquisition request to the server.

S902-903: and the server sends the target video to the television according to the target video acquisition request.

S904: and the television plays the acquired target video.

S905: the television detects whether the AR glasses worn by the user are accessed according to the set time interval, and executes S906 when the AR glasses are detected to be accessed, and executes 910 when the AR glasses are detected not to be accessed.

S906: the television responds to the detected preset trigger point and sends a control instruction to the AR glasses, wherein the preset trigger point represents that a play special effect is arranged at the moment corresponding to the preset trigger point associated with the target video, and the control instruction carries the identification of the target video.

S907: the AR glasses send a special effect information acquisition request to the server according to the identification carried by the control instruction so as to acquire special effect information set at the moment corresponding to the preset trigger point associated with the target video.

S908: and the server sends the corresponding special effect information to the AR glasses according to the special effect information acquisition request.

S909: and the AR glasses receive and play special effect information which is sent by the server and is set at the moment corresponding to the preset trigger point.

S910: and the television responds to the detected preset trigger point and sends a special effect acquisition request to the server.

S911: and the server sends the corresponding special effect information to the television according to the special effect information acquisition request.

S912: and the television broadcast receiving server receives and plays the special effect information which is sent by the television broadcast receiving server and is set at the moment corresponding to the preset trigger point.

Embodiments of the present application also provide a computer-readable storage medium for storing instructions that, when executed, may implement the methods of the foregoing embodiments.

The embodiments of the present application also provide a computer program product for storing a computer program, where the computer program is used to execute the method of the foregoing embodiments.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
