High-definition 3D implementation method, device, equipment and storage medium based on mobile platform

Document No.: 1925571    Publication date: 2021-12-03

Note: This technology, "High-definition 3D implementation method, device, equipment and storage medium based on mobile platform", was designed and created by 唐永强 (Tang Yongqiang) on 2021-07-27. Its main content is as follows: The invention belongs to the technical field of 3D video and discloses a high-definition 3D implementation method, device, equipment and storage medium based on a mobile platform. The method comprises the following steps: acquiring a left video frame signal and a right video frame signal in the mobile processor platform according to a fusion instruction; performing fusion processing on the left video frame signal and the right video frame signal to obtain a fused video signal; obtaining a video synchronization signal according to the left video frame signal and the right video frame signal; and outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system. By this method, the function of generating 3D video on a mobile platform is realized: the left video frame signal and the right video frame signal in the mobile device are acquired, fused, and then sent to the display device for 3D video display, thereby solving the technical problem that the mobile platform cannot realize 3D video fusion.

1. A high-definition 3D implementation method based on a mobile platform is characterized by comprising the following steps:

when a fusion instruction is detected, acquiring a left video frame signal and a right video frame signal in a mobile processor platform according to the fusion instruction, wherein the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface;

performing fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal;

obtaining a video synchronization signal according to the left video frame signal and the right video frame signal;

and outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system.

2. The method of claim 1, wherein the fusion instruction is generated by the mobile processor platform according to a 3D video playback task when the 3D video playback task is detected.

3. The method of claim 1, wherein before the step of acquiring the left video frame signal and the right video frame signal in the mobile processor platform according to the fusion instruction when the fusion instruction is detected, the method further comprises:

when a pairing instruction is detected, a confirmation instruction is sent to a mobile processor platform according to the pairing instruction, so that the mobile processor platform determines a target mobile industry interface according to the confirmation instruction, and when a 3D fusion task is detected, a fusion instruction is fed back through the target mobile industry interface.

4. The method according to claim 1, wherein said fusing the left video frame signal and the right video frame signal to obtain a fused video signal comprises:

acquiring video mark information according to the left video frame signal and the right video frame signal;

arranging the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue;

and obtaining a fusion video signal according to the video frame queue.

5. The method of claim 4, wherein said arranging said left and right video frame signals according to said video mark information to obtain a video frame queue comprises:

and sequentially and alternately arranging the left video frame signals and the right video frame signals according to the video mark information to obtain a video frame queue.

6. The method of claim 1, wherein deriving a video synchronization signal from the left and right video frame signals comprises:

alternately marking the left video frame signal and the right video frame signal to obtain a time sequence mark;

and generating a video synchronization signal according to the timing mark.

7. The method of any one of claims 1-6, wherein said outputting the fused video signal and the video synchronization signal to a display system comprises:

obtaining the refresh frequency of the left video frame signal and the right video frame signal;

obtaining an output signal refresh frequency according to the refresh frequency and a preset adjustment coefficient;

outputting the fused video signal to a mobile platform through a display interface according to the output signal refresh frequency;

and outputting the video synchronization signal to the mobile platform.

8. A high-definition 3D implementation apparatus based on a mobile platform, characterized in that the high-definition 3D implementation apparatus based on a mobile platform comprises:

an acquisition module, used for acquiring a left video frame signal and a right video frame signal in a mobile processor platform according to a fusion instruction when the fusion instruction is detected, wherein the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface;

the fusion processing module is used for carrying out fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal;

the fusion processing module is further used for obtaining a video synchronization signal according to the left video frame signal and the right video frame signal;

and the control module is used for outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system.

9. A high definition 3D implementation device based on a mobile platform, the device comprising: a memory, a processor, and a mobile platform based high definition 3D implementation program stored on the memory and executable on the processor, the mobile platform based high definition 3D implementation program configured to implement the mobile platform based high definition 3D implementation method of any one of claims 1 to 7.

10. A storage medium, having a mobile platform-based high definition 3D implementation program stored thereon, wherein the mobile platform-based high definition 3D implementation program, when executed by a processor, implements the mobile platform-based high definition 3D implementation method according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of 3D videos, in particular to a high-definition 3D realization method, device, equipment and storage medium based on a mobile platform.

Background

With the continuous development of graphics technology, the demand for watching 3D video is increasing day by day. At present, immersive 3D display systems based on mobile platforms already exist in the form of head-mounted devices, the means of realizing 3D vision are quite varied, and display technology for realizing 3D video on mobile devices has become a popular field. The application requirements for viewing 3D on flat panels are extensive, and the requirements for stereoscopic recognition and virtual simulation are particularly strong.

At present, the mobile chip architecture designs of mainstream mobile processor platforms such as Qualcomm, HiSilicon and MediaTek do not specifically support 3D display rendering in a high-frame-rate frame-sequential format, so realizing full-high-definition 3D display comparable to that of a PC (personal computer) on such processor platforms is a problem to be solved urgently.

The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.

Disclosure of Invention

The invention mainly aims to provide a high-definition 3D implementation method, device, equipment and storage medium based on a mobile platform, and aims to solve the technical problem that 3D video rendering cannot be realized on a mobile processor in the prior art.

In order to achieve the above object, the present invention provides a high definition 3D implementation method based on a mobile platform, the method comprising the following steps:

when a fusion instruction is detected, acquiring a left video frame signal and a right video frame signal in a mobile processor platform according to the fusion instruction, wherein the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface;

performing fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal;

obtaining a video synchronization signal according to the left video frame signal and the right video frame signal;

and outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system.

Optionally, the fusion instruction is generated according to the 3D video playing task when the 3D video playing task is detected by the mobile processor platform.

Optionally, before the step of acquiring the left video frame signal and the right video frame signal in the mobile processor platform according to the fusion instruction when the fusion instruction is detected, the method further includes:

when a pairing instruction is detected, sending a confirmation instruction to a mobile processor platform according to the pairing instruction so that the mobile processor platform determines a target mobile industry interface according to the confirmation instruction, and when a 3D fusion task is detected, feeding back a fusion instruction through the target mobile industry interface.

optionally, the fusing the left video frame signal and the right video frame signal to obtain a fused video signal includes:

acquiring video mark information according to the left video frame signal and the right video frame signal;

arranging the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue;

and obtaining a fusion video signal according to the video frame queue.

Optionally, the arranging the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue includes:

and sequentially and alternately arranging the left video frame signals and the right video frame signals according to the video mark information to obtain a video frame queue.

Optionally, the obtaining a video synchronization signal according to the left video frame signal and the right video frame signal includes:

alternately marking the left video frame signal and the right video frame signal to obtain a time sequence mark;

and generating a video synchronization signal according to the timing mark.

Optionally, the outputting the fused video signal and the video synchronization signal to a display system includes:

obtaining the refresh frequency of the left video frame signal and the right video frame signal;

obtaining an output signal refresh frequency according to the refresh frequency and a preset adjustment coefficient;

outputting the fused video signal to a mobile platform through a display interface according to the output signal refresh frequency;

and outputting the video synchronization signal to the mobile platform.

In addition, in order to achieve the above object, the present invention further provides a high definition 3D implementation apparatus based on a mobile platform, where the high definition 3D implementation apparatus based on the mobile platform includes:

an acquisition module, used for acquiring a left video frame signal and a right video frame signal in a mobile processor platform according to a fusion instruction when the fusion instruction is detected, wherein the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface;

the fusion processing module is used for carrying out fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal;

the fusion processing module is further used for obtaining a video synchronization signal according to the left video frame signal and the right video frame signal;

and the control module is used for outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system.

In addition, in order to achieve the above object, the present invention further provides a mobile platform-based high definition 3D implementation device, where the mobile platform-based high definition 3D implementation device includes: a memory, a processor, and a mobile platform-based high-definition 3D implementation program stored on the memory and executable on the processor, wherein the mobile platform-based high-definition 3D implementation program is configured to implement the steps of the mobile platform-based high-definition 3D implementation method described above.

In addition, in order to achieve the above object, the present invention further provides a storage medium, where a mobile platform-based high definition 3D implementation program is stored, and when executed by a processor, the mobile platform-based high definition 3D implementation program implements the steps of the mobile platform-based high definition 3D implementation method as described above.

In the present invention, when a fusion instruction is detected, a left video frame signal and a right video frame signal in a mobile processor platform are acquired according to the fusion instruction, where the left video frame signal and the right video frame signal are obtained based on a mobile industry processor interface; fusion processing is performed on the left video frame signal and the right video frame signal to obtain a fused video signal; a video synchronization signal is obtained according to the left video frame signal and the right video frame signal; and the fused video signal and the video synchronization signal are output to a display system so as to complete 3D video display through the display system. By this method, the function of generating 3D video on the mobile platform is realized: the left video frame signal and the right video frame signal in the mobile device are acquired, fused, and then sent to the display device for 3D video display, thereby solving the technical problem that the mobile platform cannot realize 3D video fusion.

Drawings

Fig. 1 is a schematic structural diagram of a mobile platform-based high-definition 3D implementation device in a hardware operating environment according to an embodiment of the present invention;

fig. 2 is a schematic flow chart of a first embodiment of a mobile platform-based high-definition 3D implementation method according to the present invention;

fig. 3 is a schematic view of an interaction structure of an embodiment of a mobile platform-based high-definition 3D implementation method of the present invention;

FIG. 4 is a schematic diagram of a fusion flow of an embodiment of a mobile platform-based high-definition 3D implementation method of the present invention;

fig. 5 is a schematic flow chart of a second embodiment of the mobile platform-based high definition 3D implementation method of the present invention;

fig. 6 is a block diagram of a high definition 3D implementation apparatus based on a mobile platform according to a first embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile platform-based high-definition 3D implementation device in a hardware operating environment according to an embodiment of the present invention.

As shown in fig. 1, the mobile platform-based high definition 3D implementation device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a Display screen (Display) and an input unit such as a Keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM), or may be a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.

Those skilled in the art will appreciate that the architecture shown in fig. 1 does not constitute a limitation of the mobile platform-based high definition 3D implementation device, which may include more or fewer components than shown, or a combination of some components, or a different arrangement of components.

As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a mobile platform-based high definition 3D implementation program.

In the mobile platform-based high definition 3D implementation device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 may be provided in the mobile platform-based high-definition 3D implementation device, which calls, through the processor 1001, the mobile platform-based high-definition 3D implementation program stored in the memory 1005 and executes the mobile platform-based high-definition 3D implementation method provided by the embodiment of the present invention.

An embodiment of the present invention provides a high definition 3D implementation method based on a mobile platform, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a high definition 3D implementation method based on a mobile platform according to the present invention.

In this embodiment, the mobile platform-based high-definition 3D implementation method includes the following steps:

step S10: and when a fusion instruction is detected, acquiring a left video frame signal and a right video frame signal in the mobile processor platform according to the fusion instruction, wherein the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface.

It should be noted that the execution subject of this embodiment is a 3D video fusion device, and the 3D video fusion device may be a 3D video fusion module embedded in a mobile device, or an external 3D video fusion auxiliary device, or other devices with the same or similar functions, which is not limited in this embodiment.

It should be noted that the mobile platform may be a mobile phone, a tablet computer, or another similar or identical operating environment; this embodiment does not limit the form of the mobile platform. Further, the mobile processor platform is a processor-based hardware environment, and this embodiment is applied to a mobile processor platform that cannot perform 3D video fusion.

In a specific implementation, the mobile industry processor interface is an MIPI interface, and may also be replaced by other data transmission interfaces with the same or similar functions, which is not limited in this embodiment.

It can be understood that this embodiment is applied to a 3D video fusion process. The fusion of 3D video means fusing left video frame data and right video frame data, based on the principle of realizing 3D vision by the time-division method, into video data in which the two channels of images alternate. Since the chip architectures of the mobile devices currently in users' hands basically cannot realize this fusion processing of 3D video, this embodiment uses a 3D video fusion device to assist the mobile platform, without changing the internal structure of the mobile processor platform, in fusing the left video frame signal and the right video frame signal to obtain fused 3D video, so that the user can finally watch 3D video on the mobile platform or through the 3D video data output by the mobile platform. Existing mobile processor platforms already support two MIPI (mobile industry processor interface) display channels driving two independent left-eye and right-eye screens for VR (virtual reality) helmets, and the rendering in the graphics card of the mobile platform keeps the two channels of left and right signals synchronously output. Building on this existing capability of the platform, the two MIPI signals are fused to finally obtain 3D video data. For example: the data frames of the two 60 Hz channels are buffered and then strictly and alternately fused in sequence, the result is output as a 120 Hz video signal through a display interface such as HDMI or a DP video interface, and the left/right synchronization signals are simultaneously output to a subsequent synchronization processing unit, which controls the relevant equipment for watching 3D video so as to meet the control conditions required for viewing.
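To make the fusion flow above concrete, the following Python sketch models the fusion processing unit in software: two buffered 60 Hz frame streams (standing in for the DDR caches fed by MIPI1 and MIPI2) are interleaved strictly left, right, left, right into a 120 Hz output sequence, with a per-frame left/right flag playing the role of the synchronization output. This is only an illustrative model under those assumptions; the class and method names (FusionSketch, push_left, pop_fused_pair) are hypothetical and do not come from the patent, and the real unit is a hardware module driving HDMI/DP, not Python code.

```python
from collections import deque

class FusionSketch:
    """Toy software model of the fusion processing unit: two 60 Hz frame
    streams are buffered and interleaved L, R, L, R, ... into one 120 Hz
    output sequence (hypothetical names, illustrative only)."""

    def __init__(self):
        self.left_buffer = deque()   # stands in for the DDR cache fed by MIPI1 (L_Frame)
        self.right_buffer = deque()  # stands in for the DDR cache fed by MIPI2 (R_Frame)

    def push_left(self, frame):
        self.left_buffer.append(frame)

    def push_right(self, frame):
        self.right_buffer.append(frame)

    def pop_fused_pair(self):
        """Return the next two slots of the 120 Hz fused stream as
        (frame, is_left) tuples, or None if a full L/R pair is not buffered."""
        if not self.left_buffer or not self.right_buffer:
            return None
        return [(self.left_buffer.popleft(), True),
                (self.right_buffer.popleft(), False)]

# Example: two 60 Hz streams of three frames each fuse into a 120 Hz sequence.
unit = FusionSketch()
for i in range(3):
    unit.push_left(f"L{i}")
    unit.push_right(f"R{i}")

fused = []
while (pair := unit.pop_fused_pair()) is not None:
    fused.extend(pair)
print(fused)  # [('L0', True), ('R0', False), ('L1', True), ('R1', False), ...]
```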

The principle of realizing 3D by the time-division method is as follows: in alternating (shutter-type) time-division 3D technology, 3D glasses suited to this method switch their lenses to an opaque black state at high speed so as to alternately block the viewer's left and right eyes, allowing the two eyes to see two pictures from slightly different angles. Put simply, the graphics card computes two different pictures for each frame when rendering (for film content, this is achieved with left and right cameras); the two pictures are shown on the display, and the left eye and right eye then see their respective pictures through the 3D glasses, creating the illusion that a three-dimensional object is being seen, which is the basis of this stereoscopic imaging technology. For example: the right lens is blacked out when the screen plays a left-eye picture, and the left lens is blacked out when the screen plays a right-eye picture; switching at high speed creates the illusion that both pictures are seen simultaneously, while the left eye never actually sees the right-eye picture. The fast switching thus ensures that the images seen by the left and right eyes are images with a slight difference. It is therefore necessary to use a video stream in which the left-eye picture and the right-eye picture appear alternately. On the basis of realizing 3D by the time-division method, the video data of the left-eye picture and the right-eye picture are fused, and the 3D video is finally realized.
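As a minimal illustration of this time-division principle, the Python sketch below lists, for each output slot of a frame-sequential stream, which eye's picture is shown and which lens of the shutter glasses stays open (the other lens being blacked out). The function name and dictionary layout are hypothetical; this merely restates the alternation described above and is not part of the patented method itself.

```python
def shutter_schedule(n_pairs):
    """For each slot of a frame-sequential (time-division) stream, report
    which eye's picture is displayed and which shutter lens stays open;
    the opposite lens is switched to opaque black for that slot."""
    schedule = []
    for slot in range(2 * n_pairs):
        eye = "left" if slot % 2 == 0 else "right"
        schedule.append({"slot": slot, "picture": eye, "open_shutter": eye})
    return schedule

for entry in shutter_schedule(2):
    print(entry)
# slot 0: left picture, left lens open; slot 1: right picture, right lens open; ...
```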

In this embodiment, the fusion instruction is generated according to the 3D video playing task when the 3D video playing task is detected by the mobile processor platform.

Further, the fusion instruction means that, when the mobile platform has a 3D video fusion demand, fusion of the left video frame signal and the right video frame signal is started according to the fusion instruction, where the left video frame signal and the right video frame signal are the image information to be viewed by the left eye and the image information to be viewed by the right eye in the 3D video. As shown in fig. 3, MIPI1 and MIPI2 are a first MIPI interface and a second MIPI interface, used respectively to obtain the left video frame signal L_Frame and the right video frame signal R_Frame; the fusion processing unit is the functional module that implements 3D video fusion in the 3D video fusion device; the display unit is the control module that controls 3D video display; the synchronization processing unit is the control module that controls the optical paths in the stereoscopic control device, that is, the module that turns the left-eye and right-eye optical paths on and off; L/R_Sync is the control signal interface that outputs the control for the left and right image optical paths, and it is finally transmitted to the stereoscopic control device to control the optical paths through which the left and right eyes view the 3D video images. (The DP interface is not described further here.)

In this embodiment, before the step of acquiring the left video frame signal and the right video frame signal in the mobile processor platform according to the fusion instruction when the fusion instruction is detected, the method further includes: when a pairing instruction is detected, sending a confirmation instruction to the mobile processor platform according to the pairing instruction, so that the mobile processor platform determines a target mobile industry interface according to the confirmation instruction and, when a 3D fusion task is detected, feeds back a fusion instruction through the target mobile industry interface. This is because an APP or driver software installed on the mobile platform is required so that the mobile processor platform knows the transmission route of the video signal and finally sends the video signal to the 3D video fusion device, enabling video data interaction through the MIPI interface or another data transmission interface.
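Purely as an illustration of this pairing exchange, the sketch below models the fusion device's side as a small message handler: a pairing instruction is answered with a confirmation (from which the platform can determine the target MIPI interface), and a later fusion instruction, accepted only after pairing, triggers acquisition of the left and right video frame signals. The message types (PAIR, CONFIRM, FUSE, ACK) and field names are hypothetical stand-ins; the patent does not define a concrete message format.

```python
def handle_platform_message(message, state):
    """Toy state machine for the pairing / fusion-instruction exchange between
    the 3D video fusion device and the mobile processor platform.
    Message types and fields are hypothetical stand-ins."""
    if message["type"] == "PAIR":
        # Reply with a confirmation instruction; the platform uses it to
        # determine the target mobile industry (MIPI) interface.
        state["paired"] = True
        return {"type": "CONFIRM", "available_interfaces": ["MIPI1", "MIPI2"]}
    if message["type"] == "FUSE" and state.get("paired"):
        # A fusion instruction fed back on the agreed interface starts
        # acquisition of the left and right video frame signals.
        state["fusing"] = True
        return {"type": "ACK", "action": "start_left_right_capture"}
    return {"type": "IGNORED"}

state = {}
print(handle_platform_message({"type": "PAIR"}, state))  # -> CONFIRM
print(handle_platform_message({"type": "FUSE"}, state))  # -> ACK, start capture
```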

Step S20: and carrying out fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal.

It should be noted that, in the frame-sequential (switched) video fusion implementation, the left video frame signal and the right video frame signal are alternately fused into one video frame signal according to frame number, so that left and right frame images appear alternately; the video synchronization signal then controls whether the image viewed by the human eye at any moment is an image from the left video frame signal or from the right video frame signal. The present embodiment prefers the frame-sequential implementation because, with other implementations, each eye can only see half of the image, so their picture quality is not as good as that of the frame-sequential implementation.

Step S30: and obtaining a video synchronization signal according to the left video frame signal and the right video frame signal.

It will be appreciated that the video synchronization signal is used to control the stereoscopic control device to ensure that each eye can only view the picture intended for that eye. For example: the lenses on the two sides are alternately switched to opaque black to block the viewer's left and right eyes in turn, so that the two eyes see two pictures from different angles. Put simply, the graphics card computes two different pictures for each frame when rendering (for film content, this is achieved with left and right cameras); the two pictures are shown on the display, and the left and right eyes then see their respective pictures through the 3D glasses, creating the illusion that a three-dimensional object is being seen. The stereoscopic control device may be 3D glasses, or may be a light valve for controlling an optical path, which is not limited in this embodiment.

In this embodiment, the left video frame signal and the right video frame signal are alternately marked to obtain a timing mark; and generating a video synchronization signal according to the timing mark.

It should be noted that this embodiment proposes a preferred scheme for timing marking. As shown in fig. 4, the data from MIPI1 is cached in the DDR, and then the data from MIPI2 is cached in the DDR. When output for display is needed, the fusion processing unit outputs the video frame data buffered in the DDR strictly at a refresh rate of 120 Hz through a display communication port, relying on packetized data transmission; the fused 3D video data is output for display through a first HDMI (high-definition multimedia interface) or DP (display port), and at the same time a video frame synchronization signal with a 60 Hz square waveform is output according to the previous timing marks. For the timing marks, the first byte of the first left video frame data may be marked 01 and cached in the DDR, the first byte of the first right video frame data marked 02, that of the second left video frame data 03, that of the second right video frame data 04, and so on, alternately marking the left video frame signal and the right video frame signal. When the fused 3D video data is output, the synchronization line is pulled up on odd timing marks and pulled down on even timing marks, forming a timing control pulse that correspondingly controls the stereoscopic control device.
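The timing-mark scheme can be summarized with the short sketch below: frames are marked 01, 02, 03, 04, ... in the order they are cached (odd marks for left frames, even marks for right frames), and the L/R_Sync level is derived from the parity of the mark, which at a 120 Hz output frame rate yields a 60 Hz square wave. Mapping odd marks to a high level and even marks to a low level is an assumption made for illustration, and the function names are hypothetical.

```python
def assign_timing_marks(num_pairs):
    """Mark frames in the order they are cached in DDR: left frame 1 -> 0x01,
    right frame 1 -> 0x02, left frame 2 -> 0x03, right frame 2 -> 0x04, ..."""
    marks = []
    for pair in range(num_pairs):
        marks.append(("L", 2 * pair + 1))  # odd mark for a left frame
        marks.append(("R", 2 * pair + 2))  # even mark for a right frame
    return marks

def sync_level(mark):
    """Derive the L/R_Sync level from a timing mark's parity (assumed mapping:
    high on odd/left marks, low on even/right marks); at a 120 Hz frame rate
    this alternation produces a 60 Hz square wave."""
    return 1 if mark % 2 == 1 else 0

for eye, mark in assign_timing_marks(2):
    print(eye, f"mark=0x{mark:02X}", "sync_level =", sync_level(mark))
```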

Step S40: and outputting the fused video signal and the video synchronization signal to a display system so as to complete 3D video display through the display system.

It should be noted that, since the present embodiment is applied to viewing 3D video, the display system includes control modules for viewing 3D video, including a display board card and other stereoscopic control devices. For example: the control device that alternately switches the two viewing fields may be special glasses or a light valve, which needs to be determined according to the actual viewing scene; this embodiment does not limit other viewing control devices.

In the present embodiment, the refresh frequency of the left video frame signal and the right video frame signal is acquired; obtaining the refreshing frequency of an output signal according to the refreshing frequency and a preset adjusting coefficient; outputting the fused video signal to a mobile platform through a display interface according to the output signal refreshing frequency; and outputting the video synchronization signal to a mobile platform.

It can be understood that, since the refresh rate of the two received video signals and the refresh rate of the output video signal may differ, the refresh rate of the output fused video signal needs to be determined from the refresh frequency of the left video frame signal and the right video frame signal together with a preset refresh rate adjustment coefficient. For example: if the refresh rate of the left and right video frame signals is 60 Hz and the refresh rate after fusion were still 60 Hz, the video watched by the viewer would turn into slow motion, because the number of video frame images is doubled within the same video time span; therefore the refresh rate at output needs to be adjusted. Generally, the preset adjustment coefficient is 2, that is, the fused video is output at 120 Hz and the viewer watches it at normal speed; however, the viewer may have other playback speed requirements, so the refresh rate can also be adjusted according to the preset refresh rate adjustment coefficient.
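The refresh-rate relationship described above amounts to a single multiplication, sketched below. The function name is hypothetical; the default coefficient of 2 matches the 60 Hz input / 120 Hz output example in the text.

```python
def output_refresh_rate(input_rate_hz, adjustment_coefficient=2.0):
    """Output refresh rate = per-eye input refresh rate x preset adjustment
    coefficient. With 60 Hz left/right inputs and the usual coefficient of 2,
    the fused stream is output at 120 Hz and playback speed is unchanged."""
    return input_rate_hz * adjustment_coefficient

print(output_refresh_rate(60))       # 120.0 -> normal-speed playback
print(output_refresh_rate(60, 1.0))  # 60.0  -> doubled frame count at the same rate, i.e. slow motion
```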

In a specific implementation, to realize lossless full-high-definition 3D display, the basic requirement according to the current PC 3D display implementation method is that the display video stream is output to the display panel strictly and alternately at a frame rate of about 120 Hz. This embodiment therefore focuses on meeting this requirement on a mobile platform by using a 3D video fusion device. A preferred fusion scheme in this embodiment is, for example: the graphics card of the mobile processing platform synchronously outputs the left and right rendered textures to the two MIPI display interfaces respectively; after the two MIPI channels of the fusion processing unit receive and buffer the 60 Hz video streams, the buffered data frames of the two channels are strictly and alternately fused in sequence and output as a 120 Hz video signal through a display interface such as HDMI or DP.

In this embodiment, when a fusion instruction is detected, a left video frame signal and a right video frame signal in the mobile processor platform are acquired according to the fusion instruction, where the left video frame signal and the right video frame signal are obtained based on a mobile industry processor interface; fusion processing is performed on the left video frame signal and the right video frame signal to obtain a fused video signal; a video synchronization signal is obtained according to the left video frame signal and the right video frame signal; and the fused video signal and the video synchronization signal are output to the display system so as to complete 3D video display through the display system. By this method, the function of generating 3D video on the mobile platform is realized: the left video frame signal and the right video frame signal in the mobile device are acquired, fused, and then sent to the display device for 3D video display, thereby solving the technical problem that the mobile platform cannot realize 3D video fusion.

Referring to fig. 5, fig. 5 is a flowchart illustrating a second embodiment of a mobile platform-based high definition 3D implementation method according to the present invention.

Based on the first embodiment, in step S20, the method for implementing high definition 3D based on a mobile platform in this embodiment specifically includes:

step S21: and acquiring video mark information according to the left video frame signal and the right video frame signal.

It should be noted that the video mark information is information used to distinguish whether a frame belongs to the left video frame signal or the right video frame signal, and is also used to determine the order of each frame image within its own video frame signal, for example: the first frame image of the left video frame signal, or the second frame image of the right video frame signal. The video mark information may be carried by the left video frame signal and the right video frame signal themselves, or may be marked according to the interface number and the left/right video frame information.

Step S22: and arranging the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue.

It should be noted that the frame images of the left video frame signal and the right video frame signal may be sorted into their output order according to the video mark information, stored in the buffer, and played in order when playback is required.

In this embodiment, the left video frame signal and the right video frame signal are sequentially and alternately arranged according to the video mark information, so as to obtain a video frame queue.

In a specific implementation, the alternating arrangement may specifically mean that each frame image is sorted according to the video mark information and stored in a cache. For example: when the control signal is output, frames with odd video mark information cause the left-eye viewing light path to be opened and the right-eye viewing light path to be closed according to the video playback rule, while frames with even video mark information cause the right-eye viewing light path to be opened and the left-eye viewing light path to be closed. The video mark information may use the same mark signal as the timing mark in the first embodiment, or may be marked separately, which is not limited in this embodiment.

It should be noted that the video frame queue is the video frame image sequence obtained according to the video mark information.

Step S23: and obtaining a fusion video signal according to the video frame queue.

It should be noted that, according to the video frame queue, each frame image can be fused into a new video signal in the order of the video frame queue; this new video is the fused video signal, that is, in the video played according to the fused video signal, the left-eye image and the right-eye image appear strictly alternately.
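A minimal sketch of steps S21 to S23 is given below, assuming the video mark information reduces to an (eye, sequence number) pair per frame: frames are sorted by their per-eye sequence number and then interleaved left, right, left, right to form the video frame queue, which, played in order, is the fused video signal. The dictionary layout and function name are illustrative assumptions, not a format defined by the patent.

```python
def build_frame_queue(left_frames, right_frames):
    """Arrange marked left/right frames into a video frame queue: sort each
    side by the sequence number in its mark, then interleave L, R, L, R, ...
    Each frame is a dict with a 'mark' of the form (eye, sequence_number)."""
    left_sorted = sorted(left_frames, key=lambda f: f["mark"][1])
    right_sorted = sorted(right_frames, key=lambda f: f["mark"][1])
    queue = []
    for left, right in zip(left_sorted, right_sorted):
        queue.append(left)   # odd position: left-eye frame
        queue.append(right)  # even position: right-eye frame
    return queue

left = [{"mark": ("L", 2), "data": "L2"}, {"mark": ("L", 1), "data": "L1"}]
right = [{"mark": ("R", 1), "data": "R1"}, {"mark": ("R", 2), "data": "R2"}]
print([f["data"] for f in build_frame_queue(left, right)])
# ['L1', 'R1', 'L2', 'R2'] -- played in this order, it is the fused video signal
```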

In this embodiment, video mark information is obtained according to the left video frame signal and the right video frame signal; the left video frame signal and the right video frame signal are arranged according to the video mark information to obtain a video frame queue; and the fused video signal is obtained according to the video frame queue. In this way, the fusion of the left video frame signal and the right video frame signal is realized, providing a foundation for 3D video viewing; ordering the video frames through the video mark information also optimizes the video fusion process and improves video fusion efficiency.

In addition, an embodiment of the present invention further provides a storage medium, where a mobile platform-based high-definition 3D implementation program is stored on the storage medium, and when executed by a processor, the mobile platform-based high-definition 3D implementation program implements the steps of the mobile platform-based high-definition 3D implementation method described above.

Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.

Referring to fig. 6, fig. 6 is a block diagram illustrating a first embodiment of a mobile platform-based high definition 3D implementation apparatus according to the present invention.

As shown in fig. 6, the mobile platform-based high definition 3D implementation apparatus according to an embodiment of the present invention includes:

the acquiring module 10 is configured to acquire a left video frame signal and a right video frame signal in the mobile processor platform according to the fusion instruction when the fusion instruction is detected, where the left video frame signal and the right video frame signal are acquired based on a mobile industry processor interface.

And a fusion processing module 20, configured to perform fusion processing on the left video frame signal and the right video frame signal to obtain a fusion video signal.

The fusion processing module 20 is further configured to obtain a video synchronization signal according to the left video frame signal and the right video frame signal.

And the control module 30 is configured to output the fused video signal and the video synchronization signal to a display system, so as to complete 3D video display through the display system.

It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.

In this embodiment, when detecting a fusion instruction, the acquisition module 10 acquires a left video frame signal and a right video frame signal in the mobile processor platform according to the fusion instruction, where the left video frame signal and the right video frame signal are obtained based on a mobile industry processor interface; the fusion processing module 20 performs fusion processing on the left video frame signal and the right video frame signal to obtain a fused video signal; the fusion processing module 20 obtains a video synchronization signal according to the left video frame signal and the right video frame signal; and the control module 30 outputs the fused video signal and the video synchronization signal to the display system to complete 3D video display through the display system. By this method, the function of generating 3D video on the mobile platform is realized: the left video frame signal and the right video frame signal in the mobile device are acquired, fused, and then sent to the display device for 3D video display, thereby solving the technical problem that the mobile platform cannot realize 3D video fusion.

In an embodiment, the acquisition module 10 is further configured to send a confirmation instruction to the mobile processor platform according to the pairing instruction when the pairing instruction is detected, so that the mobile processor platform determines a target mobile industry interface according to the confirmation instruction, and feeds back a fusion instruction through the target mobile industry interface when the 3D fusion task is detected.

in an embodiment, the fusion processing module 20 is further configured to obtain video tag information according to the left video frame signal and the right video frame signal;

arranging the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue;

and obtaining a fusion video signal according to the video frame queue.

In an embodiment, the fusion processing module 20 is further configured to sequentially and alternately arrange the left video frame signal and the right video frame signal according to the video mark information to obtain a video frame queue.

In an embodiment, the fusion processing module 20 is further configured to alternately mark the left video frame signal and the right video frame signal to obtain a timing mark;

and generate a video synchronization signal according to the timing mark.

In an embodiment, the control module 30 is further configured to obtain the refresh frequency of the left video frame signal and the right video frame signal;

obtain an output signal refresh frequency according to the refresh frequency and a preset adjustment coefficient;

output the fused video signal to a mobile platform through a display interface according to the output signal refresh frequency;

and output the video synchronization signal to the mobile platform.

It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.

In addition, the technical details that are not described in detail in this embodiment may refer to the high definition 3D method based on the mobile platform provided in any embodiment of the present invention, and are not described herein again.

Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.

The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
