Method and system for realizing multi-terminal networking synchronization and cloud server

Document No.: 1784837    Publication date: 2019-12-06

Reading note: This technology, "Method and system for realizing multi-terminal networking synchronization and cloud server", was designed and created by Wu Juan, Pang Tao, and Chen Xueliang on 2018-05-28. Abstract: The disclosure provides a method, a system, and a cloud server for realizing multi-terminal networking synchronization, relating to the field of augmented reality. The method comprises: acquiring a live-action video and live-action interface logic uploaded by a leading terminal, wherein the leading terminal generates the live-action interface logic from the live-action video based on a local AR technology; generating corresponding virtual content logic based on the live-action interface logic; sending the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the AR technology; and synchronizing the virtual content logic, the live-action interface logic, and the live-action video to a participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal. The disclosure enables multi-terminal networked interaction.

1. A method for realizing multi-terminal networking synchronization, comprising the following steps:

acquiring a live-action video and live-action interface logic uploaded by a leading terminal, wherein the leading terminal generates the live-action interface logic from the live-action video based on a local Augmented Reality (AR) technology;

generating corresponding virtual content logic based on the live-action interface logic;

sending the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the AR technology; and

synchronizing the virtual content logic, the live-action interface logic, and the live-action video to a participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

2. The method for realizing multi-terminal networking synchronization of claim 1, further comprising:

in response to the leading terminal completing a related content manipulation, synchronizing the updated live-action interface logic and the virtual content logic caused by the content manipulation to the participating terminal; or

in response to the participating terminal completing a related content manipulation, synchronizing the virtual content logic caused by the content manipulation to the leading terminal.

3. The method for realizing multi-terminal networking synchronization according to claim 1 or 2, wherein the virtual content and the live-action video are fused and output as AR game content.

4. The method for realizing multi-terminal networking synchronization of claim 3, wherein the virtual content logic comprises coordinate and attribute information of virtual objects;

wherein generating the corresponding virtual content logic based on the live-action interface logic comprises:

establishing virtual objects in the live-action interface logic according to game rules, and determining the coordinate information and individual attributes of each virtual object and the association attributes among related virtual objects.

5. A cloud server, comprising:

a data acquisition unit configured to acquire a live-action video and live-action interface logic uploaded by a leading terminal, wherein the leading terminal generates the live-action interface logic from the live-action video based on a local Augmented Reality (AR) technology;

a logic generation unit configured to generate corresponding virtual content logic based on the live-action interface logic; and

a data synchronization unit configured to send the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic, fuses the virtual content with the live-action video based on the AR technology, and uploads the fusion state; and to synchronize the virtual content logic, the live-action interface logic, and the live-action video to a participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

6. The cloud server of claim 5, wherein:

the data acquisition unit is further configured to update the live-action interface logic in response to the leading terminal completing a related content manipulation;

the data synchronization unit is further configured to synchronize the updated live-action interface logic and the virtual content logic caused by the content manipulation to the participating terminal; or

the data acquisition unit is further configured to respond to the participating terminal completing a related content manipulation; and

the data synchronization unit is further configured to synchronize the virtual content logic caused by the content manipulation to the leading terminal.

7. The cloud server of claim 5 or 6, wherein the virtual content and the live-action video are fused and output as AR game content.

8. The cloud server of claim 7, wherein the virtual content logic comprises coordinate and attribute information of virtual objects;

wherein the logic generation unit is further configured to establish virtual objects in the live-action interface logic according to game rules, and to determine the coordinate information and individual attributes of each virtual object and the association attributes among related virtual objects.

9. A cloud server, comprising:

a memory; and

a processor coupled to the memory, the processor configured to perform the method for realizing multi-terminal networking synchronization of any one of claims 1 to 4 based on instructions stored in the memory.

10. A system for realizing multi-terminal networking synchronization, comprising a leading terminal, a participating terminal, and the cloud server of any one of claims 5 to 9;

wherein the leading terminal is configured to acquire a live-action video, generate the live-action interface logic from the live-action video based on an AR technology, send the live-action video and the live-action interface logic to the cloud server, receive the virtual content logic sent by the cloud server, generate virtual content according to the virtual content logic, and fuse the virtual content with the live-action video based on the AR technology;

and the participating terminal is configured to receive the virtual content logic, the live-action interface logic, and the live-action video sent by the cloud server, generate the virtual content according to the virtual content logic, and fuse the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

11. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method for realizing multi-terminal networking synchronization of any one of claims 1 to 4.

Technical Field

The present disclosure relates to the field of augmented reality, and in particular to a method, a system, and a cloud server for realizing multi-terminal networking synchronization.

Background

AR (Augmented Reality) organically combines virtual objects with the real world by means of technologies such as computer vision and AI (Artificial Intelligence), and improves the perception effect through a natural mode of virtual-real interaction.

As a revolutionary human-computer interaction mechanism, AR technology, with the smartphone as its lightweight entry point, has driven the development of the whole industry; the global market is expected to exceed US$120 billion by 2021 (Analysys data), with games serving as a major innovative entertainment mode and an important source of added value.

AR needs to acquire, identify, model, and position the real-time live scene while synchronously completing the addition, matching, and interactive fusion of virtual scenes and objects, which poses a great computing challenge to the terminal. Meanwhile, the restriction to locally acquired live scenes hinders multi-player, cross-region networked battles: current AR games remain in stand-alone mode, and the range of terminals they cover is also greatly limited.

Disclosure of Invention

The technical problem to be solved by the present disclosure is to provide a method, a system, and a cloud server for realizing multi-terminal networking synchronization, in which a terminal without AR capability realizes multi-terminal networking interaction by utilizing the AR capability of a leading terminal.

According to an aspect of the present disclosure, a method for realizing multi-terminal networking synchronization is provided, including: acquiring a live-action video and live-action interface logic uploaded by a leading terminal, wherein the leading terminal generates the live-action interface logic from the live-action video based on a local Augmented Reality (AR) technology; generating corresponding virtual content logic based on the live-action interface logic; sending the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the AR technology; and synchronizing the virtual content logic, the live-action interface logic, and the live-action video to a participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

Optionally, in response to the leading terminal completing a related content manipulation, the updated live-action interface logic and the virtual content logic caused by the content manipulation are synchronized to the participating terminal; or, in response to the participating terminal completing a related content manipulation, the virtual content logic caused by the content manipulation is synchronized to the leading terminal.

Optionally, the virtual content and the live-action video are fused and output as AR game content.

Optionally, the virtual content logic includes coordinate and attribute information of virtual objects; and generating the corresponding virtual content logic based on the live-action interface logic includes: establishing virtual objects in the live-action interface logic according to game rules, and determining the coordinate information and individual attributes of each virtual object and the association attributes among related virtual objects.

According to another aspect of the present disclosure, a cloud server is further provided, including: a data acquisition unit configured to acquire a live-action video and live-action interface logic uploaded by a leading terminal, wherein the leading terminal generates the live-action interface logic from the live-action video based on a local Augmented Reality (AR) technology; a logic generation unit configured to generate corresponding virtual content logic based on the live-action interface logic; and a data synchronization unit configured to send the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic, fuses the virtual content with the live-action video based on the AR technology, and uploads the fusion state, and to synchronize the virtual content logic, the live-action interface logic, and the live-action video to a participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

Optionally, the data acquisition unit is further configured to update the live-action interface logic in response to the leading terminal completing a related content manipulation, and the data synchronization unit is further configured to synchronize the updated live-action interface logic and the virtual content logic caused by the content manipulation to the participating terminal; or the data acquisition unit is further configured to respond to the participating terminal completing a related content manipulation, and the data synchronization unit is further configured to synchronize the virtual content logic caused by the content manipulation to the leading terminal.

optionally, the virtual content and the live-action video are fused and output as AR game content.

Optionally, the virtual content logic includes coordinate and attribute information of virtual objects; and the logic generation unit is further configured to establish virtual objects in the live-action interface logic according to game rules, and to determine the coordinate information and individual attributes of each virtual object and the association attributes among related virtual objects.

According to another aspect of the present disclosure, a cloud server is further provided, including: a memory; and a processor coupled to the memory, the processor being configured to perform the above method for realizing multi-terminal networking synchronization based on instructions stored in the memory.

According to another aspect of the present disclosure, a system for realizing multi-terminal networking synchronization is further provided, including a leading terminal, a participating terminal, and a cloud server. The leading terminal is configured to acquire a live-action video, generate the live-action interface logic from the live-action video based on an AR technology, send the live-action video and the live-action interface logic to the cloud server, receive the virtual content logic sent by the cloud server, generate virtual content according to the virtual content logic, and fuse the virtual content with the live-action video based on the AR technology. The participating terminal is configured to receive the virtual content logic, the live-action interface logic, and the live-action video sent by the cloud server, generate the virtual content according to the virtual content logic, and fuse the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

According to another aspect of the present disclosure, a computer-readable storage medium is also proposed, on which computer program instructions are stored, which instructions, when executed by a processor, implement the steps of the above-mentioned method for implementing multi-terminal networking synchronization.

Compared with the prior art, the present disclosure uses the sharing and logic synchronization of a cloud server to realize an effective mechanism for sharing and synchronizing the AR capability of the leading terminal with other terminals, so that terminals without AR capability can realize multi-terminal networking interaction by utilizing the AR capability of the leading terminal, solving the problem in the prior art that multi-terminal content synchronization cannot be realized due to terminal capability limitations.

Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.

The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:

Fig. 1 is a flowchart illustrating an embodiment of a method for implementing multi-terminal networking synchronization according to the present disclosure.

Fig. 2 is a flowchart illustrating another embodiment of a method for implementing multi-terminal networking synchronization according to the present disclosure.

Fig. 3 is a schematic structural diagram of an embodiment of a cloud server according to the present disclosure.

Fig. 4 is a schematic structural diagram of an embodiment of a system for implementing multi-terminal networking synchronization according to the present disclosure.

Fig. 5 is a schematic structural diagram of another embodiment of the cloud server of the present disclosure.

Fig. 6 is a schematic structural diagram of a further embodiment of a cloud server according to the present disclosure.

Detailed Description

Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.

Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn to actual scale.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.

Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.

In all examples shown and discussed herein, any specific value should be construed as merely illustrative and not limiting. Thus, other examples of the exemplary embodiments may have different values.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.

For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.

Fig. 1 is a flowchart illustrating an embodiment of a method for implementing multi-terminal networking synchronization according to the present disclosure. This embodiment is performed by a cloud server.

In step 110, the live-action video and live-action interface logic uploaded by the leading terminal are obtained. The leading terminal is a terminal with AR capability: it can shoot live-action video through a camera or the like, and it generates the live-action interface logic from the live-action video based on a third-party local AR framework such as ARKit or ARCore. The leading terminal may also generate the live-action interface logic using AR technologies such as ARToolKit and BazAR.

In one embodiment, the live-action interface logic is a three-dimensional digital map of the actual scene. It may include plane information of the real scene, such as the ground, desktops, and walls, as well as the distance between the scene and the camera, the distances between real objects, and the positional relationships (front, back, above, below, left, right) within the scene.
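As an illustrative, non-normative sketch (the disclosure does not prescribe a concrete data format), the live-action interface logic could be represented by a structure like the following; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class PlaneType(Enum):          # plane categories named in the disclosure
    GROUND = "ground"
    DESKTOP = "desktop"
    WALL = "wall"

@dataclass
class Plane:
    plane_type: PlaneType
    center: Tuple[float, float, float]   # world coordinates, meters (assumed units)
    extent: Tuple[float, float]          # detected width x depth of the plane
    distance_to_camera: float            # range information from SLAM

@dataclass
class SceneInterfaceLogic:
    """Three-dimensional digital map of the real scene (hypothetical schema)."""
    planes: List[Plane] = field(default_factory=list)
    # grid coordinates extracted from the scene, used to anchor virtual objects
    grid_coordinates: List[Tuple[float, float, float]] = field(default_factory=list)
```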

In step 120, corresponding virtual content logic is generated based on the live-action interface logic. The virtual content logic includes the coordinates and attribute information of virtual objects, such as virtual scenes, virtual characters, and virtual equipment. In one embodiment, the virtual content logic may specifically include the coordinate positions of virtual objects, virtual object generation logic, numerical logic, event trigger logic, and the like.
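Continuing the sketch above, and again with hypothetical names, the virtual content logic might bundle object placement with the rule-driven logic the disclosure enumerates:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class VirtualObject:
    object_id: str
    kind: str                              # e.g. "npc", "item", "character", "scene"
    position: Tuple[float, float, float]   # coordinates in the scene's digital map
    attributes: Dict[str, float] = field(default_factory=dict)  # numerical logic

@dataclass
class VirtualContentLogic:
    objects: List[VirtualObject] = field(default_factory=list)
    # event trigger logic: event name -> handler (special effects, sound, ...)
    event_triggers: Dict[str, Callable[[VirtualObject], None]] = field(default_factory=dict)
```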

In step 130, the virtual content logic is sent to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the AR technology. The leading terminal fuses and outputs the virtual content and the live-action video based on AR components such as ARKit or ARCore, for example outputting AR game content; the leading terminal can then send the fusion state to the cloud server.

In step 140, the virtual content logic, the live-action interface logic, and the live-action video are synchronized to the participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal. The participating terminal may have no AR capability of its own: according to the live-action interface logic, it can fuse the virtual content and the live-action video based on its own application processing capability and generate content, for example AR game content, consistent with the leading terminal.

Steps 130 and 140 may be performed simultaneously, as in the sketch below.
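A minimal dispatch sketch of steps 130 and 140 (the transport layer is an assumption; the disclosure does not fix a protocol, and `transport.send` is a hypothetical asynchronous call):

```python
import asyncio

async def dispatch_round(transport, leader, participants,
                         content_logic, interface_logic, video_chunk):
    # Step 130: the leading terminal only needs the virtual content logic;
    # step 140: participants need the logic plus interface logic plus video.
    # The two pushes can run concurrently, as noted above.
    await asyncio.gather(
        transport.send(leader, {"virtual_content_logic": content_logic}),
        *[transport.send(p, {"virtual_content_logic": content_logic,
                             "interface_logic": interface_logic,
                             "video": video_chunk})
          for p in participants],
    )
```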

In this embodiment, the sharing and logic synchronization of the cloud server realize an effective mechanism for sharing and synchronizing the leading terminal's AR capability with other terminals: terminals without AR capability realize multi-terminal networking interaction by utilizing the AR capability of the leading terminal, which solves the problem in the prior art that multi-terminal content synchronization cannot be realized due to terminal capability limitations.

In another embodiment, when a terminal performs a content manipulation, the cloud server can synchronize the manipulation and update the logic, thereby realizing content synchronization among multiple terminals. For example, when the leading terminal completes a related content manipulation, the cloud server synchronizes the updated live-action interface logic and the virtual content logic caused by the manipulation to the participating terminals, so that the content presented by the participating terminals is consistent with that of the leading terminal. When a participating terminal completes a related content manipulation, the cloud server synchronizes the virtual content logic caused by the manipulation to the leading terminal, so that the content presented by the leading terminal is consistent with that of the participating terminal.
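One way to read this bidirectional rule is as a small routing function (sketch only; `origin`, `update`, and the payload field names are assumptions):

```python
def route_manipulation(origin, leader, participants, update):
    """Route a content-manipulation update per the rule above (sketch).

    Returns (recipient, payload) pairs; `update` is a hypothetical dict.
    """
    if origin is leader:
        # Leader-side manipulation: push the updated interface logic and the
        # resulting virtual content logic to every participant.
        payload = {"interface_logic": update["interface_logic"],
                   "virtual_content_logic": update["virtual_content_logic"]}
        return [(p, payload) for p in participants]
    # Participant-side manipulation: only the virtual content logic flows
    # back to the leading terminal.
    return [(leader, {"virtual_content_logic": update["virtual_content_logic"]})]
```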

Fig. 2 is a flowchart illustrating another embodiment of a method for implementing multi-terminal networking synchronization according to the present disclosure. This embodiment will be described by taking an AR game as an example.

In step 210, the leading terminal captures a live-action video through a camera.

In step 220, the leading terminal generates the live-action interface logic from the live-action video based on local AR technology. For example, a leading terminal with AR capability uses third-party AR middleware such as ARKit or ARCore and SLAM (Simultaneous Localization and Mapping, also referred to as CML, Concurrent Mapping and Localization) to obtain a three-dimensional digital map of the actual scene from the live-action video shot by the camera. This three-dimensional digital map is the live-action interface logic. It can be regarded as the digital logic of the physical scene, whose purpose is to extract the interface information, distance information, grid coordinates, and the like of the real scene. Coordinates can be marked on the live-action logic to determine where virtual objects are to be added.

In step 230, the leading terminal sends the live-action video and the live-action interface logic to the cloud server. The uploaded live-action video can adopt a standard format such as H.265.

In step 240, the cloud server generates corresponding virtual content logic based on the live-action interface logic, for example generating virtual scenes, virtual characters, virtual equipment, and the associated data logic.

In one embodiment, the cloud server establishes virtual objects in the live-action interface logic according to game rules, and determines the coordinate information and individual attributes of each virtual object and the association attributes among related virtual objects. The virtual content logic may specifically include coordinate positions, item generation logic, numerical logic, and event trigger logic. For example, the coordinate positions include the placement of virtual content elements such as NPCs (Non-Player Characters) and virtual items, whose real-time positions in the real scene are determined from the scene's interface and position information. The item generation logic generates the NPCs and the types and numbers of virtual items. The numerical logic specifies data indicators such as the attack power and defense power of each virtual item. The event trigger logic specifies, for example, the light-and-shadow special effects and sound effects produced during a strike.
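As a sketch of this rule-driven generation step, reusing the hypothetical SceneInterfaceLogic / VirtualContentLogic structures from earlier (the game rules, stat values, and trigger effect are invented for illustration):

```python
import random

def generate_virtual_content(scene, rules):
    """Establish virtual objects on the scene per game rules (sketch).

    `scene` is a SceneInterfaceLogic; `rules` is a hypothetical dict,
    e.g. {"npc_count": 2}.
    """
    logic = VirtualContentLogic()
    ground = [p for p in scene.planes if p.plane_type is PlaneType.GROUND]
    for i in range(rules.get("npc_count", 0)):
        if not ground:
            break
        plane = random.choice(ground)      # real-time anchor on a detected plane
        logic.objects.append(VirtualObject(
            object_id=f"npc-{i}",
            kind="npc",
            position=plane.center,
            attributes={"attack": 10.0, "defense": 5.0},  # numerical logic
        ))
    # Event trigger logic: e.g. light/shadow and sound effects on a strike.
    logic.event_triggers["strike"] = lambda obj: print(f"{obj.object_id}: hit FX + sound")
    return logic
```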

In step 250, the cloud server sends the virtual content logic to the leading terminal, and synchronizes the virtual content logic, the live-action interface logic, and the live-action video to the participating terminals.

In step 260, the leading terminal generates virtual content according to the virtual content logic, fuses the virtual content with the live-action video based on the AR technology to output the game content, and synchronizes its state to the cloud server. The virtual content, i.e., the virtual items, virtual characters, virtual scenes, and the like generated from the virtual content logic, can be constructed into the game's virtual world from an automatically generated virtual content logic script, by calling the terminal's local scene materials, particle effects, and the like.

In step 270, the participating terminal generates virtual content according to the virtual content logic, fuses the virtual content with the live-action video using its own game processing capability based on the live-action interface logic, thereby generating game content consistent with the leading terminal, and synchronizes its state to the cloud server.
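Under the same assumptions, the participant-side fusion of step 270 could reduce to projecting anchored objects into each video frame; `project` and `frame.draw` below are placeholders for whatever rendering stack the terminal actually provides:

```python
def fuse_frame(frame, interface_logic, content_logic, project):
    """Overlay virtual objects onto one live-action frame (sketch).

    `project` maps a 3D scene coordinate to pixel coordinates using the
    camera pose carried in the interface logic; `frame.draw` stands in for
    the terminal's rendering primitive.
    """
    for obj in content_logic.objects:
        u, v = project(interface_logic, obj.position)
        frame.draw(obj.kind, at=(u, v))   # render sprite/model at projected point
    return frame
```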

In this embodiment, the leading terminal's built-in localized AR solution provides the capabilities of acquiring, analyzing, positioning, and virtually fusing the real scene, and these capabilities are shared with other terminals through the cloud server, expanding the coverage of AR capability and overcoming the limitation that, under the traditional mechanism, locally acquired AR scenes prevent networking. In addition, the controlling authority over the virtual logic part of the traditional AR implementation is moved from the terminal to the cloud, which effectively ensures that the virtual and real content of all parties stays synchronized during AR networked interaction, while offloading work to each terminal's local capability reduces the computing pressure on the cloud server.

In one embodiment, the following steps may also be included.

In step 280, after the leading terminal completes a related game manipulation, it sends the updated live-action interface logic and the virtual game logic caused by the manipulation to the cloud server. The updated live-action interface logic may be the same as or different from the previous one; for example, if the leading terminal moves or the scene changes, the live-action interface logic changes.

In step 290, after the cloud server confirms the update, it synchronizes the updated live-action interface logic and the virtual game logic caused by the game manipulation to the participating terminals, so that the participating terminals can update the game.

For example, if the user of the leading terminal performs an operation while playing a monster-fighting game and a monster in the game loses hit points, this update is uploaded to the cloud server, and the cloud server synchronizes it to the participating terminals.

In step 281, a participating terminal completes a related game manipulation and sends the virtual game logic caused by the manipulation to the cloud server.

In step 291, after the cloud server confirms the update, it synchronizes the virtual game logic caused by the game manipulation to the leading terminal, so that the leading terminal updates the game.

Steps 280 and 281 may be executed in any order.

This embodiment overcomes the limitation of the single-terminal AR game mechanism: by transmitting and sharing the high-end leading terminal's AR live scene, AR virtual images, game logic, and the like through the cloud, the same AR game as on the leading terminal is presented on other participating terminals without AR capability, and multi-player AR game battles are realized through the cloud's logic synchronization mechanism. This fundamentally resolves bottlenecks such as AR game networking and terminal capability limitations, and increases the fun and playability of AR games.

Fig. 3 is a schematic structural diagram of an embodiment of a cloud server according to the present disclosure. The cloud server includes a data acquisition unit 310, a logic generation unit 320, and a data synchronization unit 330.

The data acquisition unit 310 is configured to acquire the live-action video and live-action interface logic uploaded by the leading terminal. The leading terminal is a terminal with AR capability; it can shoot live-action video through a camera or the like, and it generates the live-action interface logic from the live-action video based on local AR technology. The underlying localized AR implementation may employ any middleware with local AR capability, such as ARToolKit, BazAR, ARCore, or ARKit.

The logic generation unit 320 is configured to generate corresponding virtual content logic based on the live-action interface logic. The virtual content logic includes the coordinates and attribute information of virtual objects, such as virtual scenes, virtual characters, and virtual equipment. In one embodiment, the virtual content logic may specifically include the coordinate positions of virtual objects, virtual object generation logic, numerical logic, event trigger logic, and the like.

The data synchronization unit 330 is configured to send the virtual content logic to the leading terminal, so that the leading terminal generates virtual content according to the virtual content logic, fuses the virtual content with the live-action video based on the AR technology, and uploads the fusion state; and to synchronize the virtual content logic, the live-action interface logic, and the live-action video to the participating terminal, so that the participating terminal generates the virtual content according to the virtual content logic and fuses the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal. The participating terminal may have no AR capability of its own: according to the live-action interface logic, it can fuse the virtual content and the live-action video based on its own application processing capability and generate content, for example AR game content, consistent with the leading terminal.

In this embodiment, the sharing and logic synchronization of the cloud server realize an effective mechanism for sharing and synchronizing the leading terminal's AR capability with other terminals, resolving bottlenecks such as the terminal capability limitations of the prior art and expanding the coverage of AR capability beyond the networking restrictions that locally acquired AR scenes impose under the traditional mechanism.

In another embodiment of the present disclosure, for a multi-player AR game battle, when the leading terminal completes a related game manipulation, it sends the updated live-action interface logic and the virtual game logic caused by the manipulation to the cloud server; the data acquisition unit 310 updates the live-action interface logic and obtains the virtual game logic, and the data synchronization unit 330 synchronizes the updated live-action interface logic and the virtual game logic to the participating terminals so that the participating terminals update the game. Alternatively, when a participating terminal completes a related game manipulation, it sends the virtual game logic caused by the manipulation to the cloud server; the data acquisition unit 310 obtains the virtual game logic, and the data synchronization unit 330 synchronizes it to the leading terminal so that the leading terminal updates the game.

In this embodiment, bottlenecks such as AR game networking and terminal capability limitations can be fundamentally resolved: multiple players can play an AR game remotely based on the same actual scene, which improves the fun and playability of AR games.

Fig. 4 is a schematic structural diagram of an embodiment of a system for implementing multi-terminal networking synchronization according to the present disclosure. The system includes a leading terminal 410, participating terminals 420, and a cloud server 430. There may be multiple participating terminals 420; the cloud server 430 has been described in detail in the above embodiments.

The leading terminal 410 is configured to acquire a live-action video, generate the live-action interface logic from the live-action video based on AR technology, send the live-action video and the live-action interface logic to the cloud server 430, receive the virtual content logic sent by the cloud server 430, generate virtual content according to the virtual content logic, fuse the virtual content with the live-action video based on the AR technology, and synchronize its state to the cloud server 430.

The participating terminal 420 is configured to receive the virtual content logic, the live-action interface logic, and the live-action video sent by the cloud server 430, generate the virtual content according to the virtual content logic, and fuse the virtual content with the live-action video based on the live-action interface logic, thereby generating content consistent with the leading terminal.

When an AR game is played, the leading terminal 410 and the participating terminals 420 can play the game directly or jointly perform related game manipulations.

In this embodiment, the leading terminal's built-in localized AR technology acquires, analyzes, and positions the live scene and establishes the corresponding coordinate system; the live-action video is then uploaded to the cloud server together with the scene analysis result; finally, the cloud server issues the real-scene and virtual-scene logic to realize AR synchronization and interaction on the participating terminals. The controlling authority over the virtual logic part of the traditional AR implementation is moved from the terminal to the cloud, effectively ensuring that the virtual and real content of all parties stays synchronized during AR networked interaction. In addition, by relying on the leading terminal's capability, the cloud server does not need to perform SLAM computation on the live-action video and directly obtains the leading terminal's physical scene interface logic; the fusion of the generated virtual content with the live-action video relies on the participating terminals' game generation capability, rather than the cloud generating the virtual content and issuing it to each terminal; and during manipulation synchronization, the cloud only needs to confirm and synchronize the logic, relying on the terminal side's processing capability for the game effects. That is, in this embodiment, the computing pressure on the cloud is reduced by offloading work to each terminal's local capability.

Applying the present disclosure to AR real-time competition can increase entertainment value-added revenue.

Fig. 5 is a schematic structural diagram of another embodiment of the cloud server of the present disclosure. The cloud server includes a memory 510 and a processor 520, wherein:

The memory 510 may be a magnetic disk, flash memory, or any other non-volatile storage medium, and is used to store instructions for the embodiments corresponding to Figs. 1 and 2. The processor 520 is coupled to the memory 510 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller; it is configured to execute the instructions stored in the memory.

In one embodiment, as shown in Fig. 6, the cloud server 600 includes a memory 610 and a processor 620, with the processor 620 coupled to the memory 610 through a bus 630. The cloud server 600 may further be connected to an external storage device 650 through a storage interface 640 to access external data, and to a network or another computer system (not shown) through a network interface 660; these details are not described further here.

In this embodiment, data and instructions are stored in the memory and processed by the processor; by means of cloud sharing and logic synchronization, an effective mechanism for sharing and synchronizing the leading terminal's AR capability with other terminals is realized, so that multiple networked terminals present synchronized content.

In addition, multi-player AR networked battles are realized through the interactive operations of all terminals and the cloud's synchronization of the results, solving problems of existing AR games such as the lack of effective multi-player networked battles and the restriction to high-end mobile phone terminals.

In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the embodiments corresponding to Figs. 1 and 2. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, an apparatus, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.

Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.
