Virtual interaction method and equipment based on moving head projection lamp and storage medium

Document No.: 667148    Publication date: 2021-04-30

Reading note: This technology, "Virtual interaction method and equipment based on moving head projection lamp and storage medium", was designed and created by 艾元平 on 2020-12-30. Its main content is as follows: the invention discloses a virtual interaction method, device, and storage medium based on a moving head projection lamp, applied in a lighting system having the moving head projection lamp and a scanning recognition device. The virtual interaction method comprises the following steps. S1: receive an animation signal of a virtual target object, convert the animation signal according to a data transmission protocol, and map it to the moving head projection lamp for projection display; acquire, in real time, the position information of the virtual target object in the projection display picture. S2: acquire the obstacle signal collected by the scanning recognition device, and identify the position information of a specified part of the obstacle from the obstacle signal. S3: judge, from the position information of the specified part of the obstacle and the position information of the virtual target object in the projection display picture, whether a collision occurs between the two; if so, output a corresponding dynamic effect to the projection display picture. The invention achieves interaction with the projected animated image and improves the user experience.

1. A virtual interaction method based on a moving head projection lamp, characterized in that the virtual interaction method is applied to a lighting system having the moving head projection lamp and a scanning recognition device, and comprises the following steps:

step S1: receiving an animation signal of a virtual target object, converting the animation signal according to a data transmission protocol, and mapping the animation signal to the moving head projection lamp for projection display; acquiring the position information of the virtual target object in the projection display picture in real time;

step S2: acquiring an obstacle signal collected by the scanning recognition device, and identifying the position information of a specified part of an obstacle from the obstacle signal;

step S3: judging, from the position information of the specified part of the obstacle and the position information of the virtual target object in the projection display picture, whether a collision occurs between the two; if so, outputting a corresponding dynamic effect to the projection display picture.

2. The method according to claim 1, wherein the moving head projection lamp is connected to the lighting system through an Artnet lighting controller, and in step S1 the DMX512 channel values of the moving head projection lamp are obtained according to the Artnet protocol of the Artnet lighting controller, so that the animation signal of the virtual target object is mapped one-to-one to the DMX512 channel values.

3. The virtual interaction method based on a moving head projection lamp according to claim 1, wherein the position information of the virtual target object obtained in step S1 is a dynamic coordinate point of the virtual target object in the projection display picture.

4. The virtual interaction method based on a moving head projection lamp according to claim 1, wherein the scanning recognition device is a radar device, and the current position of the obstacle falls within the sensing range of the radar device.

5. The method according to claim 4, wherein in step S2 the obstacle signal comprises the coordinates of the human body collected by the radar device as the human body moves, and the position information of the specified part of the obstacle is the coordinate information of the human hand.

6. The virtual interaction method based on a moving head projection lamp according to claim 5, wherein the position information of the specified part of the obstacle is obtained as follows:

the radar device collects a plurality of noisy signals of the human hand; these noise signals are filtered and suppressed to output a single-point signal coordinate of the human hand, the single-point signal coordinate constituting a valid click event.

7. The virtual interaction method based on a moving head projection lamp according to claim 6, wherein whether a collision occurs between the specified part of the obstacle and the virtual target object is determined in step S3 as follows:

judging whether the single-point signal coordinate of the human hand coincides with the current dynamic coordinate point of the virtual target object; if so, a collision has occurred; if not, no collision has occurred.

8. The virtual interaction method based on a moving head projection lamp according to claim 1, wherein, before the dynamic effect is output in step S3, a custom parameter is received and the dynamic effect is preset according to the custom parameter.

9. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the virtual interaction method based on a moving head projection lamp according to any one of claims 1 to 8.

10. A storage medium having stored thereon a computer program which, when executed, implements the virtual interaction method based on a moving head projection lamp according to any one of claims 1 to 8.

Technical Field

The invention relates to the technical field of light projection interaction, in particular to a virtual interaction method, equipment and a storage medium based on a moving head projection lamp.

Background

At present, projection equipment is increasingly widespread: it can be used for everyday video playback, and it can also be used for light projection displays, projecting various specified pictures onto buildings or walls to create striking visual effects.

Because a moving head projection lamp can move its projection, it is frequently used in light shows: the lamp drives a specified animated image back and forth within the projection range to achieve a better visual effect. However, existing lighting systems focus only on the visual display, and people still cannot interact with the projected animated image, so the experience lacks interest.

Disclosure of Invention

In order to overcome the defects of the prior art, one objective of the present invention is to provide a virtual interaction method based on a moving head projection lamp that enables interaction with the projected animated image and improves the user experience.

Another object of the present invention is to provide an electronic device.

It is a further object of the present invention to provide a storage medium.

The first objective of the invention is achieved by the following technical solution:

a virtual interaction method based on a moving head projection lamp is applied to a lighting system with the moving head projection lamp and a scanning recognition device, and comprises the following steps:

step S1: receiving an animation signal of a virtual target object, converting the animation signal according to a data transmission protocol, and mapping the animation signal to the moving head projection lamp for projection display; acquiring the position information of the virtual target object in the projection display picture in real time;

step S2: acquiring an obstacle signal collected by the scanning recognition device, and identifying the position information of a specified part of an obstacle from the obstacle signal;

step S3: judging, from the position information of the specified part of the obstacle and the position information of the virtual target object in the projection display picture, whether a collision occurs between the two; if so, outputting a corresponding dynamic effect to the projection display picture.

Further, the moving head projection lamp is connected to the lighting system through an Artnet lighting controller, and in step S1, the DMX512 channel value of the moving head projection lamp is obtained according to the Artnet protocol of the Artnet lighting controller, so that the animation signal of the virtual target object and the DMX512 channel value are mapped one-to-one.

Further, the position information of the virtual target object obtained in step S1 is a dynamic coordinate point of the virtual target object in the projection display picture.

Further, the scanning recognition device is a radar device, and the current position of the obstacle falls within the sensing range of the radar device.

Further, in step S2, the obstacle signal comprises the coordinates of the human body collected by the radar device as the human body moves, and the position information of the specified part of the obstacle is the coordinate information of the human hand.

Further, the position information of the specified part of the obstacle is obtained as follows:

the radar device collects a plurality of noisy signals of the human hand; these noise signals are filtered and suppressed to output a single-point signal coordinate of the human hand, the single-point signal coordinate constituting a valid click event.

Further, in step S3, whether a collision occurs between the specified part of the obstacle and the virtual target object is determined as follows:

judge whether the single-point signal coordinate of the human hand coincides with the current dynamic coordinate point of the virtual target object; if so, a collision has occurred; if not, no collision has occurred.

Further, before the dynamic effect is output in step S3, a custom parameter is received and the dynamic effect is preset according to the custom parameter.

The second objective of the invention is achieved by the following technical solution:

an electronic device comprises a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the above virtual interaction method based on a moving head projection lamp.

The third objective of the invention is achieved by the following technical solution:

a storage medium having stored thereon a computer program which, when executed, implements the above-described moving head projection lamp-based virtual interaction method.

Compared with the prior art, the invention has the beneficial effects that:

the scanning recognition device acquires obstacle signals, from which the coordinate point of the human hand's motion is recognized; the system then judges whether the hand's coordinate point and the virtual target object's coordinate point collide, and if so, the two interact and a dynamic effect indicating successful interaction is output. This approach increases the diversity and interest of human-computer interaction and improves the user's game experience.

Drawings

FIG. 1 is a schematic flow chart of a virtual interaction method according to the present invention;

FIG. 2 is a flowchart illustrating a virtual interaction method for capturing a butterfly game according to the present invention.

Detailed Description

The present invention will be further described with reference to the accompanying drawings and the detailed description. It should be noted that, provided there is no conflict, the embodiments or technical features described below may be combined to form new embodiments.

Example one

This embodiment provides a virtual interaction method based on a moving head projection lamp, applied in a lighting system. The lighting system comprises an Artnet lighting controller to which various stage lamps and projection lamps are connected; the Artnet lighting controller communicates with the lighting system of this embodiment to control the lights. In this embodiment, the projection lamps include a moving head projection lamp that can rotate about the X and Y axes, and the lighting system further includes a scanning recognition device; the virtual interaction method of this embodiment is implemented with the scanning recognition device and the moving head projection lamp.

As shown in fig. 1 and fig. 2, the virtual interaction method of the present embodiment includes the following steps:

step S0: initializing the lighting system to start the various projection lamps and the scanning recognition device in the lighting system, and calibrating the scanning range of the scanning recognition device and the projection range of the lighting system to ensure that the two ranges coincide.
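The patent does not say how the calibration in step S0 is performed. One plausible sketch, under the assumption that a few corresponding points are measured in both coordinate systems, fits an affine transform from radar coordinates to projection-picture coordinates (the affine model and the use of NumPy are assumptions, not part of the source):

```python
import numpy as np

def fit_affine(radar_pts, proj_pts):
    """Least-squares affine map from radar coordinates to projection coordinates.

    radar_pts, proj_pts: lists of corresponding (x, y) calibration points
    measured in the radar's scanning range and the projection range.
    """
    A = np.array([[x, y, 1.0] for x, y in radar_pts])  # design matrix
    B = np.array(proj_pts, dtype=float)                # target coordinates
    M, *_ = np.linalg.lstsq(A, B, rcond=None)          # 3x2 transform
    return M

def radar_to_projection(M, point):
    """Apply the fitted transform to one radar-space point."""
    x, y = point
    return tuple(np.array([x, y, 1.0]) @ M)
```

With at least three non-collinear calibration points, the fitted transform lets every later radar detection be expressed in the projection picture's coordinates, which is what the coincidence of the two ranges requires.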

Step S1: receiving an animation signal of a virtual target object, converting the animation signal according to a data transmission protocol, and mapping the animation signal to the moving head projection lamp for projection display; and acquiring the position information of the virtual target object in the projection display picture in real time.

In this embodiment, the virtual target object is a virtual butterfly, and its animation signal is 3D model data in which the butterfly continuously executes a flying motion along a timeline. The user can also replace the animation signal of the virtual target object according to their own requirements.

The lighting system obtains the Artnet protocol of the Artnet lighting controller, obtains the DMX512 channel values of the moving head projection lamp through the Artnet protocol, and maps the animation signal of the virtual target object one-to-one to the DMX512 channel values, so that after data conversion the animation signal is projected through the moving head projection lamp.
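A minimal sketch of the one-to-one mapping between animation state and DMX512 channel values. The channel layout below is hypothetical (real moving head fixtures each define their own channel assignments in their DMX charts), and this only builds the 512-byte universe payload that an Art-Net controller would transmit:

```python
# Hypothetical channel layout for the moving head lamp; a real fixture's
# DMX chart defines the actual assignments.
CHANNELS = {"pan": 1, "tilt": 2, "dimmer": 3, "gobo": 4}

def to_dmx_frame(state):
    """Map an animation-state dict to one 512-byte DMX universe frame.

    Each named quantity is written to its channel slot, clamped to the
    8-bit range 0..255 that a DMX512 channel carries.
    """
    frame = bytearray(512)
    for name, value in state.items():
        ch = CHANNELS[name]                      # 1-based DMX channel number
        frame[ch - 1] = max(0, min(255, int(value)))
    return bytes(frame)
```

Such a frame would then be wrapped in an ArtDmx packet and sent to the lamp; the Art-Net packaging itself is omitted here.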

During projection, the system obtains in real time the XY dynamic coordinates and the motion orientation of the animation in the 3D model data of the virtual target object, and writes the XY coordinate increments to the DMX512 channels of the moving head projection lamp according to that orientation, so that the lamp changes its projection direction to follow the flight path of the animation in the 3D model; the animation playback and the lamp's channel-value timeline state are updated in real time so that the moving virtual target object is displayed. After the virtual target object is projected, the lighting system collects its position information in real time, i.e., the current dynamic coordinate point of the virtual target object in the projection display picture.
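As an illustrative sketch of turning the animation's XY coordinate into pan/tilt channel values (the coordinate ranges are assumed calibration bounds, and 8-bit channels are assumed; many fixtures use 16-bit fine channels instead):

```python
def xy_to_pan_tilt(x, y, x_range=(-1.0, 1.0), y_range=(-1.0, 1.0)):
    """Scale an animation XY coordinate into 8-bit pan/tilt channel values.

    x_range / y_range are the calibrated extents of the projection area;
    coordinates outside them are clamped to the edge.
    """
    def scale(v, lo, hi):
        v = min(max(v, lo), hi)                  # clamp into calibrated range
        return round((v - lo) / (hi - lo) * 255) # map to 0..255
    return scale(x, *x_range), scale(y, *y_range)
```

Calling this once per animation frame and writing the results into the lamp's pan and tilt channels yields the continuous projection movement described above.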

Step S2: acquiring the obstacle signal collected by the scanning recognition device, and identifying the position information of the specified part of the obstacle from the obstacle signal.

In this embodiment, the scanning recognition device is a radar device, the current position of the obstacle is kept within the sensing range of the radar device, and the position information of the specified part of the obstacle is acquired through the radar device. In this embodiment, the obstacle is a human body, and the specified part of the obstacle is the position of the human hand. The radar device emits electromagnetic waves, which are reflected by the human body; the device processes the received reflections, the outline of the human body is reconstructed by computer simulation, the hand position is selected from the outline, and the hand-position signal is analyzed further.

If computer simulation of the obstacle signal scanned by the radar device shows that it is not a human body outline, the step of identifying the position of the specified part is not executed and the radar device continues scanning; the step of identifying the position of the human hand is executed only once the radar device has scanned and recognized a human body.

Because the radar device acquires a plurality of noisy signals corresponding to the position of the human hand, these noise signals are filtered and suppressed to output a single-point signal coordinate for the hand; this coordinate is then transformed into the coordinate system of the projection display picture to obtain the hand's single-point signal coordinate within the picture. A valid click event occurs only when the single-point signal coordinate collides with the virtual target object.
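The filtering-and-suppression step is not specified in detail in the source. One simple sketch collapses the noisy hand detections to a single point with a coordinate-wise median, which discards outlier reflections (the choice of a median filter is an assumption, not the patent's stated method):

```python
from statistics import median

def single_point(detections):
    """Collapse noisy radar hand detections into one point.

    detections: list of (x, y) candidate hand positions from one scan.
    The coordinate-wise median suppresses isolated outlier reflections.
    """
    xs = [p[0] for p in detections]
    ys = [p[1] for p in detections]
    return median(xs), median(ys)
```

The resulting point would then be passed through the radar-to-projection coordinate transform before the collision test.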

The single-point signal coordinate of the human hand can be displayed and verified before virtual interaction begins: the user first stands within the scanning range of the radar device, the radar device scans the user's basic outline and then recognizes the hand position, and the single-point signal coordinate output after filtering and suppression is displayed in the projection display picture as a marker point. When the user then moves a hand, its position in the projection display picture is clear, improving the accuracy with which the user can catch the butterfly.

Step S3: judging, from the position information of the specified part of the obstacle and the position information of the virtual target object in the projection display picture, whether a collision occurs between the two; if so, outputting a corresponding dynamic effect to the projection display picture.

Whether a collision occurs between the specified part of the obstacle and the virtual target object is judged as follows: the system checks in real time whether the single-point signal coordinate of the human hand coincides with the current dynamic coordinate point of the virtual target object. If they coincide, a collision has occurred, the hand is considered to have caught the virtual butterfly, and a specified dynamic effect is output in the projection display picture; if they do not coincide, no collision has occurred, the hand is considered not to have caught the virtual butterfly, and no dynamic effect is output.

In addition, a touch tolerance can be allowed between the single-point signal coordinate of the human hand and the dynamic coordinate point of the virtual target object: the dynamic coordinate point is expanded outward into a coordinate range, and when the hand's single-point signal coordinate falls within this range, a collision between the hand and the virtual target object is deemed to have occurred, improving the user's success rate at catching the butterfly.
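The expanded coordinate range can be modeled as a disc around the target's dynamic coordinate point; a minimal sketch, where the radius value is an assumption standing in for the patent's unspecified tolerance:

```python
import math

def collided(hand, target, radius=0.05):
    """Collision test with a touch tolerance.

    The target's dynamic coordinate point is expanded into a disc of the
    given radius; the hand's single-point coordinate falling inside the
    disc counts as a collision (a catch).
    """
    return math.dist(hand, target) <= radius
```

Exact coincidence is then just the degenerate case `radius=0`, and enlarging the radius directly raises the catch success rate described above.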

The user can preset the output dynamic effect through a client in advance: the user enters the text, image, and/or sound effect to be displayed after a successful catch, and when the human hand catches the virtual butterfly, the corresponding text, image, and/or sound effect is output to increase the interest of the interaction. Command actions triggered after a successful catch can also be preset: for example, a scoring command can be executed after each catch, or a timer can be started when the interactive game begins and stopped after the butterfly is caught to record the time the catch required, further increasing the diversity of the interaction.
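The custom parameters and the scoring command could be sketched as a small preset table consumed on each successful catch; every name and value here is illustrative, not taken from the source:

```python
# Hypothetical preset entered through the client before the game starts.
EFFECTS = {
    "on_catch": {"text": "Caught!", "sound": "chime.wav", "score": 1},
}

class Game:
    """Tracks the score and returns the preset effect on each catch."""

    def __init__(self, effects=EFFECTS):
        self.effects = effects
        self.score = 0

    def on_catch(self):
        effect = self.effects["on_catch"]
        self.score += effect["score"]   # scoring command triggered by a catch
        return effect["text"]           # text effect to show in the picture
```

Playing the sound and rendering the text in the projection picture would be handled by the lighting system's output path, which is outside this sketch.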

Example two

This embodiment provides an electronic device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the virtual interaction method based on a moving head projection lamp of the first embodiment. This embodiment also provides a storage medium on which a computer program is stored; when executed, the computer program implements the above virtual interaction method based on a moving head projection lamp.

The device and the storage medium of this embodiment are two aspects of the same inventive concept. Since the implementation of the method has been described in detail above, those skilled in the art can clearly understand the structure and implementation of the system in this embodiment from that description, and the details are not repeated here for brevity.

The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.
