AR equipment image control method and system

Document No.: 19627 | Published: 2021-09-21

Note: this technique, "AR equipment image control method and system", was designed and created by Du Wenbin on 2021-06-16. Its main content is as follows: the invention provides an AR device image control method and system. The method comprises: acquiring real image data of the current environment through a depth camera on the AR device, and building a three-dimensional space model from that data; layering the model, locating all objects in the different layers, and determining from the locating results whether at least two mutually overlapping target objects exist; and, if they do, configuring a control gesture for each layer containing a target object and controlling the corresponding layers and/or the target objects on them through those gestures, the mutually overlapping target objects lying on different layers. With the invention, overlapping objects in an AR device can be gesture-controlled individually; operation is simple and the experience is good.

1. An AR device image control method, comprising:

acquiring real image data of the current environment through a depth camera on the AR device, and building a three-dimensional space model based on the real image data;

layering the three-dimensional space model, locating all objects in the different layers, and determining, according to the object locating results, whether at least two mutually overlapping target objects exist;

if at least two mutually overlapping target objects exist, configuring a corresponding control gesture for each layer containing a target object, and controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture;

wherein the at least two mutually overlapping target objects are located on different layers.

2. The AR device image control method of claim 1, wherein said process of building a three-dimensional spatial model based on said real image data comprises:

acquiring position information of a real object in the real image data, and pushing, by the AR device, a virtual object to the display interface;

and constructing a virtual three-dimensional space model based on the position information of the real object and on the virtual object, wherein the objects in the three-dimensional space model comprise a virtual object constructed from the real object and the virtual object pushed by the AR device.

3. The AR device image control method according to claim 2, wherein the number of mutually overlapping target objects is two.

4. The AR device image control method according to claim 1, wherein the process of determining whether there are at least two target objects overlapping each other according to the positioning result of the object includes:

and determining the area occupied by each object based on its locating result, and determining different objects to be mutually overlapping target objects when the overlap between the areas they occupy is larger than a preset threshold.

5. The AR device image control method of claim 1, wherein the process of controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture comprises:

collecting posture information of both of the user's hands;

determining the control gesture based on the two-hand posture information;

and performing, through the control gesture, the control operation corresponding to that gesture on the corresponding layer and/or the objects on the corresponding layer.

6. The AR device image control method of claim 5, wherein the process of collecting the user's two-hand pose information comprises:

acquiring two-hand image data of the user through a gesture recognition device;

and performing skeleton point recognition on both hands in the two-hand image data based on a pre-trained skeleton point recognition model, and determining the two-hand posture information based on the skeleton point recognition result.

7. The AR device image control method of claim 5, wherein the process of determining the control gesture based on the two-hand pose information comprises:

determining the posture information corresponding to the left hand and/or the posture information corresponding to the right hand;

and determining a control gesture corresponding to the left hand and/or a control gesture corresponding to the right hand based on the posture information corresponding to the left hand and/or the posture information corresponding to the right hand.

8. The AR device image control method of claim 1, wherein the process of controlling the corresponding layer and/or the target object on the corresponding layer by the control gesture comprises:

controlling different layers with the left hand and the right hand respectively; and

performing different control operations on the same layer through different gestures of the same hand.

9. The AR device image control method of claim 8, wherein the process of performing different control operations on the same layer through different gestures of the same hand comprises:

performing different control operations on the same layer as a whole through different gestures of the same hand; and/or

performing gesture control on different target objects of the same layer through different gestures of the same hand; and/or

performing different gesture control on the same target object of the same layer through different gestures of the same hand.

10. An AR device image control system, comprising:

a three-dimensional space modeling unit, used for acquiring real image data of the current environment through a depth camera on the AR device and building a three-dimensional space model based on the real image data;

a layering processing unit, used for layering the three-dimensional space model, locating all objects in the different layers, and determining, according to the object locating results, whether at least two mutually overlapping target objects exist; and

an object control unit, used for configuring, if at least two mutually overlapping target objects exist, a corresponding control gesture for each layer containing a target object, and for controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture;

wherein the at least two mutually overlapping target objects are respectively located on different layers.

Technical Field

The invention relates to the technical field of augmented reality, and in particular to an AR device image control method and system.

Background

Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. Virtual information generated by a computer, such as text, images, three-dimensional models, music and video, is simulated and applied to the real world by technical means including multimedia, three-dimensional modelling, real-time tracking and registration, intelligent interaction and sensing. The two kinds of information complement each other, achieving an "augmentation" of the real world.

AR technology integrates real-world information with virtual-world content: after superposition, the real environment and virtual objects can exist simultaneously in the same picture and space. In visual augmented reality, the user overlays computer graphics on the real world by means of a head-mounted display, and after the superposition the real world remains fully visible around the graphics.

At present, when an AR device displays an image, a virtual image may coincide with other virtual images or with images of the real world. Where images coincide, controlling an occluded image first requires removing the object that occludes it so as to avoid the overlap; this operation is cumbersome, wastes time and gives a poor user experience.

Disclosure of Invention

In view of the above problems, an object of the present invention is to provide an AR device image control method and system that solve the problem that, when images displayed in AR overlap, the overlapping images must be handled one after another, which causes a poor user experience.

The AR device image control method provided by the invention comprises the following steps: acquiring real image data of the current environment through a depth camera on the AR device, and building a three-dimensional space model based on the real image data; layering the three-dimensional space model, locating all objects in the different layers, and determining, according to the object locating results, whether at least two mutually overlapping target objects exist; if at least two mutually overlapping target objects exist, configuring a corresponding control gesture for each layer containing a target object, and controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture; wherein the at least two mutually overlapping target objects are located on different layers.

In addition, in an optional technical solution, the process of building the three-dimensional space model based on the real image data comprises: acquiring position information of a real object in the real image data, and pushing, by the AR device, a virtual object to the display interface; and constructing a virtual three-dimensional space model based on the position information of the real object and on the virtual object, wherein the objects in the three-dimensional space model comprise a virtual object constructed from the real object and the virtual object pushed by the AR device.

In addition, an optional technical solution is that the number of target objects overlapped with each other is two.

In addition, in an optional technical solution, the process of determining whether at least two mutually overlapping target objects exist according to the object locating results comprises: determining the area occupied by each object based on its locating result, and determining different objects to be mutually overlapping target objects when the overlap between the areas they occupy is larger than a preset threshold.

In addition, in an optional technical solution, the process of controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture comprises: collecting posture information of both of the user's hands; determining the control gesture based on the two-hand posture information; and performing, through the control gesture, the control operation corresponding to that gesture on the corresponding layer and/or the objects on the corresponding layer.

In addition, in an optional technical solution, the process of collecting the posture information of the user's hands comprises: acquiring two-hand image data of the user through a gesture recognition device; and performing skeleton point recognition on both hands in the two-hand image data based on a pre-trained skeleton point recognition model, and determining the two-hand posture information based on the skeleton point recognition result.

In addition, an optional technical solution is that the process of determining the control gesture based on the two-hand posture information includes: determining the posture information corresponding to the left hand and/or the posture information corresponding to the right hand; and determining a control gesture corresponding to the left hand and/or a control gesture corresponding to the right hand based on the posture information corresponding to the left hand and/or the posture information corresponding to the right hand.

In addition, in an optional technical solution, the process of controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture comprises: controlling different layers with the left hand and the right hand respectively; and performing different control operations on the same layer through different gestures of the same hand.

In addition, in an optional technical solution, the process of performing different control operations on the same layer through different gestures of the same hand comprises: performing different control operations on the same layer as a whole through different gestures of the same hand, and/or performing gesture control on different target objects of the same layer through different gestures of the same hand, and/or performing different gesture control on the same target object of the same layer through different gestures of the same hand.

According to another aspect of the present invention, there is provided an AR device image control system comprising: a three-dimensional space modeling unit, used for acquiring real image data of the current environment through a depth camera on the AR device and building a three-dimensional space model based on the real image data; a layering processing unit, used for layering the three-dimensional space model, locating all objects in the different layers, and determining, according to the object locating results, whether at least two mutually overlapping target objects exist; and an object control unit, used for configuring, if at least two mutually overlapping target objects exist, a corresponding control gesture for each layer containing a target object, and for controlling the corresponding layer and/or the target objects on the corresponding layer through the control gesture; wherein the at least two mutually overlapping target objects are located on different layers.

With the AR device image control method and system, real image data of the current environment is collected through a depth camera on the AR device and a three-dimensional space model is built; the model is layered, all objects in the different layers are located, and at least two mutually overlapping target objects are identified from the locating results; a control gesture is then configured for each layer containing a target object, and the corresponding layers and/or the target objects on them are controlled through the control gestures. Several mutually occluding objects can thus be controlled at the same time: there is no need to remove an upper-layer object before controlling a lower-layer one, the operation is more flexible and convenient, and the user experience is stronger.

To the accomplishment of the foregoing and related ends, one or more aspects of the invention comprise the features hereinafter fully described. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Further, the present invention is intended to include all such aspects and their equivalents.

Drawings

Other objects and results of the present invention will become more apparent and more readily appreciated as the same becomes better understood by reference to the following description taken in conjunction with the accompanying drawings. In the drawings:

FIG. 1 is a flowchart of an AR device image control method according to an embodiment of the present invention;

FIG. 2 is a detailed flowchart of an AR device image control method according to an embodiment of the present invention;

FIG. 3 is a block diagram of an AR device image control system according to an embodiment of the present invention.

The same reference numbers in all figures indicate similar or corresponding features or functions.

Detailed Description

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that such embodiment(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more embodiments.

In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.

To describe the AR device image control method and system in detail, embodiments of the present invention will be described below with reference to the accompanying drawings.

Fig. 1 shows a flow of an AR device image control method according to an embodiment of the present invention.

As shown in fig. 1, an AR device image control method according to an embodiment of the present invention includes:

s110: real image data under the current environment are collected through a depth camera on the AR device, and a three-dimensional space model is established based on the real image data.

The process of building a three-dimensional space model based on real image data may include:

1. acquiring position information of a real object in real image data;

2. constructing a virtual three-dimensional space model based on the position information, and then loading each virtual picture of the virtual scene pushed by the AR device into the constructed model, so that the objects in the three-dimensional space model comprise virtual objects constructed from the real objects and the virtual objects of the pushed virtual scene. The virtual scene can be a virtual game, virtual teaching, a virtual drill or another scene; the virtual scene and the real scene together form the three-dimensional space model.

In addition, other ways to construct the three-dimensional space model may also be adopted, for example, the process of constructing the three-dimensional space model based on the real image data may also include:

1. acquiring position information of a real object in the real image data, and pushing, by the AR device, a virtual object to the display interface;

2. constructing a virtual three-dimensional space model based on the position information of the real object and on the virtual object, wherein the objects in the three-dimensional space model comprise a virtual object constructed from the real object and the virtual object pushed by the AR device.

During this construction, the virtual objects pushed by the AR device can be displayed directly on the display screen and picked up by the user; that is, the user perceives both the real objects in the real environment and the virtual objects shown on the display screen, and the system then constructs a virtual three-dimensional space model from the position information of the real objects and the virtual objects.
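As a minimal sketch of this construction step, merging the real objects located by the depth camera with the virtual objects pushed by the AR device might look as follows. The `SceneObject` fields, function name and coordinate convention are our own illustrative assumptions, not part of the invention:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class SceneObject:
    name: str
    position: Tuple[float, float, float]  # (x, y, z) in camera coordinates (assumed)
    is_virtual: bool                      # True for objects pushed by the AR device

def build_scene_model(real_positions: Dict[str, Tuple[float, float, float]],
                      virtual_objects: List[SceneObject]) -> List[SceneObject]:
    """Merge real objects located by the depth camera with virtual objects
    pushed by the AR device into a single scene model."""
    model = [SceneObject(name, pos, is_virtual=False)
             for name, pos in real_positions.items()]
    model.extend(virtual_objects)
    return model
```

The combined list then serves as the input to the layering step described next.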

S120: and carrying out layering processing on the three-dimensional space model, positioning all objects in different layers, and determining whether at least two mutually overlapped target objects exist according to the positioning result of the objects.

In this step, after the three-dimensional model is built, it can be further layered and all objects in the different layers can be located, so that the position of each object and the relative positions between objects are determined. This makes it convenient for the user to grab objects or perform other operations during the experience.

Further, whether at least two objects whose positions overlap exist can be determined from the object locating results; such objects are the target objects. For example, the area occupied by each object may be determined from its locating result, and when the overlap between the areas occupied by different objects is larger than a preset threshold, those objects are determined to be mutually overlapping target objects.

The preset threshold may be set and adjusted according to the application scenario and user requirements. For example, when the overlap between two objects is small, the degree of occlusion between them is low, operation is unaffected, and the layered processing may be skipped.
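The area-overlap test described above can be sketched as follows, assuming for illustration that each located object is reduced to an axis-aligned screen-space box `(x1, y1, x2, y2)`; the box representation and threshold semantics are our assumptions, not the invention's:

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def find_overlapping_targets(boxes, threshold):
    """Return the index pairs whose overlap area exceeds the preset threshold;
    each pair is a candidate set of mutually overlapping target objects."""
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if overlap_area(boxes[i], boxes[j]) > threshold:
                pairs.append((i, j))
    return pairs
```

Raising the threshold lets slightly-occluding objects through without triggering layered processing, matching the adjustment described above.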

S130: and if at least two target objects which are mutually overlapped exist, configuring a corresponding control gesture for the corresponding layer of the target object, and controlling the corresponding layer and/or the target object on the corresponding layer through the control gesture.

When the three-dimensional space model is layered, the layering can be performed by the system while the virtual picture of the model is formed, and the number of layers can be set as required. All objects in the different layers are then located; if at least two mutually overlapping target objects exist, a control gesture is configured for each layer containing a target object, and by activating different layers the corresponding layer and/or the objects on it can be adjusted and controlled.

Specifically, to acquire the user's control gesture, the posture information of both hands is first collected; the control gesture is then determined from the two-hand posture information; finally, the control operation corresponding to the gesture is performed on the objects on the corresponding layer through the control gesture.

Further, collecting the posture information of the user's hands comprises: acquiring two-hand image data of the user through a gesture recognition device; performing skeleton point recognition on both hands in the image data based on a pre-trained skeleton point recognition model; and determining the two-hand posture information from the skeleton point recognition result. In this embodiment, the hand image data may be collected by a gesture recognition camera and the two-hand posture information determined from that data.
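A minimal illustration of turning skeleton points into posture information might look like the following. It assumes a hypothetical 21-point hand layout (wrist at index 0) and a simple tip-versus-joint distance heuristic; both the landmark indices and the heuristic are our assumptions, standing in for the invention's pre-trained recognition model:

```python
import math

# Hypothetical 21-point hand layout: index 0 is the wrist; for each finger,
# the indices of its PIP joint and its tip (labels are our assumption).
FINGERS = {"thumb": (3, 4), "index": (6, 8), "middle": (10, 12),
           "ring": (14, 16), "pinky": (18, 20)}

def extended_fingers(landmarks):
    """Classify a finger as extended when its tip lies farther from the
    wrist than its PIP joint, a crude skeleton-point heuristic."""
    wrist = landmarks[0]
    return [name for name, (pip, tip) in FINGERS.items()
            if math.dist(landmarks[tip], wrist) > math.dist(landmarks[pip], wrist)]
```

The list of extended fingers is one simple form of "posture information" from which a control gesture can then be looked up.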

As specific examples, the control gestures include a scissors gesture, an OK gesture, or other gestures the user employs frequently, such as sliding at least two fingers of the left hand towards each other, sliding them apart, sliding them in parallel, at least one tap, or a double tap of the left hand, and the corresponding slides, taps and double taps of at least two fingers of the right hand. The corresponding control operations may include moving, zooming in, zooming out, rotating and the like.

Further, determining the control gesture based on the two-hand posture information comprises: determining the posture information of the left hand and/or of the right hand; and determining the control gesture of the left hand and/or of the right hand from that posture information. It should be noted that each control gesture corresponds to one control operation, and the control gestures can be set or adjusted according to user requirements during device initialisation or debugging; they are not limited to specific gestures. After different control gestures are assigned to each layer, both hands can operate simultaneously through different gestures of the left and right hands, instead of only one layer being operable at a time, so operation efficiency and the user experience are not impaired. In other words, the process of controlling the corresponding layer and/or the target objects on it through the control gesture may further comprise: controlling different layers with the left and right hands respectively; and performing different control operations on the same layer through different gestures of the same hand.
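The hand-selects-layer, gesture-selects-operation scheme described above can be sketched as a small dispatch table. The gesture names, layer indices and operation names below are illustrative assumptions configured, as the text notes, during initialisation or debugging:

```python
# Hypothetical mapping configured at device initialisation: the hand selects
# the layer, and the gesture performed by that hand selects the operation.
HAND_TO_LAYER = {"left": 0, "right": 1}
GESTURE_TO_OPERATION = {"pinch_out": "zoom_in", "pinch_in": "zoom_out",
                        "swipe": "move", "double_tap": "rotate"}

def dispatch(hand, gesture):
    """Resolve a (hand, gesture) pair into a (layer, operation) command;
    unrecognised input yields None so no layer is affected by accident."""
    if hand not in HAND_TO_LAYER or gesture not in GESTURE_TO_OPERATION:
        return None
    return (HAND_TO_LAYER[hand], GESTURE_TO_OPERATION[gesture])
```

Because each hand maps to its own layer, left- and right-hand gestures can be dispatched in the same frame to control two layers simultaneously.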

Further, the process of performing different control operations on the same layer through different gestures of the same hand comprises: performing different control operations on the same layer as a whole through different gestures of the same hand, and/or performing gesture control on different target objects of the same layer through different gestures of the same hand, and/or performing different gesture control on the same target object of the same layer through different gestures of the same hand.

In addition, after the layers are determined, a different start gesture can be assigned to each layer: the corresponding start gesture activates its layer as the active layer while the other layers are locked. The active layer can thus be controlled while mis-operation of the other layers is avoided, and the active layer or the objects on it can then be adjusted and controlled through other control gestures. The corresponding layer can also be activated as the active layer through other instructions, such as preset voice instructions or interface operation instructions, after which the user performs the corresponding operations on the active layer or on all objects on it.
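The start-gesture activation and locking behaviour might be sketched as a small state machine; the gesture names and the returned command tuples are illustrative assumptions:

```python
class LayerController:
    """Sketch of start-gesture activation: a start gesture makes its layer
    the active layer; all other layers stay locked against mis-operation."""

    def __init__(self, start_gestures):
        # start_gestures maps a gesture name to the layer index it activates
        self.start_gestures = start_gestures
        self.active = None

    def on_gesture(self, gesture):
        if gesture in self.start_gestures:
            self.active = self.start_gestures[gesture]
            return ("activate", self.active)
        if self.active is None:
            return None  # every layer is locked until a start gesture arrives
        return ("control", self.active, gesture)
```

A voice or interface instruction could activate a layer the same way, by setting `active` directly instead of matching a gesture.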

In one embodiment of the invention, all target objects on the same layer are controlled by the same control gesture. Alternatively, the target objects on the same layer are controlled by different control gestures: for example, all objects on a layer may be zoomed out by sliding at least two fingers of the left hand towards each other, while one target object on the layer is zoomed out by sliding the index and middle fingers towards each other and another target object by sliding the thumb and index finger towards each other.

As a specific example, fig. 2 shows a detailed flow of an AR device image control method according to an embodiment of the present invention.

As shown in fig. 2, the AR device image control method according to the embodiment of the present invention may include:

1. the AR device system starts.

2. Acquiring real image data in a real environment through a depth camera on the AR equipment, and constructing a three-dimensional space model based on the real image data;

3. carrying out layering processing on the three-dimensional space model, positioning all objects in different layers, and determining whether at least two mutually overlapped target objects exist according to the positioning result of the objects;

4. if at least two target objects which are overlapped with each other exist, configuring corresponding control gestures for the layers corresponding to the target objects, and controlling the corresponding layers and/or the target objects on the corresponding layers through the control gestures;

5. judging the current control hand and control gesture, activating the layer corresponding to the current control hand as the active layer according to the preset control rules for that hand and gesture, and performing the corresponding control operation on the active layer according to the control gesture, the operation being fed back to the three-dimensional space model;

6. when an instruction to end layered control is received, ending the layered control process and restoring overall control of the three-dimensional space model. Specifically, the end-of-layered-control instruction may be a preset gesture control instruction, a preset voice control instruction, a preset screen control instruction, a preset key control instruction, or the like.
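The flow of steps 1 to 6 can be condensed into a small event-loop sketch; the event tuples and log strings below are illustrative assumptions, not part of the patented method:

```python
def control_loop(events):
    """Event-loop sketch of the figure-2 flow: configure gestures once an
    overlap is found, dispatch per-layer commands while layered control is
    on, and restore whole-model control when the end instruction arrives."""
    log, layered = [], False
    for event in events:
        kind = event[0]
        if kind == "overlap_found":
            layered = True
            log.append("gestures_configured")
        elif kind == "gesture" and layered:
            _, hand, gesture = event
            log.append(f"{hand}:{gesture}")  # would drive the active layer
        elif kind == "end_layered_control":
            layered = False
            log.append("whole_model_control")
    return log
```

Gestures arriving after the end instruction are ignored, mirroring the return to overall control of the three-dimensional space model.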

Corresponding to the AR device image control method described above, the present invention further provides an AR device image control system; fig. 3 shows the schematic logic of the AR device image control system according to an embodiment of the present invention.

As shown in fig. 3, an AR device image control system 200 according to an embodiment of the present invention includes:

the three-dimensional space modeling unit 210 is configured to collect real image data in the current environment through a depth camera on the AR device, and establish a three-dimensional space model based on the real image data;

the layering processing unit 220 is configured to layer the three-dimensional space model, locate all objects in the different layers, and determine, according to the locating results of the objects, whether at least two mutually overlapping target objects exist;

An object control unit 230, configured to configure a corresponding control gesture for a layer corresponding to a target object if there are at least two target objects overlapped with each other, and control the corresponding layer and/or the target object on the corresponding layer through the control gesture;

wherein the at least two mutually overlapping target objects are respectively located on different layers.

It should be noted that, for details of the embodiment of the AR device image control system, reference may be made to the description of the AR device image control method embodiment above; these details are not repeated here.

With the AR device image control method and system provided by the invention, in a use scenario in which several objects occlude one another, an occluded object can be controlled through its corresponding control gesture without the occluding objects having to be moved away; that is, the occluded object can be operated directly, which greatly improves the convenience and speed of operation and improves the user experience.

The AR device image control method and system according to the present invention are described above by way of example with reference to the accompanying drawings. However, it should be understood by those skilled in the art that various modifications may be made to the AR device image control method and system provided by the present invention without departing from the scope of the present invention. Therefore, the scope of the present invention should be determined by the contents of the appended claims.
