Virtual object display method and device, terminal equipment and storage medium

Document No.: 1576946  Publication date: 2020-01-31

Note: This technology, "Virtual object display method and device, terminal equipment and storage medium", was designed and created by 戴景文 (Dai Jingwen) and 贺杰 (He Jie) on 2018-07-20. Its main content is as follows. The embodiment of the application discloses a virtual object display method and apparatus, a terminal device, and a storage medium, relating to the field of display technology. The method is applied to a terminal device and includes: obtaining first spatial position information of the terminal device relative to a marker; obtaining second spatial position information of the marker relative to a target area of a real object; determining third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information; and displaying a virtual object in the target area of the real object based on the third spatial position information. The method enables a virtual object to be displayed in the target area of a real object.

1. A virtual object display method, applied to a terminal device, the method comprising:

obtaining first spatial position information of the terminal device relative to a marker;

acquiring second spatial position information of the marker relative to a target area of a real object;

determining third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information;

and displaying a virtual object in the target area of the real object based on the third spatial position information.

2. The method of claim 1, wherein displaying a virtual object in a target area of the real object based on the third spatial location information comprises:

acquiring the scaling between the virtual object and the target area;

determining display coordinates of the virtual object based on the scaling and the third spatial location information;

displaying the virtual object in the target area based on the display coordinates.

3. The method of claim 1, wherein after the displaying a virtual object in the target area based on the third spatial location information, the method further comprises:

when a change in attitude information of the marker is detected, rendering a virtual object corresponding to the attitude information in the target area according to the attitude information of the marker.

4. The method of claim 1, wherein prior to the obtaining second spatial position information of the marker relative to the target area of the real object, the method further comprises:

determining the target area of the real object according to a selection instruction of a user on the target area of the real object.

5. The method according to claim 4, wherein the determining the target area of the real object according to the user's selection instruction for the target area of the real object comprises:

and obtaining a target area of the real object selected by the user according to the attitude information, the spatial position information and the control instruction sent by the control equipment.

6. The method of claim 4, wherein after displaying the virtual object in the target area based on the third spatial location information, the method further comprises:

obtaining a selection instruction of the user for the target area of the real object again;

adjusting the target area according to the obtained selection instruction;

and displaying the virtual object in the adjusted target area.

7. The method of claim 1, wherein said obtaining second spatial position information of the marker relative to the target region comprises:

acquiring fourth spatial position information of the marker relative to the real object and fifth spatial position information of the target area relative to the real object;

determining second spatial position information of the marker relative to the target region based on the fourth spatial position information and the fifth spatial position information.

8. A virtual object display apparatus, applied to a terminal device, the apparatus comprising a first position acquisition module, a second position acquisition module, a third position acquisition module, and a display execution module, wherein:

the first position acquisition module is used for acquiring first spatial position information of the terminal device relative to a marker;

the second position acquisition module is used for acquiring second spatial position information of the marker relative to a target area of the real object;

the third position acquisition module is configured to determine third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information;

the display execution module is used for displaying the virtual object in the target area based on the third spatial position information.

9. A terminal device, comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 7.

10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.

Technical Field

The present application relates to the field of display technologies, and in particular, to a virtual object display method and apparatus, a terminal device, and a storage medium.

Background

In recent years, with advances in science and technology, technologies such as Augmented Reality (AR) have become research hot spots at home and abroad. Augmented reality enhances a user's perception of the real world through information provided by a computer system: computer-generated virtual objects, scenes, or content such as system prompt information are superimposed on a real scene to enhance or modify the perception of the real-world environment or of data representing it. In existing augmented reality display technology, however, a terminal device cannot combine a virtual object well with a real object for display.

Disclosure of Invention

The embodiments of the application provide a virtual object display method and apparatus, a terminal device, and a storage medium, to better realize the combined display of virtual objects and real objects.

In a first aspect, an embodiment of the present application provides a virtual object display method, applied to a terminal device, the method comprising: obtaining first spatial position information of the terminal device relative to a marker; obtaining second spatial position information of the marker relative to a target area of a real object; determining third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information; and displaying a virtual object in the target area of the real object based on the third spatial position information.

In a second aspect, an embodiment of the present application provides a virtual object display apparatus, applied to a terminal device, the apparatus comprising a first position acquisition module, a second position acquisition module, a third position acquisition module, and a display execution module. The first position acquisition module is configured to obtain first spatial position information of the terminal device relative to a marker; the second position acquisition module is configured to obtain second spatial position information of the marker relative to a target area of a real object; the third position acquisition module is configured to determine third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information; and the display execution module is configured to display a virtual object in the target area based on the third spatial position information.

In a third aspect, an embodiment of the present application provides a terminal device, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the processor to implement the virtual object display method provided in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the virtual object display method provided in the first aspect.

According to the virtual object display method and apparatus, the terminal device, and the storage medium provided by the embodiments, first spatial position information of the terminal device relative to the marker is obtained; second spatial position information of the marker relative to the target area of the real object is then obtained; third spatial position information of the terminal device relative to the target area is determined based on the first spatial position information and the second spatial position information; and finally the virtual object is displayed in the target area of the real object based on the third spatial position information. The virtual object is thus displayed in the target area of the real object, completing the combined display of the real object and the virtual object.

These and other aspects of the present application will be more readily apparent from the following description of the embodiments.

Drawings

In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.

Fig. 1 shows a schematic diagram of an application scenario of the virtual object display method provided by an embodiment of the present application;

fig. 2 shows another schematic diagram of an application scenario of the virtual object display method provided in an embodiment of the present application;

fig. 3 shows a block diagram of a terminal device according to an embodiment of the present application;

FIG. 4 is a flow chart illustrating a virtual object display method provided by an embodiment of the present application;

fig. 5 is a schematic diagram illustrating an effect of the virtual object display method provided by an embodiment of the present application;

fig. 6 is another effect diagram illustrating the virtual object display method provided by an embodiment of the present application;

FIG. 7 is a flow chart illustrating a virtual object display method provided by another embodiment of the present application;

FIG. 8 is a structural block diagram of a virtual object display apparatus provided by an embodiment of the present application;

fig. 9 shows another structural block diagram of the virtual object display apparatus provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present application, rather than all of them.

Referring to fig. 1, a schematic diagram of an application scenario of a display method of a virtual object according to an embodiment of the present application is shown, where the application scenario includes a display system 10. The display system 10 includes: terminal device 100, marker 200, and physical object 300.

In the embodiment of the present application, the marker 200 may be located within the visual field of the terminal device 100, and may be placed on the surface of the physical object 300 or in its vicinity. For example, referring to fig. 2, the physical object 300 is a plane, and the marker 200 may be attached to an area of that plane.

In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (all-in-one) head-mounted display device. The terminal device 100 may also be a smart terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may be inserted into or connected to the external head-mounted display device, acting as its processing and storage device and performing the display of virtual objects on it.

In this embodiment, please refer to fig. 3, the terminal device 100 may include: memory 120, processor 110, display device 130, and image acquisition device 140. The memory 120, the display device 130, and the image acquisition device 140 are all connected to the processor 110.

The image capturing device 140 is used for capturing an image of an object to be photographed and sending the image to the processor 110. The image capturing device 140 may be an infrared camera, a color camera, etc., and the specific type of the image capturing device is not limited in the embodiments of the present application.

The processor 110 may comprise any suitable type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller, and may be configured to receive data and/or signals from the various components of the system via, for example, a network. The processor 110 may also process the data and/or signals to determine one or more operating conditions in the system. For example, the processor 110 may generate image data of the virtual world from pre-stored image data and send it to the display device for display; it may receive image data transmitted from a smart terminal or computer via a wired or wireless network and generate an image of the virtual world from it for display; and it may recognize locations from the image captured by the image acquisition device, determine the corresponding display content in the virtual world based on the location information, and send it to the display device for display. It will be appreciated that the processor 110 is not limited to being housed within the terminal device 100.

The memory 120 may be used to store software programs and modules; the processor 110 performs various functional applications and data processing by executing the software programs and modules stored in the memory 120. The memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.

In the embodiment of the present application, when the terminal device 100 is a mobile terminal connected to an external head-mounted display device, the display device and the camera of the external head-mounted display device are connected to the mobile terminal. It is to be understood that the processing performed by the processor in the above embodiments is performed by the processor of the mobile terminal, and the data stored by the memory in the above embodiments is stored by the memory of the mobile terminal.

Of course, the components listed above are only examples and do not limit the components included in the terminal device 100 in the embodiment of the present application; the terminal device 100 may include more or fewer components. For example, the terminal device 100 may also include a communication module connected to the processor, used for communication between the terminal device 100 and other devices.

In the embodiment of the present application, the marker 200 is placed within the field of view of the camera of the terminal device 100, so that the camera can capture images of the marker 200. Images of the marker 200 are stored in the terminal device and used to locate the position of the terminal device relative to the marker. The marker 200 may include at least one sub-marker, and a sub-marker may be a pattern with a certain shape. In some embodiments, each sub-marker may have one or more feature points, where a feature point may be a dot, a ring, a triangle, or another shape. In the embodiment of the present application, the distribution rules of the sub-markers differ between markers, so each marker 200 can carry different identity information. The terminal device 100 may obtain the identity information corresponding to the marker 200 by identifying the sub-markers included in it; the identity information may be, for example, a code that uniquely identifies the marker 200, but is not limited thereto.

In some embodiments, the outline of the marker 200 may be rectangular, although the marker 200 may also have other shapes, which are not limited here; the rectangular area and the plurality of sub-markers within it constitute the marker 200. In the embodiment of the present application, the marker 200 may be any pattern that can be identified and tracked by the terminal device.

When the user uses the terminal device, the terminal device may capture a marker image containing the marker 200 when the marker 200 is within the field of view of the terminal device. The processor of the terminal device acquires the marker image and the related information, identifies the marker image, acquires the identity information of the marker 200, acquires the position and rotation relationship between the marker 200 and the camera of the terminal device, and further acquires the position and rotation relationship of the marker relative to the terminal device.

Based on the above display system, an embodiment of the present application provides a virtual object display method. Specifically, refer to fig. 4, which shows a virtual object display method; as shown in fig. 4, the method includes:

Step S110: obtaining first spatial position information of the terminal device relative to the marker.

In the embodiment of the application, the terminal device can identify a marker within the visual field of its image acquisition device to obtain first spatial position information of the terminal device relative to the marker. Besides the first spatial position information, identity information and posture information of the marker can also be obtained.

Different markers can have different identity information, and the corresponding relation between the markers and the identity information thereof is stored in the terminal equipment. In addition, the correspondence between the identity information of the marker and the related data may be stored in the terminal device, so that after the terminal device recognizes the identity information of the marker, the terminal device may read data corresponding to the marker, for example, model data of a virtual object corresponding to the marker, using the identity information of the marker.

In some implementations, acquiring the first spatial position information of the terminal device relative to the marker may include: acquiring an image containing the marker; and identifying the image containing the marker to obtain the first spatial position information of the terminal device relative to the marker.

It can be understood that the terminal device may capture an image of the marker within the field of view of its image acquisition device, so as to obtain an image containing the marker. The terminal device identifies and tracks the marker according to that image to obtain the first spatial position information of the terminal device relative to the marker. The first spatial position information indicates six-degree-of-freedom information of the marker, including position information, attitude information, and the like; the attitude information includes the orientation, rotation angle, etc. of the marker relative to the terminal device.

When the position of the terminal device changes, the marker image acquired by its camera also changes; that is, marker images are acquired at different viewing angles, and the appearance of the marker differs between them. Therefore, the recognized orientation, rotation angle, and other attitude parameters of the marker relative to the terminal device will differ according to the marker image acquired at each position.

For example, when the marker is a rectangular sticker including sub-markers of several different patterns, and the terminal device is located above and in front of the first side of the sticker with the image acquisition device facing that side, the marker image captured by the terminal device is an image captured from the viewpoint of the first side of the marker. From this image, it can be recognized that the first side of the marker faces the terminal device, along with the angle between the marker's orientation and the facing direction, and the like.

In addition, when the marker corresponds to the stored information, the marker may be identified to obtain the stored information corresponding to the marker. For example, the marker a corresponds to the virtual object 1 for display, and the terminal device obtains the virtual object 1 based on the corresponding relationship between the marker a and the virtual object 1 after obtaining the identity information of the marker a by identifying the marker a.

The terminal device can select a specific number of feature points from the image containing the marker as target feature points, and use them to determine the real position information and posture information between the terminal device (camera) and the marker. The terminal device may obtain the pixel coordinates of all target feature points, and then acquire the position information and posture information between the terminal device and the marker from the pixel coordinates of the feature points and their pre-acquired physical coordinates, where the physical coordinates are the coordinates of the feature points in the physical coordinate system corresponding to the marker; the physical coordinates of each feature point can be acquired in advance and stored in the terminal device.
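Position/posture recovery from matched feature points, as described above, is commonly solved with a perspective-n-point method. As a simplified, hedged sketch (not the patent's actual implementation), the planar 2D analogue below recovers a rotation and translation from matched feature points with the Kabsch/Procrustes method; all coordinates and the function name are hypothetical:

```python
import numpy as np

def estimate_pose_2d(physical_pts, observed_pts):
    """Recover the 2D rotation R and translation t mapping marker feature
    points (in the marker's physical coordinate system) onto their observed
    positions: observed ~= R @ physical + t. Uses the Kabsch/Procrustes
    method, a planar analogue of full 6-DoF pose estimation (e.g. PnP)."""
    p_mean = physical_pts.mean(axis=0)
    o_mean = observed_pts.mean(axis=0)
    P = physical_pts - p_mean
    O = observed_pts - o_mean
    # SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(P.T @ O)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = o_mean - R @ p_mean
    return R, t

# Hypothetical marker corners (metres) and their observed positions after
# the marker is rotated 90 degrees and shifted.
marker = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
observed = marker @ R_true.T + np.array([0.5, 0.2])

R, t = estimate_pose_2d(marker, observed)
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(round(angle, 1), np.round(t, 3))  # recovered rotation ~90 deg, translation ~[0.5, 0.2]
```

In a real system the same idea runs in 3D against the camera's pixel coordinates, which is why the physical coordinates of the feature points must be known in advance.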

The terminal equipment can also determine the identity information of the marker by identifying the characteristic points in the image containing the marker.

Of course, the specific manner of recognizing the image containing the marker to obtain the position information, the posture information and the identity information of the terminal device relative to the marker is not limited in the embodiment of the present application.

Step S120: acquiring second spatial position information of the marker relative to the target area of the real object.

For example, as shown in fig. 5, when the real object 300 is a plane and the marker 200 is attached to the plane 300, the target area 400 is an area on that plane; when the virtual object is subsequently displayed, it is displayed in this area of the plane.

In this embodiment of the application, the second spatial position information of the marker relative to the target area of the real object may be stored in the terminal device in advance and simply read by the terminal device; alternatively, the terminal device may acquire it in real time.

Step S130: determining third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information.

It is understood that, after the first spatial position information of the terminal device relative to the marker and the second spatial position information of the marker relative to the target area of the real object are obtained, the third spatial position information of the terminal device relative to the target area can be obtained by taking the marker as a reference, according to the first spatial position information and the second spatial position information.
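The chaining of the first and second spatial position information into the third can be sketched with 4x4 homogeneous transforms. This is a generic illustration; the frame names and all numeric values below are assumptions, not values from the patent:

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical example values:
# first spatial position info  -> pose of the marker in the device frame
# second spatial position info -> pose of the target area in the marker frame
T_device_marker = make_transform(np.eye(3), [0.0, 0.0, 1.0])  # marker 1 m in front of the device
T_marker_target = make_transform(np.eye(3), [0.3, 0.0, 0.0])  # target area 0.3 m beside the marker

# Third spatial position info: pose of the target area in the device frame,
# obtained by composing the two transforms with the marker as the reference.
T_device_target = T_device_marker @ T_marker_target
print(T_device_target[:3, 3])  # position of the target area in the device frame, ~[0.3, 0.0, 1.0]
```

With rotations included, the same matrix product handles orientation as well as position, which is why the six-degree-of-freedom information from step S110 matters.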

Step S140: and displaying the virtual object in the target area of the real object based on the third spatial position information.

In the embodiment of the application, after the third spatial position information of the terminal device relative to the target area is obtained, the display coordinates of the virtual object in the virtual space can be calculated according to the third spatial position information, and the virtual object is displayed according to the display coordinates, so that the virtual object is displayed in the target area to be displayed.

In some implementations, displaying the virtual object in the target area of the real object based on the third spatial position information may include:

obtaining the scaling between the virtual object and the target area; determining display coordinates of the virtual object based on the scaling and the third spatial position information; and displaying the virtual object in the target area based on the display coordinates.

In this embodiment of the application, the virtual object may correspond to the marker, that is, to the marker's identity information, or may be a virtual object created in advance in the terminal device. When a virtual object is combined with a real object for display, the model of the virtual object may be larger or smaller than the target area of the real object. If its size is not scaled, the displayed virtual object may be too large or too small, may fail to align with the target area of the real object, and the effect of displaying the virtual object in the target area may not be achieved.

Therefore, after the third spatial position information of the terminal relative to the target area is obtained, the scaling ratio between the virtual object and the target area can be obtained, so that the virtual object is displayed in the target area of the real object according to the display coordinates after being scaled.

For example, if the area occupied by the virtual object in virtual space is 50 m x 50 m (meters) and the target area of the real object is 1 m x 1 m, the scaling ratio of the virtual object to the target area is 50:1. This is not limiting; the virtual object and the target area may be of any other sizes.

Calculating the scaling from the ratio of the virtual object to the target region may include: using the ratio of the size of the area occupied by the virtual object to that of the target region as the scaling; or multiplying that ratio by a preset coefficient to obtain the scaling.

The scaling of the virtual object may be the ratio of the virtual object to the target area, so that the virtual object is aligned with the target area and superimposed on it. Alternatively, the scaling may be this ratio multiplied by a preset coefficient; for example, the preset coefficient may be 0.7 to 1.3, or an empirical coefficient may be set according to the user's viewing experience. This allows the alignment to deviate within a certain tolerance while preserving the user experience. Such flexible scaling adapts the size of the virtual object to the size of the target area and improves the user's viewing experience.
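The scaling described above can be computed, as a minimal sketch, like this; the function name and the 0.9 margin value are illustrative, with only the 50 m / 1 m sizes and the 0.7-1.3 coefficient range taken from the text:

```python
def display_scale(virtual_size, target_size, coeff=1.0):
    """Scale factor applied to the virtual object's model coordinates so the
    object fits the target area; coeff (e.g. 0.7-1.3) allows a deliberate
    margin or overshoot around exact alignment."""
    return (target_size / virtual_size) * coeff

# The 50 m virtual object from the example, shown in a 1 m target area:
s = display_scale(50.0, 1.0)                    # 1/50: each model coordinate shrinks by 0.02
s_margin = display_scale(50.0, 1.0, coeff=0.9)  # hypothetical 10% margin inside the area
print(s, s_margin)  # 0.02 0.018
```

Note the factor applied to the model is the inverse of the 50:1 size ratio quoted above: the object must shrink by 50x to fit the area.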

After the scaling of the virtual object is obtained, the virtual object to be displayed is scaled accordingly. The terminal device may read data corresponding to the virtual object; this data may include model data of the virtual object, i.e. data used for rendering it, such as the colors used to build the corresponding model and the coordinates of the vertices in the 3D model. The terminal device scales the size of the virtual object according to this data.

For example, when the terminal device displays line segments of a certain color, data such as a 3D point queue (coordinates of points in a plurality of virtual spaces) of the line segments, the thickness of the line segments, and the color of the line segments can be used as model data of the virtual object.

When the terminal device calculates the display coordinates of the virtual object in the display space, it may obtain the coordinates of the target area in real space from the third spatial position information of the terminal device relative to the target area, and then convert these into coordinates in the display space of the terminal device, i.e. the display coordinates at which the virtual object is to be shown. The terminal device may render the virtual object at the calculated display coordinates, so that the position at which the virtual object appears coincides with the real position of the target region in the field of view of the terminal device; that is, the display position of the virtual object corresponds to the target region in the real scene. For example, referring to fig. 6, when the real object 300 is a plane and the marker 200 is disposed on the plane, the virtual object 500 is displayed in the target area 400 of the plane.
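The conversion from real-space target-area coordinates to display coordinates can be illustrated with a pinhole projection. This is a sketch under assumed (hypothetical) camera intrinsics, not the terminal device's actual rendering pipeline:

```python
import numpy as np

def project_to_display(point_cam, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a 3D point expressed in the device/camera frame to 2D display
    coordinates with a pinhole model. fx, fy (focal lengths in pixels) and
    cx, cy (principal point) are hypothetical intrinsics for a 1280x720 view."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# A target-area point 2 m ahead of the device and 0.5 m to its right:
uv = project_to_display(np.array([0.5, 0.0, 2.0]))
print(uv)  # display coordinates where the virtual object would be rendered
```

The division by depth `z` is also what makes the rendered object appear smaller as the terminal device moves away from the target area, matching the behaviour described below.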

The user can observe the virtual object through the terminal device and move around, and the virtual object changes with the user's position. For example, when the terminal device moves away from the position of the virtual object in the real scene, the virtual object becomes smaller; conversely, when the terminal device approaches that position, the virtual object becomes larger.

In addition, while observing the virtual object from his or her own viewing angle through the terminal device, the user may record the display content, which is convenient for later review and analysis.

According to the virtual object display method provided by the embodiment of the application, after the spatial position information of the terminal device relative to the marker and that of the marker relative to the target area of the real object are obtained, the spatial position information of the terminal device relative to the target area is determined. The display coordinates of the virtual object are then determined from this information, and the virtual object is displayed in the target area of the real object, achieving combined display of virtual content and real objects and improving the user's viewing experience.

In some embodiments, please refer to fig. 7, which is a schematic flowchart illustrating a method for displaying a virtual object according to another embodiment of the present application. As described in detail below with reference to the flowchart illustrated in fig. 7, the method may specifically include the following steps:

Step S210: obtaining first spatial position information of the terminal device relative to the marker.

Step S220: determining the target area of the real object according to a selection instruction of the user for the target area of the real object.

In this embodiment of the application, the target area of the real object may be determined according to the user's selection of the target area via a control device. Specifically, step S220 may include:

obtaining the target area of the real object selected by the user according to the posture information, the spatial position information, and the control instruction sent by the control device.

It can be understood that, when viewing through the terminal device, the user can use the control device to select the target area on the real object where the virtual object needs to be displayed.

For example, when wearing a head-mounted display device, the user can observe the real object through the head-mounted display device, move the control device, and select the target area where the virtual object needs to be displayed using a control key of the control device. When the user selects the target area of the real object using the key, the control device can send a control instruction, i.e., the key information of the control device, to the terminal device. After receiving the posture information, the spatial position information, and the control instruction sent by the control device, the terminal device can determine the target area the user intends to select accordingly.

The starting point selected in the display space can be obtained according to the posture information, the spatial position information, and the control instruction of the control device; the movement track of the starting point can then be obtained from the changes in the posture and position of the control device; and finally the target area selected in the display space can be determined according to the movement track. In addition, the positional relationship between the target area and the real object in the display space, that is, the fifth spatial position information of the target area relative to the real object, can be determined from the position of the target area and the position of the real object in the display space.
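One simple way to turn such a movement track into a selected region is to take the rectangle bounding the track. The sketch below assumes the track has been projected to 2D positions on the real object's surface; the function name and the rectangular-region assumption are illustrative, not mandated by the method.

```python
def region_from_track(track):
    """Derive a rectangular target region from the movement track of the
    selection point: a list of (x, y) positions, starting at the point
    selected when the control key was pressed.
    Returns the (min corner, max corner) of the axis-aligned bounding box."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

A track sweeping from (0, 0) through (2, 1) to (1, 3) would thus select the rectangle spanning (0, 0) to (2, 3).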

In this embodiment of the application, the target area can also be selected by detecting a gesture of the user. When a preset gesture is detected in the display space, selection of the target area is triggered, and the selected target area in the display space can then be determined according to the movement track of the detected gesture.

In this embodiment of the present application, the target area may also be selected by tracking the change of the focal point of the eyeball. It can be understood that a camera for acquiring an eye image of the user may be arranged in the head-mounted display device. When the user views the real object through the head-mounted display device, the eye image of the user is acquired; in the process of determining the target area, image data of the user's retina and cornea may be captured, the terminal device constructs a 3D model of the eye according to this data, and the target area is selected by tracking the focal point of the eyeball in three-dimensional space.

Step S230: and acquiring fourth spatial position information of the marker relative to the real object and fifth spatial position information of the target area relative to the real object.

In this embodiment of the application, when the terminal device obtains the second spatial position information of the marker relative to the target area, the fourth spatial position information of the marker relative to the real object may be stored in the terminal device in advance, while the fifth spatial position information of the target area relative to the real object may be obtained according to the target area of the real object determined in step S220. When the target area is a preset area of the real object, the fifth spatial position information may likewise be stored in the terminal device in advance. When the terminal device needs to acquire the spatial position information of the marker relative to the target area, it can read the fourth spatial position information of the marker relative to the real object and the fifth spatial position information of the target area relative to the real object.

Step S240: second spatial position information of the marker relative to the target region is determined based on the fourth spatial position information and the fifth spatial position information.

After the fourth spatial position information of the marker relative to the real object and the fifth spatial position information of the target area relative to the real object are obtained, the positional relationship between the marker and the target area can be determined by taking the real object as a reference according to the fourth spatial position information and the fifth spatial position information, and the second spatial position information of the marker relative to the target area is obtained.
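Representing each piece of spatial position information as a rigid-body transform, the computation above amounts to one pose inversion and one composition. The sketch below is illustrative only: it assumes the fourth and fifth spatial position information are given as 4x4 poses of the marker and the target area in the real object's frame, and all names are assumptions.

```python
import numpy as np

def pose_inverse(T):
    """Invert a rigid-body transform [R | t] using R^T and -R^T t,
    avoiding a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def marker_in_target_frame(T_object_from_marker, T_object_from_target):
    """Second spatial position information: the marker's pose expressed in
    the target-area frame, computed from the fourth spatial position
    information (marker relative to the real object) and the fifth
    (target area relative to the real object)."""
    return pose_inverse(T_object_from_target) @ T_object_from_marker
```

For example, with no rotation, a marker at (1, 0, 0) and a target area at (0, 1, 0) in the object frame give a marker position of (1, -1, 0) in the target-area frame.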

Step S250: determining third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information.
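With the same pose representation as above, this step is a single composition of transforms: chaining the marker-relative-to-target pose with the device-relative-to-marker pose. Again an illustrative sketch; the frame conventions and names are assumptions.

```python
import numpy as np

def device_in_target_frame(T_target_from_marker, T_marker_from_device):
    """Third spatial position information: compose the second spatial
    position information (marker relative to the target area) with the
    first (terminal device relative to the marker)."""
    return T_target_from_marker @ T_marker_from_device
```

With no rotation, a marker at (1, 0, 0) in the target frame and a device at (0, 0, 3) in the marker frame place the device at (1, 0, 3) in the target frame.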

Step S260: displaying the virtual object in the target area of the real object based on the third spatial position information.

In this embodiment of the application, after the virtual object is displayed in the target area of the real object based on the third spatial position information, the position and posture of the terminal device relative to the target area may change; for example, the distance of the terminal device from the target area may change, or the angle at which the terminal device faces the target area may change. When the position and posture of the terminal device relative to the target area change, its position and posture relative to the marker also change, that is, the position and posture of the marker recognized by the terminal device from the acquired image containing the marker change. In this case, the virtual object displayed by the terminal device may be adjusted according to the movement of the terminal device; for example, when the terminal device approaches the target area, the displayed virtual object may become larger, and conversely it may become smaller.

Specifically, in this embodiment of the present application, the method for displaying a virtual object may further include:

when a change in the posture information of the marker is detected, rendering the virtual object corresponding to the changed posture information in the target area according to the posture information of the marker.

It can be understood that, when the terminal device determines from the acquired image containing the marker that the posture information of the marker has changed, the displayed virtual object can be adjusted according to the posture information of the marker. For example, if the virtual object is a house and the front of the house was previously displayed, then when the terminal device faces the other side of the marker, the detected posture information of the marker changes, and the side of the house is rendered in the target area according to the model data of the virtual object.

Of course, in this embodiment of the present application, the method for displaying a virtual object may further include:

when a change in the distance of the terminal device relative to the marker is detected, rendering the virtual object corresponding to the changed distance in the target area according to the distance of the terminal device relative to the marker.

Taking the house as an example again: when it is detected that the distance between the terminal device and the marker becomes smaller, a larger house is rendered in the target area according to the model data of the virtual object, so that the user observes a larger house when approaching the marker; when it is detected that the distance becomes larger, a smaller house is rendered in the target area, so that the user observes a smaller house when moving away from the marker.
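One simple model for this distance-dependent rendering makes the rendered scale vary inversely with the device-to-marker distance, as apparent size does under a pinhole camera model. This is a sketch under that assumption, not the disclosed implementation, and the names are illustrative.

```python
def rendered_scale(base_scale, reference_distance, current_distance):
    """Scale factor for rendering the virtual object: the apparent size is
    taken to vary inversely with the distance between the terminal device
    and the marker, relative to a reference viewing distance."""
    if current_distance <= 0:
        raise ValueError("distance must be positive")
    return base_scale * reference_distance / current_distance
```

Halving the distance from the reference thus doubles the rendered scale, and doubling it halves the scale, matching the larger/smaller house behavior described above.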

In the embodiment of the present application, there is also a case where a target area for displaying a virtual object is adjusted, and therefore, the method for displaying a virtual object may further include:

obtaining again a selection instruction of the user for the target area of the real object; adjusting the target area according to the re-obtained selection instruction; and displaying the virtual object in the adjusted target area.

When the user needs to adjust the position at which the virtual object is displayed on the real object, the target area of the real object can be selected again. The terminal device can obtain the user's selection instruction for the target area via the control device and then re-determine the target area. After the target area is re-determined, the terminal device may display the virtual object in the new target area, in the display mode of the virtual object described above.

In this embodiment of the application, the display method of the virtual object can be applied to scenes in which the marker needs to be covered, so that the terminal device can identify a marker arranged at a position that does not affect the appearance of the real object; that is, the virtual object can be displayed in a certain area of the real object, so that the user can observe the virtual object in that area without the overall appearance being affected.

According to the virtual object display method provided by this embodiment of the application, the first spatial position information of the terminal device relative to the marker is obtained; the target area of the real object is determined according to a selection instruction of the user for the target area; the second spatial position information of the marker relative to the target area is determined according to the positional relationships of the marker and of the target area with respect to the real object; the third spatial position information of the terminal device relative to the target area is then determined according to the first spatial position information and the second spatial position information; and finally the virtual object is displayed in the target area of the real object according to the third spatial position information.

Of course, in addition to obtaining the target area for displaying the virtual object via the control device provided in the above embodiment, the target area may also be obtained using another electronic device connected to the terminal device, such as a mobile phone or a tablet. Specifically, the electronic device connected to the terminal device may store and display a model of the real object, and the user may select a certain area on the displayed model. The electronic device then sends the area selected by the user to the terminal device, and the terminal device determines, according to that area on the model, the target area on the real object in which the virtual object is to be displayed.

In some embodiments, please refer to fig. 8, which illustrates a block diagram of a display apparatus 400 of a virtual object provided in the present application. The display apparatus 400 is applied to a terminal device and, as explained below with reference to the block diagram illustrated in fig. 8, includes a first position obtaining module 410, a second position obtaining module 420, a third position obtaining module 430, and a display execution module 440. The first position obtaining module 410 is configured to obtain first spatial position information of the terminal device relative to a marker; the second position obtaining module 420 is configured to obtain second spatial position information of the marker relative to a target area of a real object; the third position obtaining module 430 is configured to determine third spatial position information of the terminal device relative to the target area based on the first spatial position information and the second spatial position information; and the display execution module 440 is configured to display the virtual object in the target area based on the third spatial position information.

In this embodiment, the display execution module 440 may specifically be configured to: acquiring the scaling between the virtual object and the target area; determining display coordinates of the virtual object based on the scaling and the third spatial location information; displaying the virtual object in the target area based on the display coordinates.

In the embodiment of the present application, please refer to fig. 9, the display apparatus 400 of the virtual object may further include: an object rendering module 450. The object rendering module 450 is configured to render a virtual object corresponding to the posture information in the target area according to the posture information of the marker when it is detected that the posture information of the marker changes.

In an embodiment of the present application, please refer to fig. 9, the display apparatus of the virtual object may further include: the region determination module 460. The area determination module 460 is configured to determine a target area of the real object according to a selection instruction of the user for the target area of the real object.

Further, the area determination module 460 may be specifically configured to obtain the target area of the real object selected by the user according to the posture information, the spatial position information, and the control instruction sent by the control device.

In an embodiment of the present application, please refer to fig. 9, the display apparatus of the virtual object may further include: an instruction obtaining module 470, a region adjusting module 480, and an object display module 490. The instruction obtaining module 470 is configured to obtain again a selection instruction of the user for the target area of the real object; the area adjusting module 480 is configured to adjust the target area according to the re-obtained selecting instruction; the object display module 490 is configured to display the virtual object in the adjusted target area.

In this embodiment of the application, the second position obtaining module 420 may be specifically configured to: acquiring fourth spatial position information of the marker relative to the real object and fifth spatial position information of the target area relative to the real object; determining second spatial position information of the marker relative to the target region based on the fourth spatial position information and the fifth spatial position information.

In some embodiments, the present application further provides a terminal device, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the processor to implement the above-described method for displaying a virtual object.

In some embodiments, the present application further provides a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the above-described method for displaying a virtual object.

In summary, according to the display method, apparatus, terminal device, and storage medium for a virtual object provided in the present application, the first spatial position information of the terminal device relative to the marker is obtained, the second spatial position information of the marker relative to the target area of the real object is obtained, the third spatial position information of the terminal device relative to the target area is determined based on the first and second spatial position information, and finally the virtual object is displayed in the target area of the real object based on the third spatial position information, thereby completing the combined display of the real object and the virtual object.

It should be noted that the functions of each device in the system in the embodiment of the present application may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.

It should be noted that, in the present specification, the embodiments are all described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the apparatus embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for relevant points reference may be made to the description of the method embodiments.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although the present application has been described with reference to preferred embodiments, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the application, and all such changes, substitutions, and alterations are intended to fall within the scope of the application.
