Virtual reality display system and display method thereof

Document No.: 1534210    Publication date: 2020-02-14

Reading note: This technology, 虚拟现实显示系统及其显示方法 (Virtual reality display system and display method thereof), was designed and created by 张思远 and 王安廷 on 2019-12-12. Main content: The embodiments of the present application disclose a virtual reality display system and a display method thereof. Based on an eye image of a target object, the display system can obtain the gazing object of the target object on the image to be displayed and the depth information of the gazing object, and, by adjusting preset parameters of the focusing device, make the position of the plane where the display image of the display system is located match the depth information of the gazing object. The distance from the display image formed by the display system to the viewer is thus equal to the distance corresponding to the parallax of that display image, so the focal distance the viewer derives from the viewed image remains consistent with the focal distance of the sharp image actually viewed, reducing the probability that the viewer experiences visual fatigue.

1. A virtual reality display system, comprising:

a display device including a display element for forming a two-dimensional display image based on an image to be displayed, and a lens element for forming a first virtual display image based on the two-dimensional display image;

the tracking device is used for acquiring an eye image of the target object;

the processing device is used for determining a gazing object of the target object on the image to be displayed and the depth information of the gazing object based on the eye image of the target object, and generating a control instruction based on the depth information of the gazing object;

the focusing device is used for adjusting the position of a plane where a display image of the display system is located;

and the driving device is used for responding to the control instruction and adjusting the preset parameters of the focusing device so that the position of the plane where the display image of the display system is located is matched with the depth information of the gazing object.

2. The display system according to claim 1, wherein the display element includes a first display element and a second display element, wherein the first display element is configured to form a two-dimensional first display image based on the image to be displayed, and the second display element is configured to form a two-dimensional second display image based on the image to be displayed, and the first display image and the second display image are different;

the lens elements include a first lens element for forming a first sub-display virtual image based on the first display image and a second lens element for forming a second sub-display virtual image based on the second display image;

and the plane of the first sub-display virtual image and the plane of the second sub-display virtual image are the same plane.

3. The display system according to claim 2, wherein, when determining the gazing object of the target object on the image to be displayed based on the eye image of the target object, the processing device is specifically configured to perform:

when the target object gazes at the gazing object, a first coordinate and a first gazing direction of a left eyeball of the target object in a first coordinate system and a second coordinate and a second gazing direction of a right eyeball of the target object in the first coordinate system are obtained;

determining a third coordinate and a third gaze direction of a left eyeball of a virtual viewer corresponding to the target object in the second coordinate system based on a first coordinate and a first gaze direction of a left eyeball of the target object in the first coordinate system, and determining a fourth coordinate and a fourth gaze direction of a right eyeball of the virtual viewer corresponding to the target object in the second coordinate system based on a second coordinate and a second gaze direction of a right eyeball of the target object in the first coordinate system;

determining a focusing position of the virtual viewer corresponding to the target object in the second coordinate system based on a third coordinate and a third gaze direction corresponding to a left eyeball of the virtual viewer corresponding to the target object in the second coordinate system and a fourth coordinate and a fourth gaze direction corresponding to a right eyeball thereof;

determining a fifth gaze direction based on the in-focus position of the virtual viewer in the second coordinate system and the coordinates of the midpoint of the left and right eyeballs when the virtual viewer is looking forward;

determining a gazing object of the target object on the image to be displayed based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gazing direction;

in the first coordinate system, an X axis and a Y axis lie in the plane where the eyes of the target object are located and are mutually perpendicular, and a Z axis is perpendicular to the plane where the eyes of the target object are located;

the second coordinate system is a static coordinate system located in the virtual reality environment, and in the second coordinate system the X axis, the Y axis and the Z axis are mutually perpendicular.

4. The display system according to claim 3, wherein, when determining the gazing object of the target object on the image to be displayed based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gazing direction, the processing device is specifically configured to perform:

and taking, from among all the display objects of the image to be displayed in the second coordinate system, the display object that has the minimum distance to the fifth gaze line corresponding to the fifth gaze direction and is closest to the virtual viewer, as the gazing object of the target object on the image to be displayed.

5. The display system according to claim 3, wherein, when determining the depth information of the gazing object, the processing device is specifically configured to perform:

and determining the depth information of the gazing object based on the coordinates of the gazing object and the coordinates of the virtual viewer corresponding to the target object in the second coordinate system.

6. The display system according to claim 1, wherein, when adjusting, in response to the control instruction, the preset parameters of the focusing device so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the driving device is specifically configured to perform:

adjusting the distance between the focusing device and the lens element in response to the control instruction, so that the position of the plane of the display image of the display system is matched with the depth information of the gazing object;

or, responding to the control instruction, adjusting the curvature of the focusing device to enable the position of the plane of the display image of the display system to be matched with the depth information of the gazing object;

or responding to the control instruction, and adjusting the refractive index of the focusing device so that the position of the plane of the display image of the display system is matched with the depth information of the gazing object.

7. The display system according to claim 6, wherein the focusing device is located between the display element and the lens element, or on a side of the lens element facing away from the display element.

8. The display system according to claim 6, wherein the focusing device comprises a first focusing element and a second focusing element, wherein the first focusing element comprises a first focusing lens and a second focusing lens, the first focusing lens and the second focusing lens are liquid lenses, and the concave-convex properties of the first focusing lens and the second focusing lens are opposite.

9. The display system according to claim 8, wherein the focusing device further comprises a second focusing element, the second focusing element comprising a third focusing lens and a fourth focusing lens, wherein the third focusing lens and the fourth focusing lens are solid lenses.

10. A display method applied to the virtual reality display system according to any one of claims 1 to 9, the display system including a display device, a tracking device, a focusing device, and a driving device, wherein the display device includes a display element for forming a two-dimensional display image based on an image to be displayed, and a lens element for forming a first virtual display image based on the two-dimensional display image; the method comprises the following steps:

based on the eye image of the target object acquired by the tracking device, determining the gazing object of the target object on the image to be displayed and the depth information of the gazing object, generating a control instruction based on the depth information of the gazing object, sending the control instruction to the driving device, and adjusting the preset parameters of the focusing device through the driving device to enable the position of the plane where the display image of the display system is located to be matched with the depth information of the gazing object.

Technical Field

The application relates to the technical field of display, in particular to a virtual reality display system and a display method thereof.

Background

With the rapid development of display technology, stereoscopic display technologies are increasingly widely applied. Stereoscopic display is based mainly on binocular parallax: the left and right eyes of a viewer receive image signals from different angles, and the brain fuses the two signals to produce a stereoscopic visual effect.

However, when a stereoscopic display screen is viewed using the conventional virtual reality display device, a viewer is likely to experience visual fatigue.

Disclosure of Invention

In order to solve the above technical problems, embodiments of the present application provide a virtual reality display system and a display method, so as to solve the problem that when an existing virtual reality display device is used to view a stereoscopic display screen, a viewer is prone to visual fatigue.

In order to solve the above problem, the embodiment of the present application provides the following technical solutions:

a virtual reality display system, comprising:

a display device including a display element for forming a two-dimensional display image based on an image to be displayed, and a lens element for forming a first virtual display image based on the two-dimensional display image;

the tracking device is used for acquiring an eye image of the target object;

the processing device is used for determining a gazing object of the target object on the image to be displayed and the depth information of the gazing object based on the eye image of the target object, and generating a control instruction based on the depth information of the gazing object;

the focusing device is used for adjusting the position of a plane where a display image of the display system is located;

and the driving device is used for responding to the control instruction and adjusting the preset parameters of the focusing device so that the position of the plane where the display image of the display system is located is matched with the depth information of the gazing object.

Optionally, the display elements include a first display element and a second display element, where the first display element is configured to form a two-dimensional first display image based on the image to be displayed, the second display element is configured to form a two-dimensional second display image based on the image to be displayed, and the first display image and the second display image are different;

the lens elements include a first lens element for forming a first sub-display virtual image based on the first display image and a second lens element for forming a second sub-display virtual image based on the second display image;

and the plane of the first sub-display virtual image and the plane of the second sub-display virtual image are the same plane.

Optionally, the processing device is configured to, when determining, based on the eye image of the target object, a gazing object of the target object on the image to be displayed, specifically, to perform:

when the target object gazes at the gazing object, a first coordinate and a first gazing direction of a left eyeball of the target object in a first coordinate system and a second coordinate and a second gazing direction of a right eyeball of the target object in the first coordinate system are obtained;

determining a third coordinate and a third gaze direction of a left eyeball of a virtual viewer corresponding to the target object in the second coordinate system based on a first coordinate and a first gaze direction of a left eyeball of the target object in the first coordinate system, and determining a fourth coordinate and a fourth gaze direction of a right eyeball of the virtual viewer corresponding to the target object in the second coordinate system based on a second coordinate and a second gaze direction of a right eyeball of the target object in the first coordinate system;

determining a focusing position of the virtual viewer corresponding to the target object in the second coordinate system based on a third coordinate and a third gaze direction corresponding to a left eyeball of the virtual viewer corresponding to the target object in the second coordinate system and a fourth coordinate and a fourth gaze direction corresponding to a right eyeball thereof;

determining a fifth gaze direction based on the in-focus position of the virtual viewer in the second coordinate system and the coordinates of the midpoint of the left and right eyeballs when the virtual viewer is looking forward;

determining a gazing object of the target object on the image to be displayed based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gazing direction;

in the first coordinate system, an X axis and a Y axis lie in the plane where the eyes of the target object are located and are mutually perpendicular, and a Z axis is perpendicular to the plane where the eyes of the target object are located;

the second coordinate system is a static coordinate system located in the virtual reality environment, and in the second coordinate system the X axis, the Y axis and the Z axis are mutually perpendicular.

Optionally, when the processing device determines, based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gaze direction, a gaze object of the target object on the image to be displayed, the processing device is specifically configured to perform:

and taking, from among all the display objects of the image to be displayed in the second coordinate system, the display object that has the minimum distance to the fifth gaze line corresponding to the fifth gaze direction and is closest to the virtual viewer, as the gazing object of the target object on the image to be displayed.
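This selection rule can be sketched as follows. The code is an illustrative interpretation only: the object names, coordinates, and the tie-breaking via a lexicographic key are assumptions, not the patent's stated algorithm.

```python
import math

def select_gazing_object(eye_mid, gaze_dir, objects):
    """Pick the display object nearest the fifth gaze line; among equally
    near objects, prefer the one closest to the virtual viewer.
    `objects` maps illustrative names to 3-D coordinates in the second
    coordinate system."""
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = [c / norm for c in gaze_dir]  # unit gaze direction
    best_key, best_name = None, None
    for name, pos in objects.items():
        v = [p - e for p, e in zip(pos, eye_mid)]
        proj = sum(vi * di for vi, di in zip(v, d))
        # perpendicular distance from the object to the gaze line
        perp = [vi - proj * di for vi, di in zip(v, d)]
        line_dist = math.sqrt(sum(c * c for c in perp))
        viewer_dist = math.sqrt(sum(c * c for c in v))
        key = (line_dist, viewer_dist)  # line distance first, then proximity
        if best_key is None or key < best_key:
            best_key, best_name = key, name
    return best_name
```

For example, with the gaze line along +Z, an object directly on the line at depth 2 wins over one on the line at depth 5 and over one offset from the line.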

Optionally, when determining the depth information of the gazing object, the processing device is specifically configured to perform:

and determining the depth information of the gazing object based on the coordinates of the gazing object and the coordinates of the virtual viewer corresponding to the target object in the second coordinate system.
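One plausible reading of this step (the patent names the inputs but does not fix the metric, so this is a hedged sketch) is the Euclidean distance between the virtual viewer and the gazing object in the second coordinate system:

```python
import math

def gaze_depth(viewer_pos, object_pos):
    """Depth information of the gazing object, read here as the Euclidean
    distance between the virtual viewer and the gazed object in the
    second coordinate system (an assumption for illustration)."""
    return math.sqrt(sum((o - v) ** 2 for v, o in zip(viewer_pos, object_pos)))
```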

Optionally, when adjusting, in response to the control instruction, the preset parameters of the focusing device so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the driving device is specifically configured to perform:

adjusting the distance between the focusing device and the lens element in response to the control instruction, so that the position of the plane of the display image of the display system is matched with the depth information of the gazing object;

or, responding to the control instruction, adjusting the curvature of the focusing device to enable the position of the plane of the display image of the display system to be matched with the depth information of the gazing object;

or responding to the control instruction, and adjusting the refractive index of the focusing device so that the position of the plane of the display image of the display system is matched with the depth information of the gazing object.
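These three adjustments all amount to shifting the plane of the virtual image. As a rough illustration only (the ideal thin-lens model and all numbers below are assumptions, not taken from the patent), the relation between the display-to-lens distance and the depth of the virtual image can be sketched as:

```python
def display_distance_for_depth(f, target_depth):
    """Distance between the display element and an ideal thin lens of
    focal length f that places the virtual image at `target_depth` in
    front of the lens. Uses 1/do + 1/di = 1/f with di = -target_depth
    for a virtual image; all lengths in consistent units."""
    return f * target_depth / (f + target_depth)

def virtual_image_depth(f, d_o):
    """Inverse relation: depth of the virtual image when the display sits
    at distance d_o < f from the lens."""
    return f * d_o / (f - d_o)
```

For instance, with f = 50 mm, placing the virtual image 2 m away requires the display to sit just inside the focal length; nudging that distance (or, equivalently, changing the effective f via curvature or refractive index) moves the image plane.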

Optionally, the focusing device is located between the display element and the lens element, or the focusing device is located on a side of the lens element facing away from the display element.

Optionally, the focusing device comprises a first focusing element and a second focusing element, wherein the first focusing element comprises a first focusing lens and a second focusing lens, the first focusing lens and the second focusing lens are liquid lenses, and the concave-convex properties of the first focusing lens and the second focusing lens are opposite.

Optionally, the focusing device further comprises a second focusing element, the second focusing element comprising a third focusing lens and a fourth focusing lens, wherein the third focusing lens and the fourth focusing lens are solid lenses.

A display method applied to any one of the virtual reality display systems described above, the display system including a display device, a tracking device, a focusing device, and a driving device, wherein the display device includes a display element for forming a two-dimensional display image based on an image to be displayed and a lens element for forming a first virtual display image based on the two-dimensional display image; the method comprises the following steps:

based on the eye image of the target object acquired by the tracking device, determining the gazing object of the target object on the image to be displayed and the depth information of the gazing object, generating a control instruction based on the depth information of the gazing object, sending the control instruction to the driving device, and adjusting the preset parameters of the focusing device through the driving device to enable the position of the plane where the display image of the display system is located to be matched with the depth information of the gazing object.

Compared with the prior art, the technical scheme has the following advantages:

the virtual reality display system provided by the embodiments of the present application can, based on an eye image of a target object, obtain the gazing object of the target object on the image to be displayed and the depth information of the gazing object, and, by adjusting the preset parameters of the focusing device, make the position of the plane where the display image of the display system is located match the depth information of the gazing object, so that the distance from the display image of the display system to the viewer equals the distance corresponding to the parallax of the display image of the display system. The focal distance the viewer derives from the viewed image thus remains consistent with the focal distance of the sharp image actually viewed, reducing the probability that the viewer experiences visual fatigue.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic structural diagram of a virtual reality display system according to an embodiment of the present disclosure;

fig. 2 is a schematic diagram illustrating relative positions of a display device, a focusing device, a tracking device and a target object in a virtual reality display system according to an embodiment of the present disclosure;

fig. 3 is a schematic structural diagram of a tracking device in a virtual reality display system according to an embodiment of the present disclosure;

fig. 4 is a schematic view of an optical path direction of a target object when the target object gazes at a target a in a virtual reality display system according to an embodiment of the present disclosure;

fig. 5 is a schematic view of an optical path direction of a target object when looking at a target B in a virtual reality display system according to an embodiment of the present disclosure;

fig. 6 is a schematic diagram of determining, in a virtual reality display system provided in an embodiment of the present application, a gazing object of the target object on the image to be displayed based on an eye image of the target object;

fig. 7 is a schematic structural diagram of a focusing device in a virtual reality display system according to an embodiment of the present disclosure;

fig. 8 is a schematic optical path diagram of the first virtual display image formed based on the two-dimensional display image when the focusing device is located between the display element and the lens element, in a virtual reality display system provided in an embodiment of the present application;

fig. 9 is a schematic optical path diagram of the first virtual display image formed based on the two-dimensional display image when the focusing device is located on a side of the lens element facing away from the display element, in a virtual reality display system provided in an embodiment of the present application;

fig. 10 is a flowchart of a display method according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art, based on the embodiments given herein and without creative effort, shall fall within the protection scope of the present application.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein, and it will be apparent to those of ordinary skill in the art that the present application is not limited to the specific embodiments disclosed below.

As described in the background section, when a stereoscopic display screen is viewed using an existing virtual reality display device, a viewer is likely to experience visual fatigue.

This is because, when a viewer views a display screen using an existing virtual reality display device, the focal distance implied by the image seen by the left eye and the image seen by the right eye corresponds to the parallax at which the screen content was captured (e.g., two meters). However, when the viewer adjusts the focus of the left and right eyes to obtain a sharp picture, the focal distance actually required may be a different value (e.g., five meters). The focal distance implied by the viewed images therefore conflicts with the focal distance of the sharp image actually seen, and the viewer is likely to suffer visual fatigue.
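This conflict can be illustrated numerically (a hedged sketch, not part of the patent; the inter-pupillary distance and both example distances below are assumptions). The depth implied by vergence follows from where the two lines of sight cross, while accommodation is fixed by the headset optics:

```python
import math

def vergence_distance(ipd, left_angle, right_angle):
    """Depth at which the two lines of sight cross. The eyes sit at
    x = -ipd/2 and x = +ipd/2 and each rotates inward by the given
    angle, in radians from straight ahead."""
    return ipd / (math.tan(left_angle) + math.tan(right_angle))

# Symmetric fixation on a point rendered 2 m away, 64 mm IPD:
inward = math.atan2(0.064 / 2, 2.0)
cue_from_parallax = vergence_distance(0.064, inward, inward)  # vergence says ~2 m
accommodation_distance = 5.0   # depth of the sharp image (illustrative)
conflict = accommodation_distance - cue_from_parallax  # nonzero gap -> fatigue
```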

In view of this, an embodiment of the present application provides a virtual reality display system, as shown in fig. 1, fig. 2 and fig. 3, the virtual reality display system includes:

a display device 100, the display device 100 including a display element 101 and a lens element 102, the display element 101 being configured to form a two-dimensional display image based on an image to be displayed, the lens element 102 being configured to form a first virtual display image based on the two-dimensional display image;

a tracking device 200 for acquiring an eye image of a target object;

the processing device 300 is configured to determine, based on an eye image of the target object, a gazing object of the target object on the image to be displayed and depth information of the gazing object, and generate a control instruction based on the depth information of the gazing object;

a focusing device 400 for adjusting the position of a plane on which a display image of the display system is located;

and the driving device 500 is configured to respond to the control instruction, and adjust preset parameters of the focusing device 400 so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object.

It should be noted that, in this embodiment of the present application, the target object may be the eyes of a viewer in the real environment; in another embodiment, the target object may be the head of the viewer; and in other embodiments, the target object may be the viewer himself. This is not limited by the present application, as long as the target object includes the eyes of the viewer in the real environment.

On the basis of the above embodiments, in one embodiment of the present application the display element is an LED display screen, while in other embodiments it may be another type of display screen; this is not limited in the present application, as the case may be.

On the basis of any of the above embodiments, in an embodiment of the present application, the preset parameter includes at least one of a curvature of the focusing device, a refractive index of the focusing device, or a distance between the focusing device and the lens element, and in other embodiments of the present application, the preset parameter may further include other parameters.

Optionally, in this embodiment of the application, when acquiring an eye image of the target object, the tracking device is specifically configured to acquire the eye image of the target object in real time. Specifically, the tracking device is an eye-tracking device, which accurately acquires the pupil position with short delay and a high refresh rate.

Based on the above embodiments, in a specific embodiment of the present application, the refresh rate of the eye tracking device is greater than or equal to 100 Hz. However, the present application is not limited thereto, as the case may be.

Therefore, the virtual reality display system provided by the embodiments of the present application can, based on the eye image of the target object, obtain the gazing object of the target object on the image to be displayed and the depth information of the gazing object, and, by adjusting the preset parameters of the focusing device, make the position of the plane where the display image of the display system is located match the depth information of the gazing object. The distance from the display image of the display system to the viewer thus equals the distance corresponding to the parallax of the display image, keeping the focal distance the viewer derives from the viewed image consistent with the focal distance of the sharp image actually viewed, and reducing the probability that the viewer experiences visual fatigue.

Continuing with figs. 4 and 5, in a specific embodiment of the present application the gazing object of the target object changes from target A (fig. 4) to target B (fig. 5). The specific working process is as follows: the processing device determines, based on the eye image of the target object, that the gazing object is target B, together with the depth information of target B, and generates a control instruction to the driving device based on that depth information. The driving device responds to the control instruction by adjusting the preset parameters of the focusing device, moving the plane of the display image of the display system from position A' to position B', so that the distance between the plane of the display image and the target object equals the distance corresponding to the parallax of the display image. The focal distance the viewer derives from the viewed image is thereby kept consistent with the focal distance of the sharp image actually viewed, reducing the probability of visual fatigue. Note that, in fig. 4, the solid lines between the viewer's eyes and the display element are the actual optical paths when viewing target A, and the broken lines are the optical paths subjectively perceived when viewing target A; the solid and broken lines in fig. 5 have the same meaning for target B.
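The working process above can be summarized as one step of a per-frame adjustment loop. The classes and attribute names below are hypothetical stand-ins for the patent's devices, not its actual interfaces:

```python
class Focuser:
    """Stand-in for the focusing device: its single preset parameter is
    modeled here as the depth of the plane of the display image
    (an assumption for illustration)."""
    def __init__(self):
        self.image_plane_depth = None

class Driver:
    """Stand-in for the driving device: applies a control instruction
    to the focusing device."""
    def apply(self, instruction, focuser):
        focuser.image_plane_depth = instruction["target_depth"]

def adjust_for_gaze(gaze_depth, driver, focuser):
    """One loop step: the depth of the gazing object (e.g., target B)
    becomes a control instruction, and the driver moves the image plane
    so that it matches that depth."""
    instruction = {"target_depth": gaze_depth}
    driver.apply(instruction, focuser)
    return focuser.image_plane_depth
```

In this sketch, a gaze shift from target A to target B simply re-runs `adjust_for_gaze` with B's depth, mirroring the move from position A' to position B'.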

On the basis of any one of the above embodiments of the present application, in one embodiment of the present application, the display element includes a first display element and a second display element, wherein the first display element is configured to form a two-dimensional first display image based on the image to be displayed, and the second display element is configured to form a two-dimensional second display image based on the image to be displayed, and the first display image and the second display image are different;

the lens elements include a first lens element for forming a first sub-display virtual image based on the first display image and a second lens element for forming a second sub-display virtual image based on the second display image;

and the plane where the first sub-display virtual image is located and the plane where the second sub-display virtual image is located are the same plane.

It should be noted that the first lens element and the second lens element are convex lenses, but this is not limited by the present application; in other embodiments, the first lens element and the second lens element may also be other optical elements, as long as it is ensured that the first lens element forms a first sub-display virtual image based on the first display image and the second lens element forms a second sub-display virtual image based on the second display image.

On the basis of any of the above embodiments, in an embodiment of the present application, as shown in fig. 2, the tracking device includes:

the infrared detection device comprises an infrared emitting element 201 and an infrared detecting element 202, wherein the infrared emitting element 201 is used for emitting infrared light to the eyes of the target object, and the infrared detecting element 202 is used for receiving the infrared light reflected by the eyes of the target object and generating an infrared image based on the infrared light reflected by the eyes of the target object, and the infrared image comprises the eye image of the target object.

The infrared emitting element in the tracking device can provide enough infrared light to the eyes of the viewer, ensuring that the infrared detecting element of the tracking device receives enough infrared light reflected by the eyes of the target object to produce a clear infrared image. This improves the accuracy of the tracking device and, in turn, makes the processing device more accurate when determining the gazing direction of the target object based on the infrared image.
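The patent does not say how the eyeball coordinate is obtained from the infrared image; a common approach (an illustrative assumption here, not the patent's method) is to threshold the dark pupil in an off-axis illumination setup and take the centroid of the dark pixels:

```python
import numpy as np

def pupil_center(ir_image, threshold=40):
    """Estimate the eyeball (pupil) coordinate in image space as the centroid
    of pixels darker than `threshold`; with off-axis infrared illumination
    (a dark-pupil setup) the pupil reflects little light back to the sensor."""
    ys, xs = np.nonzero(ir_image < threshold)
    if xs.size == 0:
        return None  # no pupil visible in this frame
    return xs.mean(), ys.mean()

# Synthetic 100x100 "infrared image": bright background, dark pupil near (60, 40).
img = np.full((100, 100), 200, dtype=np.uint8)
img[35:45, 55:65] = 10
center = pupil_center(img)
print(center)  # centroid of the dark patch, ≈ (59.5, 39.5)
```

The centroid is then what the processing device maps into the first coordinate system.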

On the basis of any of the foregoing embodiments of the present application, in an embodiment of the present application, as shown in fig. 6, the processing device is configured to, when determining, based on the eye image of the target object, a gazing object of the target object on the image to be displayed, specifically, perform:

when the target object gazes at the gazing object, a first coordinate and a first gazing direction of a left eyeball of the target object in a first coordinate system and a second coordinate and a second gazing direction of a right eyeball of the target object in the first coordinate system are obtained;

determining a third coordinate and a third gaze direction of a left eyeball of a virtual viewer corresponding to the target object in the second coordinate system based on a first coordinate and a first gaze direction of a left eyeball of the target object in the first coordinate system, and determining a fourth coordinate and a fourth gaze direction of a right eyeball of the virtual viewer corresponding to the target object in the second coordinate system based on a second coordinate and a second gaze direction of a right eyeball of the target object in the first coordinate system;

determining a focusing position of the virtual viewer in the second coordinate system based on a third coordinate and a third gaze direction corresponding to a left eyeball of the virtual viewer corresponding to the target object and a fourth coordinate and a fourth gaze direction corresponding to a right eyeball of the virtual viewer in the second coordinate system;

determining a fifth gaze direction based on the virtual viewer focus position in the second coordinate system and coordinates of a midpoint of a left eyeball and a right eyeball when the virtual viewer is looking forward;

determining a gazing object of the target object on the image to be displayed based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gazing direction;

in the first coordinate system, the X axis and the Y axis are located in the plane where the eyes of the target object are located and are mutually perpendicular, and the Z axis is perpendicular to the plane where the eyes of the target object are located;

the second coordinate system is located in the virtual reality environment and is a stationary coordinate system; in the second coordinate system, the X axis, the Y axis and the Z axis are mutually perpendicular.

In the embodiment of the present application, the specific orientations of the X axis, the Y axis, and the Z axis in the second coordinate system are not limited, and may be set by the developer of the virtual reality environment as needed, as long as the second coordinate system is always a stationary coordinate system in the virtual reality environment; that is, the second coordinate system does not change regardless of how the display objects and the virtual viewer change in the virtual reality environment.

On the basis of the foregoing embodiments, in one embodiment of the present application, when determining the gaze direction of the left eyeball of the target object (i.e. the first gaze direction) based on the first coordinate of the left eyeball of the target object in the first coordinate system, the processing device is specifically configured to:

determine the first gaze direction of the left eyeball of the target object, namely the imaging angle of the left eyeball of the target object in the first coordinate system, based on the first coordinate of the left eyeball of the target object in the first coordinate system and a pre-stored first correspondence between the coordinate of the left eyeball of the target object in the first coordinate system and the imaging angle. The imaging angle of the left eyeball of the target object in the first coordinate system is the included angle between the gaze direction of the left eyeball of the target object in the first coordinate system and the Z axis of the first coordinate system.

On the basis of the foregoing embodiments, in one embodiment of the present application, when determining the gaze direction of the right eyeball of the target object (i.e. the second gaze direction) based on the second coordinate of the right eyeball of the target object in the first coordinate system, the processing device is specifically configured to:

determine the gaze direction of the right eyeball of the target object, namely the imaging angle of the right eyeball of the target object in the first coordinate system, based on the second coordinate of the right eyeball of the target object in the first coordinate system and a pre-stored second correspondence between the coordinate of the right eyeball of the target object in the first coordinate system and the imaging angle. The imaging angle of the right eyeball of the target object in the first coordinate system is the included angle between the gaze direction of the right eyeball of the target object in the first coordinate system and the Z axis of the first coordinate system.

On the basis of the above embodiments, in an embodiment of the present application, the first correspondence and the second correspondence are the same correspondence; that is, the same correspondence is queried with the coordinate of the left eyeball and the coordinate of the right eyeball of the target object in the first coordinate system to obtain the imaging angle of the left eyeball and the imaging angle of the right eyeball in the first coordinate system. In another embodiment of the present application, the first correspondence and the second correspondence are different correspondences; that is, the correspondence for the left eyeball and the correspondence for the right eyeball are queried separately with the coordinate of the left eyeball and the coordinate of the right eyeball of the target object in the first coordinate system, respectively, to obtain the imaging angle of the left eyeball and the imaging angle of the right eyeball in the first coordinate system.

Specifically, in the first coordinate system, when the target object gazes straight ahead, the coordinate of the left eyeball of the target object in the first coordinate system is $(x_{L0}, y_{L0})$ and the coordinate of the right eyeball in the first coordinate system is $(x_{R0}, y_{R0})$; when the target object gazes at the gazing object in the image to be displayed, the first coordinate $R_L'$ of the left eyeball of the target object in the first coordinate system is $(x_L, y_L)$ and the second coordinate $R_R'$ of the right eyeball in the first coordinate system is $(x_R, y_R)$. The imaging angle corresponding to the gaze direction of the left eyeball of the target object in the first coordinate system, expressed in polar coordinates, is $(\theta_L, \varphi_L)$, and the imaging angle corresponding to the gaze direction of the right eyeball of the target object in the first coordinate system, expressed in polar coordinates, is $(\theta_R, \varphi_R)$. Then:

the first coordinate of the left eyeball of the target object in the first coordinate system is:

$R_L' = (x_L - x_0)\,e_x' + (y_L - y_0)\,e_y'$;

and the imaging angle $(\theta_L, \varphi_L)$ corresponding to the first gaze direction of the left eyeball of the target object in the first coordinate system can be expressed as:

$(\theta_L, \varphi_L) = f_L(R_L')$;

wherein $e_x'$ is the unit vector of the X axis in the first coordinate system, $e_y'$ is the unit vector of the Y axis in the first coordinate system, and $f_L$ denotes the correspondence between the imaging angle $(\theta_L, \varphi_L)$ corresponding to the first gaze direction of the left eyeball and the first coordinate $R_L'$ of the left eyeball of the target object in the first coordinate system.

Similarly, the second coordinate of the right eyeball of the target object in the first coordinate system is:

$R_R' = (x_R - x_0)\,e_x' + (y_R - y_0)\,e_y'$;

and the imaging angle $(\theta_R, \varphi_R)$ corresponding to the second gaze direction of the right eyeball of the target object in the first coordinate system can be expressed as:

$(\theta_R, \varphi_R) = f_R(R_R')$;

wherein $f_R$ denotes the correspondence between the imaging angle $(\theta_R, \varphi_R)$ corresponding to the second gaze direction of the right eyeball and the second coordinate $R_R'$ of the right eyeball of the target object in the first coordinate system.
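The patent leaves the functional form of the correspondence between the eyeball-coordinate offset and the imaging angle unspecified (it is obtained by calibration below). Purely for illustration, the sketch assumes a simple model in which the offset magnitude is proportional to the tangent of the polar angle θ and the azimuth φ is the direction of the offset in the X-Y plane; the constant `k` is invented:

```python
import math

def imaging_angle(dx, dy, k=0.5):
    """Map an eyeball-coordinate offset (dx, dy) from the straight-ahead
    position to a polar imaging angle (theta, phi), under an assumed
    linear model: |offset| = k * tan(theta).
    theta: included angle between the gaze direction and the Z axis;
    phi:   azimuth of the offset in the X-Y plane."""
    theta = math.atan(math.hypot(dx, dy) / k)
    phi = math.atan2(dy, dx)
    return theta, phi

# Straight ahead: zero offset gives theta = 0.
print(imaging_angle(0.0, 0.0))   # (0.0, 0.0)
# Offset of exactly k along +X gives theta = 45 degrees, phi = 0.
print(imaging_angle(0.5, 0.0))
```

In practice the real correspondence would come from the calibration procedure described next, not from a closed-form model.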

On the basis of any one of the above embodiments, in an embodiment of the present application, the method for obtaining the correspondence between the coordinates of the eyeballs of the target object in the first coordinate system and the imaging angles includes:

acquiring the coordinate $(x_{L0}, y_{L0})$ of the left eyeball in the first coordinate system and the coordinate $(x_{R0}, y_{R0})$ of the right eyeball in the first coordinate system when the target object gazes straight ahead;

acquiring, when the target object gazes at gazing objects in different directions (that is, when the included angle between the gaze direction of the target object and the Z axis of the first coordinate system, i.e. the imaging angle, differs), the coordinate of the left eyeball of the target object in the first coordinate system together with the imaging angle corresponding to the gaze line of the left eyeball in the first coordinate system, and the coordinate of the right eyeball in the first coordinate system together with the imaging angle corresponding to the gaze line of the right eyeball in the first coordinate system;

based on the coordinates $(x_L, y_L)$ of the left eyeball of the target object in the first coordinate system and the imaging angles $(\theta_L, \varphi_L)$ corresponding to the gaze line of the left eyeball when the target object gazes at gazing objects in the different directions, obtaining the correspondence between the coordinate of the left eyeball in the first coordinate system and the imaging angle of the gaze line of the left eyeball, namely the correspondence between $(x_L - x_{L0}, y_L - y_{L0})$ and $(\theta_L, \varphi_L)$;

based on the coordinates $(x_R, y_R)$ of the right eyeball of the target object in the first coordinate system and the imaging angles $(\theta_R, \varphi_R)$ corresponding to the gaze line of the right eyeball when the target object gazes at gazing objects in the different directions, obtaining the correspondence between the coordinate of the right eyeball in the first coordinate system and the imaging angle of the gaze line of the right eyeball, namely the correspondence between $(x_R - x_{R0}, y_R - y_{R0})$ and $(\theta_R, \varphi_R)$.

That is, in the embodiment of the present application, after the coordinates of the left eyeball and the right eyeball of the target object are obtained based on the eye image of the target object, the position offsets relative to the straight-ahead gaze, namely $(x_L - x_{L0}, y_L - y_{L0})$ and $(x_R - x_{R0}, y_R - y_{R0})$, are calculated first; the first correspondence and the second correspondence are then queried with these offsets to obtain the gaze direction of the left eyeball and the gaze direction of the right eyeball of the target object in the first coordinate system.

In other embodiments of the present application, the correspondence between the coordinate of the left eyeball of the target object in the first coordinate system and the imaging angle of the gaze line of the left eyeball may also be the correspondence between $(x_L - x_0, y_L - y_0)$ and $(\theta_L, \varphi_L)$, and the correspondence between the coordinate of the right eyeball in the first coordinate system and the imaging angle of the gaze line of the right eyeball may also be the correspondence between $(x_R - x_0, y_R - y_0)$ and $(\theta_R, \varphi_R)$; the present application is not limited in this regard, as the case may be.

On the basis of the above embodiments, in an embodiment of the present application, to reduce the error when the first correspondence and the second correspondence are used to determine the gaze direction of the target object in actual use, when determining the first correspondence and the second correspondence, the coordinates of the eyeballs of the target object in the same gaze direction need to be measured multiple times, and the coordinates measured multiple times under the same gaze direction are fitted, so as to determine the correspondence between that gaze direction and the corresponding eyeball coordinates of the target object.
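The repeated-measurement calibration described above can be sketched as follows: for each known gaze direction the eyeball coordinate is measured several times, the repeats are fitted (here by simple averaging, which is the least-squares fit for a constant), and the resulting offset-to-angle table is queried by nearest neighbour. All numeric values are invented for illustration:

```python
import math

def build_correspondence(samples, origin):
    """samples: {(theta, phi): [(x, y), ... repeated measurements ...]}.
    Returns a list of (offset, angle) pairs with the repeats averaged."""
    x0, y0 = origin
    table = []
    for angle, coords in samples.items():
        mx = sum(x for x, _ in coords) / len(coords)
        my = sum(y for _, y in coords) / len(coords)
        table.append(((mx - x0, my - y0), angle))
    return table

def query(table, offset):
    """Nearest-neighbour lookup: return the imaging angle whose stored
    offset is closest to the measured offset."""
    return min(table, key=lambda entry: math.dist(entry[0], offset))[1]

origin = (5.0, 5.0)  # eyeball coordinate when gazing straight ahead
samples = {
    (0.0, 0.0):         [(5.0, 5.0), (5.1, 4.9)],  # straight ahead, measured twice
    (0.3, 0.0):         [(6.0, 5.0), (6.1, 5.1)],  # gaze tilted toward +X
    (0.3, math.pi / 2): [(5.0, 6.0), (4.9, 6.1)],  # gaze tilted toward +Y
}
table = build_correspondence(samples, origin)
print(query(table, (1.02, 0.03)))  # -> (0.3, 0.0)
```

A deployed system would interpolate between calibration points rather than snap to the nearest one, but the averaging of repeats is the fitting step the text refers to.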

On the basis of any of the foregoing embodiments, in an embodiment of the present application, when determining the third coordinate and the third gaze direction of the left eyeball of the virtual viewer corresponding to the target object in the second coordinate system based on the first coordinate and the first gaze direction of the left eyeball of the target object in the first coordinate system, and determining the fourth coordinate and the fourth gaze direction of the right eyeball of the virtual viewer in the second coordinate system based on the second coordinate and the second gaze direction of the right eyeball of the target object in the first coordinate system, the processing device is specifically configured to perform:

determining a third coordinate and a third gaze direction of a left eyeball of the virtual viewer (i.e. a character corresponding to the target object in the game) in the second coordinate system by using a first conversion matrix based on a first coordinate and a first gaze direction of the left eyeball of the target object in the first coordinate system;

and determining a fourth coordinate and a fourth gaze direction of the right eyeball of the virtual viewer (namely the character corresponding to the target object in the game) in the second coordinate system by utilizing a second conversion matrix based on the second coordinate and the second gaze direction of the right eyeball of the target object in the first coordinate system.

Optionally, on the basis of the above embodiment, in an embodiment of the present application, the first conversion matrix and the second conversion matrix are the same, but the present application does not limit this, and in other embodiments of the present application, the first conversion matrix and the second conversion matrix may also be different, as the case may be.

The following description takes the case where the first conversion matrix and the second conversion matrix are the same as an example.

Specifically, in one embodiment of the present application, the first conversion matrix and the second conversion matrix are both $A_H$, and:

the first coordinate of the left eyeball of the target object in the first coordinate system is $R_L'$ and the first gaze direction of the left eyeball of the target object in the first coordinate system is $\hat{v}_L'$; the second coordinate of the right eyeball of the target object in the first coordinate system is $R_R'$ and the second gaze direction of the right eyeball of the target object in the first coordinate system is $\hat{v}_R'$; and the coordinate in the second coordinate system of the midpoint of the left eyeball coordinate and the right eyeball coordinate of the virtual viewer corresponding to the target object, when the virtual viewer gazes straight ahead, is $R_E = (x, y, z)$. Then:

the third coordinate $R_L$ in the second coordinate system of the left eyeball of the virtual viewer corresponding to the target object is: $R_L = R_E + A_H R_L'$;

the third gaze direction $\hat{v}_L$ in the second coordinate system of the left eyeball of the virtual viewer corresponding to the target object is: $\hat{v}_L = A_H \hat{v}_L'$;

the fourth coordinate $R_R$ in the second coordinate system of the right eyeball of the virtual viewer corresponding to the target object is: $R_R = R_E + A_H R_R'$;

the fourth gaze direction $\hat{v}_R$ in the second coordinate system of the right eyeball of the virtual viewer corresponding to the target object is: $\hat{v}_R = A_H \hat{v}_R'$.

It should be noted that, in the above embodiment, the first conversion matrix and the second conversion matrix $A_H$ are determined based on the relative positional relationship $(\alpha_H, \beta_H, \gamma_H)$ of a third coordinate system to the second coordinate system in the virtual reality environment, wherein the third coordinate system is located in the virtual reality environment; in the third coordinate system, the X axis and the Y axis are located in the plane where the eyes of the virtual viewer are located in the virtual reality environment and are mutually perpendicular, and the Z axis is perpendicular to the plane where the eyes of the virtual viewer are located in the virtual reality environment. Since the determination method is well known to those skilled in the art, it will not be described in detail herein.

On the basis of the foregoing embodiments of the present application, in one embodiment of the present application, the processing device is specifically configured to perform, when determining the focus position of the virtual viewer in the second coordinate system based on the third coordinate and the third gaze direction of the left eye of the virtual viewer (i.e., the character in the game) corresponding to the target object in the second coordinate system and the fourth coordinate and the fourth gaze direction of the right eye of the virtual viewer:

determining the gaze direction of the left eyeball of the virtual viewer, namely the third gaze direction, based on the third coordinate of the left eyeball of the virtual viewer in the second coordinate system;

determining a third gaze line, namely the gaze line $l_L$ of the left eyeball of the virtual viewer in the second coordinate system, by taking the third coordinate as the initial point and the third gaze direction as the vector direction; its expression can be written as: $r = R_L + t\,\hat{v}_L$;

determining the gaze direction of the right eyeball of the virtual viewer, namely the fourth gaze direction, based on the fourth coordinate of the right eyeball of the virtual viewer in the second coordinate system;

determining a fourth gaze line, namely the gaze line $l_R$ of the right eyeball of the virtual viewer in the second coordinate system, by taking the fourth coordinate as the initial point and the fourth gaze direction as the vector direction; its expression can be written as: $r = R_R + t\,\hat{v}_R$;

determining an in-focus position of the virtual viewer in a second coordinate system based on intersection positions of the third gaze line and the fourth gaze line in the second coordinate system.

In the expression of the third gaze line and the fourth gaze line in the above embodiment, r represents a straight line where the third gaze line or the fourth gaze line is located, and t represents an equation parameter, i.e., an independent variable in the above equation.

In theory, the third gaze line $l_L$ and the fourth gaze line $l_R$ must meet at a point in the second coordinate system. In practical use, however, the two gaze lines may not lie in the same plane, so that $l_L$ and $l_R$ may not intersect in the second coordinate system. Therefore, on the basis of the above embodiment, in an embodiment of the present application, if $l_L$ and $l_R$ directly intersect at a point in the second coordinate system, the position of the intersection point is the focusing position $R_{T0}$ of the virtual viewer in the virtual reality environment; if $l_L$ and $l_R$ do not directly intersect in the second coordinate system, the two points on $l_L$ and $l_R$ with the smallest distance between them are taken first, and the midpoint of these two points is recorded as the focusing position $R_{T0}$ of the virtual viewer in the virtual reality environment. Then, based on the coordinate $R_{T0}$ of the focusing position in the second coordinate system and the coordinate $R_E$ of the midpoint of the left eyeball and the right eyeball when the virtual viewer gazes straight ahead, the fifth gaze direction, namely the fifth gaze line $l_0$ of the virtual viewer in the second coordinate system, may be determined; its expression can be written as: $r = R_E + t(R_{T0} - R_E)$.
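The intersect-or-midpoint rule can be implemented with the standard closest-points formula for two (possibly skew) lines of the form r = R + t·d; when the gaze lines genuinely intersect, the two closest points coincide, so the midpoint is the intersection itself. A sketch with invented eye geometry:

```python
import numpy as np

def focus_position(R_L, d_L, R_R, d_R):
    """Midpoint of the closest points of the gaze lines r = R + t*d;
    this is R_T0 (and the exact intersection when the lines meet)."""
    w = R_L - R_R
    a, b, c = d_L @ d_L, d_L @ d_R, d_R @ d_R
    d, e = d_L @ w, d_R @ w
    denom = a * c - b * b            # zero only for parallel gaze lines
    t_L = (b * e - c * d) / denom
    t_R = (a * e - b * d) / denom
    return 0.5 * ((R_L + t_L * d_L) + (R_R + t_R * d_R))

# Slightly skew lines: eyes 6 cm apart, gazes converging near (0, 0, 1)
# but offset a little in Y so they never exactly meet.
R_L, d_L = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.001, 1.0])
R_R, d_R = np.array([0.03, 0.0, 0.0]), np.array([-0.03, -0.001, 1.0])
R_T0 = focus_position(R_L, d_L, R_R, d_R)

# Fifth gaze direction: from the eye midpoint R_E through R_T0.
R_E = np.array([0.0, 0.0, 0.0])
fifth_dir = (R_T0 - R_E) / np.linalg.norm(R_T0 - R_E)
print(np.round(R_T0, 6))
```

By symmetry the Y offsets cancel, so R_T0 lands on the Z axis just short of depth 1.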

On the basis of any of the above embodiments, in an embodiment of the present application, the origin of the first coordinate system is the midpoint of the left eyeball and the right eyeball of the target object when the target object gazes straight ahead. In other embodiments of the present application, the origin of the first coordinate system may also be another point, which is not limited in the present application, as the case may be.

The following description takes as an example the case where the origin of the first coordinate system is the midpoint of the left eyeball and the right eyeball of the target object when the target object gazes straight ahead.

In this embodiment of the present application, before the virtual reality display system is used for the first time, the virtual reality display system needs to correct the origin of the first coordinate system, which specifically includes: obtaining the coordinate $(x_{L0}, y_{L0})$ of the left eyeball in the first coordinate system and the coordinate $(x_{R0}, y_{R0})$ of the right eyeball in the first coordinate system when the target object gazes straight ahead, and calculating the position $(x_0, y_0)$ of the origin in the first coordinate system from these coordinates, where $x_0 = (x_{L0} + x_{R0})/2$ and $y_0 = (y_{L0} + y_{R0})/2$.

On the basis of the above embodiments, in an embodiment of the present application, correcting the origin of the first coordinate system before the display system is used for the first time may mean that the origin is corrected once before the display system leaves the factory and not corrected again subsequently; that the origin is corrected each time before the display system is used; or that the origin is corrected before the current use whenever the time interval since the last use exceeds a preset duration. This is not limited in the present application and is determined as the case may be.

Since the virtual reality environment is a three-dimensional virtual scene and the display objects in the virtual reality environment do not all lie in the same plane, on the basis of the above embodiments, in an embodiment of the present application, when determining the gazing object of the target object on the image to be displayed based on the coordinates of the display objects of the image to be displayed in the second coordinate system and the fifth gaze direction, the processing device is specifically configured to perform: taking, among the display objects of the image to be displayed in the second coordinate system, the display object that has the smallest distance to the fifth gaze line corresponding to the fifth gaze direction and is closest to the virtual viewer as the gazing object of the target object on the image to be displayed; that is, the display object in the virtual reality environment that has the smallest distance to the fifth gaze line and is closest to the virtual viewer is taken as the gazing object of the target object.
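The selection rule just described, smallest perpendicular distance to the fifth gaze line with the display object nearest the virtual viewer preferred, can be sketched as follows (object names and coordinates are invented):

```python
import numpy as np

def pick_gazing_object(objects, R_E, d5):
    """objects: {name: position in the second coordinate system}.
    Choose the object with the smallest distance to the fifth gaze line
    (the line through R_E along d5); among near-equal distances, prefer
    the object closest to the virtual viewer (handles occlusion)."""
    d5 = d5 / np.linalg.norm(d5)
    def key(name):
        v = objects[name] - R_E
        line_dist = np.linalg.norm(v - (v @ d5) * d5)   # perpendicular distance
        return (round(line_dist, 3), np.linalg.norm(v))  # tie-break by range
    return min(objects, key=key)

R_E = np.array([0.0, 0.0, 0.0])
d5 = np.array([0.0, 0.0, 1.0])           # gazing straight down the Z axis
objects = {
    "cup":  np.array([0.01, 0.0, 1.0]),  # on the gaze line, near
    "door": np.array([0.01, 0.0, 4.0]),  # on the gaze line, far behind the cup
    "lamp": np.array([0.8, 0.5, 2.0]),   # off to the side
}
print(pick_gazing_object(objects, R_E, d5))  # -> cup
```

Rounding the line distance before comparing is one simple way to express "equally close to the gaze line, so take the nearer object"; a production system would use a proper tolerance.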

In the above embodiments, the coordinates and gaze directions of the left eyeball and the right eyeball of the target object in the first coordinate system are first converted into the second coordinate system, and the focusing position and the fifth gaze direction of the virtual viewer corresponding to the target object are then determined in the second coordinate system. However, the present application is not limited to this order: in other embodiments of the present application, the focusing position and a sixth gaze direction of the target object may first be determined in the first coordinate system based on the coordinates and gaze directions of the left eyeball and the right eyeball of the target object in the first coordinate system, and then converted into the second coordinate system to determine the fifth gaze direction, as the case may be.

Specifically, if the focusing position and the sixth gaze direction of the target object are first determined in the first coordinate system based on the coordinates and gaze directions of the left eyeball and the right eyeball of the target object in the first coordinate system and then converted into the second coordinate system, then, when determining the gazing object of the target object on the image to be displayed based on the eye image of the target object, the processing device is specifically configured to perform:

when the target object gazes at the gazing object, a first coordinate and a first gazing direction of a left eyeball of the target object in a first coordinate system and a second coordinate and a second gazing direction of a right eyeball of the target object in the first coordinate system are obtained;

determining a focus position of the target object based on a first coordinate and a first gaze direction of a left eye ball of the target object in a first coordinate system and a second coordinate and a second gaze direction of a right eye ball of the target object in the first coordinate system;

determining a sixth gaze direction of the target object in the first coordinate system based on the in-focus position of the target object and the midpoint of the left eye and the right eye when the target object is looking forward;

determining a fifth gaze direction of a virtual viewer to which the target object corresponds in the second coordinate system based on a sixth gaze direction of the target object in the first coordinate system;

determining a gazing object of the target object on the image to be displayed based on the coordinates of each display object in the image to be displayed in the second coordinate system and the fifth gazing direction;

the first coordinate system is located in a real environment, in the first coordinate system, an X axis and a Y axis are located in a plane where the target object eyes are located and are perpendicular to each other, and a Z axis is perpendicular to the plane where the target object eyes are located; the second coordinate system is located in the virtual reality environment, is a stationary coordinate system, and in the second coordinate system, the X-axis, the Y-axis and the Z-axis thereof are perpendicular to each other.

It should be noted that, in the embodiment of the present application, each display object in the virtual reality environment occupies a certain space, while in the second coordinate system the focusing position of the virtual viewer is a single point in the three-dimensional virtual scene. Therefore, in the embodiment of the present application, the gazing object may be the display image corresponding to a complete object in the image to be displayed (for example, a whole cup), or the display image corresponding to a point or a part of an object in the image to be displayed (for example, a point on a cup or the handle of a cup).

On the basis of any of the foregoing embodiments, in an embodiment of the present application, the processing device is specifically configured to, when performing the determining of the depth information of the gazing object, perform:

determining depth information h of the gazing object based on the coordinates of the gazing object and the coordinates of the virtual viewer in the second coordinate system. Wherein the depth information h of the gazing object is a distance between the eye of the virtual viewer corresponding to the target object and the gazing object in the second coordinate system.

On the basis of the foregoing embodiment, in an embodiment of the present application, the processing device is configured to, when performing the determining of the coordinate of the gazing object, specifically, perform: and obtaining the coordinate of the gazing object in the second coordinate system based on the stereo model information corresponding to the image to be displayed.

Alternatively, in one embodiment of the present application, the focusing device is located between the display element and the lens element. In this embodiment, the focusing device forms a second virtual display image based on the two-dimensional display image formed by the display element, and the lens element forms a first virtual display image based on the second virtual display image; the display image of the virtual reality display system is the first virtual display image formed by the lens element. That is, in this embodiment, when the driving device adjusts the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, it specifically adjusts the preset parameters of the focusing device in response to the control instruction so that the position of the plane of the first virtual display image matches the depth information of the gazing object.

In other embodiments of the present application, the focusing device may also be located on the side of the lens element facing away from the display element. In this embodiment, the lens element forms a first virtual display image based on the two-dimensional display image formed by the display element, and the focusing device forms a second virtual display image based on the first virtual display image; the display image of the virtual reality display system is the second virtual display image formed by the focusing device. That is, in this embodiment, when the driving device adjusts the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, it specifically adjusts the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the second virtual display image is located matches the depth information of the gazing object. The present application is not limited thereto, as the case may be.

The focusing device will be described below taking, as an example, the case where it is located on the side of the lens element facing away from the display element.

Specifically, in one embodiment of the present application, as shown in fig. 3, the focusing device 400 is located on the side of the lens element 102 away from the display element 101. In this embodiment, the two-dimensional display image of the display element 101 serves as the object plane of the lens element 102; the two-dimensional display image passes through the lens element 102 to form a first virtual display image, which in turn serves as the object plane of the focusing device 400 and passes through the focusing device 400 to form a second virtual display image (i.e., the display image formed by the display system). The distance between the second virtual display image and the focusing device along the Z-axis direction is H', the distance between the second virtual display image and the target object along the Z-axis direction is H, and the distance between the target object and the focusing device is D3. Since the second virtual display image is formed on the side of the focusing device away from the target object, these distances satisfy H' = H - D3.

It should be noted that, in this embodiment of the application, if the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the distance H between the second virtual display image and the target object along the Z-axis direction is equal to the distance h between the eye of the virtual viewer and the gazing object in the second coordinate system. Therefore, the distance h' between the second virtual display image and the focusing device along the Z-axis direction only needs to satisfy h' = h - D3; the distance from the display image of the display system to the viewer is then equal to the distance corresponding to the parallax of the display image of the display system, so that the focal distance the viewer derives from the viewed image is consistent with the focal distance of the actually viewed clear image, reducing the probability of visual fatigue for the viewer.

It should be noted that the distance between the second virtual display image and the focusing device 400 along the Z-axis direction is h', the focal length of the focusing device 400 is f', and the distance between the first virtual display image (formed by the two-dimensional display image passing through the lens element 102) and the focusing device 400 is x. These quantities satisfy the thin-lens formula:

1/x - 1/h' = 1/f'

It can be seen that the distance h' between the second virtual display image and the focusing device 400 along the Z-axis direction satisfies

h' = xf'/(f' - x)

Substituting h' = h - D3 into the above formula yields a formula relating the gazing depth h to the focal length f' of the focusing device:

f' = x(h - D3)/(h - D3 - x)

Further, the focal length f' that the focusing device needs to achieve can be calculated from the value of the depth information h of the gazing object.
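As a quick sanity check, the relation above can be turned into a small helper. The function below is an illustrative sketch, not part of the embodiment; the function name and the use of millimetres are assumptions, and it follows the same thin-lens sign convention as the derivation above:

```python
def required_focal_length(h, d3, x):
    """Focal length f' the focusing device must realize so that the second
    virtual display image lies at gaze depth h from the viewer.

    h  : depth of the gazing object (viewer to image), e.g. in mm
    d3 : distance from the target object (viewer) to the focusing device
    x  : distance from the first virtual display image to the focusing device

    Derived from 1/x - 1/h' = 1/f' together with h' = h - d3,
    giving f' = x * (h - d3) / (h - d3 - x).
    """
    h_prime = h - d3  # image distance measured from the focusing device
    if h_prime == x:
        # Image coincides with the object plane: zero optical power needed.
        return float("inf")
    return x * h_prime / (h_prime - x)
```

For example, with x = 100 mm, D3 = 100 mm and a gaze depth h = 1100 mm, the helper returns f' ≈ 111.1 mm, and substituting that f' back into h' = xf'/(f' - x) recovers h' = 1000 mm = h - D3, as required.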

Therefore, in the virtual reality display system provided by the embodiment of the application, by adjusting the focal length of the focusing device, the position of the plane where the second virtual display image (formed by the two-dimensional display image of the display element passing through the focusing device) is located can be adjusted, so that the distance from the display image of the display system to the viewer is equal to the distance corresponding to the parallax of the display image of the display system. The focal distance the viewer derives from the viewed image is thus consistent with the focal distance of the actually viewed clear image, reducing the probability of visual fatigue for the viewer.

In addition, in the embodiment of the present application, the focusing device may change the distance h' between the second virtual display image and the focusing device along the Z-axis direction in various ways. Specifically, in an embodiment of the present application, when adjusting the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the driving device is specifically configured to perform: in response to the control instruction, adjusting the distance between the focusing device and the lens element so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object.

In another embodiment of the present application, when adjusting the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the driving device is specifically configured to perform: in response to the control instruction, adjusting the curvature of the focusing device so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object.

In another embodiment of the present application, when adjusting the preset parameters of the focusing device in response to the control instruction so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object, the driving device is specifically configured to perform: in response to the control instruction, adjusting the refractive index of the focusing device so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object.
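The three adjustment modes above (distance between the focusing device and the lens element, curvature, and refractive index) can be pictured as a driver that actuates exactly one preset parameter. The sketch below is purely illustrative; the `FocusDriver` class and its mode names are hypothetical and not an API defined by this application:

```python
class FocusDriver:
    """Illustrative driver that applies a control instruction to one of the
    three preset parameters named in the embodiments above."""

    VALID_MODES = ("distance", "curvature", "refractive_index")

    def __init__(self, mode):
        if mode not in self.VALID_MODES:
            raise ValueError("unknown preset parameter: " + mode)
        self.mode = mode
        self.value = None

    def apply(self, control_value):
        # In hardware, each branch would command a different actuator:
        # a translation stage for "distance", or a drive voltage changing
        # the liquid-lens curvature / liquid-crystal refractive index.
        self.value = control_value
        return self.mode, self.value
```

The point of the dispatch is only that the control instruction is the same in all three embodiments; what differs is which physical parameter the driving device moves.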

Specifically, in an embodiment of the present application, as shown in fig. 7, the focusing device 400 includes a first focusing element 410, and the first focusing element 410 includes a first focusing lens 411 and a second focusing lens 412. The first focusing lens 411 and the second focusing lens 412 are liquid lenses with opposite concavity and convexity.

Note that the focal length f of the liquid lens or the liquid crystal lens satisfies:

1/f = (n - 1)(1/r1 - 1/r2)

wherein n is the refractive index of the lens, r1 is the radius of curvature of the front surface of the lens, and r2 is the radius of curvature of the rear surface of the lens.
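For illustration, the lensmaker's relation above translates directly into code. This is a minimal sketch with made-up example values, using the same sign convention as the formula (r2 negative for a surface concave toward the image side):

```python
def thin_lens_focal_length(n, r1, r2):
    """Lensmaker's equation for a thin lens in air:
        1/f = (n - 1) * (1/r1 - 1/r2)

    n  : refractive index of the lens material
    r1 : radius of curvature of the front surface
    r2 : radius of curvature of the rear surface
    """
    power = (n - 1.0) * (1.0 / r1 - 1.0 / r2)
    return float("inf") if power == 0 else 1.0 / power
```

For example, a symmetric biconvex lens with n = 1.5, r1 = 100 mm and r2 = -100 mm has f = 100 mm; changing either the curvatures (liquid lens) or n (liquid crystal lens) changes f, which is exactly the adjustment mechanism the embodiments describe.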

Optionally, in an embodiment of the application, when the first focusing lens and the second focusing lens are liquid lenses, the driving device changes the curvature of the first focusing lens and/or the second focusing lens by changing the driving signal applied to them, thereby changing the focal length of the first focusing lens and/or the second focusing lens and ultimately the focal length of the focusing device, so as to change the position of the plane where the second virtual display image, formed from the two-dimensional display image of the display element through the focusing device, is located.

It should be noted that, in the embodiment of the present application, the focusing device employs a dual liquid lens group for focusing, which ensures that the angular magnification does not change when the focal length of the focusing device is adjusted, preserving the definition of the displayed image and the viewing experience of the viewer.

In another embodiment of the present application, the first focusing lens and the second focusing lens are liquid crystal lenses. The driving device changes the refractive index of the first focusing lens and/or the second focusing lens by changing the driving signal applied to them, thereby changing the focal length of the first focusing lens and/or the second focusing lens and ultimately the focal length of the focusing device, so as to change the position of the plane where the second virtual display image, formed from the two-dimensional display image of the display element through the focusing device, is located.

When no external voltage is applied to a liquid crystal lens, the liquid crystal molecules in it are arranged in disorder, the material is isotropic, and light rays are not significantly deflected. Therefore, in the embodiment of the present application, if the first focusing lens and the second focusing lens are liquid crystal lenses, the voltages applied to them can be controlled to control how they deflect the light rays emitted by the display element toward the lens element, so as to adjust the position of the plane where the second virtual display image formed through the focusing device is located, such that the position of the plane where the display image formed by the display system is located matches the depth information of the gazing object.

It should be further noted that the focusing device in the embodiment of the present application is light, compact and fast to respond. When focusing, its internal components do not need to move: the deflection effect of each focusing lens surface in the first focusing element on light can be changed merely by changing the input voltage of the focusing device, with no motor drive. Focusing can therefore be performed at high speed and high definition, and the focusing depth of the display system can be changed rapidly, accurately and with high quality, without affecting the viewing angle at which the human eye views the three-dimensional display image.

Therefore, in the embodiment of the application, the focal length of the first focusing lens and/or the second focusing lens can be changed by changing the surface curvature of the liquid lens and/or the refractive index of the liquid crystal lens, thereby changing the focal length of the first focusing element, and in turn the position of the plane where the second virtual display image, formed from the two-dimensional display image of the display element through the focusing device, is located.

In other embodiments of the present application, the first focusing lens and the second focusing lens may also be polymer lenses; the present application is not limited in this respect, as long as the position of the second virtual display image (i.e., the display image of the display system) can be adjusted by adjusting the focusing device.

It should be noted that, if the first focusing lens and the second focusing lens are both liquid lenses, the focal length f' of the first focusing element changes with the curvature of the first focusing lens and/or the second focusing lens. In this embodiment of the application, the processing device changes the curvature of the first focusing lens and/or the second focusing lens by controlling the driving device to change the voltage applied across the first focusing element, so that the focal length f' of the first focusing element meets the requirement and the focusing depth of the virtual reality display system meets the requirement. The position of the plane where the display image of the display system is located then matches the depth information of the gazing object, so that the distance from the display image of the display system to the viewer is equal to the distance corresponding to the parallax of the display image of the display system.

If the first focusing lens and the second focusing lens are both liquid crystal lenses, the focal length f' of the first focusing element changes with the refractive index of the first focusing lens and/or the second focusing lens. In an embodiment of the application, the processing device changes the refractive index of the first focusing lens and/or the second focusing lens by controlling the driving device to change the voltage applied across the first focusing element, so that the focal length f' of the first focusing element meets the requirement and the focusing depth of the virtual reality display system meets the requirement. The position of the plane where the display image of the display system is located then matches the depth information of the gazing object, so that the distance from the plane where the display image of the display system is located to the viewer is equal to the distance corresponding to the parallax of the display image of the display system.

It should be noted that the focal length f' of the first focusing element has a one-to-one correspondence with the voltage applied by the driving device. In the embodiment of the application, once the position of each focusing lens in the first focusing element is fixed, the corresponding focal length can be obtained simply by changing the voltage applied to the first focusing element, thereby changing the position of the plane where the display image of the display system is located.
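Because the focal length corresponds one-to-one with the applied voltage, a controller can invert a measured calibration curve to find the drive voltage for a target focal length. The sketch below assumes a hypothetical, monotonic calibration table; the numbers are invented for illustration and would in practice come from measuring the particular lens:

```python
# Hypothetical calibration: (drive voltage in V, focal length f' in mm).
# Real values depend on the specific liquid / liquid-crystal lens.
CALIBRATION = [(0.0, 400.0), (1.0, 300.0), (2.0, 200.0), (3.0, 150.0), (4.0, 120.0)]


def voltage_for_focal_length(f_target):
    """Invert the voltage -> focal-length mapping by linear interpolation,
    assuming the calibration is strictly monotonic in f'."""
    pts = sorted(CALIBRATION, key=lambda p: p[1])  # ascending focal length
    f_vals = [p[1] for p in pts]
    v_vals = [p[0] for p in pts]
    if not (f_vals[0] <= f_target <= f_vals[-1]):
        raise ValueError("target focal length outside calibrated range")
    for i in range(len(f_vals) - 1):
        if f_vals[i] <= f_target <= f_vals[i + 1]:
            t = (f_target - f_vals[i]) / (f_vals[i + 1] - f_vals[i])
            return v_vals[i] + t * (v_vals[i + 1] - v_vals[i])
```

With this table, a target of f' = 200 mm maps back to 2.0 V, and f' = 250 mm interpolates to 1.5 V between the 200 mm and 300 mm calibration points.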

On the basis of any of the above embodiments of the present application, in an embodiment of the present application, as shown in fig. 5, the focusing device 400 further includes a second focusing element 420, where the second focusing element includes a third focusing lens 421 and a fourth focusing lens 422, and the third focusing lens 421 and the fourth focusing lens 422 are solid lenses. In actual use, the concavity and convexity of the third focusing lens 421 and the fourth focusing lens 422 may be opposite or the same; the present application does not limit this, as the case may be.

On the basis of the above-described embodiments, in one embodiment of the present application, the third focusing lens 421 is a convex lens, and the fourth focusing lens 422 is a concave lens; in another embodiment of the present application, the third focusing lens 421 is a concave lens, and the fourth focusing lens 422 is a convex lens, which is not limited in this application, as the case may be.

Specifically, when the third focusing lens 421 is a convex lens and the fourth focusing lens 422 is a concave lens, as shown in fig. 8, by adjusting the focal length of the first focusing element (including the first focusing lens 411 and the second focusing lens 412), the first virtual display image formed by the lens element 102 may, after passing through the focusing device, be displayed between the display element 101 and the focusing device (i.e., displayed nearby). As shown in fig. 9, the focal length of the first focusing element (including the first focusing lens 411 and the second focusing lens 412) may also be adjusted so that the first virtual display image formed by the lens element 102 is displayed on the side of the display element 101 away from the focusing device (i.e., displayed at a distance).

It should be noted that if the entrance pupil diameter is 3 mm, the third focusing lens is a K7 biconvex lens with a radius of curvature of 22.5 mm, the fourth focusing lens is a K7 biconcave lens with a radius of curvature of -250 mm, and the minimum radius of curvature of the first focusing lens and the second focusing lens is 15 mm, then, while keeping the angular magnification at 1 (i.e., with the viewing angle of a single pixel to the human eye unchanged), the virtual image plane of the focusing device can be varied continuously and arbitrarily from 50 mm from the concave lens to infinity, and the focusing time is very short, only tens of milliseconds.

Correspondingly, the present application also provides a display method applied to the virtual reality display system provided in any one of the above embodiments. The virtual reality display system includes a display device, a tracking device, a focusing device and a driving device, wherein the display device includes a display element and a lens element, the display element is used for forming a two-dimensional display image based on an image to be displayed, and the lens element is used for forming a first virtual display image based on the two-dimensional display image. As shown in fig. 10, the display method includes:

s1: based on the eye image of the target object obtained by the tracking device;

s2: determining a gazing object of the target object on the image to be displayed and depth information of the gazing object, generating a control instruction based on the depth information of the gazing object, and sending the control instruction to the driving device;

s3: and adjusting the preset parameters of the focusing device through the driving device, so that the position of the plane where the display image of the display system is located is matched with the depth information of the gazing object.

Since the gazing object of the target object on the image to be displayed and the depth information of the gazing object are determined based on the eye image of the target object obtained by the tracking device, a control instruction is generated based on the depth information of the gazing object and sent to the driving device, and the preset parameters of the focusing device are adjusted through the driving device so that the position of the plane where the display image of the display system is located matches the depth information of the gazing object. The beneficial effects are as described in the foregoing embodiments of the present application and are not repeated here.

On the basis of any of the above embodiments, in an embodiment of the present application, the preset parameter includes at least one of a curvature of the focusing device, a refractive index of the focusing device, and a distance between the focusing device and the lens element, and in other embodiments of the present application, the preset parameter may further include other parameters.

In the virtual reality display system provided by the present application, the gazing object of the target object on the image to be displayed and the depth information of the gazing object can be obtained based on the eye image of the target object, and by adjusting the preset parameters of the focusing device, the position of the plane where the display image of the display system is located is matched with the depth information of the gazing object. The distance from the plane where the display image of the display system is located to the viewer thereby equals the distance corresponding to the parallax of the displayed virtual image, so that the focal distance the viewer derives from the viewed image is consistent with the focal distance of the actually viewed clear image, reducing the probability of visual fatigue for the viewer.

The sections of this specification are described in a progressive manner, each section focusing on its differences from the others; for the same or similar parts among the sections, reference may be made to one another.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
