Method for generating and displaying virtual objects by means of an optical system

Document No.: 1804148 · Publication date: 2021-11-05

Abstract: This technique ("Method for generating and displaying virtual objects by means of an optical system") was created by Julian Grassl, Michael Mottenhuber and Frank Linsenmaier on 2019-12-19. In a method for generating and displaying a virtual object (16) to an individual user by means of an optical system (1) consisting of eye tracking glasses (2) and at least one display unit (3) connected to the eye tracking glasses (2), the display unit (3) having a first display (4), wherein the eye tracking glasses (2) have a first eye detection camera (5) and the first display (4) is arranged in a first viewing region (7) of the eye tracking glasses (2), it is proposed to match the optical system (1) to the individual user: a first target value for adapting a display control unit (8) of the display unit (3) for actuating the first display (4) is determined, the current gaze direction of the first eye (6) is determined by the eye tracking glasses (2), a virtual object (16) is generated, and the virtual object (16) is displayed in the first display (4), taking into account the first target value, at a position in the determined gaze direction of the first eye (6).

1. Method for generating and displaying a virtual object (16) by means of an optical system (1), wherein the optical system (1) is formed by eye tracking glasses (2) for detecting the gaze direction of a user and at least one display unit (3) connected to the eye tracking glasses (2), the display unit (3) having at least one at least partially transparent first display (4), wherein the eye tracking glasses (2) have a first eye detection camera (5) for creating a first eye video of a first eye (6) of the user, wherein the first display (4) is arranged at least regionally in a first viewing region (7) of the eye tracking glasses (2) associated with the first eye (6),

- wherein the optical system (1) is matched to an individual user according to the following steps:

- wherein the eye tracking glasses (2) are worn by the user,

- wherein subsequently at least one predeterminable eye size and/or at least one predeterminable eye position of the first eye (6) is determined by means of the eye tracking glasses (2),

- wherein subsequently at least one first target value of at least one geometric display setting of the first display (4) is determined from the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the first eye detection camera (5),

- and wherein subsequently a display control unit (8) of the display unit (3) for actuating the first display (4) is adapted at least to the first target value,

- wherein the current gaze direction of the first eye (6) is determined by the eye tracking glasses (2),

- wherein the virtual object (16) is generated in response to a predeterminable state and/or a predeterminable event,

- wherein the virtual object (16) is displayed by the display control unit (8) in the first display (4), taking into account the first target value, at a position in the determined gaze direction of the first eye (6).

2. The method according to claim 1, characterized in that the current gaze direction of the first eye (6) is continuously determined by the eye tracking glasses (2) and the position at which the virtual object (16) is displayed is continuously adapted to the current gaze direction.

3. Method according to claim 1 or 2, characterized in that the current gaze direction must deviate from the last determined gaze direction by a predeterminable amount, in particular by 2°, in order for the position at which the virtual object (16) is displayed to be adapted to the current gaze direction.

4. Method according to one of claims 1 to 3, characterized in that a first target position and/or a first target degree of deformation of a first display region (9) within the first display (4) for presenting the virtual object (16) in front of the first eye (6) is determined as the first target value of the geometric display setting, wherein, starting from at least one deviation between the first target position and/or the first target degree of deformation of the first display region (9) and a first display region base setting of the first display (4), at least one first correction factor and/or a first correction function is determined, wherein the display control unit (8) is adapted to the user at least with the first correction factor or the first correction function, and the virtual object (16) is displayed in the first display (4) offset and/or deformed by the first correction factor and/or the first correction function.

5. The method according to one of claims 1 to 4, characterized in that the display unit (3) has at least one at least partially transparent second display (10), wherein the eye tracking glasses (2) have a second eye detection camera (11) for creating a second eye video of a second eye (12) of the user, wherein the second display (10) is arranged at least regionally in a second viewing region (13) of the eye tracking glasses (2) assigned to the second eye (12),

- wherein at least one predeterminable eye size and/or at least one predeterminable eye position of the second eye (12) is determined by means of the eye tracking glasses (2),

- wherein subsequently at least one second target value of at least one geometric display setting of the second display (10) is determined from the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the second eye detection camera (11),

- and wherein subsequently the display control unit (8) for actuating the second display (10) is adapted at least to the second target value.

6. The method according to claim 5, characterized in that, furthermore,

- the current gaze direction of the second eye (12) is determined by the eye tracking glasses (2),

- and the virtual object (16) is also displayed by the display control unit (8) in the second display (10), taking into account the second target value, at a position in the determined gaze direction of the second eye (12).

7. Method according to claim 5 or 6, characterized in that a second target position and/or a second target degree of deformation of a second display region (17) within the second display (10) for presenting the virtual object (16) in front of the second eye (12) is determined as the second target value of the geometric display setting, wherein, starting from at least one deviation between the second target position and/or the second target degree of deformation of the second display region (17) and a second display region base setting of the second display (10), at least one second correction factor and/or a second correction function is determined, wherein the display control unit (8) is adapted to the user at least with the second correction factor or the second correction function, and the virtual object (16) is displayed in the second display (10) offset and/or deformed by the second correction factor and/or the second correction function.

8. The method according to one of claims 5 to 7, characterized in that, when determining the eye positions of the first eye (6) and the second eye (12), the position of the middle between the eyes (6, 12) is also determined, and the positions of the first eye detection camera (5) and of the second eye detection camera (11) relative to the middle between the eyes (6, 12) are also determined by means of the eye tracking glasses (2).

9. Method according to one of claims 1 to 8, characterized in that, for the adaptation to the user, at least one typical fixation sequence of the user looking at a predetermined number of predetermined control points is recorded, said control points being arranged in different directions and at different distances from the optical system (1).

10. The method according to one of claims 7 to 9, characterized in that a first distance value of the first target value and a first distance value of the second target value are determined at a first control point arranged at a first distance from the optical system (1), and a second distance value of the first target value and a second distance value of the second target value are determined at a second control point arranged at a second distance from the optical system (1), wherein the first distance differs from the second distance.

11. The method according to one of claims 7 to 10, characterized in that the current poses of the two eyes (6, 12) are determined by the eye tracking glasses (2) and the virtual object (16) is presented on the first and second displays (4, 10) with a positional offset and/or deformation such that it appears as a single object at the distance on which the two eyes (6, 12) are converged.

12. Method according to one of claims 1 to 11, characterized in that the optical system has at least one field-of-view camera (14), wherein a predeterminable real object (15) is detected by the field-of-view camera (14), and the detection of the predeterminable real object (15) is a predeterminable event for generating the virtual object (16).

13. Method according to one of claims 1 to 12, characterized in that the system is configured for detecting at least one state value of the user, in particular a fatigue state, wherein the state value is monitored by the system for exceeding a boundary value, and exceeding the boundary value is a predeterminable event for generating the virtual object (16).

14. Method according to one of claims 1 to 13, characterized in that the system has at least one navigation and position determination unit (23) for detecting the spatial orientation and location of the system, wherein a predeterminable location and a predeterminable spatial orientation are predeterminable events for generating the virtual object (16).

Technical Field

The invention relates to a method for generating and displaying a virtual object by means of an optical system according to claim 1.

Background

Glasses for so-called augmented reality and mixed reality are becoming increasingly important and popular. Here, information is inserted into the field of view of the user of the data glasses. Such glasses have at least one at least partially transparent display, which is arranged in front of at least one eye of the user. In contrast to so-called virtual reality, augmented reality and mixed reality always retain a direct relationship to the environment.

In practice, glasses of this type, for example the so-called Google Glass, cause a number of problems. Simply presenting information on a display that is part of the glasses has proven to be extremely disadvantageous. The presentation of content significantly changes the gaze behavior of the user. In fact, the user is hardly supported by the presented information, because the user's eyes are forced to switch constantly between the presented information and the environment. This quickly leads to fatigue and places a heavy burden on the brain. The displayed content also distracts the user from the real environment, since the visual stimulus of newly displayed information attracts the user's gaze and demands attention, regardless of whether something more important may be happening in the real world in another gaze direction. This can lead to accidents. Such known glasses can thus cause exactly the opposite of what they intend: they increase complexity instead of providing convenience to the user.

In addition, every person is to some extent unique. The arrangement of a person's eyes relative to the nose and ears differs from person to person, as does their distance from one another. Every spectacle wearer is familiar with the corresponding adjustment of spectacles by an optician, who adapts the spectacles to the individual features of the respective wearer by mechanical changes, for example by bending the temples or nose pads. The optical lenses are likewise individually matched to the user.

In known systems for augmented reality or mixed reality, it has so far not been possible to combine the display of virtual objects with real-world objects, or to coordinate the two, in a predeterminable and intentional manner.

Disclosure of Invention

It is therefore an object of the present invention to provide a method of the type mentioned at the outset with which the stated disadvantages can be avoided and with which virtual objects can be displayed in a predefinable relationship to the real environment.

According to the invention, this is achieved by the features of claim 1.

As a result, content is presented exactly where the person is looking at the relevant point in time. A person rarely gazes somewhere by accident; there is usually a reason in the person's environment. The presentation of a virtual object in smart glasses is a further event and attracts the user's attention: if an object appears in the smart glasses, the user will automatically look at it and thereby lose attention to reality. In safety-critical environments, for example in industry or in a laboratory, this can lead to negligence and accidents. By displaying the content exactly where the user is already looking, the user does not have to shift visual attention because of the displayed content or image. The user can thus react equally to the real environment and to the displayed content. This provides the user with the best possible support, even in situations that are difficult to grasp, without distraction or misdirection.

The data glasses, in particular the optical system consisting of the eye tracking glasses and the display unit, can thereby be matched quickly and simply to the individual user. This makes it possible not to display content at an arbitrary location on the display, but to match the associated optical system so that the content can be displayed where the user is actually looking. This enables the display of virtual objects to be combined with real-world objects, or coordinated with them, in a predeterminable and intentional manner.

It can thereby be ensured that the content displayed by or on the display appears in the actual gaze direction of the user, or at a position which is clearly defined and deliberately selected relative to the actual gaze direction of the user.

The method of the invention can be carried out quickly and simply in practice.

The dependent claims relate to further advantageous embodiments of the invention.

The wording of the patent claims is hereby expressly made part of the description by reference.

Drawings

The invention is described in more detail with reference to the accompanying drawings, in which preferred embodiments are shown merely by way of example. In the drawings:

Fig. 1 shows a first embodiment of the system according to the invention, consisting of eye tracking glasses and a display unit, in perspective view;

Fig. 2 shows a second embodiment of the system according to the invention, consisting of eye tracking glasses and a display unit, in plan view;

Fig. 3 shows a schematic perspective view of the spatial arrangement of the first and second displays relative to the eyes of a user;

Fig. 4 shows a block diagram of an embodiment of the system according to the invention, consisting of eye tracking glasses and a display unit;

Fig. 5 shows a schematic perspective view of the spatial arrangement of a first display, a first eye and a real object;

Fig. 6 shows the arrangement of the first display and the first eye according to Fig. 5, with a virtual object (label) presented on the first display; and

Fig. 7 shows the arrangement of the first display and the first eye according to Fig. 5, with a virtual object (label) that lies outside the first display and is therefore not presented.

Detailed Description

Figs. 1, 2 and 4 each show different embodiments or presentations of an optical system 1 which is matched to an individual user according to the invention. The optical system 1 according to the invention serves to display, in a defined manner, virtual objects 16 which are generated in response to predeterminable states or predeterminable events. These virtual objects 16 are presented in a defined field-of-view environment, at a predeterminable position, in at least one display 4, 10 of the system 1.

The optical system 1 is composed of at least eye tracking glasses 2 and at least one display unit 3 connected to the eye tracking glasses 2.

The eye tracking glasses 2 have at least one first eye detection camera 5 for creating a first eye video of a first eye 6 of the user. The eye tracking glasses 2 preferably also have a second eye detection camera 11 for creating a second eye video of a second eye 12 of the user. The eye tracking glasses 2 preferably have at least one field-of-view camera 14 pointing forward from the perspective of the user wearing the eye tracking glasses 2. The at least one eye detection camera 5, or preferably the two eye detection cameras 5, 11, are arranged in the so-called nose bridge of the eye tracking glasses 2. The arrangement of the two eye detection cameras 5, 11 can be clearly seen in Fig. 2.

Particularly preferred eye tracking glasses 2, as shown in Figs. 1 and 2, are known from AT 513.987 B1, from which further details of the preferred eye tracking glasses 2 can be taken. However, the method in question can also be carried out with other eye tracking glasses 2.

The eye tracking glasses 2 are set up and constructed for detecting the gaze direction of the user.

Fig. 4 shows in particular a block diagram of the eye tracking glasses 2; the eye tracking glasses 2 actually used may, however, have further components. In addition to the eye detection cameras 5, 11 already described, the eye tracking glasses 2 have in particular at least one controller 18 and an interface 19 for communication with the display unit 3. The eye tracking glasses 2 preferably also have further components, such as an energy supply unit.

As shown in Fig. 1, the display unit 3 has at least one at least partially transparent first display 4. In particular, the display unit 3 also has a second display 10. The two displays 4, 10 can also be of one-piece design, in which case the display extends over both eyes 6, 12. In the examples of the invention, the first display 4 is always assigned to the left eye of the user; however, this is not mandatory, and the right eye can equally be regarded as the first eye 6.

The display unit 3 is preferably a device that is separate from the eye tracking glasses 2 but is configured for integration with a particular type of eye tracking glasses 2 and is arranged on, and mechanically connected to, the particular eye tracking glasses 2. The examples of the invention always use a system 1 of eye tracking glasses 2 and a display unit 3 mechanically connected thereto; an integrated embodiment of the two devices 2, 3 can, however, also be provided.

The first display 4 is arranged at least regionally in a first viewing region 7 of the eye tracking glasses 2. The second display 10 is preferably arranged at least regionally in a second viewing region 13 of the eye tracking glasses 2. A viewing region 7, 13 is understood here as a region within the gaze or optical field of view of the user. In particular, the viewing regions 7, 13 are identical to the lens receiving openings of the eye tracking glasses 2. If the eye tracking glasses have no "lenses" and/or no frame, or only a partially formed frame, the viewing regions 7, 13 are in particular those regions in which "lenses" are typically arranged in conventional spectacles.

The first display 4 and, if provided, the second display 10 can be arranged on the side of the eye tracking glasses 2 facing the user, as shown in Fig. 1, or on the side of the eye tracking glasses 2 facing away from the user, as shown in Fig. 2. They can furthermore also be integrated into the eye tracking glasses 2.

The first display 4 and, if provided, the second display 10 are arranged fixed in place in the display unit 3. It is provided that they are not tilted or pivoted during operation; the display unit 3 accordingly has no actuators for this purpose.

The first and possibly the second display 4, 10 are preferably constructed as so-called waveguide displays and are substantially transparent.

The first and possibly the second display 4, 10 are preferably constructed as so-called "single focal plane" displays. It is a display with only one display plane. In contrast, so-called "multi-focal plane" displays are also known, but they are not used by the present examples.

Fig. 4 shows in particular a block diagram of the display unit 3; the display unit 3 actually used may, however, also have other components. In addition to the displays 4, 10 already described, the display unit 3 has a controller 22 and a first interface 20 for communicating with the eye tracking glasses 2.

Furthermore, the display unit 3 has a display control unit 8, which is connected to the controller 22 or formed integrally with it. The display control unit 8 actuates the first display 4 and the preferably provided second display 10 and is responsible for the positioning and deformation of the images or objects 16 to be presented on or in the first display 4 and preferably on or in the second display 10. The image or object 16 is generated by the controller 22 and transmitted to the display control unit 8 for presentation.

The display unit 3 also has a second interface 21, which is provided and correspondingly configured for communication with the environment. Suitable or preferred transmission methods or systems are well known and widely used; in the field of cellular mobile communications they are referred to as 3G (UMTS), 4G (LTE) or 5G, and further systems such as the Internet or WLAN can also be used. A corresponding further protocol is, for example, IEEE 802 with its numerous variants.

Furthermore, the display unit 3 preferably has a navigation and position determination unit 23 connected to the controller 22. Corresponding units are known from so-called smartphones. The navigation and position determination unit 23 can determine both the position of the display unit 3 in the global coordinate system, in particular by means of satellite navigation methods and, if necessary, the connection data of the mobile telephone provider, and the spatial position or orientation of the display unit 3, in particular by means of at least one inclination sensor.

The display unit 3 also preferably has further components, for example an energy supply unit.

Since slight individual deviations from the dimensional specifications can occur during the manufacture of the eye tracking glasses 2 and/or the display unit 3, it is preferably provided that the individual position and orientation of the first eye detection camera 5 and/or the second eye detection camera 11 of each individual pair of eye tracking glasses 2 are determined on a measuring station before delivery, and the corresponding data are stored in a memory or in the controller 18 of the respective eye tracking glasses 2.

It is preferably provided that at least one value for at least one predeterminable optical error of the first eye detection camera 5 and/or the second eye detection camera 11 is likewise determined individually on the measuring station, and that this at least one determined value is likewise stored in the memory or in the controller 18 and taken into account in the method steps described below.

Furthermore, it is preferably provided that the individual positions and orientations of the first display 4 and of the preferably provided second display 10 of each display unit 3 are determined one by one on the measuring station, and that the data determined in this way are stored in the controller 22 of the respective display unit 3. These data are preferably taken into account in the method steps described below.

The aforementioned determination of the actual dimensions and optical errors is completed before the respective eye tracking glasses 2 and/or the respective display unit 3 are delivered to the user. This is also referred to as intrinsic calibration.

Within the scope of the method according to the invention for generating and displaying a virtual object 16 by means of an optical system, it is provided that the optical system 1 is matched to the individual user. This comprises at least the following steps:

The eye tracking glasses 2 are put on by the user.

Subsequently, at least one predeterminable eye size and/or at least one predeterminable eye position of the first eye 6 is determined by the eye tracking glasses 2.

Subsequently, at least one first target value of at least one geometric display setting of the first display 4 is determined from the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the first eye detection camera 5.

Subsequently, the display control unit 8 of the display unit 3 for actuating the first display 4 is adapted at least to the first target value.

In this way, the individual optical system 1 according to the invention for augmented reality or mixed reality can be matched quickly and easily to an individual user. It thus becomes possible for the first time not to display content at an arbitrary location on the displays 4, 10, but to match the associated optical system 1 so that the content can be displayed where the user is actually looking. This also makes it possible to combine the display of the virtual object 16 with a real object 15 of the real world, or to coordinate the two, in a predeterminable and intentional manner.

It is thereby ensured that the content displayed by means of, or on, the displays 4, 10 appears in the actual gaze direction of the user, or at a position and/or with a degree of deformation that is clearly defined and deliberately selected relative to the actual gaze direction of the user.

It can be provided that the adaptation or calibration of the individual optical system 1 to a specific user is carried out only once. Preferably, however, the matching is repeated at predefinable time intervals.

The method steps are mandatory only for one eye 6 and can be carried out for one eye 6 alone. This concerns users who have only one eye 6, or situations in which the user uses only one eye 6.

Preferably, however, the method according to the invention is set up for both eyes 6, 12 of the user. The method according to the invention is therefore described below in particular for both eyes 6, 12, wherein all method steps that can also be carried out with or for only one eye 6 are likewise provided as method steps for only one eye 6.

In a preferred basic variant, the method according to the invention therefore has the following further method steps:

At least one predeterminable eye size and/or at least one predeterminable eye position of the second eye 12 is determined by the eye tracking glasses 2.

Subsequently, at least one second target value of at least one geometric display setting of the second display 10 is determined from the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the second eye detection camera 11.

Subsequently, the display control unit 8 for actuating the second display 10 is adapted at least to the second target value.

The individual steps will be explained in detail below.

Putting on the eye tracking glasses 2 is no different from putting on any other glasses, is generally familiar, and need not be elaborated further. Together with the eye tracking glasses 2, the display unit 3 connected to them is also put on.

After the eye tracking glasses 2 have been put on, the at least one predeterminable eye size and/or the at least one predeterminable eye position of the first eye 6, and preferably also of the second eye 12, is determined by the eye tracking glasses 2. The eye size is in particular the eye diameter or eye radius as well as the pupil diameter; preferably, both are determined. The eye position is in particular the position of the pupils of the eyes 6, 12, in particular the distance between the two pupils, and the spatial position of the two eyes 6, 12 relative to one another; preferably, both are determined.

To determine the eye positions of the first eye 6 and the second eye 12, it is provided in particular that the position of the middle between the eyes 6, 12 is determined. The middle refers to the center line of the body or head in the region of the eyes 6, 12. Furthermore, the positions of the first eye detection camera 5 and the second eye detection camera 11 relative to the middle between the eyes 6, 12 are determined for this purpose.

To determine the eye position or the eye size, it is preferably provided that at least one typical fixation sequence is recorded while the user looks at a predetermined number of predetermined control points. A typical fixation sequence refers to the following gaze behavior: the user is induced to look at a specific control point starting from a specific rest point, or to move the head in a predeterminable manner while fixating a control point. The control points are preferably arranged in different directions and at different distances from the optical system 1. The resulting advantages are discussed below.
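To make this calibration step concrete, the following Python sketch shows one possible way of recording such a fixation sequence. It is only an illustration: the data classes, the sampling rate and the `read_gaze_sample` callback are assumptions, since the patent does not specify any data format or API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ControlPoint:
    azimuth_deg: float    # direction of the point relative to the glasses
    elevation_deg: float
    distance_m: float     # distance from the optical system

@dataclass
class FixationRecord:
    point: ControlPoint
    samples: list = field(default_factory=list)  # raw samples from the eye cameras

def record_fixation_sequence(control_points, read_gaze_sample, samples_per_point=60):
    """Ask the user to fixate each control point in turn and store the gaze
    samples delivered by the eye tracking glasses for later evaluation."""
    records = []
    for point in control_points:
        print(f"Please fixate the point at {point.distance_m} m "
              f"({point.azimuth_deg} deg / {point.elevation_deg} deg)")
        record = FixationRecord(point)
        for _ in range(samples_per_point):
            record.samples.append(read_gaze_sample())  # e.g. pupil centers of both eyes
            time.sleep(1 / 60)                         # assume a 60 Hz eye camera
        records.append(record)
    return records

# Control points in different directions and at (at least four) different
# distances, as the method prefers; the numbers are illustrative.
points = [ControlPoint(az, el, d)
          for d in (0.5, 1.0, 2.0, 4.0)
          for az, el in ((-10, 0), (10, 0), (0, 8))]
```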

After the eye size and the eye position have been determined, at least one first target value of at least one geometric display setting of the first display 4 is determined as a function of the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the first eye detection camera 5. Preferably, at least one second target value of at least one geometric display setting of the second display 10 is likewise determined as a function of the at least one determined eye size and/or the at least one determined eye position and the position and orientation of the second eye detection camera 11.

In this method step, values or parameters referred to as target values are therefore determined. These target values specify at which position and/or with what degree of deformation an image or virtual object 16 must be presented within the display area of the respective display 4, 10, which is arranged obliquely in front of the user's eye 6, 12, so that it appears to the user at a clearly defined or predetermined location and essentially without distortion. The target value is in particular not a single value but a group or set of values, or a vector. In particular, it is provided that different target values are determined, stored and taken into account for different eye poses, which are usually associated with different viewing distances.
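As an illustration of this idea, the target values per eye can be pictured as a small table keyed by the eye pose, here represented by the set distance. The concrete encoding below (a 2D pixel offset plus a 2x2 deformation matrix) and all numeric values are assumptions for the sake of the sketch; the patent leaves the representation of position and degree of deformation open.

```python
from dataclasses import dataclass

@dataclass
class TargetValue:
    # Position within the display area at which content must be drawn so
    # that it appears at the intended location for this particular user.
    offset_px: tuple    # (dx, dy) in display pixels
    # Linear deformation that pre-distorts the image so that it appears
    # undistorted on the obliquely arranged display.
    warp: tuple         # 2x2 matrix ((a, b), (c, d)), row-major

# One set of target values per eye pose, e.g. keyed by the set distance
# (in meters) on which the eyes are converged; values are illustrative.
first_target_values = {
    0.5: TargetValue(offset_px=(42.0, -7.0), warp=((1.02, 0.01), (0.00, 0.98))),
    1.0: TargetValue(offset_px=(35.5, -6.0), warp=((1.01, 0.01), (0.00, 0.99))),
    2.0: TargetValue(offset_px=(31.0, -5.5), warp=((1.01, 0.00), (0.00, 0.99))),
    4.0: TargetValue(offset_px=(28.5, -5.0), warp=((1.00, 0.00), (0.00, 1.00))),
}
```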

Fig. 3 shows a corresponding view of only the eyes 6, 12 and the two displays 4, 10.

The geometric display setting accordingly concerns at least one setting that relates to the geometric presentation of the object in the display, but not to its color or contrast. The geometric display settings thus concern the position or positioning, the degree of deformation and the size of the presented object 16 within the display area of the respective display 4, 10.

After the first target value and preferably the second target value have been determined, the display control unit 8 of the display unit 3 for actuating the first display 4 is adapted to the first target value, and the display control unit 8 for actuating the second display 10 is adapted to the second target value. By matching the display control unit 8 it can be achieved that objects to be presented actually appear to the user (relative to the user's eyes 6, 12) where they are supposed to appear, and with the degree of deformation required for them to appear undistorted.

The required degree of deformation and the desired positioning are not constant across all gaze states, even for one and the same user. In particular, they vary with the distance of the point at which the user is gazing. As already mentioned, it is therefore particularly preferably provided, within the determination of the eye size and/or eye position, that a first distance value of the first target value and a first distance value of the second target value are determined at a first control point arranged at a first distance from the optical system 1, and that a second distance value of the first target value and a second distance value of the second target value are determined at a second control point arranged at a second distance from the optical system 1, the first distance differing from the second distance. In this way, different values of the first and second target values can be determined for different set distances of the first and second eyes 6, 12, i.e. for different poses of the eyes 6, 12 converging on one another. In particular, it is provided that the control points are arranged at at least four different distances. From the determined distance values, the corresponding course over the set distance can be extrapolated and stored for the future display of the virtual object 16 for the particular eye pose.
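A minimal sketch of how such a course could be extrapolated from distance values measured at four control distances. The linear model in inverse distance and all numbers are illustrative assumptions, not taken from the patent:

```python
def fit_course(distance_values):
    """Fit offset = a + b * (1/distance) by least squares, so the calibrated
    course can be evaluated at set distances that were never measured.
    distance_values: {distance_m: measured_offset_px}"""
    xs = [1.0 / d for d in distance_values]            # inverse distances
    ys = [distance_values[d] for d in distance_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda distance_m: a + b / distance_m

# Offsets measured at four control point distances (illustrative numbers):
course = fit_course({0.5: 42.0, 1.0: 35.5, 2.0: 31.0, 4.0: 28.5})
print(round(course(1.5), 1))   # extrapolated offset at an unmeasured distance
```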

The type of target value is directly related to the type of display 4, 10 used. Displays 4, 10 usually have a so-called basic setting, also referred to as a default setting. If the video signal supplied to such a display is not intentionally altered or matched, the corresponding image or video is presented according to the default setting; the image is then generally rendered without distortion in the center of the display.

It is therefore particularly preferably provided that a first target position and/or a first target degree of deformation of a first display region 9 within the first display 4 for presenting the virtual object 16 in front of the first eye 6 is determined as the first target value of the geometric display setting, wherein, starting from at least one deviation between the first target position and/or the first target degree of deformation of the first display region 9 and a first display region base setting of the first display 4, at least one first correction factor and/or a first correction function is determined, and wherein the display control unit 8 is adapted to the user using at least the first correction factor or the first correction function. The first display region 9 is a subregion within the first display 4.

Correspondingly, it is preferably provided for the second eye that a second target position and/or a second target degree of deformation of a second display region 17 within the second display 10 for presenting the virtual object 16 in front of the second eye 12 is determined as the second target value of the geometric display setting, wherein, starting from at least one deviation between the second target position and/or the second target degree of deformation of the second display region 17 and a second display region base setting of the second display 10, at least one second correction factor and/or a second correction function is determined, and wherein the display control unit 8 is adapted to the user using at least the second correction factor or the second correction function. The second display region 17 is a subregion within the second display 10.

In the above context, the correction function establishes a relationship between a specific eye pose and/or gaze direction of the user and the correction coefficients to be used for displaying the virtual object under the respective conditions. The eye can change its pose quasi-continuously; it has been shown that the values of the correction coefficients exhibit the same quasi-continuous behavior.

The described use of correction coefficients or correction functions allows a simple adaptation of the displays 4, 10 or of the display control unit 8. Particularly preferably, the correction coefficients or the correction function are recorded as an array over different distance values.
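As a sketch of this adaptation, the following hypothetical helper maps coordinates from the display's default setting into the user-specific display region, here loosely interpreting the linear part as the correction function and the offset as the correction factor; the names and numbers are illustrative assumptions:

```python
def apply_correction(x_px, y_px, correction):
    """Map coordinates from the display's default (centered, undistorted)
    setting into the user-specific display region, using the correction
    determined from the deviation between target position / target degree
    of deformation and the display region base setting."""
    (a, b), (c, d) = correction["warp"]    # linear part ("correction function")
    dx, dy = correction["offset_px"]       # shift ("correction factor")
    return (a * x_px + b * y_px + dx,
            c * x_px + d * y_px + dy)

correction = {"offset_px": (35.5, -6.0), "warp": ((1.01, 0.01), (0.00, 0.99))}
print(apply_correction(100.0, 50.0, correction))  # corrected draw position
```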

Fig. 3 clearly shows how, for the two eyes shown, the respective display regions 9, 17 are distinctly offset from the centers of the two displays 4, 10.

The matching of the optical system 1 carried out according to the invention enables the virtual object 16 to be displayed in a predeterminable manner relative to the gaze direction or gaze behavior of the user. In particular, it thereby becomes possible to display the virtual object 16 in a predefinable relationship to a real object 15.

After the described optical system 1 has been matched or calibrated to a user according to the described method, virtual objects 16 can be generated and displayed. The method for generating and displaying a virtual object 16 by means of the optical system 1 provides the following further method steps:

The current gaze direction of the first eye 6 is determined by the eye tracking glasses 2.

The virtual object 16 is generated in response to a predeterminable state and/or a predeterminable event.

The virtual object 16 is displayed by the display control unit 8 in the first display 4, taking into account the first target value, at the position in the determined gaze direction of the first eye 6.

As already explained, two displays 4, 10 are provided in particular. In this case it is also provided in particular that, with correspondingly matched or calibrated eye tracking glasses 2, the following further method steps are carried out simultaneously with the preceding ones:

The current gaze direction of the second eye 12 is determined by the eye tracking glasses 2.

The virtual object 16 is also displayed by the display control unit 8 in the second display 10, taking into account the second target value, at the position in the determined gaze direction of the second eye 12.

In particular, when displays 4, 10 with a basic setting are used, it is provided that the virtual object 16 is displayed in the first display 4 offset and/or deformed by the first correction factor and/or the first correction function, and preferably that the virtual object 16 is displayed in the second display 10 offset and/or deformed by the second correction factor and/or the second correction function.
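Taken together, the per-frame display logic could look roughly like the following sketch. All callables and the display object are hypothetical stand-ins, since the patent does not define a software interface; the containment test reflects the behavior described for Fig. 7, where an object outside the display region is simply not presented.

```python
def display_frame(get_gaze_direction, gaze_to_display_px, apply_correction,
                  correction, display, virtual_object):
    """One frame of the display logic: determine the current gaze direction,
    map it to display coordinates under the default setting, apply the
    user-specific correction, and draw the virtual object only if the
    corrected position lies inside the display region."""
    gaze = get_gaze_direction()                  # from the eye tracking glasses
    x, y = gaze_to_display_px(gaze)              # default-setting coordinates
    x, y = apply_correction(x, y, correction)    # correction factor / function
    if display.contains(x, y):                   # outside display region 9 -> skip
        display.draw(virtual_object, x, y)
```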

It is provided that the virtual object is presented at the determined position in the gaze direction of the first eye 6 or the second eye 12, and that this presentation position moves along with the gaze direction. It is therefore preferably provided that the current gaze direction of the first eye 6 and/or the second eye 12 is continuously determined by the eye tracking glasses 2 and that the position at which the virtual object 16 is displayed is continuously adapted to the current gaze direction or directions.

Since at least one of the eyes 6, 12 can perform slight and unintended movements while viewing the virtual object 16, which are each detected by the eye tracking glasses 2, simply following the gaze would result in a constant movement of the display position, which the user may perceive as annoying or shaky. To avoid this, it can be provided that the current gaze direction must deviate from the last determined gaze direction by a predeterminable amount, in particular by 2°, before the position at which the virtual object 16 is displayed is adapted to the current gaze direction.

Alternatively, it can be provided that the determined gaze directions are averaged over a certain, or predeterminable, past period of time, and that the position at which the virtual object 16 is displayed lies in the averaged gaze direction. The length of the past period can be matched to the situation; it is preferably about 0.1 s to 0.3 s.
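The two stabilization variants can be sketched as follows, assuming gaze directions are delivered as 3D unit vectors at roughly 60 Hz; the class and parameter names are illustrative, and only the 2° threshold and the 0.1 s to 0.3 s window come from the text.

```python
import math
from collections import deque

def angle_between_deg(u, v):
    """Angle in degrees between two gaze direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

class GazeStabilizer:
    """Two alternatives for keeping the displayed position calm despite
    small, unintended eye movements."""

    def __init__(self, threshold_deg=2.0, window=15):  # 15 samples ~ 0.25 s at 60 Hz
        self.threshold_deg = threshold_deg
        self.anchor = None                  # gaze direction currently used for display
        self.history = deque(maxlen=window)

    def thresholded(self, gaze):
        """Variant 1: move the displayed object only once the gaze deviates
        from the last used direction by more than the threshold (e.g. 2 deg)."""
        if self.anchor is None or angle_between_deg(self.anchor, gaze) > self.threshold_deg:
            self.anchor = gaze
        return self.anchor

    def averaged(self, gaze):
        """Variant 2: display in the mean gaze direction of the recent past
        (about 0.1 s to 0.3 s)."""
        self.history.append(gaze)
        n = len(self.history)
        return tuple(sum(s[i] for s in self.history) / n for i in range(3))
```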

The virtual object 16 is generated and displayed when a predeterminable state and/or a predeterminable event occurs. Such a state or event is considered to have occurred only if a predeterminable number of criteria are each fulfilled.

According to a first preferred variant, the optical system has at least one field-of-view camera 14, wherein a predeterminable real object 15 is detected by the field-of-view camera 14, and the detection of the predeterminable real object 15 is a predeterminable event, or a criterion of such an event, for generating the virtual object 16. This makes it possible, for example, to support the user's orientation in the real world. The real object 15 can be recognized by means of a corresponding image processing program; face recognition can also be provided.

According to a second preferred variant, the system is designed to detect at least one state value of the user, which is monitored by the system for exceeding a boundary value; exceeding the boundary value is a predeterminable event for generating the virtual object 16. For example, a fatigue value describing the awake/tired state of the user can be determined by observing and evaluating the gaze behavior, for which no additional sensors are required.
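The boundary-value monitoring can be sketched as a simple crossing detector. The re-arming behavior (fire once per upward crossing) and the numeric boundary value are assumptions added for illustration:

```python
class BoundaryMonitor:
    """Treat 'state value exceeds boundary value' as the predeterminable
    event: fire once per upward crossing, re-arm when the value drops back."""

    def __init__(self, boundary_value):
        self.boundary_value = boundary_value
        self.exceeded = False

    def update(self, state_value):
        """Feed one new state value; return True when the event fires."""
        if state_value > self.boundary_value and not self.exceeded:
            self.exceeded = True
            return True          # -> generate and display the virtual object
        if state_value <= self.boundary_value:
            self.exceeded = False
        return False

# e.g. fatigue values derived from the gaze behavior, boundary 0.8:
monitor = BoundaryMonitor(0.8)
print([monitor.update(v) for v in (0.5, 0.7, 0.85, 0.9, 0.6, 0.95)])
# -> [False, False, True, False, False, True]
```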

It can furthermore be provided that the optical system 1 has at least one further sensor for determining physiological variables of the user, in particular the heartbeat and/or the skin conductivity, and/or that the optical system is connected to a corresponding external sensor in order to display its values. For example, the optical system 1 can be connected to a blood pressure sensor. The corresponding external sensor can also be a sensor that records a bodily function of a living being other than the user; in this way, for example, a coach can be informed of a critical state of one of his athletes. Furthermore, the at least one sensor can be a sensor of a technical installation.

According to a third preferred variant, the system has at least one navigation and position determination unit 23 for detecting the spatial orientation and location of the system 1, and a predeterminable location and a predeterminable spatial orientation are predeterminable events for generating the virtual object 16. The detection of the spatial orientation and location of the system 1 can be supported in particular by so-called location-based services, as known from the smartphone field.

Figs. 5 to 7 show examples of displaying a virtual object 16 in relation to a real object 15; it does not matter here whether the real object is merely present or is actually detected or recognized as an object 15. Fig. 5 shows a first eye 6 of a user looking through the first display 4 at a real object 15, labeled "real object" in Fig. 5. In a particularly simple reaction, the virtual object 16 is displayed in the display 4 in the form of a frame, such that the virtual object 16 surrounds the real object 15 and the frame appears essentially rectangular.

Fig. 6 shows a similar situation, but without the real object 15. Instead of the surrounding frame, a further virtual object 16 in the form of a tag is now presented, laterally adjacent to the gaze direction. Since the eye tracking glasses 2 make it possible to determine exactly where the user is looking, the virtual object can be positioned at or beside the gaze direction so that the user can recognize or read it without significantly changing the gaze direction.

Fig. 7 illustrates the effect on the virtual object 16, or tag, when the user turns away from it or directs the gaze to another item. The virtual object 16 retains its assigned position in space. Since it now lies outside the display region 9 of the display 4, it is no longer presented. As soon as the user again turns sufficiently far in the respective direction, the virtual object 16 is presented again.

It is preferably provided that, while the real object 15 is being fixated, the set distance of the user's two eyes 6, 12 is determined by the eye tracking glasses 2 and the virtual object 16 is presented on the first and second displays 4, 10 with a positional offset and/or deformation such that it appears as a single object at the same set distance from the two eyes 6, 12 as the real object 15. Although the virtual object 16 is presented on the displays 4, 10 arranged directly in front of the user's eyes 6, 12, it thus appears to the user's eyes 6, 12 at the distance of the real object 15 to which it is attached. The constant refocusing to different distances that would otherwise be required is thereby eliminated.

It is therefore provided that the virtual object 16 is displayed as a so-called stereoscopic image.

The set distance of the two eyes 6, 12 is determined by the eye tracking glasses in that the angular positions of the two eyes 6, 12 are determined and the distance on which the eyes 6, 12 are focused is calculated from them. A determination of the distance value as a length can be dispensed with entirely; the set distance can also be determined and processed in the form of one or more angles. In other words, the current pose of the two eyes is determined by the eye tracking glasses 2, and the virtual object 16 is presented on the first and second displays 4, 10 with a positional offset and/or deformation such that it appears as a single object at the distance on which the two eyes 6, 12 are converged.
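As a plausibility sketch of this triangulation: with a known inter-pupillary distance, the inward rotation angles of the two eyes determine the distance on which they are converged. The symmetric geometry and the default inter-pupillary distance below are simplifying assumptions for illustration; as the text notes, the system can equally work with the angles directly.

```python
import math

def vergence_distance_m(left_angle_deg, right_angle_deg, ipd_m=0.063):
    """Estimate the distance on which the two eyes are converged from their
    inward rotation angles (0 deg = looking straight ahead). Assumes a
    symmetric fixation point on the median plane, for illustration only."""
    # Half the inter-pupillary distance and the inward rotation angle span
    # a right triangle whose far corner is the fixation point.
    mean_inward = math.radians((left_angle_deg + right_angle_deg) / 2)
    if mean_inward <= 0:
        return float("inf")     # eyes parallel: very distant target
    return (ipd_m / 2) / math.tan(mean_inward)

print(round(vergence_distance_m(1.8, 1.8), 2))  # ~1.0 m at ~1.8 deg per eye
```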

The system 1 according to the invention preferably has no separate distance sensor for determining the distance between the system 1 and the real object 15.

In particular, it is provided that, for the presentation of the virtual object 16 in the first and second displays 4, 10, the display control unit 8 takes into account at least the distance values of the first target value and of the second target value that correspond to the set distance of the two eyes 6, 12. If no distance value of the first target value and of the second target value is stored for the current set distance, it is provided that the display control unit 8 interpolates between the two adjacent distance values.
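A minimal sketch of this lookup with linear interpolation between the two adjacent stored distance values; the clamping at the edges of the calibrated range is an added assumption, and the numbers are illustrative.

```python
def distance_value(stored, set_distance_m):
    """Look up the target value component for the current set distance of
    the eyes; if no value is stored for this distance, interpolate linearly
    between the two adjacent stored distance values, as the display control
    unit is provided to do."""
    ds = sorted(stored)
    if set_distance_m in stored:
        return stored[set_distance_m]
    if set_distance_m <= ds[0]:
        return stored[ds[0]]        # clamp below the calibrated range
    if set_distance_m >= ds[-1]:
        return stored[ds[-1]]       # clamp above the calibrated range
    lo = max(d for d in ds if d < set_distance_m)
    hi = min(d for d in ds if d > set_distance_m)
    t = (set_distance_m - lo) / (hi - lo)
    return stored[lo] + t * (stored[hi] - stored[lo])

# Illustrative horizontal offsets (px) stored at four control distances:
offsets = {0.5: 42.0, 1.0: 35.5, 2.0: 31.0, 4.0: 28.5}
print(distance_value(offsets, 1.5))  # -> 33.25, halfway between 35.5 and 31.0
```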
