Binocular augmented reality system with alignment correction and alignment correction method

Document No.: 377562  Publication date: 2021-12-10

Description: The invention "Binocular augmented reality system with alignment correction and alignment correction method" was created by 耶谢·丹齐格 on 2019-01-02. Its main content is as follows: The present disclosure relates to a binocular augmented reality system with alignment correction and an alignment correction method. A binocular augmented reality system with alignment correction comprises: (a) a right-eye display unit comprising a first augmented reality display spatially associated with a first camera; (b) a left-eye display unit comprising a second augmented reality display spatially associated with a second camera; (c) an adjustable support structure that provides an adjustable interpupillary distance between the right-eye display unit and the left-eye display unit; and (d) a processing system comprising at least one processor, in data communication with the first and second cameras and with the first and second augmented reality displays, and configured to determine an alignment correction between the first and second augmented reality displays for binocular display of images.

1. A binocular augmented reality system with alignment correction, comprising:

(a) a right eye display unit comprising a first augmented reality display spatially associated with a first camera;

(b) a left-eye display unit comprising a second augmented reality display spatially associated with a second camera;

(c) an adjustable support structure providing an adjustable interpupillary distance (IPD) between the right eye display unit and the left eye display unit; and

(d) a processing system comprising at least one processor, the processing system in data communication with the first and second cameras and the first and second augmented reality displays, the processing system configured to determine an alignment correction between the first and second augmented reality displays for binocular display of images.

2. The system of claim 1, wherein the processing system is responsive to user input to determine the alignment correction.

3. The system of claim 1, wherein the processing system is configured to intermittently determine the alignment correction.

4. The system of claim 1, wherein the processing system is configured to determine the alignment correction when a user views the first and second augmented reality displays.

5. The system of claim 1, wherein the right eye display unit, the left eye display unit, and the adjustable support structure are part of an augmented reality device, and wherein at least a portion of the processing system is an onboard processing system integrated with the device.

6. The system of claim 1, wherein the right eye display unit, the left eye display unit, and the adjustable support structure are part of an augmented reality device, and wherein at least a portion of the processing system is a remote processing system associated with the device via a short-range or long-range communication connection.

7. The system of claim 1, wherein the first camera is a forward-looking camera rigidly associated with the first augmented reality display and the second camera is a forward-looking camera rigidly associated with the second augmented reality display.

8. The system of claim 7, wherein the processing system is configured to determine a correlation between images from the first camera and the second camera as part of determining the alignment correction.

9. The system of claim 7, wherein the processing system is configured to determine the alignment correction by:

(i) sampling at least one image from the first camera;

(ii) sampling at least one image from the second camera;

(iii) co-processing the images from the first and second cameras to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera; and

(iv) combining the inter-camera mapping with a first alignment mapping between the first camera and the first augmented reality display and a second alignment mapping between the second camera and the second augmented reality display to derive the alignment correction.

10. The system of claim 9, wherein the at least one image sampled from the first camera and the second camera is a plurality of images, and wherein the co-processing comprises deriving a three-dimensional model of at least a portion of a scene contained in the plurality of images.

11. The system of claim 7, wherein the processing system is configured to receive user input indicative of an alignment adjustment between an alignment feature displayed via the first augmented reality display and a corresponding real-world feature observed by a user, the alignment feature derived from an image sampled from the second camera.

12. The system of claim 11, wherein the processing system is further configured to receive user input indicative of an alignment adjustment between an alignment feature displayed via the second augmented reality display and a corresponding real-world feature observed by a user, the alignment feature derived from an image sampled from the first camera.

13. The system of claim 7, wherein the processing system is configured to determine the alignment correction by:

(a) performing a first cross-registration process, the first cross-registration process comprising:

(i) obtaining at least one image of a scene sampled by the first camera,

(ii) displaying, via the second augmented reality display, at least one alignment feature derived from the at least one image sampled by the first camera,

(iii) receiving input from a user indicating an alignment offset between the at least one alignment feature and a corresponding visual feature of the scene, and

(iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding visual feature of the scene;

(b) performing a second cross-registration process, the second cross-registration process comprising:

(i) obtaining at least one image of a scene sampled by the second camera,

(ii) displaying, via the first augmented reality display, at least one alignment feature derived from the at least one image sampled by the second camera,

(iii) receiving input from the user indicating an alignment offset between the at least one alignment feature and a corresponding visual feature of the scene, and

(iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding visual feature of the scene; and

(c) the alignment correction is derived based on the user input.

14. The system of claim 13, wherein the at least one alignment feature for each of the cross-registration processes is at least a portion of a sampled image.

15. The system of claim 13, wherein the at least one alignment feature for each of the cross-registration processes is a location marker corresponding to a feature detected in a sampled image.

16. The system of claim 13, further comprising obtaining an estimated distance to an object in a sampled image, the estimated distance used to implement the alignment correction.

17. A method for stereo alignment correction between a right eye display and a left eye display of a binocular augmented reality display device, the method comprising the steps of: (a) providing an augmented reality device, comprising: (i) a right eye display unit comprising a first augmented reality display rigidly integrated with a forward-looking first camera, (ii) a left eye display unit comprising a second augmented reality display rigidly integrated with a forward-looking second camera, and (iii) a support structure interconnected between the right eye display unit and the left eye display unit; (b) providing a first alignment mapping between the first camera and the first augmented reality display and a second alignment mapping between the second camera and the second augmented reality display; (c) sampling at least one image from the first camera; (d) sampling at least one image from the second camera; (e) co-processing the images from the first and second cameras to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera; (f) combining the inter-camera mapping with the first alignment mapping and the second alignment mapping to obtain an inter-display alignment mapping indicating a relative orientation of the first augmented reality display and the second augmented reality display; and (g) performing an alignment correction for the augmented reality display device based on the inter-display alignment mapping, wherein the at least one image from the first camera and the second camera is a plurality of images, and wherein the co-processing comprises deriving a three-dimensional model of at least a portion of a scene contained in the plurality of images.

18. A method for stereo alignment correction between a right eye display and a left eye display of a binocular augmented reality display device, the method comprising the steps of: (a) providing an augmented reality device, comprising: (i) a right eye display unit comprising a first augmented reality display rigidly integrated with a forward-looking first camera, (ii) a left eye display unit comprising a second augmented reality display rigidly integrated with a forward-looking second camera, and (iii) a support structure interconnected between the right eye display unit and the left eye display unit; (b) providing a first alignment mapping between the first camera and the first augmented reality display and a second alignment mapping between the second camera and the second augmented reality display; (c) sampling at least one image from the first camera; (d) sampling at least one image from the second camera; (e) co-processing the images from the first and second cameras to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera; (f) combining the inter-camera mapping with the first alignment mapping and the second alignment mapping to obtain an inter-display alignment mapping indicating a relative orientation of the first augmented reality display and the second augmented reality display; and (g) performing an alignment correction for the augmented reality display device based on the inter-display alignment mapping, wherein the step of providing a first alignment mapping comprises: (i) sampling at least one calibration image using the first camera; (ii) displaying, via the first augmented reality display, the calibration image sampled by the first camera; and (iii) determining the first alignment mapping from the alignment of the displayed calibration image.

19. The method of claim 18, wherein the calibration image is projected by a projector, and wherein the alignment of the displayed calibration image is determined from an image sampled by a calibration camera, the projector and the calibration camera being rigidly mounted on a calibration jig.

20. A method for stereo alignment correction between a right eye display and a left eye display of a binocular augmented reality display device, the method comprising the steps of: (a) providing an augmented reality device comprising a right-eye augmented reality display, a left-eye augmented reality display, a right camera spatially associated with the right-eye augmented reality display, and a left camera spatially associated with the left-eye augmented reality display; (b) performing a first cross-registration process, the first cross-registration process comprising: (i) obtaining at least one image of a scene sampled by the right camera, (ii) displaying, via the left-eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the right camera, (iii) receiving input from a user indicative of an alignment offset between the at least one alignment feature and a corresponding visual feature of the scene, and (iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding visual feature of the scene; (c) performing a second cross-registration process, the second cross-registration process comprising: (i) obtaining at least one image of a scene sampled by the left camera, (ii) displaying, via the right-eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the left camera, (iii) receiving input from the user indicative of an alignment offset between the at least one alignment feature and a corresponding visual feature of the scene, and (iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding visual feature of the scene; and (d) implementing an alignment correction for the augmented reality display device based on the user input.

21. The method of claim 20, wherein the at least one alignment feature for each of the cross-registration processes is at least a portion of a sampled image.

22. The method of claim 20, wherein the at least one alignment feature for each of the cross-registration processes is a location marker corresponding to a feature detected in a sampled image.

23. The method of claim 20, further comprising obtaining an estimated distance to an object in a sampled image, the estimated distance being used to implement the alignment correction.

24. The method of claim 20, wherein the right camera is rigidly mounted relative to the right-eye augmented reality display, and wherein the left camera is rigidly mounted relative to the left-eye display, the alignment correction implemented using relative alignment data of the right camera relative to the right-eye augmented reality display and relative alignment data of the left camera relative to the left-eye augmented reality display.

25. The method of claim 20, further comprising performing at least one additional registration process to receive user input to correct alignment of at least one of the right eye augmented reality display and the left eye augmented reality display relative to a corresponding one of the right camera and the left camera.

Technical Field

The present invention relates to augmented reality displays and, in particular, to a binocular augmented reality display having an arrangement for adjusting the alignment of its left-eye and right-eye displays, and to a corresponding alignment method.

Background

Augmented reality glasses must be precisely aligned to provide an effective binocular viewing experience of the augmented images; even relatively small misalignments risk causing eye fatigue or headaches. Conventional approaches typically involve mounting the left-eye display and the right-eye display on a mechanically rigid common support structure, as shown in fig. 1A, to achieve preliminary alignment and fixed relative positions of the displays. Final fine alignment is achieved by electronic shifting of the images, as schematically shown in fig. 1B, where fig. 1B shows an image generation matrix 30 (i.e., the physical extent of the display field of view) and a projected image 32 transformed according to a calibration matrix typically programmed into the firmware associated with each display. The margin between 30 and 32 is designed into the system to accommodate any transformations required to correct for misalignment within predefined limits.
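The electronic fine alignment described here can be pictured as warping each rendered frame by a small per-display calibration transform before it reaches the projector, keeping the result inside the reserved margin. The following is only an illustrative sketch (using OpenCV/NumPy, with a made-up calibration matrix), not the firmware implementation referred to above.

```python
import numpy as np
import cv2

def apply_display_calibration(frame: np.ndarray, calib: np.ndarray) -> np.ndarray:
    """Warp a rendered frame by a 3x3 calibration homography.

    The generation matrix (30) is assumed to be larger than the projected
    image (32), so the warp stays inside the reserved margin.
    """
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, calib, (w, h), flags=cv2.INTER_LINEAR)

# Example: a small rotation plus a few-pixel shift, standing in for a
# factory-measured calibration matrix (values are hypothetical).
theta = np.deg2rad(0.2)
calib = np.array([[np.cos(theta), -np.sin(theta),  3.0],
                  [np.sin(theta),  np.cos(theta), -1.5],
                  [0.0,            0.0,            1.0]])
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = apply_display_calibration(frame, calib)
```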

Referring to figs. 1A-2, an exemplary alignment process according to this approach is shown. Electronic alignment parameters are generated by placing the glasses in front of two co-aligned cameras and comparing the orientations of the augmented images generated by the two projectors. The resulting calibration data is introduced into the transform firmware of the image projectors. Alternatively, the mechanical alignment of the optical system may be made accurate to within the required optical tolerance. The alignment process described above requires a dedicated optical alignment stage and is only suitable for implementation in a production facility.

There is a need to implement augmented reality glasses in a lightweight and compact form factor to make the technology more suitable for the consumer market. Lightweight implementations, however, typically lack sufficient mechanical rigidity to ensure that the alignment of the two displays remains unchanged over time; instead, the alignment may drift due to thermal variations and other mechanical or environmental effects.

Furthermore, the interpupillary distance (IPD, the distance between the eyes) can vary by up to 15 mm between different people. Thus, if the two projectors are rigidly connected, each of the eye-boxes (i.e., the illuminated area of each projector within which the eye pupil is expected to be located, shown as area 10 in fig. 1A) must be widened by 15/2 = 7.5 mm for each eye in order to accommodate every possible user with any IPD within the defined range. A larger eye-box dictates bulkier and more expensive optics. If a mechanism for IPD adjustment is provided, it will typically introduce additional uncertainty into the alignment between the two displays, making any pre-calibrated alignment correction unreliable.

Disclosure of Invention

The present invention is a binocular augmented reality display having an arrangement for adjusting alignment of left and right eye displays of the binocular augmented reality display and a corresponding alignment method.

According to the teachings of embodiments of the present invention, there is provided a method for obtaining an alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device, the method comprising the steps of: (a) positioning a camera having a field of view such that the camera field of view includes both a portion of the projected image from the left-eye display and a portion of the projected image from the right-eye display; (b) projecting, via each of the right-eye display and the left-eye display, at least a portion of a calibration image including at least one right field alignment feature and at least one left field alignment feature; (c) sampling an image with a camera; (d) identifying a right field alignment feature and a left field alignment feature within the image, and (e) deriving an alignment correction between a right eye display and a left eye display of the augmented reality display device based on positions of the right field alignment feature and the left field alignment feature within the image.

According to another feature of an embodiment of the present invention, the camera is located on a viewing side of the augmented reality display device such that the image includes a right field alignment feature viewed via the right-eye display and a left field alignment feature viewed via the left-eye display.

According to another feature of an embodiment of the present invention, the projected calibration image is displayed with a focal length, and wherein the camera is focused at the focal length.

According to another feature of an embodiment of the present invention, the camera is located on the side opposite to the viewing side of the augmented reality display device, such that the camera captures an outwardly reflected portion of the image illumination from each of the right-eye display and the left-eye display, and such that the image includes a left field alignment feature viewed via the right-eye display and a right field alignment feature viewed via the left-eye display.

According to another feature of an embodiment of the present invention, the camera is a handheld camera, and the method further comprises displaying at least one indication to the user via the right-eye display and/or the left-eye display to assist the user in properly positioning the camera.

According to another feature of an embodiment of the invention: (a) identifying within the image features associated with the binocular augmented reality display device sufficient to define at least three fiducial points; and (b) determining the position of the camera relative to the at least three fiducial points.

According to another feature of an embodiment of the present invention, the positioning includes: the camera is directed toward the mirror such that the reflected field of view includes both a portion of the projected image from the left-eye display and a portion of the projected image from the right-eye display.

According to another feature of an embodiment of the present invention, the camera is a camera of a mobile device integrated with a screen, and the method further comprises displaying at least one indication to the user via the screen to assist the user in properly positioning the camera.

According to another feature of an embodiment of the present invention, an alignment correction of the augmented reality display device is implemented based on the derived alignment correction.

There is also provided, in accordance with the teachings of an embodiment of the present invention, a method for stereo alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device, the method including the steps of: (a) providing an augmented reality device, the augmented reality device comprising: (i) a right eye display unit comprising a first augmented reality display rigidly integrated with a forward-looking first camera, (ii) a left eye display unit comprising a second augmented reality display rigidly integrated with a forward-looking second camera, and (iii) a support structure interconnected between the right eye display unit and the left eye display unit; (b) providing a first alignment mapping between the first camera and the first augmented reality display and a second alignment mapping between the second camera and the second augmented reality display; (c) sampling at least one image from the first camera; (d) sampling at least one image from the second camera; (e) cooperatively processing images from the first camera and the second camera to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera; (f) combining the inter-camera mapping with the first and second alignment mappings to obtain an inter-display alignment mapping indicating a relative orientation of the first and second augmented reality displays; and (g) implementing an alignment correction for the augmented reality display device based on the inter-display alignment mapping.

According to another feature of an embodiment of the present invention, the at least one image from each of the first camera and the second camera is sampled from a distant scene.

According to another feature of an embodiment of the present invention, the at least one image from the first camera and the second camera is a plurality of images, and wherein the co-processing comprises: a three-dimensional model of at least a portion of a scene included in the plurality of images is obtained.

There is also provided, in accordance with the teachings of an embodiment of the present invention, a method for stereo alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device, the method including the steps of: (a) providing an augmented reality device comprising a right eye augmented reality display, a left eye augmented reality display, a right camera spatially associated with the right eye augmented reality display, and a left camera spatially associated with the left eye augmented reality display; (b) performing a first cross-registration process, the first cross-registration process comprising: (i) obtaining at least one image of a scene sampled by the right camera, (ii) displaying, via the left eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the right camera, (iii) receiving input from a user indicative of an alignment offset between the at least one alignment feature and a corresponding direct-view feature of the scene, and (iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct-view feature of the scene; (c) performing a second cross-registration process, the second cross-registration process comprising: (i) obtaining at least one image of a scene sampled by the left camera, (ii) displaying, via the right-eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the left camera, (iii) receiving input from the user indicative of an alignment offset between the at least one alignment feature and a corresponding direct-view feature of the scene, and (iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct-view feature of the scene; and (d) implementing an alignment correction for the augmented reality display device based on the user input.

According to a further feature of an embodiment of the present invention, the at least one alignment feature for each cross-registration process is at least a portion of a sampled image.

According to a further feature of an embodiment of the present invention, the at least one alignment feature for each cross-registration process is a position marker corresponding to a feature detected in the sampled image.

According to another feature of an embodiment of the present invention, an estimated distance to an object in the sampled image is obtained, the estimated distance being used to implement the alignment correction.

According to another feature of an embodiment of the present invention, the right camera is rigidly mounted relative to the right-eye augmented reality display and wherein the left camera is rigidly mounted relative to the left-eye display, the alignment correction being effected using relative alignment data of the right camera relative to the right-eye augmented reality display and relative alignment data of the left camera relative to the left-eye augmented reality display.

According to another feature of an embodiment of the present invention, at least one additional registration process is performed to receive user input for correcting alignment of at least one of the right-eye augmented reality display and the left-eye augmented reality display relative to a respective one of the right camera and the left camera.

Drawings

The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:

as described above, FIG. 1A is a top view of a binocular augmented reality display according to the prior art;

FIG. 1B is a schematic representation illustrating the principles of electronic alignment correction for an augmented reality display;

FIG. 2 is a flow diagram illustrating a factory adjustment process for calibrating an augmented reality display according to the prior art;

FIG. 3 is a schematic front view of a binocular augmented reality display with an arrangement for adjusting the IPD, constructed and operative in accordance with an embodiment of the present invention;

FIG. 4 is a schematic side view of the display of FIG. 3 in use;

FIG. 5 is a schematic side view of the apparatus of FIG. 4 during a factory partial calibration process according to a first implementation option;

FIG. 6 is a schematic side view of the apparatus of FIG. 4 during a factory partial calibration process according to a second implementation option;

FIG. 7 is a schematic representation of a calibration process including sampling multiple images of an object or scene from different directions;

FIG. 8 is a flow diagram illustrating a method for alignment calibration of the augmented reality display of FIGS. 3 and 4, in accordance with an aspect of the present invention;

FIGS. 9A and 9B are side and front schematic views, respectively, of an augmented reality display device employing an alternative technique of alignment calibration;

FIG. 9C is a schematic representation of alignment adjustment performed by a user according to this aspect of the invention;

FIG. 10A is a schematic side view of an augmented reality display device during implementation of an alignment calibration according to another aspect of the present invention;

FIG. 10B is an enlarged schematic side view of a light guide optical element showing two possible geometries for delivering an augmented reality image to a user's eye;

FIG. 11A is a schematic top view of the arrangement of FIG. 10A;

FIG. 11B is a schematic top view of a modified implementation of the arrangement of FIG. 10A;

FIG. 11C is a schematic representation of a mobile communication device employed as a camera for the alignment calibration of FIG. 10A;

FIG. 11D is a schematic representation of a calibration image displayed via an augmented reality display during performance of an alignment calibration according to this aspect of the invention;

FIG. 11E is a schematic representation of images sampled by the camera during performance of alignment calibration according to this aspect of the invention;

FIG. 11F is a schematic top view of another variant implementation of the arrangement of FIG. 10A; and

FIG. 12 is a flowchart illustrating a method of alignment calibration according to the arrangements of FIGS. 10A, 11B and 11F.

Detailed Description

The present invention is a binocular augmented reality display having an arrangement for adjusting alignment of left and right eye displays of the binocular augmented reality display and a corresponding alignment method.

The principles and operation of an apparatus and method according to the present invention may be better understood with reference to the drawings and the accompanying description.

By way of introduction, the present invention addresses a range of situations in which there is no pre-calibrated alignment between the right-eye display and the left-eye display of a binocular augmented reality display, or in which the pre-calibrated alignment is not considered reliable. This may be due to the use of lightweight structural components that cannot ensure constant rigid alignment of the components over sustained periods of time and/or varying environmental conditions, or due to the presence of adjustment mechanisms, particularly an IPD adjustment mechanism, that may leave the final alignment of the displays uncertain. An IPD adjustment mechanism is particularly desirable, since it allows the augmented reality display device to accommodate users with different interpupillary distances while reducing the requirements on projector eye-box size and, therefore, projector volume, complexity, and cost. However, an IPD adjustment mechanism typically introduces variability into the alignment of the two display projectors.

To address these problems, the present invention provides three groups of solutions that allow the alignment of the right-eye display and the left-eye display of a binocular augmented reality display device to be calibrated, or re-calibrated, in the normal working environment of the end user without the need for any specialized equipment. Specifically, a first subset of alignment correction techniques is implemented as an automatic or semi-automatic alignment process based on correlating images sampled by bilateral cameras associated with the respective left-eye and right-eye displays. A second subset of alignment correction techniques, which also employs cameras mounted on the device, requires user input to align displayed features with corresponding real-world features. Finally, a third subset of alignment correction techniques may be applied without relying on a camera mounted on the device, relying instead on an external camera. Each of these subsets of techniques preferably also corresponds to a distinct implementation of the binocular augmented reality device, with control components configured to implement the corresponding technique. Each approach will now be described in detail.

Referring now to the drawings, figs. 3-8 illustrate various aspects of a binocular augmented reality display device, an initial partial alignment process, and corresponding methods for stereo alignment correction between the right eye display and the left eye display of the binocular augmented reality display device, all according to the first approach of this aspect of the present invention. According to this approach, each of the two displays ("projectors") is rigidly attached to a forward-looking camera. The support structure bridging between the two projectors is relatively less rigid and/or can be adjusted and locked by the user according to his or her personal IPD. Images of the scene received by the cameras are compared, and a transformation matrix is derived for the projectors.

Thus, in summary, there is provided an augmented reality device comprising a right eye display unit having a first augmented reality display rigidly integrated with a forward-looking first camera, and a left eye display unit having a second augmented reality display rigidly integrated with a forward-looking second camera. The augmented reality device further comprises a support structure interconnected between the right eye display unit and the left eye display unit. According to a preferred aspect of this approach, each display unit is rigid, so that each camera is in fixed alignment with the respective augmented reality display, and the system is provided with, or derives, an alignment mapping between each camera and the respective augmented reality display, typically in the form of a transformation matrix that maps camera alignment to the display (i.e., allows a camera image to be displayed correctly aligned with the real-world view of a distant scene observed through the augmented reality display). The support structure, on the other hand, need not be assumed rigid enough to maintain constant alignment between the left-eye and right-eye display units over time, and in certain particularly preferred implementations the support structure includes an adjustment mechanism that allows the IPD to be adjusted for different users, which typically results in some variation in angular alignment during adjustment.
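One way to picture such a camera-to-display alignment mapping is as a 3x3 homography that takes a pixel in the camera image to the display pixel at which that feature appears world-aligned for a distant scene. The sketch below (NumPy only, with a hypothetical matrix named H_disp_from_cam) is purely illustrative of how such a mapping would be applied; it is not taken from the patent.

```python
import numpy as np

def map_camera_point_to_display(p_cam, H_disp_from_cam):
    """Map a camera pixel (x, y) to display coordinates via a 3x3 homography."""
    x, y = p_cam
    v = H_disp_from_cam @ np.array([x, y, 1.0])
    return v[:2] / v[2]

# Hypothetical factory-derived mapping for the left display unit.
H_disp_from_cam = np.array([[1.001, 0.000,  4.2],
                            [0.000, 0.999, -2.7],
                            [0.000, 0.000,  1.0]])
print(map_camera_point_to_display((960, 540), H_disp_from_cam))
```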

Alignment correction is then preferably performed by a processing system associated with the augmented reality display device, which may be an onboard processing system or may be a processing system associated with the device via a short range or long range communication connection. Here and elsewhere in this application, the processes described may be performed by standard processing components, which may be general purpose hardware or ASIC or other special or semi-special purpose hardware configured by suitable software, as readily selected by the person of ordinary skill in the art according to their best suitability for the functions described herein. Further, the processing may be performed at any location or combination of locations including, but not limited to, one or more on-board processors forming part of an augmented reality display device, a mobile communication device wired or wirelessly connected to an AR display device, a server located at a remote location and connected to the AR display device via a WAN, and a cloud computing virtual machine comprised of dynamically allocated computing resources. The details of the processing system implementation are not essential to the implementation of the present invention and therefore will not be described in further detail herein.

The alignment correction process according to an aspect of the present invention preferably includes:

i. sampling at least one image from a first camera;

ii. sampling at least one image from a second camera;

iii. co-processing images from the first camera and the second camera to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera;

iv. combining the inter-camera mapping with the first and second alignment mappings to obtain an inter-display alignment mapping indicative of a relative orientation of the first and second augmented reality displays; and

v. implementing an alignment correction for the augmented reality display device based on the inter-display alignment mapping.

This process will be discussed in more detail below.
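As a preview of step iii, the inter-camera mapping can be estimated by ordinary feature matching between the two sampled frames. The following is a minimal sketch under the assumption of a distant scene (negligible parallax), using OpenCV ORB features; it illustrates one possible implementation rather than the specific algorithm of the invention.

```python
import cv2
import numpy as np

def inter_camera_homography(img_left: np.ndarray, img_right: np.ndarray) -> np.ndarray:
    """Estimate a homography mapping right-camera pixels to left-camera pixels.

    Assumes a distant scene, so parallax is negligible and the mapping is
    dominated by the relative orientation of the two cameras.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    pts_right = np.float32([k2[m.queryIdx].pt for m in matches])
    pts_left = np.float32([k1[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_right, pts_left, cv2.RANSAC, 3.0)
    return H  # right-camera -> left-camera pixel mapping
```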

Fig. 3 schematically depicts a front view of a system according to the invention. Optical assemblies 40R and 40L project images into respective see-through optical elements 42R and 42L, which are preferably implemented as light-transmitting light-guide optical elements with partial reflectors or diffractive optical elements for coupling the virtual images out towards the right and left eyes, respectively, of an observer. Forward-facing cameras 44R and 44L are rigidly attached to their adjacent projectors, while a support structure 46, preferably implemented as an adjustable mechanical arrangement, connects the two projectors. Preferably, the mechanical arrangement can be unlocked to change the distance between the projectors and then locked again before use. This allows IPD adjustment, reducing projector size and complexity. It should be appreciated that precise parallelism and orientation are generally not maintained after unlocking and re-locking the arrangement 46.

Fig. 4 shows a schematic side view of the left projector and camera. Light from the optical assembly 40L passes through the waveguide 42L and is deflected towards the eye (the deflection method is not shown, but is typically based on a substrate with internal tilted partially reflective facets, as commercially available from Lumus Ltd., or on an arrangement of diffractive optical elements). The object 50 or scene is imaged by the camera 44L. The same object is imaged by the right camera 44R.

The alignment correction process according to this aspect of the invention requires an alignment mapping between each camera and the corresponding augmented reality display, for each of the right eye display unit and the left eye display unit. Preferably, the transformation parameters between the camera axis and the projector axis are measured after the camera and projector have been integrated, typically as part of the manufacturing process. Various techniques may be used to determine the alignment mapping. Two options will now be described with reference to figs. 5 and 6.

In fig. 5, an external fixture 52 securely holds a co-aligned projector 54 and camera 56. The projector and camera are preferably aligned with their optical axes parallel to each other, and most preferably with sufficient accuracy that no transformation parameters are required between the two. Projector 54 projects a "reference image" that is received by camera 44L. The processing system injects a similar, centered image into projector 40L, which generates a projected image that is received by camera 56 via optical element 42L. The processing system compares the images from 44L and 56 to define the transformation parameters between 40L and 44L. The distance between 44L and 42L (specifically, the eye-box center of the waveguide) is also preferably recorded for parallax calculation, if required.

In fig. 6, two projectors 54U and 54D are rigidly attached to one another (or, alternatively, may be implemented as a single projector with a sufficiently large aperture) and project calibration images that are typically collimated to infinity. The image from 54U is received by camera 44L and "injected" into projector 40L. Camera 56 then simultaneously receives a superposition of the directly viewed image projected by 54D and the image projected by projector 40L through optical element 42L. The difference between the two images corresponds to the transformation data between projector 40L and camera 44L. Most preferably, an automatic alignment process adjusts the alignment of the image generated by projector 40L until a sharp (precisely superimposed) image is received by camera 56, although a manually controlled adjustment process using a suitable graphical user interface (not shown) is also possible. At this stage, such adjustments need not actually be implemented in the device firmware, since the final alignment will also depend on the binocular alignment. To facilitate manual or automatic alignment, the alignment image may be an X-crosshair or the like, and the image from 40L may be changed in color or made to blink so that the two images are clearly distinguishable during the alignment process. The two visually distinct X-crosshairs then need to be aligned.
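The automatic variant can be pictured as a feedback loop that nudges the injected crosshair until the two crosshairs detected in the calibration camera coincide. The sketch below is only a conceptual illustration of such a loop; the callbacks project_crosshair, capture_frame and detect_crosshairs are hypothetical helpers standing in for the actual projection, capture and detection tooling.

```python
import numpy as np

def auto_align(project_crosshair, capture_frame, detect_crosshairs,
               max_iters=50, tol_px=0.25, gain=0.8):
    """Iteratively shift the injected crosshair until it overlays the
    direct-view crosshair seen by the calibration camera.

    project_crosshair(offset_xy) -- hypothetical: injects the crosshair into
        projector 40L, shifted by offset_xy pixels.
    capture_frame() -- hypothetical: grabs a frame from calibration camera 56.
    detect_crosshairs(frame) -- hypothetical: returns (injected_xy, reference_xy).
    """
    offset = np.zeros(2)
    for _ in range(max_iters):
        project_crosshair(offset)
        injected, reference = detect_crosshairs(capture_frame())
        error = np.asarray(reference, float) - np.asarray(injected, float)
        if np.linalg.norm(error) < tol_px:
            break
        offset += gain * error  # simple proportional correction
    return offset  # residual offset = transformation data between 40L and 44L
```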

If the optics of projector 40L generate the virtual image at a finite distance, it is preferable to set the calibration images projected by 54U and 54D to that distance as well; since the distance parameters are known, the image projected from projector 40L is shifted, when injected into 42L, according to the parallax between camera 44L and the optical element 42L.

The alignment process shown above for the left eye display unit is, of course, repeated (or performed simultaneously) for the right eye display unit. The result is, for each display unit, a well-defined transformation matrix mapping the camera alignment to the corresponding display.

After one of the alignment techniques described above has been used, during or after manufacture, to obtain an alignment transformation between each projector and its corresponding camera, the cameras may then be used in a calibration process performed by the end user to measure and correct misalignment between the two projectors as needed, for example after adjustment of the IPD, or as an automated self-calibration process performed intermittently (or, in certain preferred applications, each time the device is powered on).

When the cameras sample images of a distant scene, resolving the relative orientation of cameras 44L and 44R (after IPD adjustment, as described with reference to fig. 3) is particularly straightforward, since the disparity between the two sampled images is negligible. In this context, "distant" is ideally any distance in excess of about 100 meters, which ensures that the angular variation due to convergence between the eyes/cameras is smaller than the angular resolution of human visual perception. In practice, however, "distant" may include any distance in excess of about 30 meters, and in some cases distances of 10 or 20 meters may also allow this simplified calibration procedure to be used with acceptable results. Thus, in the case of a user-actuated calibration, the user may be instructed to point the device at a distant scene before starting the calibration process. Similarly, where the device is used in an outdoor environment, the device may be configured to detect, via a ranging sensor or by image processing, when the cameras are viewing a distant scene. Calibration may then be performed by sampling an image of the distant scene from each camera 44L and 44R, and performing image comparison/registration between the two images to determine the transformation between the cameras.
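With a distant scene and known camera intrinsics, the inter-camera homography reduces approximately to a pure rotation, H ≈ K R K⁻¹, so the relative camera orientation can be read back as R ≈ K⁻¹ H K. The sketch below assumes shared intrinsics K (a simplification) and is offered only as one plausible way to extract the orientation, not as the patented procedure.

```python
import numpy as np

def rotation_from_distant_scene_homography(H: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Recover the approximate relative rotation between the two cameras.

    For a distant scene the inter-camera homography is H ~ K @ R @ inv(K),
    so R ~ inv(K) @ H @ K up to scale; the result is re-projected onto the
    nearest true rotation via an SVD.
    """
    R = np.linalg.inv(K) @ H @ K
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = -R
    return R

# Hypothetical shared intrinsics for cameras 44L and 44R (focal length and
# principal point in pixels).
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
```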

Even in cases where the scene is at a short distance, alignment correction can sometimes be done using simple image registration, as long as the scene has little "depth" and the two cameras sample substantially the same image. One such example is calibration by imaging a flat surface such as a poster or other picture or texture on a wall. In this case, information on the distance from the camera to the surface is required to correct the convergence angle.

To allow calibration in situations where a "distant scene" may not be available, or to provide a more robust calibration procedure that can be performed automatically without user cooperation, calibration may also be performed using nearby objects, for which the parallax between the cameras is significant. In this case, a 3D reconstruction is required in order to "resolve" the relative camera positions. It may be necessary to move the cameras, generating multiple images, to arrive at an accurate solution, as schematically shown in fig. 7. Algorithms for this calculation are well known in the literature and in open-source code libraries relating to SLAM (simultaneous localization and mapping) processing. By employing these algorithms, a 3D reconstruction (or "model") of at least part of the scene is generated for each camera. The offset (position and orientation) between the projectors is then determined from the reconstructed offset between the cameras.

Where a SLAM process is used to derive the model, a scale factor is required to fully resolve the model. The scale factor may be derived from any of a number of sources, including but not limited to: a known distance between the two cameras, in a device without IPD adjustment; a measured distance between the two cameras, where an encoder is included on the IPD adjustment mechanism; camera motion derived from an inertial motion sensor arrangement integrated with the device; a distance to a pixel location within an image, as obtained, for example, by a rangefinder integrated with the device; identification of an object of known size within the field of view of the image; and the introduction of additional parametric constraints, such as objects known to have straight edges.
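For a close scene, the relative pose recovered from matched points via the essential matrix is known only up to scale in translation; any of the scale sources listed above (for example, the known or encoder-measured separation between the two cameras) then fixes the metric baseline. The sketch below uses standard OpenCV calls and assumes, for brevity, shared intrinsics and pre-matched points; it is an illustration of the general idea, not the invention's specific SLAM pipeline.

```python
import cv2
import numpy as np

def relative_pose_with_scale(pts_left, pts_right, K, baseline_m):
    """Recover rotation and metric translation between the two cameras.

    pts_left, pts_right: matched pixel coordinates (Nx2 float32 arrays).
    K: shared 3x3 intrinsic matrix (an assumption made for brevity).
    baseline_m: known camera separation, e.g. from the IPD-adjustment encoder.
    """
    E, mask = cv2.findEssentialMat(pts_left, pts_right, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # R, t describe the second camera relative to the first; t is unit-norm.
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K, mask=mask)
    t_metric = baseline_m * (t / np.linalg.norm(t))  # resolve the scale ambiguity
    return R, t_metric
```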

An exemplary overview of the overall procedure, in the case of an IPD adjustment followed by realignment, is shown in fig. 8. The process starts after adjustment of the distance between the projectors (step 110), e.g., by IPD adjustment, and may be user-initiated or automatically triggered. The process may also be implemented as an automatic or semi-automatic process, triggered manually, by a software trigger signal, or at device start-up, optionally generating a prompt asking the user to move relative to the scene being viewed.

Once triggered, the device acquires images of the scene from the left camera (step 112) and the right camera (step 114), and the processing system (on the device or remotely) compares the images to derive the relative orientation of the two cameras (step 116). If the simple registration process fails because of parallax differences between the images, the system preferably samples additional images, waiting for motion if needed, in order to obtain an at least partial 3D model of part of the field of view (step 118), allowing the relative camera orientation to be found. At step 120, this relative camera orientation data is used together with the previously derived left-camera-to-left-projector transformation data (122) and right-camera-to-right-projector transformation data (124) to determine the overall alignment correction to be introduced into the firmware of each projector (steps 126 and 128). This allows the left virtual image to be converted to a left transformed virtual image for projection from projector 40L, and the right virtual image to be converted to a right transformed virtual image for projection from projector 40R, so as to generate a correctly aligned viewed image.
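The combination performed at step 120 can be expressed as a chain of 3x3 mappings. The sketch below assumes all mappings are homographies in a display-from-camera convention (the names are hypothetical); how the resulting residual is split between the two projectors, and in which direction the warp is applied, is a firmware sign-convention choice that the sketch does not prescribe.

```python
import numpy as np

def inter_display_mapping(H_dispL_from_camL: np.ndarray,
                          H_dispR_from_camR: np.ndarray,
                          H_camL_from_camR: np.ndarray) -> np.ndarray:
    """Compose camera->display and camera->camera mappings into the
    left-display <- right-display mapping for the same world direction.

    H_dispL_from_camL: transformation data (122), left camera to left projector.
    H_dispR_from_camR: transformation data (124), right camera to right projector.
    H_camL_from_camR:  inter-camera mapping from step 116.
    """
    return H_dispL_from_camL @ H_camL_from_camR @ np.linalg.inv(H_dispR_from_camR)

# One possible policy (illustrative only): leave the left projector untouched
# and absorb the entire residual in the right projector's firmware correction.
def split_correction(M: np.ndarray):
    return np.eye(3), M
```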

Turning now to a second subset of alignment correction methods for right and left eye displays of a binocular augmented reality display device, fig. 9A and 9B schematically illustrate arrangements in which a user provides input to define at least a portion of the alignment correction. Thus, in fig. 9A is shown an optical device similar to that of fig. 3 and 4 but with the addition of a user input device 130, which may be a joystick, touch screen or any other suitable user input device, optionally implemented as an APP running on the mobile electronic device. As previously described, the method assumes the presence of a left camera 44L spatially associated with the left eye augmented reality display (projector 40L and outcoupling optical elements 42L), and corresponding elements for the right eye side of the device (not shown) (right camera spatially associated with the right eye augmented reality display).

A particular feature according to certain particularly preferred implementations of this aspect of the invention is that the alignment correction method comprises a first cross-registration process comprising:

i. obtaining at least one image of a scene sampled by a right camera,

ii. displaying, via the left eye augmented reality display, at least one alignment feature derived from at least one image sampled by the right camera,

iii. receiving input from a user indicative of an alignment offset between the at least one alignment feature and a corresponding direct view feature of the scene, and

iv. correcting the display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct view feature of the scene. This defines a transformation schematically represented by arrow 78 in fig. 9B.

More preferably, the alignment process further comprises an inverted cross-registration process, namely:

i. obtaining at least one image of a scene sampled by a left camera,

ii. displaying, via the right-eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the left camera,

iii. receiving input from a user indicating an alignment offset between the at least one alignment feature and a corresponding direct view feature of the scene, and

iv. correcting the display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct view feature of the scene. This defines a transformation schematically indicated by arrow 76 in fig. 9B.

The user input is then used to implement alignment correction for the augmented reality display device. With each camera rigidly mounted relative to the respective augmented reality display, as in the example described above, the alignment correction is achieved using the relative alignment data for the right camera relative to the right eye augmented reality display (arrow 74) and the relative alignment data for the left camera relative to the left eye augmented reality display (arrow 72). Such data may be obtained by a factory alignment process, such as described above with reference to fig. 5 and 6.

In a more general case, where the transformations 72 and 74 are unknown or may vary due to non-rigid (e.g., adjustable) mounting of the left/right display relative to the camera, the transformations 72 and 74 may be obtained by at least one additional registration process that receives user input for correcting alignment of at least one of the right-eye and left-eye augmented reality displays relative to a respective one of the right and left cameras. These registration processes may be performed in substantially the same manner as the cross-registration process described herein.

If all four transforms 72, 74, 76 and 78 are determined, there is some information redundancy, as in principle any three of these transforms are sufficient to determine the overall calibration matrix between the two displays. In fact, the use of such redundancy is advantageous to improve the accuracy of the alignment correction.
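To illustrate how the redundancy can be used, the left↔right display relationship can be estimated twice from the four arrows of fig. 9B and the two estimates blended. The sketch below assumes all four mappings are expressed as 3x3 homographies (a modelling assumption, with hypothetical names keyed to the arrow numbers) and uses a deliberately crude element-wise average; a real implementation might instead average the decomposed rotation/translation parameters.

```python
import numpy as np

def display_to_display(H72: np.ndarray, H74: np.ndarray,
                       H76: np.ndarray, H78: np.ndarray) -> np.ndarray:
    """Blend two independent estimates of the left-display <- right-display mapping.

    H72: left display  <- left camera      H74: right display <- right camera
    H76: right display <- left camera      H78: left display  <- right camera
    (hypothetical 3x3 homographies, named after the arrows in fig. 9B)
    """
    M1 = H78 @ np.linalg.inv(H74)   # path via the right camera
    M2 = H72 @ np.linalg.inv(H76)   # path via the left camera
    M1, M2 = M1 / M1[2, 2], M2 / M2[2, 2]   # normalize before blending
    return (M1 + M2) / 2.0
```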

During the alignment process, each projector is activated separately. The general sequence of operations according to the method will proceed as follows:

1) The user is instructed to look at a scene object located at the same nominal distance (apparent distance) as the virtual image. This process is most simply implemented using "distant" objects, to avoid parallax compensation issues, but parallax may also be corrected, as discussed below.

2) The processing system injects the image from the camera of one eye into the adjacent projector, so that the viewer sees the augmented image overlaid on the "real world". If the scene is not a "distant" scene, parallax compensation is applied to the projected image based on the estimated distance to the scene. If the camera axis and the projector axis are not accurately aligned (after parallax compensation), there will be a shift mismatch (offset) 57 between the images (fig. 9C).

3) The observer manually controls the position and rotation of the virtual image, moving the augmented reality image until it overlaps the "real world" image and offset 57 is cancelled (map 72).

4) This process is repeated for the second eye, generating map 74. At this point, calibration has been achieved between each camera and its adjacent projector.

5) The processing system injects the image from the camera of one eye (44L) into the opposite projector (40R), and the user is guided through the same alignment procedure for this image in order to determine map 76.

6) The same process is repeated for the opposite camera and projector, to generate map 78.

Now the two projectors and the two camera orientations are calibrated.

The image projected for this alignment process (the alignment feature) may be at least a portion of the sampled image. In this case, the user perceives a "double vision" effect wherever the superimposed images do not coincide, and adjusts the alignment until they are exactly superimposed.

Alternatively, the projected alignment feature image may include one or more location markers derived from the sample image by image processing and corresponding to features detected in the sample image. This may be an outline of the object, or a plurality of markers indicating "corner" features in the image. In this case, the user aligns these position markers with the corresponding features in the real-world view.

In the case where the above-described process is performed using a scene that is not distant, it is necessary to estimate the distance to the scene in order to perform parallax correction, based on the known distance between each camera and the center of the corresponding eye-box (EMB). The distance may be input by the user, or may be derived by the system from any combination of available sensors and/or image processing, as known in the art, depending on the details of the application. Non-limiting examples of how the distance may be derived include: employing a ranging sensor, performing SLAM processing on the images to derive a 3D model (as detailed further above), and sampling an image containing an object of known size.

Many projectors include optics that project the virtual image at a finite apparent distance. In such cases, the calibration is preferably performed while viewing a scene at a distance matching the apparent distance of the virtual image. For example, if the virtual image is focused at 2 meters, the calibration should preferably also be performed on a scene or object located at a distance of about two meters. The image injected from the camera into the projector is shifted according to the parallax between the camera and the projector for the specified distance and field center (their relative positions being known).
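The disparity shift applied to the injected image is the ordinary pinhole parallax term: a lateral offset b between the camera and the eye-box center, viewed at distance d, produces an image shift of roughly f·b/d pixels, where f is the focal length in pixels. The snippet below is a minimal worked example with assumed numbers, not values taken from the patent.

```python
def parallax_shift_pixels(baseline_m: float, distance_m: float,
                          focal_px: float) -> float:
    """Pixel shift for re-projecting a camera image at a finite distance.

    baseline_m: lateral offset between the camera and the display eye-box center.
    distance_m: estimated distance to the viewed object / virtual image plane.
    focal_px:   focal length expressed in pixels.
    """
    return focal_px * baseline_m / distance_m

# Assumed values: 32 mm camera-to-eye-box offset, virtual image at 2 m,
# 1400 px focal length -> shift of about 22 px.
print(parallax_shift_pixels(0.032, 2.0, 1400.0))
```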

It is important to note that the alignment process described herein is also applicable if the two projector/camera pairs are rigidly combined during the production process, i.e. there is no adjustable spacing for the IPD. In this case, as described above, the transformations 72 and 74 are typically pre-calibrated, and the transformations 76 and 78 are implemented only by user input.

In all cases where "stereo alignment correction" is referred to herein, stereo alignment correction is typically achieved by generating a calibration matrix that associates each eye with the real world or defines the relationship between the eyes.

An alternative approach to cross-alignment of the projectors of a binocular augmented reality device may be implemented without relying on the outward-looking cameras (which may or may not be present in the product). Instead, this third subset of alignment correction techniques employs a camera, separate from the augmented reality display device, that samples images from the right-eye display and the left-eye display, with the alignment correction then being derived from those images. Exemplary implementations of this alternative approach are presented below.

In general, according to this aspect of the invention, a method for deriving an alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device comprises the steps of:

a) positioning a camera having a field of view such that the camera field of view includes both a portion of the projected image from the left-eye display and a portion of the projected image from the right-eye display;

b) projecting, via each of the right-eye display and the left-eye display, at least a portion of a calibration image including at least one right field alignment feature and at least one left field alignment feature;

c) sampling an image with a camera;

d) identifying a right field alignment feature and a left field alignment feature within the image; and

e) deriving an alignment correction between the right-eye display and the left-eye display of the augmented reality display device from the positions of the right field alignment feature and the left field alignment feature within the image.
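
A rough end-to-end sketch of steps (c) to (e) follows; the template-matching detector and the expected-separation parameter are illustrative assumptions rather than the prescribed implementation:

```python
import numpy as np
import cv2

def locate_feature(image, template):
    """Locate an alignment feature by normalized cross-correlation and
    return its centre in pixel coordinates."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    h, w = template.shape[:2]
    return np.array([max_loc[0] + w / 2.0, max_loc[1] + h / 2.0])

def alignment_offset_px(sampled, left_template, right_template,
                        expected_separation_px):
    """Steps (c)-(e): find the left and right field alignment features in
    the sampled camera image and report how their measured separation
    deviates from the separation expected for perfectly aligned displays."""
    left_pos = locate_feature(sampled, left_template)
    right_pos = locate_feature(sampled, right_template)
    return (right_pos - left_pos) - np.asarray(expected_separation_px)
```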

One implementation of this method is shown schematically in fig. 10A. It should be noted that some of the light projected by the waveguide 42 toward the viewer's eye is reflected forward (i.e., outward, away from the user), for example by the outer surface of the waveguide closest to the eye. In the implementation shown here, it is this outwardly reflected light that is detected by the camera 80, which is located on the side of the augmented reality display device 40L, 42L opposite to the viewing side, so that the camera captures the outwardly reflected portion of the image illumination from each of the right-eye display and the left-eye display.

The system controller injects the image into the projector 40 and the projector 40 illuminates the eye through the waveguide 42, as indicated by the solid arrow. Some of the light is reflected in the opposite direction as indicated by the dotted arrow.

A camera of the portable device 80 receives at least a portion of the forward-reflected image and sends it to the system controller for processing. (The camera is shown here only schematically; it will obviously be oriented to face the projectors so as to capture a portion of the forward-reflected image illumination.) Alternatively, this processing may be performed in the portable device itself.

Although only a portion of the field is received by the camera 80, the image is designed so that it is possible to determine which portion of the image was received, as discussed further below with reference to FIG. 11D. From this portion, the processor derives the orientation of the camera relative to the forward-projected image.

Fig. 10B schematically shows two projectors 99L and 99R, indicating the projector orientation of the respective display for each eye. In 99L, light ray 100 is projected toward the viewer perpendicular to the face of the waveguide 99L, so the reflection 102 is directed in the opposite direction along the optical axis. In contrast, waveguide 99R illustrates an alternative geometry in which the optical axis of the projected image, indicated by output ray 104, is not perpendicular to the surface of waveguide 99R, so the reflected ray 106 is not opposite to 104. A calibration matrix should therefore be derived for the offset of 106 relative to 104, obtained by comparing the forward images (100 and 104) with the reflected images (102 and 106), either during projector production or as described below.

Image acquisition according to this method is performed simultaneously for the two projectors, as shown schematically in the plan view of fig. 11A. The dot-dash arrows indicate the forward-reflected images. The camera 80 receives different portions of the reflected images from the two projectors and derives the orientation of each field. By comparing these orientations, as described above, the relative orientation between the projectors can be derived and the alignment corrected electronically.
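
A hedged sketch of this comparison step: each captured portion of the field is registered to the known injected calibration image (ORB features and a RANSAC homography are an illustrative choice, not the mandated algorithm), and the two resulting mappings are compared to obtain the relative orientation:

```python
import numpy as np
import cv2

def field_homography(captured_crop, injected_image):
    """Register the captured portion of one projector's field against the
    injected calibration image; returns the homography mapping
    calibration-image pixels into pixels of the crop."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(injected_image, None)
    k2, d2 = orb.detectAndCompute(captured_crop, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H

# With each crop's offset folded back into full-camera coordinates, the two
# homographies can be compared, e.g. H_rel = H_right @ np.linalg.inv(H_left),
# to express the relative orientation of the two projected fields.
```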

The accuracy of the calibration can be improved if the camera 80 is placed farther away from the projectors 42. Where a hand-held camera cannot conveniently be held far from the device, imaging from a greater effective distance can be achieved by viewing the projectors through a mirror 57, as shown in FIG. 11B. This mirror-based geometry also allows the calibration technique to be implemented using the built-in forward-looking camera of the augmented reality display device itself, particularly in devices provided with a single central forward-looking camera.

The orientation of the camera 80 may be optimized by providing visual guidance cues to the user for properly positioning the camera during calibration. For example, if the camera 80 is a camera of a mobile device, such as a mobile phone, integrated with a screen, at least one indication to the user may be displayed via the screen to assist in properly positioning the camera, as shown in fig. 11C. Additionally or alternatively, for any handheld camera, at least one indication may be displayed to the user via one or both of the augmented reality displays to assist in properly positioning the camera.

FIG. 11D shows an example of an image that may be projected by the two displays for use in the calibration process; any other image may be used, and this one is presented herein only as a non-limiting example. The image has clear marks 90a and 90b, which serve as the left and right field alignment features, respectively. The right and left field alignment features may be parts of a continuous geometric pattern, or may be separate features, and are preferably distinguishable from one another. They preferably include features that are readily identified and processed by image processing techniques to derive a position and orientation. The image is projected after compensating for any geometric distortion introduced by the projector itself. It should be noted that only a portion of the image is captured by the camera 80 from each separate projector. With the camera on the "outside" of the projectors, the camera is positioned such that the sampled image includes the left field alignment feature viewed via the right-eye display and the right field alignment feature viewed via the left-eye display.

Fig. 11E schematically shows the image 100 received by the camera 80. The distance from the camera 80 to the glasses may be derived from parameters in the image relating to the glasses, such as the size 82 of the glasses. The reflections of the projected image appear in the waveguides 42R and 42L as 84R and 84L, respectively. The images in both reflections include the marks 90a and 90b. By measuring the angular distance 86 between the marks in the image, and taking into account the parallax caused by the known distance to the glasses, the actual misalignment between the projectors 42R and 42L can be determined. Rotational misalignment can also be obtained, as indicated by the tilt angle designated 88. This architecture also enables detection of the eye positions 60R and 60L, which further improves the projection alignment by taking into account distortion caused by the eye position within the projector eye-box.
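
A minimal sketch of the horizontal part of this calculation, under simplifying assumptions (the reflected images are treated as localised near the waveguides, separated laterally by the projector baseline, and a small-angle camera model is used):

```python
import math

def horizontal_misalignment_deg(mark_left_x_px, mark_right_x_px,
                                camera_focal_px, projector_baseline_m,
                                camera_to_glasses_m):
    """Horizontal angular misalignment between the two projectors, from the
    pixel separation of corresponding marks in reflections 84L and 84R,
    after removing the parallax expected at the measured camera-to-glasses
    distance."""
    measured_rad = math.atan2(mark_right_x_px - mark_left_x_px, camera_focal_px)
    expected_rad = math.atan2(projector_baseline_m, camera_to_glasses_m)
    return math.degrees(measured_rad - expected_rad)
```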

In an alternative set of implementations, the camera 80 is located on the viewing side of the augmented reality display device, i.e., the side that the user views through the display. In this case, the sampled image includes right field alignment features viewed via a right-eye display and left field alignment features viewed via a left-eye display. An example of this implementation is shown in fig. 11F.

It is important that the camera 80 be focused on the projected image. If a lens is placed in front of the projector 42, the virtual image 51 will be generated at a finite viewing distance (the apparent focal distance). This should be taken into account when deriving the disparity introduced at 84R and 84L.

In the example of fig. 11F, the projectors include lenses so that the image 51 is projected as virtual images 62L (from 42L) and 62R (from 42R) at the apparent focal distance 61. The two images should be brought into a precise overlapping relationship to achieve optimal alignment. The image acquired by the camera 80 will be equivalent to 84L and 84R (described with reference to fig. 11E), and the derivation of the offset between 62L and 62R takes into account the virtual image distance 61 (preset by the lens) and the distance 63 to the camera (again derived, for example, by identifying the size 82 of the device in the image).

As described above, the distance of the camera 80 from the display device may be determined by identifying, within the image, features associated with the display device, such as the width dimension 82. Ideally, in order to determine both the distance and the orientation of the camera relative to the display device, the processing system preferably identifies within the image features associated with the binocular augmented reality display device sufficient to define at least three, and most preferably four, non-collinear fiducial points (in the case of four points, non-coplanar). The features may be any features relating to the shape of the device, or any reference pattern formed on a surface of the device. Where the projected calibration image is presented at a defined depth of focus, features of the projected virtual image may in some cases also serve as fiducial points. The fiducial points are then processed to determine the position of the camera relative to those points, and hence relative to the projectors.
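
The pose determination from such fiducial points can be sketched with a standard perspective-n-point solver; the fiducial coordinates below are placeholders, not measurements of any real device:

```python
import numpy as np
import cv2

# Hypothetical 3D positions (metres) of four non-coplanar fiducial points on
# the display device, expressed in the device's own coordinate frame.
DEVICE_FIDUCIALS = np.array([[-0.07, 0.00, 0.000],
                             [ 0.07, 0.00, 0.000],
                             [ 0.00, 0.02, 0.000],
                             [ 0.00, 0.00, 0.015]], dtype=np.float64)

def camera_pose_from_fiducials(image_points_px, K):
    """Solve for the camera's position and orientation relative to the
    display device from the fiducials' detected pixel positions (4x2)."""
    ok, rvec, tvec = cv2.solvePnP(DEVICE_FIDUCIALS,
                                  np.asarray(image_points_px, np.float64),
                                  K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec  # Rodrigues rotation vector and translation (metres)
```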

An exemplary, non-limiting implementation of this process is depicted in FIG. 12. As in fig. 8 above, calibration may become necessary due to misalignment introduced by IPD adjustment (step 140), although the process is not limited to this case. At step 142, a calibration image or "field image" is "injected" for display via both the right-eye projector and the left-eye projector. An image is then sampled using the camera 80 (step 144), this image containing a portion of the illumination corresponding to the calibration image from each of the projectors, and preferably also imaging other features of the projectors or of the display device itself.

At step 146, the features of the display device are processed to determine the camera orientation relative to each projector. This provides sufficient information to allow the relative alignment of the projectors to be derived from the portions of the calibration image acquired via each display (step 148). Where the camera 80 is used outside the display with outwardly reflected illumination and the image projection axis is not perpendicular to the waveguide surface, the pre-measured reflection shift parameters are also employed in the alignment calculation (step 150). The alignment calculation is then used to generate a calibration matrix for updating the firmware of each projector (step 152).
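
The flow of steps 142 through 152 might be summarised as follows; every helper and method name here is a placeholder standing in for the operations described above, not an actual API:

```python
def recalibrate_after_ipd_change(left_projector, right_projector, camera,
                                 calibration_image, reflection_shift=None):
    """Steps 142-152: inject the calibration image into both projectors,
    sample it with the external camera, solve the camera pose from device
    features, derive the relative projector alignment and push updated
    calibration matrices to the projector firmware."""
    left_projector.inject(calibration_image)              # step 142
    right_projector.inject(calibration_image)
    frame = camera.capture()                              # step 144
    pose = solve_camera_pose_from_device_features(frame)  # step 146
    alignment = relative_projector_alignment(             # step 148
        frame, calibration_image, pose)
    if reflection_shift is not None:                      # step 150
        alignment = compensate_reflection_shift(alignment, reflection_shift)
    left_m, right_m = calibration_matrices(alignment)
    left_projector.update_firmware(left_m)                # step 152
    right_projector.update_firmware(right_m)
```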

The camera of the portable device 80 may also be used to assist the user during the mechanical IPD adjustment itself, before the calibration described above is performed. In this option, the user changes the distance between the projectors while the camera continuously sends images of the user's face and the display device to the processor. The processor compares the eye positions with the positions of the optical projectors (which optionally carry markings to facilitate detection of the projector positions) and generates an output to the user (typically an audio signal and/or a visual display) indicating how the relative positions should be further adjusted, or notifying the user when the optimal position for that user has been reached. The calibration procedure is then preferably performed as described herein.
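
A loose sketch of such a guidance loop, again with every helper name being a hypothetical placeholder:

```python
def guide_ipd_adjustment(camera, user_display, tolerance_px=3):
    """Compare the detected eye positions with the detected projector
    positions in successive camera frames and cue the user until the
    projector spacing matches the user's IPD."""
    while True:
        frame = camera.capture()
        (le, re) = detect_eye_centres(frame)       # (x, y) pairs
        (lp, rp) = detect_projector_marks(frame)
        error_px = (rp[0] - lp[0]) - (re[0] - le[0])
        if abs(error_px) <= tolerance_px:
            user_display.notify("IPD matched - proceed to calibration")
            return
        user_display.notify("narrow the spacing" if error_px > 0
                            else "widen the spacing")
```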

The invention also encompasses the following technical schemes.

Scheme 1. a method for obtaining an alignment correction between a left-eye display and a right-eye display of a binocular augmented reality display device, the method comprising the steps of:

(a) positioning a camera having a field of view such that the camera field of view includes both a portion of the projected image from the left-eye display and a portion of the projected image from the right-eye display;

(b) projecting, via each of the right-eye display and the left-eye display, at least a portion of a calibration image including at least one right field alignment feature and at least one left field alignment feature;

(c) sampling an image with the camera;

(d) identifying the right field alignment feature and the left field alignment feature within the image; and

(e) deriving an alignment correction between the right-eye display and the left-eye display of the augmented reality display device from the positions of the right field alignment feature and the left field alignment feature within the image.

Scheme 2. the method of scheme 1, wherein the camera is located on a viewing side of the augmented reality display device such that the image includes the right field alignment features viewed via the right-eye display and the left field alignment features viewed via the left-eye display.

Scheme 3. the method of scheme 2, wherein the projected calibration image is displayed with a focal length, and wherein the camera is focused at the focal length.

Scheme 4. the method of scheme 1, wherein the camera is located on the side of the augmented reality display device opposite to the viewing side, such that the camera captures an outwardly reflected portion of image illumination from each of the right-eye display and the left-eye display, and such that the image includes the left field alignment feature viewed via the right-eye display and the right field alignment feature viewed via the left-eye display.

Scheme 5. the method of scheme 4, wherein the camera is a handheld camera, the method further comprising: displaying at least one indication to a user via the right eye display and/or the left eye display to assist in correctly positioning the camera.

Scheme 6. the method of scheme 1, further comprising the steps of:

(a) identifying within the image features associated with the binocular augmented reality display device sufficient to define at least three fiducial points; and

(b) determining a position of the camera relative to the at least three fiducial points.

Scheme 7. the method of scheme 1, wherein the locating comprises: directing the camera toward a mirror such that the reflected field of view includes both a portion of the projected image from the left-eye display and a portion of the projected image from the right-eye display.

Scheme 8. the method of scheme 1, wherein the camera is a camera of a mobile device integrated with a screen, the method further comprising: displaying at least one indication to a user via the screen to assist in correctly positioning the camera.

Scheme 9. the method of scheme 1, further comprising: implementing an alignment correction for the augmented reality display device based on the derived alignment correction.

Scheme 10. a method for stereo alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device, the method comprising the steps of:

(a) providing an augmented reality device, the augmented reality device comprising:

(i) a right eye display unit comprising a first augmented reality display rigidly integrated with a forward looking first camera,

(ii) a left eye display unit comprising a second augmented reality display rigidly integrated with a forward looking second camera, and

(iii) a support structure interconnected between the right-eye display unit and the left-eye display unit;

(b) providing a first alignment mapping between the first camera and the first augmented reality display and a second alignment mapping between the second camera and the second augmented reality display;

(c) sampling at least one image from the first camera;

(d) sampling at least one image from the second camera;

(e) cooperatively processing images from the first camera and the second camera to obtain an inter-camera mapping indicative of a relative orientation between the first camera and the second camera;

(f) combining the inter-camera mapping with the first and second alignment mappings to obtain an inter-display alignment mapping that indicates a relative orientation of the first and second augmented reality displays; and

(g) implementing alignment correction for the augmented reality display device based on the inter-display alignment mapping.

Scheme 11. the method of scheme 10, wherein at least one image from the first camera and the second camera is sampled for a distant scene.

Scheme 12. the method of scheme 10, wherein the at least one image from each of the first and second cameras comprises a plurality of images, and wherein the cooperative processing comprises deriving a three-dimensional model of at least part of a scene contained in the plurality of images.

Scheme 13. a method for stereo alignment correction between a right-eye display and a left-eye display of a binocular augmented reality display device, the method comprising the steps of:

(a) providing an augmented reality device comprising a right eye augmented reality display, a left eye augmented reality display, a right camera spatially associated with the right eye augmented reality display, and a left camera spatially associated with the left eye augmented reality display;

(b) performing a first cross-registration process, the first cross-registration process comprising:

(i) obtaining at least one image of a scene sampled by the right camera,

(ii) displaying, via the left eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the right camera,

(iii) receiving input from a user indicating an alignment offset between the at least one alignment feature and a corresponding direct view feature of the scene, and

(iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct view feature of the scene;

(c) performing a second cross-registration process, the second cross-registration process comprising:

(i) obtaining at least one image of a scene sampled by the left camera,

(ii) displaying, via the right-eye augmented reality display, at least one alignment feature derived from the at least one image sampled by the left camera,

(iii) receiving input from the user indicating an alignment offset between the at least one alignment feature and a corresponding direct view feature of the scene, and

(iv) correcting a display position of the at least one alignment feature in accordance with the user input until the at least one alignment feature is aligned with the corresponding direct view feature of the scene; and

(d) implementing an alignment correction for the augmented reality display device based on the user input.

Scheme 14. the method of scheme 13, wherein the at least one alignment feature for each of the cross-registration processes is at least a portion of a sampled image.

Scheme 15. the method of scheme 13, wherein the at least one alignment feature for each of the cross-registration processes is a position marker corresponding to a feature detected in the sampled image.

Scheme 16. the method of scheme 13, further comprising obtaining an estimated distance to an object in the sampled images, the estimated distance being used in implementing the alignment correction.

Scheme 17. the method of scheme 13, wherein the right camera is rigidly mounted relative to the right-eye augmented reality display, and wherein the left camera is rigidly mounted relative to the left-eye augmented reality display, the alignment correction being implemented using relative alignment data of the right camera relative to the right-eye augmented reality display and relative alignment data of the left camera relative to the left-eye augmented reality display.

Scheme 18. the method of scheme 13, further comprising: performing at least one additional registration process to receive user input for correcting alignment of at least one of the right-eye augmented reality display and the left-eye augmented reality display relative to a respective one of the right camera and the left camera.

It will be appreciated that the above description is intended only as an example, and that many other embodiments are possible within the scope of the invention as defined in the appended claims.
