Separable distortion disparity determination

Document No.: 621635    Publication date: 2021-05-07

Reading note: This technique, Separable distortion disparity determination, was created by Sagi Katz on 2019-09-10. Its main content includes: Systems and methods for determining disparity between two images are disclosed. Such systems and methods include obtaining a first raw pixel image of a scene from a first viewpoint; obtaining a second raw image of the scene from a second viewpoint (e.g., separated from the first viewpoint in a camera baseline direction, such as horizontal or vertical); modifying the first and second raw pixel images using component-separated corrections to create respective first and second corrected pixel images that maintain pixel scene matching in the camera baseline direction from between the first and second raw pixel images to between the first and second corrected pixel images; determining pixel pairs in the camera baseline direction from corresponding pixels between the first and second corrected images; and determining a disparity match for each determined pixel pair from pixel positions in the first and second raw pixel images that correspond to respective pixel positions of the pixel pairs in the first and second corrected pixel images.

1. A separable distorted disparity determination system, comprising:

an electronic device comprising:

a frame; and

a first camera and a second camera supported by the frame, wherein the first camera has a first viewpoint and the second camera has a second viewpoint separated from the first viewpoint in a camera baseline direction;

an image display;

an image display driver coupled to the image display to control the image display;

a memory;

a processor coupled to the first camera, the second camera, the image display driver, and the memory; and

a program in the memory, wherein execution of the program by the processor configures the system to perform functions comprising:

obtaining a first raw pixel image of a scene captured with the first camera;

obtaining a second raw image of the scene captured with the second camera;

modifying the first and second raw pixel images using component-separated corrections that preserve pixel scene matching in the camera baseline direction from between the first and second raw pixel images to between the first and second corrected pixel images to create respective first and second corrected pixel images;

determining pairs of corresponding pixels between the first and second corrected images in the camera baseline direction; and

determining a disparity match for each determined pair of pixels from pixel positions in the first and second original pixel images that correspond to respective pixel positions of the determined pair of pixels in the first and second corrected pixel images.

2. The system of claim 1, wherein:

the first camera is a first visible light camera configured to capture the first raw image, the first raw image comprising a first matrix of pixels; and

the second camera is a second visible light camera configured to capture the second raw image, the second raw image including a second matrix of pixels.

3. The system of claim 1, wherein the first camera of the electronic device is a first visible light camera, the second camera of the electronic device is a second visible light camera, and the function of modifying includes the following functions:

creating a first separable distorted image from the first original image, creating a second separable distorted image from the second original image, maintaining pixel scene matching at least in the camera baseline direction, and removing distortion introduced by the respective lenses of the first and second visible light cameras.

4. The system of claim 3, wherein the function that creates the first and second separable distorted images applies a monotonic function f as follows:

(x_separable distortion, y_separable distortion) = (r_x · x_raw, r_y · y_raw)

wherein r_x = f(x_raw²); and

r_y = f(y_raw²);

where x is the pixel position in the horizontal direction and y is the pixel position in the vertical direction.

5. The system of claim 1, wherein the function of determining pixel pairs comprises the functions of:

extracting image disparities by correlating pixels in the first separable distorted image with the second separable distorted image to calculate separable distorted disparities for each of the correlated pixels.

6. The system of claim 5, wherein the function of determining a disparity match comprises the functions of:

determining respective raw pixel pair positions in the first and second raw images that correspond to the positions of the determined pixel pairs in the first and second corrected images;

determining a corrected distorted disparity using the respective original pixel pair locations; and

replacing the separable distorted disparity with the corrected distorted disparity.

7. The system of claim 6, wherein the functions further comprise functions to:

creating a depth map of the scene using the corrected distortion disparity.

8. The system of claim 7, wherein the depth map comprises a plurality of vertices based on the corrected distortion disparity, and wherein each vertex comprises one or more of a color attribute or a texture attribute.

9. The system of claim 7, wherein the functions of creating a depth map comprise the functions of:

calculating Z-position coordinates of each vertex of the depth map using the corrected distorted disparities.

10. The system of claim 9, wherein the functions further comprise functions to:

generating a three-dimensional scene using the depth map; and

presenting the three-dimensional scene on the image display.

11. A separable distortion disparity determination method, comprising the steps of:

obtaining a first original pixel image of a scene from a first viewpoint captured by a first camera of an electronic device;

obtaining a second original image of the scene from a second viewpoint captured by a second camera of the electronic device, the first viewpoint being separated from the second viewpoint in a camera baseline direction;

modifying the first and second raw pixel images using component-separated corrections that preserve pixel scene matching in the camera baseline direction from between the first and second raw pixel images to between the first and second corrected pixel images to create respective first and second corrected pixel images;

determining corresponding pairs of pixels between the first and second corrected images in the camera baseline direction; and

determining a disparity match for each determined pixel pair from pixel positions in the first and second original pixel images that correspond to respective pixel positions of the determined pixel pair in the first and second corrected pixel images.

12. The method of claim 11, wherein the first camera of the electronic device is a first visible light camera and the second camera of the electronic device is a second visible light camera, the method further comprising:

capturing the first raw image with the first visible light camera of the electronic device, the first raw image comprising a first matrix of pixels; and

capturing the second raw image with the second visible light camera of the electronic device, the second raw image comprising a second matrix of pixels.

13. The method of claim 11, wherein the first camera of the electronic device is a first visible light camera and the second camera of the electronic device is a second visible light camera, wherein the modifying step comprises:

creating a first separable distorted image from the first original image and a second separable distorted image from the second original image that maintain pixel scene matching at least in the camera baseline direction and remove distortion introduced by the respective lenses of the first and second visible light cameras.

14. The method of claim 13, wherein the creating step comprises applying a monotonic function f to the first and second raw images as follows:

(x_separable distortion, y_separable distortion) = (r_x · x_raw, r_y · y_raw)

wherein r_x = f(x_raw²); and

r_y = f(y_raw²);

where x is the pixel position in the horizontal direction and y is the pixel position in the vertical direction.

15. The method of claim 13, wherein the step of determining pixel pairs comprises:

extracting image disparities by correlating pixels in the first separable distorted image with the second separable distorted image to calculate separable distorted disparities for each of the correlated pixels.

16. The method of claim 15, wherein the step of determining a disparity match comprises:

determining respective raw pixel pair positions in the first and second raw images that correspond to the determined positions of the pixel pairs in the first and second corrected images;

determining a corrected distorted disparity using the corresponding original pixel pair locations; and

replacing the separable distorted disparity with the corrected distorted disparity.

17. The method of claim 16, further comprising:

creating a depth map of the scene using the corrected distortion disparity.

18. The method of claim 17, wherein the depth map comprises a plurality of vertices based on the corrected distortion disparity, and wherein each vertex comprises one or more of a color attribute or a texture attribute.

19. The method of claim 17, wherein the creating step comprises:

calculating Z-position coordinates of each vertex of the depth map using the corrected distorted disparity.

20. The method of claim 17, further comprising:

generating a three-dimensional scene using the depth map; and

presenting the three-dimensional scene on a display of the electronic device or a remote display of a remote portable device coupled to the electronic device.

Technical Field

The present subject matter relates to electronic devices (e.g., eye-worn devices) and mobile devices and techniques to determine disparity (e.g., for creating and presenting three-dimensional images).

Background

Electronic devices, including wearable devices such as portable eye-worn devices (e.g., smart glasses, headwear, and head-worn devices), mobile devices (e.g., tablet computers, smart phones, and notebook computers), and personal computers currently on the market all integrate an image display and a camera.

The wearable device may include a plurality of cameras for collecting image information from a scene. The lenses of one or more cameras can cause image distortion. This distortion interferes with the ability to accurately reproduce the scene on the display. There is a need for methods and systems for accurately presenting images.

Brief Description of Drawings

The drawings depict one or more embodiments by way of example only and not by way of limitation. In the drawings, like reference characters designate the same or similar elements.

Fig. 1A is a right side view of an example hardware configuration of an eye-worn device for use in a separable distorted disparity determination system;

FIG. 1B is a cross-sectional top view of the right block of the eyewear shown in FIG. 1A showing a right visible camera and a circuit board;

FIG. 1C is a left side view of an example hardware configuration of the eye-worn device shown in FIG. 1A, showing a left visible light camera;

FIG. 1D is a cross-sectional top view of the left block of the eye-worn device shown in FIG. 1C, showing a left visible light camera and a circuit board;

fig. 2A and 2B are rear views of an example hardware configuration of an eye-worn device for use in a separable distorted disparity determination system, including two different types of image displays;

FIG. 3 shows an example of visible light captured by a left visible light camera as a left raw image and an example of visible light captured by a right visible light camera as a right raw image;

fig. 4 is a functional block diagram of an exemplary separable distortion disparity determination system comprising an eye-worn device, a mobile device, and a server system connected via various networks;

fig. 5 illustrates an example of a hardware configuration of a mobile device of the separable distorted disparity determining system of fig. 4;

FIG. 6 is a flow diagram of an example method for separable distorted disparity determination, generation, and presentation of a three-dimensional image with a corrected image;

FIG. 7 is a flowchart of exemplary steps for determining disparity matching in the exemplary method of FIG. 6;

FIGS. 8A and 8B are representative illustrations of a scene raw image pair (FIG. 8A) and a corrected image pair (FIG. 8B) generated during the method of FIG. 6;

FIG. 9 is a representative illustration of determining a disparity match between an original image pair and a separable distorted image pair during the method of FIG. 6; and

fig. 10A and 10B depict examples of an original image and a corresponding corrected image, respectively.

Detailed Description

In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. It will be apparent, however, to one skilled in the art that the present teachings may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present invention.

The term "coupled" or "connected" as used herein refers to any logical, optical, physical, or electrical connection, etc., through which an electrical or magnetic signal generated or provided by one system element is transferred to another coupled or connected element. Unless otherwise specified, coupled or connected elements or devices are not necessarily directly connected to each other and may be separated by intermediate components, elements, or propagation media that may modify, manipulate, or carry electrical signals. The term "on …" means directly supported by an element or indirectly supported by the element through another element integrated into or supported by the element.

For purposes of illustration and discussion, the orientation of the eye-worn device, associated components, and any complete device incorporating a three-dimensional camera, as shown in any of the figures, is given by way of example only. In operation for separable distortion disparity determination in images, the eye-worn device may be oriented in any other direction suitable for the particular application of the eye-worn device, such as up, down, sideways, or any other direction. Furthermore, within the scope of use herein, any directional terminology, such as front, back, inward, outward, facing, left, right, lateral, longitudinal, upward, downward, upper, lower, top, bottom, side, horizontal, vertical, and diagonal, is used by way of example only and is not limited to the direction or orientation of any three-dimensional camera head or a three-dimensional camera head component constructed as otherwise described herein.

Additional objects, advantages and novel features of the example will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by production or operation of the example. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims.

Reference will now be made in detail to the examples illustrated in the accompanying drawings and discussed below.

Fig. 1A is a right side view of an example hardware configuration of an eye-worn device 100 for use in a separable distorted disparity determination system, showing a right visible light camera 114B for collecting image information. As described further below, in a separable distorted disparity determination system, two cameras capture image information of a scene from two separate viewpoints. The two captured images are modified to generate two corrected images, pairs of pixels between the two corrected images are determined in the camera baseline direction, and a disparity match is determined for each pair of pixels.

The eye-worn device 100 includes a right optical assembly 180B having an image display to present an image such as a depth image. As shown in fig. 1A-B, the eye-worn device 100 includes a right visible light camera 114B. The eye-worn device 100 may include a plurality of visible light cameras 114A-B that form a passive type of three-dimensional camera, such as a stereo camera, with the right visible light camera 114B located on the right block 110B. As shown in fig. 1C-D, the eye-worn device 100 also includes a left visible light camera 114A.

The left and right visible light cameras 114A-B are sensitive to visible range wavelengths. Each of the visible light cameras 114A-B has a different forward facing field of view that overlaps to enable the generation of a three dimensional depth image; for example, the right visible light camera 114B has the right field of view 111B. In general, a "field of view" is the portion of a scene that is visible to a camera at a particular position and orientation in space. The fields of view 111A and 111B have an overlapping field of view 813. Objects or object features outside the fields of view 111A-B are not recorded in the original image (e.g., photograph or picture) when the visible light camera captures the image. The field of view describes the range of angles over which the image sensor of the visible light cameras 114A-B receives electromagnetic radiation of a given scene in a captured image of that scene. The field of view may be expressed as the angular size of the viewing cone, i.e., the viewing angle. The viewing angle can be measured horizontally, vertically, or diagonally.

In an example, the field of view of the visible light cameras 114A-B is between 15° and 30°, e.g., 24°, with a resolution of 480 x 480 pixels. The "field angle" describes the range of angles that the lens of the visible light cameras 114A-B or infrared camera 220 (see FIG. 2A) can effectively image. Typically, the image circle produced by the camera lens is large enough to completely cover the film or sensor of the camera, possibly including some vignetting towards the edges. If the camera lens's field angle is not filled by the sensor, the image circle will be visible, typically with strong vignetting towards the edges, and the effective viewing angle will be limited to the field angle.

Examples of such visible light cameras 114A-B include a high resolution Complementary Metal Oxide Semiconductor (CMOS) image sensor and a Video Graphics Array (VGA) camera, such as 640p (e.g., 640 x 480 pixels, for a total of 300,000 pixels), 720p, or 1080p. The term "overlap" as used herein when referring to a field of view means that the pixel matrices in the generated original image overlap by 30% or more. The term "substantially overlapping" as used herein when referring to a field of view means that the pixel matrices in the generated original image or infrared image of the scene overlap by 50% or more.

The eye-worn device 100 can capture image sensor data from the visible light cameras 114A-B digitized by the image processor as well as geo-location data for storage in memory. The left and right raw images captured by the respective visible light cameras 114A-B are in a two-dimensional spatial domain, including a matrix of pixels on a two-dimensional coordinate system that includes an X-axis for horizontal position and a Y-axis for vertical position. Each pixel includes a color attribute (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value) and a position attribute (e.g., an X position coordinate and a Y position coordinate).

To provide stereo vision, an image processor (element 912 of FIG. 4) may be coupled to the visible light cameras 114A-B to receive image information for digital processing and time stamps for capturing images of a scene. Image processor 912 includes circuitry for receiving signals from visible light cameras 114A-B and processing those signals from visible light cameras 114 into a format suitable for storage in memory. The time stamp may be added by the image processor or other processor that controls the operation of the visible light cameras 114A-B. The visible light cameras 114A-B allow the three-dimensional cameras to simulate the vision of both eyes of a person. The three-dimensional camera provides the ability to render a three-dimensional image based on two captured images from the visible light cameras 114A-B having the same time stamp. Such three-dimensional images allow for an immersive realistic experience, for example, for virtual reality or video games.

For stereo vision, a pair of raw red, green, and blue (RGB) images of a scene (one image for each of the left and right visible-light cameras 114A-B) are captured at a given moment in time. When processing raw image pairs captured from the forward facing left and right fields of view 111A-B of the left and right visible light cameras 114A-B (e.g., by an image processor), a depth image is generated that a user can perceive on an optical assembly 180A-B (e.g., of a mobile device) or other image display. The generated depth image may include a matrix of vertices in a three-dimensional position coordinate system including an X-axis for horizontal position (e.g., length), a Y-axis for vertical position (e.g., height), and a Z-axis for depth (e.g., distance) in a three-dimensional spatial domain. Each vertex includes a color attribute (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value), a position attribute (e.g., an X position coordinate, a Y position coordinate, and a Z position coordinate), a texture attribute, and/or a reflectivity attribute. The texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of colors or intensities in the vertex region of the depth image.
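For illustration only, the following Python sketch models the vertex layout described above; the `Vertex` class and its field names are hypothetical and not part of the patent disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Vertex:
    """One depth-image vertex in the three-dimensional position coordinate system."""
    x: float                               # horizontal position (length)
    y: float                               # vertical position (height)
    z: float                               # depth (distance), derived from disparity
    rgb: Tuple[int, int, int] = (0, 0, 0)  # red, green, blue pixel light values
    texture: float = 0.0                   # e.g., a measure of local color/intensity arrangement

# A depth image is then a matrix of such vertices, e.g., 480 rows x 640 columns.
depth_image: List[List[Vertex]] = [
    [Vertex(x=float(col), y=float(row), z=0.0) for col in range(640)]
    for row in range(480)
]
```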

Typically, the perception of depth comes from the disparity of a given 3D point in the left and right raw images captured by the visible light cameras 114A-B. Disparity is the difference in image location of the same 3D point when projected from the viewpoints of the visible light cameras 114A-B (d = x_left − x_right). For visible light cameras 114A-B with parallel optical axes, focal length f, baseline b, and corresponding image points (x_left, y_left) and (x_right, y_right), the position of the 3D point (its Z-axis position coordinate) may be determined from the disparity using triangulation. In general, the depth of a 3D point is inversely proportional to disparity. Various other techniques may also be used. The generation of the three-dimensional depth image is explained in more detail below.
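As a concrete illustration of the triangulation relationship above (depth inversely proportional to disparity), the following minimal sketch assumes rectified cameras with parallel optical axes; the function and parameter names are illustrative, not taken from the disclosure.

```python
def depth_from_disparity(x_left: float, x_right: float,
                         focal_length_px: float, baseline_m: float) -> float:
    """Depth (Z position coordinate) of a 3D point from its horizontal disparity.

    Assumes parallel optical axes and row-aligned images, so the disparity is
    purely horizontal: d = x_left - x_right (in pixels). Depth is inversely
    proportional to disparity: Z = f * b / d.
    """
    disparity = x_left - x_right
    if disparity <= 0.0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return focal_length_px * baseline_m / disparity

# Example: f = 800 px, b = 0.06 m, d = 12 px  ->  Z = 4.0 m
print(depth_from_disparity(x_left=412.0, x_right=400.0,
                           focal_length_px=800.0, baseline_m=0.06))
```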

In an example, the separable parallax distortion determination system includes an eye-mounted device 100. The eyewear 100 includes a frame 105, a left eyewear leg 110A extending from a left side 170A of the frame 105, and a right eyewear leg 110B extending from a right side 170B of the frame 105. The eye-worn device 100 also includes two cameras. The two cameras may include at least two visible light cameras having overlapping fields of view. In one example, the two cameras include a left visible light camera 114A, the left visible light camera 114A having a left field of view 111A, connected to the frame 105 or the left eye-mounted device leg 110A to capture a left image of the scene. The eye-worn device 100 also includes a right visible light camera 114B, the right visible light camera 114B connected to the frame 105 or the right eye-worn device leg 110B, having a right field of view 111B to capture a right image of a scene that partially overlaps the left image (e.g., simultaneously with the left visible light camera 114A).

The separable parallax distortion determination system also includes a computing device, such as a host (e.g., mobile device 990 of fig. 4) coupled to the eye-worn device 100 over a network. The separable parallax distortion determination system further comprises an image display (optical components 180A-B of the eye-worn device; image display 1080 of the mobile device 990 of fig. 5) for rendering (e.g., displaying) a three-dimensional depth image. The separable parallax distortion determination system further comprises an image display driver (element 942 of the eye-worn device 100 of fig. 4; element 1090 of the mobile device 990 of fig. 5) coupled to the image display (optical components 180A-B of the eye-worn device; image display 1080 of the mobile device 990 of fig. 5) to control the image display to render the depth image.

The separable parallax distortion determination system further comprises a user input device for receiving a two-dimensional input selection by a user. Examples of user input devices include touch sensors (element 991 of fig. 4 for the eye-worn device 100), touch screen displays (element 1091 of fig. 5 for the mobile device 990 of fig. 5), and computer mice for personal or notebook computers. The separable parallax distortion determination system further comprises a processor (element 932 of the eye-worn device 100 of fig. 4; element 1030 of the mobile device 990 of fig. 5) coupled to the eye-worn device 100 and the three-dimensional camera. The separable disparity distortion determination system also includes a processor-accessible memory (element 934 of the eye-worn device 100 of fig. 4; elements 1040A-B of the mobile device 990 of fig. 4), and a separable distortion disparity determination program (element 945 of the eye-worn device 100 of fig. 4; element 945 of the mobile device 990 of fig. 4) in the memory, for example, in the eye-worn device 100 itself, the mobile device (element 990 of fig. 4), or another portion of the separable disparity distortion determination system (e.g., the server system 998 of fig. 4). Execution of the program (element 945 of fig. 4) by the processor (element 932 of fig. 4) configures the eye-worn device 100 to generate a depth image 961 by the three-dimensional camera. The depth image includes a vertex matrix. Each vertex represents a pixel in the three-dimensional scene. Each vertex has a position attribute. The position attribute of each vertex is based on a three-dimensional position coordinate system including an X position coordinate on an X axis for a horizontal position, a Y position coordinate on a Y axis for a vertical position, and a Z position coordinate on a Z axis for a depth.

The mobile device (element 990 of fig. 4) of the separable distortion disparity determination system is configured, by the processor (element 1030 of fig. 5) executing the separable distortion disparity determination program (element 945 of fig. 4), to perform the functions described herein.

Fig. 1B is a cross-sectional top view of the right block 110B of the eye-worn device 100 of fig. 1A, showing the right visible light camera 114B and the circuit board of the camera system. Fig. 1C is a left side view of an example hardware configuration of the eye-mounted device 100 of fig. 1A, showing a left visible light camera 114A of the camera system. Fig. 1D is a cross-sectional top view of the left block 110A of the eye-worn device of fig. 1C depicting the left visible light camera 114A of the three-dimensional camera and the circuit board. The structure and layout of the left visible light camera 114A is substantially similar to the right visible light camera 114B, except that it is connected and coupled to the left side 170A. As shown in the example of fig. 1B, the eye-worn device 100 includes a right visible light camera 114B and a circuit board, which may be a flexible Printed Circuit Board (PCB) 140B. The right hinge 226B connects the right block 110B to the right eye-worn device leg 125B of the eye-worn device 100. In some examples, the right visible light camera 114B, the flexible PCB 140B, or other components of electrical connectors or contacts may be located on the right eye-worn device leg 125B or the right hinge 226B.

The right block 110B includes a block body 211 and a block cap, which is omitted in the cross section of fig. 1B. Disposed within the right block 110B are various interconnected circuit boards, such as PCBs or flexible PCBs, including controller circuitry for the right visible light camera 114B, a microphone, low power wireless circuitry (e.g., for wireless short-range network communication via Bluetooth™), and high-speed wireless circuitry (e.g., for wireless local area network communication via WiFi).

The right visible light camera 114B is coupled to or disposed on the flexible PCB 240 and is covered by a visible light camera lens cover that is aligned through an opening formed in the frame 105. For example, the right edge 107B of the frame 105 is connected to the right block 110B and includes an opening for a visible light camera lens cover. The frame 105 includes a front facing side configured to face outwardly away from the user's eyes. An opening for a visible light camera lens cover is formed on and through the front facing side. In this example, the right visible light camera 114B has an outward facing field of view 111B that has a line of sight or perspective of the right eye of the user of the eye-worn device 100. The visible-light camera lens cover may also be adhered to an outward-facing surface of the right block 110B, in which an opening having an outward-facing angle of view but in a different outward direction is formed. The coupling may also be an indirect coupling via intermediate components.

The left (first) visible light camera 114A is connected to the left image display of the left optical assembly 180A to capture a left eye viewing scene observed in the left raw image by the wearer of the eye-mounted device 100. The right (second) visible light camera 114B is connected to the right image display of the right optical assembly 180B to capture a right eye viewing scene observed by the wearer of the eye-worn device 100 in the right raw image. The left original image and the right original image partially overlap to render a three-dimensional observable space of the generated depth image.

A flexible PCB 140B is disposed within right block 110B and is coupled to one or more other components in right block 110B. Although shown as being formed on the circuit board of the right block 110B, the right visible light camera 114B may be formed on the circuit board of the left block 110A, the eyewear legs 125A-B, or the frame 105.

Fig. 2A-B are rear views of an example hardware configuration of the eye-worn device 100, including two different types of image displays. The eye-worn device 100 has a form configured to be worn by a user, which in this example is eyeglasses. The eye-worn device 100 may take other forms and may incorporate other types of frames, such as a headset, earphones, or helmet.

In the eyeglass example, the eyewear 100 comprises a frame 105, the frame 105 comprising a left edge 107A, the left edge 107A being connected to a right edge 107B via a nose bridge 106 adapted to the nose of the user. The left and right edges 107A-B include respective apertures 175A-B that retain respective optical elements 180A-B, such as lenses and display devices. As used herein, the term lens refers to a transparent or translucent glass or plastic cover sheet having curved and/or flat surfaces that cause light to converge/diverge or that cause little or no convergence or divergence.

Although shown as having two optical elements 180A-B, the eye-worn device 100 may include other arrangements, such as a single optical element or may not include any optical elements 180A-B, depending on the application or intended user of the eye-worn device 100. As further shown, the eye-worn device 100 includes a left block 110A adjacent a left side 170A of the frame 105 and a right block 110B adjacent a right side 170B of the frame 105. Blocks 110A-B may be integrated into frame 105 on respective sides 170A-B (as shown), or implemented as separate components attached to frame 105 on respective sides 170A-B. Alternatively, blocks 110A-B may be integrated into an eye-worn device leg (not shown) attached to frame 105.

In one example, the image display of optical assemblies 180A-B comprises an integrated image display. As shown in FIG. 2A, the optical assemblies 180A-B include a suitable display matrix 170, such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, or any other such display. The optical assemblies 180A-B also include one or more optical layers 176, and the optical layers 176 may include lenses, optical coatings, prisms, mirrors, waveguides, light bars, and other optical components in any combination. The optical layers 176A-N may include prisms of suitable size and configuration and including a first surface for receiving light from the display matrix and a second surface for emitting light to the eyes of a user. The prisms of the optical layers 176A-N extend over all or at least a portion of the respective apertures 175A-B formed in the left and right edges 107A-B to allow a user to see the second surfaces of the prisms when the user's eyes are viewed through the corresponding left and right edges 107A-B. The first surfaces of the prisms of the optical layers 176A-N face upward from the frame 105, and the display matrix is positioned over the prisms such that photons and light emitted by the display matrix strike the first surfaces. The prisms are sized and shaped such that light is refracted within the prisms and directed by the second surfaces of the prisms of the optical layers 176A-176N toward the user's eyes. In this regard, the second surfaces of the prisms of the optical layers 176A-N may be convex to direct light toward the center of the eye. The prism may optionally be sized and shaped to magnify the image projected by the display matrix 170 and the light passes through the prism such that the image viewed from the second surface is larger in one or more dimensions than the image emitted from the display matrix 170.

In another example, the image display device of optical assemblies 180A-B comprises a projected image display as shown in FIG. 2B. The optical assemblies 180A-B include a laser projector 150, which is a three-color laser projector using a scanning mirror or galvanometer. During operation, a light source, such as a laser projector 150, is disposed in or on one of the eye-worn device legs 125A-B of the eye-worn device 100. The optical assemblies 180A-B include one or more light bars 155A-N spaced across the width of the lens of the optical assemblies 180A-B or across the depth of the lens between the front and back surfaces of the lens.

As the photons projected by the laser projector 150 traverse the lenses of the optical assemblies 180A-B, the photons encounter the light bars 155A-N. When a particular photon encounters a particular light bar, the photon is either directed toward the user's eye or passes to the next light bar. The combination of laser projector 150 modulation and light bar modulation may control specific photons or light. In an example, the processor controls the light bars 155A-N by a mechanical, acoustic, or electromagnetic initiation signal. Although shown as having two optical components 180A-B, the eye-worn device 100 can include other arrangements, such as single or three optical components, or the optical components 180A-B can take different arrangements depending on the application or intended user of the eye-worn device 100.

As further shown in fig. 2A-B, the eye-worn device 100 includes a left block 110A adjacent a left side 170A of the frame 105 and a right block 110B adjacent a right side 170B of the frame 105. Blocks 110A-B may be integrated into frame 105 on respective sides 170A-B (as shown), or implemented as separate components attached to frame 105 on respective sides 170A-B. Alternatively, blocks 110A-B may be integrated into eye-worn device legs 125A-B attached to frame 105.

In one example, the image display includes a first (left side) image display and a second (right side) image display. The eye-worn device 100 includes first and second apertures 175A-B that hold respective first and second optical assemblies 180A-B. The first optical assembly 180A includes a first image display (e.g., the display matrix 170A of fig. 2A, or the light bars 155A-N' of fig. 2B and the projector 150A). The second optical assembly 180B includes a second image display (e.g., the display matrix 170B of FIG. 2A, or the light bars 155A-N "of FIG. 2B and the projector 150B).

Fig. 3 depicts an example of visible light captured with the left visible light camera 114A and visible light captured with the right visible light camera 114B. Visible light is captured by the left visible light camera 114A with a left visible light camera field of view 111A as a left raw image 858A (fig. 4). Visible light is captured by right visible light camera 114B, which has right visible light camera field of view 111B, as right raw image 858B (FIG. 4). As described in more detail below, a three-dimensional depth image of the three-dimensional scene 715 is generated based on processing of the left raw image 858A (fig. 4) and the right raw image 858B (fig. 4).

Fig. 4 is a high-level functional block diagram of an exemplary separable distortion disparity determination system 900, the system 900 including a wearable device (e.g., the eye-worn device 100), a mobile device 990, and a server system 998 connected via various networks. The eye-worn device 100 includes a three-dimensional camera, such as at least one of the visible light cameras 114A-B and a depth sensor 213, shown as an infrared emitter 215 and an infrared camera 220. Alternatively, the three-dimensional camera may include at least two visible light cameras 114A-B (one associated with the left side 170A and one associated with the right side 170B). The three-dimensional camera generates an initial depth image (not shown), which is a rendered three-dimensional (3D) model that is a texture-mapped image of the red, green, and blue (RGB) imaged scene.

The mobile device 990 may be a smartphone, tablet, laptop, access point, or any other such device capable of connecting with the eye-worn device 100 using both a low-power wireless connection 925 and a high-speed wireless connection 937. The mobile device 990 is connected to a server system 998 and a network 995. The network 995 may include any combination of wired and wireless connections.

The eye-worn device 100 also includes two image displays (one associated with the left side 170A and one associated with the right side 170B) of the optical assemblies 180A-B. The eye-worn device 100 also includes an image display driver 942, an image processor 912, low power circuitry 920, and high speed circuitry 930. The image displays of the optical assemblies 180A-B are used to present images, such as a depth image 961. The image display driver 942 is coupled to the image displays of the optical assemblies 180A-B to control them to present images, such as the depth image 961. The eye-worn device 100 also includes a user input device 991 (e.g., a touch sensor) for receiving two-dimensional input selections from a user.

The components of the eye-worn device 100 shown in fig. 4 are located on one or more circuit boards, such as a PCB or flexible PCB located in an edge or in a leg of the eye-worn device. Alternatively or additionally, the depicted components may be located in a block, frame, hinge, or nosepiece of the eyewear 100. The left and right visible light cameras 114A-B may include digital camera elements, such as Complementary Metal Oxide Semiconductor (CMOS) image sensors, charge-coupled devices, lenses, or any other corresponding visible light or light capturing elements that may be used to capture data, including images of a scene with unknown objects.

The eye-worn device 100 includes a memory 934 that includes a separable distortion disparity determination program 945, which performs a subset or all of the functions described herein for determining a disparity match between two separable distorted images. A flowchart outlining the functions that may be performed in the separable distortion disparity determination program 945 is shown in figs. 6 and 7. As shown, the memory 934 also includes a left raw image 858A captured by the left visible light camera 114A, a right raw image 858B captured by the right visible light camera 114B, a left separable distortion corrected image 808A corresponding to the left raw image, and a right separable distortion corrected image 808B corresponding to the right raw image.

As shown in fig. 4, the high-speed circuitry 930 includes a high-speed processor 932, memory 934, and high-speed radio circuitry 936. In this example, an image display driver 942 is coupled to the high speed circuitry 930 and operated by the high speed processor 932 to drive the left and right image displays of the optical assemblies 180A-B. The high-speed processor 932 may be any processor capable of managing the high-speed communication and operation of any general-purpose computing system required for the eye-worn device 100. The high-speed processor 932 includes the processing resources needed to manage high-speed data transfers over a high-speed wireless connection 937 to a Wireless Local Area Network (WLAN) using high-speed wireless circuitry 936. In some embodiments, the high-speed processor 932 executes an operating system, such as the LINUX operating system of the eye-worn device 100 or other such operating system, and the operating system is stored in the memory 934 for execution. The high-speed processor 932 executing the software architecture of the eye-mounted device 100 is used to manage data transfer to the high-speed radio circuit 936, among any other responsibilities. In certain embodiments, the high-speed wireless circuitry 936 is configured to implement an Institute of Electrical and Electronics Engineers (IEEE)802.11 communication standard, also referred to herein as Wi-Fi. In other embodiments, other high-speed communication standards may be implemented by the high-speed wireless circuitry 936.

The low-power wireless circuitry 924 and the high-speed wireless circuitry 936 of the eye-worn device 100 may include short-range transceivers (Bluetooth™) and wireless wide-area, local-area, or wide-area network transceivers (e.g., cellular or WiFi). The mobile device 990, which includes a transceiver that communicates via the low-power wireless connection 925 and the high-speed wireless connection 937, may be implemented using details of the architecture of the eye-worn device 100, as may the other elements of the network 995.

Memory 934 comprises any memory device capable of storing various data and applications including, among other things, camera data generated by left and right visible light cameras 114A-B, infrared cameras 220, and image processor 912, as well as images generated for display on the image displays of optical assemblies 180A-B by image display driver 942. Although the memory 934 is shown as being integrated with the high-speed circuitry 930, in other embodiments, the memory 934 may be a separate element of the eye-mounted device 100. In some such embodiments, the routing lines of circuitry may provide connections from the image processor 912 or the low power processor 922 to the memory 934 through a chip that includes a high speed processor 932. In other embodiments, the high speed processor 932 may manage addressing of the memory 934 such that the low power processor 922 will boot the high speed processor 932 anytime a read or write operation involving the memory 934 is required.

As shown in fig. 4, the processor 932 of the eye-worn device 100 can be coupled to the camera system (visible light cameras 114A-B), the image display driver 942, the user input device 991, and the memory 934. As shown in fig. 5, the processor 1030 of the mobile device 990 may be coupled to the camera system 1070, the image display driver 1090, the user input device 1091, and the memory 1040A. The eye-worn device 100 can perform all or a subset of the functions described below as a result of the execution of the separable distortion disparity determination program 945 in the memory 934 by the processor 932 of the eye-worn device 100. The mobile device 990 can perform all or a subset of the functions described below as a result of the execution of the separable distortion disparity determination program 945 in the memory 1040A by the processor 1030 of the mobile device 990. In the separable distortion disparity determination system 900, the functions may be divided such that the eye-worn device 100 captures the images but the mobile device 990 performs the remainder of the image processing.

The server system 998 can be one or more computing devices that are part of a service or network computing system, including for example, a processor, memory, and a network communication interface to communicate with the mobile device 990 and the eye-mounted device 100 over a network 995. The eye-worn device 100 is connected to a host. For example, the eye-worn device 100 is paired with the mobile device 990 via a high-speed wireless connection 937, or connected to a server system 998 via a network 995.

The output components of the eye-worn device 100 include visual components, such as left and right image displays (e.g., displays such as Liquid Crystal Displays (LCDs), Plasma Display Panels (PDPs), Light Emitting Diode (LED) displays, projectors, or waveguides) of the optical assemblies 180A-B shown in fig. 2A-B. The image display of the optical assemblies 180A-B is driven by an image display driver 942. The output components of the eye-worn device 100 also include acoustic components (e.g., speakers), haptic components (e.g., vibration motors), other signal generators, and the like. The input components of the eye-worn device 100, the mobile device 990, and the server system 998 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen that provides touch location and force or gesture of touch, or other tactile input components), audio input components (e.g., a microphone), and so forth.

The eye-worn device 100 may optionally include additional peripheral elements. Such peripheral elements may include biometric sensors, additional sensors, or display elements integrated with the eye-worn device 100. For example, a peripheral element may include any I/O component, including an output component, a motion component, a position component, or any other such element described herein.

For example, the biometric components include a biometric sensor for detecting expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measuring biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition), and the like. The motion components include acceleration sensor components (e.g., accelerometers), gravity sensor components, rotation sensor components (e.g., gyroscopes), and the like. The position components include a location sensor component (e.g., a Global Positioning System (GPS) receiver component) that generates location coordinates, a WiFi or Bluetooth™ transceiver for generating positioning system coordinates, an altitude sensor component (e.g., an altimeter or barometer for detecting barometric pressure from which altitude may be derived), a direction sensor component (e.g., a magnetometer), and the like. Such positioning system coordinates may also be received from the mobile device 990 over the wireless connections 925 and 937 via the low-power wireless circuitry 924 or the high-speed wireless circuitry 936.

Fig. 5 is a high-level functional block diagram of an example of a mobile device 990 communicating via the separable distortion disparity determination system 900 of fig. 4. The mobile device 990 includes a user input device 1091 for receiving two-dimensional input selections. The mobile device 990 also comprises flash memory 1040A, which includes the separable distortion disparity determination program 945 to perform all or a subset of the functions described herein. As shown, the memory 1040A also includes a left raw image 858A captured by the left visible light camera 114A, a right raw image 858B captured by the right visible light camera 114B, a left separable distorted image 1008A corresponding to the left raw image, a right separable distorted image 1008B corresponding to the right raw image, a left corrected image 1012A corresponding to the left separable distorted image 1008A, and a right corrected image 1012B corresponding to the right separable distorted image 1008B. The mobile device 990 can include a camera system 1070 that includes at least two visible light cameras (first and second visible light cameras with overlapping fields of view) for capturing the left raw image 858A and the right raw image 858B. Where the mobile device 990 has components similar to those of the eye-worn device 100, such as a camera system, the left raw image 858A and the right raw image 858B can be captured via the camera system 1070 of the mobile device 990.

As shown, the mobile device 990 includes an image display 1080, an image display driver 1090 for controlling the display of images, and a user input device 1091 similar to the eye-mounted device 100. In the example of fig. 5, image display 1080 and user input device 1091 are integrated together into a touch screen display.

Examples of touch screen type mobile devices that may be used include, but are not limited to, smart phones, Personal Digital Assistants (PDAs), tablet computers, notebook computers, or other portable devices. However, the structure and operation of a touch screen type device is provided by way of example and the subject technology described herein is not intended to be so limited. For purposes of this discussion, fig. 5 provides a block diagram illustration of an example mobile device 990 having a touch screen display for displaying content and receiving user input as (or as part of) a user interface.

As shown in fig. 5, the mobile device 990 includes at least one digital transceiver (XCVR) 1010, shown as WWAN XCVRs, for digital wireless communication via a wide area wireless mobile communication network. The mobile device 990 may also include additional digital or analog transceivers, such as a short-range XCVR 1020 for short-range network communication, e.g., via NFC, VLC, DECT, ZigBee, Bluetooth™, or WiFi. For example, the short-range XCVR 1020 may take the form of any available two-way Wireless Local Area Network (WLAN) transceiver of a type compatible with one or more standard communication protocols implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11 and WiMAX.

To generate location coordinates for positioning the mobile device 990, the mobile device 990 may include a Global Positioning System (GPS) receiver. Alternatively or additionally, the mobile device 990 may utilize either or both of the short-range XCVR 1020 and the WWAN XCVR 1010 to generate location coordinates for positioning. For example, positioning systems based on cellular networks, WiFi, or Bluetooth™ can generate very accurate location coordinates, especially when used in combination. Such location coordinates may be transmitted to the eye-worn device over one or more network connections via the XCVRs 1010, 1020.

The transceivers 1010, 1020 (network communication interfaces) are compliant with one or more of the various digital wireless communication standards used by modern mobile networks. Examples of WWAN transceivers 1010 include, but are not limited to, transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and third generation partnership project (3GPP) network technologies, including but not limited to 3GPP type 2 (or 3GPP2) and LTE, sometimes referred to as "4G. For example, the transceivers 1010, 1020 provide two-way wireless communication of information including digitized audio signals, still images and video signals, web page information for display and network-related inputs, and various types of mobile messaging to/from the mobile device 990 for separable distortion-disparity determination.

As described above, several of these types of communications through the transceivers 1010, 1020 and the network involve supporting communication protocols and procedures with the eye-worn device 100 or the server system 998 for separable distortion parallax determination, such as transmitting the left raw image 858A and the right raw image 858B. For example, such communications may transmit packet data to and from the eye-worn device 100 via the short-range XCVR1020 over the wireless connections 925 and 937 as shown in fig. 4. Such communications may also transfer data using IP packet data transmissions via a WWAN XCVR 1010, for example, over a network (e.g., the internet) 995 as shown in fig. 4. Both WWAN XCVR 1010 and short range XCVR1020 are connected to an associated antenna (not shown) by Radio Frequency (RF) transmit and receive amplifiers (not shown).

The mobile device 990 further comprises a microprocessor, shown as CPU 1030, sometimes referred to herein as a host controller. A processor is a circuit having elements constructed and arranged to perform one or more processing functions, typically various data processing functions. While discrete logic components may be used, an example is utilizing components forming a programmable CPU. For example, a microprocessor includes one or more Integrated Circuit (IC) chips incorporating electronic components that perform the functions of a CPU. For example, processor 1030 may be based on any known or available microprocessor architecture, such as Reduced Instruction Set Computing (RISC) using the ARM architecture, as is commonly used today in mobile devices and other portable electronic devices. Of course, other processor circuits may be used to form the CPU 1030 or processor hardware in smart phones, laptops, and tablets.

Microprocessor 1030 acts as a programmable host controller for mobile device 990 by configuring mobile device 990 to perform various operations, e.g., according to instructions or programming that can be executed by processor 1030. For example, such operations may include various general operations of the mobile device, as well as operations related to the separable distortion-disparity determination program 945 and communications with the eye-worn device 100 and the server system 998. While the processor may be configured through the use of hardwired logic, a typical processor in a mobile device is a general purpose processing circuit that is configured through the execution of programming.

Mobile device 990 includes a memory or storage device system for storing data and programming. In this example, the memory system may include flash memory 1040A and Random Access Memory (RAM) 1040B. RAM 1040B serves as short-term storage for instructions and data processed by processor 1030, e.g., as working data processing memory. Flash memory 1040A typically provides longer term storage.

Thus, in the example of mobile device 990, the flash memory 1040A is used to store programming or instructions for execution by the processor 1030. Depending on the type of device, the mobile device 990 stores and runs a mobile operating system through which specific applications, including the separable distortion disparity determination program 945, are executed. An application (e.g., the separable distortion disparity determination program 945) may be a native application, a hybrid application, or a web application (e.g., a dynamic web page executed by a web browser) that runs on the mobile device 990 to determine the separable distortion disparity. Examples of mobile operating systems include Google Android, Apple iOS (iPhone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry OS, and the like.

It should be appreciated that mobile device 990 is merely one type of host in separable distorted disparity determination system 900, and that other arrangements may be utilized.

Fig. 6 is a method flow diagram having steps that may be implemented in a separable distorted disparity determination system. For ease of description and understanding, the steps of the following flow charts are described with reference to the systems and apparatus described herein. Those skilled in the art will recognize other suitable systems and apparatus for performing the steps described herein. In addition, the method is described with reference to a camera system including two cameras separated in the horizontal direction. In other examples, the cameras may have another orientation relative to each other (e.g., separated in a vertical direction).

At step 602, two images of a scene are obtained. A processor (e.g., element 932 of the eye-worn device 100 of fig. 4 or element 1030 of the mobile device 990 of fig. 5) obtains images of a scene captured by respective cameras having different viewpoints. In an example, the left visible light camera 114A of the eye-worn device 100 captures a left raw image (Left_RAW) and the right visible light camera 114B of the eye-worn device 100 captures a right raw image (Right_RAW). The cameras are separated in the direction of the camera baseline (here, the horizontal direction). Fig. 8A depicts an illustrative example of a left original image 858A and a right original image 858B of a three-dimensional scene 715 that includes an object 802 (i.e., a tree).

At step 604, the obtained left and right original images are modified to create respective left and right corrected pixel images. The processor may modify the left and right original images by applying a component-separated correction algorithm to create a left separable distortion corrected pixel image from the left original image and a right separable distortion corrected pixel image from the right original image. In an example, a monotonic function f is applied, where f is a 1D transform function (e.g., f(x) = 1 + k_1*x^2 + k_2*x^4, where x is the horizontal pixel distance from the center of the distortion and k_1 and k_2 are parameters). The monotonic function prevents the mapped image from folding over on itself (e.g., two pixels being mapped to the same target position). An example of a component-separated correction algorithm is shown in equation 1:

(x_separable distortion, y_separable distortion) = (r_x * x_raw, r_y * y_raw)     (1)

where r_x = f(x_raw^2); and

r_y = f(y_raw^2);

where x_raw is the pixel position in the horizontal direction and y_raw is the pixel position in the vertical direction.

As shown in equation 1, in the component-separated correction algorithm, the x component is affected only by the directional component in the x direction (and not by the directional component in the y direction). Likewise, the y component is affected only by the directional component in the y direction (and not by the directional component in the x direction). This separation of the x and y components produces a more realistic image than can typically be achieved using conventional techniques. In addition, the resulting images better retain corresponding objects from the image pair in the same raster row, which facilitates detection of the corresponding objects when determining disparity.
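
As a concrete illustration, the following Python sketch applies a component-separated correction of the form given in equation 1 to a grid of normalized raw coordinates. Equation 1 writes r_x = f(x_raw^2); the sketch folds the squaring into a single even polynomial in x_raw, which is one plausible reading of the f defined above. The function names, the coordinate normalization, and the values of k1 and k2 are illustrative assumptions:

import numpy as np

def separable_scale(coord, k1, k2):
    # One reading of the document's f: an even polynomial evaluated on the
    # (normalized) coordinate of a single axis, so the scale for x depends
    # only on x and the scale for y depends only on y.
    return 1.0 + k1 * coord**2 + k2 * coord**4

def separable_correction(x_raw, y_raw, k1=0.08, k2=0.01):
    # Component-separated correction in the spirit of equation 1; k1 and k2
    # are hypothetical lens parameters, not values from the document.
    r_x = separable_scale(x_raw, k1, k2)
    r_y = separable_scale(y_raw, k1, k2)
    return r_x * x_raw, r_y * y_raw

# Example: coordinates normalized to [-1, 1] about the distortion center.
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 640), np.linspace(-1.0, 1.0, 480))
x_corr, y_corr = separable_correction(xs, ys)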

Fig. 8B depicts an illustrative example of a left separable distortion corrected image 808A corresponding to the left original image 858A (fig. 8A) and a right separable distortion corrected image 808B corresponding to the right original image 858B (fig. 8A). During modification, the original images 858A and 858B are transformed into respective separable distortion corrected images 808A and 808B that maintain pixel scene matching at least in the horizontal direction. Distortion may be introduced by the respective lenses of the left and right visible light cameras. Such lens distortion may, for example, render as a curve in the image what would more accurately be represented as a straight real-world line. An example of an original image with distortion introduced by the lens is shown in fig. 10A. Fig. 10B shows a separable distortion corrected image. As shown, the distorted image in fig. 10A includes curved aspects (e.g., the roof of a building) as compared to the separable distortion corrected image of fig. 10B.

At step 606, pixel pairs are determined from corresponding image pixels between the left and right separable distortion corrected images in the horizontal direction. The processor may determine pixel pairs by correlating pixels/objects in the left separable distortion corrected image 808A with pixels/objects in the right separable distortion corrected image 808B. A match may be determined by comparing the color and/or image properties of one or more pixels (e.g., a pixel area of 50 pixels) in the left image with one or more pixels in the right image. One or more pixels may be identified as a pair if the color and/or image attributes of the compared pixels are the same or within a threshold (e.g., 5%).
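
The following Python sketch illustrates one such row-wise pairing on a pair of corrected images, using the 5% tolerance mentioned above; the patch size, search range, and helper name are illustrative assumptions rather than the document's method:

import numpy as np

def match_pixel_along_row(left, right, row, x_left, patch=5, max_disp=64, tol=0.05):
    # Toy row-wise pairing for a horizontal camera baseline: compare a small
    # patch around (row, x_left) in the left image against candidate positions
    # on the same row of the right image. patch and max_disp are illustrative
    # values; boundary handling is omitted for brevity.
    h = patch // 2
    ref = left[row - h:row + h + 1, x_left - h:x_left + h + 1].astype(np.float32)
    best_x, best_err = None, np.inf
    for d in range(max_disp):
        x_right = x_left - d               # corresponding feature shifts toward smaller x in the right image
        if x_right - h < 0:
            break
        cand = right[row - h:row + h + 1, x_right - h:x_right + h + 1].astype(np.float32)
        err = np.mean(np.abs(ref - cand)) / 255.0   # normalized color difference
        if err < best_err:
            best_x, best_err = x_right, err
    return (best_x, best_err) if best_err <= tol else (None, best_err)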

At step 608, a corrected disparity match for each pixel pair is determined. The processor may determine a corrected disparity match for each pixel pair by determining a difference in corresponding pixel positions between the left and right separable distortion corrected images and modifying the difference based on the corresponding positions in the respective original images.

The processor may first determine the difference between corresponding pairs of pixels between the left and right separable distortion corrected images (step 702; fig. 7). The processor may determine the disparity between the separable distortion corrected images by correlating the corrected images 808A/B to identify corresponding features and then counting the number of pixels (typically in the horizontal direction) between the represented pixel location 810C in the right corrected image 808B (i.e., the location at which the subject pixel 810A of the left image 808A would appear in the absence of disparity) and the location 810B at which the corresponding subject pixel actually appears in the right image 808B. For example, the correlation of the left and right pixels may be achieved using semi-global block matching (SGBM). This is illustrated in fig. 9, where pixel 810A is shown in solid lines in the left separable distortion corrected image 808A and a representation of the same pixel location 810C is shown in dashed lines in the right separable distortion corrected image 808B. The disparity of the image pixels in the separable distortion corrected images is the difference between the represented pixel location 810C and the actual pixel location 810B of the corresponding feature in the right image 808B. As shown, there may be a minimum expected disparity due to the distance between the cameras capturing the images.
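
For example, the SGBM correlation mentioned above could be performed with OpenCV along the lines of the following sketch; the file names and matcher parameters are illustrative assumptions:

import cv2

left = cv2.imread("left_corrected.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right_corrected.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching over the corrected image pair.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range; must be a multiple of 16
    blockSize=5,
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0   # SGBM returns fixed-point disparities scaled by 16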

The processor then determines a corrected distortion disparity using the difference in the respective original pixel positions that correspond to the positions of the pixel pair in the separable distortion corrected images (step 704; fig. 7). The processor then modifies the disparity match by replacing the separable distortion disparity with the corrected disparity (step 706; fig. 7).
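
A minimal sketch of steps 704 and 706, assuming precomputed lookup tables that map each corrected pixel position back to its raw-image x coordinate (the table names and the helper function are hypothetical):

def corrected_distortion_disparity(x_left_c, x_right_c, y_c,
                                    inv_map_left_x, inv_map_right_x):
    # For a pixel pair matched at (x_left_c, y_c) and (x_right_c, y_c) in the
    # corrected images, look up the corresponding raw-image x positions and
    # use their difference as the corrected distortion disparity.
    # inv_map_left_x and inv_map_right_x are assumed precomputed arrays
    # mapping corrected pixel positions to raw x coordinates.
    x_left_raw = inv_map_left_x[y_c, x_left_c]
    x_right_raw = inv_map_right_x[y_c, x_right_c]
    return float(x_left_raw - x_right_raw)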

At step 610, a depth map of the scene is created using the corrected distortion disparity. In one example, the depth map includes a plurality of vertices based on the corrected distortion disparity, and each vertex includes one or more of a color attribute or a texture attribute. The processor may calculate Z-position coordinates for each vertex of the depth map using the corrected distortion disparity.
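
The document does not spell out how the Z coordinate is computed from the corrected distortion disparity; the sketch below uses the standard pinhole-stereo relation Z = f * B / d with illustrative calibration values:

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    # Standard pinhole-stereo relation Z = f * B / d; the calibration values
    # used in the example below are illustrative, not taken from the document.
    if disparity_px <= 0:
        return float("inf")        # no finite depth for zero or negative disparity
    return focal_length_px * baseline_m / disparity_px

# Example: 600 px focal length, 6 cm baseline, 12 px corrected disparity -> Z = 3.0 m
z = depth_from_disparity(12.0, 600.0, 0.06)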

At step 612, a three-dimensional (3D) scene is generated from the corrected images using the determined disparity. To create the 3D scene, the processor renders the 3D scene from the left and right separable distortion corrected images and the depth map. The processor may render the 3D scene using an image-processing 3D rendering program. Suitable 3D rendering programs will be understood by those skilled in the art from the description herein.

At step 614, the 3D image is rendered. In an example, the processor may render the 3D scene on a display of the eye-worn device or of a mobile device coupled to the eye-worn device.

In an example, the depth map may be used in a computer vision algorithm that utilizes depth information. For example, the depth map may enable the computer vision system to understand the position of the hand in 3D coordinates.

Any of the separable distortion disparity determination functions described herein for the eye-worn device 100, the mobile device 990, and the server system 998 may be embodied in one or more applications, as previously described. According to some embodiments, a "function," "application," "instruction," or "program" is a program that performs one or more functions defined in the program. Various programming languages may be employed to create one or more of the applications, structured in various ways, such as an object-oriented programming language (e.g., Objective-C, Java, or C++) or a procedural programming language (e.g., C or assembly language). In a specific example, a third-party application (e.g., an application developed by an entity other than the vendor of the particular platform using an ANDROID™ or IOS™ Software Development Kit (SDK)) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application may invoke API calls provided by the operating system to facilitate the functionality described herein.

Thus, a machine-readable medium may take many forms of tangible storage media. For example, non-volatile storage media include optical or magnetic disks, such as any of the storage devices in any computer or the like, such as may be used to implement the client devices, media gateways, transcoders, etc. shown in the figures. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electrical or electromagnetic signals, or acoustic or light waves such as those generated during Radio Frequency (RF) and Infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, any other physical storage medium with patterns of holes, a RAM, a PROM, an EPROM, a flash EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.

The scope of protection is limited solely by the appended claims. That scope is intended, and should be interpreted, to be as broad as is consistent with the ordinary meaning of the language used in the claims when interpreted in light of this specification and the prosecution history that follows, and to encompass all structural and functional equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed.

Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is recited in the claims.

It will be understood that the terms and expressions used herein have the ordinary meaning accorded to such terms and expressions in their respective areas of inquiry and study, except where specific meanings have otherwise been set forth herein. Relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises or includes a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "a" or "an" does not, without further constraints, preclude the existence of additional similar elements in the process, method, article, or apparatus that comprises the element.

Unless otherwise indicated, any and all measurements, values, ratings, positions, sizes, dimensions, and other specifications set forth in this specification (including the appended claims) are approximate and imprecise. Such amounts are intended to have reasonable ranges consistent with the functions to which they pertain and with the conventions in the art to which they pertain. For example, unless explicitly stated otherwise, parameter values and the like may vary by ± 10% from the stated amounts.

In addition, in the foregoing detailed description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, the subject matter to be protected lies in less than all features of any single disclosed example. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.

While the foregoing has described what are considered to be the best mode and other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that they are suitable for use in numerous applications, only some of which have been described herein. It is intended that the following claims protect any and all modifications and variations that fall within the true scope of the present concepts.
