Creating shock waves in three-dimensional depth video and images

Document No.: 621634    Publication date: 2021-05-07

Abstract: This technology, "Creating shock waves in three-dimensional depth video and images," was created by Sagi Katz and Eyal Zak on 2019-09-03. A virtual shockwave creation system includes an eye-worn device comprising a frame, eye-worn device legs connected to the sides of the frame, and a depth capture camera. A processor executes programming that configures the virtual shockwave creation system to generate a respective warped shockwave image for each of a plurality of initial depth images by applying a transformation function to the initial three-dimensional coordinates. The virtual shockwave creation system creates a warped shockwave video comprising a series of the generated warped shockwave images and presents the warped shockwave video via an image display.

1. A virtual shockwave creation system comprising:

an eye-worn device comprising:

a frame;

an eye-worn device leg connected to a side of the frame; and

a depth capture camera supported by at least one of the frame or the eye-worn device leg, wherein the depth capture camera comprises: (i) at least two visible light cameras having overlapping fields of view; or (ii) at least one visible light camera and a depth sensor;

an image display for presenting an initial video comprising an initial image, wherein the initial image is a two-dimensional raw image or a processed raw image;

an image display driver coupled to the image display to control the image display to present the initial video;

a user input device for receiving a shock wave effect option for a user to apply a shock wave to the presented initial video;

a memory;

a processor coupled to the depth capture camera, the image display driver, the user input device, and the memory; and

programming in the memory, wherein execution of the programming by the processor configures the virtual shockwave creation system to perform the functions of:

presenting the initial video via the image display;

receiving the shock wave effect option of the user via the user input device to apply a shock wave to the presented initial video;

generating, via the depth capture camera, a series of initial depth images from respective initial images of the initial video, wherein:

based on the respective initial images of the initial video, each of the initial depth images is associated with a time coordinate on a time (T) axis of a presentation time;

each of the initial depth images is formed by a matrix of vertices, each vertex representing a sampled 3D position in the respective three-dimensional scene;

each vertex has a position attribute; and

the position attribute of each vertex is based on a three-dimensional position coordinate system including an X position coordinate on an X-axis for horizontal position, a Y position coordinate on a Y-axis for vertical position, and a Z position coordinate on a Z-axis for depth position;

generating, in response to the received shockwave effect option, a respective warped shockwave image for each of the initial depth images by applying a transformation function to vertices of the respective initial depth image based at least on the Y and Z position coordinates of the vertices and the associated time coordinate of the respective initial depth image;

creating a warped shockwave video comprising a series of the generated warped shockwave images; and

presenting, via the image display, the warped shockwave video; wherein:

the transformation function transforms respective shockwave regions of vertices grouped together along the Z-axis based at least on the associated time coordinates of the respective initial depth images; and

the transformation function moves respective Y position coordinates of vertices in the respective shockwave regions of vertices vertically up or down on the Y-axis.

2. The virtual shockwave creation system of claim 1 wherein:

the transformation function, for each initial depth image, moves the respective Y position coordinate of each vertex in the respective shockwave region of vertices vertically up or down on the Y-axis to vertically undulate or oscillate the respective shockwave region of vertices; and

for each of the initial depth images, the function of generating the respective shockwave depth image by applying the respective transformation function to the respective initial depth image vertically undulates or oscillates the respective shockwave region of vertices and stores the respective initial depth image with the vertical undulation or oscillation as the respective shockwave depth image.

3. The virtual shockwave creation system of claim 1 wherein:

the function of presenting, via the image display, the warped shockwave video comprising a series of the generated warped shockwave images presents a wavefront appearance progressing radially from the depth capture camera, radially from an object emitting a shockwave, or along a Z-axis of a warped shockwave image of the warped shockwave video.

4. The virtual shockwave creation system of claim 1 wherein:

the transformation function is based on a waveform that moves the respective Y position coordinates of vertices in the respective shockwave region vertically up or down; and

the waveform provides a wavefront appearance that progresses radially from the depth capture camera, radially from an object emitting a shockwave, or along a Z-axis of a warped shockwave image of the warped shockwave video.

5. The virtual shockwave creation system of claim 4 wherein:

the transformation function is applied to create a new modified set of vertices or a three-dimensional image without a depth map.

6. The virtual shockwave creation system of claim 1 wherein:

an earlier initial depth image is associated with an earlier temporal coordinate on the temporal (T) axis of an earlier presentation time in the initial video; and

an intermediate initial depth image is associated with an intermediate time coordinate on the time (T) axis of an intermediate presentation time subsequent to the earlier presentation time in the initial video;

the function of transforming the respective shockwave regions of vertices grouped together along the Z-axis based at least on the associated time coordinate of the respective initial depth image comprises:

transforming, based on the earlier time coordinate, a near-range shockwave region of the earlier initial depth image having nearer depth position vertices grouped together consecutively along the Z-axis; and

transforming, based on the intermediate time coordinate, an intermediate shockwave region of the intermediate initial depth image having intermediate depth position vertices grouped together consecutively along the Z-axis; and

the near-range shockwave region of vertices is closer in depth along the Z-axis than the intermediate shockwave region of vertices.

7. The virtual shockwave creation system of claim 6 wherein:

the initial video further comprises:

a later initial depth image is associated with a later time coordinate on the time (T) axis of a later presentation time following the intermediate presentation time of the intermediate initial depth image in the initial video; and

the function of transforming the respective shockwave regions of vertices grouped together along the Z-axis based at least on the associated time coordinate of the respective initial depth image further comprises:

transforming, based on the later time coordinate, a far-range shockwave region of the later initial depth image having farther depth position vertices grouped together consecutively along the Z-axis; and

the far-range shockwave region of vertices is farther in depth along the Z-axis than the intermediate shockwave region of vertices.

8. The virtual shockwave creation system of claim 1 wherein:

execution of the programming by the processor further configures the virtual shockwave creation system to calculate a respective affinity matrix for the vertices of each respective initial depth image, the affinity matrix determining impact weights of the transformation function on each of the vertices in the respective shockwave region of vertices;

the impact weight is based at least on a vertical position of the vertex; and

for each of the initial depth images, the function of generating the respective shockwave depth image by applying the transformation function to the respective initial depth image is further based on the calculated respective affinity matrix.

9. The virtual shockwave creation system of claim 8 wherein:

the impact weights become larger as the height of the vertex relative to the corresponding initial depth image base plane decreases, such that the transformation function moves the Y position coordinate of the vertex vertically upward on the Y-axis by a greater degree; and

the impact weight becomes smaller as the height of the vertex relative to the base plane increases, such that the transformation function moves the Y position coordinate of the vertex vertically upward on the Y-axis by a lesser degree.

10. The virtual shockwave creation system of claim 1 wherein:

the virtual shock wave creation system further comprises an inertial measurement unit; and

the function of transforming the respective shockwave regions of vertices grouped together along the Z-axis based at least on the associated time coordinate of the respective initial depth image comprises:

tracking, via the inertial measurement unit, a head direction of the eyewear wearer, the eyewear wearer being the user or a different user;

determining, based on the head direction, a vertex base plane that is continuous along the Z axis of the respective initial depth image; and

transforming the respective shockwave region of vertices based at least on the base plane.

11. The virtual shockwave creation system of claim 10 wherein:

the function of tracking the head orientation of the wearer by the inertial measurement unit comprises measuring the head orientation in the X-axis, the Y-axis, the Z-axis, or a combination thereof by the inertial measurement unit; and

determining a deflection angle of the depth capture camera in the X-axis, the Y-axis, the Z-axis, or a combination thereof in response to the measured head direction; and

reorienting the vertices based on the deflection angle such that one axis (the X-axis, the Y-axis, or the Z-axis) is perpendicular to the ground.

12. The virtual shockwave creation system of claim 1 wherein:

for each of the initial depth images, the function of generating the respective shockwave depth image by applying the transformation function to the respective initial depth image comprises:

multiplying each vertex in the respective shockwave region of vertices of the respective initial depth image by the transformation function to obtain a new Y position coordinate in the three-dimensional position coordinate system.

13. The virtual shockwave creation system of claim 1 wherein:

the processor comprises a first processor and a second processor;

the memory comprises a first memory and a second memory;

the eye-worn device includes:

a first network communication interface for communicating over a network;

the first processor is coupled to the first network communication interface;

the first processor may access the first memory; and

programming in the first memory, wherein execution of the programming by the first processor configures the eye-worn device to perform a function of generating the initial depth image from the initial image of the initial video via the depth capture camera; and

the virtual shockwave creation system further comprises a host coupled to the eye-worn device over a network, the host comprising:

a second network communication interface for communicating over a network;

the second processor is coupled to the second network communication interface;

the second processor may access the second memory; and

programming in the second memory, wherein execution of the programming by the second processor configures the host to perform functions including:

presenting the initial video via the image display;

receiving the shockwave effect option of the user via the user input device to apply shockwaves to the presented initial video;

generating, for each of the initial depth images, the respective shockwave depth image by applying the transformation function to the respective initial depth image based at least on the associated time coordinate of each of the initial depth images in response to the received shockwave effect option;

creating the warped shockwave video comprising a series of the generated warped shockwave images; and

presenting the warped shockwave video via the image display.

14. The virtual shockwave creation system of claim 13 wherein:

the host is a mobile device;

the network is a wireless short-range network or a wireless local area network; and

the user input device comprises a touch screen or a computer mouse.

15. A method comprising the steps of:

generating a series of initial depth images from the initial images of an initial video via a depth capture camera;

determining, in response to the received shockwave effect option, a respective rotation matrix for each of the initial depth images to adjust at least the X and Y position coordinates of the vertex based on the rotation detected by the depth capture camera;

generating a respective warped shockwave image for each of the initial depth images by applying the respective rotation matrix and a transformation function to vertices of the respective initial depth image;

creating a warped shockwave video comprising a series of the generated warped shockwave images; and

presenting the warped shockwave video via an image display.

16. The method of claim 15, wherein:

the transformation function transforms respective shockwave regions of vertices grouped together along a Z-axis based at least on the associated time coordinates of the respective initial depth images; and

the transformation function moves respective Y position coordinates of vertices in the respective shockwave regions of vertices vertically up or down on the Y-axis.

17. The method of claim 15, wherein:

the function of presenting, via the image display, the warped shockwave video comprising a series of the generated warped shockwave images presents an appearance of waves that progress radially from the depth capture camera, radially from an object emitting shockwaves, or roll along a Z-axis of the warped shockwave images of the warped shockwave video.

18. The method of claim 15, wherein:

the transformation function is based on a waveform that moves the respective Y position coordinates of vertices in the respective shockwave region vertically up or down; and

the waveform provides the appearance of a wave that progresses radially from the depth capture camera, radially from the object emitting the shockwave, or rolls along the Z-axis of the warped shockwave image of the warped shockwave video.

Technical Field

The present subject matter relates to wearable devices (e.g., eye-worn devices) and mobile devices and techniques by which a user can create shock waves in three-dimensional space of videos and images.

Background

Computing devices, including wearable devices including portable eye-worn devices (e.g., smart glasses, headwear, and head-worn devices), mobile devices (e.g., tablet computers, smart phones, and notebook computers), and commercially available personal computers all integrate an image display and a camera. Today, users of computing devices can create effects on two-dimensional (2D) photographs using camera lenses or filters, and can edit two-dimensional photographs using photo decoration applications such as stickers, emoji expressions, and text.

With the advent of three-dimensional (3D) image and video content, more advanced processing and interaction are needed to transform that content (e.g., videos and pictures). For example, it is desirable to be able to process and interact with three-dimensional image and video content to create graphical effects on the three-dimensional images and videos.

Accordingly, there is a need to enhance video and image editing effects that can be used for three-dimensional images and video content.

Brief Description of Drawings

One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to the same or similar elements.

Fig. 1A is a right side view of an example hardware configuration of an eye-worn device for a virtual shockwave creation system, in which a transformation function is applied to an initial depth image of an initial video to generate a warped shockwave image, creating a warped shockwave video.

FIG. 1B is a cross-sectional top view of the right block of the eye-worn device of FIG. 1A depicting the right visible camera of the depth capture camera and the circuit board.

Fig. 1C is a left side view of the example hardware configuration of the eye-worn device of fig. 1A, showing a left visible light camera of the depth capture camera.

FIG. 1D is a cross-sectional top view of the left block of the eye-worn device of FIG. 1C depicting the left visible light camera and the circuit board of the depth capture camera.

Fig. 2A is a right side view of another example hardware configuration of an eye-worn device for a virtual shockwave creation system, showing a right visible light camera and a depth sensor of a depth capture camera that can generate an initial depth image of a series of initial depth images (as in an initial video).

Fig. 2B and 2C are rear views of example hardware configurations of eye-worn devices including two different types of image displays.

Fig. 3 shows a rear perspective cross-sectional view of the eye-worn device of fig. 2A depicting the infrared camera of the depth sensor, the front portion of the frame, the rear portion of the frame, and the circuit board.

Fig. 4 is a cross-sectional view taken through the infrared camera and frame of the eye-worn device of fig. 3.

Fig. 5 shows a rear perspective view of the eye-worn device of fig. 2A depicting the infrared emitter of the depth sensor, the infrared camera of the depth sensor, the front portion of the frame, the rear portion of the frame, and the circuit board.

Fig. 6 is a cross-sectional view taken through the eye-worn device infrared emitter and frame of fig. 5.

Fig. 7 depicts an example of an infrared light pattern emitted by an infrared emitter of an eye-worn device depth sensor and a change in reflection of the infrared light emission pattern captured by an infrared camera of the eye-worn device depth sensor to measure the depth of a pixel in an original image to generate an initial depth image from an initial video.

FIG. 8A depicts an example of infrared light captured by a depth sensor infrared camera as an infrared image and visible light captured by a visible light camera as a raw image to generate an initial depth image of a three-dimensional scene.

Fig. 8B depicts an example of visible light captured by a left visible light camera as a left raw image and visible light captured by a right visible light camera as a right raw image to generate an initial depth image of a three-dimensional scene.

FIG. 9 is a high-level functional block diagram of an example virtual shockwave creation system including an eye-worn device with a depth capture camera to generate an initial depth image (as in an initial video), a user input device (such as a touch sensor), a mobile device, and a server system connected via various networks.

FIG. 10 shows an example of a mobile device hardware configuration of the virtual shockwave creation system of FIG. 9, including a user input device (e.g., a touch screen device) that receives a shockwave effect option to apply to an initial depth image, generating a warped shockwave image (as in a warped shockwave video).

FIG. 11 is a flow diagram of a method that may be implemented in a virtual shockwave creation system that applies shockwaves from an initial video to an initial depth image to generate a warped shockwave image to create a warped shockwave video.

Figs. 12A-B illustrate examples of a first raw image captured by one of the visible light cameras and a first shockwave region applying a transformation function to a vertex of the generated first initial depth image, respectively.

Figs. 13A-B illustrate examples of a second raw image captured by one of the visible light cameras and a second shockwave region applying a transformation function to a vertex of the generated second initial depth image, respectively.

Figs. 14A-B illustrate examples of a third raw image captured by one of the visible light cameras and a third shockwave region applying a transformation function to a vertex of the generated third initial depth image, respectively.

Figs. 15A-B illustrate examples of a fourth original image captured by one of the visible light cameras and a fourth shockwave region applying a transformation function to vertices of the generated fourth initial depth image, respectively.

Figs. 16A-B illustrate examples of a fifth raw image captured by one of the visible light cameras and a fifth shockwave region applying a transformation function to a vertex of the generated fifth initial depth image, respectively.

Figs. 17A-B illustrate examples of a sixth original image captured by one of the visible light cameras and a sixth shockwave region applying a transformation function to a vertex of the generated sixth initial depth image, respectively.

Figs. 18A-B illustrate examples of a seventh original image captured by one of the visible light cameras and a seventh shockwave region applying a transformation function to vertices of the generated seventh initial depth image, respectively.

Figs. 19A-B illustrate examples of an eighth original image captured by one of the visible light cameras and an eighth shockwave region applying an eighth transformation function to a vertex of the generated eighth initial depth image, respectively.

Figs. 20A-B show examples of a ninth original image captured by one of the visible light cameras and a ninth shockwave region applying a transformation function to a vertex of the generated ninth initial depth image, respectively.

Figs. 21A-B show examples of a tenth original image captured by one of the visible light cameras and a tenth shockwave region applying a transformation function to a vertex of the generated tenth initial depth image, respectively.

Figs. 22A-B show examples of an eleventh original image captured by one of the visible light cameras and an eleventh shockwave region applying a transform function to a vertex of the generated eleventh initial depth image, respectively.

Figs. 23A-B show examples of a twelfth original image captured by one of the visible light cameras and a twelfth shockwave region applying a transformation function to a vertex of the generated twelfth initial depth image, respectively.

Figs. 24A-B show examples of a thirteenth original image captured by one of the visible light cameras and a thirteenth shockwave region applying a transformation function to a vertex of the generated thirteenth initial depth image, respectively.

Detailed Description

In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. It will be apparent, however, to one skilled in the art that the present teachings may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.

As used herein, the term "shockwave" refers to a computer-generated effect applied to an image or series of images that creates the appearance of a wave formed through a medium such as a structure, a person, and/or air. The term "coupled" or "connected" as used herein refers to any logical, optical, physical, or electrical connection, etc., through which an electrical or magnetic signal generated or provided by one system element is transferred to another coupled or connected element. Unless otherwise specified, coupled or connected elements or devices are not necessarily directly connected to each other, but may be separated by intermediate components, elements, or propagation media that may modify, manipulate, or carry electrical signals. The term "on" refers to direct support by an element or indirect support provided by another element integrated into or supported by the element.

For purposes of illustration and discussion, the orientation of an eye-worn device, related components, and any complete device incorporating a depth capture camera as shown in any of the figures is given by way of example only. In the operation for creating the shockwave, the eye-worn device may be oriented in any other direction suitable for the particular application of the eye-worn device, such as up, down, sideways, or any other direction. Further, to the extent used herein, any directional terms front, back, inside, outside, facing, left, right, lateral, longitudinal, upward, downward, top, bottom, side, horizontal, vertical, and diagonal, are used by way of example only and are not limited to the direction or orientation of any depth capture camera or depth capture camera assembly configured as otherwise described herein.

Additional objects, advantages and novel features of the examples will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following figures, or may be learned by production or operation of the examples. The objects and advantages of the present subject matter may be realized and attained by means of the methodologies, instrumentalities and combinations particularly pointed out in the appended claims.

Reference will now be made in detail to the examples illustrated in the accompanying drawings and discussed below.

Fig. 1A is a right side view of an example hardware configuration of an eye-worn device 100 for a virtual shockwave creation system, showing a right visible light camera 114B of a depth capture camera for generating an initial depth image. As described further below, in a virtual shockwave creation system, a transformation function is applied to a series of initial depth images of an initial video to generate the series of warped shockwave images of a warped shockwave video. This transformation function may depend on the spatial and temporal coordinates of the initial depth image, as explained below.

The eye-worn device 100 includes a right optical assembly 180B having an image display to present the initial video, which includes initial images, and the two-dimensional warped shockwave video, which includes warped shockwave images. In this example, the user is presented with two-dimensional visible light content, namely the initial video (raw images or processed raw images) and a warped version of those images (the warped shockwave video); however, the user is not presented with the depth video that includes the initial depth images generated from the raw images. The depth video including the generated initial depth images is used for computational purposes to generate the warped shockwave images and create the warped shockwave video. As shown in fig. 1A-B, the eye-worn device 100 includes a right visible light camera 114B. The eye-worn device 100 may include a plurality of visible light cameras 114A-B that form a passive type of depth capture camera, such as a stereo camera, with the right visible light camera 114B located on the right block 110B. As shown in fig. 1C-D, the eye-worn device 100 may also include a left visible light camera 114A. Alternatively, in the example of fig. 2A, the depth capture camera may be an active type of depth capture camera (see element 213 of fig. 2A) that includes a single visible light camera 114B and a depth sensor.

The left and right visible light cameras 114A-B are sensitive to visible range wavelengths. Each of the visible light cameras 114A-B has a different forward facing field of view that overlaps to allow for the generation of a three dimensional depth image, e.g., the right visible light camera 114B has the depicted right field of view 111B. In general, a "field of view" is the portion of a scene that is visible by a camera in a particular location and orientation in space. Objects or object features outside of the field of view 111A-111B are not recorded in the original image (e.g., a photograph or picture) when the image is captured by a visible light camera. The field of view describes the range of angles at which the image sensors of the visible light cameras 114A-B receive electromagnetic radiation of a given scene in a captured image of the given scene. The field of view may be expressed as the angular size of the viewing cone, i.e., the viewing angle. The viewing angle can be measured horizontally, vertically, or diagonally.
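As a minimal illustration of the angle-of-view relation mentioned above (a sketch in Python, using the standard formula angle = 2*arctan(d / 2f); the sensor dimensions and focal length below are hypothetical values chosen only to land near the 24-degree example quoted in the next paragraph):

    import math

    def angle_of_view(sensor_dim_mm: float, focal_length_mm: float) -> float:
        # Angle (in degrees) subtended by one sensor dimension at the given focal length.
        return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

    # Hypothetical 4.0 mm sensor width behind a 9.4 mm lens gives roughly a 24-degree
    # horizontal angle of view; the same relation applies vertically and diagonally.
    print(round(angle_of_view(4.0, 9.4), 1))   # horizontal, about 24.0
    print(round(angle_of_view(3.0, 9.4), 1))   # vertical (hypothetical 3.0 mm height)
    print(round(angle_of_view(5.0, 9.4), 1))   # diagonal (hypothetical 5.0 mm diagonal)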

In an example, the field of view of the visible light cameras 114A-B is between 15° and 30°, e.g., 24°, with a resolution of 480 x 480 pixels. The "field angle" describes the angular range that the lens of the visible light cameras 114A-B or infrared camera 220 (see FIG. 2A) can effectively image. Typically, the image circle produced by a camera lens is large enough to completely cover the film or sensor, possibly including some vignetting toward the edges. If the field angle of the camera lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edges, and the effective angle of view will be limited to the field angle.

Examples of such visible light cameras 114A-B include high-resolution Complementary Metal Oxide Semiconductor (CMOS) image sensors and Video Graphics Array (VGA) cameras, such as 640p (e.g., 640 x 480 pixels, for a total of about 0.3 megapixels), 720p, or 1080p. The term "overlapping", as used herein when referring to fields of view, means that the pixel matrices of the generated raw images or infrared images of a scene overlap by 30% or more. The term "substantially overlapping", as used herein when referring to fields of view, means that the pixel matrices of the generated raw images or infrared images of a scene overlap by 50% or more.

Image sensor data from the visible light cameras 114A-B is captured along with the geo-location data, digitized by the image processor, and stored in memory. The left and right raw images captured by the respective visible light cameras 114A-B are in a two-dimensional spatial domain, including a matrix of pixels on a two-dimensional coordinate system that includes an X-axis for horizontal position and a Y-axis for vertical position. Each pixel includes a color attribute (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value) and a position attribute (e.g., an X position coordinate and a Y position coordinate).

To provide stereo vision, the visible light cameras 114A-B may be coupled to an image processor (element 912 of FIG. 9) for digital processing and timestamping of images of a scene. The image processor 912 includes circuitry for receiving signals from the visible light cameras 114A-B and processing those signals into a format suitable for storage in memory. The timestamp may be added by the image processor or by another processor that controls operation of the visible light cameras 114A-B. The visible light cameras 114A-B allow the depth capture camera to simulate human binocular vision. The depth capture camera provides the ability to reproduce a three-dimensional image based on two captured images from the visible light cameras 114A-B having the same timestamp. Such three-dimensional images allow for an immersive, realistic experience, for example, for virtual reality or video games. A three-dimensional depth video may be generated by stitching together a series of three-dimensional depth images with associated time coordinates of the depth video presentation time.
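The timestamp matching and stitching described above can be sketched as follows (Python; generate_depth is a hypothetical stand-in for the stereo depth pipeline described in the following paragraphs, not an API from this disclosure):

    from typing import Any, Callable, Dict, List, Tuple

    def stitch_depth_video(
        left_frames: Dict[float, Any],                # timestamp -> left raw image
        right_frames: Dict[float, Any],               # timestamp -> right raw image
        generate_depth: Callable[[Any, Any], Any],    # hypothetical stereo depth pipeline
    ) -> List[Tuple[float, Any]]:
        # Pair left and right raw images that share a timestamp, generate a depth image
        # from each pair, and stitch the results into a depth video ordered along the
        # time (T) axis of the presentation time.
        common_timestamps = sorted(left_frames.keys() & right_frames.keys())
        return [(t, generate_depth(left_frames[t], right_frames[t])) for t in common_timestamps]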

For stereo vision, a pair of raw red, green, and blue (RGB) images of a scene (one image for each of the left and right visible light cameras 114A-B) is captured at a given moment in time. When the pair of raw images captured from the forward-facing left and right fields of view 111A-B of the left and right visible light cameras 114A-B is processed (e.g., by the image processor), a depth image is generated that a user can perceive on the optical assemblies 180A-B or on another image display (e.g., of a mobile device). The generated depth image may include a matrix of vertices in a three-dimensional position coordinate system in the three-dimensional spatial domain, including an X-axis for horizontal position (e.g., length), a Y-axis for vertical position (e.g., height), and a Z-axis for depth position (e.g., distance). The depth video associates each of the series of generated depth images with a time coordinate on the time (T) axis of the depth video presentation time (e.g., each depth image includes a spatial component as well as a temporal component). The depth video may also include an audio component (e.g., an audio track or stream) that may be captured by a microphone. Each vertex includes a color attribute (e.g., a red pixel light value, a green pixel light value, and/or a blue pixel light value), a position attribute (e.g., an X position coordinate, a Y position coordinate, and a Z position coordinate), a texture attribute, and/or a reflectivity attribute. The texture attribute quantifies the perceived texture of the depth image, such as the spatial arrangement of color or brightness in a region of vertices of the depth image.
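A minimal sketch of this vertex and depth-image data model, assuming Python; the class and field names are illustrative only and are not taken from this disclosure:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Vertex:
        x: float                                  # X position coordinate (horizontal position)
        y: float                                  # Y position coordinate (vertical position)
        z: float                                  # Z position coordinate (depth position)
        rgb: Tuple[int, int, int] = (0, 0, 0)     # color attribute (red, green, blue pixel light values)
        texture: float = 0.0                      # texture attribute
        reflectivity: float = 0.0                 # reflectivity attribute

    @dataclass
    class DepthImage:
        time_coordinate: float                                        # time coordinate T of the presentation time
        vertices: List[List[Vertex]] = field(default_factory=list)    # matrix of vertices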

Typically, the perception of depth comes from the disparity of a given 3D point in the left and right raw images captured by the visible light cameras 114A-B. Disparity is the difference in image position of the same 3D point when projected from the viewpoints of the visible light cameras 114A-B (d = x_left - x_right). For visible light cameras 114A-B with parallel optical axes, a focal length f, a baseline b, and corresponding image points (x_left, y_left) and (x_right, y_right), the position of the 3D point (its Z-axis position coordinate) may be derived using triangulation, which determines depth from disparity (Z = f·b/d). In general, the depth of a 3D point is inversely proportional to its disparity. Various other techniques may also be used. The generation of the three-dimensional depth images and the warped shockwave images is described in more detail later.
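The triangulation relation can be written as a small helper; this is a hedged sketch with hypothetical numbers, not code from this disclosure:

    def depth_from_disparity(x_left: float, x_right: float,
                             focal_length: float, baseline: float) -> float:
        # Standard rectified-stereo triangulation: d = x_left - x_right, Z = f * b / d,
        # so depth is inversely proportional to disparity.
        d = x_left - x_right
        if d == 0:
            return float("inf")   # zero disparity corresponds to a point at infinity
        return focal_length * baseline / d

    # Hypothetical example: f = 800 pixels, b = 0.06 m, disparity = 12 pixels -> Z = 4.0 m.
    print(depth_from_disparity(512.0, 500.0, focal_length=800.0, baseline=0.06))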

In an example, the virtual shockwave creation system includes an eye-mounted device 100. The eyewear 100 includes a frame 105, a left eyewear leg 110A extending from a left side 170A of the frame 105, and a right eyewear leg 110B extending from a right side 170B of the frame 105. The eye-worn device 100 also includes a depth capture camera. The depth capturing camera includes: (i) at least two visible light cameras having overlapping fields of view; or (ii) at least one visible light camera 114A-B and a depth sensor (element 213 of FIG. 2A). In one example, the depth capture camera includes a left visible light camera 114A having a left field of view 111A and connected to the frame 105 or the left eye-worn device leg 110A to capture a left image of the scene. The eye-worn device 100 also includes a right visible light camera 114B connected to the frame 105 or the right eye-worn device leg 110B and having a right field of view 111B to capture (e.g., simultaneously with the left visible light camera 114A) a right image of the scene that partially overlaps the left image.

The virtual shockwave creation system also includes a computing device, such as a host (e.g., mobile device 990 of fig. 9-10), coupled to the eye-mounted device 100 over a network. The virtual shockwave creation system also includes an image display (optical components 180A-B of the eye-worn device; image display 1080 of mobile device 990 in fig. 10) for rendering (e.g., displaying) video including images. The virtual shockwave creation system also includes an image display driver (element 942 of the eye-worn device 100 in FIG. 9; element 1090 of the mobile device 990 in FIG. 10) coupled to the image display (optical components 180A-B of the eye-worn device; image display 1080 of the mobile device 990 in FIG. 10) that controls the image display to render the initial video.

In some examples, user input is received to indicate a user desire to apply shockwaves to various initial depth images from an initial video. For example, the virtual shockwave creation system also includes a user input device for receiving a shockwave effect option from a user to apply shockwaves to the presented initial video. Examples of user input devices include a touch sensor (element 991 of FIG. 9 for the eye-worn device 100), a touch screen display (element 1091 of FIG. 10 for the mobile device 1090), and a mouse for a personal computer or laptop. The virtual shockwave creation system also includes a processor (element 932 of eye-mounted device 100 in fig. 9; element 1030 of mobile device 990 in fig. 10) coupled to eye-mounted device 100 and the depth capture camera. The virtual shockwave creation system also includes processor-accessible memory (element 934 of the eye-worn device 100 in fig. 9; elements 1040A-B of the mobile device 990 in fig. 10), and shockwave creation programming (element 945 of the eye-worn device 100 in fig. 9; element 945 of the mobile device 990 in fig. 10) in the memory, e.g., in the eye-worn device 100 itself, the mobile device (element 990 of fig. 9), or another portion of the virtual shockwave creation system (e.g., the server system 998 of fig. 9). The processor (element 932 of fig. 9) executes programming (element 945 of fig. 9) to configure the eye-worn device 100 to generate an initial depth image from initial images 957A-N of an initial video via a depth capture camera. The initial images 957A-N are in two-dimensional space, such as corrected raw images 858A-B or processed raw images 858A-B. Each initial depth image is associated with a time coordinate on the time (T) axis of the presentation time, e.g., based on the initial images 957A-B of the initial video. The initial depth image is formed from a matrix of vertices. Each vertex represents a pixel in the three-dimensional scene. Each vertex has a position attribute. The position attribute of each vertex is based on a three-dimensional position coordinate system including an X position coordinate on an X axis for a horizontal position, a Y position coordinate on a Y axis for a vertical position, and a Z position coordinate on a Z axis for a depth position.

The processor (element 1030 of fig. 10) executing the shockwave creation programming (element 945 of fig. 10) configures the mobile device (element 990 of fig. 10) of the virtual shockwave creation system to perform the following functions. The mobile device (element 990 of fig. 10) presents the initial video via the image display (element 1080 of fig. 10). The mobile device (element 990 of fig. 10) receives the user's shockwave effect option via the user input device (element 1091 of fig. 10) to apply the shockwave to the presented initial video. In response to the received shockwave effect option, the mobile device (element 990 of fig. 10) applies a transformation function to the vertices of each initial depth image based at least on the associated time coordinate of each initial depth image. The transformation function may transform respective shockwave regions of vertices grouped together along the Z-axis based at least on the associated time coordinates of the respective initial depth images. The transformation function moves the respective Y position coordinates of the vertices in the respective shockwave regions vertically up or down on the Y-axis, which appears as a depth deformation effect. In one example, the transformation function transforms all vertices of the initial depth image based on the X, Y, and/or Z position coordinates of the vertices and the associated time coordinate. In one example, the transformation function is an equation of the form: new Y = func(X, old Y, Z, T). An example transformation function for a wave traveling in the Z direction (so this particular function does not depend on X) is: new Y = f(Y, Z, T) = Y + 200/(exp(20/3 - abs(Z - 300 × T)/150) + 1) - 200. The transformation is applied at each vertex, and is both space and time dependent. The transformation function is applied to create a new modified set of vertices or a three-dimensional image without a depth map.
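For concreteness, the example transformation function quoted above can be written as the following Python/NumPy sketch; the variable names are illustrative, and the behavior notes in the comments follow directly from the formula:

    import numpy as np

    def shockwave_transform(y: np.ndarray, z: np.ndarray, t: float) -> np.ndarray:
        # new Y = Y + 200 / (exp(20/3 - |Z - 300*T| / 150) + 1) - 200
        return y + 200.0 / (np.exp(20.0 / 3.0 - np.abs(z - 300.0 * t) / 150.0) + 1.0) - 200.0

    # The wavefront center travels along the Z-axis at 300 units per unit of time T.
    # Vertices within roughly 1000 depth units of the center are pushed down by close to
    # 200 units; vertices far from that band keep their original Y position coordinate.
    y = np.zeros(3)
    z = np.array([300.0, 1300.0, 2500.0])
    print(np.round(shockwave_transform(y, z, t=1.0), 1))   # approximately [-199.7, -100.0, -0.1]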

The mobile device (element 990 of fig. 10) generates, for each initial depth image, a respective shockwave depth image by applying the transformation function to the respective initial depth image. The mobile device (element 990 of fig. 10) may generate the respective shockwave depth image by applying the transformation function to the position attribute of each vertex in the respective shockwave region of the respective initial depth image. The mobile device (element 990 of fig. 10) creates a warped shockwave video comprising a series of the generated warped shockwave images, and presents the warped shockwave video via the image display (image display 1080 of fig. 10). The various shockwave creation programming (element 945 of figs. 9-10) functions described herein may be implemented within other portions of the virtual shockwave creation system, such as the eye-worn device 100 or another host, such as a server system (element 998 of fig. 9), in addition to the mobile device (element 990 of fig. 10).
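A minimal sketch of that per-frame flow, assuming Python/NumPy and assuming each initial depth image has been flattened into an N x 3 array of (X, Y, Z) vertices; the function name and data layout are illustrative, and shockwave_transform from the previous sketch can be passed as the transform argument:

    from typing import Callable, List, Tuple
    import numpy as np

    def create_warped_shockwave_video(
        depth_video: List[Tuple[float, np.ndarray]],                   # (time coordinate T, N x 3 vertex array)
        transform: Callable[[np.ndarray, np.ndarray, float], np.ndarray],
    ) -> List[Tuple[float, np.ndarray]]:
        # For each initial depth image, compute new Y position coordinates from the old Y,
        # the Z position coordinates, and the associated time coordinate, then collect the
        # warped shockwave images in presentation-time order.
        warped_video = []
        for t, vertices in depth_video:
            warped = vertices.copy()
            warped[:, 1] = transform(vertices[:, 1], vertices[:, 2], t)
            warped_video.append((t, warped))
        return warped_video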

In some examples, the received shockwave effect option generates a shockwave-creating photo filter effect that is applied to the initial video via the transformation function in response to a finger swiping across the touch screen display (e.g., the combined image display 1080 and user input device 1091). The warped shockwave video with the shockwave-creating photo filter effect can then be shared with friends over a network via a chat application executing on the mobile device (element 990 of fig. 10).

Fig. 1B is a cross-sectional top view of the right block 110B of the eye-worn device 100 of fig. 1A, depicting the right visible light camera 114B of the depth capture camera and the circuit board. Fig. 1C is a left side view of the example hardware configuration of the eye-worn device 100 of fig. 1A, showing the left visible light camera 114A of the depth capture camera. FIG. 1D is a cross-sectional top view of the left block 110A of the eye-worn device of FIG. 1C, depicting the left visible light camera 114A of the depth capture camera and the circuit board. The structure and layout of the left visible light camera 114A are substantially similar to those of the right visible light camera 114B, except that it is connected and coupled to the left side 170A. As shown in the example of fig. 1B, the eye-worn device 100 includes a right visible light camera 114B and a circuit board, which may be a flexible Printed Circuit Board (PCB) 140B. The right hinge 226B connects the right block 110B to the right eye-worn device leg 125B of the eye-worn device 100. In some examples, the right visible light camera 114B, the flexible PCB 140B, or other electrical connectors or contacts may be located on the right eye-worn device leg 125B or the right hinge 226B.

The right block 110B includes a block body 211 and a block cap, which is omitted in the cross section of fig. 1B. Disposed within the right block 110B are various interconnected circuit boards, such as PCBs or flexible PCBs, including controller circuitry for the right visible light camera 114B, a microphone, low-power wireless circuitry (e.g., for wireless short-range network communication via Bluetooth™), and high-speed wireless circuitry (e.g., for wireless local area network communication via WiFi).

The right visible light camera 114B is coupled to or disposed on the flexible PCB 140B and is covered by a visible light camera lens cover that is aligned with an opening formed in the frame 105. For example, the right edge 107B of the frame 105 is connected to the right block 110B and includes the opening for the visible light camera lens cover. The frame 105 includes a front-facing side configured to face outward, away from the user's eyes. The opening for the visible light camera lens cover is formed on and through the front-facing side. In an example, the right visible light camera 114B has an outward-facing field of view 111B corresponding to a line of sight or perspective of the right eye of the user of the eye-worn device 100. The visible light camera lens cover may also be adhered to an outward-facing surface of the right block 110B, in which an opening is formed with an outward-facing angle of view but aimed in a different outward direction. The coupling may also be an indirect coupling via intermediate components.

The left (first) visible light camera 114A is connected to the left image display of the left optical assembly 180A to capture, in the left raw image, the scene observed through the left eye of the wearer of the eye-worn device 100. The right (second) visible light camera 114B is connected to the right image display of the right optical assembly 180B to capture, in the right raw image, the scene observed through the right eye of the wearer of the eye-worn device 100. The left raw image and the right raw image partially overlap to present the three-dimensional observable space of the generated depth images.

The flexible PCB 140B is disposed within the right block 110B and is coupled to one or more other components in the right block 110B. Although shown as being formed on the circuit boards of the right block 110B, the right visible light camera 114B may be formed on the circuit boards of the left block 110A, the eye-worn device legs 125A-B, or the frame 105.

Fig. 2A is a right side view of another example hardware configuration of the eye-mounted device 100 for a virtual shockwave creation system. As shown, the depth capture camera includes a left visible light camera 114A and a depth sensor 213 on the frame 105 to generate an initial depth image (e.g., in an initial video) of a series of initial depth images. Rather than using at least two visible light cameras 114A-B to generate an initial depth image, a single visible light camera 114A and depth sensor 213 are used to generate a depth image, such as an initial depth image. As in the example of fig. 1A-D, the user's shockwave effect option is applied to the initial depth image of the initial video to generate a morphed shockwave image of the morphed shockwave video. The infrared camera 220 of the depth sensor 213 has an outward facing field of view that substantially overlaps the left visible light camera 114A to obtain a line of sight for the user. As shown, infrared emitter 215 and infrared camera 220 are located above left edge 107A along with left visible light camera 114A.

In the example of fig. 2A, the depth sensor 213 of the eye-worn device 100 includes an infrared emitter 215 and an infrared camera 220 that captures infrared images. The visible light cameras 114A-B typically include a blue light filter to block infrared light detection. In an example, the infrared camera 220 is essentially a visible light camera, such as a low-resolution Video Graphics Array (VGA) camera (e.g., 640 x 480 pixels, for a total of about 0.3 megapixels), with the blue filter removed. The infrared emitter 215 and infrared camera 220 are both located on the frame 105; for example, both are shown attached to an upper portion of the left edge 107A. As described in further detail below, one or more of the frame 105 or the left and right blocks 110A and 110B include a circuit board that includes the infrared emitter 215 and infrared camera 220. For example, the infrared emitter 215 and infrared camera 220 may be connected to the circuit board by soldering.

Other arrangements of the infrared emitter 215 and infrared camera 220 may also be implemented, including arrangements in which both the infrared emitter 215 and the infrared camera 220 are on the right edge 107B, or in different locations on the frame 105, e.g., the infrared emitter 215 on the left edge 107A and the infrared camera 220 on the right edge 107B. However, the at least one visible light camera 114A and the depth sensor 213 typically have substantially overlapping fields of view to generate three-dimensional depth images. In another example, the infrared emitter 215 is on the frame 105 and the infrared camera 220 is on one of the blocks 110A-B, or vice versa. The infrared emitter 215 may be attached substantially anywhere on the frame 105, left block 110A, or right block 110B to emit an infrared pattern in the user's visual range. Similarly, the infrared camera 220 may be coupled substantially anywhere on the frame 105, left block 110A, or right block 110B to capture at least one reflection change in the emitted infrared light pattern of a three-dimensional scene within the user's visual range.

Infrared emitter 215 and infrared camera 220 are arranged to face outward to obtain an infrared image of a scene of objects or object features observed by a user wearing the eye-worn device 100. For example, the infrared emitter 215 and infrared camera 220 are positioned directly in front of the eyes, in the upper portion of the frame 105, or in the blocks 110A-B at either end of the frame 105, with a forward-facing field of view to capture an image of the scene at which the user is looking, for measurement of the depths of objects and object features.

In one example, the infrared emitter 215 of the depth sensor 213 emits infrared illumination, which may be near-infrared light or other short-wavelength, low-energy radiation, in the forward-facing field of view of the scene. Alternatively or additionally, the depth sensor 213 may include an emitter that emits light at wavelengths other than infrared, and the depth sensor 213 may also include a camera sensitive to those wavelengths that receives and captures images at those wavelengths. As described above, the eye-worn device 100 is coupled to a processor and memory, for example, in the eye-worn device 100 itself or in another part of the virtual shockwave creation system. The eye-worn device 100 or the virtual shockwave creation system can then process the captured infrared images during generation of three-dimensional depth images of the depth video (e.g., the initial depth images of the initial video).

Fig. 2B-C are rear views of an example hardware configuration of the eye-mounted device 100 including two different types of image displays. The eye-worn device 100 has a form configured to be worn by a user, which in the example is eyeglasses. The eye-worn device 100 may take other forms and may incorporate other types of frames, such as a headset, earphones, or helmet.

In the eyeglass example, the eyewear 100 comprises a frame 105, the frame 105 comprising a left edge 107A, the left edge 107A being connected to a right edge 107B via a nose bridge 106 adapted to the nose of the user. The left and right edges 107A-B include respective apertures 175A-B that retain respective optical elements 180A-B, such as lenses and display devices. As used herein, the term lens refers to a transparent or translucent glass or plastic cover sheet having curved and/or flat surfaces that cause light to converge/diverge or that cause little or no convergence or divergence.

Although shown as having two optical elements 180A-B, the eye-worn device 100 may include other arrangements, such as a single optical element or may not include any optical elements 180A-B, depending on the application or intended user of the eye-worn device 100. As further shown, the eye-worn device 100 includes a left block 110A adjacent a left side 170A of the frame 105 and a right block 110B adjacent a right side 170B of the frame 105. Blocks 110A-B may be integrated into frame 105 on respective sides 170A-B (as shown), or implemented as separate components attached to frame 105 on respective sides 170A-B. Alternatively, blocks 110A-B may be integrated into an eye-worn device leg (not shown) attached to frame 105.

In one example, the image display of the optical assemblies 180A-B includes an integrated image display. As shown in FIG. 2B, the optical assemblies 180A-B include a suitable display matrix 170, such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, or any other such display. The optical assemblies 180A-B also include an optical layer or layers 176, which may include lenses, optical coatings, prisms, mirrors, waveguides, light bars, and other optical components in any combination. The optical layers 176A-N may include prisms of suitable size and configuration that include a first surface for receiving light from the display matrix and a second surface for emitting light to the eyes of the user. The prisms of the optical layers 176A-N extend over all or at least a portion of the respective apertures 175A-B formed in the left and right edges 107A-B to allow the user to see the second surfaces of the prisms when the user's eyes are looking through the respective left and right edges 107A-B. The first surfaces of the prisms of the optical layers 176A-N face upward from the frame 105, and the display matrix is positioned over the prisms such that photons and light emitted by the display matrix strike the first surfaces. The prisms are sized and shaped such that light is refracted within the prisms and directed toward the user's eyes by the second surfaces of the prisms of the optical layers 176A-N. In this regard, the second surfaces of the prisms of the optical layers 176A-N may be convex to direct light toward the center of the eye. The prisms may optionally be sized and shaped to magnify the image projected by the display matrix 170, and the light passes through the prisms such that the image viewed from the second surfaces is larger in one or more dimensions than the image emitted from the display matrix 170.

In another example, the image display device of optical assemblies 180A-B comprises a projected image display as shown in FIG. 2C. The optical assemblies 180A-B include a laser projector 150, which is a three-color laser projector using a scanning mirror or galvanometer. During operation, a light source such as a laser projector 150 is disposed in or on one of the eye-worn device legs 125A-B of the eye-worn device 100. The optical components 180A-B include one or more light bars 155A-N spaced across the width of the optical component 180A-B lens or across the depth of the lens between the front and rear surfaces of the lens.

As the photons projected by the laser projector 150 traverse the lenses of the optical assemblies 180A-B, the photons encounter the light bars 155A-N. When a particular photon encounters a particular light bar, the photon is either directed toward the user's eye or passes to the next light bar. The combination of laser projector 150 modulation and light bar modulation may control specific photons or light. In an example, the processor controls the light bars 155A-N by a mechanical, acoustic, or electromagnetic initiation signal. Although shown as having two optical components 180A-B, the eye-worn device 100 can include other arrangements, such as single or three optical components, or the optical components 180A-B can take different arrangements depending on the application or intended user of the eye-worn device 100.

As further shown in fig. 2B-C, the eyewear device 100 includes a left block 110A adjacent a left side 170A of the frame 105 and a right block 110B adjacent a right side 170B of the frame 105. Blocks 110A-B may be integrated into frame 105 on respective sides 170A-B (as shown), or implemented as separate components attached to frame 105 on respective sides 170A-B. Alternatively, blocks 110A-B may be integrated into eye-worn device legs 125A-B attached to frame 105.

In one example, the image display includes a first (left) image display and a second (right) image display. The eye-worn device 100 includes first and second apertures 175A-B that hold respective first and second optical assemblies 180A-B. The first optical assembly 180A includes a first image display (e.g., the display matrix 170A of FIG. 2B, or the light bars 155A-N' of FIG. 2C and the projector 150A). The second optical assembly 180B includes a second image display (e.g., the display matrix 170B of FIG. 2B, or the light bars 155A-N "of FIG. 2C and the projector 150B).

Fig. 3 shows a rear perspective cross-sectional view of the eye-worn device of fig. 2A depicting infrared camera 220, frame front 330, frame rear 335, and circuit board. As can be seen, the upper portion of the left edge 107A of the frame 105 of the eye-worn device 100 includes a frame front portion 330 and a frame rear portion 335. The frame front 330 includes a forward side configured to face outwardly away from the user's eyes. The frame back 335 includes a rearward side configured to face inward toward the user's eyes. An opening of the infrared camera 220 is formed on the frame front 330.

As shown in the top circled cross-section 4-4 in the left edge 107A of the frame 105, a circuit board, which is a flexible Printed Circuit Board (PCB) 340, is sandwiched between the frame front 330 and the frame back 335. The left block 110A, attached to the left eye-worn device leg 325A via a left hinge 326A, is also shown in more detail. In some examples, components of depth sensor 213 (including infrared camera 220, flexible PCB 340, or other electrical connectors or contacts) may be located on the left eye-worn device leg 325A or the left hinge 326A.

In an example, the left block 110A includes a block body 311, a block cap 312, an inward facing surface 391, and an outward facing surface 392 (labeled but not visible). Disposed within the left block 110A are various interconnected circuit boards, such as PCBs or flexible PCBs, including controller circuitry for charging a battery, inward facing Light Emitting Diodes (LEDs), and outward (forward) facing LEDs. Although depth sensor 213, including infrared emitter 215 and infrared camera 220, is shown as being formed on the circuit board at the left edge 107A, it may be formed on the circuit board at the right edge 107B, for example, in combination with right visible light camera 114B, to capture infrared images for use in generating three-dimensional depth images of a depth video.

Fig. 4 is a cross-sectional view through infrared camera 220 and the frame, corresponding to cross-section 4-4 circled in the eye-worn device of fig. 3. The various layers of the eye-worn device 100 can be seen in the cross-section of fig. 4. As shown, the flexible PCB 340 is disposed on the frame back 335 and connected to the frame front 330. Infrared camera 220 is disposed on the flexible PCB 340 and covered by an infrared camera lens cover 445. For example, the infrared camera 220 is reflowed to the back of the flexible PCB 340. Reflow attaches infrared camera 220 to electrical contact pads formed on the back side of the flexible PCB 340 by subjecting the flexible PCB 340 to controlled heat that melts the solder paste to connect the two components. In one example, reflow is used to surface mount infrared camera 220 on the flexible PCB 340 and electrically connect the two components. However, it should be understood that through-holes may be used to connect leads from infrared camera 220 to the flexible PCB 340 via connecting wires, for example.

Frame front 330 includes an infrared camera opening 450 for infrared camera lens cover 445. An infrared camera opening 450 is formed on a forward side of the frame front 330 that is configured to face outwardly away from the user's eyes and toward the scene being viewed by the user. In this example, the flexible PCB 340 may be connected to the frame back 335 via a flexible PCB adhesive 460. Infrared camera lens cover 445 may be attached to frame front 330 via infrared camera lens cover adhesive 455. The connection may be an indirect connection via intermediate components.

Fig. 5 shows a rear perspective view of the eye-worn device of fig. 2A. The eye-worn device 100 includes an infrared emitter 215, an infrared camera 220, a frame front 330, a frame back 335, and a circuit board 340. As in fig. 3, it can be seen in fig. 5 that the upper portion of the left edge of the frame of the eye-worn device 100 includes the frame front 330 and the frame back 335. An opening for the infrared emitter 215 is formed on the frame front 330.

As shown in cross-section 6-6, circled in the upper middle of the left edge of the frame, the circuit board, which is a flexible PCB 340, is sandwiched between the frame front 330 and the frame back 335. Also shown in more detail is the left block 110A attached to the left eye-worn device leg 325A via the left hinge 326A. In some examples, components of depth sensor 213 (including infrared emitter 215, flexible PCB 340, or other electrical connectors or contacts) may be located on the left eye-worn device leg 325A or the left hinge 326A.

Fig. 6 is a cross-sectional view through infrared emitter 215 and the frame, corresponding to cross-section 6-6 circled for the eye-worn device of fig. 5. The various layers of the eye-worn device 100 are shown in cross-section in fig. 6, and as shown, the frame 105 includes a frame front 330 and a frame back 335. A flexible PCB 340 is disposed on the frame back 335 and connected to the frame front 330. Infrared emitter 215 is disposed on the flexible PCB 340 and covered by infrared emitter lens cover 645. For example, the infrared emitter 215 is reflowed to the back surface of the flexible PCB 340. Reflow attaches the infrared emitter 215 to the contact pads formed on the back side of the flexible PCB 340 by subjecting the flexible PCB 340 to controlled heat that melts the solder paste to connect the two components. In one example, reflow is used to surface mount infrared emitter 215 on the flexible PCB 340 and electrically connect the two components. However, it should be understood that vias may be used to connect the leads from infrared emitter 215 to the flexible PCB 340 via connecting wires, for example.

Frame front 330 includes an infrared emitter opening 650 for infrared emitter lens cover 645. An infrared emitter opening 650 is formed on a forward side of the frame front 330 that is configured to face outwardly away from the user's eyes and toward the scene the user is viewing. In this example, the flexible PCB 340 may be connected to the frame back 335 via a flexible PCB adhesive 460. Infrared emitter lens cover 645 may be attached to frame front 330 via infrared emitter lens cover adhesive 655. The coupling may also be an indirect coupling via intermediate components.

Fig. 7 depicts an example of an infrared light 781 emission pattern emitted by the infrared emitter 215 of the depth sensor 213. As shown, the change in reflection of the emitted pattern of infrared light 782 is captured as an infrared image by the infrared camera 220 of the depth sensor 213 of the eye-worn device 100. The change in reflection of the emitted pattern of infrared light 782 is used to measure the depth of a pixel in an original image (e.g., the left original image) to generate a three-dimensional depth image, e.g., an initial depth image of a series of initial depth images (e.g., in an initial video).

The depth sensor 213 in this example includes the infrared emitter 215 for projecting an infrared light pattern and the infrared camera 220 for capturing distorted infrared images of the infrared light as projected onto objects or object features in space, shown as a scene 715 observed by the wearer of the eye-worn device 100. For example, the infrared emitter 215 may emit infrared light 781 that falls on objects or object features within the scene 715 as a large number of dots. In some examples, the infrared light is emitted as a pattern of lines, a spiral, a pattern of concentric rings, or the like. Infrared light is generally invisible to the human eye. The infrared camera 220 is similar to a standard red, green, and blue (RGB) camera, but receives and captures images of light in the infrared wavelength range. For depth perception, the infrared camera 220 is coupled to an image processor (element 912 of FIG. 9) and shock wave creation programming (element 945), which determine time of flight based on the captured infrared image of the infrared light. For example, the warped dot pattern 782 in the captured infrared image may then be processed by the image processor to determine depth based on the displacement of the dots. Typically, nearby objects or object features produce a pattern of more widely dispersed dots, while distant objects produce a denser dot pattern. It should be appreciated that the foregoing functionality may be embodied in programming instructions of a shock wave creation program or application (element 945) found in one or more components of the system.

Fig. 8A depicts an example of infrared light captured by infrared camera 220 of depth sensor 213 having a left infrared camera field of view 812. Infrared camera 220 captures changes in the reflection of the emitted pattern of infrared light 782 in the three-dimensional scene 715 as an infrared image 859. As further shown, visible light is captured by the left visible light camera 114A having a left visible light camera field of view 111A as a left raw image 858A. Based on the infrared image 859 and the left raw image 858A, a three-dimensional initial depth image of the three-dimensional scene 715 is generated.

Fig. 8B depicts an example of visible light captured by the left visible light camera 114A and visible light captured by the right visible light camera 114B. Visible light is captured by the left visible light camera 114A having a left visible light camera field of view 111A as a left raw image 858A. Visible light is captured by right visible light camera 114B, which has right visible light camera field of view 111B, as right raw image 858B. Based on left original image 858A and right original image 858B, a three-dimensional initial depth image of three-dimensional scene 715 is generated.
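As an illustration of the stereo principle behind FIG. 8B, the following minimal Python sketch converts per-pixel disparity between rectified left and right images into depth using the pinhole relation Z = f·B/d. The function name and parameter values are illustrative assumptions, not details taken from this disclosure:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) to depth (in meters).

    Assumes rectified images, a pinhole camera model, and disparity = 0
    marking pixels with no stereo match.
    """
    depth = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = (focal_length_px * baseline_m) / disparity_px[valid]
    return depth

# Hypothetical values: 800 px focal length, 6 cm baseline between two cameras.
disparity = np.array([[16.0, 8.0],
                      [0.0,  4.0]])
print(depth_from_disparity(disparity, focal_length_px=800.0, baseline_m=0.06))
# Closer objects (larger disparity) yield smaller Z values.
```

In this model, the overlapping fields of view 111A-B are what make the per-pixel correlation, and hence the disparity, available in the first place.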

Fig. 9 is a high-level functional block diagram of an example virtual shockwave creation system 900, which includes a wearable device (e.g., eye-worn device 100), a mobile device 990, and a server system 998 connected via various networks. The eye-worn device 100 includes a depth capture camera, such as at least one of the visible light cameras 114A-B and the depth sensor 213 (shown as infrared emitter 215 and infrared camera 220). Alternatively, the depth capture camera can include at least two visible light cameras 114A-B (one associated with the left side 170A and one associated with the right side 170B). The depth capture camera generates the initial depth images 961A-N of the initial video 960, which are rendered three-dimensional (3D) models that are texture-mapped images of a red, green, and blue (RGB) imaged scene.

The mobile device 990 may be a smartphone, tablet, laptop, access point, or any other such device capable of connecting with the eye-worn device 100 using both a low-power wireless connection 925 and a high-speed wireless connection 937. The mobile device 990 is connected to a server system 998 and a network 995. The network 995 may include any combination of wired and wireless connections.

The eye-worn device 100 also includes two image displays (one associated with the left side 170A and one associated with the right side 170B) of the optical assembly 180A-B. The eye-worn device 100 also includes an image display driver 942, an image processor 912, low power circuitry 920, and high speed circuitry 930. The image display of optical assemblies 180A-B is used to present images and video, which may include a series of depth images, such as initial depth images 961A-N from initial video 960. The image display driver 942 is coupled to the image display of the optical assembly 180A-B to control the image display of the optical assembly 180A-B to present video including images, e.g., initial depth images 961A-N of the initial video 960 and deformed shockwave images 967A-N of the deformed shockwave video 964. The eye-worn device 100 also includes a user input device 991 (e.g., a touch sensor) for receiving a shock wave effect option of the user to apply shock waves to the presented initial video 960.

The components of the eye-worn device 100 shown in fig. 9 are located on one or more circuit boards, such as a PCB or flexible PCB located in an edge or in a leg of the eye-worn device. Alternatively or additionally, the described components may be located in a block, frame, hinge, or nosepiece of the eyewear 100. The left and right visible light cameras 114A-B may include digital camera elements such as Complementary Metal Oxide Semiconductor (CMOS) image sensors, charge coupled devices, lenses, or any other corresponding visible light capturing elements that may be used to capture data, including images of a scene with unknown objects.

The eye-worn device 100 includes a memory 934 that includes a shockwave creation program 945 to perform a subset or all of the functions described herein for shockwave creation, where a user's shockwave effect option is applied to the initial depth images 961A-N to generate warped shockwave images 967A-N. As shown, memory 934 also includes left raw image 858A captured by left visible light camera 114A, right raw image 858B captured by right visible light camera 114B, and infrared image 859 captured by infrared camera 220 of depth sensor 213.

As shown, the eye-worn device 100 includes an orientation sensor including, for example, the Inertial Measurement Unit (IMU) 972 depicted. Typically, an inertial measurement unit 972 is an electronic device that uses a combination of accelerometers and gyroscopes (and sometimes also magnetometers) to measure and report specific force, angular velocity, and sometimes the magnetic field around the body. In this example, the inertial measurement unit 972 determines a head direction of the wearer of the eye-worn device 100 that corresponds to the direction of the depth capture camera of the eye-worn device 100 when the associated depth image was captured, the head direction being used to transform the respective shock wave regions of the vertices 966A-N, as described below. The inertial measurement unit 972 operates by detecting linear acceleration using one or more accelerometers and detecting rate of rotation using one or more gyroscopes. A typical configuration of an inertial measurement unit includes one accelerometer, gyroscope, and magnetometer for each of three axes: a horizontal axis for left-to-right movement (X), a vertical axis for top-to-bottom movement (Y), and a depth or distance axis for near-to-far movement (Z). The gyroscope detects the gravity vector. Magnetometers determine rotation in the magnetic field (e.g., facing south, north, etc.), similar to a compass that generates a heading reference. The three accelerometers detect accelerations along the horizontal (X), vertical (Y), and depth (Z) axes defined above, which may be defined relative to the ground, the eye-worn device 100, the depth capture camera, or the user wearing the eye-worn device 100.
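For illustration, the sketch below estimates how far the device is tilted from vertical using a single accelerometer sample, treating the measured specific force as the gravity reaction vector. The axis convention (Y up) and the function name are assumptions made for this example, not details of IMU 972:

```python
import math

def tilt_from_vertical(ax, ay, az):
    """Angle (radians) between the device's Y axis and the measured gravity
    direction, assuming the device is near-static so the accelerometer
    reading approximates the gravity reaction vector (Y axis assumed up).
    """
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm == 0.0:
        raise ValueError("zero acceleration reading")
    # Cosine of the angle between the unit Y axis (0, 1, 0) and the reading.
    cos_angle = ay / norm
    return math.acos(max(-1.0, min(1.0, cos_angle)))

# A level head: reading is roughly (0, 9.81, 0), so the tilt is near 0.
print(tilt_from_vertical(0.0, 9.81, 0.0))    # ~0.0
# A head pitched about 45 degrees: gravity splits between the Y and Z axes.
print(tilt_from_vertical(0.0, 6.94, 6.94))   # ~0.785 (about pi/4)
```

An angle of this kind is one candidate for the deflection angle used later when adjusting the base plane of the vertices.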

Memory 934 includes head direction measurements corresponding to the horizontal axis (X-axis), the vertical axis (Y-axis), and the depth or distance axis (Z-axis), tracked (e.g., measured) by the inertial measurement unit 972. The head direction measurements can be used to determine the alignment of the depth capture camera, which in turn can be used to identify the base planes of the initial depth images 961A-N. In some applications of IMUs, the principal axes are referred to as the pitch, roll, and yaw axes. The shockwave creation program 945 is configured to perform the functions described herein with the inertial measurement unit 972.

The memory 934 also includes a plurality of initial depth images 961A-N generated via the depth capture cameras. The memory 934 also includes an initial video 960, the initial video 960 including a series of initial depth images 961A-N and associated time coordinates 963A-N. A flowchart outlining the functions that may be performed in the shockwave creation program 945 is shown in fig. 11. Memory 934 also includes a shockwave effect option 962 received by user input device 991, the shockwave effect option 962 being a user input indicating a desire to apply a shockwave effect on initial video 960. In some examples, the shock wave effect option 962 may affect the intensity or degree of shock waves imparted to the initial video 960 to distort the initial depth images 961A-N (e.g., by adjusting the amplitude or frequency of the shock waves). The memory 934 also includes a transformation matrix 965, shockwave regions of vertices 966A-N, affinity matrices 968A-N, a waveform 971, left and right correction images 969A-B (e.g., to remove vignetting towards the end of the lens), and image disparity 970, all of which are generated during image processing of the initial depth images 961A-N from the initial video 960 to generate corresponding warped shockwave images 967A-N of the warped shockwave video 964.

As shown in fig. 9, the high-speed circuitry 930 includes a high-speed processor 932, memory 934, and high-speed wireless circuitry 936. In this example, an image display driver 942 is coupled to the high-speed circuitry 930 and operated by the high-speed processor 932 to drive the left and right image displays of the optical assemblies 180A-B. The high-speed processor 932 may be any processor capable of managing the high-speed communications and the operation of any general computing system required by the eye-worn device 100. The high-speed processor 932 includes the processing resources needed to manage high-speed data transfers over a high-speed wireless connection 937 to a Wireless Local Area Network (WLAN) using the high-speed wireless circuitry 936. In certain embodiments, the high-speed processor 932 executes an operating system of the eye-worn device 100, such as a LINUX operating system or other such operating system, and the operating system is stored in the memory 934 for execution. In addition to any other responsibilities, the high-speed processor 932 executing the software architecture of the eye-worn device 100 is used to manage data transfers with the high-speed wireless circuitry 936. In certain embodiments, the high-speed wireless circuitry 936 is configured to implement the Institute of Electrical and Electronics Engineers (IEEE) 802.11 communication standard, also referred to herein as Wi-Fi. In other embodiments, other high-speed communication standards may be implemented by the high-speed wireless circuitry 936.

The low-power wireless circuitry 924 and the high-speed wireless circuitry 936 of the eye-worn device 100 may include short-range transceivers (Bluetooth™) and wireless wide area or local area network transceivers (e.g., cellular or WiFi). The mobile device 990, which includes a transceiver that communicates via the low-power wireless connection 925 and the high-speed wireless connection 937, may be implemented using details of the architecture of the eye-worn device 100, as may the other elements of the network 995.

Memory 934 comprises any memory device capable of storing various data and applications including, among other things, camera data generated by left and right visible light cameras 114A-B, infrared cameras 220, and image processor 912, as well as images and video generated for display on the image displays of optical assemblies 180A-B by image display driver 942. Although the memory 934 is shown as being integrated with the high-speed circuitry 930, in other embodiments, the memory 934 may be a separate element of the eye-mounted device 100. In some such embodiments, the routing lines of circuitry may provide connections from the image processor 912 or the low power processor 922 to the memory 934 through a chip that includes a high speed processor 932. In other embodiments, the high speed processor 932 may manage addressing of the memory 934 such that the low power processor 922 will boot the high speed processor 932 anytime a read or write operation involving the memory 934 is required.

As shown in FIG. 9, the processor 932 of the eye-worn device 100 can be coupled to the depth capture camera (visible light cameras 114A-B; or visible light camera 114A, infrared emitter 215, and infrared camera 220), the image display driver 942, the user input device 991, and the memory 934. As shown in fig. 10, the processor 1030 of the mobile device 990 may be coupled to the depth capture camera 1070, the image display driver 1090, the user input device 1091, and the memory 1040A. The eye-worn device 100 is capable of performing all or a subset of the functions described below through execution of the shockwave creation program 945 in the memory 934 by the processor 932 of the eye-worn device 100. The mobile device 990 is capable of performing all or a subset of the functions described below through execution of the shockwave creation program 945 in the memory 1040A by the processor 1030 of the mobile device 990. The functions may be divided in the virtual shockwave creation system 900 such that the eye-worn device 100 generates the initial depth images 961A-N of the initial video 960, while the mobile device 990 performs the remainder of the image processing on the initial depth images 961A-N of the initial video 960 to generate the morphed shockwave images 967A-N of the morphed shockwave video 964.

Execution of the shockwave creation program 945 by the processors 932, 1030 configures the virtual shockwave creation system 900 to perform various functions, including the function of generating initial depth images 961A-N by a depth capture camera based on initial images 957A-N from the initial video 960. Each initial depth image 961A-N is associated with a time coordinate on the time (T) axis of the presentation time, e.g., based on the initial images 957A-N in the initial video 960. Each initial depth image 961A-N is formed from a matrix of vertices. Each vertex represents a pixel in the three-dimensional scene 715. Each vertex has a position attribute. The position attribute of each vertex is based on a three-dimensional position coordinate system including an X position coordinate on an X axis for a horizontal position, a Y position coordinate on a Y axis for a vertical position, and a Z position coordinate on a Z axis for a depth position. Each vertex also includes one or more of a color attribute, a texture attribute, or a reflection attribute.
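As one way to picture the vertex matrix just described, the following sketch stores each depth image as a NumPy structured array of vertices with position and color attributes, tagged with a presentation-time coordinate. The field and function names are illustrative assumptions rather than identifiers from the disclosure:

```python
import numpy as np

# One vertex per sampled pixel: a 3D position plus an RGB color attribute.
vertex_dtype = np.dtype([
    ("x", np.float32),   # horizontal position (X axis)
    ("y", np.float32),   # vertical position (Y axis)
    ("z", np.float32),   # depth position (Z axis)
    ("rgb", np.uint8, 3),
])

def make_depth_image(height, width, time_coordinate_s):
    """Allocate an H x W matrix of vertices and tag it with its presentation
    time on the T axis (seconds), mirroring the time coordinates 963A-N."""
    vertices = np.zeros((height, width), dtype=vertex_dtype)
    return {"vertices": vertices, "time": time_coordinate_s}

frame = make_depth_image(480, 640, time_coordinate_s=0.00)
frame["vertices"]["z"][:] = 2.5   # e.g., every vertex 2.5 m from the camera
print(frame["vertices"].shape, frame["time"])
```

Texture or reflection attributes could be added as further fields of the same structured dtype.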

The virtual shockwave creation system 900 presents the initial video 960 via the image displays 180A-B, 1080, and receives the user's shockwave effect option 962 via the user input devices 991, 1091 to apply a shockwave to the presented initial video 960.

In response to the received shockwave effect option 962, the virtual shockwave creation system 900 applies a respective transformation function 965 to the vertices of each initial depth image 961A-N based at least on the associated time coordinate 963A-N of each initial depth image 961A-N. Transformation function 965 transforms respective shockwave regions of vertices 966A-N grouped together along the Z-axis based at least on the associated time coordinates 963A-N of respective initial depth images 961A-N. Transform function 965 moves the corresponding Y position coordinates of the vertices in the corresponding shockwave regions of vertices 966A-N vertically up or down on the Y axis. Applying the transformation function creates a new modified set of vertices or a three-dimensional image without a depth map.

The virtual shockwave creation system 900 generates, for each of the initial depth images 961A-N, a respective shockwave depth image 967A-N by applying a transformation function 965 to the respective initial depth image 961A-N. The function of applying the respective transform function 965 to the respective initial depth images 961A-N may include multiplying each vertex in the respective shockwave region of the respective initial depth images 961A-N vertices 966A-N by the transform function 965 to obtain a new Y-position coordinate on the three-dimensional position coordinate system.
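A minimal sketch of that per-frame step follows, under the assumption that the transformation amounts to adding a waveform-derived vertical offset to vertices whose depth falls inside the current shockwave region. The names, parameters, and the sinusoidal waveform are illustrative, not taken from the disclosure:

```python
import numpy as np

def apply_shockwave(y, z, region_z_min, region_z_max, amplitude, phase):
    """Return new Y coordinates: vertices whose Z lies inside the shockwave
    region are displaced vertically; all other vertices are unchanged.
    """
    y_out = y.copy()
    in_region = (z >= region_z_min) & (z < region_z_max)
    # Illustrative waveform: sinusoidal displacement across the region.
    y_out[in_region] += amplitude * np.sin(phase + z[in_region])
    return y_out

# Example: vertices at depths 1 m and 3 m; only the 2.5-3.5 m band moves.
y = np.array([0.0, 0.0])
z = np.array([1.0, 3.0])
print(apply_shockwave(y, z, region_z_min=2.5, region_z_max=3.5,
                      amplitude=0.2, phase=0.0))
```

Running this over every frame, with the region boundaries advanced per frame, yields the series of warped shockwave images.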

The virtual shockwave creation system 900 creates a deformed shockwave video 964 that includes a series of generated deformed shockwave images 967A-N. The virtual shockwave creation system 900 presents a distorted shockwave video 964 via the image display 180A-B, 1080. The functionality of presenting the morphed shockwave video 964 including a series of generated morphed shockwave images 967A-N via the image displays 180A-B, 1080 presents the appearance of a wave scrolling radially from the depth capture camera, radially from the object emitting the shockwave, or along the Z-axis of the morphed shockwave images 967A-N of the morphed shockwave video 964.

In one example of the virtual shockwave creation system 900, the processors include a first processor 932 and a second processor 1030. The memories include a first memory 934 and a second memory 1040A. The eye-worn device 100 includes a first network communication interface 924 or 936 for communicating over a network 925 or 937 (e.g., a wireless short-range network or a wireless local area network), the first processor 932 coupled to the first network communication interface 924 or 936, and the first memory 934 accessible to the first processor 932. The eye-worn device 100 also includes the shockwave creation program 945 in the first memory 934. Execution of the shockwave creation program 945 by the first processor 932 configures the eye-worn device 100 to perform the function of generating, via the depth capture camera, the initial depth images 961A-N of the initial video 960 and the associated time coordinates 963A-N.

The virtual shockwave creation system 900 also includes a host, such as a mobile device 990, coupled to the eye-mounted device 100 via a network 925 or 937. The host includes a second network communication interface 1010 or 1020 for communicating over a network 925 or 937, a second processor 1030 coupled to the second network communication interface 1010 or 1020, and a second memory 1040A accessible to the second processor 1030. The host also includes a shockwave creation program 945 in the second memory 1040A.

The second processor 1030 executing the shockwave creation program 945 configures the host to perform the function of receiving the initial video 960 from the eye-worn device 100 over the network via the second network communication interface 1010 or 1020. The second processor 1030 executes the shockwave creation program 945 to configure the host to present the initial video 960 via the image display 1080. The second processor 1030 executes the shockwave creation program 945 to configure the host to receive the user's shockwave effect option 962 via a user input device 1091 (e.g., a touch screen or computer mouse) to apply the shockwave to the presented initial video 960. The second processor 1030 executes the shockwave creation program 945 to configure the host to generate, in response to the received shockwave effect option 962, a respective shockwave depth image 967A-N for each initial depth image 961A-N by applying the transformation function 965 to the vertices of the respective initial depth image 961A-N based at least on the Y and Z position coordinates and the associated time coordinate 963A-N of each initial depth image 961A-N. The second processor 1030 executes the shockwave creation program 945 to configure the host to create the morphed shockwave video 964 that includes the series of generated morphed shockwave images 967A-N. The second processor 1030 executes the shockwave creation program 945 to configure the host to present the morphed shockwave video 964 via the image display 1080.

In this example, the eye-worn device 100 also includes the inertial measurement unit 972. The processor executing the programming configures the virtual shockwave creation system 900 to perform the following functions. During capture of the initial depth images 961A-N by the depth capture camera, rotation of the eye-worn device 100 is measured by the inertial measurement unit 972. For each of the initial depth images 961A-N, a respective rotation matrix 973A-N is determined to adjust the X, Y, and Z position coordinates of the vertices based on the rotation of the eye-worn device 100 measured during capture. The respective deformed shockwave images 967A-N are generated by applying the rotation matrices 973A-N to the vertices of the respective initial depth images 961A-N and then applying the transformation function 965.

In one example, the transform function 965 is applied to each initial depth image to move the respective Y-position coordinates of the vertices in the respective shockwave regions of vertices 966A-N vertically up or down on the Y-axis, causing the respective shockwave regions of vertices 966A-N to vertically fluctuate or oscillate. For each of the initial depth images 961A-N, the function of generating the respective shockwave depth image 967A-N includes applying the transformation function 965 to the respective initial depth image 961A-N to vertically undulate or oscillate the corresponding shockwave region of vertices 966A-N, and storing the respective initial depth image 961A-N with the vertical undulation or oscillation as the respective shockwave depth image 967A-N.

In some examples, transformation function 965 moves the respective Y-position coordinates of the vertices in the respective shockwave regions of vertices 966A-N vertically upward or downward based on a waveform 971. The waveform 971 provides the appearance of a wave rolling radially away from the depth capture camera, radially away from the object emitting the shockwave, or along the Z-axis of the morphed shockwave images 967A-N of the morphed shockwave video 964. This can provide a visual effect in which the scene of the morphed shockwave images 967A-N of the morphed shockwave video 964 appears to shake like the ground vibrating in an earthquake. Each initial depth image 961A-N includes a starting depth position on the Z-axis corresponding to the minimum depth of the respective initial depth image 961A-N and an ending depth position on the Z-axis corresponding to the maximum depth of the respective initial depth image 961A-N. The function of transforming the respective shockwave regions of vertices 966A-N along the Z-axis based at least on the associated time coordinates 963A-N of the respective initial depth images 961A-N further includes the following functions. For each of the series of initial depth images 961A-N, the respective shockwave region of vertices 966A-N is iteratively transformed along the Z-axis, progressing from the starting depth position to the ending depth position based on the associated time coordinates 963A-N. In response to reaching the ending depth position on the Z-axis or exceeding a restart time interval of the waveform 971, the iteration restarts selection of the respective shockwave region of vertices 966A-N at the starting depth position.
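The depth-band progression just described might look like the following sketch, where the position of the shock front along the Z-axis is derived from each frame's time coordinate and wraps back to the starting depth. The wave speed, band width, and restart interval are assumed parameters for illustration only:

```python
def shock_front_band(time_s, z_min, z_max, wave_speed=2.0, band_width=0.5,
                     restart_interval_s=None):
    """Return (band_start, band_end): the Z-axis slice of vertices to warp
    for a frame with the given time coordinate. The front advances from
    z_min toward z_max and wraps around when it reaches the end (or when
    an optional restart interval elapses).
    """
    span = z_max - z_min
    t = time_s
    if restart_interval_s is not None:
        t = time_s % restart_interval_s
    front = z_min + (wave_speed * t) % span
    return front, min(front + band_width, z_max)

# Frames at 0.0 s, 0.5 s, and 3.0 s for a scene spanning 1-5 m of depth.
for t in (0.0, 0.5, 3.0):
    print(t, shock_front_band(t, z_min=1.0, z_max=5.0))
```

Each returned band would then be fed to the per-frame warp sketched earlier as the shockwave region for that frame.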

In an example, an earlier initial depth image 961A is associated with an earlier time coordinate 963A on the time (T) axis of an earlier presentation time in the initial video 960. An intermediate initial depth image 961B is associated with an intermediate time coordinate 963B on the time (T) axis of an intermediate presentation time, after the earlier presentation time, in the initial video 960. The function of transforming the respective shockwave regions of vertices 966A-N along the Z-axis based at least on the associated time coordinates 963A-N of the respective initial depth images 961A-N includes the following functions. A near range shockwave region of the earlier initial depth image 961A, having vertices 966A of closer depth positions grouped together consecutively along the Z-axis, is transformed based on the earlier time coordinate 963A. An intermediate range shockwave region of the intermediate initial depth image 961B, having vertices 966B of intermediate depth positions grouped together consecutively along the Z-axis, is transformed based on the intermediate time coordinate. The near range shockwave region of vertices 966A is closer in depth along the Z-axis than the intermediate range shockwave region of vertices 966B.

In this example, a later initial depth image 961C is associated with a later time coordinate 963C on the time (T) axis of a later presentation time, after the intermediate presentation time of the intermediate initial depth image 961B, in the initial video 960. The function of transforming the respective shockwave regions of vertices 966A-N along the Z-axis based at least on the associated time coordinates 963A-N of the respective initial depth images 961A-N further includes transforming a remote shockwave region of the later initial depth image 961C, having vertices 966C of farther depth positions grouped together consecutively along the Z-axis, based on the later time coordinate 963C. The remote shockwave region of vertices 966C is farther in depth along the Z-axis than the intermediate range shockwave region of vertices 966B.

If the transformation matrix 965 were applied to a single vertex, a spike or shrinkage would occur. To generate smooth (curved) deformed shockwave images 967A-N, affinity matrices 968A-N are computed as regions of influence. For example, a polygon having a certain width and length, or a circle having a certain radius, may be defined. Then, the affinity of each vertex to the polygon or circle center is calculated (e.g., using edge detection or segmentation), so that each vertex has a weight between 0 and 1 describing how strongly the vertex is affected by the transformation function 965. Each vertex is then moved according to this weight. If the weight is 1, the vertex is transformed according to transformation function 965. If the weight is 0, the vertex does not move. If the weight is 1/2, the vertex moves halfway between its original position and the transformed position.

Thus, the processors 932, 1030 executing the shockwave creation program 945 configure the virtual shockwave creation system 900 to perform functions including calculating a respective affinity matrix 968A-N for the vertices of each initial depth image 961A-N to determine the impact weight of the transformation function 965 on each vertex in the respective shockwave region of vertices 966A-N. The impact weight is based at least on the vertical position of the vertex. For each of the initial depth images 961A-N, the function of generating the respective shockwave depth image 967A-N by applying the transformation function 965 to the respective initial depth image 961A-N is also based on the calculated respective affinity matrix 968A-N. The impact weight becomes larger as the height of the vertex relative to the base plane of the respective initial depth image 961A-N decreases, so that the transform function 965 moves the Y-position coordinate of the vertex vertically upward on the Y-axis to a greater degree. The impact weight becomes smaller as the height of the vertex relative to the base plane increases, so that the transformation function 965 moves the Y-position coordinate of the vertex vertically upward on the Y-axis to a lesser degree.
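A sketch of that weighted blend follows, assuming the affinity matrix is simply a per-vertex weight in [0, 1]; the height-based falloff used here is an illustrative choice, not the one defined in the disclosure:

```python
import numpy as np

def blend_with_affinity(y_original, y_transformed, weights):
    """Move each vertex part of the way toward its transformed Y position:
    weight 1 -> fully transformed, weight 0 -> unchanged, 0.5 -> halfway.
    """
    return y_original + weights * (y_transformed - y_original)

def height_falloff(y, base_plane_y, max_height=1.5):
    """Illustrative affinity: weight 1 at the base plane, fading to 0 for
    vertices max_height above it, so low vertices move the most.
    """
    height = np.clip(y - base_plane_y, 0.0, max_height)
    return 1.0 - height / max_height

y = np.array([0.0, 0.5, 1.5])            # heights above a base plane at y = 0
w = height_falloff(y, base_plane_y=0.0)
print(blend_with_affinity(y, y + 0.3, w))  # [0.3, 0.7, 1.5] (weights 1, 2/3, 0)
```

Any smooth falloff with the same endpoints would avoid the single-vertex spike described above.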

In an example, the virtual shockwave creation system 900 further includes an inertial measurement unit 872, similar to the one shown for the eye-worn device 100 in fig. 9 and for the mobile device 990 in fig. 10. The function of transforming the respective shockwave regions of vertices 966A-N along the Z-axis based at least on the associated time coordinates 963A-N of the respective initial depth images 961A-N includes the following functions. The orientation of the head of the wearer of the eye-worn device 100 is tracked via the inertial measurement unit 972. The wearer of the eye-worn device 100 is either the user who actually creates the morphed shockwave video 964 on the mobile device 990 or a different user who was wearing the eye-worn device 100 when the initial video 960 was generated. Based on the head direction, consecutive vertices along the Z-axis and the base plane of the respective initial depth images 961A-N are determined. The respective shockwave regions of vertices 966A-N are then transformed based at least on the base plane.

In this example, the functions of tracking the orientation of the wearer's head via inertial measurement unit 972 include the following functions. First, the head direction in the X-axis, Y-axis, Z-axis, or a combination thereof is measured via the inertial measurement unit 972. Second, a deflection angle of the depth capture camera in an X-axis, a Y-axis, a Z-axis, or a combination thereof is determined in response to the measured head direction. Third, the base plane of the vertex is adjusted based on the deflection angle, such as by reorienting the vertex based on the deflection angle such that one of the X-axis, Y-axis, or Z-axis is perpendicular to the ground.

In one example, the depth capture cameras of the eye-worn device 100 include at least two visible light cameras including a left visible light camera 114A having a left field of view 111A and a right visible light camera 114B having a right field of view 111B. The left field of view 111A and the right field of view 111B have overlapping fields of view 813 (see fig. 8B). The depth capture camera 1070 of the mobile device 990 may be similarly constructed.

Generating the initial video 960, which includes the series of initial depth images 961A-N and the associated time coordinates 963A-N, via the depth capture camera may include all or a subset of the following functions. First, a left raw image 858A including a left pixel matrix is captured via the left visible light camera 114A. Second, a right raw image 858B including a right pixel matrix is captured via the right visible light camera 114B. Third, a left corrected image 969A is created from the left raw image 858A and a right corrected image 969B is created from the right raw image 858B, which aligns the left and right raw images 858A-B and removes distortion from the respective lenses of the left and right visible light cameras 114A-B (e.g., vignetting at the lens edges). Fourth, image disparity 970 is extracted by correlating pixels in the left corrected image 969A with the right corrected image 969B to calculate a disparity for each correlated pixel. Fifth, the Z position coordinates of the vertices of the initial depth image 961A are calculated based at least on the extracted image disparity 970 for each correlated pixel. Sixth, the generated initial depth images 961A-N are ordered in the series of the initial video 960 based on timestamps captured when the left and right raw images 858A-B were captured, and the respective time coordinate 963A-N associated with each initial depth image 961A-N is set to that timestamp.
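For illustration only, one common way to carry out the fourth step with off-the-shelf tools is OpenCV's block matcher. This assumes the corrected (rectified) images already exist on disk under hypothetical file names, and it is not the specific correlation method used by the disclosure:

```python
import cv2
import numpy as np

# Load already-rectified (corrected) left and right images in grayscale.
left = cv2.imread("left_corrected.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_corrected.png", cv2.IMREAD_GRAYSCALE)

# Block matching returns disparity in 1/16-pixel fixed point; negative
# values mark pixels with no reliable correlation.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

print(disparity.shape, float(disparity.max()))
```

The Z position coordinates of the fifth step can then be obtained from this disparity map with the Z = f·B/d relation sketched earlier.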

In an example, the depth capture cameras of the eye-worn device 100 include at least one visible light camera 114A and a depth sensor 213 (e.g., an infrared emitter 215 and an infrared camera 220). The at least one visible light camera 114A and the depth sensor 213 have substantially overlapping fields of view 812 (see fig. 8A). Depth sensor 213 includes an infrared emitter 215 and an infrared camera 220. An infrared emitter 215 is coupled to the frame 105 or the eye-worn device legs 125A-B to emit an infrared light pattern. An infrared camera 220 is coupled to the frame 105 or the eye-worn device legs 125A-B to capture reflection variations in the infrared light emission pattern. The depth capture camera 1070 of the mobile device 990 may be similarly constructed.

Generating the initial depth image 961A via the depth capture camera may include all or a subset of the following functions. First, a raw image 858A is captured via the at least one visible light camera 114A. Second, a pattern of infrared light 781 is emitted via the infrared emitter 215 onto a plurality of objects or object features located in the scene 715 that are reached by the emitted infrared light 781. Third, an infrared image 859 of the changes in reflection of the emitted pattern of infrared light 782 on the plurality of objects or object features is captured via the infrared camera 220. Fourth, respective depths from the depth capture camera to the plurality of objects or object features are calculated based on the infrared image 859 of the changes in reflection. Fifth, the objects or object features in the reflection-change infrared image 859 are correlated with the raw image 858A. Sixth, the Z position coordinates of the vertices of the initial depth image 961A are calculated based at least on the calculated respective depths.

In one example, the user input device 991, 1091 includes a touch sensor including an input surface and a sensor array coupled to the input surface to receive at least one finger contact of a user input. The user input device 991, 1091 further includes sensing circuitry integrated into or connected to the touch sensor and connected to the processor 932, 1030. The sensing circuit is configured to measure a voltage to track at least one finger contact on the input surface. The function of receiving the user's shockwave effect option 962 via the user input device 991, 1091 includes receiving at least one finger contact of the user input on the input surface of the touch sensor.

The touch-based user input device 991 may be integrated into the eye-worn device 100. As described above, the eye-worn device 100 includes blocks 110A-B integrated into or coupled to the frame 105 on the sides 170A-B of the eye-worn device 100. Frame 105, eye-worn device legs 125A-B, or blocks 110A-B include a circuit board that includes touch sensors. The circuit board includes a flexible printed circuit board. The touch sensor is disposed on the flexible printed circuit board. The sensor array is a capacitive array or a resistive array. The capacitive array or resistive array includes a grid forming a two-dimensional rectangular coordinate system to track the X and Y axis position coordinates.

The server system 998 can be one or more computing devices that are part of a service or network computing system, including, for example, a processor, memory, and a network communication interface to communicate with the mobile device 990 and the eye-mounted device 100 over a network 995. The eye-worn device 100 is connected to a host. For example, the eye-worn device 100 is paired with the mobile device 990 via a high-speed wireless connection 937, or connected to a server system 998 via a network 995.

The output components of the eye-worn device 100 include visual components, such as the left and right image displays (e.g., displays such as Liquid Crystal Displays (LCDs), Plasma Display Panels (PDPs), Light Emitting Diode (LED) displays, projectors, or waveguides) of the optical assemblies 180A-B as described in fig. 2B-C. The left and right image displays of the optical assemblies 180A-B can present the initial video 960 comprising the series of initial depth images 961A-N and the deformed shockwave images 967A-N of the deformed shockwave video 964. The image displays of the optical assemblies 180A-B are driven by the image display driver 942. The image display driver 942 is coupled to the image displays to control them to present the initial video 960 and the distorted shockwave video 964. The output components of the eye-worn device 100 also include acoustic components (e.g., speakers), haptic components (e.g., vibration motors), other signal generators, and the like. The input components of the eye-worn device 100, the mobile device 990, and the server system 998 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., physical buttons, a touch screen providing the location and force of touches or gestures), audio input components (e.g., a microphone), and so forth.

The eye-worn device 100 may optionally include additional peripheral elements. Such peripheral elements may include biometric sensors, additional sensors, or display elements integrated with the eye-worn device 100. For example, a peripheral element may include any I/O component, including an output component, a motion component, a position component, or any other such element described herein.

For example, the biometric components include components for detecting expressions (e.g., hand expressions, facial expressions, voice expressions, body gestures, or eye tracking), measuring bio-signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition), and so forth. The motion components include acceleration sensor components (e.g., accelerometers), gravity sensor components, rotation sensor components (e.g., gyroscopes), and the like. The position components include a location sensor component (e.g., a GPS receiver component) that generates location coordinates, WiFi or Bluetooth™ transceivers for generating positioning system coordinates, an altitude sensor component (e.g., an altimeter or barometer for detecting barometric pressure from which altitude may be derived), a direction sensor component (e.g., a magnetometer), and the like. Such positioning system coordinates may also be received from the mobile device 990 over the wireless connections 925 and 937 via the low-power wireless circuitry 924 or the high-speed wireless circuitry 936.

FIG. 10 is a high-level functional block diagram of an example of a mobile device 990 communicating via the virtual shockwave creation system 900 of FIG. 9. The mobile device 990 includes a user input device 1091 for receiving a shockwave effect option 962 for a user to apply shockwaves to the initial depth images 961A-N of the presented initial video 960 to generate morphed shockwave images 967A-N of the morphed shockwave video 964.

The mobile device 990 includes flash memory 1040A, which includes a shockwave creation program 945 to perform all or a subset of the functions described herein for shockwave creation, wherein the user's shockwave effect option is applied to the initial video 960 to create a warped shockwave video 964. As shown, the memory 1040A also includes a left raw image 858A captured by the left visible light camera 114A, a right raw image 858B captured by the right visible light camera 114B, and an infrared image 859 captured by the infrared camera 220 of the depth sensor 213. The mobile device 990 can include a depth capture camera 1070 that, like the eye-worn device 100, includes either at least two visible light cameras (first and second visible light cameras with overlapping fields of view) or at least one visible light camera and a depth sensor with substantially overlapping fields of view. When the mobile device 990 includes the same components as the eye-worn device 100, such as the depth capture camera, the left raw image 858A, the right raw image 858B, and the infrared image 859 may be captured via the depth capture camera 1070 of the mobile device 990.

The memory 1040A also includes a plurality of initial depth images 961A-N generated via a depth capture camera of the eye-worn device 100 or via a depth capture camera 1070 of the mobile device 990 itself. The memory 1040A also includes an initial video 960, the initial video 960 including a series of initial depth images 961A-N and associated time coordinates 963A-N. A flowchart outlining the functions that may be implemented in the shockwave creation program 945 is shown in fig. 11. The memory 1040A also includes a shockwave effect option 962, received by the user input device 1091, that is a user input indicating a desire to apply a shockwave effect on the initial video 960. In some examples, the shock wave effect option 962 may affect the intensity or degree of shock waves imparted to the initial video 960 to distort the initial depth images 961A-N (e.g., by adjusting the amplitude or frequency of the shock waves). The memory 1040A also includes a transformation matrix 965, shock wave regions of vertices 966A-N, affinity matrices 968A-N, a waveform 971, left and right correction images 969A-B (e.g., to eliminate vignetting towards the end of the lens), and image disparity 970, all of which are generated during image processing of the initial depth images 961A-N of the initial video 960 to generate corresponding warped shock wave images 967A-N of the warped shock wave video 964.

As shown, the mobile device 990 includes an image display 1080, an image display driver 1090 for controlling the display of images, and a user input device 1091 similar to the eye-mounted device 100. In the example of fig. 10, image display 1080 and user input device 1091 are integrated together into a touch screen display.

Examples of touch screen type mobile devices that may be used include, but are not limited to, smart phones, Personal Digital Assistants (PDAs), tablet computers, notebook computers, or other portable devices. However, the structure and operation of a touch screen type device is provided by way of example, and the subject technology as described herein is not intended to be so limited. For purposes of this discussion, fig. 10 provides a block diagram illustration of an example mobile device 990 having a touch screen display for displaying content and receiving user input as (or as part of) a user interface.

The activities that are the focus of discussion herein typically involve data communications related to processing the initial depth images 961A-N of the initial video 960 to generate the distorted shockwave images 967A-N to create the distorted shockwave video 964 in the portable eye-worn device 100 or the mobile device 990. As shown in fig. 10, the mobile device 990 includes at least one digital transceiver (XCVR) 1010, shown as a WWAN XCVR, for digital wireless communications via a wide area wireless mobile communications network. The mobile device 990 may also include additional digital or analog transceivers, such as short-range transceivers (XCVRs) 1020 for short-range network communications via NFC, VLC, DECT, ZigBee, Bluetooth™, or WiFi. For example, the short-range XCVR 1020 may take the form of any available bidirectional Wireless Local Area Network (WLAN) transceiver of a type compatible with one or more standard communication protocols implemented in wireless local area networks, such as one of the Wi-Fi standards under IEEE 802.11 and WiMAX.

To generate location coordinates for locating the mobile device 990, the mobile device 990 may include a Global Positioning System (GPS) receiver. Alternatively or additionally, the mobile device 990 may utilize either or both of the short-range XCVR 1020 and the WWAN XCVR 1010 to generate location coordinates for positioning. For example, positioning systems based on cellular networks, WiFi, or Bluetooth™ can generate very accurate location coordinates, especially when used in combination. Such location coordinates may be transmitted to the eye-worn device over one or more network connections via the XCVRs 1010, 1020.

The transceivers 1010, 1020 (network communication interfaces) are compliant with one or more of the various digital wireless communication standards used by modern mobile networks. Examples of WWAN transceivers 1010 include, but are not limited to, transceivers configured to operate in accordance with Code Division Multiple Access (CDMA) and third generation partnership project (3GPP) network technologies, including but not limited to 3GPP type 2 (or 3GPP2) and LTE, sometimes referred to as "4G. For example, the transceivers 1010, 1020 provide two-way wireless communication of information including digitized audio signals, still images and video signals, web page information and network-related inputs for display, and various types of mobile information communication to/from the mobile device 990 for shockwave creation.

As described above, several of these types of communications through the transceivers 1010, 1020 and the network involve protocols and procedures that support communication with the eye-worn device 100 or the server system 998 to create a shockwave, such as transmitting the left raw image 858A, the right raw image 858B, the infrared image 859, the initial video 960, the initial depth images 961A-N, the time coordinates 963A-N, the warped shockwave video 964, and the warped shockwave images 967A-N. For example, such communications may transmit packet data to and from the eye-worn device 100 via the short-range XCVR 1020 over the wireless connections 925 and 937 as shown in fig. 9. Such communications may also transfer data using IP packet data transmissions via the WWAN XCVR 1010, for example, over a network (e.g., the internet) 995 as shown in fig. 9. The WWAN XCVR 1010 and the short-range XCVR 1020 are both connected to an associated antenna (not shown) by Radio Frequency (RF) transmit and receive amplifiers (not shown).

Mobile device 990 further comprises a microprocessor, shown as CPU 1030, sometimes referred to herein as the master controller. A processor is a circuit having elements constructed and arranged to perform one or more processing functions, typically various data processing functions. Although discrete logic components may be used, the examples utilize components forming a programmable CPU. For example, a microprocessor includes one or more Integrated Circuit (IC) chips incorporating electronic components that perform the functions of a CPU. The processor 1030 may, for example, be based on any known or available microprocessor architecture, such as Reduced Instruction Set Computing (RISC) using the ARM architecture, as is commonly used today in mobile devices and other portable electronic devices. Of course, other processor circuitry may also be used to form the CPU 1030 or the processor hardware of smartphones, laptops, and tablets.

Microprocessor 1030 acts as a programmable master controller for mobile device 990 by configuring mobile device 990 to perform various operations, e.g., according to instructions or programming that can be executed by processor 1030. For example, such operations may include various general operations of a mobile device, as well as operations related to the shockwave creation program 945 and communications with the eye-worn device 100 and the server system 998. While the processor may be configured through the use of hardwired logic, a typical processor in a mobile device is a general purpose processing circuit that is configured through the execution of programming.

Mobile device 990 includes a memory or storage device system for storing data and programming. In this example, the memory system may include flash memory 1040A and Random Access Memory (RAM) 1040B. RAM 1040B is used for short-term storage of instructions and data processed by processor 1030, for example, as working data processing memory. Flash memory 1040A typically provides long-term storage.

Thus, in the example of mobile device 990, the flash memory 1040A is used to store programming or instructions for execution by the processor 1030. Depending on the type of device, the mobile device 990 stores and runs a mobile operating system through which particular applications, including the shockwave creation program 945, are executed. An application such as the shockwave creation program 945, which may be a native application, a hybrid application, or a web application (e.g., a dynamic web page executed by a web browser), runs on the mobile device 990 to create the warped shockwave video 964 from the initial video 960 based on the received shockwave effect option 962. Examples of mobile operating systems include Google Android, Apple iOS (iPhone or iPad devices), Windows Mobile, Amazon Fire OS, RIM BlackBerry OS, and the like.

It should be understood that the mobile device 990 is merely one type of host in the virtual shockwave creation system 900, and that other arrangements may be utilized. For example, after the initial depth images 961A-N are generated via the depth capture camera of the eye-worn device 100, a server system 998 such as the one shown in fig. 9 may create the shockwaves in the initial video 960.

FIG. 11 is a flow diagram of a method with steps that may be implemented in the virtual shockwave creation system 900 to apply shockwaves to the initial depth images 961A-N of an initial video 960 to generate warped shockwave images 967A-N and create a warped shockwave video 964. Because the contents of the blocks of fig. 11 have been described in detail above, they are not repeated here.

Beginning at block 1100, the method includes generating a series of initial depth images 961A-N from initial images 957A-N of an initial video 960 via a depth capture camera.

Proceeding now to block 1110, the method further includes determining a respective rotation matrix 973A-N for each of the initial depth images 961A-N. The respective rotation matrices 973A-N are used to adjust the X, Y, and/or Z position coordinates of the vertices based on the detected rotation of the depth capture camera. For example, the rotation matrices 973A-N may be 2 × 2 or 3 × 3 matrices with X, Y, and/or Z axis position adjustments or angles that normalize the vertices of the captured initial depth images 961A-N to a bottom plane in order to correct for camera rotation.
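
As a hedged illustration of block 1110, the sketch below applies a roll-correction rotation matrix to every vertex of one initial depth image so that the bottom plane stays level. The assumption that the detected camera rotation is a pure roll about the Z axis, together with the function and parameter names, is made only for this example.

    import numpy as np

    def normalize_vertices(vertices, roll_radians):
        """Rotate all vertices so the bottom plane stays level when the depth
        capture camera is tilted.

        vertices     : (H, W, 3) array of (X, Y, Z) vertex coordinates.
        roll_radians : camera roll about the Z (depth) axis, e.g. as reported
                       by an inertial measurement unit (assumed input).
        """
        c, s = np.cos(-roll_radians), np.sin(-roll_radians)
        # 3x3 rotation about the Z axis; a 2x2 matrix acting on (X, Y) would be
        # equivalent here because a pure roll leaves Z unchanged.
        rotation = np.array([[c, -s, 0.0],
                             [s,  c, 0.0],
                             [0.0, 0.0, 1.0]])
        return vertices @ rotation.T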

Continuing to block 1120, the method further includes generating a respective warped shockwave image 967A-N for each of the initial depth images 961A-N by applying the respective rotation matrix 973A-N and a transformation function 965 to the vertices of the respective initial depth image 961A-N. The transformation function 965 transforms a respective shockwave region of vertices 966A-N, grouped together along the Z axis, based at least on the associated time coordinate 963A-N of the respective initial depth image 961A-N. The transformation function 965 moves the Y position coordinates of the vertices in the respective shockwave region of vertices 966A-N vertically up or down on the Y axis based on a waveform 971. Moving now to block 1130, the method further includes creating a warped shockwave video 964 that includes the series of generated warped shockwave images 967A-N.
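
The following minimal sketch conveys the general idea of blocks 1120 and 1130: for each frame, vertices whose Z depth falls within a band that advances with the frame's time coordinate form the shockwave region, and the Y coordinates of those vertices are displaced by a waveform. The linear wave-front speed, the sinusoidal waveform, and every parameter value are assumptions chosen for illustration; the actual waveform 971 and grouping rule are those defined by the disclosure.

    import numpy as np

    def apply_shockwave(vertices, t, wave_speed=2.0, band_width=0.5,
                        amplitude=0.15, wavelength=0.4):
        """Apply an illustrative shockwave transformation to one depth frame.

        vertices : (H, W, 3) array of (X, Y, Z) vertex coordinates.
        t        : time coordinate of the frame in seconds.
        All keyword values are assumed, not taken from the disclosure.
        """
        out = vertices.copy()
        z = vertices[..., 2]
        front = wave_speed * t                    # how far the wave has traveled
        in_band = np.abs(z - front) < band_width  # shockwave region of vertices
        # Vertical displacement: a sinusoidal ripple inside the band moves the
        # Y coordinates of the region up or down.
        phase = (z[in_band] - front) / wavelength * 2.0 * np.pi
        out[in_band, 1] += amplitude * np.sin(phase)
        return out

    def create_shockwave_video(initial_depth_images, time_coordinates):
        """Build the sequence of warped frames from the initial depth images."""
        return [apply_shockwave(v, t)
                for v, t in zip(initial_depth_images, time_coordinates)]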

Ending now at block 1140, the method further comprises presenting the warped shockwave video 964 via the image display 180A-B or 1080. Presenting the warped shockwave video 964, which includes the series of generated warped shockwave images 967A-N, via the image display 180A-B or 1080 creates the appearance of a wave rolling radially away from the depth capture camera, radially away from an object emitting the shockwave, or along the Z axis of the warped shockwave images 967A-N of the warped shockwave video 964.

FIGS. 12A-B show an example of a first raw image 858A captured by one of the visible light cameras 114A-B and a first shockwave region of vertices 966A of the generated first initial depth image 961A to which the transformation function 965 is applied, respectively. A first time coordinate 963A set to 0.00 seconds is associated with the first raw image 858A during capture, so the corresponding first initial depth image 961A and first shockwave depth image 967A are also associated with the first time coordinate 963A of 0.00 seconds. In FIG. 12A, the first raw image 858A is depicted as captured by one of the visible light cameras 114A-B before any image processing (e.g., correction). Hence, the first raw image 858A has a fisheye appearance caused by vignetting of the visible light cameras 114A-B. The first raw image 858A includes various two-dimensional pixels with X and Y position coordinates on the X axis 1205 and the Y axis 1210. The corresponding first initial depth image 961A of the series of initial depth images 961A-N in the initial video 960 is then generated using the techniques described previously. In FIG. 12B, the Z axis 1215 is depicted as overlaid on the generated first shockwave depth image 967A of the created warped shockwave video 964. The bottom plane 1220 of the first shockwave depth image 967A is continuous along the Z axis 1215. In addition to the orientation sensor techniques disclosed above for identifying the bottom plane 1220 (e.g., using the inertial measurement unit 972), a heuristic may be utilized that assumes the bottom plane 1220 is located somewhere between five and six feet below the vertical position of the depth capture camera that generated the first initial depth image 961A. This assumes that a person of average height is wearing the eye-worn device 100 and is not skewing or rotating his or her head when capturing the first raw image 858A; in this case, the person stands five to six feet above the bottom plane (e.g., ground level). In FIG. 12B, the application of a first transformation function 965A over the first shockwave region 966A is depicted, and because the first shockwave region 966A is at a close distance (e.g., short depth/distance) on the Z axis 1215, the shockwave appears to be within the close distance.
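
A minimal sketch of the bottom-plane heuristic just described might look as follows, assuming vertex coordinates in meters and a Y axis that increases downward from the depth capture camera; the function name, the thresholds, and the axis convention are illustrative assumptions.

    import numpy as np

    FEET_TO_METERS = 0.3048

    def estimate_bottom_plane(vertices, min_drop_ft=5.0, max_drop_ft=6.0):
        """Flag vertices likely to lie on the bottom plane, assuming the camera
        of the eye-worn device sits roughly five to six feet above the ground.

        vertices : (H, W, 3) array of (X, Y, Z) coordinates in meters.
        Returns a boolean (H, W) mask of candidate ground vertices.
        """
        y = vertices[..., 1]
        lo = min_drop_ft * FEET_TO_METERS
        hi = max_drop_ft * FEET_TO_METERS
        return (y >= lo) & (y <= hi)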

FIGS. 13A-B show an example of a second raw image 858B captured by one of the visible light cameras 114A-B and a second shockwave region of vertices 966B of the generated second initial depth image 961B to which the transformation function 965 is applied, respectively. A second time coordinate 963B set to 0.25 seconds is associated with the second raw image 858B during capture, so the corresponding second initial depth image 961B of the initial video 960 and the second shockwave depth image 967B of the warped shockwave video 964 are also associated with the second time coordinate 963B of 0.25 seconds. In FIG. 13B, the application of a second transformation function 965B over the second shockwave region 966B is depicted, and because the second shockwave region 966B is in an intermediate region (e.g., medium depth/distance) on the Z axis 1215, the shockwave appears to be within the intermediate region.

FIGS. 14A-B illustrate an example of a third raw image 858C captured by one of the visible light cameras 114A-B and a third shockwave region of vertices 966C of the generated third initial depth image 961C to which the transformation function 965 is applied, respectively. A third time coordinate 963C set to 0.50 seconds is associated with the third raw image 858C during capture, so the corresponding third initial depth image 961C of the initial video 960 and the third shockwave depth image 967C of the warped shockwave video 964 are also associated with the third time coordinate 963C of 0.50 seconds. In FIG. 14B, the third shockwave region 966C is depicted, but no shockwave occurs because the third shockwave region 966C is either no longer on the bottom plane 1220 or has reached a termination depth position on the Z axis 1215.

FIGS. 15A-B illustrate an example of a fourth raw image 858D captured by one of the visible light cameras 114A-B and a fourth shockwave region of vertices 966D of the generated fourth initial depth image 961D to which the transformation function 965 is applied, respectively. A fourth time coordinate 963D set to 0.75 seconds is associated with the fourth raw image 858D during capture, so the corresponding fourth initial depth image 961D of the initial video 960 and the fourth shockwave depth image 967D of the warped shockwave video 964 are also associated with the fourth time coordinate 963D of 0.75 seconds. In FIG. 15B, the application of a fourth transformation function 965D over the fourth shockwave region 966D is depicted, and because the fourth shockwave region 966D is at a close distance (e.g., short depth/distance) on the Z axis 1215, the shockwave appears to be within the close distance.

FIGS. 16A-B illustrate an example of a fifth raw image 858E captured by one of the visible light cameras 114A-B and a fifth shockwave region of vertices 966E of the generated fifth initial depth image 961E to which the transformation function 965 is applied, respectively. A fifth time coordinate 963E set to 1.00 seconds is associated with the fifth raw image 858E during capture, so the corresponding fifth initial depth image 961E of the initial video 960 and the fifth shockwave depth image 967E of the warped shockwave video 964 are also associated with the fifth time coordinate 963E of 1.00 seconds. In FIG. 16B, the application of a fifth transformation function 965E over the fifth shockwave region 966E is depicted, and because the fifth shockwave region 966E is in an intermediate region (e.g., medium depth/distance) on the Z axis 1215, the shockwave appears to be within the intermediate region.

FIGS. 17A-B illustrate an example of a sixth raw image 858F captured by one of the visible light cameras 114A-B and a sixth shockwave region of vertices 966F of the generated sixth initial depth image 961F to which the transformation function 965 is applied, respectively. A sixth time coordinate 963F set to 1.25 seconds is associated with the sixth raw image 858F during capture, so the corresponding sixth initial depth image 961F of the initial video 960 and the sixth shockwave depth image 967F of the warped shockwave video 964 are also associated with the sixth time coordinate 963F of 1.25 seconds. In FIG. 17B, the application of a sixth transformation function 965F over the sixth shockwave region 966F is depicted, and because the sixth shockwave region 966F is at a far distance (e.g., long depth/distance) on the Z axis 1215, the shockwave appears to be within the far distance.

FIGS. 18A-B illustrate an example of a seventh raw image 858G captured by one of the visible light cameras 114A-B and a seventh shockwave region of vertices 966G of the generated seventh initial depth image 961G to which the transformation function 965 is applied, respectively. A seventh time coordinate 963G set to 1.50 seconds is associated with the seventh raw image 858G during capture, so the corresponding seventh initial depth image 961G of the initial video 960 and the seventh shockwave depth image 967G of the warped shockwave video 964 are also associated with the seventh time coordinate 963G of 1.50 seconds. In FIG. 18B, the application of a seventh transformation function 965G over the seventh shockwave region 966G is depicted, and because the seventh shockwave region 966G is at a very far distance (e.g., maximum depth/distance) on the Z axis 1215, the shockwave appears to be within the farthest distance.

FIGS. 19A-B show an example of an eighth raw image 858H captured by one of the visible light cameras 114A-B and an eighth shockwave region of vertices 966H of the generated eighth initial depth image 961H to which the transformation function 965 is applied, respectively. An eighth time coordinate 963H set to 1.75 seconds is associated with the eighth raw image 858H during capture, so the corresponding eighth initial depth image 961H of the initial video 960 and the eighth shockwave depth image 967H of the warped shockwave video 964 are also associated with the eighth time coordinate 963H of 1.75 seconds. In FIG. 19B, the application of an eighth transformation function 965H over the eighth shockwave region 966H is depicted, and because the eighth shockwave region 966H is at a close distance (e.g., short depth/distance) on the Z axis 1215, the shockwave appears to be within the close distance.

FIGS. 20A-B illustrate an example of a ninth raw image 858I captured by one of the visible light cameras 114A-B and a ninth shockwave region of vertices 966I of the generated ninth initial depth image 961I to which the transformation function 965 is applied, respectively. A ninth time coordinate 963I set to 2.00 seconds is associated with the ninth raw image 858I during capture, so the corresponding ninth initial depth image 961I of the initial video 960 and the ninth shockwave depth image 967I of the warped shockwave video 964 are also associated with the ninth time coordinate 963I of 2.00 seconds. In FIG. 20B, the application of a ninth transformation function 965I over the ninth shockwave region 966I is depicted, and because the ninth shockwave region 966I is in an intermediate region (e.g., medium depth/distance) on the Z axis 1215, the shockwave appears to be within the intermediate region.

FIGS. 21A-B illustrate an example of a tenth raw image 858J captured by one of the visible light cameras 114A-B and a tenth shockwave region of vertices 966J of the generated tenth initial depth image 961J to which the transformation function 965 is applied, respectively. A tenth time coordinate 963J set to 2.25 seconds is associated with the tenth raw image 858J during capture, so the corresponding tenth initial depth image 961J of the initial video 960 and the tenth shockwave depth image 967J of the warped shockwave video 964 are also associated with the tenth time coordinate 963J of 2.25 seconds. In FIG. 21B, the application of a tenth transformation function 965J over the tenth shockwave region 966J is depicted, and because the tenth shockwave region 966J is at a far distance (e.g., long depth/distance) on the Z axis 1215, the shockwave appears to be within the far distance.

FIGS. 22A-B illustrate an example of an eleventh raw image 858K captured by one of the visible light cameras 114A-B and an eleventh shockwave region of vertices 966K of the generated eleventh initial depth image 961K to which the transformation function 965 is applied, respectively. An eleventh time coordinate 963K set to 2.50 seconds is associated with the eleventh raw image 858K during capture, so the corresponding eleventh initial depth image 961K of the initial video 960 and the eleventh shockwave depth image 967K of the warped shockwave video 964 are also associated with the eleventh time coordinate 963K of 2.50 seconds. In FIG. 22B, the application of an eleventh transformation function 965K over the eleventh shockwave region 966K is depicted, and because the eleventh shockwave region 966K is at a very far distance (e.g., maximum depth/distance) on the Z axis 1215, the shockwave appears to be within the farthest distance.

FIGS. 23A-B show an example of a twelfth raw image 858L captured by one of the visible light cameras 114A-B and a twelfth shockwave region of vertices 966L of the generated twelfth initial depth image 961L to which the transformation function 965 is applied, respectively. A twelfth time coordinate 963L set to 2.75 seconds is associated with the twelfth raw image 858L during capture, so the corresponding twelfth initial depth image 961L of the initial video 960 and the twelfth shockwave depth image 967L of the warped shockwave video 964 are also associated with the twelfth time coordinate 963L of 2.75 seconds. In FIG. 23B, the application of a twelfth transformation function 965L over the twelfth shockwave region 966L is depicted, and because the twelfth shockwave region 966L is at a close distance (e.g., short depth/distance) on the Z axis 1215, the shockwave appears to be within the close distance.

FIGS. 24A-B illustrate an example of a thirteenth raw image 858M captured by one of the visible light cameras 114A-B and a thirteenth shockwave region of vertices 966M of the generated thirteenth initial depth image 961M to which the transformation function 965 is applied, respectively. A thirteenth time coordinate 963M set to 3.00 seconds is associated with the thirteenth raw image 858M during capture, so the corresponding thirteenth initial depth image 961M of the initial video 960 and the thirteenth shockwave depth image 967M of the warped shockwave video 964 are also associated with the thirteenth time coordinate 963M of 3.00 seconds. In FIG. 24B, the application of a thirteenth transformation function 965M over the thirteenth shockwave region 966M is depicted, and because the thirteenth shockwave region 966M is in an intermediate region (e.g., medium depth/distance) on the Z axis 1215, the shockwave appears to be within the intermediate region.
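
Taken together, FIGS. 12-24 show the shockwave region advancing from a close distance to the maximum depth and then restarting, which suggests waves emitted periodically and traveling outward along the Z axis. The sketch below expresses that progression as a simple mapping from a frame's time coordinate to a wave-front depth; the period and maximum depth are assumed values chosen only to make the pattern concrete.

    def wave_front_depth(t, period=1.0, max_depth=8.0):
        """Map a frame's time coordinate to the assumed depth of its shockwave
        region: each wave travels from the camera out to max_depth over one
        period, after which a new wave is emitted. Values are illustrative.
        """
        phase = (t % period) / period  # 0.0 at emission, 1.0 at the far end
        return phase * max_depth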

Any of the shockwave creation functionality described herein for the eye-worn device 100, the mobile device 990, and the server system 998 may be embodied in one or more applications, as described previously. According to some embodiments, a "function," "application," "instruction," or "program" is a program that performs functions defined in the program. Various programming languages may be employed to create one or more of the applications, structured in various ways, such as an object-oriented programming language (e.g., Objective-C, Java, or C++) or a procedural programming language (e.g., C or assembly language). In a particular example, a third party application (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, or another mobile operating system. In this example, the third party application may invoke API calls provided by the operating system to facilitate the functionality described herein.

Thus, a machine-readable medium may take many forms of tangible storage media. For example, non-volatile storage media include optical or magnetic disks, such as any storage device in any computer or the like, such as may be used to implement the client devices, media gateways, transcoders, etc. shown in the figures. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media can take the form of electrical or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read programming code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.

The scope of protection is limited only by the appended claims. That scope is intended, and should be interpreted, to be as broad as is consistent with the ordinary meaning of the language used in the claims when interpreted in light of this specification and the prosecution history that follows, and to encompass all structural and functional equivalents. Notwithstanding, the claims are not intended to embrace subject matter that fails to satisfy the requirements of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed.

Except as stated immediately above, nothing that has been stated or illustrated in this specification is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is recited in the claims.

It will be understood that the terms and expressions used herein have the ordinary meaning accorded to such terms and expressions in their respective areas of inquiry and study, except where specific meanings have otherwise been set forth herein. Relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements or steps does not include only those elements or steps but may include other elements or steps not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "a" or "an" does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.

Unless otherwise indicated, any and all measurements, values, ratings, positions, magnitudes, sizes, and other specifications set forth in this specification, including in the appended claims, are approximate, not exact. They are intended to have a reasonable range consistent with the functions to which they relate and with what is customary in the art to which they pertain. For example, unless expressly stated otherwise, a parameter value or the like may vary by as much as ±10% from the stated amount.

Additionally, in the foregoing detailed description, it can be seen that various features are grouped together in various examples for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, subject matter that is to be protected lies in less than all features of any single disclosed example. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter.

While the foregoing has described what are considered to be the best mode and other examples, it is understood that various modifications may be made therein, that the subject matter disclosed herein may be implemented in various forms and examples, and that it may be applied in numerous applications, only some of which have been described herein. It is intended that the appended claims cover any and all modifications and variations that fall within the true scope of the present concepts.
