Outputting a warped image from captured video data

Document No.: 1890988    Publication date: 2021-11-26

This technology, "Outputting a warped image from captured video data," was designed and created by 安德烈·赫切巴琴科, 弗朗西斯·云峰·葛, 尹波, 陈石, 法比安·朗格特, and 约翰内斯·彼得·科普夫 on 2021-05-24. The present application relates to outputting a warped image from captured video data. In one embodiment, a method includes generating an output sequence of warped images from a captured sequence of images. Using the captured sequence of images, the computing system may determine, for each image in the captured sequence, one or more three-dimensional locations of object features and a corresponding camera position. With the camera position for each image, the computing system may determine a view path representing the perspective of a virtual camera. The computing system may identify one or more virtual camera positions of the virtual camera that lie on the view path, and then warp one or more images from the captured sequence to represent the perspective of the virtual camera at each respective virtual camera position. This results in a sequence of warped images that can be output for viewing and interaction on a client device.

1. A method comprising, by a computing system:

capturing a sequence of images using a camera;

for each image in the sequence of images, determining (1) a three-dimensional position of a feature of an object depicted in that image, and (2) a first camera position of the camera at the time the image was captured;

determining a view path of a virtual camera based on the first camera positions associated with the sequence of images;

determining second camera positions of the virtual camera, the second camera positions separated by a predetermined interval along the view path;

for each second camera position:

selecting one of the first camera positions associated with the sequence of images; and

warping an image associated with the selected first camera position using the selected first camera position, the second camera position, and a three-dimensional position of an object feature depicted in the selected image; and

outputting a sequence of warped images.

2. The method of claim 1, further comprising:

determining a plurality of control points based on the first camera positions associated with the sequence of images;

wherein determining the view path of the virtual camera comprises generating a spline using the plurality of control points.

3. The method of claim 1, further comprising:

detecting a gap between (1) a first camera position associated with a first contiguous subset of the sequence of images and (2) a first camera position associated with a second contiguous subset of the sequence of images;

adjusting a first camera position associated with the second contiguous subset of the sequence of images to close the gap;

wherein the view path of the virtual camera is determined based on at least a first camera position associated with the first contiguous subset of the sequence of images and an adjusted first camera position associated with the second contiguous subset of the sequence of images.

4. The method of claim 1, further comprising:

grouping three-dimensional locations of object features depicted in the sequence of images into one or more clusters;

wherein, for each warped image, the three-dimensional locations of the object features used for warping are each determined to be within a threshold distance of one of the one or more clusters.

5. The method of claim 1, further comprising:

determining a corresponding focal point for each second camera position, wherein the focal point is determined based in part on optimizing: (1) a smoothness of a path corresponding to the focal points and (2) a closeness of the path corresponding to the focal points;

wherein, for each second camera position, warping the image associated with the selected first camera position for that second camera position also uses a focal point corresponding to that second camera position.

6. The method of claim 1, further comprising:

determining a corresponding focal point for each second camera position, wherein the focal point is determined based in part on optimizing: (1) smoothness of a path corresponding to the focal points and (2) distance between the second camera positions and their respective focal points to be close to a predetermined target distance;

wherein, for each second camera position, warping the image associated with the selected first camera position for that second camera position also uses a focal point corresponding to that second camera position.

7. The method of claim 1, wherein the sequence of images is captured via a user interface on a wireless device, the user interface including user instructions for moving the wireless device in a curved path to capture the sequence of images.

8. The method of claim 1, wherein, for each second camera position, warping the image associated with the selected first camera position comprises:

generating a mesh corresponding to the image associated with the selected first camera position;

projecting three-dimensional positions of object features depicted in the image onto the mesh, based on at least the selected first camera position, to generate a first set of projected points;

projecting, based at least on the second camera position, the three-dimensional positions of the object features depicted in the image to generate a second set of projected points;

generating a warped mesh based on the first set of projected points and the second set of projected points; and

generating the warped image based on the image associated with the selected first camera position and the warped mesh.

9. The method of claim 1, further comprising:

determining a scaling factor for scaling the sequence of warped images to meet a target resolution; and

in response to determining that the scaling factor exceeds a predetermined failure threshold, generating an error message.

10. The method of claim 1, further comprising:

determining a scaling factor for scaling the sequence of warped images to meet a target resolution; and

in response to determining that the scaling factor is within a predetermined acceptable range:

scaling the sequence of warped images according to the scaling factor;

identifying missing pixel information in the scaled sequence of warped images; and

inpainting the missing pixel information.

11. The method of claim 1, further comprising:

determining a scaling factor for scaling the sequence of warped images to meet a target resolution; and

in response to determining that the scaling factor is below a predetermined threshold:

scaling the sequence of warped images according to the scaling factor; and

cropping the scaled sequence of warped images to meet the target resolution.

12. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:

capture a sequence of images using a camera;

for each image in the sequence of images, determine (1) a three-dimensional position of a feature of an object depicted in that image, and (2) a first camera position of the camera at the time the image was captured;

determine a view path of a virtual camera based on the first camera positions associated with the sequence of images;

determine second camera positions of the virtual camera, the second camera positions separated by a predetermined interval along the view path;

for each second camera position:

select one of the first camera positions associated with the sequence of images; and

warp an image associated with the selected first camera position using the selected first camera position, the second camera position, and a three-dimensional position of an object feature depicted in the selected image; and

output a sequence of warped images.

13. The media of claim 12, wherein the software is further operable when executed to:

determine a plurality of control points based on the first camera positions associated with the sequence of images;

wherein determining the view path of the virtual camera comprises generating a spline using the plurality of control points.

14. The media of claim 12, wherein the software is further operable when executed to:

detect a gap between (1) a first camera position associated with a first contiguous subset of the sequence of images and (2) a first camera position associated with a second contiguous subset of the sequence of images;

adjust a first camera position associated with the second contiguous subset of the sequence of images to close the gap;

wherein the view path of the virtual camera is determined based on at least a first camera position associated with the first contiguous subset of the sequence of images and an adjusted first camera position associated with the second contiguous subset of the sequence of images.

15. The media of claim 12, wherein the software is further operable when executed to:

group three-dimensional locations of object features depicted in the sequence of images into one or more clusters;

wherein, for each warped image, the three-dimensional locations of the object features used for warping are each determined to be within a threshold distance of one of the one or more clusters.

16. The media of claim 12, wherein the software is further operable when executed to:

determine a corresponding focal point for each second camera position, wherein the focal point is determined based in part on optimizing: (1) a smoothness of a path corresponding to the focal points and (2) a closeness of the path corresponding to the focal points;

wherein, for each second camera position, warping the image associated with the selected first camera position for that second camera position also uses a focal point corresponding to that second camera position.

17. A system, comprising:

one or more processors; and

one or more computer-readable non-transitory storage media coupled to one or more processors and comprising instructions that, when executed by the one or more processors, are operable to cause the system to:

capture a sequence of images using a camera;

for each image in the sequence of images, determine (1) a three-dimensional position of a feature of an object depicted in that image, and (2) a first camera position of the camera at the time the image was captured;

determine a view path of a virtual camera based on the first camera positions associated with the sequence of images;

determine second camera positions of the virtual camera, the second camera positions separated by a predetermined interval along the view path;

for each second camera position:

select one of the first camera positions associated with the sequence of images; and

warp an image associated with the selected first camera position using the selected first camera position, the second camera position, and a three-dimensional position of an object feature depicted in the selected image; and

output a sequence of warped images.

18. The system of claim 17, wherein the one or more processors are further operable when executing the instructions to perform operations comprising:

determining a plurality of control points based on the first camera positions associated with the sequence of images;

wherein determining the view path of the virtual camera comprises generating a spline using the plurality of control points.

19. The system of claim 17, wherein the one or more processors are further operable when executing the instructions to perform operations comprising:

detecting a gap between (1) a first camera position associated with a first contiguous subset of the sequence of images and (2) a first camera position associated with a second contiguous subset of the sequence of images;

adjusting a first camera position associated with the second contiguous subset of the sequence of images to close the gap;

wherein the view path of the virtual camera is determined based on at least a first camera position associated with the first contiguous subset of the sequence of images and an adjusted first camera position associated with the second contiguous subset of the sequence of images.

20. The system of claim 17, wherein the one or more processors are further operable when executing the instructions to perform operations comprising:

grouping three-dimensional locations of object features depicted in the sequence of images into one or more clusters;

wherein, for each warped image, the three-dimensional locations of the object features used for warping are each determined to be within a threshold distance of one of the one or more clusters.

Technical Field

The present disclosure generally relates to outputting a sequence of warped images of an object from captured image data.

Background

For many industries, online shopping has replaced the physical shopping experience, primarily because of its convenience. With networked devices, consumers can browse, purchase, and have millions of items shipped directly to their homes from an online vendor without having to leave home. In a physical store, consumers can view and physically interact with product displays. In contrast, traditional online sellers present their product inventory via uploaded pictures and text descriptions. The accessibility, affordability, and convenience of online shopping has led to the online sale of millions of new and used products from a wide range of vendors, from multimillion-dollar retailers to individuals. This diversity creates a rich set of choices, allowing consumers to carefully evaluate one or more products before making a purchase decision. Sellers who provide more detail about their products may be more attractive to consumers who are concerned about the quality of what they purchase.

A mobile computing device (e.g., a smartphone, tablet computer, or laptop computer) may include functionality for determining its position, direction, or orientation, such as a GPS receiver, compass, gyroscope, or accelerometer. Such devices may also include functionality for wireless communication, such as Bluetooth communication, Near Field Communication (NFC), or Infrared (IR) communication, or communication with a Wireless Local Area Network (WLAN) or a cellular telephone network. Such devices may also include one or more cameras, scanners, touch screens, microphones, or speakers. The mobile computing device may also execute software applications, such as games, web browsers, or social networking applications. Using social networking applications, users can connect, communicate, and share information with other users in their social networks.

Summary of the specific embodiments

Certain embodiments described herein relate to a method for generating a sequence of warped images intended for use by an online vendor to provide an interactive, realistic view of available products. Unlike uploading still images of products or videos captured by users, particular embodiments use one or more camera positions from a captured sequence of images to determine a smooth view path that represents the path of a virtual camera around an object. Using the one or more camera positions and the three-dimensional object features, the computing system warps one or more images to represent perspectives from one or more virtual camera positions located along the view path. This results in a sequence of warped images that can be output for viewing and interaction on the wireless device.

Certain embodiments also provide one or more processes for further improving the quality and production value of the output warped image sequence. These processes may include gap detection, outlier detection, cropping, or inpainting of the image sequence. Inaccuracies in the collected sequence of captured images (caused by technical limitations or user error) may reduce the quality and the likelihood of successful generation of a warped image sequence. One or more of these processes may be performed to improve reliability and quality by removing or adjusting parts of the captured image sequence and associated data (including one or more camera positions and three-dimensional object features).

The embodiments disclosed herein are merely examples, and the scope of the present disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments according to the invention are specifically disclosed in the appended claims, relating to methods, storage media, systems and computer program products, wherein any feature mentioned in one claim category (e.g. method) may also be claimed in another claim category (e.g. system). The dependencies or back-references in the appended claims are chosen for formal reasons only. However, any subject matter resulting from an intentional back-reference to any previous claim (in particular multiple dependencies) may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed regardless of the dependency selected in the appended claims. The subject matter which can be claimed comprises not only the combination of features as set forth in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein may be claimed in a separate claim and/or in any combination with any of the embodiments or features described or depicted herein or in any combination with any of the features of the appended claims.

Brief Description of Drawings

This patent or application document contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the office upon request and payment of the necessary fee.

FIG. 1 illustrates an example process for capturing and outputting a sequence of warped images.

Fig. 2 shows an example wireframe of a GUI on a wireless device 210 for capturing a sequence of images of an object for outputting a sequence of warped images.

Fig. 3A illustrates an example spatial distribution of one or more camera positions corresponding to each image in a sequence of captured images of an object.

Fig. 3B shows an example of a gap in the spatial distribution of one or more camera positions.

Fig. 3C illustrates an example of the computing system adjusting the pose of one or more camera positions to close one or more gaps.

Fig. 3D shows the adjusted spatial distribution of camera positions after adjustment for one or more gaps.

FIG. 4 illustrates a point cloud in an example image of a sequence of captured images, with one or more points representing the three-dimensional locations of object features grouped into one or more clusters.

FIG. 5 illustrates the use of one or more control points to formulate a view path.

Fig. 6 illustrates generating one or more virtual camera positions for a virtual camera along a view path.

FIG. 7A shows a set of focal points, one for each virtual camera position, optimized for smoothness and closeness of the path corresponding to the focal points.

FIG. 7B shows a set of focal points, one for each virtual camera position, optimized for smoothness of the path corresponding to the focal points and for the distance between each virtual camera position and its respective focal point to approach a predetermined target distance.

Fig. 8 shows an enlarged view of the virtual camera position located on the view path.

Fig. 9A shows an example of projecting the three-dimensional positions of object features depicted in the image associated with the selected reference camera position onto a mesh.

Fig. 9B shows that the mesh is warped such that the projected points of the reference camera position match the projected points of the virtual camera position.

FIG. 10 illustrates projecting one or more sets of three-dimensional positions of object features onto a grid based on a reference camera position and a virtual camera position.

FIG. 11A shows an example wireframe of a GUI on a wireless device for viewing and interacting with an output warped image sequence.

Fig. 11B shows an example wireframe of a GUI on a wireless device with a sequence of warped images of the output that change via user interaction with the GUI of the wireless device.

FIG. 12 illustrates an example method for outputting a warped image sequence from a captured image sequence.

FIG. 13 illustrates an example network environment associated with a social networking system.

FIG. 14 illustrates an example computer system.

Description of example embodiments

Unlike a physical store, where products are placed on shelves, users are limited by the images and descriptions of products provided by sellers when purchasing products online. These often limit the ability of consumers to inspect products in an online environment. For example, a potential purchaser may wish to interact with a product from multiple perspectives, such as by rotating the product to check its quality prior to purchase. This is particularly useful when purchasing used products, as viewing used products from multiple angles may allow a potential buyer to inspect wear and damage not seen in the fixed image.

Online sellers typically want to provide consumers with as much detail about their products as possible while minimizing the cost of doing so. While many online retailers use still images and textual descriptions to describe their products, some online retailers have developed interactive descriptions of products for online consumers. Conventional approaches for providing these interactive descriptions online have presented a challenge for vendors, particularly those with limited resources. Merely uploading video of an object captured by a camera often appears unprofessional: video captured with conventional equipment, such as a cell phone camera or web camera, may suffer from sudden, jarring movements; temporal irregularities as the point of view changes throughout the scene (e.g., the video may linger on one portion of the object while quickly panning over other portions); or an inability to keep the object of interest centered in the frame as the camera moves throughout the scene. Videos with these deficiencies may appear unprofessional, thereby negatively impacting the vendor.

To overcome these deficiencies, vendors often create rotatable depictions of products by using some combination of specialized camera equipment and operators, ideal lighting environments, expensive modeling software, and skilled computer graphics technicians, in order to create videos with high quality production levels, or alternatively, interactive three-dimensional models of products. While appealing, the resources required to produce such media often make this process prohibitively expensive for individual sellers or small retailers. In addition, these interactive depictions are typically idealized computer-rendered 3D models of the product, rather than images of the actual product that the consumer will purchase.

Certain embodiments described herein relate to capturing and outputting a sequence of warped images using ordinary consumer equipment, to enhance the ability of online sellers to provide consumers with professional-looking interactive media of their products in an online marketplace. FIG. 1 shows an example process 100 for capturing and outputting a warped image sequence, which may include one or more capture processes and one or more post-capture processes for creating the warped image sequence. Particular embodiments may repeat one or more of the steps depicted in FIG. 1 where appropriate. Although this disclosure describes and illustrates particular steps in FIG. 1 as occurring in a particular order, this disclosure contemplates any suitable steps of FIG. 1 occurring in any suitable order. Further, while this disclosure describes and illustrates an example process for outputting a warped image sequence from a captured image sequence including the particular process of FIG. 1, this disclosure contemplates any suitable method for outputting a warped image sequence from a captured image sequence including any suitable steps, which may include all, some, or none of the steps of FIG. 1, where appropriate. Moreover, although this disclosure describes and illustrates particular components, devices, or systems performing particular steps of FIG. 1, this disclosure contemplates any suitable combination of any suitable components, devices, or systems performing any suitable steps of FIG. 1.

The example process 100 may be performed by a computing system that includes a camera and an associated user interface 105 ("UX" or "GUI"). In particular embodiments, the computing system may be, for example, a mobile computing system, such as a smartphone, tablet computer, or laptop computer. A mobile computing system may include functionality for determining its position, direction, or orientation, such as a GPS receiver, compass, gyroscope, or accelerometer. Such devices may also include functionality for wireless communication, such as Bluetooth communication, Near Field Communication (NFC), or Infrared (IR) communication, or communication with a Wireless Local Area Network (WLAN) or a cellular telephone network. Such devices may also include one or more cameras, scanners, touch screens, microphones, or speakers. The mobile computing system may also execute software applications, such as games, web browsers, or social networking applications. Using social networking applications, users can connect, communicate, and share information with other users in their social networks.

The example process 100 begins with a series of processes for capturing a sequence of images to output a sequence of warped images. The camera and associated user interface 105 capture a sequence of images of a subject (e.g., object) of interest. In particular embodiments, capturing the sequence of images may include one or more methods, such as, but not limited to, simultaneous localization and mapping ("SLAM") techniques 110, that track the position of the camera and associated user interface 105 and map its pose as the camera and associated user interface 105 move throughout the scene. SLAM 110 allows the computing system to map one or more camera locations associated with user interface 105 as each image in the sequence of images is captured. In certain embodiments, capturing a sequence of images may also include a method for progress tracking 115. Progress tracking 115 may include, for example, one or more indicators on the user interface 105 to inform the user of the appropriate direction of movement of the camera when capturing the sequence of images. In particular embodiments, capturing the sequence of images may also include a method for encoding 120 the captured sequence of images. The end result of this series of processes is a raw capture 125, which includes the sequence of images and associated image data that can be used to output a warped image sequence of the subject.

After obtaining the original capture 125 of the image sequence, the example process 100 may continue with one or more post-capture processes to render a warped image sequence. In particular embodiments, the computing system may perform gap detection 130. Gap detection 130 may include identifying and adjusting the pose of one or more camera positions based on criteria detailed herein. In particular embodiments, the computing system may perform outlier detection 135 in order to identify and remove one or more outliers in the three-dimensional point cloud created from the sequence of images as detailed herein. In particular embodiments, the post-capture process may also include automatically cropping 140 one or more images from the captured sequence of images. Automatic cropping 140 may include removing one or more images from a sequence of images based on one or more criteria described herein.

In particular embodiments, process 100 may also include path smoothing 145, as detailed herein, whereby the computing system renders a smoothed view path based on one or more camera positions. In particular embodiments, the view path may be generated using one or more control points corresponding to one or more camera positions described herein. In particular embodiments, the view path may fit a predetermined shape (e.g., an arc or a semicircle) or a mathematical equation (e.g., an nth order polynomial equation). In particular embodiments, the computing system may also generate the focus path based on one or more focal points representing three-dimensional points in space to which the virtual camera will focus, as detailed herein.

In particular embodiments, process 100 may also include mesh warping 150, as detailed herein. Mesh warping 150 includes the computing system warping an image in the sequence of images associated with a selected first camera position. By utilizing the first camera position, the position of a second virtual camera located along the smooth virtual camera path, and the three-dimensional positions of the object features depicted in the selected image, the computing system may adjust the image by warping a mesh such that the image simulates the viewpoint of the second virtual camera located along the smooth virtual camera path. The computing system may repeat this process for each image in the captured sequence of images, thereby producing a sequence of warped images.

In particular embodiments, process 100 may also include rendering the sequence of warped images 155 and making one or more adjustments to the sequence of warped images by cropping 160 and inpainting 165, as detailed herein. In a particular embodiment, these adjustments are determined by identifying a scaling factor for each image in the output sequence of warped images, because each image is warped in a unique manner. The scaling factor may be determined by identifying the scaling required to meet a target resolution for each image of the sequence of warped images. Based on the scaling factor, the cropping 160 and the inpainting 165 may be performed accordingly, as detailed herein.
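The branching among error reporting, inpainting, and cropping described above (and in claims 9-11) can be summarized in a short sketch. The code below is a minimal illustration under assumed threshold values, not the disclosed implementation; the function name and parameters are hypothetical.

```python
def finalize_warped_frame(frame_width, frame_height, target_width, target_height,
                          failure_threshold=2.0, acceptable_min=1.0):
    """Decide how to bring one warped frame to the target resolution.

    Returns a string naming the action; thresholds are illustrative only.
    """
    # Scaling factor needed for this frame to cover the target resolution.
    scale = max(target_width / frame_width, target_height / frame_height)

    if scale > failure_threshold:
        # Upscaling this much would degrade quality: report an error (claim 9).
        return "error: scaling factor exceeds failure threshold"
    if scale >= acceptable_min:
        # Moderate upscaling: scale, then fill in missing pixels (claim 10).
        return "scale, then inpaint missing pixel information"
    # Frame is larger than needed: scale down and crop (claim 11).
    return "scale, then crop to the target resolution"
```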

After cropping and inpainting, the computing system may encode 170 the warped image sequence into a particular file format. The end result of process 100 is an output file 175 that includes an output warped image sequence of the subject. File 175 may be in any format suitable for viewing, including, for example, .mp4, .mov, .wmv, .flv, or .avi. In particular embodiments, file 175 may be output for viewing on a user interface associated with the client device. The user interface may include one or more elements that allow a user to interact with the sequence of warped images output in file 175, including, for example, rotating the subject of the warped images.

Fig. 2 shows an example wireframe of a GUI on a wireless device 210 for capturing a sequence of images of an object for outputting a sequence of warped images. For example, a seller may be interested in listing objects 220 for resale (e.g., air hockey tables) on an online marketplace. Using a mobile application running on the wireless device 210, a user can capture a sequence of images of the object 220 from one or more camera locations. In particular embodiments, the GUI of the wireless device 210 may include one or more user instructions 230 to move the wireless device 210 in a curved path to capture a sequence of images. These may include, for example, one or more indicators, such as visual icons (e.g., arrows, icons of the wireless device 210 moving in a desired capture direction, or similar directional indicators), tactile feedback, sounds, or similar notifications for providing instructions to the user of the wireless device 210 to properly capture the image sequence. The GUI may also include a video capture button 240 and one or more indicators or grid lines to help the user keep the object 220 centered during capture. Although not shown, the GUI may also include options for traditional camera application functions, such as camera flash, lens zoom, and switching between forward-facing and backward-facing cameras on the wireless device 210. Although not shown, the GUI may also include one or more textual indicators that provide instructions to the user of the wireless device 210 (e.g., "move slowly around the object").

In particular embodiments, the computing system may track the progress of the wireless device 210 while capturing the sequence of images. In particular embodiments, the GUI may also include one or more indicators or notifications to notify the user of the correct movement of the wireless device 210 based on the progress tracking as the sequence of images is captured. This is particularly useful for inexperienced users, who may not be familiar with the appropriate movement of the wireless device 210 required to successfully capture an image sequence for image warping.

The progress tracking may include, for example, the computing system attempting to generate an intermediate smooth view path while capturing the sequence of images in accordance with one or more methods described herein for generating a smooth view path. In particular embodiments, the computing system may attempt to generate an intermediate smooth view path after every nth collected image in the sequence of images. In particular embodiments, if the computing system is capable of generating an intermediate view path according to the methods herein, the computing system may calculate and indicate via one or more indicators or notifications on the GUI that at least one image of the captured sequence of images has been captured from a particular viewpoint.

Capturing image data using conventional wireless devices provides a number of benefits over conventional methods used by retailers. The wireless device 210 is relatively affordable, widely owned by vendors and consumers, and provides greater utility for most vendors and consumers, relative to expensive camera equipment and graphics and modeling software. To render the sequence of warped images, the user captures only a sequence of images of an object 220 (e.g., an air hockey table) with the wireless device 210. The minimal expenditure in equipment and labor allows sellers to produce an interactive sequence of warped images of the products they sell at low cost, low effort, and low time expenditure.

As another example, creating a sequence of warped images from captured image data rather than modeled images allows for more accurate portrayal of objects 220 available for sale. The 3D model used by many retailers is simply a depiction of the product, not a captured real image of the object. While this may be acceptable to some purchasers who purchase new products, many purchasers of used products prefer to view the actual goods they are purchasing because it allows the purchaser to check for wear or damage to the goods before making a purchase decision.

Fig. 3A shows an example spatial distribution of one or more camera positions 320 corresponding to each image in a sequence of captured images of the object 220 (depicted as an air hockey table). In this example, the spatial distribution of each camera position 320 corresponding to each image in the sequence of images is viewed from above the object 220, e.g., on the x-y plane. For each image in the sequence of captured images of the object 220, the computing system may determine one or more three-dimensional locations 310 of the object features depicted in the image, and a camera location 320 of the camera of the wireless device 210 at the time the image was captured.

In particular embodiments, the post-capture process may also include automatically cropping 140 one or more images from the captured sequence of images. Automatic cropping 140 may include removing one or more images from the captured sequence of images based on one or more criteria. As an example, one or more duplicate images may be caused by the path of travel of the user while moving the wireless device 210 to capture the sequence of images. For example, the user may retrace one or more portions of the path of the spatial distribution of camera positions to ensure that enough images of the object 220 are collected from one or more camera positions 320. In this example, the computing system may determine that multiple images were captured from the same camera position 320 and select the best image from the multiple images while removing the other duplicate images captured from that camera position 320. As another example, one or more duplicate images may be caused by the user holding the wireless device 210 in a stationary position for a period of time. This typically occurs at the beginning or end of the capture process. In this example, the computing system may likewise determine that multiple images were captured from the same camera position 320, select the best image, and remove the other duplicates.
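As a rough illustration of this duplicate-removal idea, the sketch below keeps one image per run of nearly identical camera positions; the distance threshold and the optional `sharpness` scoring function are assumptions, not part of the disclosure.

```python
import numpy as np

def drop_duplicate_frames(camera_positions, images, min_move=0.02, sharpness=None):
    """Keep one image per run of nearly identical camera positions 320.

    camera_positions: (N, 3) capture positions; images: list of frames;
    min_move: positions closer than this count as the same viewpoint (illustrative).
    """
    kept_positions, kept_images = [], []
    for pos, img in zip(np.asarray(camera_positions, dtype=float), images):
        if kept_positions and np.linalg.norm(pos - kept_positions[-1]) < min_move:
            # Same viewpoint as the previously kept frame: keep the better of the two.
            if sharpness is not None and sharpness(img) > sharpness(kept_images[-1]):
                kept_positions[-1], kept_images[-1] = pos, img
            continue
        kept_positions.append(pos)
        kept_images.append(img)
    return kept_positions, kept_images
```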

For each image in the sequence of images, in particular embodiments, the computing system may determine the corresponding camera position 320 of the camera of the wireless device 210 at the time the image was captured. The camera position 320 may include, for example and without limitation, a pose that includes the three-dimensional position (e.g., x, y, and z coordinates) and orientation (e.g., whether the wireless device 210 is facing the object 220) of the camera located on the wireless device 210 at the time the image in the sequence of images was captured. In particular embodiments, this pose information for each camera position 320 may be determined, for example and without limitation, using an absolute real-world coordinate system, or relative to one or more other camera positions 320 or one or more objects 220 contained within the scene.
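One plausible way to carry this per-image pose through the pipeline is a small record holding a position, an orientation, and the index of the captured image; the field names below are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraPose:
    """Pose of the capturing camera for one image in the sequence (illustrative)."""
    position: Tuple[float, float, float]             # x, y, z in a world or relative frame
    orientation: Tuple[float, float, float, float]   # rotation expressed as a unit quaternion
    image_index: int                                  # index of the image in the captured sequence
```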

In particular embodiments, as wireless device 210 moves throughout the scene, the camera position may be determined by one or more methods of tracking the position of wireless device 210 and mapping its pose, such as, but not limited to, simultaneous localization and mapping ("SLAM") technique 110. The techniques may utilize data from the wireless device 210 and data from one or more images in a sequence of images to determine a corresponding camera position 320 of the wireless device 210 when the images were captured.

In particular embodiments, the desired path of spatial distribution of the one or more camera locations 320 corresponding to each image in the sequence of captured images of the object 220 may be based on user instructions 230 from the GUI on the wireless device 210 and may include, for example, an arc or a semi-circle around the object 220. The user instructions 230 to capture images may include capturing one or more images of a sequence of images from camera locations 320 spatially distributed on a particular shape or geometric path. In other embodiments, the spatial distribution of each camera location 320 may be determined by a user of the wireless device 210 without guidance or user instructions 230 regarding a particular shape or geometric path.

In particular embodiments, the computing system may detect one or more gaps that may identify an irregularity between a first set of camera positions associated with a first contiguous subset of paths of the spatial distribution of camera positions and a second set of camera positions associated with a second contiguous subset. Gaps are problematic for outputting warped image sequences because they may cause abrupt changes in viewpoint due to jumps between the first contiguous subset of paths and the second contiguous subset of paths. Fig. 3B shows an example of a gap in the spatial distribution of one or more camera locations 320. The overlap gap 330 may occur due to the intersection or spatial overlap of one or more consecutive subsets of the paths of the spatially distributed camera positions. In another example, the parallel gap 340 may occur when the first and second contiguous subsets of paths of the spatial distribution of camera locations never intersect, or when the first contiguous subset of paths of the spatial distribution of camera locations deviates from the second contiguous subset of paths of the spatial distribution of camera locations by a minimum distance. In particular embodiments, the computing system may identify one or more gaps by determining that a distance between camera viewpoints exceeds a minimum distance. In particular embodiments, the minimum distance may be an absolute distance (e.g., any distance in excess of 6 inches), or it may be a relative distance (e.g., any distance greater than two times the average distance between each camera location 320).
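The relative-distance criterion mentioned above (a step much larger than the typical spacing between consecutive camera positions) might be checked as in the sketch below; the factor of two follows the example in the text, and the function name is hypothetical.

```python
import numpy as np

def detect_gaps(camera_positions, relative_factor=2.0):
    """Return indices i where the step from camera position i to i+1 looks like a gap.

    A step is flagged when it exceeds `relative_factor` times the average step length
    between consecutive camera positions (the relative-distance example above).
    """
    positions = np.asarray(camera_positions, dtype=float)
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    mean_step = steps.mean()
    return [i for i, step in enumerate(steps) if step > relative_factor * mean_step]
```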

One or more gaps may be caused by the path of travel of the user moving the wireless device 210 while capturing the sequence of images. For example, a user may attempt to trace back one or more portions of the path of the spatial distribution of camera locations to ensure that enough images of the object 220 are collected from one or more camera viewpoints 320. However, the user may inadvertently traverse a second path of travel that is slightly different from the first path of the spatial distribution of camera locations, resulting in the spatial distribution of camera locations 320 including one or more overlapping gaps 330.

As another example, one or more gaps may be caused by limitations in positioning and mapping techniques (e.g., relocalization in SLAM). These limitations may result in one or more pose inaccuracies in one or more camera positions 320. The effect of these pose inaccuracies is that the estimated position of the wireless device 210 suddenly "jumps" from one three-dimensional location to another in the spatial distribution. This may result in the spatial distribution of camera positions 320 including one or more parallel gaps 340.

In some examples, the computing system may adjust the pose of one or more camera positions 320 associated with a contiguous subset of the sequence of images to close the gap. Fig. 3C illustrates an example of the computing system adjusting the pose of one or more camera positions 320 to close one or more gaps. For example, the computing system may identify one or more contiguous subsets of paths of the spatial distribution of camera positions that include a portion of the gap as described herein. The computing system may adjust the pose of one or more camera positions 320 such that one or more poses of the camera positions comprising the contiguous subset move from the original camera position 350 to the gap-adjusted camera position 360. In certain embodiments, these gaps may be adjusted such that the spatial distribution of camera positions conforms to a particular shape or geometric path, and thus the orientation between the camera viewpoints is consistent. Fig. 3D shows the adjusted spatial distribution of camera positions 320 after adjustment for one or more gaps. The end result is a continuous or near continuous spatial distribution of one or more camera locations 320 identified by the computing system.

In particular embodiments, the computing system may generate a point cloud from one or more images of the captured sequence of images. FIG. 4 illustrates a point cloud in an example image 400 of the image sequence, with one or more points representing the three-dimensional locations 310 of object features grouped into one or more clusters 420. Using SLAM or similar techniques, the point cloud may include, for example, three-dimensional locations 310 of one or more object features contained within the image 400, such as corners or particular surfaces of the object 220 contained within the image 400. Due to inaccuracies of SLAM or similar techniques, one or more points representing the three-dimensional locations 310 may not be accurately positioned within the point cloud. If these inaccurate points are shallow (e.g., located at depths too close to the camera position 320), they may particularly affect the computing system because they may occlude the scene and prevent proper image warping.

To achieve proper image warping, the computing system may identify and remove one or more outliers 430 representing one or more inaccurate three-dimensional locations 310 of the object features. In particular embodiments, the computing system may filter the point cloud by grouping one or more three-dimensional locations 310 of object features depicted in the sequence of images and creating clusters 420. These clusters may be generated using any density-based clustering technique, such as the DBSCAN algorithm. In particular embodiments, the clusters may correspond to, for example, one or more features of object 220, such as a corner of a table or a particular surface.

In particular embodiments, the computing system may determine one or more outliers 430 that are not located within one or more clusters 420. In particular embodiments, one or more outliers 430 may be more than a threshold distance from one or more clusters 420. In particular embodiments, one or more outliers 430 may fail to exceed a minimum threshold distance from the camera position 320 (e.g., they are too shallow). The computing system may identify and then remove the one or more outliers 430 from the point cloud.
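A minimal sketch of this outlier filtering, using scikit-learn's DBSCAN as one possible density-based clusterer; the `eps`, `min_samples`, and minimum-depth values are assumptions rather than disclosed parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_point_cloud(points, camera_position, eps=0.05, min_samples=10, min_depth=0.2):
    """Drop points that DBSCAN labels as noise or that sit too close to the camera.

    points: (N, 3) feature positions 310; camera_position: (3,) capture position 320.
    """
    points = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    depth = np.linalg.norm(points - np.asarray(camera_position, dtype=float), axis=1)
    keep = (labels != -1) & (depth > min_depth)   # label -1 marks DBSCAN noise points
    return points[keep]
```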

In particular embodiments, the computing system may determine a plurality of control points 510 based on the camera locations 320 associated with the sequence of images. In particular embodiments, control point 510 may include one or more camera locations 320 associated with a contiguous subset of the sequence of images. In particular embodiments, control point 510 may include one or more adjusted camera positions 360 associated with a contiguous subset of the sequence of images resulting from one or more gap adjustments. The computing system may identify any number of control points needed to accurately represent a continuous or near-continuous spatial distribution of camera positions.

Using one or more control points 510, the computing system may determine a view path 520 for the virtual camera based on the camera positions 320 associated with the sequence of images and the adjusted camera positions 360. Fig. 5 illustrates the use of one or more control points 510 to formulate a view path 520 for a virtual camera. In a particular embodiment, determining the view path 520 of the virtual camera includes generating a spline using the plurality of control points 510. In certain embodiments, the control points define a frame, similar to a boundary, within which the view path 520 must pass. In particular embodiments, the view path 520 may optionally intersect one or more control points 510 or one or more camera positions 320.

In particular embodiments, view path 520 may represent a virtual camera, e.g., a perspective from which object 220 is viewed from a similar but different perspective than any camera position 320. In a particular embodiment, the view path 520 is defined by a smooth spline, at least a portion of which may be fitted according to one or more geometric shapes (e.g., without limitation, an arc or a semi-circle). In particular embodiments, at least a portion of view path 520 may be fitted according to one or more mathematical equations (e.g., without limitation, an nth order polynomial equation). This results in a smooth view path 520, providing spatial stability and avoiding sudden, erratic movements as the position of the virtual camera moves throughout the scene. The view path 520 allows the computing system to render a sequence of warped images that are stable in time and smooth in space as the rendered view rotates around the object 220.
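As one way to realize such a spline, the sketch below fits a parametric smoothing spline through the control points with SciPy and samples it densely; the smoothing value and sample count are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_view_path(control_points, smoothing=0.01, samples=200):
    """Fit a smooth parametric spline through 3-D control points 510 and
    return `samples` points along the resulting view path 520."""
    pts = np.asarray(control_points, dtype=float)   # shape (M, 3)
    tck, _ = splprep(pts.T, s=smoothing)            # parametric smoothing spline
    u = np.linspace(0.0, 1.0, samples)
    return np.stack(splev(u, tck), axis=1)          # shape (samples, 3)
```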

The view path 520 may be used to determine one or more virtual camera positions 610. Fig. 6 illustrates generating one or more virtual camera positions 610 for a virtual camera along the view path 520. The virtual camera position 610 may include a pose that includes a three-dimensional position and orientation of the virtual camera. In particular embodiments, one or more virtual camera positions 610 may intersect with one or more camera positions 320 located along the view path 520. In certain embodiments, one or more virtual camera positions 610 may be separated at predetermined intervals along the view path 520, thereby providing temporal stability. By interpolating at regular, predetermined intervals, the computing system may normalize the velocity of the output warped image sequence, thereby providing a smooth transition between each warped image in the warped image sequence. These virtual camera positions 610 represent positions along the view path 520 at which the computing system generates one or more warped images that are used to generate the output sequence of warped images.
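Equal spacing along the view path can be approximated by resampling the path by arc length, as in the sketch below (an illustration only, not the disclosed interpolation method).

```python
import numpy as np

def sample_virtual_positions(view_path, interval):
    """Place virtual camera positions 610 at roughly equal arc-length intervals.

    view_path: (N, 3) points along the smooth view path 520; interval: desired spacing.
    """
    path = np.asarray(view_path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    targets = np.arange(0.0, arc[-1], interval)
    # Interpolate each coordinate against arc length to get evenly spaced samples.
    return np.stack([np.interp(targets, arc, path[:, d]) for d in range(3)], axis=1)
```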

In particular embodiments, the computing system may generate a corresponding focal point 710 for each of the one or more virtual camera positions 610. FIG. 7A shows a set of focal points, one for each virtual camera position, optimized for smoothness and closeness of the focus path 720 corresponding to the focal points. For each of the one or more virtual camera positions 610, the computing system may identify a corresponding focal point 710, the focal point 710 representing a three-dimensional point in space on which the virtual camera located at the corresponding virtual camera position 610 is to focus (e.g., the virtual camera at the virtual camera position 610 will orient itself such that an image captured from that position will have the corresponding focal point 710 at the center of the image). In particular embodiments, the computing system may generate a focus path 720 corresponding to the focal points 710.

In a particular embodiment, the corresponding focus 710 for each virtual camera position 610 is determined based in part on optimizing the smoothness of the focus path 720 corresponding to the focus 710. In particular embodiments, the computing system attempts to identify a focus path 720 corresponding to the focal point 710, and the focus path 720 may be fitted smoothly, for example, according to one or more geometric shapes (e.g., an arc or a semicircle), or according to one or more mathematical equations (e.g., an nth order polynomial equation). In particular embodiments, focus path 720 may attempt to form a consistent geometry with respect to view path 520. This results in a smooth focus path 720 corresponding to the focus 710, providing spatial stability and avoiding sudden, inconsistent movements in focus as the virtual camera moves around the object 220 throughout the scene.

In a particular embodiment, the corresponding focal point 710 for each virtual camera position 610 is further determined based in part on optimizing the closeness of the focus path 720. FIG. 7A shows a set of focal points 710 optimized based in part on the closeness of the focus path 720. Although not depicted due to its compactness, the focal points 710 in FIG. 7A may be connected by a focus path 720. As part of the closeness optimization, the computing system seeks to identify corresponding focal points 710 that keep the focus path 720 as short as possible while maintaining smoothness of the focus path 720. Such optimization is particularly useful for capturing a sequence of images of a small object, for which the user of the wireless device 210 can easily keep each image in the sequence focused on a center location of the object while maintaining a consistent distance from the object 220 as the wireless device 210 moves along a circular path.

In a particular embodiment, the computing system may not be able to optimize the corresponding focus path 720 based on closeness. This may be because, for example, a close focus path 720 is too close in space to the view path 520 (or, in some examples, interleaved with the view path 520), resulting in an unstable virtual camera orientation (e.g., the virtual camera may be oriented away from the object 220 at one or more virtual camera positions 610). In these embodiments, the corresponding focal point 710 for each virtual camera position 610 is determined based in part on optimizing the smoothness of the focus path 720 corresponding to the focal points 710 and optimizing the distance between each virtual camera position 610 and its respective focal point 710 to approximate a predetermined target distance.

Fig. 7B shows a set of focal points 710 for each virtual camera position 610, the set of focal points 710 being optimized for smoothness of the focus path 720 corresponding to the focal points 710 and for the distance between the virtual camera positions 610 and their respective focal points 710 to be close to a predetermined target distance. This optimization does not consider closeness (e.g., it does not seek to minimize the length of the path corresponding to the focal points 710) when determining the one or more focal points 710. By optimizing for a predetermined target distance (e.g., 1 meter) between the view path 520 and the focus path 720, the computing system may eliminate the occurrence of interleaving between the view path 520 and the focus path 720. This is particularly useful for capturing a sequence of images of a large object, where it is difficult for the user of the wireless device 210 to keep the camera on the wireless device 210 focused on the center position of the object 220. Such optimization is also useful for capturing a sequence of images of an object in a limited physical space, where it is difficult for the user of the wireless device 210 to keep the wireless device 210 at a consistent distance from the object while capturing the sequence of images. Such optimization may also be useful for inexperienced users who may have difficulty moving the wireless device 210 in a circular path around the object 220.
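One way to express the FIG. 7B variant is a joint optimization that penalizes roughness of the focus path and deviation of each camera-to-focal-point distance from the target; the sketch below is a minimal illustration using SciPy, with the weights and target distance as assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_focal_points(virtual_positions, initial_foci, target_distance=1.0,
                          smooth_weight=1.0, distance_weight=0.1):
    """Choose focal points 710 for the virtual camera positions 610.

    The cost penalizes (1) roughness of the focus path 720 (second differences) and
    (2) deviation of each camera-to-focal-point distance from `target_distance`.
    """
    cams = np.asarray(virtual_positions, dtype=float)
    x0 = np.asarray(initial_foci, dtype=float).ravel()

    def cost(flat):
        foci = flat.reshape(cams.shape)
        roughness = np.sum(np.diff(foci, n=2, axis=0) ** 2)
        dist = np.linalg.norm(foci - cams, axis=1)
        return smooth_weight * roughness + distance_weight * np.sum((dist - target_distance) ** 2)

    result = minimize(cost, x0, method="L-BFGS-B")
    return result.x.reshape(cams.shape)
```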

The computing system will then generate a warped image for each virtual camera position 610 using a process such as mesh warping. Each warped image replicates an image captured from the virtual camera position 610, centered on the corresponding focal point 710. In certain embodiments, this will result in warping one or more regions of the image without affecting one or more other regions of the image. For each virtual camera position 610, the computing system selects a camera position associated with the sequence of images as the corresponding reference camera position 810. Fig. 8 shows a magnified view of a virtual camera position 610 located on the view path 520, the virtual camera position 610 having a corresponding reference camera position 810. In a particular embodiment, the reference camera position 810 is one of the camera positions 320 associated with the sequence of images. In a particular embodiment, the reference camera position 810 is selected by identifying the camera position 320 that is closest to the virtual camera position 610 (e.g., located at a minimum physical distance from the virtual camera position 610). For each reference camera position 810, the computing system will identify the image captured at that reference camera position 810. In particular embodiments, the same reference camera position 810 may correspond to one or more virtual camera positions 610.
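Selecting the reference camera by minimum distance, as described above, might look like the sketch below; the function name is hypothetical.

```python
import numpy as np

def select_reference_cameras(virtual_positions, capture_positions):
    """For each virtual camera position 610, return the index of the closest
    captured camera position 320 to serve as the reference camera position 810."""
    virtual = np.asarray(virtual_positions, dtype=float)    # (V, 3)
    captured = np.asarray(capture_positions, dtype=float)   # (N, 3)
    # Pairwise distances between every virtual position and every captured position.
    d = np.linalg.norm(virtual[:, None, :] - captured[None, :, :], axis=2)
    return d.argmin(axis=1)                                 # one reference index per virtual position
```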

The mesh warping is performed by utilizing the image associated with the selected reference camera position, the reference camera position 810, the corresponding virtual camera position 610, and the three-dimensional positions 310 of the object features depicted in the image associated with the selected reference camera position. FIG. 9A illustrates an example of projecting the three-dimensional locations of one or more object features of an image onto a two-dimensional frame using a mesh. In a particular embodiment, the mesh warping includes generating a mesh 910 using the point cloud, the mesh 910 corresponding to the image associated with the selected reference camera position 810. In particular embodiments, the mesh 910 may include, for example, grid lines and corresponding grid nodes 920.

The computing system may then project the three-dimensional positions 310 (denoted as x_i) of one or more object features depicted in the image onto a two-dimensional frame based at least on the reference camera position 810; the frame may be divided into grids by the mesh. In a particular embodiment, the one or more three-dimensional locations of the one or more object features for warping are each determined to be within a threshold distance of one of the one or more clusters 420. Two-dimensional reference feature points 930 (denoted as p_i) are determined from the formula p_i = C x_i, where C represents the reference camera position 810. One or more two-dimensional reference feature points 930 may then be projected onto the frame using the grid 910.
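A brief sketch of the projection p_i = C x_i is given below. It assumes a pinhole model in which C is a 3x4 projection matrix (intrinsics composed with the camera pose); the disclosure only refers to C as representing the reference camera position 810, so that interpretation is an assumption of the example.

```python
import numpy as np

def project_points(points_3d, camera_matrix):
    """Project 3-D feature positions x_i into 2-D feature points p_i = C x_i.

    `camera_matrix` is a 3x4 projection matrix (intrinsics composed with
    the camera pose); treating C as a full pinhole projection is an
    assumption of this sketch."""
    x = np.asarray(points_3d, dtype=float)                    # (N, 3)
    x_h = np.hstack([x, np.ones((len(x), 1))])                # homogeneous (N, 4)
    p_h = x_h @ np.asarray(camera_matrix, dtype=float).T      # (N, 3)
    return p_h[:, :2] / p_h[:, 2:3]                           # divide by depth -> (N, 2)
```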

The computing system may then, based at least on the virtual camera position 610, project the three-dimensional positions (denoted as x_i') of one or more object features depicted in the image onto a two-dimensional frame that may be divided into grids by a mesh. In particular embodiments, the one or more three-dimensional locations of the one or more object features for warping are each determined to be within a threshold distance of one of the one or more clusters 420. Projected virtual feature points 940 (denoted as p_i') are determined from the formula p_i' = C' x_i', where C' represents the virtual camera position 610. One or more two-dimensional projected virtual feature points 940 may then be projected onto the frame using the grid 910.

In a particular embodiment, the computing system may generate a warped mesh based on the image associated with the selected reference camera position 810, the two-dimensional reference feature points 930, and the two-dimensional projected virtual feature points 940. Fig. 10 illustrates projecting one or more sets of three-dimensional positions of object features onto a grid based on a reference camera position 810 and a virtual camera position 610. Each two-dimensional reference feature point 930 (depicted in pink) has a spatial relationship with the grid nodes 920 of the grid cell in which it resides. In particular embodiments, each node within the grid may be assigned a weighting factor (denoted as w_i) to perform mesh warping. In this example, the location of each two-dimensional reference feature point 930 (depicted in pink) may be represented by the formula p_i = Σ w_i g_i, and the location of each two-dimensional projected virtual feature point 940 (depicted in purple) may be represented by the formula p_i' = Σ w_i g_i'. With this spatial relationship, the computing system may warp the grid such that the two-dimensional reference feature points 930 (depicted in pink) of the reference camera position 810 match the two-dimensional projected virtual feature points 940 (depicted in purple) of the virtual camera position 610.
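The sketch below illustrates one way the weighted-node relations p_i = Σ w_i g_i and p_i' = Σ w_i g_i' could be used to solve for warped grid nodes: bilinear weights are computed for each reference feature point within its grid cell, and a regularized least-squares problem moves the nodes so the same weights reproduce the projected virtual feature points, leaving cells without nearby features largely unchanged. The grid layout, the regularization term, and the dense solver are assumptions of this example, not the disclosure's exact formulation.

```python
import numpy as np

def warp_grid(grid_nodes, ref_points, virt_points, reg_weight=1.0):
    """Solve for warped grid node positions g' so that, for every feature,
    sum_i w_i * g'_i  ~=  p'_i  (the projected virtual feature point),
    where the w_i are bilinear weights of the feature inside its grid cell.
    A regularization term keeps nodes near their original positions so
    regions without nearby features stay mostly untouched.

    `grid_nodes` is an (H, W, 2) array of node coordinates on a regular grid,
    with grid_nodes[i, j] = (x_j, y_i); `ref_points` / `virt_points` are
    (N, 2) arrays of p_i and p'_i."""
    H, W, _ = grid_nodes.shape
    nodes = grid_nodes.reshape(-1, 2)                  # (H*W, 2)
    n_nodes = len(nodes)
    x0, y0 = grid_nodes[0, 0]
    dx = grid_nodes[0, 1, 0] - grid_nodes[0, 0, 0]     # cell width
    dy = grid_nodes[1, 0, 1] - grid_nodes[0, 0, 1]     # cell height

    rows, targets = [], []
    for (px, py), target in zip(ref_points, virt_points):
        # Locate the cell containing p_i and its fractional position inside it.
        cx = np.clip((px - x0) / dx, 0, W - 2)
        cy = np.clip((py - y0) / dy, 0, H - 2)
        ix, iy = int(cx), int(cy)
        u, v = cx - ix, cy - iy
        row = np.zeros(n_nodes)
        # Bilinear weights w_i over the four corner nodes of the cell.
        row[iy * W + ix] = (1 - u) * (1 - v)
        row[iy * W + ix + 1] = u * (1 - v)
        row[(iy + 1) * W + ix] = (1 - u) * v
        row[(iy + 1) * W + ix + 1] = u * v
        rows.append(row)
        targets.append(target)

    A = np.vstack(rows + [reg_weight * np.eye(n_nodes)])      # data + regularizer
    b = np.vstack([np.asarray(targets), reg_weight * nodes])  # p'_i and original g_i
    warped, *_ = np.linalg.lstsq(A, b, rcond=None)            # solve for g'
    return warped.reshape(H, W, 2)
```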

In a particular embodiment, the system may then warp each image corresponding to the reference camera position 810 such that it simulates the image that would be captured from the virtual camera position 610. Fig. 9B shows the mesh warped such that the two-dimensional reference feature points 930 of the reference camera position 810 match the two-dimensional projected virtual feature points 940 of the virtual camera position 610. In a particular embodiment, a single input frame may therefore produce multiple output frames, each warped in a different way. In a particular embodiment, warping the image associated with the selected reference camera position 810 for the corresponding virtual camera position 610 also uses the focal point 710 corresponding to that virtual camera position 610.

In particular embodiments, the computing system may repeat one or more of these steps for each virtual camera position 610 located along the view path 520, thereby generating a sequence of warped images, one associated with each virtual camera position 610 on the view path 520. In a particular embodiment, the number of frames per second of the output warped image sequence may be different from the number of frames per second of the captured image sequence.

In particular embodiments, the computing system may adjust the sequence of warped images, for example by scaling, cropping, or inpainting one or more of the warped images, thereby producing a smooth, production-quality virtual camera path that is appealing to the user. In particular embodiments, these adjustments are determined by identifying a scaling factor for each image in the output warped image sequence (because each image is warped in a unique manner) and determining a worst (e.g., largest) scaling factor for the warped image sequence. Each image's scaling factor may be determined by identifying the minimum scaling required for that image to meet a target resolution.

In a particular embodiment, the worst scaling factor is determined to be below a predetermined minimum threshold. In this example, the computing system may scale the warped images according to the scaling factor of each image in the output warped image sequence. The computing system may also crop one or more of the scaled images in the warped image sequence. This produces a sequence of warped images that meets the predetermined target resolution while preserving a smooth virtual camera path.

In a particular embodiment, the worst scaling factor is determined to be within a predetermined acceptable range (e.g., equal to or between a predetermined minimum threshold and a failure threshold). In this example, the computing system may scale the warped images according to the scaling factor of each image in the output warped image sequence. In particular embodiments, the computing system may lock the scaling factor at the minimum threshold when scaling. In particular embodiments, after scaling the warped image sequence, the computing system may identify missing pixel information in one or more scaled images of the warped image sequence, which may be caused by a large scaling factor. For example, scaling an image may leave missing pixel information along the outer edges of the image. To remedy this, the computing system may then inpaint the missing pixel information in one or more of the scaled images. Inpainting may be performed by any conventional method, such as the Criminisi algorithm.
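As a concrete stand-in for the inpainting step, the sketch below uses OpenCV's cv2.inpaint with the Telea method, since the Criminisi exemplar-based method mentioned above is not part of OpenCV's core API; the assumption that missing pixels are marked as pure black is likewise only for illustration.

```python
import cv2
import numpy as np

def inpaint_missing_edges(scaled_image):
    """Fill pixels that scaling left empty in an 8-bit BGR image.

    This is a stand-in for the Criminisi exemplar-based method mentioned
    above; OpenCV's Telea inpainting is used instead because it is readily
    available."""
    # Build a mask of missing pixels; "all-zero means missing" is an
    # assumption of this sketch, not something the source specifies.
    mask = np.all(scaled_image == 0, axis=2).astype(np.uint8) * 255
    return cv2.inpaint(scaled_image, mask, 3, cv2.INPAINT_TELEA)
```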

In a particular embodiment, the computing system may not render the output warped image sequence if the scaling factor exceeds a predetermined failure threshold. In this example, the computing system may provide an error message via the GUI of the wireless device 210 indicating that the captured image sequence was unsuccessful and prompting the user to attempt capture again.
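The three outcomes described above can be summarized as a simple decision over the per-image scaling factors, as in the sketch below; the threshold parameters and return labels are illustrative names, not values taken from this disclosure.

```python
def finalize_warped_sequence(scale_factors, min_threshold, fail_threshold):
    """Choose a post-processing path for the warped sequence based on its
    per-image scaling factors, mirroring the three cases described above."""
    worst = max(scale_factors)        # worst (i.e., largest) scaling factor
    if worst > fail_threshold:
        return "fail"                 # do not render; prompt the user to re-capture
    if worst < min_threshold:
        return "scale_and_crop"       # modest scaling; crop to the target resolution
    return "scale_and_inpaint"        # lock scaling at the minimum threshold and
                                      # inpaint missing edge pixels after scaling
```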

In particular embodiments, the computing system may encode the warped image sequence into a particular file format prior to output. The warped image sequence may be encoded and output in a file of any format suitable for viewing, including, for example, .mp4, .mov, .wmv, .flv, or .avi. In particular embodiments, a file including the sequence of warped images may be output for viewing on a user interface associated with a client device.
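As one possible encoding path, the warped frames could be written to an .mp4 container with OpenCV's VideoWriter, as sketched below; the codec and frame rate are illustrative choices, and any of the containers listed above could be produced the same way with a suitable codec.

```python
import cv2

def encode_sequence(frames, path="warped.mp4", fps=30):
    """Write a list of equally sized 8-bit BGR frames to an .mp4 file."""
    height, width = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")        # illustrative codec choice
    writer = cv2.VideoWriter(path, fourcc, fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```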

In particular embodiments, the warped image sequence may be accessed by one or more users of an online platform (e.g., an online retailer's website or social networking system) via a user interface on the wireless device 210. Fig. 11A shows an example wireframe of a GUI on the wireless device 210 for viewing and interacting with the output warped image sequence. With the wireless device 210, the user can view the output warped image sequence via the GUI. In particular embodiments, the GUI of the wireless device 210 may be associated with, for example, but not limited to, an online marketplace or social networking system. In a particular embodiment, the GUI of the wireless device 210 may include the output warped image sequence 1110 and one or more elements 1120, such as, but not limited to, arrows, that allow a user of the client device to interact with the output warped image sequence 1110. In particular embodiments, these interactions may include, for example, but not limited to, rotating or magnifying the output warped image sequence 1110. In a particular embodiment, the GUI of the wireless device 210 may also include one or more descriptions 1130 of the object 220 depicted in the output warped image sequence 1110, such as the name, price, and delivery method of the object 220 after purchase; such descriptions 1130 may be useful in an online shopping environment. Although not shown, the description 1130 may also include, for example, but not limited to, information about the seller (e.g., contact information or rating), the size or weight of the object 220, or the condition of the object 220 (e.g., second-hand or new). Fig. 11B shows an example wireframe of a GUI on the wireless device 210 in which the output warped image sequence 1140 has been changed via user interaction with the GUI of the wireless device 210. In this example, the altered output warped image sequence 1140 has been rotated through interaction by the user of the wireless device 210. As previously mentioned herein, these interactions may include, for example, but not limited to, rotating or magnifying the output warped image sequence.

In particular embodiments, the user interface of the wireless device 210 may include one or more visual effects when interacting with the output warped image sequence 1110, such as, but not limited to, scroll bounce (e.g., the output warped image sequence bounces back when the user reaches the end of the sequence) or momentum scrolling (e.g., after an interaction from the user, the output warped image sequence keeps moving as if friction were slowing it down).

Fig. 12 illustrates an example method 1200 for outputting a warped image sequence from a captured image sequence. The method may begin at step 1210 where a computing system captures a sequence of images using a camera. In a particular embodiment, the computing system includes a camera and associated GUI on the wireless device. In particular embodiments, the GUI of the wireless device may include one or more user instructions to move the wireless device in a curved path to capture a sequence of images.

At step 1220, the computing system determines, for each image in the sequence of images, (1) a three-dimensional location of a feature of the object depicted in the image and (2) a first camera location of the camera at the time the image was captured. The three-dimensional position of the object feature may correspond to a particular area of the object, such as a corner of a table or a particular surface. The camera position may include, for example and without limitation, a pose that includes the three-dimensional position and orientation of the camera located on the wireless device 210 (e.g., whether the wireless device 210 is facing the object 220) when capturing images in the sequence of images.

At step 1230, the computing system determines a view path for the virtual camera based on the first camera position associated with the sequence of images. In a particular embodiment, determining the view path of the virtual camera includes generating a spline using the plurality of control points. The spline may be smooth, providing spatial stability and avoiding sudden, inconsistent movements as the virtual camera moves through the scene.

At step 1240, the computing system determines second camera positions for the virtual camera, the second camera positions being separated by a predetermined interval along the view path. In particular embodiments, the virtual camera position may include a pose, the pose consisting of at least a three-dimensional position and orientation of the virtual camera. These second camera positions of the virtual camera represent positions along the view path at which the computing system generates one or more warped images used to generate the sequence of warped images.
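A sketch of steps 1230 and 1240 together is shown below: a smoothing spline is fit through control points derived from the captured camera positions, and virtual camera positions are then sampled at an approximately constant spacing along the resulting view path. The use of scipy's splprep/splev, the smoothing parameter, and the arc-length resampling are implementation choices of this example rather than part of the disclosed method.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def view_path_positions(control_points, spacing=0.05):
    """Fit a smooth spline through the control points (at least four, for a
    cubic spline) and sample positions at a roughly constant spacing, in
    the same units as the positions (e.g., meters), along it."""
    pts = np.asarray(control_points, dtype=float)            # (N, 3)
    tck, _ = splprep(pts.T, s=1.0)                           # smoothing spline

    # Densely evaluate the spline, then resample by accumulated arc length
    # so consecutive virtual camera positions are a fixed distance apart.
    u_dense = np.linspace(0.0, 1.0, 1000)
    dense = np.stack(splev(u_dense, tck), axis=1)            # (1000, 3)
    seg = np.linalg.norm(np.diff(dense, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.arange(0.0, arc[-1], spacing)
    u_samples = np.interp(targets, arc, u_dense)
    return np.stack(splev(u_samples, tck), axis=1)           # (M, 3) positions
```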

At step 1250, for each second camera position, the computing system (1) selects one of the first camera positions associated with the sequence of images, and (2) warps the image associated with the selected first camera position using the selected first camera position, the second camera position, and the three-dimensional position of the object feature depicted in the selected image. In a particular embodiment, the first camera position is selected by identifying the first camera position located at the minimum physical distance from the virtual camera position.

At step 1260, the computing system outputs the warped image sequence. The warped image sequence may be output in a file of any format suitable for viewing, including, for example, .mp4, .mov, .wmv, .flv, or .avi. In particular embodiments, the warped image sequence may be accessed by one or more users of an online platform (e.g., an online retailer's website or social networking system) via a user interface on the wireless device. In particular embodiments, the GUI of the wireless device may include one or more elements that allow a user of the client device to interact with the output warped image sequence. These interactions may include, for example, but not limited to, rotating or magnifying the output warped image sequence.

Particular embodiments may repeat one or more steps of the method of fig. 12 where appropriate. Although this disclosure describes and illustrates particular steps of the fig. 12 method as occurring in a particular order, this disclosure contemplates any suitable steps of the fig. 12 method occurring in any suitable order. Further, although this disclosure describes and illustrates an example method for outputting a warped image sequence from a captured image sequence including particular steps of the method of fig. 12, this disclosure contemplates any suitable method for outputting a warped image sequence from a captured image sequence including any suitable steps, which may include all, some, or none of the steps of the method of fig. 12, where appropriate. Moreover, although this disclosure describes and illustrates particular components, devices, or systems performing particular steps of the method of fig. 12, this disclosure contemplates any suitable combination of any suitable components, devices, or systems performing any suitable steps of the method of fig. 12.

FIG. 13 illustrates an example network environment 1300 associated with a social networking system. Network environment 1300 includes client systems 1330, social-networking systems 1360, and third-party systems 1370 connected to each other through a network 1310. Although fig. 13 illustrates a particular arrangement of client system 1330, social-networking system 1360, third-party system 1370, and network 1310, this disclosure contemplates any suitable arrangement of client system 1330, social-networking system 1360, third-party system 1370, and network 1310. By way of example and not by way of limitation, two or more of client system 1330, social-networking system 1360, and third-party system 1370 may be directly connected to one another, bypassing network 1310. As another example, two or more of client system 1330, social-networking system 1360, and third-party system 1370 may be all or partially physically or logically co-located with one another. Moreover, although fig. 13 illustrates a particular number of client systems 1330, social-networking systems 1360, third-party systems 1370, and networks 1310, the present disclosure contemplates any suitable number of client systems 1330, social-networking systems 1360, third-party systems 1370, and networks 1310. By way of example, and not by way of limitation, network environment 1300 may include a plurality of client systems 1330, social-networking system 1360, third-party systems 1370, and networks 1310.

The present disclosure contemplates any suitable network 1310. By way of example and not limitation, one or more portions of network 1310 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (wlan), a Wide Area Network (WAN), a wireless WAN (wwan), a Metropolitan Area Network (MAN), a portion of the internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 1310 may include one or more networks 1310.

Link 1350 may connect client system 1330, social-networking system 1360, and third-party system 1370 to communication network 1310 or to each other. The present disclosure contemplates any suitable links 1350. In particular embodiments, one or more links 1350 include one or more wired (e.g., Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (e.g., Wi-Fi or worldwide interoperability for microwave access (Wi-MAX)), or optical (e.g., Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 1350 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the internet, a portion of the PSTN, a cellular technology-based network, a satellite communication technology-based network, another link 1350, or a combination of two or more such links 1350. Link 1350 need not be the same throughout network environment 1300. One or more first links 1350 may differ in one or more respects from one or more second links 1350.

In particular embodiments, client system 1330 may be an electronic device that includes hardware, software, or embedded logic components, or a combination of two or more such components, and that is capable of performing the appropriate functions implemented or supported by client system 1330. By way of example, and not limitation, client system 1330 may include a computer system such as a desktop computer, notebook or laptop computer, netbook, tablet computer, e-book reader, global positioning system device, camera, Personal Digital Assistant (PDA), handheld electronic device, cellular telephone, smartphone, augmented/virtual reality device, other suitable electronic device, or any suitable combination thereof. The present disclosure contemplates any suitable client systems 1330. Client system 1330 may enable a network user at client system 1330 to access network 1310. Client system 1330 may enable its user to communicate with other users at other client systems 1330.

In particular embodiments, client system 1330 may include a web browser 1332 and may have one or more add-ons, plug-ins, or other extensions. A user at client system 1330 may enter a Uniform Resource Locator (URL) or other address directing web browser 1332 to a particular server, such as server 1362 or a server associated with third party system 1370, and web browser 1332 may generate and communicate a hypertext transfer protocol (HTTP) request to the server. The server may accept the request and transmit one or more hypertext markup language files to client system 1330 in response to the request. Client system 1330 may render a web page based on an HTML file from a server for presentation to a user. The present disclosure contemplates any suitable web page files. By way of example and not limitation, web pages may be rendered from HTML files, extensible hypertext markup language (XHTML) files, or extensible markup language (XML) files, according to particular needs. Such pages may also execute scripts, combinations of markup languages and scripts, and the like. Herein, reference to a web page includes one or more corresponding web page files (which a browser may use to render the web page), and vice versa, where appropriate.

In particular embodiments, social-networking system 1360 may be a network-addressable computing system that may host an online social network. Social-networking system 1360 may generate, store, receive, and send social-networking data (e.g., user profile data, concept profile data, social-graph information, or other suitable data related to an online social network). Social-networking system 1360 may be accessed by other components of network environment 1300, either directly or via network 1310. By way of example and not limitation, client system 1330 may access social-networking system 1360 directly or via network 1310 using web browser 1332 or a native application associated with social-networking system 1360 (e.g., a mobile social-networking application, a messaging application, another suitable application, or any combination thereof). In particular embodiments, the social networking system 1360 may include one or more servers 1362. Each server 1362 may be a single server (unitary server) or a distributed server spanning multiple computers or multiple data centers. The server 1362 may be of various types, such as, without limitation, a web server, a news server, a mail server, a messaging server, an advertising server, a file server, an application server, an exchange server, a database server, a proxy server, another server suitable for performing the functions or processes described herein, or any combination thereof. In particular embodiments, each server 1362 may include hardware, software, or embedded logic components, or a combination of two or more such components for performing the appropriate functions implemented or supported by the server 1362. In particular embodiments, social-networking system 1360 may include one or more data stores 1364. The data store 1364 may be used to store various types of information. In particular embodiments, the information stored in the data store 1364 may be organized according to particular data structures. In particular embodiments, each data store 1364 may be a relational database, a columnar database, a correlation database, or other suitable database. Although this disclosure describes or illustrates a particular type of database, this disclosure contemplates any suitable type of database. Particular embodiments may provide an interface that enables client system 1330, social-networking system 1360, or third-party system 1370 to manage, retrieve, modify, add, or delete information stored in data store 1364.

In particular embodiments, the social-networking system 1360 may store one or more social graphs in one or more data stores 1364. In particular embodiments, the social graph may include a plurality of nodes, which may include a plurality of user nodes (each corresponding to a particular user) or a plurality of concept nodes (each corresponding to a particular concept), and a plurality of edges connecting the nodes. Social-networking system 1360 may provide users of an online social network with the ability to communicate and interact with other users. In particular embodiments, a user may join an online social network via social networking system 1360, and then add connections (e.g., relationships) to a number of other users in social networking system 1360 that they want to be related to. As used herein, the term "friend" may refer to any other user of social-networking system 1360 with which the user forms a connection, association, or relationship via social-networking system 1360.

In particular embodiments, social-networking system 1360 may provide users with the ability to take actions on various types of items or objects supported by social-networking system 1360. By way of example and not by way of limitation, items and objects may include groups or social networks to which a user of social-networking system 1360 may belong, events or calendar entries that may be of interest to the user, computer-based applications that the user may use, transactions that allow the user to purchase or sell goods via a service, interactions with advertisements that the user may perform, or other suitable items or objects. The user may interact with anything that can be represented in the social networking system 1360 or by external systems of the third-party system 1370, the third-party system 1370 being separate from the social networking system 1360 and coupled to the social networking system 1360 via the network 1310.

In particular embodiments, social-networking system 1360 may be capable of linking various entities. By way of example and not limitation, social-networking system 1360 may enable users to interact with each other and receive content from third-party systems 1370 or other entities, or allow users to interact with these entities through Application Programming Interfaces (APIs) or other communication channels.

In particular embodiments, third-party system 1370 may include one or more types of servers, one or more data stores, one or more interfaces (including but not limited to APIs), one or more web services, one or more content sources, one or more networks, or any other suitable components (e.g., with which a server may communicate). The third-party system 1370 may be operated by an entity different from the entity operating the social-networking system 1360. However, in particular embodiments, social-networking system 1360 and third-party system 1370 may operate in conjunction with each other to provide social-networking services to users of social-networking system 1360 or third-party system 1370. In this sense, the social networking system 1360 may provide a platform or backbone that other systems (e.g., third party systems 1370) may use to provide social networking services and functionality to users over the entire internet.

In particular embodiments, the third party system 1370 may include a third party content object provider. The third-party content object provider may include one or more sources of content objects that may be delivered to the client system 1330. By way of example and not limitation, content objects may include information about things or activities of interest to a user, such as movie show times, movie reviews, restaurant menus, product information and reviews, or other suitable information, for example. As another example and not by way of limitation, the content object may include an incentive content object (e.g., a coupon, discount coupon, gift coupon, or other suitable incentive object).

In particular embodiments, social-networking system 1360 also includes user-generated content objects that may enhance user interaction with social-networking system 1360. User-generated content may include any content that a user may add, upload, send, or "post" to social-networking system 1360. By way of example and not by way of limitation, a user communicates a post from client system 1330 to social-networking system 1360. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music, or other similar data or media. Content may also be added to the social networking system 1360 by third parties through "communication channels" (e.g., news feeds or streams).

In particular embodiments, social-networking system 1360 may include various servers, subsystems, programs, modules, logs, and data stores. In particular embodiments, social-networking system 1360 may include one or more of the following: web servers, action recorders, API request servers, relevance and ranking engines, content object classifiers, notification controllers, action logs, third-party content object exposure logs, inference modules, authorization/privacy servers, search modules, ad-targeting modules, user interface modules, user profile storage, connected storage, third-party content storage, or location storage. Social networking system 1360 may also include suitable components, such as network interfaces, security mechanisms, load balancers, failover servers, management and network operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, social-networking system 1360 may include one or more user profile stores for storing user profiles. The user profile may include, for example, biographical information, demographic information, behavioral information, social information, or other types of descriptive information (e.g., work experience, educational history, hobbies or preferences, interests, preferences, or locations). The interest information may include interests associated with one or more categories. The categories may be general or specific. By way of example and not by way of limitation, if a user "likes" an article about a brand of shoes, the category may be the brand, or the general category of "shoes" or "clothing". The associative memory may be used to store information that is associative with the user. The relational information may indicate users who have similar or common work experiences, group memberships, hobbies, educational history, or are related or share common attributes in any manner. The relational information may also include user-defined relations between different users and the content (internal and external). The web server may be used to link the social-networking system 1360 to one or more client systems 1330 or one or more third-party systems 1370 via the network 1310. The web server may include a mail server or other messaging function for receiving and routing messages between the social networking system 1360 and one or more client systems 1330. The API request server may allow third party systems 1370 to access information from social networking system 1360 by calling one or more APIs. The action recorder may be used to receive communications from the web server regarding the user's actions on or off of the social networking system 1360. In conjunction with the action log, a third-party content object log of user exposure to third-party content objects may be maintained. The notification controller may provide information about the content object to client system 1330. The information may be pushed to client system 1330 as a notification, or the information may be pulled from client system 1330 in response to a request received from client system 1330. The authorization server may be used to enforce one or more privacy settings of the users of social-networking system 1360. The privacy settings of the user determine how particular information associated with the user may be shared. 
The authorization server may allow users to opt in to or opt out of having their actions recorded by social-networking system 1360 or shared with other systems (e.g., third-party systems 1370), for example, by setting appropriate privacy settings. The third-party content object store may be used to store content objects received from third parties (e.g., third-party systems 1370). The location store may be used to store location information received from client systems 1330 associated with the user. The advertisement pricing module may combine social information, current time, location information, or other suitable information to provide relevant advertisements to the user in the form of notifications.

Fig. 14 shows an example computer system 1400. In a particular embodiment, one or more computer systems 1400 perform one or more steps of one or more methods described or illustrated herein. In a particular embodiment, one or more computer systems 1400 provide the functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 1400 performs one or more steps of one or more methods described or illustrated herein or provides functions described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 1400. Herein, reference to a computer system may include a computing system, and vice versa, where appropriate. Further, references to a computer system may include one or more computer systems, where appropriate.

This disclosure contemplates any suitable number of computer systems 1400. The present disclosure contemplates computer system 1400 taking any suitable physical form. By way of example and not limitation, computer system 1400 may be an embedded computer system, a system on a chip (SOC), a single board computer System (SBC) (e.g., a Computer On Module (COM) or a System On Module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a computer system mesh, a mobile phone, a Personal Digital Assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these systems. Where appropriate, computer system 1400 may include one or more computer systems 1400; is monolithic or distributed; spanning a plurality of locations; spanning multiple machines; spanning multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1400 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. By way of example, and not by way of limitation, one or more computer systems 1400 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1400 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.

In a particular embodiment, the computer system 1400 includes a processor 1402, a memory 1404, a storage device 1406, an input/output (I/O) interface 1408, a communication interface 1410, and a bus 1412. Although this disclosure describes and illustrates a particular computer system with a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.

In a particular embodiment, the processor 1402 includes hardware for executing instructions (e.g., those making up a computer program). By way of example, and not limitation, to execute instructions, processor 1402 may retrieve (or retrieve) instructions from an internal register, an internal cache, memory 1404, or storage 1406; decode them and execute them; and then write the one or more results to an internal register, internal cache, memory 1404, or storage 1406. In particular embodiments, processor 1402 may include one or more internal caches for data, instructions, or addresses. The present disclosure contemplates processor 1402 including any suitable number of any suitable internal caches, where appropriate. By way of example, and not limitation, processor 1402 may include one or more instruction caches, one or more data caches, and one or more Translation Lookaside Buffers (TLBs). The instructions in the instruction cache may be copies of instructions in memory 1404 or storage 1406, and the instruction cache may accelerate retrieval of those instructions by processor 1402. The data in the data cache may be: a copy of the data in memory 1404 or storage 1406 for causing instructions executing at processor 1402 to operate on; the results of a previous instruction executed at processor 1402, for access by a subsequent instruction executed at processor 1402, or for writing to memory 1404 or storage 1406; or other suitable data. The data cache may speed up read or write operations by processor 1402. The TLB may accelerate virtual address translation for the processor 1402. In particular embodiments, processor 1402 may include one or more internal registers for data, instructions, or addresses. The present disclosure contemplates processor 1402 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor 1402 may include one or more Arithmetic Logic Units (ALUs); is a multi-core processor; or include one or more processors 1402. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.

In a particular embodiment, the memory 1404 includes a main memory for storing instructions for causing the processor 1402 to execute or data for causing the processor 1402 to operate. By way of example, and not limitation, computer system 1400 can load instructions from storage device 1406 or another source (e.g., another computer system 1400) into memory 1404. Processor 1402 may then load the instructions from memory 1404 into an internal register or internal cache. To execute instructions, processor 1402 may retrieve instructions from an internal register or internal cache and decode them. During or after execution of the instructions, processor 1402 may write one or more results (which may be intermediate results or final results) to an internal register or internal cache. Processor 1402 may then write one or more of these results to memory 1404. In a particular embodiment, the processor 1402 only executes instructions in one or more internal registers or internal caches or in the memory 1404 (instead of the storage device 1406 or elsewhere) and only operates on data in one or more internal registers or internal caches or in the memory 1404 (instead of the storage device 1406 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1402 to memory 1404. The bus 1412 may include one or more memory buses, as described below. In certain embodiments, one or more Memory Management Units (MMUs) reside between processor 1402 and memory 1404 and facilitate accesses to memory 1404 requested by processor 1402. In a particular embodiment, the memory 1404 includes Random Access Memory (RAM). The RAM may be volatile memory, where appropriate. The RAM may be dynamic RAM (dram) or static RAM (sram), where appropriate. Further, the RAM may be single-port RAM or multi-port RAM, where appropriate. The present disclosure contemplates any suitable RAM. Memory 1404 may include one or more memories 1404, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.

In a particular embodiment, the storage device 1406 comprises a mass storage device for data or instructions. By way of example, and not limitation, storage device 1406 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Storage 1406 may include removable or non-removable (or fixed) media, where appropriate. Storage device 1406 may be internal or external to computer system 1400, where appropriate. In a particular embodiment, the storage device 1406 is non-volatile solid-state memory. In certain embodiments, storage device 1406 comprises Read Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (prom), erasable prom (eprom), electrically erasable prom (eeprom), electrically variable ROM (earom), or flash memory, or a combination of two or more of these. The present disclosure contemplates mass storage device 1406 taking any suitable physical form. Storage device 1406 may include one or more storage device control units that facilitate communication between processor 1402 and storage device 1406, where appropriate. Storage 1406 may include one or more storage devices 1406, where appropriate. Although this disclosure describes and illustrates a particular storage device, this disclosure contemplates any suitable storage device.

In particular embodiments, I/O interface 1408 comprises hardware, software, or both that provide one or more interfaces for communication between computer system 1400 and one or more I/O devices. Computer system 1400 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and the computer system 1400. By way of example, and not limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet computer, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. The I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1408 therefor. I/O interface 1408 may include one or more device or software drivers that enable processor 1402 to drive one or more of these I/O devices, where appropriate. I/O interface 1408 may include one or more I/O interfaces 1408, where appropriate. Although this disclosure describes and illustrates particular I/O interfaces, this disclosure contemplates any suitable I/O interfaces.

In particular embodiments, communication interface 1410 includes hardware, software, or both that provide one or more interfaces for communication (e.g., packet-based communication) between computer system 1400 and one or more other computer systems 1400 or one or more networks. By way of example, and not limitation, communication interface 1410 may include a Network Interface Controller (NIC) or network adapter for communicating with an ethernet or other wire-based network, or a wireless NIC (wnic) or wireless adapter for communicating with a wireless network (e.g., a Wi-Fi network). The present disclosure contemplates any suitable networks and any suitable communication interfaces 1410 for it. By way of example, and not by way of limitation, computer system 1400 may communicate with an ad hoc network, a Personal Area Network (PAN), a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), or one or more portions of the internet, or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. By way of example, computer system 1400 may communicate with a Wireless PAN (WPAN) (e.g., a Bluetooth WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (e.g., a Global System for Mobile communications (GSM) network), or other suitable wireless network, or a combination of two or more of these. Computer system 1400 may include any suitable communication interface 1410 for any of these networks, where appropriate. Communication interface 1410 may include one or more communication interfaces 1410, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.

In a particular embodiment, the bus 1412 includes hardware, software, or both that couple the components of the computer system 1400 to each other. By way of example, and not limitation, the bus 1412 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a hypertransport (ht) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-express (pcie) bus, a Serial Advanced Technology Attachment (SATA) bus, a video electronics standards association local (VLB) bus, or any other suitable bus or combination of two or more of these. The bus 1412 may include one or more buses 1412, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.

Herein, where appropriate, the one or more computer-readable non-transitory storage media may include one or more semiconductor-based or other Integrated Circuits (ICs) (e.g., Field Programmable Gate Arrays (FPGAs) or application specific ICs (asics)), Hard Disk Drives (HDDs), hybrid hard disk drives (HHDs), optical disks, Optical Disk Drives (ODDs), magneto-optical disks, magneto-optical disk drives, floppy disks, Floppy Disk Drives (FDDs), magnetic tape, Solid State Drives (SSDs), RAM drives, SECURE DIGITAL (SECURE DIGITAL) cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these. Computer-readable non-transitory storage media may be volatile, nonvolatile, or a combination of volatile and nonvolatile, where appropriate.

As used herein, the term "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Thus, herein, "A or B" means "A, B, or both" unless expressly indicated otherwise or indicated otherwise by context. Further, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Thus, herein, "A and B" means "A and B, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context.

The scope of the present disclosure includes all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of the present disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although the present disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would understand. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system that is suitable for, arranged to, capable of, configured to, implemented, operable to, or operative to perform a particular function includes the apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, provided that the apparatus, system, or component is so adapted, arranged, enabled, configured, implemented, operable, or operative. Moreover, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide some, all, or none of these advantages.
