Object detection using multiple three-dimensional scans

Document No.: 1555267 | Publication date: 2020-01-21

Abstract: This technology, "Object detection using multiple three-dimensional scans," was created by V·沙普德莱纳-科图雷 and M·S·B·希玛纳 on 2019-07-11. The present disclosure relates to object detection using multiple three-dimensional scans. One exemplary implementation disclosed herein facilitates detecting an object using multiple scans of the object under different conditions. For example, a first scan of the object may be created by capturing images of the object while moving an image sensor along a first path under a first condition (e.g., bright lighting). A second scan of the object may then be created by capturing additional images of the object while moving the image sensor along a second path under a second condition (e.g., dim lighting). Implementations determine a transformation that relates the scan data from these multiple scans to one another and use the transformation to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to the object in the single coordinate system and will therefore be displayed in the proper position regardless of the conditions under which the physical object is later detected.

1. A method, comprising:

at a device comprising a processor, a computer-readable storage medium, and an image sensor:

obtaining first scan data of a physical object under a first condition using the image sensor, the first scan data comprising images from a plurality of image sensor locations defined in a first coordinate system;

obtaining second scan data of the physical object under a second condition, different from the first condition, using the image sensor, the second scan data comprising images from a plurality of image sensor locations defined in a second coordinate system;

determining, via the processor, a transformation between the first coordinate system and the second coordinate system; and

generating a three-dimensional (3D) model of the physical object based on the first scan data, the second scan data, and the transformation.

2. The method of claim 1, wherein determining the transformation comprises matching an image of the first scan data with an image of the second scan data.

3. The method of claim 1, wherein determining the transformation comprises matching a plurality of images of the first scan data with a plurality of images of the second scan data.

4. The method of claim 1, wherein determining the transformation comprises determining a shift between a pose of the image sensor associated with a first image of the first scan data and a pose of the image sensor associated with a second image of the second scan data.

5. The method of claim 1, wherein determining the transformation comprises aligning a first point cloud associated with the first scan data with a second point cloud associated with the second scan data.

6. The method of claim 1, wherein the 3D model is a point cloud of points associated with descriptors.

7. The method of claim 6, wherein the point cloud comprises points having descriptors based on the first scan data and points having descriptors based on the second scan data.

8. The method of claim 6, wherein generating the 3D model comprises merging a first point having a descriptor based on the first scan data with a second point having a descriptor based on the second scan data.

9. The method of claim 8, further comprising determining to merge the first point and the second point based on a proximity of the first point to the second point.

10. The method of claim 1, wherein generating the 3D model comprises representing the first scan data and the second scan data in a single coordinate system.

11. The method of claim 10, further comprising associating augmentation content with the 3D model at a location relative to the single coordinate system.

12. The method of claim 1, further comprising:

obtaining image data via the image sensor;

detecting the physical object using the 3D model and the image data; and

aligning the 3D model with the physical object using the single coordinate system.

13. The method of claim 12, wherein detecting the physical object comprises matching the image data to a point cloud descriptor of the 3D model.

14. The method of claim 12, wherein detecting the physical object comprises determining a current pose of the image sensor relative to the 3D model.

15. The method of claim 12, further comprising displaying a computer-generated reality (CGR) environment depicting the physical object based on the image data and the augmented content, wherein the augmented content is positioned based on aligning the 3D model with the physical object using the single coordinate system.

16. The method of claim 1, wherein the first condition and the second condition are different lighting conditions.

17. The method of claim 15, wherein the first condition and the second condition comprise different object states of a portion of the physical object.

18. The method of claim 1, further comprising determining that a second scan is requested based on testing the first scan data.

19. A system, comprising:

a non-transitory computer-readable storage medium;

an image sensor; and

one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations comprising:

obtaining first scan data of a physical object under a first condition using the image sensor, the first scan data comprising images from a plurality of image sensor locations defined in a first coordinate system;

obtaining second scan data of the physical object under a second condition, different from the first condition, using the image sensor, the second scan data comprising images from a plurality of image sensor locations defined in a second coordinate system;

determining, via the one or more processors, a transformation between the first coordinate system and the second coordinate system; and

generating a three-dimensional (3D) model of the physical object based on the first scan data, the second scan data, and the transformation.

20. A non-transitory computer-readable storage medium storing program instructions executable on a computer to perform operations comprising:

obtaining first scan data of a physical object under a first condition using an image sensor, the first scan data comprising images from a plurality of image sensor locations defined in a first coordinate system;

obtaining second scan data of the physical object under a second condition, different from the first condition, using the image sensor, the second scan data comprising images from a plurality of image sensor locations defined in a second coordinate system;

determining, via a processor, a transformation between the first coordinate system and the second coordinate system; and

generating a three-dimensional (3D) model of the physical object based on the first scan data, the second scan data, and the transformation.

Technical Field

The present disclosure relates generally to detecting and tracking real-world physical objects depicted in images, and in particular to systems, methods, and devices for detecting and tracking such physical objects based on previous object scans.

Background

Various electronic devices include image sensors that capture images of real-world environments. For example, many mobile devices include image sensors that can be used to capture a sequence of frames (e.g., video frames) that are presented on a display of such devices or stored for subsequent viewing. Detecting and tracking objects appearing in such frames is desirable for various applications. Such detection and tracking may be facilitated by capturing images of the object and then using those images to detect the object in subsequent images. However, when the capturing condition does not match the detection condition, the object may not be detected and tracked. For example, if the illumination at the time of capture differs from the illumination at the time of detection, the object may not be detected.

Disclosure of Invention

Various implementations disclosed herein include devices, systems, and methods that use multiple scans of an object under different conditions. For example, a first scan of the object may be created by capturing images of the object while moving an image sensor along a first path under a first condition (e.g., bright lighting). A second scan of the object may then be created by capturing additional images of the object while moving the image sensor along a second path under a second condition (e.g., dim lighting). The start position, the movement, and the end position of the image sensor may differ between the first path and the second path. Thus, the coordinate systems of the two scans will likely differ from each other. This is undesirable for various applications such as augmentation. For example, if a user wants to augment an object with augmented content (e.g., defining a text information box to be displayed over the object), it is unclear which coordinate system should be used to place the augmentation. The user would likely need to perform the burdensome task of defining the augmentation with respect to each of the different coordinate systems associated with each of the scans. Implementations instead determine a transformation that relates the scan data from the multiple scans of the object to one another and thus to a common coordinate system.

Some implementations of the present disclosure relate to performing various operations on a computing device having a processor, memory, and an image sensor to facilitate object detection using multiple 3D scans. The device obtains first scan data of a physical object (e.g., a toy building block structure) under a first condition (e.g., dim lighting) using the image sensor. The first scan data may include images (e.g., keyframes) captured from multiple image sensor locations in a first coordinate system. The device also obtains second scan data of the physical object under a second condition (e.g., bright lighting) using the image sensor. The second condition differs from the first condition in one or more ways (e.g., lighting, object state, etc.). The second scan data may include images captured from a plurality of image sensor locations in a second coordinate system.
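
As a non-limiting illustration, the following Python sketch shows one way such per-scan data might be organized; the `Keyframe` and `ScanData` names, their fields, and the use of 4x4 camera-to-scan poses are illustrative assumptions and are not specified by this disclosure.

```python
# Hypothetical layout of per-scan data; names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Keyframe:
    image: np.ndarray  # H x W x 3 pixel data captured by the image sensor
    pose: np.ndarray   # 4 x 4 camera-to-scan transform in this scan's coordinate system


@dataclass
class ScanData:
    condition: str                                        # e.g., "bright lighting" or "dim lighting"
    keyframes: List[Keyframe] = field(default_factory=list)
```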

The device determines a transformation between the first coordinate system and the second coordinate system to facilitate object detection. In one implementation, the transformation is determined by matching one or more images (e.g., keyframes) of the first scan data with one or more images of the second scan data. In one implementation, the transformation is determined by determining a shift between a pose (e.g., position and orientation) of the image sensor associated with a first image of the first scan data and a pose of the image sensor associated with a second image of the second scan data. In one implementation, the transformation is determined by aligning a first point cloud associated with the first scan data with a second point cloud associated with the second scan data.
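
For the pose-shift approach, a minimal Python sketch follows; it assumes each keyframe stores a 4x4 camera-to-scan pose (as in the hypothetical `ScanData` above) and, for simplicity, that the two matched keyframes were captured from approximately the same physical viewpoint. In practice the relative camera pose between the matched keyframes would also be estimated and composed into the result.

```python
import numpy as np


def transform_from_matched_keyframes(pose_a: np.ndarray, pose_b: np.ndarray) -> np.ndarray:
    """Return a 4x4 transform mapping coordinates of scan B into scan A's coordinate system.

    pose_a: camera-to-scan pose of the matched keyframe in scan A.
    pose_b: camera-to-scan pose of the matched keyframe in scan B.
    Assumes both keyframes view the object from (approximately) the same physical
    viewpoint; otherwise the estimated relative pose between the two keyframes
    must also be composed into the result.
    """
    return pose_a @ np.linalg.inv(pose_b)
```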

After determining the transformation, the device generates a three-dimensional (3D) model of the physical object (e.g., a point cloud of points associated with descriptors) that includes the first scan data and the second scan data based on the transformation. In implementations in which the 3D model is a point cloud, the points may have descriptors that are based on both the first scan data and the second scan data. In one implementation, generating the 3D model involves merging a first point having a descriptor based on the first scan data with a second point having a descriptor based on the second scan data, e.g., based on a proximity of the first point to the second point.

The generated 3D model may represent both the first scan data and the second scan data in a single coordinate system. This facilitates various features including, but not limited to, improved use of augmented content, for example, in a computer-generated reality (CGR) environment. In one implementation, the augmented content is associated with the 3D model at a location relative to the single coordinate system. Thus, when an end user's device uses the 3D model to detect the object, the device may obtain image data via its image sensor, detect the physical object using the 3D model and the image data, and align the 3D model with the physical object using the single coordinate system. This process may involve matching the image data to a point cloud descriptor of the 3D model or determining a current pose of the image sensor relative to the 3D model. The end user's device may then display a CGR environment depicting the physical object based on the image data and the augmented content. For example, the augmented content may be positioned based on aligning the 3D model with the physical object using the single coordinate system.

Implementations disclosed herein also provide user interface features that facilitate capturing multiple scans to create a 3D model representing multiple different conditions. In one implementation, this involves enabling a user to create a 3D scan of an object under one condition; presenting a notification requesting, or an option to create, another scan; enabling the user to create an additional scan under another condition; and automatically determining a transformation between the coordinate systems of the first and second scans and generating a 3D model from both scans using the transformation.

According to some implementations, an apparatus includes one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in a non-transitory memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing or causing the performance of any of the methods described herein. According to some implementations, a non-transitory computer-readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. According to some implementations, an apparatus includes one or more processors, non-transitory memory, an image sensor, and means for performing or causing performance of any of the methods described herein.

Drawings

So that the present disclosure can be understood by those of ordinary skill in the art, a more particular description may be had by reference to certain illustrative implementations, some of which are shown in the accompanying drawings.

FIG. 1 is a block diagram illustrating an exemplary physical object in a real-world environment according to some implementations.

FIG. 2 is a block diagram of a mobile device displaying a computer-generated reality (CGR) environment of the exemplary physical object of FIG. 1, according to some implementations.

FIG. 3 is a block diagram illustrating a first scan of the example physical object of FIG. 1, according to some implementations.

FIG. 4 is a block diagram illustrating a second scan of the example physical object of FIG. 1, according to some implementations.

FIG. 5 is a block diagram illustrating differences in paths of an image sensor during the first scan of FIG. 3 and the second scan of FIG. 4, according to some implementations.

FIG. 6 is a block diagram illustrating an exemplary point cloud generated based on the first scan of FIG. 3, according to some implementations.

FIG. 7 is a block diagram illustrating an exemplary point cloud generated based on the second scan of FIG. 4, according to some implementations.

FIG. 8 is a block diagram illustrating an exemplary point cloud generated based on the first scan of FIG. 3 and the second scan of FIG. 4 using a transformation determined according to some implementations.

FIG. 9 is a block diagram illustrating exemplary components of an apparatus for generating a 3D model of a physical object and detecting the physical object, according to some implementations.

FIG. 10 is a flowchart representation of a method for generating a 3D model including scan data from multiple scans based on a determined transformation, according to some implementations.

In accordance with common practice, the various features shown in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. Additionally, some of the figures may not depict all of the components of a given system, method, or apparatus. Finally, throughout the specification and drawings, like reference numerals may be used to refer to like features.

Detailed Description

Numerous details are described in order to provide a thorough understanding of example implementations shown in the drawings. The drawings, however, illustrate only some example aspects of the disclosure and therefore should not be considered limiting. It will be apparent to one of ordinary skill in the art that other effective aspects or variations do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in detail so as not to obscure more pertinent aspects of the example implementations described herein.

FIG. 1 is a block diagram illustrating an exemplary physical object 105 in a real-world environment 100. While this example and other examples discussed herein show a 3D model of a single object 105, the techniques disclosed herein are also applicable to multiple objects as well as entire scenes and other real-world environments. The phrase "physical object" as used herein refers to any type of object or combination of objects in the real world, including, but not limited to, bricks, toys, statues, furniture, doors, buildings, pictures, paintings, sculptures, lights, signs, tables, floors, walls, desks, water areas, faces, human hands, human hair, other body parts, whole human bodies, animals or other organisms, clothing, paper, magazines, books, vehicles, machines or other man-made objects, and any other natural or man-made object or group of objects present in the real world that can be recognized and modeled.

FIG. 2 is a block diagram of a mobile device 205 displaying a computer-generated reality (CGR) environment 200 of the exemplary physical object 105 of FIG. 1. In this example, the device 205 captures one or more images of the physical object 105 and compares those images to a 3D model (e.g., previously captured keyframes, point cloud values, etc.) to detect and track the presence of the real-world object 105. The device 205 may determine the pose (e.g., position and orientation) of the physical object 105, for example, using RGB-D information, infrared camera-based depth detection, and other such techniques. Thus, after detecting the real-world object and determining its pose, the device 205 may align the 3D model of the object with the physical object in the coordinate system corresponding to the real-world space.

Given this alignment, the device is able to provide a CGR environment 200 that combines aspects of the real-world environment with augmented content. In this example, the CGR environment 200 includes a depiction 210 of the physical object 105 and augmented content 215, the augmented content 215 including a text bubble with the text phrase "IT'S MAGNETIC". The augmented content 215 is positioned relative to the depiction 210 of the physical object 105 based on the alignment of the physical object 105 with the 3D model. For example, the creator of the CGR environment 200 may have specified that the augmented content 215 is to be displayed at a location determined based on a fixed point at the center of a surface of the 3D model of the physical object 105. Once the 3D model is aligned with the physical object 105, the device 205 determines the appropriate location for the augmented content 215 and generates the CGR environment 200 for display.
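
As a non-limiting illustration of this placement step, the following Python sketch maps an augmentation anchor defined relative to the 3D model (in the model's coordinate system) into the current camera frame once the object's pose has been estimated; the function name and the use of a 4x4 object-to-camera pose are assumptions for the sketch, not details specified by the disclosure.

```python
import numpy as np


def augmentation_position_in_camera(anchor_in_model: np.ndarray,
                                    object_to_camera: np.ndarray) -> np.ndarray:
    """Map an augmentation anchor from the model's coordinate system into the camera frame.

    anchor_in_model: 3-vector, e.g., a fixed point at the center of a model surface.
    object_to_camera: 4x4 pose of the detected physical object in the current camera frame.
    """
    anchor_h = np.append(anchor_in_model, 1.0)     # homogeneous coordinates
    return (object_to_camera @ anchor_h)[:3]
```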

Implementations disclosed herein enable display of enhancements and other features as shown in fig. 2 even where the 3D model includes scan data from different scans of the physical object 105 associated with different coordinate systems. For example, a first scan of the physical object 105 may have been performed in dim lighting conditions, and a second scan of the physical object 105 may have been performed in bright lighting conditions and associated with a different coordinate system.

The coordinate system misalignment is shown in FIGS. 3 and 4. FIG. 3 is a block diagram illustrating a first scan 300 of the exemplary physical object 105 of FIG. 1, and FIG. 4 is a block diagram illustrating a second scan 400 of the exemplary physical object 105 of FIG. 1. In FIG. 3, the first scan 300 is performed by capturing images of the physical object 105 while moving a device (e.g., device 205 of FIG. 2) along a first path 315a-i. The first path includes the device at multiple image sensor poses 310a, 310b, 310c, 310d, 310e, 310f, 310g, 310i, 310j, from an initial pose 310a at the beginning of image recording to a final pose 310j at the end of recording. During the first scan 300, the device may track its own pose (e.g., via position and orientation changes tracked by one or more motion sensors on the device, inertial data, etc.). Thus, for the images captured at each of these poses, the relative positions of the data in the first 3D coordinate system are known. Image data from the images may be combined, based on the known poses, to generate a first 3D model of the physical object (e.g., the first point cloud shown in FIG. 6).

In FIG. 4, the second scan 400 is performed by capturing images of the physical object 105 while moving the device along a second path 415a-i. The second path includes the device at multiple image sensor poses 410a, 410b, 410c, 410d, 410e, 410f, 410g, 410i, 410j, from an initial pose 410a at the beginning of image recording to a final pose 410j at the end of recording. During the second scan 400, the device may track its own pose, and thus the relative positions of the data in the second 3D coordinate system are known for the images captured at each of these poses. The image data from the images may be combined, based on the known poses, to generate a second 3D model of the physical object (e.g., the second point cloud shown in FIG. 7). It should be noted that the poses determined for each scan are defined relative to the coordinate system selected for that particular scan (e.g., its origin and orientation), and the coordinate system used for each scan may be different. Thus, even if the real-world locations at which scans 300 and 400 are acquired are the same, the determined poses 310a-j and 410a-j may differ because they are defined with respect to different coordinate systems.
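
As a non-limiting illustration of how the tracked poses allow image data to be combined in a scan's coordinate system, the following Python sketch lifts the depth samples of one keyframe into that coordinate system; it assumes RGB-D style capture with known camera intrinsics (depth sensing is mentioned elsewhere in this disclosure), and the function name and argument layout are illustrative.

```python
import numpy as np


def backproject_keyframe(depth: np.ndarray, intrinsics: np.ndarray,
                         camera_to_scan: np.ndarray) -> np.ndarray:
    """Lift a keyframe's depth samples into the scan's coordinate system.

    depth: H x W depth map in meters (0 where invalid).
    intrinsics: 3x3 camera matrix (fx, fy, cx, cy).
    camera_to_scan: 4x4 pose of the camera for this keyframe, tracked by the device.
    Returns an (N, 3) array of points in the scan's coordinate system.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth > 0
    z = depth[valid]
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous camera-frame points
    points_scan = (camera_to_scan @ points_cam.T).T
    return points_scan[:, :3]
```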

FIG. 5 is a block diagram illustrating the difference between the paths of the image sensor during the first scan of FIG. 3 and the second scan of FIG. 4. The device often cannot accurately track its own pose changes between the first scan and the second scan. In some cases, the first scan and the second scan are separated by minutes, days, weeks, or even longer. In any case, the device cannot directly relate the image sensor poses of the first scan 300 to the image sensor poses of the second scan 400. The techniques disclosed herein address this deficiency by determining a transformation that relates the first and second scans, providing a practical and efficient way to relate the 3D models developed from these scans to one another in a single common coordinate system.

FIGS. 6 and 7 show the 3D models generated from the two scans. FIG. 6 is a block diagram illustrating an exemplary point cloud 600 generated based on the first scan of FIG. 3, and FIG. 7 is a block diagram illustrating an exemplary point cloud 700 generated based on the second scan of FIG. 4, according to some implementations.

FIG. 8 is a block diagram illustrating an exemplary point cloud generated from the first scan of FIG. 3 and the second scan of FIG. 4 using a transformation determined according to some implementations. In this example, a transformation is determined that aligns at least one image (e.g., a keyframe) from each of the scans. The transformation may specify a positional relationship (e.g., rotation r and translation t) between the image sensor poses associated with the keyframes. This allows all keyframes and associated 3D models (e.g., point clouds) of both scans to be aligned with one another in a single coordinate system. In the example of FIG. 8, the point clouds 600, 700 are aligned with each other using the coordinate system of the first scan. In other implementations, the coordinate system of the second scan or another common coordinate system is used.

The point cloud 800 of FIG. 8 may include points having descriptors from multiple scans and thus from a plurality of different conditions. For example, a first scan may include color values of points determined based on a scan of the physical object with the blinds open to allow bright lighting conditions, while a second scan may include color values of points determined based on a scan of the physical object with the blinds closed, or after the sun has set, to provide relatively darker lighting conditions. Similarly, the point cloud 800 may include points with descriptors from multiple scans reflecting different configurations or states of the physical object. For example, a physical object may have a panel that can be open or closed, and the points determined from the first scan may represent the physical object with the panel closed, while the points determined from the second scan may represent the physical object with the panel open.

To align the scans, at least one image (e.g., a keyframe) from each of the scans may be used to determine the transformation. In some implementations, one or more of the same features are detected in a keyframe of the first scan and a keyframe of the second scan. The 3D spatial positions of these same features in the respective coordinate systems are determined and related to one another in a common coordinate system. The feature locations are matched to determine the relative pose of the image sensor for the respective keyframes. In other words, by determining the transformation (e.g., rotation r, translation t) that aligns the same features across the two coordinate systems, the system can determine the appropriate transformation between the two coordinate systems. Using this transformation, the image sensor poses and the 3D models (e.g., point clouds) of the different scans can readily be aligned with one another in a common coordinate system.
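
As a non-limiting illustration, the following Python sketch estimates a rigid transformation (rotation r and translation t) from matched 3D feature locations expressed in the two scans' coordinate systems. It uses the standard SVD-based (Kabsch) closed-form solution; the disclosure does not prescribe a particular solver, so this is simply one well-known option, and it assumes at least three non-degenerate matches.

```python
import numpy as np


def estimate_rigid_transform(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
    """Estimate the 4x4 rigid transform mapping points_b onto points_a.

    points_a, points_b: (N, 3) arrays of matched feature locations, N >= 3,
    expressed in the first and second scans' coordinate systems respectively.
    """
    centroid_a = points_a.mean(axis=0)
    centroid_b = points_b.mean(axis=0)
    h = (points_b - centroid_b).T @ (points_a - centroid_a)   # cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:        # guard against a reflection solution
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = centroid_a - r @ centroid_b
    transform = np.eye(4)
    transform[:3, :3] = r
    transform[:3, 3] = t
    return transform
```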

In some implementations, the 3D model of the second scan is added to the 3D model of the first scan, e.g., by adding additional points, merging points, etc., as shown in FIG. 8. Thus, in this example, all 3D model data is included in a single model using the coordinate system of the first scan. The combined 3D model may then be used to detect physical objects in subsequently obtained images. Because the combined 3D model has points from two scans, and thus from two different conditions (e.g., bright and dim lighting) or configurations, the physical object in a subsequently obtained image can be better detected under either of these conditions. Furthermore, the additional data may make the 3D model generally more robust (e.g., better able to detect physical objects) even if the scans are not associated with different conditions or configurations.
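
A minimal Python sketch of this combining step follows, assuming the transformation from the second scan's coordinate system to the first scan's coordinate system has already been determined (e.g., by one of the approaches sketched above); the function name and array shapes are illustrative.

```python
import numpy as np


def combine_point_clouds(points_first: np.ndarray, points_second: np.ndarray,
                         second_to_first: np.ndarray) -> np.ndarray:
    """Express both scans' points in the first scan's coordinate system and concatenate them.

    points_first, points_second: (N, 3) and (M, 3) point arrays from the two scans.
    second_to_first: 4x4 transform mapping the second scan's coordinates into the first's.
    """
    ones = np.ones((points_second.shape[0], 1))
    second_h = np.hstack([points_second, ones])                   # homogeneous coordinates
    second_in_first = (second_to_first @ second_h.T).T[:, :3]     # re-expressed points
    return np.vstack([points_first, second_in_first])
```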

In some implementations, the device matches point cloud features between multiple scans to determine the transformation. In some implementations, the transformation is determined using a combination of keyframes and point cloud features. In some implementations, a machine learning model (e.g., a trained neural network) is applied to match the image or point cloud features or to otherwise determine the transformation.

In some implementations, a single feature match (e.g., one feature in one keyframe from each scan) is used to determine the transformation. In some implementations, multi-feature matching compares multiple features in a single keyframe of the first scan to matching features in a single keyframe of the second scan to determine the transformation. In some implementations, the transformation is determined using a plurality of features in a plurality of keyframes (e.g., two keyframes per scan, all keyframes per scan, etc.).

In some implementations, combining 3D models (e.g., point clouds) involves merging points of different models from different scans. For example, the same physical point may be represented by two different descriptors from two different scans. The system may estimate that these two points should be treated as a single point based on spatial proximity or descriptor similarity. For example, the system may merge points that are separated from each other in 3D space by less than a minimum threshold distance.
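
As a non-limiting illustration of proximity-based merging, the following Python sketch merges points from two aligned scans that lie within a threshold distance of one another, keeping both scans' descriptors on the merged point. The brute-force search, the averaging of positions, and the threshold value are illustrative assumptions; a spatial index and a descriptor-similarity test could be used instead.

```python
import numpy as np


def merge_nearby_points(points_a, descs_a, points_b, descs_b, threshold=0.005):
    """Merge points of two aligned point clouds that are closer than `threshold` meters.

    points_a, points_b: (N, 3) and (M, 3) arrays already expressed in a common coordinate system.
    descs_a, descs_b: per-point descriptors (one per point).
    Returns merged positions and, for each merged point, the list of contributing descriptors.
    """
    merged_points, merged_descs = [], []
    used_b = np.zeros(len(points_b), dtype=bool)
    for p, d in zip(points_a, descs_a):
        dists = np.linalg.norm(points_b - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < threshold and not used_b[j]:
            used_b[j] = True
            merged_points.append((p + points_b[j]) / 2.0)   # average the two positions
            merged_descs.append([d, descs_b[j]])            # keep descriptors from both scans
        else:
            merged_points.append(p)
            merged_descs.append([d])
    for j in range(len(points_b)):                          # keep unmatched points from scan B
        if not used_b[j]:
            merged_points.append(points_b[j])
            merged_descs.append([descs_b[j]])
    return np.asarray(merged_points), merged_descs
```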

Note that while FIGS. 1-8 illustrate the use of two scans, the techniques disclosed herein are also applicable to combining data from any number of scans. These techniques are useful in many contexts. For example, the techniques may facilitate creating a CGR environment that includes augmentations (e.g., augmented content) positioned relative to a physical object. The content creator is able to create a 3D model of the physical object using multiple scans for multiple conditions, define the augmentation positioned relative to the content in a single coordinate system, and trust that the augmentation will be displayed in the proper position relative to the model regardless of the conditions under which the end user's device detects the physical object during subsequent use of the CGR environment.

The devices used by the content creator (e.g., for image capture and 3D model creation) and the end user (e.g., for object detection using the 3D model) may be any of a variety of devices including a processor, a non-transitory computer-readable medium, and an image sensor. In some implementations, the device is a head-mounted device (HMD) worn by the content creator or end user. The HMD may encompass the field of view of its user. The HMD may include one or more CGR screens or other displays configured to display a CGR environment. In some implementations, the HMD includes a screen or other display for displaying the CGR environment in the field of view of the user. In some implementations, the HMD is worn such that the screen is positioned to display the CGR environment within the field of view of the user. In some implementations, the device is a handheld electronic device (e.g., a smartphone or tablet), laptop computer, or desktop computer configured to create a 3D model of a physical object and to facilitate creation or presentation of a CGR environment, e.g., for a content creator or to an end user. In some implementations, the device is a CGR chamber, enclosure, or room configured to present a CGR environment without the end user wearing or holding the device.

Fig. 9 is a block diagram illustrating exemplary components of an apparatus for generating a 3D model of a physical object and detecting the physical object, according to some implementations. In various implementations, these functions may be separated into one or more separate devices. While some specific features are shown, those skilled in the art will appreciate from the present disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the particular implementations disclosed herein. To this end, as non-limiting examples, in some implementations, device 205 includes one or more processing units 902 (e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, etc.), one or more input/output (I/O) devices and sensors 906, one or more communication interfaces 908 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, SPI, I2C, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 910, one or more displays 912, one or more internally or externally facing image sensors 914, memory 920, and one or more communication buses 904 for interconnecting these components and various other components.

In some implementations, the one or more communication buses 904 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 906 include at least one of a touch screen, soft keys, a keyboard, a virtual keyboard, buttons, knobs, joysticks, switches, dials, an Inertial Measurement Unit (IMU), accelerometers, magnetometers, gyroscopes, thermometers, one or more physiological sensors (e.g., a blood pressure monitor, a heart rate monitor, a blood oxygen sensor, a blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, or one or more depth sensors (e.g., structured light, time of flight, etc.), among others. In some implementations, movement, rotation, or position of the device 205 detected by one or more I/O devices and sensors 906 provides input to the device 205.

In some implementations, the one or more displays 912 are configured to present CGR content. In some implementations, the one or more displays 912 correspond to holographic, Digital Light Processing (DLP), Liquid Crystal Display (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistor (OLET), Organic Light Emitting Diode (OLED), surface-conduction electron emitter display (SED), Field Emission Display (FED), quantum dot light emitting diode (QD-LED), micro-electro-mechanical system (MEMS), or similar display types. In some implementations, the one or more displays 912 correspond to diffractive, reflective, polarizing, holographic, etc. waveguide displays. For example, device 205 includes a single display. As another example, device 205 includes a display for each eye. In some implementations, the one or more displays 912 are capable of presenting CGR content.

In some implementations, one or more image sensor systems 914 are configured to obtain image data corresponding to at least a portion of a local scene of device 205. The one or more image sensor systems 914 may include one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), RGB-D cameras, monochrome cameras, IR cameras, or event-based cameras, among others. In various implementations, the one or more image sensor systems 914 also include an illumination source, such as a flash, that emits light.

The memory 920 includes high speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some implementations, the memory 920 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 920 optionally includes one or more storage devices located remotely from the one or more processing units 902. The memory 920 includes a non-transitory computer-readable storage medium. In some implementations, memory 920 or a non-transitory computer readable storage medium of memory 920 stores the following programs, modules, and data structures, or a subset thereof, including an optional operating system module 930 and one or more application programs 940.

Operating system 930 includes procedures for handling various basic system services and for performing hardware-dependent tasks. In some implementations, the operating system 930 includes built-in CGR functionality, including, for example, a CGR environment creation feature or a CGR environment viewer configured to be invoked from the one or more application programs 940 to create or display a CGR environment within a user interface. The applications 940 include a scanning unit 942 configured to create scans of a physical object and to create a 3D model of the physical object based on the scans. The applications 940 further include a detection unit 944 configured to use such 3D models to detect the physical object in subsequently obtained images, for example, during presentation of a CGR environment.

FIG. 9 serves more as a functional description of the various features present in a particular implementation than as a structural schematic of the implementations described herein. As one of ordinary skill in the art will recognize, items shown separately could be combined and some items could be separated. For example, some functional blocks shown separately in FIG. 9 may be implemented in a single module, and the various functions of a single functional block may be implemented by one or more functional blocks in various implementations. The actual number of units, the division of particular functions, and how features are allocated among them will vary from one implementation to another and, in some implementations, will depend in part on the particular combination of hardware, software, or firmware chosen for a particular implementation.

Fig. 10 is a flowchart representation of a method for generating a 3D model including scan data from multiple scans based on a determined transformation, according to some implementations. In some implementations, the method 1000 is performed by an apparatus (e.g., the apparatus 205 of fig. 2 and 9). The method 1000 may be performed at a mobile device, HMD, desktop computer, laptop, or server device. In some implementations, the method 1000 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 1000 is performed by a processor executing code stored in a non-transitory computer readable medium (e.g., memory).

At block 1010, the method 1000 obtains first scan data of the physical object under a first condition using an image sensor. Such first scan data may be acquired using an image sensor such as a camera. In some implementations, the first scan data includes a sequence of sequentially acquired frames (e.g., video frames) or a set of images. The image data may include pixel data identifying the color, intensity, or other visual attributes captured by the image sensor. Some of the captured frames may be identified as keyframes, for example, using a keyframe selection technique that identifies keyframes based on criteria such as time since the last keyframe, distance change since the last keyframe, and so on.
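
A minimal Python sketch of such a keyframe selection heuristic follows; the specific thresholds, the function name, and the use of camera translation as the "distance change" criterion are illustrative assumptions rather than details specified by the disclosure.

```python
import numpy as np


def is_keyframe(pose: np.ndarray, timestamp: float,
                last_kf_pose, last_kf_time,
                min_interval_s: float = 0.5, min_translation_m: float = 0.10) -> bool:
    """Decide whether a newly captured frame should be kept as a keyframe.

    pose: 4x4 camera pose of the new frame; timestamp: capture time in seconds.
    last_kf_pose / last_kf_time: pose and time of the most recent keyframe (None if none yet).
    Keeps the frame if enough time has passed or the camera has moved far enough.
    """
    if last_kf_pose is None:
        return True
    elapsed = timestamp - last_kf_time
    moved = float(np.linalg.norm(pose[:3, 3] - last_kf_pose[:3, 3]))
    return elapsed >= min_interval_s or moved >= min_translation_m
```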

At block 1020, the method 1000 obtains second scan data of the physical object under a second condition using the image sensor. Obtaining the second scan is similar to obtaining the first scan; however, the positions of the image sensor during the scan may be different, and the conditions of the environment or physical object may be different. For example, the first condition and the second condition may be different lighting conditions or different object states of a portion of the physical object (e.g., different part positions or configurations).

Some implementations provide a user interface that facilitates acquisition of the first scan data, the second scan data, and additional scan data (if applicable). For example, in some implementations, the device is configured to request a second scan based on testing the first scan data (e.g., to determine whether the first scan data is sufficient to represent the physical object or various conditions). If not, the device presents a user interface element that provides an option (or other instruction) for performing the second scan. After receiving the second scan data, the device again checks whether the existing scan data is sufficient and, if not, prompts the user to obtain additional scan data, and so on until sufficient scan data is obtained. Whether additional scan data is desired or needed may be determined by evaluating the illumination associated with the scan data, for example, using an ambient light sensor, based on pixel values in the scan data, or using any other feasible technique. In some implementations, the criteria for suggesting or requiring additional scan data are predetermined. In other implementations, the criteria for suggesting or requiring additional scan data are specified by the user, e.g., by the content creator, based on the user's preferences or anticipated end-user conditions.
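
As a non-limiting illustration of such a sufficiency test, the following Python sketch checks whether the keyframes collected so far span both a dim and a bright lighting regime using mean pixel brightness; the thresholds and the brightness heuristic are illustrative assumptions (an ambient light sensor could be used instead, as noted above).

```python
import numpy as np


def should_request_additional_scan(keyframe_images,
                                   dim_threshold: float = 60.0,
                                   bright_threshold: float = 190.0) -> bool:
    """Return True if the existing scan data appears to cover only one lighting regime.

    keyframe_images: iterable of H x W x 3 uint8 images from the scans obtained so far.
    """
    means = [float(np.mean(img)) for img in keyframe_images]
    has_dim = any(m < dim_threshold for m in means)
    has_bright = any(m > bright_threshold for m in means)
    return not (has_dim and has_bright)   # request another scan until both regimes are covered
```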

At block 1030, the method 1000 determines a transformation between the first coordinate system and the second coordinate system. In some implementations, the transformation is determined by matching an image (e.g., a keyframe) of the first scan data with an image (e.g., a corresponding keyframe) of the second scan data. In some implementations, the transformation is determined by matching a plurality of images of the first scan data with a plurality of images of the second scan data. In some implementations, the transformation is determined by determining a shift between a pose (e.g., position and orientation) of the image sensor associated with a first image of the first scan data and a pose of the image sensor associated with a second image of the second scan data. In some implementations, determining the transformation involves a minimization or other optimization process. For example, the device may select a transformation that aligns the keyframes in a manner that minimizes differences between pixels of the physical object in the keyframes. In some implementations, the transformation is determined by aligning a first point cloud associated with the first scan data with a second point cloud associated with the second scan data. In some implementations, for example, when there are more than two scans, a transformation that aligns more than two coordinate systems is determined.

At block 1040, the method 1000 generates a 3D model of the physical object containing the first scan data and the second scan data based on the transformation. The 3D model may be a point cloud of points associated with the descriptors. The point cloud may include points having descriptors based on the first scan data and points having descriptors based on the second scan data. Generating such a point cloud may involve merging a first point having a descriptor based on the first scan data with a second point having a descriptor based on the second scan data, e.g., based on a proximity of the first point to the second point.

The generated 3D model may represent both the first scan data and the second scan data in a single coordinate system. This enables augmented content to be manually or automatically associated with the 3D model at a location relative to the single coordinate system to create a CGR environment. When the end user views the CGR environment, the augmented content is correctly positioned. In some implementations, the end user's device obtains image data via an image sensor (e.g., of the real world surrounding the end user), detects the physical object using the 3D model and the image data, and aligns the 3D model with the physical object using the single coordinate system. Detecting the physical object may involve matching the image data to a point cloud descriptor of the 3D model or determining a current pose of the image sensor relative to the 3D model. The end user's device may then display a CGR environment depicting the physical object based on the image data and the augmented content. The augmented content is positioned based on aligning the 3D model with the physical object using the single coordinate system.
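
As a non-limiting illustration of the detection step, the following Python sketch matches descriptors extracted from the current image against the point cloud descriptors of the 3D model using a brute-force nearest-neighbor search; the resulting 2D-3D correspondences could then feed a pose solver to align the model with the physical object. The function name, descriptor representation, and distance threshold are illustrative assumptions.

```python
import numpy as np


def match_image_descriptors(image_descs: np.ndarray, model_descs: np.ndarray,
                            max_distance: float = 0.7) -> list:
    """Match image feature descriptors to the 3D model's point cloud descriptors.

    image_descs: (N, D) descriptors extracted from the current camera image.
    model_descs: (M, D) descriptors stored with the points of the combined 3D model.
    Returns (image_index, model_point_index) pairs whose descriptor distance is small enough.
    """
    matches = []
    for i, d in enumerate(image_descs):
        dists = np.linalg.norm(model_descs - d, axis=1)   # distances to every model descriptor
        j = int(np.argmin(dists))
        if dists[j] < max_distance:
            matches.append((i, j))
    return matches
```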

A Computer Generated Reality (CGR) environment refers to a fully or partially simulated environment in which people sense and/or interact via an electronic system. In CGR, a subset of the human's physical movements, or a representation thereof, is tracked and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that complies with at least one law of physics. For example, the CGR system may detect head rotations of a person and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in the physical environment. In some cases (e.g., for accessibility reasons), adjustments to the characteristics of virtual objects in the CGR environment may be made in response to representations of physical motion (e.g., voice commands).

A person may utilize any of their senses to sense and/or interact with CGR objects, including vision, hearing, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides a perception of a point audio source in 3D space. As another example, an audio object may enable audio transparency that selectively introduces ambient sound from a physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.

Examples of CGR include virtual reality and mixed reality. A Virtual Reality (VR) environment refers to a simulated environment designed to be based entirely on computer-generated sensory input for one or more senses. The VR environment includes virtual objects that a person can sense and/or interact with. For example, computer-generated images of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with a virtual object in the VR environment through simulation of the presence of the person within the computer-generated environment, and/or through simulation of a subset of the physical movements of the person within the computer-generated environment.

In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a Mixed Reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or representations thereof, in addition to computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.

In some MR environments, computer-generated sensory inputs may be responsive to changes in sensory inputs from the physical environment. Additionally, some electronic systems for presenting an MR environment may track position and/or orientation relative to the physical environment to enable virtual objects to interact with real objects (i.e., physical objects from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary relative to the physical ground.

Examples of mixed reality include augmented reality and augmented virtuality. An Augmented Reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present the virtual objects on the transparent or translucent display such that the person perceives the virtual objects superimposed over the physical environment with the system. Alternatively, the system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system combines the images or video with the virtual objects and presents the combination on the opaque display. The person, using the system, indirectly views the physical environment via the images or video of the physical environment and perceives the virtual objects superimposed over the physical environment. As used herein, video of the physical environment displayed on an opaque display is referred to as "pass-through video," meaning that the system captures images of the physical environment using one or more image sensors and uses those images when rendering the AR environment on the opaque display. Further alternatively, the system may have a projection system that projects the virtual objects into the physical environment, for example, as a hologram or on a physical surface, such that the person perceives the virtual objects superimposed over the physical environment with the system.

An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, the system may transform one or more sensor images to apply a selected perspective (e.g., viewpoint) that is different from the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., magnifying) a portion thereof, such that the modified portion may be a representative but not realistic version of the originally captured image. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obscuring portions thereof.

An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people's faces are realistically reproduced from images taken of real people. As another example, a virtual object may adopt a shape or color of a physical object imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.

There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), display-integrated vehicle windshields, display-integrated windows, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mounted system may have one or more speakers and an integrated opaque display. Alternatively, a head-mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. A transparent or translucent display may have a medium through which light representing images is directed to a person's eyes. The display may utilize digital light projection, OLED, LED, µLED, liquid crystal on silicon, laser scanning light sources, or any combination of these technologies. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In one implementation, a transparent or translucent display may be configured to become selectively opaque. Projection-based systems may employ retinal projection techniques that project graphical images onto a person's retina. Projection systems may also be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.

Numerous specific details are set forth herein to provide a thorough understanding of claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.

Unless specifically stated otherwise, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," and "identifying" or the like, refer to the action and processes of a computing device, such as one or more computers or similar electronic computing devices, that manipulates and transforms data represented as physical electronic or magnetic quantities within the computing platform's memories, registers or other information storage devices, transmission devices or display devices.

The one or more systems discussed herein are not limited to any particular hardware architecture or configuration. The computing device may include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include a multi-purpose microprocessor-based computer system having access to stored software that programs or configures the computing system from a general-purpose computing device to a specific-purpose computing device implementing one or more implementations of the disclosed subject matter. The teachings contained herein may be implemented in software for programming or configuring a computing device using any suitable programming, scripting, or other type of language or combination of languages.

Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the above examples may be varied, e.g., the blocks may be reordered, combined, or divided into sub-blocks. Some blocks or processes may be performed in parallel.

The use of "adapted to" or "configured to" herein is meant to be an open and inclusive language that does not exclude devices adapted to or configured to perform additional tasks or steps. Additionally, the use of "based on" means open and inclusive, as a process, step, calculation, or other action that is "based on" one or more stated conditions or values may in practice be based on additional conditions or values beyond those stated. The headings, lists, and numbers included herein are for ease of explanation only and are not intended to be limiting.

It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.

The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.

As used herein, the term "if" may be interpreted to mean "when the prerequisite is true" or "in response to a determination" or "according to a determination" or "in response to detecting" that the prerequisite is true, depending on the context. Similarly, the phrase "if it is determined [that the prerequisite is true]" or "if [the prerequisite is true]" or "when [the prerequisite is true]" may be interpreted to mean "upon determining that the prerequisite is true" or "in response to determining" or "according to determining that the prerequisite is true" or "upon detecting that the prerequisite is true" or "in response to detecting" that the prerequisite is true, depending on the context.

The foregoing description and summary are to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined solely by the detailed description of the illustrative embodiments, but rather according to the full breadth permitted by the patent laws. It will be understood that the specific embodiments shown and described herein are merely illustrative of the principles of the invention and that various modifications can be implemented by those skilled in the art without departing from the scope and spirit of the invention.
