Stereoscopic vision with weakly aligned heterogeneous cameras

Document No.: 1832821    Publication date: 2021-11-12

Reading note: this technology, "Stereoscopic vision with weakly aligned heterogeneous cameras", was designed and created by 王祚官 and 单记章 on 2021-05-31. Its main content includes: a depth estimation method using heterogeneous cameras, the method comprising: homogenizing the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set, respectively, wherein the first camera image and the second camera image are distortion corrected and zoom compensated; determining an initial image pair correction transformation matrix of the homogenized first camera image and the homogenized second camera image; determining a delta image pair correction transformation matrix based on the initial image pair correction transformation matrix; determining a final image pair correction transformation matrix based on the initial image pair correction transformation matrix and the delta image pair correction transformation matrix, thereby obtaining a final corrected image pair; and performing disparity mapping on the final corrected image pair based on depth network regression.

1. A depth estimation method using two heterogeneous cameras, comprising:

homogenizing a first camera image and a second camera image based on a first camera calibration data set and a second camera calibration data set, respectively, wherein the first camera image and the second camera image are distortion corrected and zoom compensated;

determining an initial image pair correction transformation matrix of the homogenized first camera image and the homogenized second camera image;

determining a delta image pair correction transformation matrix based on the initial image pair correction transformation matrix;

determining a final image pair correction transformation matrix based on the initial image pair correction transformation matrix and the delta image pair correction transformation matrix, thereby obtaining a final corrected image pair; and

disparity mapping the final corrected image pair based on depth network regression.

2. The depth estimation method of claim 1, wherein the first camera and the second camera are weakly aligned.

3. The depth estimation method of claim 1, wherein distortion correction is performed before image pair correction.

4. The depth estimation method of claim 1, wherein zoom compensation is performed prior to image pair correction.

5. The depth estimation method of claim 1, wherein a neural network is used to regress the delta image pair correction transformation matrix as a drift of the initial image pair correction transformation matrix.

6. The depth estimation method of claim 1, wherein the final corrected image pair is disparity mapped using a neural network.

7. A depth estimation method using two heterogeneous cameras, comprising:

calibrating a first camera having a first camera calibration data set;

calibrating a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera;

performing distortion correction between a first camera image and a second camera image based on the first camera calibration data set and the second camera calibration data set;

performing focal length compensation between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set;

correcting an image pair between the distortion corrected and focal length compensated first camera image and second camera image based on transformation matrix regression; and

performing disparity mapping on the corrected image pair based on depth network regression.

8. The depth estimation method of claim 7, wherein the first camera and the second camera are weakly aligned.

9. The depth estimation method of claim 7, wherein distortion correction is performed before image pair correction.

10. The depth estimation method of claim 7, wherein focal length compensation is performed prior to image pair correction.

11. The depth estimation method of claim 7, wherein a neural network is used to regress a drift of the correction transformation matrix of the corrected image pair.

12. The depth estimation method of claim 7, wherein the corrected image pair is disparity mapped using a neural network.

13. A depth estimation method using two heterogeneous cameras, comprising:

calibrating a first camera having a first camera calibration data set;

calibrating a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera;

performing distortion correction between a first camera image and a second camera image based on the first camera calibration data set and the second camera calibration data set;

performing focal length compensation between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set;

performing disparity mapping on the distortion corrected and focal length compensated first camera image based on depth network regression;

determining an initial pose based on the first camera calibration data set and the second camera calibration data set;

performing pose mapping on the first and second camera images and the initial pose using a pose network, and outputting a delta pose;

warping the delta pose, the disparity map, and the first camera image; and

reconstructing the distortion corrected and focal length compensated second camera image from the warped first camera image to minimize a reconstruction error.

14. The depth estimation method of claim 13, wherein the pose mapping is performed online.

15. The depth estimation method of claim 13, wherein the pose mapping is tracked.

16. The depth estimation method of claim 13, wherein the first camera and the second camera are weakly aligned.

17. The depth estimation method of claim 13, wherein distortion correction is performed before image pair correction.

18. The depth estimation method of claim 13, wherein the focal length compensation is performed before image pair correction.

19. The depth estimation method of claim 13, wherein a neural network is used to regress a drift of the pose transformation matrix.

20. The depth estimation method of claim 13, wherein the first camera image is disparity mapped using a neural network.

Technical Field

The present disclosure relates to stereoscopic systems and to stereo vision utilizing weakly aligned heterogeneous cameras.

Background

Stereoscopic viewing may be used to recover depth information for a scene from two images taken from different perspectives. The depth information may be utilized by computer vision applications, including depth perception in autonomous driving. Stereo vision allows the distance of an object to be determined by triangulation based on epipolar geometry, where the distance is represented by a horizontal pixel shift (also referred to as a disparity map) between the left and right corrected images of an image pair.
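For illustration only, the triangulation relationship can be written as depth = focal length × baseline / disparity. The short sketch below applies that conversion to a disparity map; the focal length, baseline, and disparity values are hypothetical and are not taken from the present disclosure.

```python
import numpy as np

# Hypothetical calibration values for a rectified stereo pair.
focal_length_px = 1000.0    # focal length in pixels
baseline_m = 0.12           # distance between the two camera centers, in meters

# Example disparity map: horizontal pixel shift between the left and right images.
disparity_px = np.array([[20.0, 25.0],
                         [40.0, 80.0]])

# Triangulation: Z = f * B / d, valid where the disparity is positive.
depth_m = np.where(disparity_px > 0,
                   focal_length_px * baseline_m / disparity_px,
                   np.inf)
print(depth_m)   # larger disparity corresponds to a closer object
```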

Disclosure of Invention

A first example provides a method of depth estimation with two heterogeneous cameras, the method comprising at least one of: homogenizing a first camera image and a second camera image based on a first camera calibration data set and a second camera calibration data set, respectively, wherein the first camera image and the second camera image are distortion corrected and zoom compensated; determining an initial image pair correction transformation matrix of the homogenized first camera image and the homogenized second camera image; determining a delta (Δ) image pair correction transformation matrix based on the initial image pair correction transformation matrix; determining a final image pair correction transformation matrix based on the initial image pair correction transformation matrix and the delta image pair correction transformation matrix, thereby obtaining a final corrected image pair; and performing disparity mapping on the final corrected image pair based on depth network regression.

A second example provides a method of depth estimation with two heterogeneous cameras, the method comprising at least one of: calibrating a first camera having a first camera calibration data set; calibrating a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera; performing distortion correction between a first camera image and a second camera image based on the first camera calibration data set and the second camera calibration data set; performing focal length compensation between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set; correcting an image pair between the distortion corrected and focal length compensated first camera image and second camera image based on transformation matrix regression; and performing disparity mapping on the corrected image pair based on depth network regression.

A third example provides a method of depth estimation with two heterogeneous cameras, the method comprising at least one of: calibrating a first camera having a first camera calibration data set; calibrating a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera; performing distortion correction between a first camera image and a second camera image based on the first camera calibration data set and the second camera calibration data set; performing focal length compensation between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set; performing disparity mapping on the distortion corrected and focal length compensated first camera image based on depth network regression; determining an initial pose based on the first camera calibration data set and the second camera calibration data set; performing pose mapping on the first and second camera images and the initial pose using a pose network, and outputting a delta pose; warping the delta pose, the disparity map, and the first camera image; and reconstructing the distortion corrected and focal length compensated second camera image from the warped first camera image to minimize a reconstruction error.

Drawings

In the drawings:

FIG. 1 is a first example system diagram according to an embodiment of the present disclosure;

FIG. 2 is a second example system diagram according to an embodiment of the present disclosure;

FIG. 3 is an example logic flow of depth estimation from dual heterogeneous cameras according to one embodiment of the present disclosure;

FIG. 4 is an example logic flow for depth supervision for monocular depth estimation in accordance with one embodiment of the present disclosure;

FIG. 5 is a first example method of depth estimation according to one embodiment of this disclosure;

FIG. 6 is a second example method of depth estimation according to one embodiment of this disclosure; and

FIG. 7 is a third example method of depth estimation according to one embodiment of this disclosure.

Detailed Description

The following examples are set forth merely to illustrate the application of the apparatus and method and are not intended to limit the scope thereof. Equivalent modifications to such apparatus and methods are intended to be included within the scope of the claims.

Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, different companies may refer to a component and/or method by different names. This document does not intend to distinguish between components and/or methods that differ in name but not function.

In the following discussion and claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to". Likewise, the terms "coupled" or "coupling" are intended to mean either an indirect or a direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.

One current problem with stereo cameras is camera misalignment or camera mismatch, which may significantly affect 3D depth estimation. A stereo camera may be affected by various environmental factors, such as temperature changes, mechanical stress, and vibration. These factors may cause shifts in the baseline and in roll, pitch, and yaw, among other effects. Currently, to address these factors, stereo cameras may be fixed in dedicated stereo rigs, which may increase deployment costs and limit their practical use.

In modern vision-based systems, more than one camera is typically utilized to implement different functions. In one example, an autonomous driving system may have two or three front-facing cameras with different fields of view (FOV) in order to view objects at different distances. In the present application, heterogeneous cameras are applied to stereoscopic vision. These heterogeneous cameras may have different distortions or focal lengths compared to typical stereo camera systems. In addition, alignment between the cameras may not be mechanically enforced as in typical stereoscopic camera systems and may drift over time. In order to solve the problems involved in using heterogeneous cameras for stereoscopic systems, a solution is proposed in which image distortion correction and zoom compensation are performed based on the camera calibration results before image correction. In addition, the relative pose between the cameras is approximated as being substantially accurate and having a slight drift over time, wherein the drift tends to remain consistent over short time intervals. The proposed solution uses, in part, online pose correction to track and compensate for pose drift.
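As a rough illustration of this pre-processing (distortion correction followed by zoom compensation), the sketch below uses standard OpenCV calls; scaling by the focal-length ratio as the zoom compensation step is an assumption made for illustration, not necessarily the exact procedure of the present disclosure.

```python
import cv2

def homogenize(img, K, dist_coeffs, target_focal_px):
    """Distortion-correct one camera image and compensate its focal length (zoom).

    K               -- 3x3 intrinsic matrix from that camera's calibration data set
    dist_coeffs     -- lens distortion coefficients from the same calibration
    target_focal_px -- focal length, in pixels, of the reference camera
    """
    # Step 1: distortion correction using the intrinsic calibration.
    undistorted = cv2.undistort(img, K, dist_coeffs)

    # Step 2: zoom compensation -- scale by the focal length ratio so both
    # images appear to share the reference focal length (illustrative choice).
    scale = target_focal_px / K[0, 0]
    h, w = undistorted.shape[:2]
    return cv2.resize(undistorted, (int(round(w * scale)), int(round(h * scale))))
```

Applied to each camera image with its own calibration data set, this would yield the homogenized pair on which the subsequent image pair correction operates.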

A conventional stereoscopic vision system consists of two cameras of the same nature (homogeneous), and the depth is represented as a disparity map between the corrected images of the two cameras.

Furthermore, a solution for monocular depth estimation is proposed, in which only a single camera is used. Because ground truth depth is difficult to obtain, monocular depth is often trained using indirect information, such as triangulation between adjacent video frames. In that approach, pose variations between frames must be estimated, which is more challenging than regressing the pose drift of stably positioned cameras. Accordingly, the present application proposes a solution in which weakly aligned heterogeneous cameras provide depth supervision information when training a monocular depth estimator.

Fig. 1 depicts an example automated parking assist system 100 that may be used to implement a deep neural network associated with the operation of one or more portions or steps of processes 700 and 800. In this example, the processors associated with the hybrid system include a Field Programmable Gate Array (FPGA) 122, a Graphics Processor Unit (GPU) 120, and a Central Processing Unit (CPU) 118.

Processing units 118, 120, and 122 have the capability of providing a deep neural network. The CPU is a general purpose processor that can perform many different functions; this generality gives it the ability to perform a wide range of tasks, but its ability to process multiple parallel data streams is limited, as is its capability with respect to neural networks. The GPU is a graphics processor with many small processing cores capable of processing parallel tasks. The FPGA is a field programmable device that can be reconfigured and hardwired to perform any function that may be programmed into a CPU or GPU. Because an FPGA is programmed in the form of circuitry, it can be many times faster than a CPU and significantly faster than a GPU.

The system may also include other types of processors, such as an Accelerated Processing Unit (APU), which comprises on-chip CPU and GPU elements, and a Digital Signal Processor (DSP) dedicated to performing high-speed digital data processing. Application Specific Integrated Circuits (ASICs) may also perform the hardwired functions of an FPGA; however, the lead time to design and produce an ASIC is on the order of quarters of a year, rather than the fast turnaround available when programming an FPGA.

The graphics processor unit 120, the central processing unit 118, and the field programmable gate array 122 are connected to each other and to the memory interface controller 112. The FPGA is connected to the memory interface through a programmable logic circuit to memory interconnect 130. This additional device is used because the FPGA operates with a very large bandwidth, and to minimize the FPGA circuitry that is used to perform memory tasks. In addition, the memory interface and controller 112 is connected to a persistent storage disk 110, a system memory 114, and a Read Only Memory (ROM) 116.

The system of fig. 2 may be utilized to program and train the FPGA. The GPU works well with unstructured data and can be used for training; once the data has been trained and a deterministic inference model has been determined, the CPU can program the FPGA with the model data determined by the GPU.

The memory interface and controller are connected to a central interconnect 124, which is in turn connected to the GPU 120, the CPU 118, and the FPGA 122. The central interconnect 124 is also connected to an input and output interface 128, which is connected to a first camera 132 and a second camera 134, and to a network interface 126.

FIG. 2 depicts a second example hybrid computing system 200 that may be used to implement a neural network associated with the operation of one or more portions or steps of flow 500. In this example, the processors associated with the system include a Field Programmable Gate Array (FPGA) 210 and a Central Processing Unit (CPU) 220.

The FPGA is electrically connected to an FPGA controller 212, and the FPGA controller 212 interfaces with a Direct Memory Access (DMA) 218. The DMA is connected to an input buffer 214 and an output buffer 216, which are coupled to the FPGA to buffer data into and out of the FPGA, respectively. The DMA 218 has two first-in first-out (FIFO) buffers, one for the host CPU and one for the FPGA, allowing data to be written to and read from the appropriate buffer.

On the CPU side of the DMA is a main switch 228, which shuttles data and commands to the DMA. The DMA is also connected to a Synchronous Dynamic Random Access Memory (SDRAM) controller 224, which allows data to be shuttled between the FPGA, an external SDRAM 226, and the CPU 220. The main switch 228 is connected to a peripheral interface 230, and the peripheral interface 230 is connected to a first camera 232 and a second camera 234. A flash controller 222 controls persistent memory and is connected to the CPU 220.

Depth estimation for dual heterogeneous cameras

Fig. 3 depicts a first image 310 from a first camera having a first camera calibration data set and a second image 312 from a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera. The first image 310 undergoes distortion correction 314 and the second image 312 undergoes distortion correction 316. The distortion corrected first image undergoes zoom compensation 318 and the distortion corrected second image undergoes zoom compensation 320. The distortion correction and zoom compensation processes make the first image and the second image homogeneous. The distortion corrected and zoom compensated first image undergoes an initial first image correction 322, which utilizes a transformation matrix H1, thereby producing an initially corrected first image 326. The distortion corrected and zoom compensated second image undergoes an initial second image correction 324, which utilizes a transformation matrix H2, thereby producing an initially corrected second image 328. The initially corrected first image and the initially corrected second image undergo transformation matrix regression 330 to produce shifts ΔH1 and ΔH2, which result in a final first image correction 332 and a final second image correction 334. The final first image correction 332 and the final second image correction 334 are sent to a depth network 336 to produce a disparity map 338, from which depth information can be obtained.

To process images from heterogeneous cameras, the images are first homogenized by pre-processing before correction. The pre-processing consists of the following two steps: distortion correction 314, 316 and zoom compensation 318, 320. The drift of the correction transformation matrix H is regressed using a neural network.

The method can comprise the following steps:

1) calibrating the first camera and the second camera;

2) performing image distortion correction based on the intrinsic parameters obtained in step 1);

3) performing zoom compensation to compensate for any difference in focal length between the first camera and the second camera; for example, based on perspective geometry, the image in view 2 (312) may be projected onto view 1 (310) using the two focal lengths;

4) correcting the images; several methods may be utilized, for example, correction by camera calibration or by solving equations involving a fundamental matrix, among other methods; and

5) regressing the slight drift of the transformation matrix H with the correction network 330. The input to the correction network is the corrected image pair 326, 328 and the outputs are the shifts ΔH1 and ΔH2 (see the sketch below). The correction network may be trained end to end or trained independently. A disparity map 338 can be obtained from the corrected images by the depth network 336.
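The following is a hypothetical PyTorch sketch of step 5. The layer sizes, the parameterization of the drift as eight additive homography parameters per image, and the zero initialization are illustrative assumptions and not the actual correction network of the present disclosure.

```python
import torch
import torch.nn as nn

class CorrectionDriftNet(nn.Module):
    """Regresses small drifts ΔH1, ΔH2 of the correction homographies
    from an initially corrected image pair."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Eight free parameters per homography (the last entry stays fixed),
        # for two images, hence 16 outputs; zero init means "no drift" at start.
        self.head = nn.Linear(64, 16)
        nn.init.zeros_(self.head.weight)
        nn.init.zeros_(self.head.bias)

    def forward(self, corrected1, corrected2, H1_init, H2_init):
        # corrected1, corrected2: (B, 3, H, W) initially corrected images 326, 328
        # H1_init, H2_init:       (B, 3, 3) initial correction matrices H1, H2
        x = torch.cat([corrected1, corrected2], dim=1)
        delta = self.head(self.features(x).flatten(1))                  # (B, 16)
        zero = torch.zeros(delta.shape[0], 1, device=delta.device)
        dH1 = torch.cat([delta[:, :8], zero], dim=1).reshape(-1, 3, 3)
        dH2 = torch.cat([delta[:, 8:], zero], dim=1).reshape(-1, 3, 3)
        # Final correction matrices: initial estimate plus regressed drift.
        return H1_init + dH1, H2_init + dH2
```

The returned matrices would be used to re-warp the homogenized images before they are passed to the depth network; as noted in step 5, such a module could be trained end to end with the depth network or trained independently.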

Depth supervision for monocular depth estimation

Fig. 4 depicts a first image 410 from a first camera having a first camera calibration data set and a second image 412 from a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera. The first camera calibration data set and the second camera calibration data set are determined by an initial calibration 420. A depth network 414 regresses a disparity map 416 from the distortion corrected and focal length compensated first camera image. An initial pose 422 is based on the initial calibration 420. A pose network 418 receives the first image 410, the second image 412, and the initial pose, and determines a delta pose. The initial pose 422 and the delta pose are summed. The system then warps 426 the first image using the disparity map 416 and the summed initial and delta poses. The output of the warping 426 and the second image are compared so as to minimize a reconstruction error 428.

As shown in fig. 4, the overall logic of training the monocular depth estimator is similar to video-based training. The key difference is the pose estimation. Training based on video requires regressing the camera pose change P between adjacent frames, whereas here only the small delta change ΔP of stably positioned cameras needs to be estimated, which is much less challenging, and the resulting estimate is stable and accurate. The pose network takes the image pair and the initial pose P as inputs and outputs a pose drift ΔP. The final relative pose is the composition of the initial pose P and ΔP.
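As a small illustration of composing the initial pose with the regressed drift, the sketch below assumes both poses are expressed as 4x4 homogeneous rigid transforms; the representation and the numeric values are illustrative assumptions.

```python
import numpy as np

def compose_pose(P_init, delta_P):
    """Apply the regressed drift on top of the calibrated initial pose.
    Both arguments are 4x4 homogeneous transforms [R | t; 0 0 0 1]."""
    return delta_P @ P_init

# Hypothetical example: a 12 cm baseline from calibration plus a 1 mm drift.
P_init = np.eye(4)
P_init[:3, 3] = [0.12, 0.0, 0.0]
delta_P = np.eye(4)
delta_P[:3, 3] = [0.001, 0.0, 0.0]
P_final = compose_pose(P_init, delta_P)   # final relative pose used for warping
```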

With the depth map Z and the regressed pose P + ΔP = (R, t), view 2 can be reconstructed from view 1 by warping each pixel according to

p2 ~ K (R · Z(p1) · K^(-1) · p1 + t),

where K is the camera intrinsic matrix and p denotes pixel coordinates in homogeneous form.
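The following is a minimal NumPy sketch of the warping relation above; it only computes, for each pixel of view 1, the corresponding coordinates in view 2, and a practical implementation would additionally sample the image bilinearly at those coordinates (omitted here). The function and variable names are illustrative, not taken from the present disclosure.

```python
import numpy as np

def project_to_view2(depth, K, R, t):
    """Map each pixel of view 1 into view 2 using p2 ~ K (R * Z(p1) * K^-1 * p1 + t)."""
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    # Homogeneous pixel grid of view 1.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, N)
    # Back-project to 3D points in the view 1 camera frame, then move to view 2.
    cam1 = (K_inv @ pix) * depth.reshape(1, -1)
    cam2 = R @ cam1 + t.reshape(3, 1)
    # Project into view 2 and normalize the homogeneous coordinates.
    proj = K @ cam2
    return (proj[:2] / proj[2:]).T.reshape(h, w, 2)                     # (x, y) per pixel
```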

The training objective function for minimizing the reconstruction error is

L = ∑p |I1(p) - I2(p)| + α |ΔP|,

where the magnitude of ΔP is constrained and is expected to be very small. In this example, the depth network and pose network parameters are jointly optimized.
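A minimal PyTorch sketch of this objective follows, assuming the warped (reconstructed) image has already been synthesized from view 1; the weight alpha and the L1 penalty on ΔP are illustrative assumptions.

```python
import torch

def reconstruction_loss(img_warped, img_target, delta_pose, alpha=0.01):
    """L = sum_p |I1(p) - I2(p)| + alpha * |dP|: photometric error plus a
    penalty that keeps the regressed pose drift small."""
    photometric = torch.abs(img_warped - img_target).sum()
    drift_penalty = torch.abs(delta_pose).sum()
    return photometric + alpha * drift_penalty
```

During training, this loss would be minimized jointly over the depth network and pose network parameters, as stated above.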

Examples of the invention

Fig. 5 depicts a first example of depth estimation with two heterogeneous cameras, including homogenizing (510) a first camera image and a second camera image based on a first camera calibration data set and a second camera calibration data set, respectively, wherein the first camera image and the second camera image are distortion corrected and zoom compensated. The method includes determining (512) an initial image pair correction transformation matrix of the homogenized first camera image and the homogenized second camera image, and determining (514) a delta image pair correction transformation matrix based on the initial image pair correction transformation matrix. The method further includes determining (516) a final image pair correction transformation matrix based on the initial image pair correction transformation matrix and the delta image pair correction transformation matrix, thereby obtaining a final corrected image pair, and disparity mapping (518) the final corrected image pair based on depth network regression.

The first camera and the second camera may be weakly aligned, and distortion correction and/or focal length compensation may be performed prior to image pair correction. A neural network may be used to regress the drift of the image pair correction transformation matrix and/or to perform disparity mapping on the corrected image pair.

Fig. 6 depicts a second example of depth estimation utilizing two heterogeneous cameras, including calibrating (610) a first camera having a first camera calibration data set, and calibrating (612) a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera. The method includes performing distortion correction (614) between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set, and performing focal length compensation (616) between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set. The method further includes correcting (618) the image pair between the distortion corrected and focal length compensated first and second camera images based on transformation matrix regression, and disparity mapping (620) the corrected image pair based on depth network regression.

The first camera and the second camera may be weakly aligned, and distortion correction and/or focal length compensation may be performed prior to image pair correction. A neural network may be used to regress the drift of the image pair correction transformation matrix and/or to perform disparity mapping on the corrected image pair.

Fig. 7 depicts a third example of depth estimation with two heterogeneous cameras, including calibrating (710) a first camera having a first camera calibration data set, and calibrating (712) a second camera having a second camera calibration data set, the second camera having at least one of a different focal length, a different field of view, and a different number of pixels than the first camera. The method includes performing distortion correction (714) between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set, and performing focal length compensation (716) between the first camera image and the second camera image based on the first camera calibration data set and the second camera calibration data set. The method further includes disparity mapping (718) the distortion corrected and focal length compensated first camera image based on depth network regression and determining (720) an initial pose based on the first camera calibration data set and the second camera calibration data set. The method further includes pose mapping (722) the first and second camera images and the initial pose using a pose network, and outputting a delta pose. The method also includes warping (724) the delta pose, the disparity map, and the first camera image, and reconstructing (726) the distortion corrected and focal length compensated second camera image from the warped first camera image to minimize a reconstruction error.

The pose mapping may be performed online and tracked. The first camera and the second camera may be weakly aligned, and distortion correction and/or focal length compensation may be performed prior to image pair correction. A neural network may be used to regress the drift of the pose transformation matrix and/or to perform disparity mapping on the first camera image.

Those of skill in the art will appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., in a different order, or divided in a different manner) entirely without departing from the scope of the subject technology.

It should be understood that the specific order or hierarchy of steps in the processes disclosed is a schematic illustration of example methods. It should be understood that the particular order or hierarchy of steps in the processes may be rearranged based on design preferences. Some steps may be performed simultaneously. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. The foregoing description provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The term "some" refers to one or more, unless specifically stated otherwise. Pronouns in the masculine (e.g., his) include the feminine and neuter (e.g., her and its), and vice versa. Headings and subheadings (if any) are used for convenience only and do not limit the invention. The predicate words "configured to", "operable to", and "programmed to" do not imply any particular tangible or intangible modification to the subject matter, but rather are intended to be used interchangeably. For example, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation, or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code may be construed as a processor programmed to execute code or operable to execute code.

A phrase such as an "aspect" does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect may apply to all configurations, or to one or more configurations. An aspect may provide one or more examples. A phrase such as an "aspect" may refer to one or more aspects and vice versa. A phrase such as an "embodiment" does not imply that such embodiment is essential to the subject technology or that such embodiment applies to all configurations of the subject technology. A disclosure relating to an embodiment may apply to all embodiments, or to one or more embodiments. An embodiment may provide one or more examples. A phrase such as an "embodiment" may refer to one or more embodiments and vice versa. A phrase such as a "configuration" does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration may apply to all configurations, or to one or more configurations. A configuration may provide one or more examples. A phrase such as a "configuration" may refer to one or more configurations and vice versa.

The word "example" is used herein to mean "serving as an example or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs.

All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. Furthermore, to the extent that the terms "includes," "has," "having," "including," and the like are used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.

References to "one embodiment," "an embodiment," "some embodiments," "various embodiments," etc., indicate that a particular element or characteristic is included in at least one embodiment of the invention. Although these phrases may appear in various places, these phrases do not necessarily refer to the same embodiment. Those skilled in the art, in light of the present disclosure, will be able to design and incorporate any of a variety of mechanisms suitable for carrying out the functions described above.

It is understood that this disclosure teaches only one example of the illustrative embodiments and that many variations of the invention may be readily devised by those skilled in the art upon reading this disclosure and the scope of the invention is determined by the claims that follow.
