Device and method for realizing real-time positioning and mapping

Document No. 551875 | Publication date: 2021-05-14

Note: This technology, "Device and method for realizing real-time positioning and mapping", was created by Shi Xiaolong, Gao Tianhao, Peng Xiongfeng, Ma Kuan, Liu Zhihua, Wang Qiang, Jin Yuntai, Quan Mingdi, and Li Hongshuo on 2020-03-04. Its main content is as follows: An implementation device and method for real-time positioning and mapping are provided. The implementation device comprises: an image acquisition unit configured to acquire a color image and a depth image of a surrounding environment; an initial pose estimation unit configured to estimate an initial pose based on the color image and the depth image; a map construction unit configured to construct a three-dimensional map based on the depth image and the color image; and a pose determination unit configured to determine a final pose based on the initial pose and the three-dimensional map.

1. An implementation device for real-time positioning and mapping, comprising:

an image acquisition unit configured to acquire a color image and a depth image of a surrounding environment;

an initial pose estimation unit configured to estimate an initial pose based on the color image and the depth image;

a map construction unit configured to construct a three-dimensional map based on the depth image and the color image; and

a pose determination unit configured to determine a final pose based on the initial pose and the three-dimensional map.

2. The implementation apparatus of claim 1, wherein the map construction unit comprises:

a map reconstruction module configured to reconstruct an initial three-dimensional map based on a first depth image and a first color image of the surrounding environment;

a map update module configured to update the three-dimensional map based on a second depth image and a second color image of the surrounding environment.

3. The implementation apparatus of claim 1, wherein the initial pose estimation unit includes:

a feature extraction module configured to extract point features in the color image;

a feature matching module configured to perform point matching according to the point features;

an estimation module configured to estimate an initial pose using the matched point features,

wherein, when the number of matched point features is less than a first threshold, the feature extraction module further extracts line segment features in the color image, the feature matching module performs line segment matching according to the line segment features, and the estimation module estimates the initial pose using the matched point features and the line segment features.

4. The implementation apparatus of claim 3, wherein the feature matching module performs line segment matching according to geometric information around the line segment feature.

5. The implementation apparatus of claim 1, further comprising: a global picture tracking module configured to determine a key frame having a common viewpoint with a current frame among previous key frames,

wherein the initial pose estimation unit estimates an initial pose using the determined key frame based on the color image and the depth image.

6. An implementation method for real-time positioning and mapping, the method comprising:

acquiring a color image and a depth image of the surrounding environment;

estimating an initial pose based on the color image and the depth image;

constructing a three-dimensional map based on the depth image and the color image; and

determining a final pose based on the initial pose and the three-dimensional map.

7. The method of claim 6, wherein the step of constructing a three-dimensional map comprises:

reconstructing an initial three-dimensional map based on a first depth image and a first color image of the surrounding environment;

updating the three-dimensional map based on a second depth image and a second color image of the surrounding environment.

8. The method of claim 6, wherein the step of estimating an initial pose comprises:

extracting point features in the color image;

performing point matching according to the point features;

estimating an initial pose using the matched point features,

wherein, when the number of the matched point features is less than a first threshold, the step of estimating the initial pose further comprises: line segment features are extracted from the color image, line segment matching is performed according to the line segment features, and the initial pose is estimated using the matched point features and line segment features.

9. The method of claim 8, wherein the step of performing line segment matching according to the line segment features comprises: performing line segment matching according to geometric structure information around the line segment features.

10. The method of claim 6, further comprising: a key frame having a common viewpoint with the current frame is determined among the previous key frames,

wherein the step of estimating the initial pose comprises: estimating an initial pose using the determined keyframes based on the color images and the depth images.

11. The method of claim 8, further comprising: establishing a three-dimensional collinear constraint based on a matching result of the line segment features,

wherein the step of determining the final pose comprises: determining the final pose further according to the three-dimensional collinear constraint,

wherein the three-dimensional collinear constraint represents that a point on a line segment of a first frame lies on a corresponding line segment of a second frame.

12. The method of claim 6, wherein the step of determining the final pose comprises: determining a final pose using a preset set of frames based on the initial pose and the three-dimensional map,

wherein the preset frame set comprises: the plurality of tracking subsets obtained by dividing the tracking set related to the current frame.

13. The method of claim 6, wherein the step of determining the final pose comprises: setting a time domain window of a preset size for the current frame, and determining the final pose using the key frames within the time domain window based on the initial pose and the three-dimensional map.

14. A computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform the method of any of claims 6 to 13.

15. A system comprising at least one computing device and at least one storage device storing instructions that, when executed by the at least one computing device, cause the at least one computing device to perform the method of any of claims 6 to 13.

Technical Field

The present application relates to the field of real-time location and mapping (SLAM), and more particularly, to a device and method for implementing SLAM based on reconstructed maps.

Background

In related-art methods of constructing a three-dimensional map, information about the surrounding environment is generally acquired in real time using sensing devices mounted on a device, such as a camera, a laser sensor, and an inertial measurement unit, so that a three-dimensional map of the surrounding environment is constructed and information about the position and posture of the device in the world coordinate system is output in real time. This is called real-time positioning and mapping (SLAM).

Almost all SLAM systems are based on one of two frameworks: nonlinear optimization or statistical filtering. According to the hardware sensing devices used, SLAM systems can also be divided into laser-sensor, binocular/multi-view, monocular-plus-inertial-measurement, and monocular-plus-depth-camera systems, among others.

Laser sensing technology was commonly used in early SLAM systems; it has high precision and can obtain denser map points. However, because laser equipment is bulky, heavy, and costly, it is not suited to current lightweight SLAM applications (such as unmanned aerial vehicles and AR glasses), although it remains valuable in devices with low load requirements (such as sweeping robots and warehouse robots). Another technique is the binocular/multi-view SLAM system, which estimates approximate depth information of feature points in a scene from the disparity between two cameras and then further refines the front-end estimates during back-end optimization. Classical binocular SLAM techniques include OKVIS and ORB-SLAM, among others.

In addition, vision-based SLAM technology has gradually been applied to consumer-grade products. However, using a monocular camera in a SLAM system raises problems such as the need for a relatively fine initialization process and the difficulty of estimating the scale of the motion trajectory. Extensions of the monocular camera have therefore been studied intensively; the classical combinations are currently a monocular camera plus an inertial sensor, a monocular camera plus a depth camera, and some elaborate initialization algorithms. An inertial sensor is sensitive to the motion of the equipment, so it compensates well for the weaknesses of the monocular camera and allows the pose to be estimated accurately under fast motion or large rotation. A monocular camera plus a depth camera can roughly measure the three-dimensional position of the scene, so the scale of the motion trajectory can be recovered. Being light and portable, monocular cameras are widely applied in many consumer-grade products (such as unmanned aerial vehicles, AR glasses, and automatic food delivery robots).

However, most existing SLAM technologies include a nonlinear optimization part, and real-time performance and accuracy are not easily combined in nonlinear optimization: to guarantee real-time performance, many existing SLAM systems can only use measurement information from a short window of the recent past when performing nonlinear optimization, so it is difficult to obtain good positioning accuracy. In addition, some SLAM systems often need to work in familiar environments, such as warehouse robots and meal delivery robots, yet research on how to use previously built map information to progressively reduce the computational load of the system over time is relatively lacking. Many vision-based SLAM techniques use feature point extraction and feature point matching to initially track the position of the camera, but in scenes with relatively poor texture (e.g., glass rooms or rooms with many white walls), fewer feature points can be extracted and matched, which may degrade tracking accuracy and cause large drift errors.

Therefore, there is a need for a SLAM technique that can ensure real-time performance, reduce the amount of computation, and also ensure map accuracy.

The above information is provided merely as background to aid in understanding the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above is applicable as prior art with regard to the present disclosure.

Disclosure of Invention

According to an exemplary embodiment of the present invention, an apparatus for implementing real-time positioning and mapping is provided, which includes: an image acquisition unit configured to acquire a color image and a depth image of a surrounding environment; an initial pose estimation unit configured to estimate an initial pose based on the color image and the depth image; a map construction unit configured to construct a three-dimensional map based on the depth image and the color image; and a pose determination unit configured to determine a final pose based on the initial pose and the three-dimensional map.

The map building unit may include: a map reconstruction module configured to reconstruct an initial three-dimensional map based on a first depth image and a first color image of the surrounding environment; a map update module configured to update the three-dimensional map based on a second depth image and a second color image of the surrounding environment.

The initial pose estimation unit may include: a feature extraction module configured to extract point features in the color image; a feature matching module configured to perform point matching according to the point features; an estimation module configured to estimate an initial pose using the matched point features. The feature extraction module may further extract line segment features in the color image when the number of matched point features is less than a first threshold, the feature matching module may perform line segment matching based on the line segment features, and the estimation module may estimate the initial pose using the matched point features and the line segment features.

The feature matching module may perform line segment matching according to geometric structure information around the line segment features.

The implementation device may further include: a global picture tracking module configured to determine a key frame having a common viewpoint with the current frame among the previous key frames. The initial pose estimation unit may estimate an initial pose using the determined key frame based on the color image and the depth image.

The implementation device may further include: a point-line optimization combination module configured to establish a three-dimensional collinear constraint based on the matching result of the line segment features. The pose determination unit may also determine the final pose according to the three-dimensional collinear constraint. The three-dimensional collinear constraint may represent that a point on a line segment of a first frame lies on the corresponding line segment of a second frame.

The pose determination unit may determine the final pose using a preset frame set based on the initial pose and the three-dimensional map. The preset frame set may include: a plurality of tracking subsets obtained by dividing a tracking set related to the current frame.

The pose determination unit may set a time domain window of a predetermined size for the current frame, and determine a final pose using the keyframe within the time domain window based on the initial pose and the three-dimensional map.

According to another exemplary embodiment of the present invention, an implementation method for real-time positioning and mapping is provided, where the method includes: acquiring a color image and a depth image of the surrounding environment; estimating an initial pose based on the color image and the depth image; constructing a three-dimensional map based on the depth image and the color image; and determining a final pose based on the initial pose and the three-dimensional map.

The step of constructing a three-dimensional map may comprise: reconstructing an initial three-dimensional map based on a first depth image and a first color image of the surrounding environment; updating the three-dimensional map based on a second depth image and a second color image of the surrounding environment.

The step of estimating the initial pose may comprise: extracting point features in the color image; performing point matching according to the point features; and estimating the initial pose using the matched point features. When the number of matched point features is less than the first threshold, the step of estimating the initial pose may further include: extracting line segment features from the color image, performing line segment matching according to the line segment features, and estimating the initial pose using the matched point features and line segment features.

The step of performing line segment matching according to the line segment features may include: performing line segment matching according to geometric structure information around the line segment features.

The method may further comprise: determining a key frame having a common viewpoint with the current frame among the previous key frames, wherein the estimating of the initial pose may include: estimating an initial pose using the determined keyframes based on the color images and the depth images.

The method may further comprise: establishing a three-dimensional collinear constraint based on the matching result of the line segment features. The step of determining the final pose may comprise: determining the final pose further according to the three-dimensional collinear constraint. The three-dimensional collinear constraint may represent that a point on a line segment of a first frame lies on the corresponding line segment of a second frame.

The step of determining the final pose may comprise: determining the final pose using a preset frame set based on the initial pose and the three-dimensional map. The preset frame set may include: a plurality of tracking subsets obtained by dividing a tracking set related to the current frame.

The step of determining the final pose may comprise: setting a time domain window of a preset size for the current frame, and determining the final pose using the key frames within the time domain window based on the initial pose and the three-dimensional map.

According to an exemplary embodiment of the present invention, a computer-readable storage medium storing instructions is provided, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform the aforementioned method for real-time positioning and mapping.

According to an exemplary embodiment of the present invention, a system is provided comprising at least one computing device and at least one storage device storing instructions, wherein the instructions, when executed by the at least one computing device, cause the at least one computing device to perform the aforementioned method of implementing real-time positioning and mapping.

Advantageous effects

By applying the SLAM scheme according to the exemplary embodiment of the present invention, at least the following effects can be achieved:

because a map with spatial accuracy is reconstructed through 3D semantics when the SLAM system first enters a specific environment, and the map is fused and refined by calculating the spatial-temporal confidence each subsequent time it enters that environment, the accuracy of the map can be ensured; and in the back-end optimization processing, because the coordinates of the three-dimensional map points can be fixed so that only the pose (i.e., the position and posture) of the SLAM system device is optimized, the whole SLAM system is lighter;

when the SLAM system enters a scene area with poor texture, the line segment features of the scene can be extracted and matched, and three-dimensional point-line collinear constraint sub-terms are added in the back-end optimization, so that the whole SLAM system is more robust;

more effective constraints can be obtained by performing feature tracking on the global map, improving the positioning accuracy of the SLAM system; through the feature re-identification capability, spatial-domain matches are added on top of the original 2D-2D or 3D-2D matches in the time domain, and these visual measurements are added into the energy functions of local and global beam adjustment, further reducing the accumulated error;

when the global beam adjustment method is implemented, an originally long feature tracking set can be decomposed into a plurality of small feature tracking subsets, improving computational efficiency;

when the global beam adjustment method is implemented and a new key frame arrives, rather than optimizing the three-dimensional map points and the poses of all related key frames, only the key frames within a preset time domain window related to the new key frame are optimized, further improving the efficiency of back-end optimization.

Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the invention.

Drawings

The above and other aspects, features and advantages of particular embodiments of the present disclosure will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which:

fig. 1 is a block diagram illustrating a map reconstruction-based SLAM system according to an exemplary embodiment of the present invention.

Fig. 2 is a schematic diagram illustrating a process flow of a map reconstruction based SLAM system according to an exemplary embodiment of the present invention.

Fig. 3 is a block diagram illustrating an initial pose estimation unit in a map reconstruction based SLAM system according to an exemplary embodiment of the present invention.

Fig. 4 is a block diagram illustrating a map building unit in a map reconstruction based SLAM system according to an exemplary embodiment of the present invention.

Fig. 5 shows a schematic diagram of the generation of a line segment feature descriptor vector.

Fig. 6 shows a schematic diagram of a feature re-identification operation.

FIG. 7 shows a key frame screening diagram for the full graph trace module.

FIG. 8 illustrates a full graph trace flow diagram of the full graph trace module.

FIG. 9 illustrates a partitioning diagram of a feature tracking set.

FIG. 10 shows a time domain window based global beam adjustment diagram.

Fig. 11 shows a schematic diagram of the closed loop error cancellation process.

Fig. 12 is a flowchart illustrating a SLAM method based on map reconstruction according to an exemplary embodiment of the present invention.

Throughout the drawings, it should be noted that: the same reference numerals are used to designate the same or similar elements, features and structures.

Detailed Description

In order that those skilled in the art will better understand the present invention, exemplary embodiments thereof will be described in further detail below with reference to the accompanying drawings and detailed description.

Fig. 1 is a block diagram illustrating a map reconstruction based SLAM system 100 according to an exemplary embodiment of the present invention.

Referring to fig. 1, a map reconstruction based SLAM system 100 (hereinafter, simply referred to as "SLAM system 100") includes an image acquisition unit 110, an initial pose estimation unit 120, a map construction unit 130, a pose determination unit 140, and a storage unit 150.

The image acquisition unit 110 may acquire a color image and a depth image of the surrounding environment in which the SLAM system 100 is currently located, as a color image and a depth image of the current frame to be processed. Further, the "previous frame" mentioned below refers to a frame that has been processed by the SLAM system 100 according to the exemplary embodiment of the present invention and has the related information stored in the storage unit 150. The image acquisition unit 110 may include a monocular camera and a depth camera to obtain the color image and the depth image, respectively, however, it should be understood that the present application is not limited thereto, and the image acquisition unit 110 may be implemented using any other camera or combination of cameras (e.g., a binocular camera) capable of obtaining a depth image and a color image. The color image may be an RGB image or may be another type of color image (e.g., a grayscale image) capable of providing pixel grayscale information.

The initial pose estimation unit 120 may estimate the initial pose of the SLAM system 100 based on the color image and the depth image. In an exemplary embodiment of the present invention, the initial pose may be a relative pose, which will be explained in detail later with reference to fig. 3.

Fig. 2 is a schematic diagram illustrating a process flow of the map reconstruction based SLAM system 100 according to an exemplary embodiment of the present invention.

Referring to fig. 2, it can be seen that the main processing flow of the SLAM system 100 includes the following steps: inertial navigation data pre-integration, feature tracking, new feature extraction, feature re-identification, global beam adjustment, local beam adjustment, and the like.

In the inertial navigation data pre-integration processing, the relative attitude can be estimated from the acquired inertial navigation data; the input of this processing is the inertial navigation data, and the output is the relative attitude. In the feature tracking processing, the initially extracted features can be tracked; the input is the initially extracted features, and the output is the features tracked in the next frame. In the new feature extraction processing, features may be extracted from the image acquired by the image acquisition unit 110; the input is a color image, and the output is features. In the feature re-identification processing, features that were once extracted can be searched for again; the input is the original features and the pose result of the previous frame, and the output is the re-identified features (a specific implementation is explained later with reference to fig. 6). In the global beam adjustment processing, global nonlinear optimization can be carried out; the input is a feature matching set, and the output is a pose. In the local beam adjustment processing, local nonlinear optimization can be performed; the input is a feature matching set, and the output is a pose.
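Of these steps, the pre-integration of inertial navigation data is the most self-contained; a minimal sketch is given below, assuming gyroscope-only samples at a fixed rate and a simple small-step integration of angular velocity into a relative rotation. The sample data, time step, and function names are illustrative assumptions, not the formulation of the present disclosure.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate_gyro(gyro_samples, dt):
    """Accumulate angular-velocity samples (rad/s) into one relative rotation."""
    R = np.eye(3)
    for w in gyro_samples:
        R = R @ so3_exp(np.asarray(w) * dt)
    return R

# Example: a constant 0.5 rad/s yaw rate sampled at 100 Hz for one second.
R_rel = preintegrate_gyro([np.array([0.0, 0.0, 0.5])] * 100, dt=0.01)
```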

Fig. 3 is a block diagram illustrating the initial pose estimation unit 120 in the map reconstruction based SLAM system 100 according to an exemplary embodiment of the present invention.

Referring to fig. 3, the initial pose estimation unit 120 may include: a feature extraction module 121 configured to extract features in the color image acquired by the image acquisition unit 110; a feature matching module 122 configured to perform feature matching according to the features; and an estimation module 123 configured to estimate an initial pose of the SLAM system 100 using the matched features.

In an exemplary embodiment of the present invention, after the feature extraction module 121 completes the feature extraction, the feature matching module 122 may obtain the three-dimensional coordinates of each extracted feature in the camera coordinate system of the current frame according to the depth information of the depth image, then find its matching feature in another frame according to the local information description of the feature, and obtain the three-dimensional coordinates of the matching feature in the camera coordinate system of that other frame. In an exemplary embodiment of the present invention, the other frame may be a specific frame among the previous frames, which will be described in more detail later. The feature-related information obtained by the feature extraction module 121 and the feature matching module 122 may be stored in the storage unit 150 as related information for the current frame, to be used in subsequent processing.

For example only, the feature extraction module 121 may extract a point feature in a color image of a current frame, and the feature matching module 122 may perform point matching according to the point feature. More specifically, the feature matching module 122 may find a point feature matching the point feature extracted by the feature extraction module 121 in a specific frame among the previous frames. The estimation module 123 may then estimate the initial pose of the SLAM system 100 using the matched point features. Methods of feature matching using point features are known to those skilled in the art and therefore, for the sake of brevity, will not be described in detail herein.

Furthermore, when the environment in which the SLAM system 100 is located is relatively lacking in texture (e.g., in a glass room, a room with relatively uniform color, etc.), a relatively small number of matched point features can be obtained, which can result in a relatively large error or direct failure of the SLAM system 100. Therefore, preferably, in the exemplary embodiment of the present invention, when the number of point features matched by the feature matching module 122 is less than the first preset threshold (for example, 20), the feature extraction module 121 may further extract line segment features in the color image of the current frame, and the feature matching module 122 may further perform line segment matching according to the line segment features, that is, find line segment features matching the line segment features extracted by the feature extraction module 121 in the specific frame. At this time, the estimation module 123 may estimate the initial pose of the SLAM system 100 using both the matched point features and line segment features.

In an exemplary embodiment of the present invention, in order to perform feature matching using the extracted line segment features, a feature descriptor may be assigned to each line segment. Because the lack of environmental texture makes the local gray information around a line segment feature inefficient to use, in an exemplary embodiment of the present invention the line segment feature may instead be described according to the relative shape information between line segments. Specifically, any one of the line segments extracted from the color image may be used as a reference line segment, and the feature descriptor may then be established according to the relative positions and relative angles of all the remaining line segments in the color image with respect to this reference line segment. The coordinates of the midpoint of a line segment are used to represent its position. For example, for any line segment i, its descriptor, a vector v_i(k), can be expressed as:

v_i(k) = #{ q ≠ p_i : (q − p_i) ∈ bin(k) }    (1)

where bin(k) denotes the kth region among a plurality of regions in the color image, k is the region index, p_i denotes the coordinates of the midpoint of line segment i, and q denotes the coordinates of the midpoint of any line segment other than line segment i.

Fig. 5 shows a schematic diagram of the generation of a line segment feature descriptor vector. In FIG. 5, {v_1, v_2, …, v_n} denotes the line segment feature descriptor vectors in the 1st region, the 2nd region, …, and the nth region, respectively, of the 4 × 4 block regions of the color image shown in fig. 5.

The feature matching module 122 may perform line segment matching based on the geometric structure information around the line segment features. Specifically, the feature matching module 122 may calculate pairwise Euclidean distances between the line segment feature descriptor vectors of the current frame and those of the specific frame, thereby finding the best line segment matches. Here, in order to guarantee the correctness of the matching, in an exemplary embodiment of the present invention it may be required that an optimal line segment match be the mutual best match of the two line segments, each in its corresponding image.
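By way of illustration only, the descriptor of equation (1) and the mutual-best-match rule can be sketched as follows; the 4 × 4 binning, the normalization extent, and all function names are assumptions made for this sketch rather than details fixed by the present disclosure.

```python
import numpy as np

def segment_descriptor(midpoints, i, grid=4, extent=1.0):
    """v_i(k) of equation (1): count, per region bin(k), the midpoints of all
    line segments other than segment i, expressed relative to midpoint p_i."""
    p_i = midpoints[i]
    v = np.zeros(grid * grid)
    for j, q in enumerate(midpoints):
        if j == i:
            continue
        rel = (q - p_i) / extent                      # normalized relative position
        col = min(grid - 1, max(0, int((rel[0] + 1.0) / 2.0 * grid)))
        row = min(grid - 1, max(0, int((rel[1] + 1.0) / 2.0 * grid)))
        v[row * grid + col] += 1.0                    # one more midpoint in bin(k)
    return v

def mutual_best_matches(desc_a, desc_b):
    """Keep (i, j) only when i and j are each other's nearest descriptor in
    Euclidean distance, i.e. the mutual-best-match rule described above."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    best_ab, best_ba = d.argmin(axis=1), d.argmin(axis=0)
    return [(i, int(j)) for i, j in enumerate(best_ab) if best_ba[j] == i]

# Example: descriptors for the segments of two frames (midpoints as Nx2 arrays).
mids_a = np.array([[0.1, 0.2], [0.5, 0.5], [-0.3, 0.4]])
mids_b = np.array([[0.12, 0.18], [0.52, 0.49], [-0.28, 0.41]])
desc_a = np.stack([segment_descriptor(mids_a, i) for i in range(len(mids_a))])
desc_b = np.stack([segment_descriptor(mids_b, i) for i in range(len(mids_b))])
print(mutual_best_matches(desc_a, desc_b))
```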

In an exemplary embodiment of the present invention, since the depth image of the current frame may provide depth information, the estimation module 123 may estimate a pose change between the current frame and the specific frame using the matched point feature and/or line segment feature based on an Iterative Closest Point (ICP) method, thereby obtaining an initial pose of the SLAM system 100. That is, the initial pose of the SLAM system 100 obtained for the current frame is the relative pose with respect to that particular frame. Further, it should be understood that the present application is not limited thereto, and various other suitable methods may be used to estimate the initial pose of the SLAM system 100.
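By way of illustration only, the core of each ICP iteration, a closed-form rigid alignment of matched 3D-3D point pairs, can be sketched as follows; the SVD-based (Kabsch) solution shown is a generic formulation, not necessarily the exact ICP variant used by the SLAM system 100.

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form R, t minimizing sum ||R @ P[i] + t - Q[i]||^2 over matched
    3D point pairs (P from the specific frame, Q from the current frame)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # guard against reflections
    t = cQ - R @ cP
    return R, t
```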

Further, in an exemplary embodiment of the present invention, when the SLAM system 100 first enters a specific environment, the estimation module 123 may directly set the initial pose of the SLAM system 100 at that time as a zero vector.

Further, in an exemplary embodiment of the present invention, when the number of matched point features is very small, for example, when the number of point features in the specific frame matched with the point features extracted by the feature extraction module 121 is less than a second preset threshold (for example, 5), the relative pose estimated by point feature matching is no longer reliable; at this time, the estimation module 123 may determine the initial pose of the SLAM system 100 to remain the same as the relative pose estimated at the specific frame.

Further, in an exemplary embodiment of the present invention, the specific frame may be the previous frame of the current frame or one of several frames near it. However, if only the relationship with several frames near the current frame is used to establish constraints for the current frame, then when the SLAM system 100 moves quickly or rotates greatly, the number of frames that can establish contact with the current frame becomes smaller, possibly affecting the accuracy of the SLAM system 100. Therefore, preferably, the SLAM system 100 according to an exemplary embodiment of the present invention may further include a full graph tracking module (not shown). The full graph tracking module may find, among previous frames (e.g., previous key frames), co-view key frames that have a common viewpoint (e.g., a common-view feature point) with the current frame (i.e., the common viewpoint is visible in both the current frame and the co-view key frame, and each has a projection point corresponding to the common viewpoint); these co-view key frames can be used to establish further constraints for the current frame. In this case, the specific frame may further include the co-view key frames. That is, the feature matching module 122 may also find features in the co-view key frames that match the features (including point features and line segment features) extracted by the feature extraction module 121, so that the estimation module 123 may further estimate the initial pose using these matched features. The full graph tracking module will be described in detail below in conjunction with fig. 6, 7, and 8.

Fig. 6 shows a schematic diagram of a feature re-identification operation. FIG. 7 shows a key frame screening diagram for the full graph trace module. FIG. 8 illustrates a full graph trace flow diagram of the full graph trace module.

Referring to fig. 6, in the top left image a point feature with reference number 6993 is extracted, and it is tracked from the image of frame number 1035 to the image of frame number 1042; in the image of frame number 1043 the feature tracking is lost, because a ladder in that image blocks the feature. But in the image of frame number 1723 the feature is re-identified. The angle between the frame in which the feature was first extracted (i.e., frame number 1035) and the frame in which it was re-identified (i.e., frame number 1723) is 67.8 degrees, and the translation distance is approximately 3 meters. The frames numbered 1035 to 1042 share the common feature point 6993 with the frame numbered 1723, so they can also be used to establish constraints for frame number 1723. This increases the number of features that generate constraints between frames and finds more extensive and stable constraints for the current frame (i.e., finds more frames that can establish constraints with the current frame), which in turn can improve the positioning accuracy of the SLAM system 100.

Alternatively, since the initial pose estimation has already been performed using the previous frame or frames near the current frame, co-view key frames that are close in time to the current frame contribute little additional information; therefore, in an exemplary embodiment of the present invention, the initial pose estimation unit 120 may preferably select, for initial pose estimation, only co-view key frames whose time from the current frame exceeds a preset time threshold. This is further described later with reference to fig. 7.

When selecting the key frames matched with the current frame, key frame screening is needed to limit the computational burden, and the screening can be carried out using a combined temporal and spatial strategy.

For temporal filtering, by way of example only and referring to FIG. 7, key frames that are temporally close to the current frame, e.g., K_t, K_{t+1}, and K_{t+2} in the figure, already share many common viewpoints with the current frame, so adding a few more common viewpoints from them would not greatly influence the final precision. A time threshold can therefore be set so that matched features are searched for only in key frames that are far from the current frame in time, which improves computational efficiency. In addition, from a spatial perspective, a series of filtering operations, such as feature filtering, wide-baseline feature matching, and feature classification, is also required for the points on the key frames. When the key frames containing all points that are sensitive to spatio-temporal conditions (i.e., points that have been filtered for spatial and temporal conditions) have been processed, the key frame screening is complete.

The process of spatially screening points may be as follows:

first, points of instability can be filtered out. The inverse depth of each map point is iteratively optimized in the full map beam adjustment, so that the change in inverse depth of each map point over a period of time can be calculated, and if the change is too large or its inverse depth is less than zero, then this point is considered unstable and removed.

In addition, points at the edges of the image may also be removed. Here, using a preliminary estimate of the camera pose (e.g., the initial pose), the projected point of a map point will typically fall near the corresponding feature point on the current frame; if the calculated spatial distance between the projected point and the corresponding feature point is too large, the map point can be filtered out.

Then, a description distance (i.e., the distance between the descriptors) of the projected point and the corresponding feature point may be calculated, and when the description distance is less than a certain threshold, a feature re-identification operation may be performed.
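A condensed sketch of this spatial screening might look as follows; the threshold values and the map-point fields are assumptions made for the sketch.

```python
import numpy as np

def screen_points(points, max_inv_depth_change=0.1, max_pixel_dist=8.0):
    """points: dicts with 'inv_depth', 'prev_inv_depth', 'projected_uv',
    'feature_uv' (the last two as 2-element numpy arrays)."""
    kept = []
    for p in points:
        # instability filter: inverse depth drifted too much or went negative
        if p["inv_depth"] <= 0.0:
            continue
        if abs(p["inv_depth"] - p["prev_inv_depth"]) > max_inv_depth_change:
            continue
        # projection filter: projected point too far from its feature point
        if np.linalg.norm(p["projected_uv"] - p["feature_uv"]) > max_pixel_dist:
            continue
        kept.append(p)
    return kept
```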

Since points that are sensitive to spatio-temporal conditions tend to have too wide a baseline with the corresponding points in the current frame, which is prone to producing mismatches, the process described below in connection with fig. 8 may be used in an exemplary embodiment of the present invention to determine the co-view key frames.

By way of example only, as shown in FIG. 8, C_1, C_2, and C_3 are all key frames, and C_4 is the current frame. C_1, C_2, and C_3 are far from C_4, and the traditional method would not consider the constraints between C_1, C_2, C_3 and C_4. In the exemplary embodiment of the present invention, however, using the feature point P_1 (here, the feature point P_1 is a map point), which is visible in key frame C_1, the full graph tracking module (not shown) may determine whether key frame C_1 is a common-view key frame of the current frame C_4 through the following steps:

(1) project the feature point P_1 onto the current frame C_4 according to the initial relative pose relationship estimated by the initial pose estimation unit 120, and denote the projected point as feature point p_i;

(2) calculate, for feature points q_i near the feature point p_i in the current frame C_4, the local gray level difference d(p_i, q_i) between each q_i and the feature point p_i;

(3) find the set {q_i^k} of all feature points in the current frame C_4 whose local gray level difference from the feature point p_i is smaller than a set threshold, where k = 1, 2, 3, …, m, and m is the number of feature points found;

(4) compare the feature descriptor D(p_i) of the feature point p_i with the feature descriptor D(q_i^k) of each feature point q_i^k based on equations (2) and (3). Here, if the relative pose between key frame C_1 and the current frame C_4 is less than given thresholds (e.g., a relative rotation threshold T_ΔR = 45 degrees and a relative translation threshold T_ΔT = 2 meters), the ORB descriptor D(p_i) of the feature point p_i is compared directly with the ORB descriptor D(q_i^k) of each feature point q_i^k; otherwise, as in equation (3), the descriptor of each feature point q_i^k can first be re-projected (warped) onto the current frame, and the Hamming distances of the ORB descriptors are then compared;

(5) the point in the above set {q_i^k} whose ORB descriptor has the minimum Hamming distance is determined as the feature point matched with the feature point P_1; at this point, it can be determined that the key frame C_1 and the current frame C_4 share the common-view feature point P_1.


in the above description, the distance between the feature points is indicated using the hamming distance, but this is merely an example, and the distance between the feature points may be determined using other various distance representation methods.

In an exemplary embodiment of the present invention, when the initial pose estimation unit 120 obtains a plurality of initial poses using pose estimation by a plurality of frames (for example, using the above-described co-view key frame in addition to the previous frame of the current frame), the initial pose estimation unit 120 may determine statistical values (for example, an average value, a median value, and the like) of the obtained plurality of initial poses as the initial poses of the SLAM system 100.
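By way of illustration only, one such statistical combination can be sketched as follows; averaging rotations via a sign-aligned quaternion mean is an assumption of the sketch, since the disclosure only mentions an average value, a median value, and the like.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_poses(rotations, translations):
    """rotations: list of scipy Rotation objects; translations: list of
    3-vectors. Returns a single averaged (Rotation, translation) pair."""
    t_mean = np.mean(translations, axis=0)
    quats = np.array([r.as_quat() for r in rotations])
    quats[quats @ quats[0] < 0] *= -1.0        # align quaternion hemispheres
    q_mean = quats.mean(axis=0)
    q_mean /= np.linalg.norm(q_mean)           # renormalize the averaged quaternion
    return Rotation.from_quat(q_mean), t_mean
```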

Next, referring back to fig. 1, the map construction unit 130 may construct a three-dimensional map based on the depth image and the color image acquired by the image acquisition unit 110.

Fig. 4 illustrates a structure of the map building unit 130 in the map reconstruction based SLAM system 100 according to an exemplary embodiment of the present invention.

As shown in fig. 4, the map construction unit 130 may include a map reconstruction module 131, a map storage module 132, and a map update module 133 according to an exemplary embodiment of the present invention.

The map reconstruction module 131 may reconstruct an initial three-dimensional map based on a first depth image and a first color image of a specific environment when entering the specific environment for the first time, and the map update module 133 may then update the three-dimensional map based on a second depth image and a second color image of the specific environment when entering the specific environment again.

More specifically, when the SLAM system 100 first enters a specific environment, a three-dimensional map may be reconstructed by the map reconstruction module 131 based on a depth image and a color image acquired when the SLAM system 100 first enters the specific environment, and the three-dimensional coordinates of each map point on the reconstructed three-dimensional map (i.e., the three-dimensional coordinates in the world coordinate system) and the spatial accuracy thereof may be determined.

For example only, map reconstruction module 131 may reconstruct a three-dimensional map using three-dimensional semantic reconstruction based on depth information included in the depth image and grayscale information included in the color image. In the process of reconstructing a three-dimensional map using three-dimensional semantic reconstruction by the map reconstruction module 131, in addition to determining the three-dimensional coordinates of each map point on the three-dimensional map, the spatial accuracy of each map point may be determined. The spatial accuracy may indicate a reprojection error of the calculated map points, and the calculation regarding the spatial accuracy is described in detail below. Furthermore, it should be understood that the above-listed three-dimensional semantic reconstruction method is only one of the ways to implement the three-dimensional map reconstruction process described above, and the present application is not limited thereto, and any other suitable three-dimensional map reconstruction method known in the art may also be used to reconstruct a three-dimensional map and determine the spatial accuracy of each map point.

After completing the three-dimensional map reconstruction, the map reconstruction module 131 may store the three-dimensional map with spatial accuracy in the map storage module 132. Here, since the spatial-temporal confidence is not obtained when the SLAM system 100 first enters a specific environment, the spatial-temporal confidence of the map point on the three-dimensional map at that time may be set to null and stored.

Thereafter, when the SLAM system 100 reenters the specific environment, the three-dimensional map may not need to be reconstructed again, but the map updating module 133 calculates the three-dimensional coordinates and the spatial temporal confidence thereof of each map point based on the depth image and the color image acquired when the SLAM system 100 reenters the specific environment, and updates the three-dimensional coordinates and the spatial temporal confidence thereof of the corresponding map point on the three-dimensional map stored in the map storage module 132 according to the calculated three-dimensional coordinates and the spatial temporal confidence thereof of each map point. The updated three-dimensional map may be used subsequently as the three-dimensional map constructed by the map construction unit 130.

Here, for a specific map point on the three-dimensional map, its spatial-temporal confidence generally decreases with the passage of time, and the speed of the decrease generally correlates with the three-dimensional semantic type of that map point. In an exemplary embodiment of the present invention, the spatial-temporal confidence C_c of the specific map point may be determined by formula (4), which combines a spatial term based on the reprojection error E_proj of the map point, weighted by a spatial weight ω_1, with a temporal term based on the elapsed time t_1 − t_0, weighted by a temporal weight ω_2, where t_1 denotes the current system time of the SLAM system 100 and t_0 denotes the initial system time of the SLAM system 100.

The reprojection error E_proj of the specific map point during three-dimensional semantic reconstruction can be calculated by equation (5):

E_proj = ‖x̂ − π(ε, X)‖    (5)

where x̂ denotes the two-dimensional coordinates of the projected point corresponding to the specific map point; ε denotes the pose of the SLAM system 100 in the world coordinate system, which can be obtained based on the initial pose estimated by the initial pose estimation unit 120; X denotes the three-dimensional coordinates of the specific map point in the world coordinate system; and π is a function that converts the input variables into two-dimensional coordinates.

Thus, the three-dimensional coordinates of the specific map point and its spatial-temporal confidence can be updated as in equations (6) and (7), respectively. Equation (6) fuses the stored three-dimensional coordinates X_p with the newly calculated coordinates X_c after transforming X_c into the world coordinate system using the initial pose P_r of the SLAM system 100 estimated by the initial pose estimation unit 120, and equation (7) updates the confidence as:

C_p′ = 0.5 × (C_p + C_c)    (7)

where X_p and C_p are the three-dimensional coordinates of the specific map point stored in the map storage module 132 and its spatial-temporal confidence, X_c is the current three-dimensional coordinate of the specific map point calculated by the map update module 133, and X_p′ and C_p′ are the updated three-dimensional coordinates of the specific map point and its updated spatial-temporal confidence, respectively.

After calculating the above X_p′ and C_p′, they can be used to replace the X_p and C_p stored in the map storage module 132, respectively, thereby updating the three-dimensional map stored in the map storage module 132.
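A sketch of this update is given below. The confidence update follows equation (7) as stated; because the body of equation (6) is not recoverable from the text, the coordinate update shown is an assumed symmetric fusion that preserves only what the text states, namely that X_c is transformed into the world coordinate system using P_r and fused with X_p.

```python
import numpy as np

def update_map_point(X_p, C_p, X_c_cam, C_c, R_r, t_r):
    """X_p, C_p: stored world coordinates and confidence; X_c_cam: newly
    calculated coordinates in the camera frame; (R_r, t_r): initial pose P_r."""
    X_c_world = R_r @ X_c_cam + t_r        # bring the new measurement to world frame
    X_p_new = 0.5 * (X_p + X_c_world)      # assumed fusion rule (equation (6) lost)
    C_p_new = 0.5 * (C_p + C_c)            # equation (7) as stated
    return X_p_new, C_p_new
```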

In the illustration of fig. 4, the map storage module 132 is shown as a separate module, however, it should be understood that the map storage module 132 may also be integrated with the map update module 133 as one module, by way of example only.

Through the above-described update process of the map update module 133, each time the SLAM system 100 enters the same environment it only needs to update the stored three-dimensional map rather than reconstructing it, which reduces the amount of computation required to obtain the three-dimensional map and also makes the map increasingly accurate, so that in subsequent use of the three-dimensional map the map points can be fixed and only the pose of the SLAM system 100 optimized.

Next, referring back to fig. 1, the pose determination unit 140 according to an exemplary embodiment of the present invention may determine the final pose of the SLAM system 100 based on the estimated initial pose of the SLAM system 100 and the constructed three-dimensional map. In an exemplary embodiment of the present invention, the pose determination unit 140 may determine the final pose of the SLAM system 100 using a preset frame set based on the initial pose and the three-dimensional map. The determined final pose is a pose in a world coordinate system, and the determined final pose may be stored in the storage unit 150.

In an exemplary embodiment of the invention, the final pose may be determined using a global beam adjustment method. Specifically, the pose determination unit 140 may perform global beam adjustment to determine the final pose of the SLAM system 100 based on the following equations (8) and (9):

ψ* = argmin Σ_{i∈K} Σ_{j∈P} ‖e_ij‖    (8)

e_ij = x_ij − π(ε_iw, X_wj)    (9)

In the above equations (8) and (9), K is a preset frame set that includes the current frame; for example, it may be a set including key frames and the current frame. P denotes the set of feature points extracted by the feature extraction module 121. e_ij denotes the reprojection error of the jth feature point in the set P on the ith frame in the set K; when the jth feature point has no projection on the ith frame, e_ij may be set to 0. x_ij denotes the two-dimensional coordinates of the projected point of the jth feature point on the ith frame. ε_iw denotes the pose of the SLAM system 100 in the world coordinate system for the ith frame. X_wj denotes the three-dimensional coordinates of the jth feature point in the world coordinate system and is based on the three-dimensional map provided by the map construction unit 130.

Based on the optimal solution obtained from equation (8), the pose of the SLAM system 100 corresponding to each frame in the set K (i.e., the pose of the SLAM system 100 at the time each frame was obtained) can be obtained. Thus not only can the final pose of the SLAM system 100 be determined, but the previously determined poses for the other frames can also be updated with the newly obtained poses, so the positioning accuracy of the SLAM system 100 can be constantly improved.

Further, in solving for the optimal solution of equation (8), the initial pose determined by the initial pose estimation unit 120 may serve as a reference for ε_iw (e.g., ε_iw may be initially set to the initial pose and adjusted based on it in subsequent calculations) in order to speed up the computation.
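By way of illustration, the optimization of equations (8) and (9) with fixed map points can be sketched as follows; the rotation-vector pose parameterization and the use of scipy's generic least-squares solver are choices made for the sketch, not details of the present disclosure.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ba_residuals(params, observations, points_w, K):
    """observations: (frame_idx, point_idx, observed_uv) triples; points_w:
    fixed 3D map points X_wj; params: 6 values per frame (rotation vector and
    translation), i.e. the poses eps_iw being optimized."""
    res = []
    for i, j, uv in observations:
        rvec = params[i * 6:i * 6 + 3]
        t = params[i * 6 + 3:i * 6 + 6]
        Xc = Rotation.from_rotvec(rvec).apply(points_w[j]) + t
        uvw = K @ Xc
        res.extend(uv - uvw[:2] / uvw[2])   # e_ij = x_ij - pi(eps_iw, X_wj)
    return np.asarray(res)

def run_global_ba(initial_poses, observations, points_w, K):
    """initial_poses: (num_frames, 6) array seeded from the initial pose
    estimates, which serve as the reference that speeds up convergence."""
    x0 = np.asarray(initial_poses, dtype=float).ravel()
    sol = least_squares(ba_residuals, x0, args=(observations, points_w, K))
    return sol.x.reshape(-1, 6)
```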

In an exemplary embodiment of the present invention, the key frames may be frames selected according to a preset rule from among the plurality of frames stored in the storage unit 150, for example, the 1st frame, the 5th frame, the 9th frame, and so on, selected at a preset interval, or may even be all stored frames.

In the above global beam adjustment for determining the final pose of the SLAM system 100 based on equations (8) and (9), for example, for the jth feature point of the current frame, all common-view frames that have the corresponding map feature point of the jth feature point as a common-view feature point (i.e., on which that map feature point can be seen) can be taken as one feature tracking set (i.e., the preset frame set) related to the jth feature point and participate in the calculation of the final pose as a whole (for example, in the present invention, the aforementioned full graph tracking module (not shown) can be used to find such common-view frames), thereby maintaining global consistency. However, this takes a long time and cannot satisfy the real-time requirements of the SLAM system 100; sometimes the accuracy of the SLAM system 100 even has to be sacrificed to ensure real-time performance.

Alternatively, when performing global beam adjustment, the pose determination unit 140 may set the preset frame set to include a plurality of tracking subsets obtained by dividing the feature tracking set related to the current frame. More specifically, the pose determination unit 140 may divide each feature tracking set associated with each feature of the current frame into one or more feature tracking subsets, respectively, and determine the final pose of the SLAM system 100 by performing global beam adjustment based on each feature tracking subset. That is, the preset frame set may include one or more feature tracking subsets obtained by dividing each feature tracking set related to each feature of the current frame, respectively. This is described below with reference to fig. 9.

FIG. 9 illustrates an example diagram of the partitioning of a feature tracking set.

Referring to FIG. 9, by way of example only, assume that C_1 to C_4 are common-view frames related to the jth feature point of the current frame, and K is a common-view key frame related to the jth feature point of the current frame. That is, the corresponding map feature point of the jth feature point can be seen in the common-view frames C_1 to C_4 and in the common-view key frame K, and the feature tracking set T_j for the jth feature point is composed of the common-view key frame K and the common-view frames C_1 to C_4.

In an exemplary embodiment of the present invention, referring to fig. 9, the pose determination unit 140 may divide T_j into subsets T_j1, T_j2, T_j3, and T_j4, where T_j1 is composed of the common-view key frame K and the common-view frames C_1 to C_3; T_j2 is composed of K and C_2 to C_4; T_j3 is composed of K and C_1, C_3, and C_4; and T_j4 is composed of K and C_1, C_2, and C_4.

Thus, when the final pose of the SLAM system 100 is calculated using the global beam adjustment method as in equation (8), the set of frames involved in the sub-term associated with the jth feature point in equation (8) is no longer T_j but rather the subsets T_j1, T_j2, T_j3, and T_j4; by calculating the corresponding sub-terms based on these four shorter subsets, the computational burden can be effectively reduced.

Furthermore, in an exemplary embodiment of the present invention, whether a feature tracking set is divided may be determined based on the number of frames in it: for example, a feature tracking set may be divided into several feature tracking subsets of a preset size only when its number of frames exceeds a preset threshold, such that the number of frames in each feature tracking subset is less than or equal to the preset threshold, each feature tracking subset includes the common-view key frame, and the frames in all feature tracking subsets together cover all the frames in the feature tracking set.
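An illustrative sketch of such a division is given below; the overlap scheme between consecutive subsets is an assumption of the sketch.

```python
def split_tracking_set(keyframe, frames, max_size=4):
    """Divide one feature tracking set (common-view key frame + common-view
    frames) into subsets of at most max_size frames, each containing the key
    frame, with consecutive subsets overlapping by one frame for continuity."""
    if len(frames) + 1 <= max_size:
        return [[keyframe] + list(frames)]     # short enough: no division needed
    body = max_size - 1                        # slots left once the key frame is in
    step = max(1, body - 1)
    subsets = []
    for start in range(0, len(frames), step):
        chunk = list(frames[start:start + body])
        if chunk:
            subsets.append([keyframe] + chunk)
        if start + body >= len(frames):
            break
    return subsets

# Example from FIG. 9: T_j = {K, C1, C2, C3, C4} splits into {K, C1, C2, C3}
# and {K, C3, C4} under this scheme (the figure's four subsets reflect a
# different, unspecified overlap pattern).
print(split_tracking_set("K", ["C1", "C2", "C3", "C4"], max_size=4))
```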

Furthermore, in applications of the SLAM system 100 with a long motion trajectory, a considerable number of key frames accumulate for back-end optimization, so the SLAM system 100 cannot work efficiently. Therefore, in another exemplary embodiment of the present invention, the pose determination unit 140 may set a time domain window of a predetermined size for the current frame and determine the final pose of the SLAM system 100 using the frames within the time domain window (e.g., the key frames within the time domain window, which may include co-view key frames and may also include non-co-view key frames). That is, the aforementioned preset frame set may also include the key frames within a time domain window determined by setting a time domain window of a predetermined size for the current frame. FIG. 10 shows a time domain window based global beam adjustment diagram. As shown in fig. 10, the time domain window for the current frame may include co-view key frames or non-co-view key frames.

Alternatively, the pose determination unit 140 may further set a time domain window of a predetermined size for the current frame and use it to screen the frames in the feature tracking subsets generated by the foregoing processing: frames outside the time domain window are removed, and global bundle adjustment is then performed based on the screened feature tracking subsets to determine the final pose of the SLAM system 100. The computational load can thereby be further reduced and the image processing efficiency improved. A sketch of this screening step is given below.
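
As an illustration only, the following Python sketch filters tracking subsets by a time domain window; the Frame type, the use of integer frame indices as timestamps, and the names screen_subsets and window_size are assumptions made for the example:

from dataclasses import dataclass

@dataclass
class Frame:
    index: int          # position of the frame in the trajectory
    is_key_frame: bool  # whether the frame was selected as a key frame

def screen_subsets(subsets, current_index, window_size):
    """Keep only frames inside the time domain window ending at the current frame."""
    lower = current_index - window_size
    screened = []
    for subset in subsets:
        kept = [f for f in subset if f.index > lower]
        if kept:                      # drop subsets emptied by the screening
            screened.append(kept)
    return screened

# Example: keep only frames from the last 10 frames before frame 100.
subsets = [[Frame(95, True), Frame(80, False)], [Frame(60, False)]]
print(screen_subsets(subsets, current_index=100, window_size=10))
# [[Frame(index=95, is_key_frame=True)]]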

However, when there is a loop in the motion trajectory of the SLAM system 100, processing only the key frames within the time domain window instead of all key frames may result in the loop not being closed, as shown in fig. 11 (a). In an exemplary embodiment of the present invention, closed-loop detection may be used to close the loop, and the pose difference produced thereby may be propagated back along the trajectory in sequence to ensure that the motion trajectory is smooth, as shown in fig. 11 (a) to 11 (d). The closed-loop error elimination process shown in fig. 11 is known to those skilled in the art and will not be described in further detail here for the sake of brevity; a simplified sketch is nevertheless given below.
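
Purely to illustrate the back-propagation of the pose difference mentioned above (the exact scheme is left to those skilled in the art), the following sketch distributes a loop-closure correction linearly along the trajectory; correcting only the translational part of 4x4 homogeneous poses is a simplifying assumption:

import numpy as np

def distribute_loop_correction(poses, correction_t):
    """Spread a loop-closure translation error linearly along the trajectory.

    poses:        list of 4x4 homogeneous camera-to-world matrices
    correction_t: 3-vector, translation needed to close the loop at the last pose
    Only the translational part is corrected here; a full scheme would also
    interpolate the rotational part (e.g. via the SE(3) logarithm map).
    """
    n = len(poses)
    corrected = []
    for i, T in enumerate(poses):
        w = i / (n - 1) if n > 1 else 1.0   # 0 at loop start, 1 at loop end
        T_new = T.copy()
        T_new[:3, 3] += w * correction_t    # shift translation proportionally
        corrected.append(T_new)
    return corrected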

Further, the detailed processing by which the pose determination unit 140 determines the final pose of the SLAM system 100 has been described above taking point features as an example. However, when line segment features are also extracted by the initial pose estimation unit 120, the global bundle adjustment may further take the line segment features into consideration.

Specifically, the SLAM system according to an exemplary embodiment of the present invention may further include a point-line optimization combination module (not shown) that may establish a three-dimensional collinear constraint based on the line segment matching result of the feature matching module 132, so that the pose determination unit 140 may also determine the final pose according to the three-dimensional collinear constraint. For example only, a line segment constraint sub-term corresponding to the three-dimensional collinear constraint may be added to the aforementioned global bundle adjustment method. Here, the three-dimensional collinear constraint indicates that a point on a line segment of a first frame lies on the corresponding line segment of a second frame. More specifically, line segment matches between two frames are obtained from the line segment descriptor vectors, and for any pair of matched line segments the three-dimensional coordinates of both segments in their respective camera coordinate systems are available; thus, when an endpoint (a spatial three-dimensional point) of one line segment is transformed into the other camera coordinate system through the frame pose, the endpoint should always lie on the line segment matched with it.
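
For illustration only, a minimal sketch of such a collinear residual is given below; the function name collinear_residual and the use of a point-to-line distance as the penalty are assumptions about one plausible formulation, not the patent's prescribed one:

import numpy as np

def collinear_residual(p_a, line_b, T_ba):
    """Distance from a transformed endpoint to its matched 3D line.

    p_a:    3-vector, endpoint of a line segment in camera frame A
    line_b: pair (q0, q1) of 3-vectors, the matched segment in camera frame B
    T_ba:   4x4 homogeneous transform taking frame-A points into frame B
    Returns the perpendicular distance of the transformed point to the
    infinite line through the matched segment (0 when perfectly collinear).
    """
    p_b = (T_ba @ np.append(p_a, 1.0))[:3]   # transform endpoint into frame B
    q0, q1 = line_b
    d = (q1 - q0) / np.linalg.norm(q1 - q0)  # unit direction of matched segment
    v = p_b - q0
    return np.linalg.norm(v - np.dot(v, d) * d)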

In other words, when the final pose of the SLAM system 100 is calculated as in equation (8), the reprojection errors involved in the sub-terms on the right side of the equation include not only the reprojection errors of the point features but also the reprojection errors of the line segment features. Here, the manner of calculating the reprojection error of a line segment feature is similar to that of calculating the reprojection error of a point feature shown in equation (9) and is known to those skilled in the art, and thus will not be described in detail here.
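
As one common formulation offered purely for illustration (the details are left to those skilled in the art above), the reprojection error of a line segment can be taken as the distance of the projected 3D endpoints to the observed 2D line; the symbols below are assumed notation:

\[
e_{\text{line}} = d\!\left(\pi(\xi, P_1),\, l\right) + d\!\left(\pi(\xi, P_2),\, l\right),
\qquad
d(u, l) = \frac{\left| l^{\top} \tilde{u} \right|}{\sqrt{l_1^{2} + l_2^{2}}}
\]

where P_1 and P_2 are the 3D endpoints of the segment, \pi projects a 3D point into the image under pose \xi, l = (l_1, l_2, l_3)^T is the observed image line in homogeneous form, and \tilde{u} denotes u in homogeneous coordinates.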

Further, the above description has been given in terms of the global bundle adjustment method, but the present invention is not limited thereto, and other methods capable of determining the pose of the apparatus may also be used.

Fig. 12 is a flowchart illustrating a SLAM method based on map reconstruction according to an exemplary embodiment of the present invention.

Referring to fig. 12, at step 1210 of the method, a color image and a depth image of the surroundings of the SLAM system 100 may be acquired using the image acquisition unit 110.

Then, in step 1220, the initial pose estimation unit 120 may be used to estimate the initial pose of the SLAM system 100 based on the color image and the depth image obtained in step 1210.

More specifically, in step 1220, the initial pose estimation unit 120 may extract point features in the color image, perform point matching according to the point features, and then estimate the initial pose of the SLAM system 100 using the matched point features.

When the number of matched point features is less than a first preset threshold, the initial pose estimation unit 120 may further extract line segment features in the color image, and perform line segment matching according to the line segment features. At this time, the initial pose estimation unit 120 may estimate the initial pose of the SLAM system 100 using the matched point feature and line segment feature.

Further, when the number of matched point features is less than a second preset threshold (where the second preset threshold is less than the first preset threshold), the initial pose estimation unit 120 may directly determine the initial pose of the SLAM system 100 to be the same as the pose of the SLAM system 100 at a specific frame.

Here, the specific frame may be the previous frame or one of several frames near the current frame. A sketch of the whole fallback cascade of step 1220 is given below.
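
Purely as an illustration, the fallback cascade of step 1220 can be sketched in Python as follows; the helper names (extract_points, match_points, extract_lines, match_lines, solve_pose) and both threshold values are hypothetical stand-ins for the system's feature extraction, matching and estimation modules, not identifiers from the patent:

def estimate_initial_pose(color, depth, prev_pose,
                          first_threshold=50, second_threshold=15):
    """Point matches alone, then points plus line segments, then fallback.

    extract_points, match_points, extract_lines, match_lines and
    solve_pose are hypothetical helpers standing in for the feature
    extraction, feature matching and estimation modules.
    """
    points = match_points(extract_points(color))
    if len(points) >= first_threshold:
        return solve_pose(points, depth)               # enough point matches alone
    if len(points) >= second_threshold:
        lines = match_lines(extract_lines(color))
        return solve_pose(points, depth, lines=lines)  # too few points: add line segments
    return prev_pose                                   # almost no matches: reuse a recent frame's pose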

Optionally, although not shown in the flowchart of fig. 12, the method may further include an operation of determining common-view key frames. In an exemplary embodiment of the present invention, the aforementioned global picture tracking module (not shown) may be used to determine, among previous key frames, a common-view key frame having a common viewpoint with the current frame. Thus, in step 1220 the initial pose estimation unit 120 may further estimate the initial pose using the common-view key frame, thereby enhancing the accuracy of the SLAM system 100.

Then, in step 1230, a three-dimensional map may be constructed using the map construction unit 130 based on the depth image and the color image.

The map construction unit 130 may reconstruct an initial three-dimensional map based on a first depth image and a first color image of a specific environment when entering the specific environment for the first time, and then update the three-dimensional map based on a second depth image and a second color image of the specific environment when entering the specific environment again.

More specifically, when the SLAM system 100 enters a specific environment for the first time, the map construction unit 130 may reconstruct a three-dimensional map based on the depth image and color image acquired by the image acquisition unit 110 at that time, determine the three-dimensional coordinates of each map point on the reconstructed three-dimensional map together with its spatial accuracy, and store the three-dimensional coordinates of each map point along with its spatial accuracy and spatio-temporal confidence. Here, since no spatio-temporal confidence is available when the SLAM system 100 first enters the specific environment, the spatio-temporal confidence of the map points on the three-dimensional map may at that time be set to null and stored.

Thereafter, when the SLAM system 100 re-enters the specific environment, the map construction unit 130 may calculate the three-dimensional coordinates of each map point and its spatio-temporal confidence based on the depth image and color image acquired by the image acquisition unit 110 upon re-entry, and update the stored three-dimensional coordinates and spatio-temporal confidence of the corresponding map points accordingly. The updated three-dimensional map may subsequently be used as the three-dimensional map constructed by the map construction unit 130. A sketch of such a confidence-weighted update is given below.
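
For illustration only, one plausible confidence-weighted update of a stored map point is sketched below; the blending rule and the handling of a null stored confidence are assumptions, since the update formula is not specified here:

import numpy as np

def update_map_point(stored_xyz, stored_conf, new_xyz, new_conf):
    """Blend a stored map point with a re-observed one by confidence.

    stored_conf may be None on the first revisit (it was stored as null
    when the environment was first entered); in that case the new
    observation simply replaces the stored one.
    """
    if stored_conf is None:
        return np.asarray(new_xyz), new_conf
    total = stored_conf + new_conf
    blended = (stored_conf * np.asarray(stored_xyz) +
               new_conf * np.asarray(new_xyz)) / total
    return blended, total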

At step 1240, the pose determination unit 140 may be used to determine the final pose of the SLAM system 100 based on the initial pose of the SLAM system 100 estimated at step 1220 and the three-dimensional map constructed at step 1230.

Here, as described above, the pose determination unit 140 may determine the final pose using the preset frame set based on the initial pose and the three-dimensional map. The preset frame set may include a plurality of feature tracking subsets obtained by dividing the feature tracking set related to the current frame. Alternatively, the pose determination unit 140 may set a time domain window of a predetermined size for the current frame and determine the final pose using the key frames within the time domain window based on the initial pose and the three-dimensional map.

Preferably, after the extraction and matching of line segment features in step 1220, the method may further use the point-line optimization combination module (not shown) to establish a three-dimensional collinear constraint based on the line segment matching result, so that the pose determination unit 140 may further use the three-dimensional collinear constraint in determining the final pose. The accuracy of the SLAM system 100 can thereby be further improved.

Finally, in step 1250, the information related to the current frame, including the final pose of the SLAM system 100, may be stored by the storage unit 150.

The operations performed by the respective constituent components of the SLAM system 100 according to the exemplary embodiment of the present invention in the respective steps of fig. 12 have been described in detail above in conjunction with fig. 1 to 11, and thus, for the sake of brevity, a repeated description will not be made herein.

The SLAM system 100 and the method thereof according to the exemplary embodiments of the present invention can ensure real-time map construction, reduce the amount of computation, and still ensure map accuracy. Further, it should be understood that the exemplary embodiments set forth in the description are merely examples shown for convenience of understanding; the present application is not limited thereto, and any equivalents, alternatives, and modifications that do not depart from the spirit and scope of the inventive concept may be considered to fall within the scope of the present invention.

The exemplary embodiments of the present invention can also be embodied as computer readable codes on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer-readable recording medium include: read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the internet via a wired or wireless transmission path). The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. In addition, functional programs, codes, and code segments for accomplishing the present invention can be easily construed by programmers of ordinary skill in the art to which the present invention pertains within the scope of the present invention.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
