Structure-from-motion method suitable for large-scale scenes

Document No.: 551861 | Published: 2021-05-14

Note: this technology, "A structure-from-motion method suitable for large-scale scenes", was designed and created by 许彪, 孙钰珊, 王庆栋, 韩晓霞, 郝铭辉, 王保前 and 刘玉轩 on 2021-04-13. Abstract: The invention discloses a structure-from-motion (SfM) method suitable for large-scale scenes, which mainly comprises the following steps: calculating an association score between every two overlapping images according to the image matching result; dividing the images into a plurality of image partitions based on the association scores; reconstructing each image partition using an incremental SfM (ISfM) method; and fusing the reconstructed image partitions into a complete scene based on bundle adjustment. While preserving accuracy, the method is robust, efficient, and easy to implement in a distributed, parallel fashion; it applies to scenes of various image scales and achieves high sparse-reconstruction efficiency, particularly on large scenes of tens or even hundreds of thousands of images.

1. A structure-from-motion method suitable for large-scale scenes, characterized by comprising the following steps:

calculating an association score between every two overlapping images according to the image matching result;

dividing the images into a plurality of image partitions based on the association scores;

reconstructing each image partition using an incremental SfM (ISfM) method;

and fusing the reconstructed image partitions into a complete scene based on bundle adjustment.

2. The method of claim 1, wherein, before calculating the association score between two overlapping images according to the image matching result, the method further comprises the following step:

obtaining the matching relation between images using a SIFT matching algorithm.

3. The method as claimed in claim 1, wherein calculating the association score between the overlapping images specifically comprises:

calculating the association score between the overlapping images based on the number of tie points (corresponding points) and their spatial distribution, according to the following formula:

corr = w1 * (n / n_max) + w2 * r

where corr is the association score between the two images;

w1 and w2 are weights;

n is the number of tie points of the image pair;

n_max is the largest tie-point count among all image pairs formed with the left or right image;

r is the ratio of the area of the bounding rectangle of the image points in the valid grid cells to the area of the image frame.

4. The method as claimed in claim 1, wherein dividing the images into a plurality of image partitions based on the association scores comprises the following steps:

performing a primary division of the images to obtain first partitions;

merging the first partitions to obtain initial image partitions;

and removing weakly connected partitions from the initial image partitions to obtain the final image partitions.

5. The method of claim 4, wherein merging the first partitions to obtain the initial image partitions comprises:

presetting a threshold range for the number of images in each initial image partition.

6. The method according to claim 4, wherein removing the weakly connected partitions from the initial image partitions to obtain the final image partitions specifically comprises:

removing the weakly connected partitions from the initial image partitions, and dynamically reassigning the images of the removed partitions to other partitions.

7. The method according to claim 1, wherein reconstructing each image partition using the ISfM method comprises the following steps:

S310, obtaining a matching result;

S320, estimating the relative pose of an initial image pair from the matching points and acquiring initial structure information;

S330, estimating the poses of images to be added using the known scene structure and the 2D-3D correspondences, and selecting the set of images to add to the scene according to criteria and a strategy;

S340, obtaining new sparse structure through forward intersection (triangulation), enlarging the coverage of the scene;

S350, optimizing the scene by bundle adjustment to minimize the sum of reprojection errors;

S360, filtering out potentially unreliable 2D and 3D points, the rejection criteria including the reprojection error and the intersection angle;

and S370, repeating steps S330 to S360 until the sparse reconstruction of the scene is complete or no more images can be added to the scene.

8. The method as claimed in claim 7, wherein selecting the set of images to add to the scene according to the criteria and strategy comprises the following steps:

S331, estimating the poses of the associated images using the 2D-3D correspondences, and recording the number of valid image points;

S332, taking 0.25 times the median of all valid image-point counts from step S331 as a threshold, and discarding pose estimates whose valid-point count is below the threshold, to obtain a candidate set of images to add;

S333, triangulating all possible 3D points from the candidate images and the images already registered in the scene, and counting the number of valid image points of each candidate usable for bundle adjustment;

S334, taking 0.5 times the median of the usable point counts from step S333 as a threshold, and adding the images above the threshold to the scene;

and S335, removing unreliable results.

9. The method as claimed in claim 8, wherein removing the unreliable results comprises checking the following items:

statistical errors: computing the average valid-point error within each image partition and rejecting partitions above a threshold;

camera calibration parameters: rejecting abnormal calibration results of the same physical camera across different image partitions;

the ratio of successfully registered images;

and valid common ground points between image partitions: removing weakly connected image partitions.

10. The method of claim 1, wherein fusing the reconstructed image partitions into a complete scene based on bundle adjustment comprises the following steps:

computing spatial similarity transformation parameters between every two image partitions, and unifying the coordinate system and scale of the image partitions to be merged;

determining a fusion task list;

determining initial values for the image poses and structure coordinates of the partitions being fused;

and fusing the image partitions by bundle adjustment.

Technical Field

The invention relates to the technical field of three-dimensional modeling by structure from motion, in particular to a structure-from-motion method suitable for large-scale scenes.

Background

Structure from Motion (SfM) is the process of simultaneously recovering camera poses and the three-dimensional structure of a scene from a set of overlapping images. It solves the orientation problem for unordered images without GPS/POS data or camera calibration parameters, and is widely used in 4D product generation, real-scene 3D modeling, virtual and augmented reality, robot navigation and positioning, and other fields. Typical SfM comes in two forms: incremental SfM (ISfM) and global SfM (GSfM). The former is highly robust and the most widely used, but suffers from strong dependence on the choice of the initial image pair, low reconstruction efficiency on large scenes, and difficulty of distributed parallel computation.

Disclosure of Invention

(I) Objects of the invention

In view of the above problems, the present invention aims to provide a structure-from-motion scheme suitable for large-scale scenes that is robust, efficient, and easy to implement in a distributed, parallel fashion while preserving accuracy, and that applies to scenes of various image scales, especially large scenes of tens or even hundreds of thousands of images. The invention discloses the following technical scheme.

(II) Technical scheme

As a first aspect of the present invention, the invention discloses a structure-from-motion method suitable for large-scale scenes, comprising the following steps:

calculating an association score between every two overlapping images according to the image matching result;

dividing the images into a plurality of image partitions based on the association scores;

reconstructing each image partition using an incremental SfM (ISfM) method;

and fusing the reconstructed image partitions into a complete scene based on bundle adjustment.

In a possible embodiment, before calculating the association score between two overlapping images according to the image matching result, the method further includes the following step:

obtaining the matching relation between images using a SIFT matching algorithm.

In a possible embodiment, calculating the association score between the overlapping images specifically includes:

calculating the association score between the overlapping images based on the number of tie points (corresponding points) and their spatial distribution, according to the following formula:

corr = w1 * (n / n_max) + w2 * r

where corr is the association score between the two images;

w1 and w2 are weights;

n is the number of tie points of the image pair;

n_max is the largest tie-point count among all image pairs formed with the left or right image;

r is the ratio of the area of the bounding rectangle of the image points in the valid grid cells to the area of the image frame.

In a possible embodiment, dividing the images into a plurality of image partitions based on the association scores specifically includes the following steps:

performing a primary division of the images to obtain first partitions;

merging the first partitions to obtain initial image partitions;

and removing weakly connected partitions from the initial image partitions to obtain the final image partitions.

In a possible implementation, merging the first partitions to obtain the initial image partitions specifically includes:

presetting a threshold range for the number of images in each initial image partition.

In a possible implementation, removing the weakly connected partitions from the initial image partitions to obtain the final image partitions specifically includes:

removing the weakly connected partitions from the initial image partitions, and dynamically reassigning the images of the removed partitions to other partitions.

In a possible embodiment, reconstructing each image partition using the ISfM method specifically includes the following steps:

S310, obtaining a matching result;

S320, estimating the relative pose of an initial image pair from the matching points and acquiring initial structure information;

S330, estimating the poses of images to be added using the known scene structure and the 2D-3D correspondences, and selecting the set of images to add to the scene according to criteria and a strategy;

S340, obtaining new sparse structure through forward intersection (triangulation), enlarging the coverage of the scene;

S350, optimizing the scene by bundle adjustment to minimize the sum of reprojection errors;

S360, filtering out potentially unreliable 2D and 3D points, the rejection criteria including the reprojection error and the intersection angle;

and S370, repeating steps S330 to S360 until the sparse reconstruction of the scene is complete or no more images can be added to the scene.

In a possible implementation, selecting the set of images to add to the scene according to the criteria and strategy specifically includes the following steps:

S331, estimating the poses of the associated images using the 2D-3D correspondences, and recording the number of valid image points;

S332, taking 0.25 times the median of all valid image-point counts from step S331 as a threshold, and discarding pose estimates whose valid-point count is below the threshold, to obtain a candidate set of images to add;

S333, triangulating all possible 3D points from the candidate images and the images already registered in the scene, and counting the number of valid image points of each candidate usable for bundle adjustment;

S334, taking 0.5 times the median of the usable point counts from step S333 as a threshold, and adding the images above the threshold to the scene;

and S335, removing unreliable results.

In a possible implementation, removing the unreliable results specifically includes checking the following items:

statistical errors: computing the average valid-point error within each image partition and rejecting partitions above a threshold;

camera calibration parameters: rejecting abnormal calibration results of the same physical camera across different image partitions;

the ratio of successfully registered images;

and valid common ground points between image partitions: removing weakly connected image partitions.

In a possible implementation, fusing the reconstructed image partitions into a complete scene based on bundle adjustment specifically includes the following steps:

computing spatial similarity transformation parameters between every two image partitions, and unifying the coordinate system and scale of the image partitions to be merged;

determining a fusion task list;

determining initial values for the image poses and structure coordinates of the partitions being fused;

and fusing the image partitions by bundle adjustment.

(III) Advantageous effects

The invention discloses a structure-from-motion method suitable for large-scale scenes, which has the following beneficial effects:

While preserving accuracy, the method applies to scenes of various image scales, is robust, efficient, and easy to implement in a distributed, parallel fashion, and can solve the sparse-reconstruction problem for large scenes of tens or even hundreds of thousands of images. To ensure the completeness of the reconstruction, images in removed partitions are reassigned to other partitions, realizing dynamic partition adjustment. To ensure the robustness of the algorithm and the completeness of the scene reconstruction, when a fusion fails the affected images are handled according to the same dynamic-adjustment strategy. Whether in per-partition ISfM or in partition fusion, each step is an independent process, so each processing step can be distributed to multiple compute nodes, enabling distributed parallel processing and improving the processing efficiency on large scenes.

Drawings

The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining and illustrating the present invention and should not be construed as limiting the scope of the present invention.

FIG. 1 is a flow chart of the structure-from-motion method for large-scale scenes disclosed in the present invention;

FIG. 2 is a schematic illustration of valid and invalid grid cells as disclosed herein;

FIG. 3 is a flowchart of step S200 disclosed herein;

FIG. 4 is a schematic diagram of the partitioning process of image partitions and the corresponding image partition results disclosed in the present invention;

FIG. 5 is a flowchart of step S300 of the present disclosure;

FIG. 6 is a flow chart of an ISfM method for image partition fusion disclosed in the present invention;

FIG. 7 is a flow chart of an ISfM disclosed herein;

FIG. 8 is a flow chart illustrating the selection of an image set to be added to a scene according to the present disclosure;

FIG. 9 is a flowchart of step S335 disclosed herein;

fig. 10 is a flowchart of step S400 of the present disclosure.

Detailed Description

In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention.

It should be noted that: in the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described are some embodiments of the present invention, not all embodiments, and features in embodiments and embodiments in the present application may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.

A first embodiment of the structure-from-motion method for large-scale scenes disclosed in the present invention is described in detail below with reference to figs. 1 to 10. The method is mainly applied to structure from motion for large-scale scenes; it is robust, efficient, and easy to implement in a distributed, parallel fashion, and can solve the sparse-reconstruction problem for large scenes of tens or even hundreds of thousands of images.

As shown in fig. 1, the present embodiment mainly includes the following steps:

s100, calculating a relevancy score between every two overlapped images according to an image matching result;

s200, dividing the image into a plurality of image partitions based on the association degree score;

s300, reconstructing each image partition by using an ISfM method;

and S400, fusing the image partitions which are completely reconstructed into a complete scene based on a beam method.

In this embodiment, each processing step is distributed to multiple compute nodes, realizing distributed parallel processing and improving the processing efficiency on large scenes. To ensure the completeness of the reconstruction, images in removed partitions are reassigned to other partitions, realizing dynamic partition adjustment. To ensure the robustness of the algorithm and the completeness of the scene reconstruction, when a fusion fails the affected images are handled according to the dynamic-adjustment strategy: they are reassigned to other partitions and reconstructed with the ISfM method.

Before calculating the association score between two overlapping images according to the image matching result in step S100, the method further includes the following step:

S000, extracting image features with the SIFT algorithm and obtaining the matching relation between images by comparing the features.

In step S100, the association score between two images is computed from the number of tie points and their spatial distribution. As shown in fig. 2, the number of tie points n of the image pair is counted, and the image frame is divided into a grid, e.g. 15 × 15. Each tie point increments the count of the grid cell it falls into; after all tie points have been assigned, cells whose accumulated count is below a threshold are treated as invalid, and cells above the threshold as valid. The association score corr between the images is then computed according to formula (1):

corr = w1 * (n / n_max) + w2 * r        (1)

where corr is the association score between the two images;

w1 and w2 are weights;

n is the number of tie points of the image pair;

n_max is the largest tie-point count among all image pairs formed with the left or right image;

r is the ratio of the area of the bounding rectangle of the image points in the valid grid cells to the area of the image frame, the bounding rectangle being spanned by the minimum and maximum x and y coordinates of all image points in the valid cells.
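As a concrete illustration, the grid-based scoring above can be sketched in Python. The function name, the default cell-validity threshold, and the weight values w1 and w2 are assumptions for illustration; the weighted-sum form follows formula (1):

```python
import numpy as np

def association_score(pts, frame_w, frame_h, n_pair, n_max,
                      grid=15, cell_min=1, w1=0.5, w2=0.5):
    """Association score of an image pair from tie-point count and spread.

    pts      : (N, 2) tie-point coordinates on one image of the pair
    n_pair   : number of tie points shared by the image pair
    n_max    : largest tie-point count among pairs containing either image
    grid     : grid resolution over the frame (e.g. 15 x 15 cells)
    cell_min : minimum hits for a cell to count as valid (assumed value)
    w1, w2   : weights of the two terms (assumed values)
    """
    pts = np.asarray(pts, dtype=float)
    # Accumulate tie points into grid cells.
    cx = np.clip((pts[:, 0] / frame_w * grid).astype(int), 0, grid - 1)
    cy = np.clip((pts[:, 1] / frame_h * grid).astype(int), 0, grid - 1)
    counts = np.zeros((grid, grid), dtype=int)
    np.add.at(counts, (cy, cx), 1)
    # Keep only the points that fall into valid cells.
    valid = counts[cy, cx] >= cell_min
    if not valid.any():
        return w1 * n_pair / n_max
    vp = pts[valid]
    # Bounding rectangle of the valid points, relative to the frame area.
    area = (vp[:, 0].max() - vp[:, 0].min()) * (vp[:, 1].max() - vp[:, 1].min())
    return w1 * n_pair / n_max + w2 * area / (frame_w * frame_h)
```

A pair whose tie points spread over the whole frame scores close to w1·(n/n_max) + w2, while a pair whose points cluster in one corner is penalized through the small bounding rectangle.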

As shown in figs. 3 and 4, in one embodiment, dividing the images into a plurality of image partitions based on the association scores in step S200 includes the following steps:

S210, performing the primary division of the images to obtain first partitions. Each image in the scene is initially regarded as an independent partition, and each image is combined with the neighbouring image sharing the highest association score with it. Specifically, the highest-scoring partner of every image is recorded, the images are sorted by association score from high to low, and the sorted list is traversed, placing each not-yet-partitioned image and its highest-scoring partner into the same partition. This completes the primary division, with every partition containing at least two images.

S220, merging the first partitions to obtain initial image partitions. The maximum association score between images belonging to two different first partitions is taken as the association score of those two partitions; the pair of partitions with the largest association is merged, so that small partitions are gradually absorbed into larger ones. Repeating this process yields the initial partitioning of the scene.

In one embodiment, to avoid initial partitions containing too many or too few images, a threshold range for the number of images per initial partition is preset, for example 15 to 75. If an initial partition has fewer images than the lower threshold, it is merged into the partition with which it shares the highest association score; if it has more than the upper threshold, the process returns to step S210 or S220, the aggregation is traced back, and the partition is split into two or more initial partitions that satisfy the threshold.

S230, removing weakly connected partitions from the initial image partitions to obtain the final image partitions. The number of common ground points between every two initial partitions is counted, together with the maximum such count between each partition and its neighbourhood. If that maximum is below a threshold for some partition, the partition is regarded as weakly connected and removed, and its images are dynamically reassigned to other initial partitions.
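Assuming the pairwise association scores are available as a dict keyed by frozenset image pairs, the primary division and merging of steps S210 and S220 might be sketched as follows; the exact traversal order and the handling of the size thresholds are simplifications, and the splitting of oversized partitions is omitted:

```python
def partition_images(scores, min_size=15, max_size=75):
    """Greedy image-partitioning sketch.

    scores : dict mapping frozenset({i, j}) -> association score
    Returns a list of partitions (sets of image ids).
    """
    # Primary division (S210): record each image's highest-scoring partner.
    best = {}
    for pair, s in scores.items():
        i, j = tuple(pair)
        for a, b in ((i, j), (j, i)):
            if a not in best or s > best[a][1]:
                best[a] = (b, s)
    parts, assigned = [], {}
    for img, (mate, _) in sorted(best.items(), key=lambda kv: -kv[1][1]):
        if img in assigned:
            continue
        if mate in assigned:
            parts[assigned[mate]].add(img)
            assigned[img] = assigned[mate]
        else:
            parts.append({img, mate})
            assigned[img] = assigned[mate] = len(parts) - 1

    def link(a, b):
        # Maximum association score between images of two partitions.
        return max((s for pair, s in scores.items()
                    if len(pair & a) == 1 and len(pair & b) == 1), default=None)

    # Merging (S220): absorb undersized partitions into their best neighbour,
    # without exceeding the preset maximum partition size.
    while True:
        best_pair, best_s = None, -1.0
        for x in range(len(parts)):
            for y in range(x + 1, len(parts)):
                if len(parts[x]) >= min_size and len(parts[y]) >= min_size:
                    continue
                if len(parts[x]) + len(parts[y]) > max_size:
                    continue
                s = link(parts[x], parts[y])
                if s is not None and s > best_s:
                    best_pair, best_s = (x, y), s
        if best_pair is None:
            return parts
        x, y = best_pair
        parts[x] |= parts[y]
        del parts[y]
```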

As shown in fig. 5-7, in one embodiment, the reconstructing each image partition by using the ISfM method in step S300 specifically includes the following steps:

s310, obtaining a matching result;

the image matching algorithm is used for obtaining matching points, SIFT feature descriptors are usually used for extracting image features, an image retrieval technology is used for identifying matched image pairs, and the matching result is obtained by comparing Euclidean distances of the features.

S320, estimating the relative pose of the image pair by using the matching points and acquiring initial structure information;

Two images are selected as the initial image pair; their relative pose is estimated from the matching points and the initial structure information is acquired. Typically, the essential matrix E is estimated with the commonly used five-point method and then decomposed to obtain the relative pose, after which bundle adjustment is applied to obtain more accurate image poses and structure.

S330, estimating the poses of images to be added using the known scene structure and the 2D-3D correspondences, and selecting the set of images to add to the scene according to criteria and a strategy.

As shown in fig. 8, selecting the set of images to add to the scene according to the criteria and strategy of this embodiment includes the following steps:

s331, estimating the pose of the associated image by using the 2D-3D corresponding relation, and recording the number of effective image points;

s332, selecting 0.25 time of the median of all effective image points in the step 331 as a threshold, and eliminating an estimation result of which the effective image points are smaller than the threshold to obtain an alternative image set to be increased;

s333, based on the images arranged in the alternative image set and the scene, all possible 3D points are obtained through triangulation, and the number of effective image points of the alternative images which can be used for optimization of a beam method is counted;

s334, selecting 0.5 times of the median of the number of available image points in the step S333 as a threshold, and adding the image larger than the threshold into a scene;

and S335, removing unreliable results.
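The two-stage median thresholding of steps S331 to S334 might look as follows in Python; the dict-based interface and the handling of missing counts are assumptions:

```python
import statistics

def select_images_to_add(pose_valid_points, ba_valid_points=None):
    """Select candidate images with the 0.25x / 0.5x median thresholds.

    pose_valid_points : image -> number of valid points after pose estimation
    ba_valid_points   : image -> points usable for bundle adjustment after
                        triangulation (assumed precomputed)
    """
    # S332: drop images below 0.25 x the median valid-point count.
    med = statistics.median(pose_valid_points.values())
    candidates = {im for im, n in pose_valid_points.items() if n >= 0.25 * med}
    if ba_valid_points is None:
        return candidates
    # S334: keep candidates above 0.5 x the median usable-point count.
    counts = {im: ba_valid_points.get(im, 0) for im in candidates}
    med2 = statistics.median(counts.values())
    return {im for im, n in counts.items() if n > 0.5 * med2}
```

Median-relative thresholds adapt automatically to the density of tie points in each partition, which is why fixed absolute counts are avoided.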

Considering uncertain factors such as image matching accuracy, terrain relief, flight pattern, and camera calibration characteristics, and to ensure the correctness of the complete reconstruction, the mutually independent partition reconstruction results must be further checked to eliminate possibly unstable items.

As shown in fig. 9, the removing the unreliable result in step S335 specifically includes the following steps:

s3351, counting errors, calculating the average value of the effective point errors in the image partitions, and eliminating the image partitions larger than a threshold value;

The average point error of the valid points in each image partition is computed; 2 times the average over all partitions is taken as the threshold, and partitions exceeding it are rejected.
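For instance, the statistical-error check can be sketched as follows; the dict interface is an assumption, and k = 2 follows the description:

```python
def reject_partitions_by_error(part_mean_err, k=2.0):
    """Keep image partitions whose mean valid-point error does not exceed
    k times the mean over all partitions (k = 2 per the description).

    part_mean_err : partition id -> mean valid-point error of that partition
    """
    overall = sum(part_mean_err.values()) / len(part_mean_err)
    return {p: e for p, e in part_mean_err.items() if e <= k * overall}
```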

S3352, camera calibration parameters: the calibration parameters are checked, and abnormal calibration results of the same physical camera across different partitions are rejected;

The abnormal calibration results concern the focal length, principal point, and distortion parameters.

S3353, ratio of successfully registered images: if the ratio of successfully registered images to the total number in a partition is below a threshold (e.g., 0.5), the reconstruction result is considered unreliable and the partition is rejected;

S3354, removing weakly connected image partitions based on the valid common ground points between partitions.

The number of valid common ground points between every two image partitions is counted, together with the maximum such count between each partition and its neighbourhood; if that maximum is below a threshold for some partition, the partition is regarded as weakly connected and rejected.

To ensure the completeness of the reconstruction, the images of a removed partition are reassigned to other image partitions, realizing dynamic adjustment of the image partitions; reconstruction is then completed with the ISfM method, which takes the dynamically adjusted partition as input and resumes execution from the space-resection step.

S340, new sparse structure is obtained through forward intersection (triangulation), enlarging the coverage of the scene.

S350, the scene is optimized by bundle adjustment so that the sum of reprojection errors is minimized.

S360, potentially unreliable 2D and 3D points are filtered out; the rejection criteria include the reprojection error and the intersection angle.

The bundle-adjustment and filtering steps are executed cyclically several times: unreliable points are filtered out after each bundle adjustment, and bundle adjustment is then run again on the remaining points, followed by another round of filtering. Repeating this several times removes the influence of unstable points on the optimization and improves the accuracy available to the subsequent incremental steps.
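With bundle adjustment abstracted as a callback, the alternating optimize/filter loop of steps S350 and S360 can be sketched as follows; the thresholds, field names, and round count are assumptions:

```python
def filter_points(points, max_err=2.0, min_angle=2.0):
    """Keep points passing both rejection criteria of step S360:
    reprojection error and intersection angle (threshold values assumed)."""
    return [p for p in points
            if p["reproj_err"] <= max_err and p["angle_deg"] >= min_angle]

def optimize_scene(points, refine, rounds=3):
    """Alternate bundle adjustment (abstracted as `refine`) with filtering."""
    for _ in range(rounds):
        points = refine(points)      # stands in for one bundle-adjustment pass
        points = filter_points(points)
    return points
```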

S370, steps S330 to S360 are repeated until the sparse reconstruction of the scene is complete or no more images can be added to the scene.

As shown in fig. 10, in one embodiment, step S400 fuses the reconstructed image partitions into a complete scene based on bundle adjustment; the goal of fusion is to combine the image partitions into a complete scene reconstruction using their common ground points. It specifically includes the following steps:

s410, calculating space similarity transformation parameters between every two image partitions, and unifying a coordinate system and a scale of the image partitions to be merged;

The 7-parameter three-dimensional spatial similarity transformation between every two partitions is computed from their common ground points, unifying the coordinate system and scale of the partitions to be merged.
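The 7-parameter transform (one scale, three rotation, and three translation parameters) can be estimated in closed form from common ground points with the Umeyama method; the sketch below is one standard way to do this, not necessarily the estimator used by the invention:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s * R @ src + t,
    via the closed-form Umeyama solution on corresponding 3D points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance of the two centred point sets.
    H = B.T @ A / len(src)
    U, S, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (A ** 2).sum() / len(src)
    s = (S * np.diag(D)).sum() / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In practice the common ground points contain outliers, so such an estimator would typically be wrapped in RANSAC before the partitions are actually merged.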

S420, determining a fusion task list;

The valid points shared between each partition and its neighbourhood are accumulated as an association index, and the partitions are sorted by this index from small to large; partitions with a small association index are fused preferentially. First, for the partition with the smallest association index, the neighbouring partition sharing the largest number of valid points with it is selected as the reference and the two are fused, both being marked as processed. Then, for each remaining unmarked partition, the neighbour sharing the largest number of valid points with it is found: if that neighbour is already marked, the partition is added to the neighbour's fusion list; otherwise the two partitions are fused and marked as processed. This process is repeated until all partitions are marked, yielding the fusion task list.
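The greedy scheduling just described might be sketched as follows; the data layout (per-partition association indices and pairwise shared-point counts) is an assumed interface:

```python
def build_fusion_tasks(assoc_index, shared):
    """Build a fusion task list: reference partition -> partitions fused into it.

    assoc_index : partition -> accumulated valid points with its neighbourhood
    shared      : frozenset({a, b}) -> number of valid points shared by a and b
    """
    tasks, marked = {}, set()
    # Process partitions from the smallest association index upwards.
    for p in sorted(assoc_index, key=assoc_index.get):
        if p in marked:
            continue
        # Neighbour sharing the largest number of valid points with p.
        neigh = [(s, next(iter(pair - {p}))) for pair, s in shared.items()
                 if p in pair]
        if not neigh:
            marked.add(p)
            continue
        _, q = max(neigh)
        if q in marked:
            # q already belongs to a task: append p to that task's list.
            ref = next(r for r, lst in tasks.items() if q == r or q in lst)
            tasks[ref].append(p)
        else:
            tasks[p] = [q]
            marked.add(q)
        marked.add(p)
    return tasks
```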

S430, determining initial values for the image poses and structure coordinates of the partitions being fused;

Using the pairwise transformation parameters computed in S410: if a fusion task involves only two partitions, either one is taken as the reference and the other is transformed into it. When three or more partitions are to be fused together, the partition with the largest association index is selected as the reference, and any partition having valid transformation parameters with respect to the reference is transformed into it directly. When a partition cannot be transformed into the reference directly, the rigid-body (transitive) property of the spatial similarity transformation is exploited: the pairwise transformation parameters are chained through intermediate partitions until the partition is transformed into the reference.

S440, fusing the image partitions by bundle adjustment.

The principle of fusing the "scene" first and then the "cameras" is followed. During scene fusion, the camera calibration parameters of each image partition remain mutually independent, and the image poses and structure coordinates are optimized by bundle adjustment. During camera fusion, the multiple sets of calibration parameters of the same physical camera are merged into a single set, and bundle adjustment is run again so that the calibration parameters of each camera are unique. This constitutes one fusion cycle; the process is repeated until all partitions are fused into a complete scene.

To ensure the robustness of the algorithm and the completeness of the scene reconstruction, when a fusion fails the affected images are handled according to the dynamic partition-adjustment strategy: they are reassigned to other image partitions and reconstructed with the ISfM method.

In this embodiment, whether in per-partition ISfM or in partition fusion, each step is an independent process, so each processing step can be distributed to multiple compute nodes, realizing distributed parallel processing and improving the processing efficiency on large scenes. The basic parallel scheduling scheme consists of a host side and a compute-node side. The host is responsible for data organization, scene partitioning, determination of the fusion task list, division of computing tasks, dynamic partition adjustment, and collection of results. The compute nodes receive the tasks assigned by the host, including per-partition ISfM and inter-partition fusion, and return the computed results to the host.

The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
