System and method for determining vehicle position

Document No.: 1409480; Publication date: 2020-03-06

Abstract: This technology, "System and method for determining vehicle position," was created by A. Sharma on 2018-06-28. A method is described. The method includes obtaining a plurality of images. The method also includes detecting an object in the plurality of images. The method further includes determining a plurality of feature points on the object. The feature points have an established relationship with each other based on object type. The method also includes determining a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

1. A method, comprising:

obtaining a plurality of images;

detecting an object in the plurality of images;

determining a plurality of feature points on the object, wherein the feature points have an established relationship to each other based on object type; and

determining a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

2. The method of claim 1, wherein the object includes a lane marker or a traffic sign.

3. The method of claim 1, wherein determining the plurality of feature points comprises determining three or more end corners of a detected lane marker.

4. The method of claim 1, wherein determining the plurality of feature points comprises determining three or more vertices of a detected traffic sign.

5. The method of claim 1, wherein determining the motion trajectory comprises determining scale information using the plurality of feature points.

6. The method of claim 1, wherein determining the motion trajectory and the camera pose comprises comparing a plurality of feature points determined from a first image with corresponding feature points determined from a second image.

7. The method of claim 1, further comprising:

combining the detected object measurements with inertial sensor measurements; and

determining the motion trajectory based on the combined measurements.

8. The method of claim 1, further comprising:

combining satellite navigation receiver measurements with measurements of the detected object; and

determining a pose and a position in a global frame based on the combined measurements.

9. The method of claim 8, further comprising:

combining vehicle sensor measurements from one or more vehicle sensors with the satellite navigation receiver measurements and measurements of the detected object; and

determining the pose and position in the global frame based on the combined measurements.

10. The method of claim 1, further comprising:

combining the detected object measurement with a speedometer measurement; and

determining the motion trajectory based on the combined measurements.

11. The method of claim 1, wherein the method is performed in a vehicle.

12. The method of claim 11, further comprising transmitting the camera pose to a mapping service.

13. The method of claim 1, wherein the method is performed by a server.

14. An electronic device, comprising:

a memory; and

a processor in communication with the memory, the processor configured to:

obtain a plurality of images;

detect an object in the plurality of images;

determine a plurality of feature points on the object, wherein the feature points have an established relationship to each other based on object type; and

determine a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

15. The electronic device of claim 14, wherein the object includes a lane marker or a traffic sign.

16. The electronic device of claim 14, wherein the processor configured to determine the plurality of feature points comprises the processor configured to determine three or more end corners of a detected lane marker.

17. The electronic device of claim 14, wherein the processor configured to determine the plurality of feature points comprises the processor configured to determine three or more vertices of a detected traffic sign.

18. The electronic device of claim 14, wherein the processor configured to determine the plurality of feature points comprises the processor configured to determine scale information using the plurality of feature points.

19. The electronic device of claim 14, wherein the processor configured to determine the motion trajectory and the camera pose comprises the processor configured to compare a plurality of feature points determined from a first image with corresponding feature points determined from a second image.

20. A non-transitory computer-readable medium storing computer-executable code, comprising:

code for causing an electronic device to obtain a plurality of images;

code for causing the electronic device to detect an object in the plurality of images;

code for causing the electronic device to determine a plurality of feature points on the object, wherein the feature points have an established relationship to each other based on object type; and

code for causing the electronic device to determine a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

21. The computer-readable medium of claim 20, wherein the object includes a lane marker or a traffic sign.

22. The computer-readable medium of claim 20, wherein the code for causing the electronic device to determine the plurality of feature points comprises code for causing the electronic device to determine three or more end corners of a detected lane marker.

23. The computer-readable medium of claim 20, wherein the code for causing the electronic device to determine the plurality of feature points comprises code for causing the electronic device to determine three or more vertices of a detected traffic sign.

24. The computer-readable medium of claim 20, wherein the code for causing the electronic device to determine the motion trajectory comprises code for causing the electronic device to determine scale information using the plurality of feature points.

25. The computer-readable medium of claim 20, wherein code for causing the electronic device to determine the motion trajectory and the camera pose comprises code for causing the electronic device to compare a plurality of feature points determined from a first image with corresponding feature points determined from a second image.

26. An apparatus, comprising:

means for obtaining a plurality of images;

means for detecting an object in the plurality of images;

means for determining a plurality of feature points on the object, wherein the feature points have an established relationship to each other based on object type; and

means for determining a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

27. The apparatus of claim 26, wherein the object includes a lane marker or a traffic sign.

28. The apparatus of claim 26, wherein the means for determining the plurality of feature points comprises means for determining three or more end corners of a detected lane marker.

29. The apparatus of claim 26, wherein the means for determining the plurality of feature points comprises means for determining three or more vertices of a detected traffic sign.

30. The apparatus of claim 26, wherein the means for determining the plurality of feature points comprises means for determining scale information using the plurality of feature points.

Technical Field

The present disclosure relates generally to electronic devices. More particularly, the present disclosure relates to systems and methods for determining vehicle position.

Background

Electronic devices, such as cellular telephones, wireless modems, computers, digital music players, Global Positioning System (GPS) units, Personal Digital Assistants (PDAs), gaming devices, etc., have become a part of everyday life. Small computing devices are now placed in everything from vehicles to housing locks. In the past few years, the complexity of electronic devices has increased dramatically. For example, many electronic devices have one or more processors that help control the device, as well as a number of digital circuits that support the processor and other portions of the device.

Some electronic devices (e.g., vehicles) may be equipped with advanced driver assistance systems. These systems may be used in autonomous vehicles. One useful technique in these systems is visual inertial odometry (VIO). However, daily vehicle driving may involve a semi-controlled environment with a variety of scenarios that are often challenging for computer-based vehicle automation, particularly for VIO. Perception-based vehicle positioning and localization may help assist visual inertial odometry.

Disclosure of Invention

A method is described. The method includes obtaining a plurality of images. The method also includes detecting an object in the plurality of images. The method further includes determining a plurality of feature points on the object. The feature points have an established relationship with each other based on object type. The method also includes determining a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points. The object may comprise a lane marker or a traffic sign.

Determining the plurality of feature points may include determining three or more end corners of the detected lane marker. Determining the plurality of feature points may include determining three or more vertices of the detected traffic sign.

Determining the motion trajectory may include determining scale information using a plurality of feature points. Determining the motion trajectory and the camera pose may include comparing a plurality of feature points determined from the first image with corresponding feature points determined from the second image.

The method may further include combining the measurements of the detected object with inertial sensor measurements. The motion trajectory may be determined based on the combined measurements.

The method may also include combining satellite navigation receiver measurements with measurements of the detected object. The pose and position in the global frame may be determined based on the combined measurements. Vehicle sensor measurements from one or more vehicle sensors may be combined with satellite navigation receiver measurements and measurements of detected objects. The pose and position in the global frame may be determined based on the combined measurements.

The method may further comprise combining the measurement of the detected object with a speedometer measurement. The motion trajectory may be determined based on the combined measurements.

The method may be performed on a vehicle. The method may also include transmitting the camera pose to a mapping service. In another embodiment, the method may be performed by a server.

An electronic device is also described. The electronic device includes a memory and a processor in communication with the memory. The processor is configured to obtain a plurality of images. The processor is further configured to detect an object in the plurality of images. The processor is further configured to determine a plurality of feature points on the object. The feature points have an established relationship with each other based on object type. The processor is further configured to determine a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

A non-transitory computer-readable medium storing computer-executable code is also described. The computer-readable medium includes code for causing an electronic device to obtain a plurality of images. The computer-readable medium also includes code for causing the electronic device to detect an object in the plurality of images. The computer-readable medium further includes code for causing the electronic device to determine a plurality of feature points on the object. The feature points have an established relationship with each other based on object type. The computer-readable medium further includes code for causing the electronic device to determine a motion trajectory and a camera pose relative to a ground plane using the plurality of feature points.

An apparatus is also described. The apparatus includes means for obtaining a plurality of images. The apparatus also includes means for detecting an object in the plurality of images. The apparatus further includes means for determining a plurality of feature points on the object. The feature points have an established relationship with each other based on object type. The apparatus also includes means for determining a motion trajectory and a camera pose with respect to a ground plane using the plurality of feature points.

Drawings

FIG. 1 is a block diagram illustrating one example of an electronic device in which systems and methods for determining a vehicle location may be implemented;

FIG. 2 is a flow chart illustrating one configuration of a method for determining vehicle position;

FIG. 3 is a block diagram illustrating another example of an electronic device in which systems and methods for determining vehicle location may be implemented;

FIG. 4 illustrates an example of vehicle localization using lane marker detection;

FIG. 5 is a flow chart illustrating one configuration of a method for determining vehicle position based on lane marker detection;

FIG. 6 is a flowchart showing a configuration of a method for determining a vehicle position based on traffic sign detection;

FIG. 7 is a flow chart illustrating another configuration of a method for determining vehicle position;

FIG. 8 is a flow chart showing yet another configuration of a method for determining vehicle position; and

fig. 9 illustrates certain components that may be included within an electronic device configured to implement various configurations of the systems and methods disclosed herein.

Detailed Description

Various configurations are now described with reference to the figures, where like reference numbers indicate functionally similar elements. The systems and methods generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of several configurations, as represented in the figures, is not intended to limit the scope of what is claimed, but is merely representative of systems and methods.

FIG. 1 is a block diagram illustrating one example of an electronic device 102 in which systems and methods for determining a vehicle location may be implemented. Examples of the electronic device 102 include cameras, video cameras, digital cameras, cellular phones, smart phones, computers (e.g., desktop computers, laptop computers, etc.), tablet devices, media players, televisions, vehicles, automobiles, personal cameras, wearable cameras, virtual reality devices (e.g., headsets), augmented reality devices (e.g., headsets), mixed reality devices (e.g., headsets), action cameras, surveillance cameras, mounted cameras, connected cameras, robots, aircraft, drones, Unmanned Aerial Vehicles (UAVs), smart appliances, healthcare equipment, game consoles, Personal Digital Assistants (PDAs), set top boxes, appliances, and so forth. For example, the electronic device 102 may be a vehicle used in an Advanced Driver Assistance System (ADAS).

The electronic device 102 may be configured with a positioning module 118. In one configuration, the positioning module 118 may include or may implement a visual inertial odometry (VIO) system. The VIO system may be implemented as a positioning engine that combines camera inputs with inertial information provided by inertial sensors 108 (e.g., an Inertial Measurement Unit (IMU) system) to determine the position of the electronic device 102. The inertial sensors 108 may include one or more accelerometers and/or one or more gyroscopes, with which the inertial sensors 108 generate inertial measurements.

In one aspect, visual inertial odometry may be used to localize a vehicle. The VIO may be part of an autonomous driving system. One purpose of the VIO system is to use information from nearby features, such as corners of buildings or trees, to locate the vehicle. The VIO system may provide information about where the electronic device 102 is located relative to its environment. It should be noted that the VIO may provide a relative (local) position. For example, the VIO may provide the location of the electronic device 102 relative to a previous location.

Current implementations of visual inertial odometry for vehicle localization in automotive applications rely on inertial sensors 108 and a monocular camera 106. Using the images captured by the camera 106, the VIO system may detect and track key points (e.g., sharp corners) in the images. However, if the camera 106 is the only sensor used, it is not possible to know the size or depth of a feature, because the camera projects the three-dimensional (3D) world onto a two-dimensional (2D) image. By combining inertial sensor measurements with moving camera frames, it is possible to estimate the scale of the keypoint features. The scale may then be used to estimate a new pose of the camera 106 relative to its previous pose, thereby estimating the vehicle's ego-motion. In one approach, the VIO system may integrate inertial measurements to obtain the position of the electronic device 102.
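
As an illustrative aside, the sketch below shows what "integrating inertial measurements to obtain position" can look like in its simplest form; it is a hedged example with assumed variable names, not the patent's implementation.

```python
# Minimal dead-reckoning sketch: double-integrate accelerometer samples.
# Assumed names/values; readings are treated as already gravity-compensated
# and expressed in a common frame for simplicity.
import numpy as np

def integrate_acceleration(accel_samples, dt, bias=np.zeros(3)):
    """Return the integrated trajectory for (N, 3) acceleration samples."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    trajectory = []
    for a in accel_samples:
        a_corrected = a - bias          # subtract the current bias estimate
        velocity = velocity + a_corrected * dt
        position = position + velocity * dt
        trajectory.append(position.copy())
    return np.array(trajectory)

# With zero acceleration the integrated position stays at the origin; a small
# uncorrected bias would instead be integrated into spurious motion (drift).
print(integrate_acceleration(np.zeros((100, 3)), dt=0.01)[-1])
```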

Daily vehicle driving may occur in semi-controlled environments with a variety of scenarios that are often challenging for computer-based vehicle automation, particularly VIO. Inertial measurements may not be sufficient to determine the position of the electronic device 102. One problem with inertial measurements in VIO applications is scale drift. When the vehicle is moving at a constant speed, the biases of the inertial sensors 108 may not be observable, which may lead to scale drift. As used herein, a "bias" (also referred to as a sensor bias) is the difference between the ideal output and the actual output provided by a sensor 108 (e.g., a gyroscope or accelerometer).

In a monocular VIO system (e.g., using monocular camera 106 to provide image information), the estimated VIO trajectory may be used to calculate the depth of the visual feature. The visual features may not provide any correction for the scale drift.

In an example, when a vehicle (e.g., electronic device 102) is moving at a constant speed, particularly in a highway scenario, the acceleration measured by the accelerometer becomes zero. In this case, there is no observable signal from which to estimate the accelerometer bias, and the scale becomes ambiguous. Thus, in a monocular camera system, the VIO system can only measure the positions of observed features up to a common unknown scale. For example, if a monocular camera were to view the same feature point from three different vantage points, the location of the feature could be triangulated in the real world, but only up to that scale. The positions of the feature points may be moved arbitrarily along the depth direction. Therefore, the precise location of a feature point cannot be observed without additional data.
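
As an illustrative note (not part of the patent text), the scale ambiguity described above can be stated compactly for a pinhole camera: scaling both the scene and the camera baseline leaves every image measurement unchanged.

```latex
% Two views of a 3D point X related by rotation R and translation t:
%   x_1 \simeq K X, \qquad x_2 \simeq K (R X + t).
% Scaling the point and the baseline by any \lambda > 0 gives the same pixels:
\[
K\bigl(R(\lambda X) + \lambda t\bigr) = \lambda\,K(RX + t) \simeq K(RX + t),
\]
% so monocular views alone fix structure and motion only up to the factor
% \lambda; an absolute length (known object size, IMU excitation, or vehicle
% speed) is needed to resolve it.
```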

It should be noted that an advantage of using inertial sensors 108 is that the scale becomes observable when there is sufficient acceleration or excitation in the accelerometer or gyroscope. The problem with vehicle motion is that when the vehicle is traveling in a very straight line at an approximately constant speed, the scale cannot be observed using inertial measurements alone.

Another problem with inertial measurements in VIO applications is that IMU measurements tend to become noisy at constant speed. The six degrees of freedom of the camera pose 122 are not observable unless the IMU measurements are constrained using visual features. In the case of an automotive VIO, the camera pose 122 may be the pose of the vehicle itself.

In some methods, vehicle localization may be performed using a map. For example, the vehicle may use GPS coordinates and a pre-configured digital map of features in the environment to determine the location of the vehicle. Features within the environment may be detected and compared to the map. For example, a building, bridge, or other structure may be detected in the image and associated with the digital map to determine the position of the vehicle. However, a problem with this map-based approach is that features change over time, and the map may not accurately reflect the environment. Thus, the method relies on an up-to-date map, which may not always be available. Furthermore, even a very detailed map may not identify all features in the environment that are usable for vehicle localization.

For the VIO system to work properly, the keypoint features should be stationary. However, in environments where stationary features are not readily observable, the lack of features may cause significant drift in scale and may disrupt the VIO algorithm. For example, when driving through a desert landscape, the camera 106 may observe few features.

When applied to automotive applications, VIO systems may be made more robust if they employ one or more complementary mechanisms that are typically available while driving. In one approach, one of the complementary mechanisms that may help determine the scale is vehicle speed. The vehicle speed may come from a speedometer (e.g., a wheel speed encoder) or from GPS.
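
As a hedged illustration of this idea (the names and values below are assumptions, not the patent's implementation), vehicle speed can convert an up-to-scale visual translation into metres:

```python
# Recover metric scale for a frame-to-frame visual translation using speed.
import numpy as np

def metric_scale_from_speed(t_up_to_scale, speed_mps, dt):
    """t_up_to_scale: visual translation estimate, correct only up to scale.
    speed_mps: speedometer / wheel-encoder speed in metres per second.
    dt: time between the two camera frames in seconds."""
    travelled_m = speed_mps * dt                  # distance actually travelled
    norm = np.linalg.norm(t_up_to_scale)
    return travelled_m / norm if norm > 0 else 0.0

t_visual = np.array([0.0, 0.0, 1.0])              # unit-norm forward estimate
scale = metric_scale_from_speed(t_visual, speed_mps=20.0, dt=0.05)
print(scale * t_visual)                           # about 1 m of forward motion
```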

In another approach, the electronic device 102 may use information from a perception algorithm to help determine the vehicle location. In an automotive environment, the electronic device 102 may be or may be incorporated into a vehicle. These perception algorithms may include lane marking detectors 114 and/or traffic sign detectors 116.

The electronic device 102 may include one or more components or elements. One or more of the components or elements may be implemented in hardware (e.g., circuitry) or a combination of hardware and software and/or firmware (e.g., processor 104 with instructions).

In some configurations, the electronic device 102 may include a processor 104, a memory 112, one or more cameras 106, one or more inertial sensors 108, and/or one or more satellite navigation receivers 110 (e.g., GPS receivers, Global Navigation Satellite Systems (GNSS), etc.). The processor 104 may be coupled to (e.g., in electronic communication with) the memory 112, the camera 106, the inertial sensors 108, and/or the satellite navigation receiver 110.

The processor 104 may be a general purpose single-or multi-chip microprocessor (e.g., an ARM), a special purpose microprocessor (e.g., a Digital Signal Processor (DSP)), a microcontroller, a programmable gate array, or the like. The processor 104 may be referred to as a Central Processing Unit (CPU). Although only a single processor 104 is illustrated in the electronic device 102, in alternative configurations, a combination of processors 104 may be used (e.g., an Image Signal Processor (ISP) and an application processor, an ARM and DSP, etc.). The processor 104 may be configured to implement one or more of the methods disclosed herein. For example, the processor 104 may be configured to determine a motion trajectory 120 and a camera pose 122 of the electronic device 102.

In some configurations, the electronic device 102 may perform one or more of the functions, processes, methods, steps, etc., described in connection with one or more of fig. 2-9. Additionally or alternatively, the electronic device 102 may include one or more of the structures described in connection with one or more of fig. 2-9.

The electronic device 102 may obtain one or more images (e.g., digital images, image frames, video, etc.) and other sensor data. For example, one or more cameras 106 may capture multiple images. In one embodiment, the camera 106 may be a forward facing monocular camera mounted on a vehicle. Additionally or alternatively, the electronic device 102 may request and/or receive one or more images from another device, such as one or more external cameras coupled to the electronic device 102, a web server, a traffic camera, a drop camera (dropcam), a vehicle camera, a web camera, and so forth.

The electronic device 102 may detect lane markings or traffic signs. The electronic device 102 may be configured with a lane marking detector 114, a traffic sign detector 116, or both. An autonomous vehicle may be equipped with robust perception algorithms: a lane marker detector 114 and a traffic sign detector 116. In one approach, the lane marker detector 114 and the traffic sign detector 116 may be implemented as Computer Vision (CV)-based algorithms. In another approach, the lane marker detector 114 and the traffic sign detector 116 may be implemented as deep neural network algorithms. It should be noted that the lane marker detector 114 and the traffic sign detector 116 may be implemented using algorithms other than those listed herein.

The lane marker detector 114 and the traffic sign detector 116 may be configured to identify a particular set of features observed in the road and driving environment. The lane marker detector 114 and the traffic sign detector 116 are more robust than ordinary keypoint trackers. For example, in a typical VIO system, the keypoint tracker may identify any stationary object with sharp contrast (e.g., sharp corners). However, these key points may not be optimal for use in vehicle positioning. For example, the keypoint tracker may detect leaves on trees or buildings that are far away.

On the other hand, the lane marker detector 114 and the traffic sign detector 116 are configured to detect road features that may be reliably used to perform vehicle localization. The lane marker detector 114 is configured to detect one or more lane markers or lane marker segments within the image. Lane markings are devices or materials on the surface of the roadway that convey information. Examples of lane markings include painted traffic lanes, painted crosswalks, painted parking spaces, handicapped parking spaces, reflective markers, Botts' dots, and rumble strips.

The lane markers have a known configuration and relationship to the electronic device 102. For example, the size of the lane markings may be known or determined by the lane marking detector 114. Furthermore, the fact that the lane markers are located on the road may be utilized to determine the position of the vehicle.

The traffic sign detector 116 may determine one or more traffic signs in the image. Using CV or deep neural network algorithms, the traffic sign detector 116 may identify that the objects in the image are traffic signs. The traffic sign detector 116 may also determine the type of traffic sign observed in the scene. For example, the traffic sign detector 116 may determine that the traffic sign is a speed limit sign, a highway exit sign, a road hazard warning sign, or the like.

It should be noted that the lane marking detector 114 and the traffic sign detector 116 are designed to recognize a particular set of features observed on the road and in the driving environment. Features such as lane markers and traffic signs have a known configuration. For example, lane markings and traffic signs have certain shapes (e.g., rectangular, circular, octagonal, etc.) and sizes.

The lane marker detector 114 may be configured to detect lane markers and other metrics on the roadway. These metrics may include the lateral distance of the vehicle from the lane markings. The lane marker detector 114 may also detect the direction of the vehicle relative to the lane markers. The lane marker detector 114 may determine a quality metric for the confidence of the lane marker detection/tracking result. The lane marker detector 114 may also detect the corners of lane markers in the image plane and the locations of possible raised pavement markers (e.g., Botts' dots).

The traffic sign detector 116 may be configured to detect traffic signs and other metrics on the roads. These metrics may include detecting vertices (e.g., corners) of traffic signs in the image plane. The traffic sign detector 116 may also determine a quality metric for the confidence of the traffic sign detection/tracking results.

The processor 104 may determine a plurality of feature points on a lane marker or traffic sign. As used herein, the term feature points refers to various locations on the same feature (e.g., lane marker or traffic sign). The processor 104 may determine the pixel location of a given feature point in the digital image.

The feature points have an established relationship with each other based on the object type. Because the feature points are from the same feature (e.g., lane marker or traffic sign), the feature points have a known association. In other words, once the processor 104 detects a certain feature, the relationship of the points on the feature may be known. One object type may be a lane marker. Another object type may be a traffic sign.

In the case of lane markings, the lane marker detector 114 may determine three or more end corners of a detected lane marker. For example, for a rectangular lane marker, the lane marker detector 114 may determine at least three or all four corner points of the lane marker. Since the feature is known to be a lane marker, these three or more corner points have an established relationship of belonging to the same lane marker. Furthermore, these corner points have an established relationship defining the extent of the lane marker. In other words, these corner points define the boundaries of the lane marker. Additionally, the width and/or length of the lane marker may be determined based on the corner points.
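
As a hedged sketch of this point (corner ordering and values are assumptions for illustration), the marker's length and width follow directly from its corner points once they are expressed in ground-plane coordinates:

```python
# Length and width of a rectangular lane marker from its four ground-plane
# corners (metres). The corners are assumed to be ordered around the rectangle.
import numpy as np

def marker_dimensions(corners_ground):
    c = np.asarray(corners_ground, dtype=float)
    side_a = np.linalg.norm(c[1] - c[0])
    side_b = np.linalg.norm(c[2] - c[1])
    return float(max(side_a, side_b)), float(min(side_a, side_b))

corners = [[0.0, 0.0], [0.15, 0.0], [0.15, 3.0], [0.0, 3.0]]  # 3 m x 15 cm stripe
print(marker_dimensions(corners))  # -> (3.0, 0.15)
```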

In the case of a traffic sign, the traffic sign detector 116 may determine three or more vertices of the detected traffic sign. For example, for a rectangular sign, the traffic sign detector 116 may determine at least three or all four vertices of the sign. For a circular sign, the traffic sign detector 116 may determine three points on the sign. Because the feature is determined to be a traffic sign, the three or more vertices have established relationships that belong to the same traffic sign. The vertices may define the boundaries of the traffic sign.

Because the lane marker detector 114 and the traffic sign detector 116 are, by design, more robust in detecting and tracking scene features, they may be used to provide more reliable scale information than corner-detector-based keypoint trackers, which are more heuristic-dependent and less accurate. For example, a typical VIO keypoint tracker may identify any feature on the road that has strong contrast. However, the keypoint tracker does not know the relationship between the detected points. For example, the keypoint tracker does not know whether two or more detected points belong to the same feature.

The processor 104 may use the plurality of feature points detected by the lane marker detector 114 or the traffic sign detector 116 to determine the motion trajectory 120 and the camera pose 122 relative to the ground plane. For example, the positioning module 118 may use a plurality of feature points to determine the scale information. The positioning module 118 may receive feature points of the lane markings from the lane marking detector 114. The locating module 118 may receive the characteristic points of the traffic sign from the traffic sign detector 116.

The processor 104 may determine the scale information using a plurality of feature points.

The positioning module 118 may determine the motion trajectory 120 by comparing a plurality of feature points from a first image with corresponding feature points in a second image. As used herein, the term "motion trajectory" refers to the curve or path traced by the electronic device 102 as it moves through space. The motion trajectory 120 may be determined in a local frame. In other words, the positioning module 118 may determine a change in location relative to a previous location. This movement may be referred to as the ego-motion of the vehicle.

It should be noted that the motion trajectory 120 is a localization of the vehicle's motion in a local frame. At this point, the motion trajectory 120 is independent of the global frame. Thus, the motion trajectory 120 does not depend on GPS or other global positioning methods. This is beneficial in situations where the GPS signal is unavailable or corrupted. For example, in a parking garage, GPS signals may not be available. Also, in urban canyons, GPS signals may be corrupted by interference. However, the systems and methods described herein do not rely on GPS signals or external maps to perform vehicle localization and positioning.

In one embodiment, the positioning module 118 may receive feature points of a feature in a first image. The positioning module 118 may receive feature points of the same feature in a second image. Knowing the time elapsed between the first image and the second image, the positioning module 118 can compare the changes in the corresponding feature points to determine the motion trajectory 120.
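
A hedged sketch of this comparison is shown below, assuming OpenCV is available; the marker geometry, intrinsics, and pixel coordinates are made-up example values, not parameters taken from the patent.

```python
# Frame-to-frame camera motion from the four corners of one lane marker.
import numpy as np
import cv2

# Marker corners in the marker's own plane (metres, z = 0): known object type.
MARKER_OBJ = np.array([[0.0, 0.0, 0.0], [0.15, 0.0, 0.0],
                       [0.15, 3.0, 0.0], [0.0, 3.0, 0.0]])
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
DIST = np.zeros(5)

def camera_pose(image_corners):
    """Camera pose relative to the marker from one image's corner pixels."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_OBJ, image_corners, K, DIST)
    R, _ = cv2.Rodrigues(rvec)
    centre = -R.T @ tvec             # camera centre in the marker's frame
    return R, centre

# Pixel corners of the same marker in two consecutive images (assumed values).
img1 = np.array([[600.0, 500.0], [640.0, 500.0], [700.0, 300.0], [660.0, 300.0]])
img2 = np.array([[590.0, 560.0], [635.0, 560.0], [700.0, 330.0], [655.0, 330.0]])

R1, c1 = camera_pose(img1)
R2, c2 = camera_pose(img2)
print("ego-motion between frames (m):", (c2 - c1).ravel())
print("relative rotation:\n", R2 @ R1.T)
```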

Furthermore, because the feature points tracked by the lane marker detector 114 and the traffic sign detector 116 (e.g., vertices of traffic signs or corners of lane markers) are related to a single geometric shape, the camera pose 122 may be estimated using the feature points. The camera pose 122 cannot be determined with conventional corner detectors.

In one embodiment, the camera pose 122 may be a six degree of freedom (6DoF) pose of the camera 106. The 6DoF pose of the camera 106 may be defined as a translation (e.g., a change in position forward/backward, up/down, and left/right) and an orientation (e.g., pitch, yaw, and roll) of the camera 106. The camera pose 122 may be used to define the pose of the vehicle itself.

The camera pose 122 may be determined relative to the local ground plane. For example, the positioning module 118 may apply a transformation to convert the feature points from the image plane to the ground plane coordinate system. The positioning module 118 may then determine the 6DoF pose based on tracking the change in the feature points from one image to another.
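
One possible form of this image-plane-to-ground-plane transformation is sketched below; it is a hedged example assuming a calibrated camera at a known mounting height looking along the road, with made-up intrinsics and pixel values.

```python
# Intersect a pixel's viewing ray with the road plane z = 0.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
CAM_HEIGHT = 1.5                       # metres above the road surface
# Camera frame: x right, y down, z forward. Road frame: x forward, y left, z up.
R_ROAD_FROM_CAM = np.array([[0.0, 0.0, 1.0],
                            [-1.0, 0.0, 0.0],
                            [0.0, -1.0, 0.0]])

def pixel_to_ground(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_road = R_ROAD_FROM_CAM @ ray_cam
    if ray_road[2] >= 0:               # ray does not point towards the ground
        return None
    s = CAM_HEIGHT / -ray_road[2]      # distance along the ray to z = 0
    return np.array([0.0, 0.0, CAM_HEIGHT]) + s * ray_road

print(pixel_to_ground(700.0, 500.0))   # a few metres ahead, slightly to the right
```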

It should be noted that the motion trajectory 120 and camera pose 122 may be determined without using a map. This is beneficial because the electronic device 102 may perform vehicle localization without relying on maps, which may be inaccurate, incomplete, or unavailable. Instead, the electronic device 102 may perform vehicle localization using feature points from detected lane markings or traffic signs.

In one embodiment, the electronic device 102 may combine the measurements of the detected lane markings or traffic signs with inertial sensor measurements. For example, the positioning module 118 may couple measurements of lane marking features and traffic signs with measurements from the inertial sensors 108 to jointly determine the motion trajectory 120. In this case, the positioning module 118 may use the inertial measurements when vehicle acceleration is observed. The positioning module 118 may use measurements of lane marking features and traffic signs when the inertial measurements are not informative (e.g., when acceleration is not observable).

In another aspect, the electronic device 102 may use the determined relative position and orientation of the camera 106 to verify and correct GPS information. Once the relative motion trajectory 120 and camera pose 122 are obtained, these measurements may be fused with GPS information to find the absolute position of the electronic device 102. For example, the motion trajectory 120 and camera pose 122 may be anchored to the latitude and longitude provided by GPS. This may provide a very accurate position of the vehicle in the global reference frame.
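
A hedged sketch of this anchoring step is shown below, using a simple flat-earth approximation; the GPS fix and displacement are assumed example values.

```python
# Shift an absolute GPS fix by a locally estimated east/north displacement.
import math

EARTH_RADIUS_M = 6378137.0

def apply_local_offset(lat_deg, lon_deg, east_m, north_m):
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

# VIO reports 12 m north and 3 m east of travel since the last reliable fix.
print(apply_local_offset(37.3875, -121.9637, east_m=3.0, north_m=12.0))
```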

In one embodiment, the electronic device 102 may combine the satellite navigation receiver measurements with measurements of detected lane markings or traffic signs. This may be achieved as described in connection with fig. 3.

The memory 112 may store instructions and/or data. The processor 104 may access (e.g., read from and/or write to) the memory 112. The memory 112 may store images and instruction code for execution by the processor 104. The memory 112 may be any electronic component capable of storing electronic information. The memory 112 may be implemented as Random Access Memory (RAM), Read Only Memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor 104, Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), registers, and so forth, including combinations thereof.

Data and instructions may be stored in memory 112. The instructions are executable by the processor 104 to implement one or more of the methods described herein. Executing the instructions may involve using data stored in the memory 112. When the processor 104 executes instructions, various portions of the instructions may be loaded onto the processor 104 and various pieces of data may be loaded onto the processor 104.

It should be noted that one or more elements or components of an electronic device may be combined and/or divided. It should be noted that one or more elements or components described in connection with fig. 1 may be optional.

FIG. 2 is a flow chart illustrating one configuration of a method 200 for determining vehicle position. The method 200 may be performed by the electronic device 102 described herein.

The electronic device 102 may obtain 202 a plurality of images. For example, the electronic device 102 may be configured with a camera 106. The camera 106 may capture one or more images (e.g., digital images, image frames, video, etc.).

The electronic device 102 may detect 204 an object in the plurality of images. The object may comprise a lane marker or a traffic sign. For example, the electronic device 102 may be configured with a lane marking detector 114, a traffic sign detector 116, or both. The lane marker detector 114 may use Computer Vision (CV) -based or deep neural network algorithms to detect lane markers or lane marker segments in the image. The traffic sign detector 116 may also detect one or more traffic signs in the image using CV-based or deep neural network algorithms.

The electronic device 102 may determine 206 a plurality of feature points on the object. The feature points may have an established relationship with each other based on the object type. The relationship between feature points is known based on the object type. In the case of lane markings, the lane marker detector 114 may determine three or more end corners of the detected lane marker. Three or more corner points have an established relationship of belonging to the same lane marker and may define the extent of the lane marker.

In the case of a traffic sign, the traffic sign detector 116 may determine three or more vertices of the detected traffic sign. Three or more vertices have established relationships that belong to the same traffic sign and may define the boundaries of the traffic sign.

The electronic device 102 may determine 208 a motion trajectory 120 and a camera pose 122 relative to a ground plane using the plurality of feature points. For example, the electronic device 102 may use a plurality of feature points to determine the scale information. In one embodiment, the electronic device 102 may determine 208 the motion trajectory 120 and the camera pose 122 by comparing a plurality of feature points determined from the first image with corresponding feature points determined from the second image.

FIG. 3 is a block diagram illustrating another example of an electronic device 302 in which systems and methods for determining vehicle location may be implemented. The electronic device 302 described in connection with fig. 3 may be implemented in accordance with the electronic device 102 described in connection with fig. 1.

The electronic device 302 may be configured with a camera 306, one or more inertial sensors 308, a GPS receiver 310, one or more vehicle sensors 311, and a processor 304. The camera 306 may provide an image 328 to the processor 304. The inertial sensor 308 may provide inertial sensor measurements 330 to the processor 304. For example, the inertial sensor measurements 330 may include measurements from one or more accelerometers or one or more gyroscopes.

The GPS receiver 310 may provide GPS measurements 332 to the processor 304. The GPS measurements 332 may contain GPS coordinates. It should be noted that although a GPS receiver 310 is described, other satellite navigation measurements may be used. For example, the GPS receiver 310 may be implemented as a GNSS receiver.

One or more vehicle sensors 311 may provide vehicle sensor measurements 338 to the processor 304. Examples of vehicle sensors 311 include wheel speed encoders, drive shaft angle sensors, and/or speedometers. Examples of vehicle sensor measurements 338 include vehicle speed, wheel speed, or driveshaft angle.

The processor 304 may include or may implement a lane marking detector 314. The lane marking detector 314 may receive the image 328 from the camera 306. The lane marker detector 314 may determine a lane marker measurement 334. For example, the lane marker detector 314 may detect one or more lane markers in the image 328. The lane marker detector 314 may also determine three or more feature points on the detected lane marker.

The processor 304 may also include or may implement a traffic sign detector 316. The traffic sign detector 316 may receive the image 328 from the camera 306. The traffic sign detector 316 may determine a traffic sign measurement 336. For example, the traffic sign detector 316 may detect one or more traffic signs in the image 328. The traffic sign detector 316 may also determine three or more feature points on the detected traffic sign.

The processor 304 may also include or may implement a Visual Inertial Odometer (VIO) module 318. The VIO module 318 may be configured to receive inertial sensor measurements 330, lane marker measurements 334, and traffic sign measurements 336. The VIO module 318 may determine the motion trajectory 320 and camera pose 322 of the electronic device 302 based on the inertial sensor measurements 330, the lane marker measurements 334, the traffic sign measurements 336, or a combination thereof.

In one embodiment, the VIO module 318 may combine the lane marking measurement 334 or the traffic sign measurement 336 with the inertial sensor measurement 330. For example, the VIO module 318 may couple the measurements 334, 336 of the lane marking features and traffic signs with the inertial sensor measurements 330 to jointly determine the motion trajectory 320 and the camera pose 322. In this case, the VIO module 318 may use the inertial sensor measurements 330 when vehicle acceleration is observed. The VIO module 318 may use the lane marking measurements 334 and the traffic sign measurements 336 when the inertial sensor measurements 330 are negligible (e.g., when acceleration is not observable).

In another configuration, the VIO module 318 may supplement the inertial sensor measurements 330 with lane marking measurements 334 or traffic sign measurements 336. For example, the VIO module 318 may use the lane marker measurements 334 or the traffic sign measurements 336 to perform redundant checks on the motion trajectory 320 and the camera pose 322 determined from the inertial sensor measurements 330.

In another embodiment, the processor 304 may also include or may implement a fusion engine 324. The fusion engine 324 may combine the GPS measurements 332 with the motion trajectory 320 and camera pose 322 determined by the VIO module 318. The fusion engine 324 may determine a global pose and position 326 based on the combined measurements. The global pose and position 326 may include the motion trajectory 320 and the camera pose 322 in a global frame.

It is known that GPS pseudorange information is affected by multipath when a vehicle travels in an urban canyon. The information received by the GPS receiver 310 may be scattered. For example, when a vehicle is traveling in a city, the GPS position may jump around due to reflections from tall buildings. Another advantage of using information from the lane marker detector 314 and the traffic sign detector 316 is outlier detection and rejection for the GPS and VIO fusion outputs.

The fusion engine 324 may be configured to receive lane marking measurements 334 and traffic sign measurements 336. The fusion engine 324 may verify the global pose and position 326 using the lane marking measurements 334 and the traffic sign measurements 336. For example, if the GPS/VIO fusion estimates that the vehicle's position is outside of the lane, then knowing the history of lateral offset of the lane markers (e.g., as determined by the lane marker detector 314) may correct this.

The fusion engine 324 can reject GPS outliers and keep updating the vehicle location based on the vehicle model. For example, by determining the motion trajectory 320 and the camera pose 322 using the lane marking measurements 334 and the traffic sign measurements 336, the electronic device 302 may know that the vehicle has been traveling in a particular lane for a period of time. If the GPS/VIO fusion estimates that the vehicle's position is outside of the lane, and the lane marker detector 314 indicates that the vehicle has not crossed a lane boundary, the electronic device 302 may rely more on the lane marker detector 314 and may reject the result from the GPS/VIO fusion. In this way, the systems and methods described herein may provide redundant checks of the results of the GPS algorithm.
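
The sketch below illustrates one way such a redundancy check could look; the threshold, field names, and values are assumptions for illustration rather than the patent's implementation.

```python
# Reject a GPS/VIO position update that contradicts the lane detector history.
LANE_WIDTH_M = 3.5

def gps_update_is_outlier(fused_lateral_offset_m, detector_offsets_m):
    """fused_lateral_offset_m: lateral offset implied by the GPS/VIO fusion.
    detector_offsets_m: recent lateral offsets from the lane marker detector."""
    fusion_says_out_of_lane = abs(fused_lateral_offset_m) > LANE_WIDTH_M / 2
    detector_saw_crossing = any(abs(o) > LANE_WIDTH_M / 2 for o in detector_offsets_m)
    return fusion_says_out_of_lane and not detector_saw_crossing

history_m = [0.30, 0.25, 0.40, 0.35]          # in-lane offsets from lane centre
print(gps_update_is_outlier(2.4, history_m))  # True -> reject the GPS jump
```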

In yet another embodiment, the fusion engine 324 may receive vehicle sensor measurements 338. In one approach, the fusion engine 324 may combine the vehicle sensor measurements 338 (e.g., speedometer measurements) with the measurements 334, 336 of the detected objects (e.g., lane markings or traffic signs). The fusion engine 324 may determine a global pose and position 326 based on the combined measurements. In another approach, the fusion engine 324 may combine vehicle sensor measurements 338 from one or more vehicle sensors 311 with GPS measurements 332 and measurements 334, 336 of detected objects (e.g., lane markings or traffic signs). The fusion engine 324 may determine a global pose and position 326 based on the combined measurements.

In one embodiment, the processor 304 may be included in a vehicle (e.g., an automobile, truck, bus, boat, robot, etc.). Thus, the vehicle may be configured to determine its own global pose and position 326.

In another embodiment, the processor 304 may be contained in a server separate from the vehicle. For example, the server may receive images 328, inertial sensor measurements 330, GPS measurements 332, and/or vehicle sensor measurements 338 from a remote source (e.g., a vehicle). The server may use the information to determine a motion trajectory 320, a camera pose 322, and/or a global pose and position 326 of the vehicle.

In yet another embodiment, the vehicle or server may be configured to transmit the motion trajectory 320, camera pose 322, and/or global pose and position 326 to a mapping service. For example, the map service may be a cloud-based service. The map service may detect and locate key landmarks to generate an accurate localization map. The map service may generate a position estimate for the vehicle in the global frame. For example, the vehicle may transmit the motion trajectory 320, camera pose 322, inertial sensor measurements 330, GPS measurements 332, and/or vehicle sensor measurements 338 to the mapping service. The map service may then determine the global pose and position 326 of the vehicle in the localization map.

Fig. 4 shows an example of vehicle localization using lane marker detection. The first example (a) shows a bird's eye view 440 of the vehicle's motion relative to a lane marker 444a. In this example, the vehicle moves from a first position at a first time (t1) to a second position at a second time (t2). The lane marker 444a includes four corners 446a. The example depicts how the geometry of the four corners 446a changes as the vehicle moves from its position at time t1 to its position at time t2.

In a second example (b), a corresponding front view 442 of a lane marker 444b is shown. The example depicts how the geometry of the corners 446b changes as the vehicle moves from its position at time t1 to its position at time t2.

As observed in these examples, the electronic device 102 may use the lane marker detector 114 to detect the lane marker 444 in the image 328. The lane marker detector 114 may detect three or more corners 446 on the lane marker 444. By tracking the corners 446 and comparing the vehicle's orientation with respect to the corners 446 from time t1 to time t2, the electronic device 102 may determine the motion trajectory 120 and the camera pose 122 of the vehicle.

FIG. 5 is a flow diagram illustrating one configuration of a method 500 for determining vehicle position based on lane marker detection. The method 500 may be performed by the electronic device 102 described herein.

The electronic device 102 may obtain 502 a plurality of images 328. For example, the electronic device 102 may be configured with a camera 106. The camera 106 may capture one or more images 328 (e.g., digital images, image frames, videos, etc.).

The electronic device 102 may detect 504 the lane markings 444. For example, the electronic device 102 may be configured with a lane marking detector 114. The lane marker detector 114 may use Computer Vision (CV) -based or deep neural network algorithms to detect lane markers 444 or lane marker segments in the image 328.

The electronic device 102 may determine 506 three or more end corners 446 of the detected lane markings 444. For example, the lane marker detector 114 may be configured to identify the corners 446 of the lane markings 444. The lane marker detector 114 may provide pixel coordinates for the lane marker corners 446. It should be noted that at least three separate corners 446 from the same lane marker 444 may be identified to enable the electronic device 102 to determine the pose of the camera 106.

The electronic device 102 may use the three or more end corners 446 of the detected lane markings 444 to determine 508 the motion trajectory 120 and the camera pose 122 relative to the ground plane. For example, the electronic device 102 may use the three or more end corners 446 to determine the scale information. In one embodiment, the electronic device 102 may determine 508 the motion trajectory 120 and the camera pose 122 by comparing three or more end corners 446 determined from a first image 328 with the corresponding three or more end corners 446 determined from a second image 328.

Fig. 6 is a flow chart illustrating a configuration of a method 600 for determining a vehicle location based on traffic sign detection. The method 600 may be performed by the electronic device 102 described herein.

The electronic device 102 may obtain 602 a plurality of images 328. For example, the electronic device 102 may be configured with a camera 106. The camera 106 may capture one or more images 328 (e.g., digital images, image frames, videos, etc.).

The electronic device 102 may detect 604 a traffic sign. For example, the electronic device 102 may be configured with a traffic sign detector 116. The traffic sign detector 116 may use Computer Vision (CV) -based or deep neural network algorithms to detect traffic signs in the image 328.

The electronic device 102 may determine 606 three or more vertices of the detected traffic sign. For example, the traffic sign detector 116 may be configured to identify vertices of traffic signs. The traffic sign detector 116 may provide pixel coordinates for the traffic sign vertices. It should be noted that at least three separate vertices from the same traffic sign may be identified to enable the electronic device 102 to determine the pose of the camera 106.

The electronic device 102 may use the three or more vertices of the detected traffic sign to determine 608 the motion trajectory 120 and the camera pose 122 relative to the ground plane. For example, the electronic device 102 may determine the scale information using three or more traffic sign vertices. In one embodiment, the electronic device 102 may determine 608 the motion trajectory 120 and the camera pose 122 by comparing three or more traffic sign vertices determined from the first image 328 with corresponding traffic sign vertices determined from the second image 328.

FIG. 7 is a flow chart illustrating another configuration of a method 700 for determining vehicle position. The method 700 may be performed by the electronic device 302 described herein.

The electronic device 302 may detect 702 the lane markings 444 or traffic signs. For example, the electronic device 302 may be configured with a lane marking detector 314, a traffic sign detector 316, or both. The lane marker detector 314 may use Computer Vision (CV) -based or deep neural network algorithms to detect lane markers 444 or lane marker segments in the image 328. The lane marker detector 314 may generate a lane marker measurement 334.

The traffic sign detector 316 may also detect one or more traffic signs in the image 328 using CV-based or deep neural network algorithms. The traffic sign detector 316 may generate a traffic sign measurement 336.

The electronics 302 may receive 704 the inertial sensor measurements 330. For example, the electronic device 302 may be configured with one or more inertial sensors 308. The inertial sensors 308 may include one or more accelerometers and/or one or more gyroscopes, with which the inertial sensors 308 generate the inertial sensor measurements 330.

The electronic device 302 may combine 706 the lane marking measurement 334 or the traffic sign measurement 336 with the inertial sensor measurement 330. For example, the electronics 302 may provide lane marking measurements 334, traffic sign measurements 336, and inertial sensor measurements 330 to the VIO module 318.

The electronic device 302 may determine 708 the motion trajectory 320 based on the combined measurements. For example, the VIO module 318 may use the inertial sensor measurements 330 to determine scale information for the images 328 when vehicle acceleration is observed. When the inertial sensor measurements 330 are negligible (e.g., when acceleration is not observable), the VIO module 318 may use the lane marking measurements 334 and the traffic sign measurements 336 to determine the scale information for the images 328.

FIG. 8 is a flow chart illustrating yet another configuration of a method 800 for determining vehicle position. The method 800 may be performed by the electronic device 302 described herein.

The electronic device 302 may detect 802 the lane markings 444 or traffic signs. For example, the electronic device 302 may be configured with a lane marking detector 314, a traffic sign detector 316, or both. The lane marker detector 314 may generate a lane marker measurement 334. The traffic sign detector 316 may generate a traffic sign measurement 336.

The electronic device 302 may receive 804 the GPS measurements 332. For example, the GPS receiver 310 may receive GPS signals. The GPS receiver 310 may determine the latitude and longitude of the electronic device 302 based on the GPS signals.

The electronic device 302 may receive 806 vehicle sensor measurements 338 from one or more vehicle sensors 311. The vehicle sensor measurements 338 may include wheel speed or driveshaft angle.

The electronic device 302 may combine 808 the lane marking measurement 334, the traffic sign measurement 336, the GPS measurement 332, and the vehicle sensor measurement 338. For example, the fusion engine 324 may receive lane marking measurements 334, traffic sign measurements 336, GPS measurements 332, and vehicle sensor measurements 338.

The electronic device 302 may determine 810 a global pose and position 326 based on the combined measurements. For example, the electronic device 302 may determine the local motion trajectory 320 and the camera pose 322 based on the lane marker measurements 334 and the traffic sign measurements 336. The fusion engine 324 can locate the motion trajectory 320 and the camera pose 322 in a global frame using the GPS measurements 332 and the vehicle sensor measurements 338.

Fig. 9 illustrates certain components that may be included within an electronic device 902 configured to implement various configurations of the systems and methods disclosed herein. Examples of the electronic device 902 may include a camera, a video camera, a digital camera, a cellular phone, a smart phone, a computer (e.g., a desktop computer, a laptop computer, etc.), a tablet device, a media player, a television, a vehicle, an automobile, a personal camera, a wearable camera, a virtual reality device (e.g., a headset), an augmented reality device (e.g., a headset), a mixed reality device (e.g., a headset), an action camera, a surveillance camera, a mounted camera, a connected camera, a robot, an aircraft, an Unmanned Aerial Vehicle (UAV), a smart appliance, a healthcare apparatus, a game console, a Personal Digital Assistant (PDA), a set-top box, and so forth. The electronic device 902 may be implemented in accordance with one or more of the electronic devices 102 described herein.

The electronic device 902 includes a processor 904. The processor 904 may be a general-purpose single- or multi-chip microprocessor (e.g., an ARM), a special-purpose microprocessor (e.g., a Digital Signal Processor (DSP)), a microcontroller, a programmable gate array, or the like. The processor 904 may be referred to as a Central Processing Unit (CPU). Although only a single processor 904 is illustrated in the electronic device 902, in alternative configurations a combination of processors (e.g., an ARM and a DSP) may be implemented.

The electronic device 902 also includes memory 912. The memory 912 may be any electronic component capable of storing electronic information. The memory 912 may be implemented as Random Access Memory (RAM), Read Only Memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor, EPROM memory, EEPROM memory, registers, and so forth, including combinations thereof.

Data 909a and instructions 907a may be stored in the memory 912. The instructions 907a may be executable by the processor 904 to perform one or more of the methods, procedures, steps, and/or functions described herein. Executing the instructions 907a may involve the use of the data 909a stored in the memory 912. When the processor 904 executes the instructions 907a, various portions of the instructions 907b may be loaded onto the processor 904, and/or various pieces of data 909b may be loaded onto the processor 904.

The electronic device 902 may also include a transmitter 911 and/or a receiver 913 to allow signals to be transmitted to and received from the electronic device 902. The transmitter 911 and the receiver 913 may be collectively referred to as a transceiver 915. One or more antennas 917a-b may be electrically coupled to the transceiver 915. The electronic device 902 may also include multiple transmitters, multiple receivers, multiple transceivers, and/or additional antennas (not shown).

The electronic device 902 may include a Digital Signal Processor (DSP) 921. The electronic device 902 may also include a communication interface 923. The communication interface 923 may enable one or more kinds of input and/or output. For example, the communication interface 923 may include one or more ports and/or communication devices for linking other devices to the electronic device 902. In some configurations, the communication interface 923 may include the transmitter 911, the receiver 913, or both (e.g., the transceiver 915). Additionally or alternatively, the communication interface 923 may include one or more other interfaces (e.g., a touchscreen, a keypad, a keyboard, a microphone, a camera, etc.). For example, the communication interface 923 may enable a user to interact with the electronic device 902.

Various components of electronic device 902 may be coupled together by one or more buses, which may include a power bus, a control signal bus, a status signal bus, a data bus, and so forth. For clarity, the various buses are shown in FIG. 9 as the bus system 919.

The term "determining" encompasses a wide variety of actions and, thus, "determining" can include calculating (computing/calculating), processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Further, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Further, "determining" may include resolving, selecting, establishing, and the like.

The phrase "based on" does not mean "based only on," unless expressly specified otherwise. In other words, the phrase "based on" describes that "is based only on" and "is based at least on" both.

The term "processor" should be broadly interpreted as encompassing general purpose processors, Central Processing Units (CPUs), microprocessors, Digital Signal Processors (DSPs), controllers, microcontrollers, state machines, and the like. In some cases, a "processor" may refer to an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), or the like. The term "processor" may refer to a combination of processing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The term "memory" should be broadly construed to encompass any electronic component capable of storing electronic information. The term memory may refer to various types of processor-readable media, such as Random Access Memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (eeprom), flash memory, magnetic or optical data storage, registers, and so forth. A memory is considered to be in electronic communication with a processor if the processor can read information from, and/or write information to, the memory. A memory integrated into the processor is in electronic communication with the processor.

The terms "instructions" and "code" should be construed broadly to encompass any type of computer-readable statements. For example, the terms "instructions" and "code" may refer to one or more programs, routines, subroutines, functions, procedures, and the like. The "instructions" and "code" may comprise a single computer-readable statement or many computer-readable statements.

The functions described herein may be implemented in software or firmware executed by hardware. The functions may be stored as one or more instructions on a computer-readable medium. The terms "computer-readable medium" and "computer program product" refer to any tangible storage medium that can be accessed by a computer or a processor. By way of example, and not limitation, computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. It should be noted that a computer-readable medium may be tangible and non-transitory. The term "computer program product" refers to a computing device or processor in combination with code or instructions (e.g., a "program") that may be executed, processed, or computed by the computing device or processor. As used herein, the term "code" may refer to software, instructions, code, or data that is executable by a computing device or processor.

Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.

The methods disclosed herein comprise one or more steps or actions for achieving the method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the described method, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

Further, it should be appreciated that modules and/or other suitable means for performing the methods and techniques described herein may be downloaded and/or otherwise obtained by a device. For example, a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein may be provided via a storage device (e.g., Random Access Memory (RAM), Read-Only Memory (ROM), a physical storage medium such as a Compact Disc (CD) or a floppy disk, etc.), such that a device can obtain the various methods upon coupling or providing the storage device to the device.

It is to be understood that the claims are not limited to the precise configuration and components shown above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the systems, methods, and apparatus described herein without departing from the scope of the claims.
