System and method for calibrating an inertial test unit and camera

Document No.: 639434    Publication date: 2021-05-11

Note: This technology, "System and method for calibrating an inertial test unit and camera," was designed and created by Wang Zhen on 2019-09-23. Abstract: The present application relates to a system for calibrating an inertial test unit and a camera of an autonomous vehicle. The system includes at least one storage medium including instructions for calibrating the inertial test unit and the camera, and at least one processor in communication with the storage medium. When executing the instructions, the at least one processor is configured to: acquire a trajectory along which the autonomous vehicle travels in a straight line (510); determine an inertial test unit pose of the inertial test unit relative to a first coordinate system (520); determine a camera pose of the camera relative to a second coordinate system (530); determine a relative coordinate pose between the first coordinate system and the second coordinate system (540); and determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose (550). A method and a non-transitory readable medium for calibrating an inertial test unit and a camera of an autonomous vehicle are also disclosed.

1. A system for calibrating an inertial test unit and a camera of an autonomous vehicle, comprising:

at least one storage medium comprising a set of instructions for calibrating the inertial test unit and the camera; and

at least one processor in communication with the storage medium, wherein the at least one processor, when executing the set of instructions, is configured to:

acquiring a trajectory along which the autonomous vehicle travels in a straight line;

determining an inertial test unit pose of the inertial test unit relative to a first coordinate system;

determining a camera pose of the camera relative to a second coordinate system;

determining a relative coordinate pose between the first coordinate system and the second coordinate system; and

determining a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

2. The system according to claim 1, wherein said at least one processor is further configured to:

determining the first coordinate system based on the trajectory of the autonomous vehicle.

3. The system of claim 2, wherein to determine the inertial test unit pose, the at least one processor is further configured to:

obtaining inertial test unit data from the inertial test unit; and

determining the inertial test unit pose based on the inertial test unit data and the first coordinate system.

4. The system according to claim 1, wherein said at least one processor is further configured to:

acquiring camera data from the camera; and

determining the second coordinate system based on the camera data.

5. The system of claim 4, wherein to determine the camera pose, the at least one processor is further configured to:

determining the camera pose based on the camera data and the second coordinate system.

6. The system according to claim 4, wherein to determine the second coordinate system, the at least one processor is further configured to:

determining a second ground normal vector based on the camera data and a three-dimensional reconstruction technique;

determining a second direction of travel of the camera based on the camera data; and

determining the second coordinate system based on the second ground normal vector and the second direction of travel of the camera.

7. The system of claim 6, wherein the three-dimensional reconstruction technique is a structure-from-motion (SFM) method.

8. The system according to any one of claims 1 to 7, wherein to determine the relative coordinate pose, the at least one processor is further configured to:

aligning a first ground normal vector of the first coordinate system with the second ground normal vector of the second coordinate system;

aligning a first direction of travel of the inertial test unit with the second direction of travel of the camera; and

determining the relative coordinate pose between the first coordinate system and the second coordinate system.

9. A method for calibrating an inertial test unit and a camera of an autonomous vehicle, implemented on a computing device comprising at least one storage medium containing a set of instructions and at least one processor in communication with the storage medium, the method comprising:

acquiring a trajectory along which the autonomous vehicle travels in a straight line;

determining an inertial test unit pose of the inertial test unit relative to a first coordinate system;

determining a camera pose of the camera relative to a second coordinate system;

determining a relative coordinate pose between the first coordinate system and the second coordinate system; and

determining a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

10. The method of claim 9, further comprising:

determining the first coordinate system based on the trajectory of the autonomous vehicle.

11. The method of claim 10, wherein the determining the inertial test unit pose further comprises:

obtaining inertial test unit data from the inertial test unit; and

determining the inertial test unit pose based on the inertial test unit data and the first coordinate system.

12. The method of claim 9, further comprising:

acquiring camera data from the camera; and

determining the second coordinate system based on the camera data.

13. The method of claim 12, wherein the determining the camera pose comprises:

determining the camera pose based on the camera data and the second coordinate system.

14. The method of claim 12, wherein the determining the second coordinate system comprises:

determining a second ground normal vector based on the camera data and a three-dimensional reconstruction technique;

determining a second direction of travel of the camera based on the camera data; and

determining the second coordinate system based on the second ground normal vector and the second direction of travel of the camera.

15. The method of claim 14, wherein the three-dimensional reconstruction technique is a structure-from-motion (SFM) method.

16. The method of any one of claims 9 to 15, wherein said determining the relative coordinate pose comprises:

aligning a first ground normal vector of the first coordinate system with the second ground normal vector of the second coordinate system;

aligning a first direction of travel of the inertial test unit with the second direction of travel of the camera; and

determining the relative coordinate pose between the first coordinate system and the second coordinate system.

17. A non-transitory readable medium comprising at least one set of instructions for calibrating an inertial test unit and a camera of an autonomous vehicle, wherein the at least one set of instructions, when executed by at least one processor of an electronic device, instruct the at least one processor to perform a method comprising:

acquiring a trajectory along which the autonomous vehicle travels in a straight line;

determining an inertial test unit pose of the inertial test unit relative to a first coordinate system;

determining a camera pose of the camera relative to a second coordinate system;

determining a relative coordinate pose between the first coordinate system and the second coordinate system; and

determining a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

18. The non-transitory readable medium of claim 17, wherein the method further comprises:

determining the first coordinate system based on the trajectory of the autonomous vehicle.

19. The non-transitory readable medium of claim 18, wherein the determining the inertial test unit pose further comprises:

obtaining inertial test unit data from the inertial test unit; and

determining the inertial test unit pose based on the inertial test unit data and the first coordinate system.

20. A system for calibrating an inertial test unit and a camera of an autonomous vehicle, comprising:

a trajectory acquisition module configured to acquire a trajectory along which the autonomous vehicle travels straight;

an inertial test unit pose determination module configured to determine an inertial test unit pose of the inertial test unit relative to a first coordinate system;

a camera pose determination module configured to determine a camera pose of the camera relative to a second coordinate system;

a relative coordinate pose determination module configured to determine a relative coordinate pose between the first coordinate system and the second coordinate system; and

a relative pose determination module configured to determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

Technical Field

The present application relates generally to systems and methods for autonomous driving, and more particularly to systems and methods for calibrating an inertial test unit (IMU) and a camera of an autonomous vehicle.

Background

Autonomous vehicles incorporating various sensors are becoming increasingly popular. Vehicle-mounted inertial test units and cameras play an important role in autonomous driving. However, in some cases, calibration between the inertial test unit and the camera is complicated, or indirect. It is therefore desirable to provide a system and method for calibrating an inertial test unit and camera in a simple and straightforward manner.

Disclosure of Invention

One aspect of the present application introduces a system for calibrating an inertial test unit and a camera of an autonomous vehicle. The system may include at least one storage medium including a set of instructions for calibrating the inertial test unit and the camera; and at least one processor in communication with the storage medium and configured, when executing the instructions, to: acquire a trajectory along which the autonomous vehicle travels in a straight line; determine an inertial test unit pose of the inertial test unit relative to a first coordinate system; determine a camera pose of the camera relative to a second coordinate system; determine a relative coordinate pose between the first coordinate system and the second coordinate system; and determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

In some embodiments, the at least one processor is further configured to: determining the first coordinate system based on a trajectory of the autonomous vehicle.

In some embodiments, to determine the inertial test unit pose, the at least one processor is further configured to: obtain inertial test unit data from the inertial test unit; and determine the inertial test unit pose based on the inertial test unit data and the first coordinate system.

In some embodiments, the at least one processor is further configured to: acquire camera data from the camera; and determine the second coordinate system based on the camera data.

In some embodiments, to determine the camera pose, the at least one processor is further configured to: determine the camera pose based on the camera data and the second coordinate system.

In some embodiments, to determine the second coordinate system, the at least one processor is further configured to: determine a second ground normal vector based on the camera data and a three-dimensional reconstruction technique; determine a second direction of travel of the camera based on the camera data; and determine the second coordinate system based on the second ground normal vector and the second direction of travel of the camera.

In some embodiments, the three-dimensional reconstruction technique is a structure-from-motion (SFM) method.

In some embodiments, to determine the relative coordinate pose, the at least one processor is further configured to: align a first ground normal vector of the first coordinate system with a second ground normal vector of the second coordinate system; align a first direction of travel of the inertial test unit with a second direction of travel of the camera; and determine the relative coordinate pose between the first coordinate system and the second coordinate system.

According to another aspect of the present application, a method for calibrating an inertial test unit and a camera of an autonomous vehicle is provided. The method may include acquiring a trajectory along which the autonomous vehicle travels in a straight line; determining an inertial test unit pose of the inertial test unit relative to a first coordinate system; determining a camera pose of the camera relative to a second coordinate system; determining a relative coordinate pose between the first coordinate system and the second coordinate system; and determining a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

According to yet another aspect of the present application, a non-transitory computer-readable medium includes at least one set of instructions for calibrating an inertial test unit and a camera of an autonomous vehicle. When executed by at least one processor of an electronic device, the at least one set of instructions directs the at least one processor to perform a method. The method may include acquiring a trajectory along which the autonomous vehicle travels in a straight line; determining an inertial test unit pose of the inertial test unit relative to a first coordinate system; determining a camera pose of the camera relative to a second coordinate system; determining a relative coordinate pose between the first coordinate system and the second coordinate system; and determining a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

According to yet another aspect of the present application, a system for calibrating an inertial test unit and a camera of an autonomous vehicle may include a trajectory acquisition module configured to acquire a trajectory along which the autonomous vehicle travels in a straight line; an inertial test unit pose determination module configured to determine an inertial test unit pose of the inertial test unit relative to a first coordinate system; a camera pose determination module configured to determine a camera pose of the camera relative to a second coordinate system; a relative coordinate pose determination module configured to determine a relative coordinate pose between the first coordinate system and the second coordinate system; and a relative pose determination module configured to determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

Additional features of the present application will be set forth in part in the description which follows. Additional features of some aspects of the present application will be apparent to those of ordinary skill in the art in view of the following description and accompanying drawings, or in view of the production or operation of the embodiments. The features of the present application may be realized and attained by practice or use of the methods, instrumentalities and combinations of the various aspects of the specific embodiments described below.

Drawings

Methods of the present application will be further described by way of exemplary embodiments. These exemplary embodiments will be described in detail by means of the accompanying drawings. The figures are not drawn to scale. These embodiments are not intended to be limiting, and in these embodiments, like reference numerals in the various figures denote similar structure, in which:

FIG. 1 is a schematic illustration of an exemplary autopilot system shown in accordance with some embodiments of the present application;

FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device shown in accordance with some embodiments of the present application;

FIG. 3 is a schematic diagram of exemplary hardware and/or software components of an exemplary mobile device shown in accordance with some embodiments of the present application;

FIG. 4 is a block diagram of an exemplary processing device shown in accordance with some embodiments of the present application;

FIG. 5 is a flow chart of an exemplary process for calibrating an inertial test unit and camera of an autonomous vehicle, shown in accordance with some embodiments of the present application;

FIG. 6 is a schematic diagram of a relative pose between a camera and an inertial test unit, shown in accordance with some embodiments of the present application;

FIG. 7 is a flow chart illustrating an exemplary process for determining the inertial test unit pose of the inertial test unit with respect to the first coordinate system according to some embodiments of the present application;

FIG. 8 is a flow diagram of an exemplary process for determining a camera pose of a camera relative to a second coordinate system, shown in accordance with some embodiments of the present application;

FIG. 9 is a flow diagram of an exemplary process for determining a second coordinate system, shown in accordance with some embodiments of the present application; and

FIG. 10 is a flow diagram illustrating an exemplary process for determining a relative coordinate pose between a first coordinate system and a second coordinate system according to some embodiments of the present application.

Detailed Description

The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a particular application and its requirements. It will be apparent to those skilled in the art that various modifications to the disclosed embodiments are possible, and that the general principles defined in this application may be applied to other embodiments and applications without departing from the spirit and scope of the application. Thus, the present application is not limited to the described embodiments, but should be accorded the widest scope consistent with the claims.

The terminology used in the description presented herein is for the purpose of describing particular example embodiments only and is not intended to limit the scope of the present application. As used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

These and other features, aspects, and advantages of the present application, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description of the accompanying drawings, all of which form a part of this specification. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and description and are not intended as a definition of the limits of the application. It should be understood that the drawings are not to scale.

Flow charts are used herein to illustrate operations performed by systems according to some embodiments of the present application. It should be understood that the operations in the flow charts need not be performed in the exact order shown; rather, various steps may be processed in reverse order or simultaneously. Also, one or more other operations may be added to the flowcharts, and one or more operations may be deleted from the flowcharts.

Further, while the systems and methods disclosed herein relate primarily to calibrating an inertial test unit and camera in an autopilot system, it should be understood that this is but one exemplary embodiment. The systems and methods of the present application may be applied to any other type of transportation system. For example, the systems and methods of the present application may be applied to transportation systems in different environments, including terrestrial, marine, aerospace, etc., or any combination thereof. The autonomous vehicles of the transportation system may include taxis, private cars, carpool vehicles, buses, trains, bullet trains, high-speed rail, subways, ships, airplanes, space vehicles, hot air balloons, and the like, or any combination thereof.

One aspect of the present application relates to systems and methods for calibrating an inertial test unit and a camera of an autonomous vehicle. The system and method may define two coordinate systems when the autonomous vehicle is traveling straight. One coordinate system is used to determine the pose of the inertial test unit and the other coordinate system is used to determine the pose of the camera. Although the pose of the inertial test unit and the pose of the camera are in two different coordinate systems, the system and method may determine the relative poses of the two coordinate systems. In this manner, the system and method may determine the relative pose between the inertial test unit and the camera to calibrate them. According to the system and method described herein, the inertial test unit and the camera can be calibrated in a simple and straightforward manner.

FIG. 1 is a schematic diagram of an exemplary autopilot system 100 shown in accordance with some embodiments of the present application. In some embodiments, autopilot system 100 may include a vehicle 110 (e.g., vehicles 110-1, 110-2, ..., and/or 110-n), a server 120, a terminal device 130, a storage device 140, a network 150, and a positioning navigation system 160.

Vehicle 110 may be any type of autonomous vehicle, unmanned aerial vehicle, or the like. An autonomous vehicle or unmanned aerial vehicle may refer to a vehicle that is capable of some degree of driving automation. Exemplary levels of driving automation may include a first level where the vehicle is primarily supervised by humans and has a particular autonomous function (e.g., autonomous steering or acceleration), a second level where the vehicle has one or more Advanced Driver Assistance Systems (ADAS) (e.g., adaptive cruise control systems, lane keeping systems) that may control braking, steering, and/or acceleration of the vehicle, a third level where the vehicle may be autonomously driven when one or more particular conditions are met, a fourth level where the vehicle may operate without manual input or oversight, but still subject to certain limitations (e.g., confined to a certain area), a fifth level where the vehicle may operate autonomously in all circumstances, and the like, or any combination thereof.

In some embodiments, vehicle 110 may have equivalent structures that enable vehicle 110 to move or fly. For example, the vehicle 110 may include the structure of a conventional vehicle, such as a chassis, a suspension, a steering device (e.g., a steering wheel), a braking device (e.g., a brake pedal), an accelerator, and so forth. As another example, the vehicle 110 may have a body and at least one wheel. The body may be any body type, such as a sports car, a coupe, a sedan, a pick-up truck, a station wagon, a sport utility vehicle (SUV), a minivan, or a conversion van. At least one wheel may be configured as all-wheel drive (AWD), front-wheel drive (FWD), rear-wheel drive (RWD), or the like. In some embodiments, vehicle 110 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, a conventional internal combustion engine vehicle, or the like.

In some embodiments, the vehicle 110 is able to sense its environment and navigate using one or more detection units 112. The detection units 112 may include a Global Positioning System (GPS) module, a radar (e.g., LiDAR), an inertial test unit (IMU), a camera, and the like, or any combination thereof. The radar (e.g., LiDAR) may be configured to scan the surrounding environment and generate point cloud data. The point cloud data may then be used to produce a digital three-dimensional representation of one or more objects surrounding the vehicle 110. The GPS module may refer to a device capable of receiving geolocation and time information from GPS satellites and then computing the geographic location of the device. An inertial test unit may refer to an electronic device that uses various inertial sensors to measure and provide a vehicle's specific force and angular rate, and sometimes the magnetic field around the vehicle. The various inertial sensors may include acceleration sensors (e.g., piezoelectric sensors), velocity sensors (e.g., Hall sensors), distance sensors (e.g., radar, lidar, infrared sensors), steering angle sensors (e.g., tilt sensors), traction-related sensors (e.g., force sensors), and so forth. The camera may be configured to acquire one or more images relating to objects (e.g., people, animals, trees, roadblocks, buildings, or vehicles) within range of the camera.

In some embodiments, the server 120 may be a single server or a group of servers. The set of servers may be centralized or distributed (e.g., server 120 may be a distributed system). In some embodiments, the server 120 may be local or remote. For example, server 120 may access information and/or data stored in terminal device 130, detection unit 112, vehicle 110, storage device 140, and/or position navigation system 160 via network 150. As another example, server 120 may be directly connected to terminal device 130, detection unit 112, vehicle 110, and/or storage device 140 to access stored information and/or data. In some embodiments, the server 120 may be implemented on a cloud platform or an on-board computer. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof. In some embodiments, the server 120 may execute on a computing device 200 described in fig. 2 herein that contains one or more components.

In some embodiments, the server 120 may include a processing device 122. Processing device 122 may process information and/or data associated with autonomous driving to perform one or more functions described herein. For example, the processing device 122 may calibrate the inertial test unit and the camera. In some embodiments, the processing device 122 may include one or more processing engines (e.g., a single-chip processing engine or a multi-chip processing engine). By way of example only, the processing device 122 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a micro-controller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, processing device 122 may be integrated into vehicle 110 or terminal device 130.

In some embodiments, the terminal device 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a built-in device in a vehicle 130-4, a wearable device 130-5, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, smart appliance control devices, smart monitoring devices, smart televisions, smart cameras, interphones, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyecups, an augmented reality helmet, augmented reality glasses, augmented reality eyecups, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include Google™ Glass, Oculus Rift™, HoloLens™, Gear VR™, etc. In some embodiments, the in-vehicle device 130-4 may include an in-vehicle computer, an in-vehicle television, or the like. In some embodiments, the server 120 may be integrated into the terminal device 130. In some embodiments, the terminal device 130 may be a device with positioning technology for locating the position of the terminal device 130.

Storage device 140 may store data and/or instructions. In some embodiments, storage device 140 may store data obtained from vehicle 110, detection unit 112, processing device 122, terminal device 130, position navigation system 160, and/or an external storage device. For example, the storage device 140 may store inertial test unit data acquired from the inertial test unit in the detection unit 112. For another example, the storage device 140 may store camera data acquired from a camera in the detection unit 112. In some embodiments, storage device 140 may store data and/or instructions that server 120 may perform or be used to perform the exemplary methods described in this disclosure. For example, the storage device 140 may store instructions that the processing device 122 may execute or otherwise be used to calibrate the inertial test unit and the camera. In some embodiments, storage device 140 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memories may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read and write memories can include Random Access Memory (RAM). Exemplary random access memories may include Dynamic Random Access Memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), Static Random Access Memory (SRAM), thyristor random access memory (T-RAM), and zero capacitance random access memory (Z-RAM), among others. Exemplary read-only memories may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (dvd-ROM), and the like. In some embodiments, the storage device 140 may execute on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.

In some embodiments, storage device 140 may be connected to network 150 to communicate with one or more components of autonomous driving system 100 (e.g., server 120, terminal device 130, detection unit 112, vehicle 110, and/or position location navigation system 160). One or more components of the autopilot system 100 may access data or instructions stored in the storage device 140 via the network 150. In some embodiments, storage device 140 may be directly connected to or in communication with one or more components of autonomous driving system 100 (e.g., server 120, terminal device 130, detection unit 112, vehicle 110, and/or position navigation system 160). In some embodiments, the storage device 140 may be part of the server 120. In some embodiments, storage device 140 may be integrated into vehicle 110.

The network 150 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the autonomous driving system 100 (e.g., the server 120, the terminal device 130, the detection unit 112, the vehicle 110, the storage device 140, or the position location navigation system 160) may send information and/or data to other components of the autonomous driving system 100 via the network 150. For example, server 120 may obtain inertial test unit data or camera data from vehicle 110, terminal device 130, storage device 140, and/or position navigation system 160 via network 150. In some embodiments, the network 150 may be any form of wired or wireless network, or any combination thereof. By way of example only, network 150 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a zigbee network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, the network 150 may include one or more network access points. For example, the network 150 may include wired or wireless network access points (e.g., 150-1, 150-2) through which one or more components of the autopilot system 100 may connect to the network 150 to exchange data and/or information.

The positioning and navigation system 160 may determine information associated with an object, e.g., the terminal device 130, the vehicle 110, etc. In some embodiments, the positioning and navigation system 160 may include a Global Positioning System (GPS), a Global Navigation Satellite System (GLONASS), a COMPASS navigation system (COMPASS), a BeiDou navigation satellite system, a Galileo positioning system, a Quasi-Zenith Satellite System (QZSS), and the like. The information may include the position, altitude, velocity, or acceleration of the object, the current time, etc. The positioning and navigation system 160 may include one or more satellites, such as satellite 160-1, satellite 160-2, and satellite 160-3. The satellites 160-1 to 160-3 may determine the above information independently or collectively. The positioning and navigation system 160 may transmit the above information to the network 150, the terminal device 130, or the vehicle 110 via a wireless connection.

One of ordinary skill in the art will appreciate that when an element (or component) of the autopilot system 100 executes, the element may execute via an electrical signal and/or an electromagnetic signal. For example, when the terminal device 130 sends a request to the server 120, the processor of the terminal device 130 may generate an electrical signal encoding the request. The processor of the terminal device 130 may then send the electrical signal to an output port. If the end device 130 communicates with the server 120 via a wired network, the output port may be physically connected to a cable, which further transmits the electrical signal to the input port of the server 120. If the end device 130 communicates with the server 120 via a wireless network, the output port of the end device 130 may be one or more antennas that convert the electrical signals to electromagnetic signals. Within an electronic device, such as terminal device 130 and/or server 120, when its processor processes instructions, issues instructions, and/or performs actions, the instructions and/or actions are performed by electrical signals. For example, when a processor retrieves or saves data from a storage medium (e.g., storage device 140), it may send electrical signals to a read/write device of the storage medium, which may read or write structured data in the storage medium. The structured data may be transmitted to the processor in the form of electrical signals over a bus of the electronic device. Herein, an electrical signal may refer to one electrical signal, a series of electrical signals, and/or a plurality of discrete electrical signals.

FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device shown in accordance with some embodiments of the present application. In some embodiments, server 120 and/or terminal device 130 may be implemented on computing device 200. For example, processing device 122 may be implemented on computing device 200 and configured to perform the functions of processing device 122 disclosed herein.

Computing device 200 may be used to implement any of the components of autopilot system 100 of the present application. For example, the processing device 122 of the autopilot system 100 may execute on the computing device 200 via its hardware, software programs, firmware, or a combination thereof. Although only one such computer is shown for convenience, the computer functionality described herein in connection with the autopilot system 100 may be implemented in a distributed manner across a plurality of similar platforms to distribute processing loads.

Computing device 200 may include a communication (COM) port 250 connected to a network (e.g., network 150) to facilitate data communication. Computing device 200 may also include a processor (e.g., processor 220) in the form of one or more processors (e.g., logic circuits) for executing program instructions. For example, the processor may include interface circuitry and processing circuitry therein. The interface circuitry may be configured to receive electrical signals from bus 210, where the electrical signals encode structured data and/or instructions for the processing circuitry. The processing circuitry may perform logical computations and then encode the conclusions, results, and/or instructions as electrical signals. The interface circuitry may then send the electrical signals from the processing circuitry via bus 210.

The computing device 200 may also include different forms of program memory and data memory, including, for example, a disk 270, a read-only memory (ROM) 230, or a random access memory (RAM) 240, for storing various data files that are processed and/or transmitted by the computing device 200. The exemplary computing device 200 may also include program instructions stored in the read-only memory 230, the random access memory 240, and/or other types of non-transitory storage media that are executed by the processor 220. The methods and/or processes of the present application may be embodied in the form of program instructions. The computing device 200 also includes an input/output component 260, which supports input/output between the computing device 200 and other components. The computing device 200 may also receive programming and data via network communications.

For illustration only, only one processor is depicted in computing device 200. However, it should be noted that the computing device 200 in the present application may also include multiple processors, and thus operations performed by one processor as described in the present application may also be performed by multiple processors, jointly or separately. For example, if in this application the processor of computing device 200 performs operations A and B, operations A and B may also be performed jointly or separately by two different processors in computing device 200 (e.g., a first processor performing operation A and a second processor performing operation B, or the first and second processors jointly performing operations A and B).

Fig. 3 is a schematic diagram of exemplary hardware and/or software components of an exemplary mobile device shown in accordance with some embodiments of the present application. In some embodiments, the terminal device 130 may be implemented on the mobile device 300. As shown in fig. 3, mobile device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output (I/O) 350, a memory 360, a mobile operating system (OS) 370, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in mobile device 300.

In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more application programs 380 may be loaded from storage 390 into memory 360 for execution by the CPU 340. The applications 380 may include a browser or any other suitable mobile application for receiving and presenting information related to positioning or other information from the processing device 122. User interaction with the information flow may be accomplished via the input/output 350 and provided to the processing device 122 and/or other components of the autopilot system 100 via the network 150.

To implement the various modules, units, and their functions described herein, a computer hardware platform may be used as the hardware platform for one or more of the components described herein. A computer with user interface components may be used to implement a Personal Computer (PC) or any other type of workstation or terminal device. The computer may also function as a server if appropriately programmed.

Fig. 4 is a block diagram of an exemplary processing device 122 shown in accordance with some embodiments of the present application. The processing device 122 may include a trajectory acquisition module 410, an inertial test unit pose determination module 420, a camera pose determination module 430, a relative coordinate pose determination module 440, and a relative pose determination module 450.

The trajectory acquisition module 410 may be configured to acquire a straight-line travel trajectory of the autonomous vehicle.

The inertial test unit pose determination module 420 may be configured to determine an inertial test unit pose of the inertial test unit relative to the first coordinate system. For example, the inertial test unit pose determination module 420 may acquire inertial test unit data from the inertial test unit and determine a first coordinate system. As another example, the inertial test unit pose determination module 420 may determine the inertial test unit pose based on the inertial test unit data and the first coordinate system.

The camera pose determination module 430 may be configured to determine a camera pose of the camera relative to the second coordinate system. For example, the camera pose determination module 430 may acquire camera data from a camera and determine the second coordinate system based on the camera data. For another example, camera pose determination module 430 may determine the camera pose based on the camera data and the second coordinate system.

Relative coordinate pose determination module 440 may be configured to determine a relative coordinate pose between the first coordinate system and the second coordinate system. For example, the relative coordinate pose determination module 440 may align a first ground normal vector of the first coordinate system with a second ground normal vector of the second coordinate system and align a first direction of travel of the inertial test unit with a second direction of travel of the camera. Relative coordinate pose determination module 440 may also determine the relative coordinate pose between the first coordinate system and the second coordinate system.

The relative pose determination module 450 may be configured to determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

The modules in the processing device 122 may be connected to or communicate with each other through a wired connection or a wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may include a Local Area Network (LAN), a Wide Area Network (WAN), bluetooth, zigbee network, Near Field Communication (NFC), etc., or any combination thereof. Two or more modules may be combined into a single module, and any one module may be split into two or more units. For example, the processing device 122 may include a storage module (not shown) for storing information and/or data related to the inertial test unit and the camera (e.g., inertial test unit data, camera data, etc.).

FIG. 5 is a flow chart of an exemplary process 500 for calibrating an inertial test unit and a camera of an autonomous vehicle, according to some embodiments of the present application. In some embodiments, process 500 may be implemented as a set of instructions (e.g., an application program) stored in read only memory 230 or random access memory 240. Processor 220 and/or the modules in fig. 4 may execute a set of instructions, and when executing the instructions, processor 220 and/or the modules may be configured to perform process 500. The operation of the process shown below is for illustration purposes only. In some embodiments, process 500 may be accomplished with one or more additional operations not described, and/or one or more operations not discussed herein. Additionally, the order of the operations of the process as shown in FIG. 5 and described below is not intended to be limiting.

At 510, the processing device 122 (e.g., the trajectory acquisition module 410, interface circuitry of the processor 220) may acquire a trajectory along which the autonomous vehicle travels in a straight line.

In some embodiments, the inertial test unit and camera may be mounted on an autonomous vehicle for sensing and navigating the environment around the autonomous vehicle. In some embodiments, the autonomous vehicle may be controlled (by the driver or the processing device 122) to travel a predetermined distance in a straight line. In some embodiments, the predetermined distance may be a default value stored in (a storage device of) the system 100, such as the storage device 140, the read-only memory 230, the random access memory 240, etc., or determined by the system 100 or an operator thereof according to different application scenarios. For example, the predetermined distance may be 50 meters, 100 meters, 200 meters, 1000 meters, or the like. The processing device 122 may obtain the trajectory of the autonomous vehicle when the autonomous vehicle is traveling straight.

In 520, the processing device 122 (e.g., the inertial test unit pose determination module 420) may determine an inertial test unit pose of the inertial test unit relative to the first coordinate system.

In some embodiments, the inertial test unit pose relative to the first coordinate system may reflect an orientation, position, pose, or rotation of the inertial test unit relative to the first coordinate system. In some embodiments, the inertial test unit pose may be represented as Euler angles, a rotation matrix, an orientation quaternion, or the like, or any combination thereof. For example, the inertial test unit pose may be represented as the rotation matrix R_I^{s1} shown in FIG. 6, where R represents a rotation matrix, the subscript I represents the inertial test unit, and the superscript s1 represents the first coordinate system.

FIG. 6 is a schematic diagram of an exemplary relative pose between a camera and an inertial test unit, shown in accordance with some embodiments of the present application. As shown in FIG. 6, C may represent the origin of the camera, and X_C, Y_C, and Z_C may represent the three axes of the camera, respectively. I may represent the origin of the inertial test unit, and X_I, Y_I, and Z_I may represent the three axes of the inertial test unit, respectively. O_1 and O_2 may represent the origins of the first coordinate system S1 and the second coordinate system S2, respectively. X_1, Y_1, and Z_1 may represent the three axes of the first coordinate system S1, and X_2, Y_2, and Z_2 may represent the three axes of the second coordinate system S2. The rotation matrix R_C^I may represent the relative pose of the camera with respect to the inertial test unit; R_C^{s2} may represent the pose of the camera relative to the second coordinate system S2; R_I^{s1} may represent the pose of the inertial test unit relative to the first coordinate system S1; and R_{s1}^{s2} may represent the relative coordinate pose of the first coordinate system S1 with respect to the second coordinate system S2.

In some embodiments, the first coordinate system may be a defined three-dimensional coordinate system. For example, when the autonomous vehicle is traveling straight, the processing device 122 may determine a first ground normal vector and a first direction of travel of the autonomous vehicle. Processing device 122 may determine the first coordinate system according to the right hand rule using the first ground normal vector and the first direction of travel as two axes of the first coordinate system.
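For illustration, the frame construction just described can be sketched in a few lines of Python. This is a hypothetical sketch (not part of the patent), assuming numpy and the axis assignment described in operation 720 below (ground normal as X1, direction of travel as Y1):

```python
import numpy as np

def build_frame(ground_normal, travel_direction):
    """Construct a right-handed coordinate frame from a ground normal vector
    and a direction of travel (illustrative sketch only)."""
    x = ground_normal / np.linalg.norm(ground_normal)      # first axis: ground normal
    # remove any component of the travel direction along the normal so that
    # the second axis is exactly perpendicular to the first
    y = travel_direction - np.dot(travel_direction, x) * x
    y = y / np.linalg.norm(y)                               # second axis: direction of travel
    z = np.cross(x, y)                                      # third axis by the right-hand rule
    # columns are the axes of the new frame expressed in the original frame
    return np.column_stack((x, y, z))

# example: flat ground, vehicle driving roughly along the "north" direction
R_s1 = build_frame(np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.02]))
```

The same construction can be reused for the second coordinate system in operation 530.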

In some embodiments, when the autonomous vehicle is traveling straight, the inertial test unit may use various inertial sensors to detect and output acceleration, rotation rate, and sometimes a magnetic field around the inertial test unit. For example, the various inertial sensors may include one or more accelerometers, one or more gyroscopes, one or more magnetometers, and the like, or any combination thereof. The processing device 122 may use the acceleration, rotation rate, and/or magnetic field to calculate the inertial test unit pose. Processes or methods for determining the first coordinate system and/or the inertial test unit pose may be found elsewhere in this application (e.g., fig. 7 and its description).
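As a rough, hedged sketch of how the inertial test unit pose could be obtained from such data (the patent does not fix a specific algorithm), the mean accelerometer reading while the vehicle moves straight at roughly constant speed approximates the upward ground-normal direction in the inertial test unit frame; together with the driving direction expressed in that frame (a hypothetical input here), this fixes the pose relative to the first coordinate system:

```python
import numpy as np

def imu_pose_relative_to_s1(accel_samples, travel_dir_imu):
    """Illustrative sketch: estimate R_I^{s1} from accelerometer samples taken
    while the vehicle drives straight at (approximately) constant speed, so the
    mean specific force points along the ground normal in IMU coordinates.
    travel_dir_imu is the driving direction expressed in the IMU frame."""
    x = np.mean(accel_samples, axis=0)                  # ground normal in IMU coordinates
    x = x / np.linalg.norm(x)
    y = travel_dir_imu - np.dot(travel_dir_imu, x) * x  # driving direction, made orthogonal
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                                  # third axis by the right-hand rule
    # rows are the s1 axes expressed in IMU coordinates, so the matrix maps
    # IMU-frame vectors into the first coordinate system
    return np.vstack((x, y, z))
```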

In 530, the processing device 122 (e.g., the camera pose determination module 430) may determine a camera pose of the camera relative to the second coordinate system.

In some embodiments, the camera pose relative to the second coordinate system may reflect an orientation, position, pose, or rotation of the camera relative to the second coordinate system. In some embodiments, the camera pose may be represented as Euler angles, a rotation matrix, an orientation quaternion, or the like, or any combination thereof. For example, the camera pose may be represented as the rotation matrix R_C^{s2} shown in FIG. 6, where R represents a rotation matrix, the subscript C represents the camera, and the superscript s2 represents the second coordinate system.

In some embodiments, the second coordinate system may be a defined three-dimensional coordinate system associated with the camera. For example, when the autonomous vehicle is traveling straight, the camera may capture video or images within the camera's field of view. The processing device 122 may establish the second coordinate system based on the video or images taken from the camera. For example, the processing device 122 may take at least two pictures from a video or image and process the at least two pictures according to a three-dimensional reconstruction technique. The processing device 122 may obtain a second ground normal vector in the three-dimensional scene. The processing device 122 may determine the second coordinate system using the second ground normal vector and the second direction of travel of the camera as two axes of the second coordinate system according to the right hand rule.
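The text above does not prescribe how the second ground normal vector is extracted from the reconstructed scene. One common choice, shown below as an illustrative sketch, is to fit a plane to reconstructed 3-D points that are assumed to lie on the road surface and take the plane normal:

```python
import numpy as np

def fit_ground_normal(ground_points):
    """Least-squares plane fit: return the unit normal of the plane that best
    fits a set of 3-D points assumed to lie on the ground (sketch only)."""
    pts = np.asarray(ground_points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # the right singular vector associated with the smallest singular value is
    # the direction of least variance, i.e. the plane normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1] / np.linalg.norm(vt[-1])
```

In practice the ground points could be selected by segmenting the road region in the images or by a RANSAC-style plane search; either way, only the resulting normal is needed for the second coordinate system.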

In some embodiments, the processing device 122 may determine the camera pose based on three-dimensional reconstruction techniques. For example, the processing device 122 may input at least two pictures and/or internal parameters of the camera into the three-dimensional reconstruction technique. The three-dimensional reconstruction technique may output a camera pose relative to the second coordinate system and three-dimensional structural data of the scene captured by the camera. Processes or methods for determining the second coordinate system and/or camera pose may be found elsewhere in this application (e.g., fig. 8-9 and their descriptions).
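As a hedged illustration of what the three-dimensional reconstruction step provides, the minimal two-view sketch below estimates the camera rotation and (unit) translation between two frames with OpenCV; a full SFM pipeline would use many frames and bundle adjustment, and the function and variable names here are only one possible implementation, not the patent's:

```python
import cv2
import numpy as np

def two_view_camera_motion(img1, img2, K):
    """Estimate the relative rotation R and unit translation t of the camera
    between two frames, given the camera intrinsic matrix K (sketch only)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```

The recovered translation direction corresponds, up to scale and sign, to the second direction of travel of the camera mentioned above.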

At 540, processing device 122 (e.g., relative coordinate pose determination module 440) may determine a relative coordinate pose between the first coordinate system and the second coordinate system.

In some embodiments, the relative coordinate pose between the first coordinate system and the second coordinate system may reflect the orientation, position, pose, or rotation of the first coordinate system relative to the second coordinate system. In some embodiments, the relative coordinate pose may be represented as Euler angles, a rotation matrix, an orientation quaternion, or the like, or any combination thereof. For example, the relative coordinate pose may be represented as the rotation matrix R_{s1}^{s2} shown in FIG. 6, where R represents a rotation matrix, the subscript s1 represents the first coordinate system, and the superscript s2 represents the second coordinate system.

The first coordinate system and the second coordinate system are both defined coordinate systems and are essentially two different representations of the same coordinate system. In some embodiments, processing device 122 may determine the relative coordinate pose by rotating and aligning the axes of the first coordinate system and the second coordinate system. For example, the processing device 122 may align the first ground normal vector of the first coordinate system with the second ground normal vector of the second coordinate system, and then align the first direction of travel of the inertial test unit with the second direction of travel of the camera by rotating about the aligned ground normal vector, to determine the relative coordinate pose. In some embodiments, processing device 122 may determine the relative coordinate pose based on a common reference coordinate system. For example, processing device 122 may determine a first relative pose R_{s1}^{w} of the first coordinate system with respect to a world coordinate system and a second relative pose R_{s2}^{w} of the second coordinate system with respect to the world coordinate system. The processing device 122 may then determine the relative coordinate pose R_{s1}^{s2} of the first coordinate system relative to the second coordinate system by multiplying the transpose of the second relative pose by the first relative pose, i.e., R_{s1}^{s2} = (R_{s2}^{w})^T · R_{s1}^{w}. A process or method for determining the relative coordinate pose can be found elsewhere in the application (e.g., FIG. 10 and its description).
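The common-reference formulation at the end of the previous paragraph can be written as a one-line helper. This is an illustrative sketch under the assumption that the axes of both coordinate systems are available in the same world frame; the function name is hypothetical:

```python
import numpy as np

def relative_coordinate_pose(R_s1_to_w, R_s2_to_w):
    """Return the relative coordinate pose R_{s1}^{s2} (first coordinate system
    relative to the second), given each system's rotation with respect to the
    same world frame (columns = frame axes in world coordinates)."""
    # a vector with s1 coordinates v has world coordinates R_s1_to_w @ v and
    # therefore s2 coordinates R_s2_to_w.T @ R_s1_to_w @ v
    return R_s2_to_w.T @ R_s1_to_w
```

Because both coordinate systems are built from the same physical ground normal and direction of travel, this matrix is the identity in the ideal case; in practice it absorbs whatever residual misalignment remains after the two alignment steps.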

In 550, the processing device 122 (e.g., the relative pose determination module 450) may determine a relative pose between the camera and the inertial test unit based on the inertial test unit pose, the camera pose, and the relative coordinate pose.

In some embodiments, the relative pose between the camera and the inertial test unit may reflect the orientation, position, pose, or rotation of the camera relative to the inertial test unit. In some embodiments, the relative pose may be represented as Euler angles, a rotation matrix, an orientation quaternion, or the like, or any combination thereof. For example, the relative pose may be expressed as Euler angles α, β, and γ, which may represent rotation angles around the X-axis, Y-axis, and Z-axis, respectively. As another example, the relative pose may be represented as the rotation matrix R_C^I shown in FIG. 6, where R may represent a rotation matrix, C may represent the camera, and I may represent the inertial test unit. The rotation matrix R_C^I may be expressed as the product of three elementary rotation matrices about the three axes, for example R_C^I = R_Z(γ) · R_Y(β) · R_X(α), where R_X(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]], R_Y(β) = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]], and R_Z(γ) = [[cos γ, −sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]].

In some embodiments, the processing device 122 may determine the relative pose $R_C^I$ of the camera with respect to the inertial test unit based on the inertial test unit pose $R_I^{s_1}$, the camera pose $R_C^{s_2}$, and the relative coordinate pose $R_{s_1}^{s_2}$. For example, the processing device 122 may determine the relative pose according to equation (1) below:

$R_C^I = \left(R_I^{s_1}\right)^{T} \left(R_{s_1}^{s_2}\right)^{T} R_C^{s_2}$ (1)

where $\left(R_{s_1}^{s_2}\right)^{T}$ is the transposed matrix of the relative coordinate pose, and $\left(R_I^{s_1}\right)^{T}$ is the transposed matrix of the inertial test unit pose.
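
A minimal sketch of equation (1) under the notation used above (the argument names are illustrative, not from the source):

```python
import numpy as np

def camera_to_imu_pose(R_I_s1: np.ndarray, R_C_s2: np.ndarray, R_s1_s2: np.ndarray) -> np.ndarray:
    """Equation (1): R_C_I = (R_I_s1)^T @ (R_s1_s2)^T @ R_C_s2.

    R_I_s1 : inertial test unit pose relative to the first coordinate system.
    R_C_s2 : camera pose relative to the second coordinate system.
    R_s1_s2: relative coordinate pose between the two coordinate systems.
    """
    return R_I_s1.T @ R_s1_s2.T @ R_C_s2
```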

In some embodiments, the relative pose between the camera and the inertial test unit may be used to navigate the autonomous vehicle. For example, as the autonomous vehicle travels, the processing device 122 may calculate, in the camera coordinate system, the position of a three-dimensional target acquired by a lidar of the autonomous vehicle. To do so, the processing device 122 may first transform the three-dimensional target acquired by the lidar into the inertial test unit coordinate system, and then transform the three-dimensional target into the camera coordinate system using the relative pose between the camera and the inertial test unit.
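
A rotation-only sketch of that two-step transformation is shown below; the lidar-to-inertial-test-unit rotation R_L_I is a hypothetical input, and the translations between the sensors are ignored for brevity:

```python
import numpy as np

def lidar_point_to_camera(p_lidar: np.ndarray, R_L_I: np.ndarray, R_C_I: np.ndarray) -> np.ndarray:
    """Rotate a 3-D lidar point into the camera frame via the inertial test unit frame."""
    p_imu = R_L_I @ p_lidar    # lidar frame -> inertial test unit frame (assumed extrinsic)
    p_cam = R_C_I.T @ p_imu    # inertial test unit frame -> camera frame (calibrated relative pose)
    return p_cam
```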

It should be noted that the foregoing description is provided for illustrative purposes only, and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. However, such changes and modifications do not depart from the scope of the present application. For example, one or more other optional operations (e.g., a store operation) may be added elsewhere in process 500. In a storage operation, the processing device 122 may store information and/or data (e.g., relative pose between the camera and the inertial test unit) in a storage device (e.g., storage device 140) disclosed elsewhere in this application.

FIG. 7 is a flow chart illustrating an exemplary process 700 for determining the inertial test unit pose of the inertial test unit with respect to the first coordinate system, according to some embodiments of the present application. In some embodiments, process 700 may be implemented by a set of instructions (e.g., an application program) stored in read only memory 230 or random access memory 240. Processor 220 and/or the modules in fig. 4 may execute the set of instructions, and when executing the instructions, processor 220 and/or the modules may be configured to perform process 700. The operations of the process shown below are for illustration purposes only. In some embodiments, process 700 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed herein. Further, the order of the operations of the process shown in fig. 7 and described below is not intended to be limiting.

In 710, the processing device 122 (e.g., the inertial test unit attitude determination module 420, interface circuitry of the processor 220) may obtain inertial test unit data from the inertial test unit.

In some embodiments, the inertial test unit may include at least two inertial sensors, such as one or more accelerometers, one or more gyroscopes, one or more magnetometers, and the like, or any combination thereof. The inertial test unit may output inertial test unit data using the at least two inertial sensors. For example, the inertial test unit data may include acceleration, rotation rate, the magnetic field around the autonomous vehicle, and the like, or any combination thereof. The processing device 122 may obtain the inertial test unit data from the inertial test unit while the autonomous vehicle is traveling.

In 720, the processing device 122 (e.g., the inertial test unit pose determination module 420) may determine a first coordinate system based on the trajectory of the autonomous vehicle.

In some embodiments, the processing device 122 may obtain the trajectory of the autonomous vehicle when the autonomous vehicle is traveling straight. The processing device 122 may determine a first ground normal vector and a first direction of travel of the autonomous vehicle from the trajectory. As shown in fig. 6, the processing device 122 may use the first ground normal vector and the first direction of travel as two axes of the first coordinate system S1 (e.g., the first ground normal vector as X1 and the first direction of travel as Y1), and determine a third axis of the first coordinate system S1 (e.g., Z1) according to the right-hand rule.
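
A small sketch of that frame construction, assuming the ground normal and direction of travel are available as 3-vectors; the projection step simply enforces orthogonality before applying the right-hand rule:

```python
import numpy as np

def build_frame(ground_normal: np.ndarray, travel_direction: np.ndarray) -> np.ndarray:
    """Right-handed frame with X along the ground normal and Y along the direction of travel."""
    x = ground_normal / np.linalg.norm(ground_normal)
    y = travel_direction - np.dot(travel_direction, x) * x   # remove any component along X
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                                       # third axis by the right-hand rule
    return np.column_stack([x, y, z])                        # columns are the frame's axes
```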

In 730, the processing device 122 (e.g., the inertial test unit pose determination module 420) may determine an inertial test unit pose based on the inertial test unit data and the first coordinate system.

In some embodiments, the processing device 122 may calculate the inertial test unit pose of the inertial test unit relative to the first coordinate system based on the acceleration, the rotation rate, and/or the magnetic field around the autonomous vehicle. For example, the processing device 122 may fuse the acceleration, rotation rate, and/or magnetic field according to a fusion algorithm to determine the inertial test unit pose. Exemplary fusion algorithms may include a complementary filtering method, a conjugate gradient filtering method, an extended Kalman filtering method, an unscented Kalman filtering method, or the like, or any combination thereof.
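
As one concrete illustration of such fusion, a single complementary-filter update for one tilt angle is sketched below; the gain k and the axis convention in accel_roll are assumptions, and a production system would more likely use one of the Kalman-type filters listed above:

```python
import numpy as np

def complementary_update(prev_angle: float, gyro_rate: float, accel_angle: float,
                         dt: float, k: float = 0.98) -> float:
    """Blend an integrated gyroscope angle with an accelerometer-derived angle."""
    return k * (prev_angle + gyro_rate * dt) + (1.0 - k) * accel_angle

def accel_roll(ay: float, az: float) -> float:
    """Roll angle implied by gravity, assuming a conventional body-axis layout."""
    return np.arctan2(ay, az)
```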

It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. However, such changes and modifications do not depart from the scope of the present application. For example, one or more other optional operations (e.g., storage operations) may be added elsewhere in process 700. In a storage operation, processing device 122 may store information and/or data (e.g., inertial test unit data) related to the inertial test unit in a storage device (e.g., storage device 140) disclosed elsewhere in this application.

Fig. 8 is a flow diagram of an exemplary process 800 for determining a camera pose of a camera relative to a second coordinate system, shown in accordance with some embodiments of the present application. In some embodiments, process 800 may be implemented by a set of instructions (e.g., an application program) stored in read only memory 230 or random access memory 240. Processor 220 and/or the modules in fig. 4 may execute the set of instructions, and when executing the instructions, processor 220 and/or the modules may be configured to perform process 800. The operations of the process shown below are for illustration purposes only. In some embodiments, process 800 may be accomplished with one or more additional operations not described, and/or without one or more operations discussed herein. Additionally, the order of the operations of the process illustrated in FIG. 8 and described below is not intended to be limiting.

In 810, the processing device 122 (e.g., the camera pose determination module 430, interface circuitry of the processor 220) may acquire camera data from the camera.

In some embodiments, the camera may acquire camera data (e.g., video or images) within range of the autonomous vehicle when the autonomous vehicle is traveling straight. The processing device 122 may acquire camera data from the camera.

In 820, the processing device 122 (e.g., the camera pose determination module 430) may determine a second coordinate system based on the camera data.

In some embodiments, the processing device 122 may input the camera data into a three-dimensional reconstruction technique to acquire a three-dimensional scene. Exemplary three-dimensional reconstruction techniques may include a shape-from-texture (SFT) method, a shape-from-shading method, a multi-view stereo (MVS) method, a structure-from-motion (SFM) method, a time-of-flight (ToF) method, a structured light method, a Moiré fringe method, or the like, or any combination thereof. From the three-dimensional scene, the processing device 122 may acquire a second ground normal vector and a second direction of travel of the camera. As shown in fig. 6, the processing device 122 may take the second ground normal vector and the second direction of travel as two axes of the second coordinate system S2 (e.g., the second ground normal vector as X2 and the second direction of travel as Y2), and determine a third axis of the second coordinate system S2 (e.g., Z2) according to the right-hand rule. A process or method for determining the second coordinate system may be found elsewhere in the present application (e.g., FIG. 9 and its description).

In 830, the processing device 122 (e.g., the camera pose determination module 430) may determine a camera pose based on the camera data and the second coordinate system.

In some embodiments, the processing device 122 may determine the camera pose using a three-dimensional reconstruction technique. For example, the processing device 122 may input the camera data and/or internal parameters of the camera into a structure-from-motion method. The structure-from-motion method may automatically recover the motion of the camera and the three-dimensional structure of the photographed scene from the video or images captured by the camera. For example, in the structure-from-motion method, a set of two-dimensional feature points may be tracked across the video or images to obtain feature point trajectories over time. Using these trajectories, the positions of the camera and/or the three-dimensional positions of the feature points may then be derived. Using the camera positions and/or the three-dimensional positions of the feature points, a rotation matrix between the camera and the second coordinate system may be determined. The processing device 122 may determine the camera pose of the camera relative to the second coordinate system based on the rotation matrix.
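
A minimal two-view sketch of that step using OpenCV is shown below; the intrinsic matrix K and the two frames img1 and img2 are assumed inputs, and a full structure-from-motion pipeline would track features over many frames rather than a single pair:

```python
import cv2
import numpy as np

def relative_camera_motion(img1, img2, K):
    """Track feature points between two frames and recover the camera rotation."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    # Track a set of two-dimensional feature points between the frames.
    pts1 = cv2.goodFeaturesToTrack(gray1, 500, 0.01, 7)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, pts1, None)
    good1 = pts1[status.ravel() == 1]
    good2 = pts2[status.ravel() == 1]

    # Recover the rotation (and translation direction) from the essential matrix.
    E, mask = cv2.findEssentialMat(good1, good2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good1, good2, K, mask=mask)
    return R, t
```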

It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. However, such changes and modifications do not depart from the scope of the present application. For example, one or more other optional operations (e.g., a store operation) may be added elsewhere in process 800. In a storage operation, the processing device 122 may store information and/or data (e.g., camera data) related to cameras in storage devices disclosed elsewhere in this application (e.g., storage device 140).

Fig. 9 is a flow diagram of an exemplary process 900 for determining a second coordinate system, shown in accordance with some embodiments of the present application. In some embodiments, process 900 may be implemented by a set of instructions (e.g., an application program) stored in read only memory 230 or random access memory 240. Processor 220 and/or the modules in fig. 4 may execute the set of instructions, and when executing the instructions, processor 220 and/or the modules may be configured to perform process 900. The operations of the process shown below are for illustration purposes only. In some embodiments, process 900 may be accomplished with one or more additional operations not described, and/or without one or more operations discussed herein. Additionally, the order of the operations of the process illustrated in FIG. 9 and described below is not intended to be limiting.

In 910, the processing device 122 (e.g., the camera pose determination module 430) may determine a second ground normal vector based on the camera data and the three-dimensional reconstruction technique.

In some embodiments, the processing device 122 may input the camera data into a three-dimensional reconstruction technique (e.g., a structure-from-motion method) to acquire a three-dimensional scene. The processing device 122 may obtain a ground normal vector in the three-dimensional scene as the second ground normal vector.
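
One way such a normal could be estimated is a least-squares plane fit over reconstructed points that have already been segmented as ground (the segmentation itself is assumed here); the normal is the singular vector with the smallest singular value:

```python
import numpy as np

def ground_normal_from_points(ground_points: np.ndarray) -> np.ndarray:
    """Estimate a ground-plane normal from an N x 3 array of reconstructed ground points."""
    centered = ground_points - ground_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                      # direction of least variance = plane normal
    return normal / np.linalg.norm(normal)
```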

In 920, the processing device 122 (e.g., the camera pose determination module 430) may determine a second direction of travel of the camera based on the camera data.

The processing device 122 may acquire the direction of travel of the camera in the three-dimensional scene as the second direction of travel.

In 930, the processing device 122 (e.g., the camera pose determination module 430) may determine a second coordinate system based on the second ground normal vector and the second direction of travel of the camera.

In some embodiments, as shown in FIG. 6, the processing device 122 may use the second ground normal vector and the second direction of travel as two axes of the second coordinate system S2 (e.g., the second ground normal vector as X2 and the second direction of travel as Y2), and determine the third axis of the second coordinate system S2 (e.g., Z2) according to the right-hand rule. The processing device 122 may then take the second ground normal vector, the second direction of travel, and the determined third axis as X2, Y2, and Z2, respectively, to determine the second coordinate system.

It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. However, such changes and modifications do not depart from the scope of the present application. For example, one or more other optional operations (e.g., store operations) may be added elsewhere in process 900. In a storage operation, the processing device 122 may store information and/or data (e.g., camera data) related to cameras in storage devices disclosed elsewhere in this application (e.g., storage device 140).

FIG. 10 is a flow diagram illustrating an exemplary process 1000 for determining a relative coordinate pose between a first coordinate system and a second coordinate system, according to some embodiments of the present application. In some embodiments, process 1000 may be implemented by a set of instructions (e.g., an application program) stored in read only memory 230 or random access memory 240. Processor 220 and/or the modules in fig. 4 may execute the set of instructions, and when executing the instructions, processor 220 and/or the modules may be configured to perform process 1000. The operations of the process shown below are for illustration purposes only. In some embodiments, process 1000 may be accomplished with one or more additional operations not described, and/or without one or more operations discussed herein. Additionally, the order of the operations of the process illustrated in FIG. 10 and described below is not intended to be limiting.

In 1010, processing device 122 (e.g., relative coordinate pose determination module 440) may align a first ground normal vector of a first coordinate system with a second ground normal vector of a second coordinate system.

In some embodiments, the processing device 122 may translate and/or rotate the first coordinate system toward the second coordinate system. The processing device 122 may align the first ground normal vector with the second ground normal vector after the translation and/or rotation. In some embodiments, the processing device 122 may record the translation and/or rotation in the form of Euler angles, rotation matrices, orientation quaternions, and the like, or any combination thereof.

In 1020, the processing device 122 (e.g., the relative coordinate pose determination module 440) may align the first direction of travel of the inertial test unit with the second direction of travel of the camera.

In some embodiments, after aligning the first ground normal vector with the second ground normal vector, the processing device 122 may further align the first direction of travel of the inertial test unit with the second direction of travel of the camera by rotating about the aligned ground normal vector. In some embodiments, the processing device 122 may record the rotation in the form of an Euler angle, a rotation matrix, an orientation quaternion, or the like, or any combination thereof.

In 1030, the processing device 122 (e.g., the relative coordinate pose determination module 440) may determine the relative coordinate pose between the first coordinate system and the second coordinate system, as shown in fig. 6.

In some embodiments, the processing device 122 may determine the relative coordinate pose based on the translation and/or rotation. In some embodiments, the relative coordinate pose may be represented as an Euler angle, a rotation matrix, an orientation quaternion, or the like, or any combination thereof.
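
A sketch of the two-step alignment described in operations 1010 through 1030, assuming the ground normals and directions of travel are given as 3-vectors; the returned matrix rotates first-coordinate-system directions onto second-coordinate-system directions, which is only one possible convention for expressing the relative coordinate pose:

```python
import numpy as np

def rotation_aligning(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Smallest rotation taking unit vector a onto unit vector b (Rodrigues form)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):                            # antiparallel: 180-degree rotation
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def relative_coordinate_pose(n1, d1, n2, d2):
    """Align the ground normals first, then the travel directions about the shared normal."""
    R_a = rotation_aligning(n1, n2)                    # operation 1010
    n2u = n2 / np.linalg.norm(n2)
    d1a = R_a @ d1
    # Project both travel directions onto the plane perpendicular to the aligned normal.
    p1 = d1a - np.dot(d1a, n2u) * n2u
    p2 = d2 - np.dot(d2, n2u) * n2u
    p1, p2 = p1 / np.linalg.norm(p1), p2 / np.linalg.norm(p2)
    angle = np.arctan2(np.dot(np.cross(p1, p2), n2u), np.dot(p1, p2))
    K = np.array([[0, -n2u[2], n2u[1]], [n2u[2], 0, -n2u[0]], [-n2u[1], n2u[0], 0]])
    R_b = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)   # operation 1020
    return R_b @ R_a                                   # operation 1030: composed relative pose
```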

It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. However, such changes and modifications do not depart from the scope of the present application. For example, one or more other optional operations (e.g., storage operations) may be added elsewhere in process 1000. For another example, processing device 122 may determine a relative coordinate pose between the first coordinate system and the second coordinate system based on the same reference coordinate system (e.g., world coordinate system).

Having thus described the basic concepts, it will be apparent to those of ordinary skill in the art having read this application that the foregoing disclosure is to be construed as illustrative only and is not limiting of the application. Various modifications, improvements and adaptations of the present application may occur to those skilled in the art, although they are not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.

Also, this application uses specific language to describe embodiments of the application. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as appropriate.

Moreover, those of ordinary skill in the art will understand that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, articles, or materials, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "unit", "module", or "system". Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media, with computer-readable program code embodied therein.

A computer readable signal medium may comprise a propagated data signal with computer program code embodied therewith, for example, on baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency, etc., or any combination of the preceding.

Computer program code required for operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as a software as a service (SaaS).

Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.

Similarly, it should be noted that in the preceding description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may be characterized as having less than all of the features of a single embodiment disclosed above.
