Indoor navigation positioning device based on multi-view vision and positioning method thereof

文档序号:849042 发布日期:2021-03-16 浏览:8次 中文

阅读说明:本技术 基于多目视觉的室内导航定位装置及其定位方法 (Indoor navigation positioning device based on multi-view vision and positioning method thereof ) 是由 王纪武 刘伟 戴波 杨历 原雪纯 褚文杰 裴欣 韩晓 许钧翔 严晨 韩硕 于 2020-11-24 设计创作,主要内容包括:本发明提供了一种基于多目视觉的室内导航定位装置及其定位方法,所述定位装置包括机器人、L形线光源、多个单目相机以及控制系统。L形线光源设置在机器人上。所述多个单目相机位于机器人上方,相邻两个单目相机的视场范围有交叉,且所述多个单目相机的总视场范围不小于机器人的行走范围。控制系统包括视觉处理系统和信号传输系统,视觉处理系统通信连接于所述多个单目相机,信号传输系统包括上位机和下位机,上位机通信连接于视觉处理系统,下位机设置在机器人上并通信连接于上位机和机器人。在本申请中,由于该定位装置和该定位方法的定位过程不受外界环境干扰、也不受定位区域限制,从而提高了定位精度、降低了定位成本。(The invention provides an indoor navigation positioning device based on multi-view vision and a positioning method thereof. The L-shaped linear light source is arranged on the robot. The plurality of monocular cameras are located above the robot, the view field ranges of two adjacent monocular cameras are crossed, and the total view field range of the plurality of monocular cameras is not smaller than the walking range of the robot. The control system comprises a vision processing system and a signal transmission system, the vision processing system is in communication connection with the monocular cameras, the signal transmission system comprises an upper computer and a lower computer, the upper computer is in communication connection with the vision processing system, and the lower computer is arranged on the robot and is in communication connection with the upper computer and the robot. In the application, the positioning process of the positioning device and the positioning method is not interfered by the external environment and is not limited by the positioning area, so that the positioning precision is improved, and the positioning cost is reduced.)

1. An indoor navigation positioning device based on multi-view vision is characterized by comprising a robot (1), an L-shaped line light source (2), a plurality of monocular cameras (3) and a control system (4);

the L-shaped linear light source (2) is arranged on the robot (1);

the monocular cameras (3) are positioned above the robot (1), the view field ranges of two adjacent monocular cameras (3) are crossed, and the total view field range of the monocular cameras (3) is not smaller than the walking range (S) of the robot (1);

control system (4) include vision processing system (41) and signal transmission system (42), vision processing system (41) communication connect in a plurality of monocular cameras (3), signal transmission system (42) include host computer (421) and next machine (422), host computer (421) communication connection in vision processing system (41), next machine (422) set up on robot (1) and communication connection in host computer (421) and robot (1).

2. The indoor visual sense-based navigation and positioning device according to claim 1, further comprising a mounting bracket (5), wherein the mounting bracket (5) fixedly mounts the plurality of monocular cameras (3).

3. A positioning method of an indoor navigation positioning device based on multi-vision, characterized in that the positioning method is realized by the indoor navigation positioning device based on multi-vision of claim 1, and the positioning method comprises the following steps:

s1, numbering the monocular cameras (3) and establishing a camera coordinate system O of each monocular camera (3)2a-XaYaZaA pixel coordinate system O corresponding to each monocular camera (3)1a-UaVaAnd a world coordinate system O-XYZ in the indoor scene, wherein a is a camera number;

s2, acquiring initial images of indoor scenes by using the monocular cameras (3), acquiring initial image data of all the monocular cameras (3) through the vision processing system (41), and splicing all the initial image data through the upper computer (421) to obtain a two-dimensional panoramic map;

s3, manually planning a target motion track of the robot (1) on the two-dimensional panoramic map through an upper computer (421), wherein the target motion track is formed by a series of planning points on the two-dimensional panoramic map;

s4, calculating coordinates of a series of planning points on the two-dimensional panoramic map in a world coordinate system;

s5, placing the robot (1) in the indoor scene, selecting the robot (1) as a tracking target through an upper computer (421), in the moving process of the robot (1), carrying out target tracking on the robot (1) through the upper computer (421), obtaining the real-time position of the robot (1) under a pixel coordinate system, and then calculating the real-time position of the robot (1) under a world coordinate system;

s6, calculating the real-time posture of the robot (1) at the real-time position in the step S5 based on the position of the L-shaped line light source (2) on the robot (1) in the real-time image acquired by the monocular camera (3), wherein the real-time position and the real-time posture of the robot (1) in a world coordinate system are the real-time posture of the robot (1);

s7, the upper computer (421) compares the real-time pose of the mobile robot (1) with the target motion track and outputs a walking control signal to the lower computer (422), the lower computer (422) transmits the received walking control signal to the robot (1), and the robot (1) completes a walking instruction based on the walking control signal and finally reaches a planned destination.

4. The positioning method of the indoor navigation and positioning device based on multi-view vision as claimed in claim 3, wherein in step S4, the calculation process of any planned point on the two-dimensional panoramic map includes the steps of:

s41, reading the coordinate (u) of the planning point in the pixel coordinate system1,v1);

S42, selecting two adjacent monocular cameras (3) from all the monocular cameras (3) acquiring the planning point, projecting the original points of the camera coordinate systems of the two adjacent monocular cameras (3) into a world coordinate system, and obtaining the coordinate P of the projection point of the original points of the camera coordinate systems of the two adjacent monocular cameras (3)1(x1,y1) And P2(x2,y2);

S43, calculating coordinates (x, y) of the planning point in the world coordinate system, wherein the calculation formula is:

wherein f isaxIs a monocular camera (3) along UaNormalized focal length of axis, fayIs a monocular camera (3) along VaNormalized focal length of axis, caxIs a U of the optical center of the monocular camera (3)aAxis coordinate, cayV being the optical center of the monocular camera (3)aAxial coordinate, ZcThe vertical distance between the monocular camera (3) and the plane where the robot (1) is located.

5. The positioning method of the indoor navigation and positioning device based on multi-view vision as claimed in claim 3, wherein the step S5 is implemented by calculating the real-time position of the robot (1) in the world coordinate system at any time, and the step S5 comprises the steps of:

s51, reading the current coordinate (u) of the robot (1) in the pixel coordinate system1',v1');

S52, two adjacent monocular cameras (3) are selected from all the monocular cameras (3) collected to the robot (1), the original points of the camera coordinate systems of the two adjacent monocular cameras (3) are projected to a world coordinate system, and the coordinates P of the projection points of the original points of the camera coordinate systems of the two adjacent monocular cameras (3) are obtained1'(x1',y1') and P2'(x2',y2');

S53, calculating the coordinates (x ', y') of the robot (1) under the world coordinate system, wherein the calculation formula is as follows:

wherein f isaxIs a monocular camera (3) along UaNormalized focal length of axis, fayIs a monocular camera (3) along VaNormalized focal length of axis, caxIs a U of the optical center of the monocular camera (3)aAxis coordinate, cayV being the optical center of the monocular camera (3)aAxial coordinate, ZcThe vertical distance between the monocular camera (3) and the plane where the robot (1) is located.

6. The positioning method of the indoor navigation and positioning device based on multi-view vision as claimed in claim 3, wherein in step S6, the calculation process of the real-time pose of the robot (1) at the real-time position in step S5 comprises the steps of:

s61, selecting a segment AB on the L-shaped linear light source (2) as a target segment, and respectively reading the coordinates of an endpoint A, B of the segment AB under the current pixel coordinate system;

s62, byDetermining the direction of rotation of the robot (1), whereinIs the direction vector of the line segment AB currently under the pixel coordinate system,the direction vector of the line segment AB at the previous moment in the pixel coordinate system is shown;

s63, byThe rotation angle theta of the robot (1) is determined.

7. The positioning method of the indoor navigation positioning device based on the multi-vision as claimed in claim 6, wherein the L-shaped linear light source (2) comprises a long line segment (21) and a short line segment (22), and the line segment AB is the long line segment (21) or the short line segment (22) on the L-shaped linear light source (2).

Technical Field

The invention relates to the technical field of robot navigation and positioning, in particular to an indoor navigation and positioning device based on multi-view vision and a positioning method thereof.

Background

Positioning is to determine the position of a target object, and positioning technologies can be divided into outdoor positioning and indoor positioning according to different environments. Nowadays, outdoor positioning technologies such as GPS in the united states, GLONASS in russia, GALILEO in the european union, and beidou satellite navigation system in china are mature enough to satisfy positioning in most outdoor environments. However, under indoor conditions, the number of obstacles is large, the environment is complex and even multidimensional, so that once the outdoor positioning technologies are applied to indoor scene conditions, the positioning accuracy is greatly reduced due to satellite signal attenuation, and the technologies cannot be directly applied to indoor.

At present, how to obtain position information in a complex indoor scene becomes a research hotspot nowadays, and a batch of solutions of special equipment represented by infrared positioning, ultrasonic positioning, WIFI signal positioning, ultra-wideband positioning and radio frequency identification positioning and solutions based on geomagnetic positioning emerge. However, the solutions based on the dedicated device and the geomagnetic positioning have problems of being easily interfered, limited positioning area, high laying cost, and the like.

Disclosure of Invention

In view of the problems in the background art, an object of the present invention is to provide an indoor navigation positioning apparatus and a positioning method thereof based on multi-view vision, wherein the positioning process is not interfered by the external environment and is not limited by the positioning area, thereby improving the positioning accuracy and reducing the positioning cost.

In order to achieve the above object, the present invention provides an indoor navigation and positioning device based on multi-view vision, which includes: the robot comprises a robot, an L-shaped line light source, a plurality of monocular cameras and a control system. The L-shaped linear light source is arranged on the robot. The plurality of monocular cameras are located above the robot, the view field ranges of two adjacent monocular cameras are crossed, and the total view field range of the plurality of monocular cameras is not smaller than the walking range of the robot. The control system comprises a vision processing system and a signal transmission system, the vision processing system is in communication connection with the monocular cameras, the signal transmission system comprises an upper computer and a lower computer, the upper computer is in communication connection with the vision processing system, and the lower computer is arranged on the robot and is in communication connection with the upper computer and the robot.

In the binocular vision based indoor navigation and positioning apparatus according to some embodiments, the binocular vision based indoor navigation and positioning apparatus further comprises a mounting bracket which fixedly mounts the plurality of monocular cameras.

The invention also provides a positioning method of the indoor navigation positioning device based on the multi-view vision, which is realized by the indoor navigation positioning device based on the multi-view vision. Wherein the positioning method comprises steps S1-S7.

S1, numbering the monocular cameras, and establishing a camera coordinate system O of each monocular camera2a-XaYaZaA pixel coordinate system O corresponding to each monocular camera1a-UaVaAnd a world coordinate system O-XYZ in the indoor scene, where a is the camera number. And S2, acquiring initial images of indoor scenes by using the monocular cameras, acquiring initial image data of all the monocular cameras through the vision processing system, and splicing all the initial image data through the upper computer to obtain a two-dimensional panoramic map. And S3, manually planning a target motion track of the robot on the two-dimensional panoramic map through the upper computer, wherein the target motion track is formed by a series of planning points on the two-dimensional panoramic map. And S4, calculating the coordinates of a series of planning points on the two-dimensional panoramic map in a world coordinate system. S5, placing the robot in the indoor scene, selecting the robot as a tracking target through the upper computer, carrying out target tracking on the robot and obtaining the real-time position of the robot under the pixel coordinate system by the upper computer in the moving process of the robot, and then calculating the real-time position of the robot under the world coordinate system. And S6, calculating the real-time posture of the robot at the real-time position in the step S5 based on the position of the L-shaped line light source on the robot in the real-time image acquired by the monocular camera, wherein the real-time position and the real-time posture of the robot in the world coordinate system are the real-time posture of the robot. And S7, the upper computer compares the real-time pose of the mobile robot with the target motion track and outputs a walking control signal to the lower computer, the lower computer transmits the received walking control signal to the robot, and the robot completes a walking instruction based on the walking control signal and finally reaches a planned destination.

In the positioning method of the indoor navigation positioning device based on the multi-view vision according to some embodiments, in step S4, the calculation process of the arbitrary planning point on the two-dimensional panoramic map includes the steps of: s41, reading the coordinate (u) of the planning point in the pixel coordinate system1,v1) (ii) a S42, two adjacent monocular cameras are selected from all the monocular cameras which acquire the planning point, the original points of the camera coordinate systems of the two adjacent monocular cameras are projected into a world coordinate system, and the coordinate P of the projection point of the original points of the camera coordinate systems of the two adjacent monocular cameras is obtained1(x1,y1) And P2(x2,y2) (ii) a S43, calculating coordinates (x, y) of the planning point in the world coordinate system, wherein the calculation formula is:

wherein f isaxIs a monocular camera edge UaNormalized focal length of axis, fayFor monocular camera edge VaNormalized focal length of axis, caxU being the optical center of monocular cameraaAxis coordinate, cayV being the optical center of monocular cameraaAxial coordinate, ZcThe vertical distance between the monocular camera and the plane where the robot is located.

In the positioning method of the indoor navigation positioning device based on multi-view vision according to some embodiments, in step S5, the calculation process of the real-time position of the robot in the world coordinate system at any time comprises the following steps: s51, reading out the current coordinate (u) of the robot in the pixel coordinate system1',v1') to a host; s52, two adjacent monocular cameras are selected from all the monocular cameras which acquire the robot, the original points of the camera coordinate systems of the two adjacent monocular cameras are projected into a world coordinate system, and the coordinate P of the projection point of the original points of the camera coordinate systems of the two adjacent monocular cameras is obtained1'(x1',y1') and P2'(x2',y2') to a host; s53, calculating the coordinates (x ', y') of the robot in the world coordinate system, and the calculation formula is:

wherein f isaxIs a monocular camera edge UaNormalized focal length of axis, fayFor monocular camera edge VaNormalized focal length of axis, caxU being the optical center of monocular cameraaAxis coordinate, cayV being the optical center of monocular cameraaAxial coordinate, ZcThe vertical distance between the monocular camera and the plane where the robot is located.

In the positioning method of the multi-vision based indoor navigation positioning device according to some embodiments, in step S6, the calculation process of the real-time pose of the robot at the real-time position in step S5 includes the steps of: s61, selecting a segment AB on the L-shaped line source as a target segment, and respectively reading the coordinates of an endpoint A, B of the segment AB under a pixel coordinate system; s62, byDetermining a rotation direction of the robot, whereinIs the direction vector of the line segment AB currently under the pixel coordinate system,the direction vector of the line segment AB at the previous moment in the pixel coordinate system is shown; s63, byThe rotation angle θ of the robot is determined.

In the positioning method of the indoor navigation positioning device based on the multi-vision according to some embodiments, the L-shaped linear light source comprises a long line segment and a short line segment, and the line segment AB is the long line segment or the short line segment on the L-shaped linear light source.

The invention has the following beneficial effects:

in the indoor navigation positioning device and the positioning method based on the multi-view vision, the positioning process of the positioning device and the positioning method is not interfered by the external environment and is not limited by the positioning area, so that the positioning precision is improved, and the positioning cost is reduced. In addition, the positioning range realized by the positioning device and the positioning method can realize elastic adjustment along with the flexible deployment of the monocular camera. In addition, the positioning device and the positioning method are suitable for occasions with high requirements on automation degree and high efficiency of machine operation, and can effectively avoid the influence of personnel intervention on production safety and operation efficiency.

Drawings

Fig. 1 is a schematic structural diagram of an indoor navigation and positioning device based on multi-vision of the present invention.

Fig. 2 is a view illustrating the range of the field of view of a plurality of monocular cameras according to the present invention.

FIG. 3 is a schematic diagram of the position of the L-shaped line light source at two different times in the present invention.

Fig. 4 is a relationship diagram of three types of cartesian coordinate systems in the present invention.

Fig. 5 is a schematic block diagram of a positioning method of the indoor navigation positioning device based on multi-vision of the invention.

Wherein the reference numerals are as follows:

1 robot 41 vision processing system

2L-shaped line source 42 signal transmission system

21 long line 421 upper computer

22 short line 422 lower machine

3 monocular camera 5 installing support

4 control system S running range

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions.

Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments. In addition, "a plurality" appearing in the present application means two or more (including two).

The indoor navigation positioning device based on multi-view vision and the positioning method thereof according to the present application are described in detail below with reference to the accompanying drawings.

Referring to fig. 1 to 4, the indoor navigation and positioning device based on multi-view vision of the present application includes a robot 1, an L-shaped line light source 2, a plurality of monocular cameras 3, and a control system 4.

The robot 1 (provided with a control program) is connected to the control system 4 in a communication manner, and the robot 1 finishes walking under the control action of the control system 4. In some embodiments, the robot 1 may be a mobile cart or a mobile manipulator.

An L-shaped linear light source 2 is provided on the robot 1. During the positioning process of the indoor navigation positioning device based on the multi-view vision, the L-shaped line light source 2 clearly presents as an L-shaped broken line segment in the image acquired by the monocular camera 3.

In some embodiments, referring to fig. 2, the L-shaped linear light source 2 includes a long line segment 21 and a short line segment 22. Wherein, long line segment 21 and short line segment 22 all can be sharp linear light source, and L shape linear light source 2 forms by two sharp linear light source concatenations this moment, and the contained angle between two sharp linear light sources can carry out reasonable setting based on actual conditions.

Referring to fig. 1 and 2, the plurality of monocular cameras 3 are located above the robot 1, and the view field ranges of two adjacent monocular cameras 3 are crossed, so that the robot 1 can be photographed by at least two monocular cameras 3 at any time and at any position. Wherein the number and relative position of the monocular cameras 3 may be selectively set for different ranges of indoor scenes. In the positioning process of the indoor navigation positioning device based on the multi-view vision, in order to ensure that the plurality of monocular cameras 3 can always shoot the walking of the robot 1, the total view field range of the plurality of monocular cameras 3 is not less than the walking range S of the robot 1. It should be noted that, for the convenience of positioning calculation and the reduction of positioning error, the internal parameters of the monocular cameras 3 are all consistent.

In some embodiments, referring to fig. 1, control system 4 includes a vision processing system 41 and a signal transmission system 42. Wherein the vision processing system 41 is communicatively connected to the plurality of monocular cameras 3. The signal transmission system 42 includes an upper computer 421 and a lower computer 422, the upper computer 421 is communicatively connected to the vision processing system 41, and the lower computer 422 is disposed on the robot 1 and is communicatively connected to the upper computer 421 and the robot 1.

In some embodiments, referring to fig. 1, the indoor navigation and positioning device based on multi-view vision further includes a mounting bracket 5, the mounting bracket 5 fixedly mounts the plurality of monocular cameras 3 and ensures that the plurality of monocular cameras 3 are mounted at the same height, i.e. all monocular cameras 3 are perpendicular to the plane of the robot 1 (i.e. Z is described below)c) Are all equal.

In the indoor navigation positioning device based on the multi-view vision, the plurality of monocular cameras 3 are used for acquiring the real-time position of the robot 1; the vision processing system 41 acquires the image data acquired by the plurality of monocular cameras 3 based on the communication connection with the plurality of monocular cameras 3; the upper computer 421 of the signal transmission system 42 splices the image data based on the communication connection with the vision processing system 41 to obtain a two-dimensional panoramic map of an indoor scene, and performs motion trajectory planning, target tracking, pose coordinate conversion and walking control signal output to the lower computer 422 on the robot 1; the lower computer 422 transmits the received walking control signal to the robot 1, and the robot 1 completes a walking instruction based on the walking control signal and finally reaches a planned destination, so that indoor navigation and positioning are realized. The indoor navigation positioning device based on the multi-view vision is simple in structure and convenient to operate, cannot be interfered by an external environment in the positioning process, and is not limited by a positioning area (namely, the application range is wide), so that the positioning precision is improved, and the positioning cost is reduced. And based on the flexible deployability of the number and the positions of the monocular cameras 3, the indoor navigation positioning device based on the monocular vision has high transportability, low transplantation cost and a positioning range capable of realizing flexible adjustment along with the flexible deployment of the cameras. In addition, this application positioner is applicable to degree of automation and requires the efficient occasion (especially is applicable to the occasion of robot long term reciprocating motion indoor), and can effectively avoid personnel to intervene the influence to production safety and operating efficiency.

The positioning method of the indoor navigation positioning device based on the multi-view vision is implemented by using the indoor navigation positioning device based on the multi-view vision, and referring to fig. 1 to 5, the positioning method of the indoor navigation positioning device based on the multi-view vision includes steps S1-S7.

S1, numbering the monocular cameras 3, and establishing a camera coordinate system O of each monocular camera 32a-XaYaZaA pixel coordinate system O corresponding to each monocular camera 31a-UaVaAnd a world coordinate system O-XYZ in the indoor scene, where a is the camera number. Wherein, referring to fig. 4, X of the camera coordinate system of each monocular camera 3aAxis, and U of pixel coordinate system corresponding to each monocular camera 3aThe axis and the Y axis of the world coordinate system are parallel to each other; y of the camera coordinate system of each monocular camera 3aAxis, and V of pixel coordinate system corresponding to each monocular camera 3aThe axis and the X axis of the world coordinate system are parallel to each other; of monocular cameras 3Z of camera coordinate systemaThe axis and the Z axis of the world coordinate system are parallel to each other.

S2, acquiring an initial image of an indoor scene by using the plurality of monocular cameras 3, acquiring initial image data of all the monocular cameras 3 through the vision processing system 41, and stitching all the initial image data through the upper computer 421 to obtain a two-dimensional panoramic map.

And S3, manually planning a target motion track of the robot 1 on the two-dimensional panoramic map through the upper computer 421, wherein the target motion track is formed by a series of planning points on the two-dimensional panoramic map.

And S4, calculating coordinates of a series of planning points on the two-dimensional panoramic map in a world coordinate system (namely coordinate conversion of the planning points between a pixel coordinate system and the world coordinate system).

S5, placing the robot 1 in the indoor scene, selecting the robot 1 as a tracking target through the upper computer 421, performing target tracking on the robot 1 through the upper computer 421 in the moving process of the robot 1, obtaining the real-time position of the robot 1 under the pixel coordinate system, and then calculating the real-time position of the robot 1 under the world coordinate system (namely coordinate conversion of the real-time position of the robot 1 between the pixel coordinate system and the world coordinate system).

And S6, calculating the real-time posture of the robot 1 at the real-time position in the step S5 based on the position of the L-shaped line light source 2 on the robot 1 in the real-time image acquired by the monocular camera 3, wherein the real-time position and the real-time posture of the robot 1 in the world coordinate system are the real-time posture of the robot 1.

S7, the upper computer 421 compares the real-time pose of the mobile robot 1 with the target motion trajectory, and outputs a walking control signal to the lower computer 422, the lower computer 422 transmits the received walking control signal to the robot 1, and the robot 1 completes a walking instruction based on the walking control signal and finally reaches the planned destination.

In the positioning method of the indoor navigation positioning device based on the multi-view vision, the positioning process of the positioning method is not interfered by the external environment and is not limited by the positioning area, so that the positioning precision is improved, and the positioning cost is reduced. In addition, the positioning range realized by the positioning method can realize elastic adjustment along with the flexible deployment of the monocular camera. In addition, the positioning method is suitable for occasions with high automation degree requirements and high machine operation requirements (particularly suitable for occasions with long-term reciprocating motion of the robot indoors), and can effectively avoid the influence of personnel intervention on production safety and operation efficiency.

In one embodiment, referring to fig. 1, in step S4, the calculation process of the arbitrary planning point on the two-dimensional panoramic map includes steps S41-S43.

S41, reading the coordinate (u) of the planning point in the pixel coordinate system1,v1). It should be noted that the initial images acquired by the at least two monocular cameras 3 contain the planning point, and the coordinates of the planning point in the pixel coordinate systems corresponding to the at least two monocular cameras 3 are consistent.

S42, selecting two adjacent monocular cameras 3 from all the monocular cameras 3 that have collected the planning point, projecting the origins of the camera coordinate systems of the two adjacent monocular cameras 3 that have collected the planning point into the world coordinate system, respectively, and obtaining the coordinates P of the projection points of the origins of the camera coordinate systems of the two adjacent monocular cameras 31(x1,y1) And P2(x2,y2)。

S43, calculating coordinates (x, y) of the planning point in the world coordinate system, wherein the calculation formula is:

wherein f isaxIs a monocular camera 3 edge UaNormalized focal length of axis, fayFor monocular camera 3 edge VaNormalized focal length of axis, caxU being the optical center of monocular camera 3aAxis coordinate, cayV being the optical center of monocular camera 3aAxial coordinate, ZcIs the vertical distance between the monocular camera 3 and the plane where the robot 1 is located.

In order to facilitate the positioning calculation and reduce the positioning error, the internal parameters of the monocular cameras 3 are consistent, while for any type of monocular camera, the internal parameter parameters are fixed, and f may be different1x=f2x=…=fx,f1y=f2y=…fy,c1x=c2x=…=cx,c1y=c2y=…=cyThen, the above calculation formula is:

in one embodiment, in step S5, the calculation process of the real-time position of the robot 1 at any time in the world coordinate system includes steps S51-S53.

S51, the coordinates (u) of the robot 1 in the pixel coordinate system are read1',v1'). It should be noted that, during the moving process of the robot 1, at any time, the at least two monocular cameras 3 acquire the current position of the robot 1, and the coordinates of the robot 1 currently under the pixel coordinate systems corresponding to the at least two monocular cameras 3 are all consistent.

S52, two adjacent monocular cameras 3 are selected from all the monocular cameras 3 which acquire the current position of the robot 1, the original points of the camera coordinate systems of the two adjacent monocular cameras 3 which acquire the current position of the robot 1 are projected into a world coordinate system, and the coordinates P of the projection points of the original points of the camera coordinate systems of the two adjacent monocular cameras 3 are obtained1'(x1',y1') and P2'(x2',y2')。

S53, calculating the coordinates (x ', y') of the robot 1 in the world coordinate system, and the calculation formula is:

wherein f isaxIs a monocular camera 3 edge UaNormalized focal length of axis, fayFor monocular camera 3 edge VaNormalized focal length of axis, caxU being the optical center of monocular camera 3aAxis coordinate, cayV being the optical center of monocular camera 3aAxial coordinate, ZcIs the vertical distance between the monocular camera 3 and the plane where the robot 1 is located.

Similarly, for the convenience of positioning calculation and reducing positioning error, the internal parameters of the monocular cameras 3 are all consistent, while for any type of monocular camera, the internal parameters are fixed, and f may be different1x=f2x=…=fx,f1y=f2y=…fy,c1x=c2x=…=cx,c1y=c2y=…=cyThen, the above calculation formula is:

in an embodiment, in step S6, the calculation process of the real-time pose (i.e., orientation) of the robot 1 at the real-time position in step S5 includes steps S61-S63.

S61, select the segment AB on the L-shaped line source 2 as the target segment, and respectively read the coordinates of the end point A, B of the segment AB currently under the pixel coordinate system. The monocular camera 3 that has acquired the current position of the robot 1 acquires the line segment AB at the same time, and the coordinates of the end point A, B of the line segment AB can be directly read out from the pixel coordinate system corresponding to the monocular camera 3 that has acquired the line segment AB. The end point A, B of the line segment AB is consistent in the coordinates of the pixel coordinate systems corresponding to all the monocular cameras 3 that have acquired the line segment AB.

S62, byIt is determined in which direction the robot 1 is rotating (i.e. whether the robot 1 is rotating clockwise or counterclockwise), whereinIs the direction vector of the line segment AB in the pixel coordinate system at the current moment,is the direction vector of the pixel coordinate system at the moment on the segment AB.

S63, byThe rotation angle θ of the robot 1 is determined.

In one embodiment, in step S6, the line segment AB may be a long line segment 21 or a short line segment 22 on the L-shaped linear light source 2. In the whole positioning calculation process, the long line segment 21 and the short line segment 22 of the L-shaped linear light source 2 are different in length, so that the line segment AB can be always the same line segment at any two adjacent moments, and the accuracy of judging the rotation angle theta of the robot 1 is improved.

13页详细技术资料下载
上一篇:一种医用注射器针头装配设备
下一篇:一种用于星敏感器姿态测量的超分辨率成像方法

网友询问留言

已有0条留言

还没有人留言评论。精彩留言会获得点赞!

精彩留言,会给你点赞!