Projection device

Document No.: 1734916    Publication date: 2019-12-20

Reading note: This technology, "a projection device" (Projection device), was designed and created by 郭峻豪, 詹宏智, 冯信璁 and 江长轩 on 2019-08-22. Its main content is as follows: The invention provides a positioning method and a positioning system, comprising: a first image acquisition device acquires a first image of a target object, and a second image acquisition device acquires a second image of the target object; a first offset angle of the target object relative to the first image acquisition device is obtained according to the relative position of the target object in the first image and the shooting angle range of the first image acquisition device, and a second offset angle of the target object relative to the second image acquisition device is obtained according to the relative position of the target object in the second image and the shooting angle range of the second image acquisition device; and the position of the target object is obtained according to the position information or relative position information of the first image acquisition device, the second image acquisition device and the target object, together with the first offset angle and the second offset angle. The positioning method and positioning system analyze the positions of the image acquisition devices and the image information they capture to obtain the position information of the target object, thereby realizing positioning monitoring of the object.

1. A method of positioning, comprising:

A. acquiring, by a first image acquisition device, a first image of a target object, and acquiring, by a second image acquisition device, a second image of the target object;

B. acquiring a first offset angle of the target relative to the first image acquisition device according to the relative position of the target in the first image and the shooting angle range of the first image acquisition device, and acquiring a second offset angle of the target relative to the second image acquisition device according to the relative position of the target in the second image and the shooting angle range of the second image acquisition device;

C. obtaining the position of the target object according to the position information or relative position information of the first image acquisition device, the second image acquisition device and the target object, and according to the first offset angle and the second offset angle.

2. The method according to claim 1, wherein in step A, the first image and the second image are respectively a motion picture frame or a still picture acquired by the first image capturing device and the second image capturing device at the same time.

3. The positioning method according to claim 1, wherein step A comprises: identifying a common object in the first image and the second image as the target object.

4. The positioning method according to claim 1, wherein in step B: the first offset angle θ1' is: θ1' = (image_x1) × (θ11 − θ10) / w1 ……………… (1)

wherein image_x1 is the abscissa of the target object in the first image, w1 is the frame width of the first image, (θ11 − θ10) indicates the shooting angle range of the first image acquisition device, and θ10 and θ11 are respectively the start angle and the end angle of the shooting angle range of the first image acquisition device;

the second offset angle θ2' is: θ2' = (image_x2) × (θ21 − θ20) / w2 ……………… (2)

wherein image_x2 is the abscissa of the target object in the second image, w2 is the frame width of the second image, (θ21 − θ20) indicates the shooting angle range of the second image acquisition device, and θ20 and θ21 are respectively the start angle and the end angle of the shooting angle range of the second image acquisition device.

5. The positioning method of claim 4, wherein θ10 and θ11 are absolute angles relative to a preset initial position of the first image acquisition device, or relative angles relative to the current pointing direction of the first image acquisition device; and θ20 and θ21 are absolute angles relative to a preset initial position of the second image capturing device, or relative angles relative to the current pointing direction of the second image capturing device.

6. The positioning method according to claim 4, wherein θ10 and θ11 are fixed values, or variable values adjusted according to the depth of field or focal length of the first image acquisition device; and θ20 and θ21 are fixed values, or variable values adjusted according to the depth of field or focal length of the second image capturing device.

7. The positioning method according to claim 1, wherein step C comprises: obtaining a third distance and a third angle of a connecting line between the first image acquisition device and the second image acquisition device according to the position information of the first image acquisition device and the second image acquisition device; and obtaining the position of the target object according to the third distance, the third angle, the first offset angle and the second offset angle.

8. The positioning method according to claim 1, wherein step C specifically comprises: obtaining, according to the triangulation principle, the triangular geometric relations given by formulas (3) to (7) in the detailed description;

calculating d1 according to formula (3) and then calculating X0 according to formula (4) and Y0 according to formula (6); or calculating d2 according to formula (3) and then calculating X0 according to formula (5) and Y0 according to formula (7);

wherein the position coordinates of the first image acquisition device are expressed as (x1, y1), the position coordinates of the second image acquisition device are expressed as (x2, y2), (X0, Y0) represents the position coordinates of the target object, d1 represents the distance from the first image acquisition device to the target object, d2 represents the distance from the second image acquisition device to the target object, θ1 represents the first offset angle of the target object relative to the first image acquisition device, and θ2 represents the second offset angle of the target object relative to the second image acquisition device.

9. The positioning method according to claim 1, wherein the position information of the first image capturing device includes a first GPS position and first orientation information, and the position information of the second image capturing device includes a second GPS position and second orientation information.

10. A positioning system, comprising: a monitoring device and at least two image acquisition devices, the monitoring device being communicatively connected with the at least two image acquisition devices; the at least two image acquisition devices comprise a first image acquisition device and a second image acquisition device;

the first image acquisition device is used for acquiring a first image of a target object;

the second image acquisition device is used for acquiring a second image of the target object;

the monitoring device is used for acquiring a first offset angle of the target relative to the first image acquisition device according to the relative position of the target in the first image and the shooting angle range of the first image acquisition device, and acquiring a second offset angle of the target relative to the second image acquisition device according to the relative position of the target in the second image and the shooting angle range of the second image acquisition device; the monitoring device is further used for obtaining the position of the target object according to the position information or the relative position information of the first image acquisition device, the second image acquisition device and the target object, the first offset angle and the second offset angle.

Technical Field

The present invention relates to the field of monitoring and positioning, and in particular, to a positioning method and a positioning system.

Background

In existing positioning, tracking and monitoring systems, position information about a target object is acquired and transmitted to a camera device or monitoring device for positioning and tracking. This requires a positioning device such as a GPS receiver to be installed on the target object, and the positioning device must communicate with the monitoring device to transmit the positioning information to a monitoring platform; in practice, however, a positioning device cannot be installed on every target object. In addition, the monitoring device needs to establish communication with the target object carrying the positioning device in order to obtain its positioning information. This process is complex, the applicable scenes or occasions are very limited, and such systems are inconvenient or even impossible to use in many scenarios.

Disclosure of Invention

Accordingly, it is desirable to provide a positioning method and a positioning system that can facilitate positioning monitoring and improve versatility.

Based on the above purpose, the present invention provides a positioning method, which includes the following steps:

A. acquiring, by a first image acquisition device, a first image of a target object, and acquiring, by a second image acquisition device, a second image of the target object;

B. acquiring a first offset angle of the target relative to the first image acquisition device according to the relative position of the target in the first image and the shooting angle range of the first image acquisition device, and acquiring a second offset angle of the target relative to the second image acquisition device according to the relative position of the target in the second image and the shooting angle range of the second image acquisition device;

C. obtaining the position of the target object according to the position information or relative position information of the first image acquisition device, the second image acquisition device and the target object, and according to the first offset angle and the second offset angle.

Preferably, in step A, the first image and the second image are respectively a dynamic image frame or a static picture acquired by the first image capturing device and the second image capturing device at the same time.

Preferably, step A comprises: identifying a common object in the first image and the second image as the target object.

Preferably, in step B: the first offset angle θ1' is: θ1' = (image_x1) × (θ11 − θ10) / w1 ……………… (1)

wherein image_x1 is the abscissa of the target object in the first image, w1 is the frame width of the first image, (θ11 − θ10) indicates the shooting angle range of the first image acquisition device, and θ10 and θ11 are respectively the start angle and the end angle of the shooting angle range of the first image acquisition device;

the second offset angle θ2' is: θ2' = (image_x2) × (θ21 − θ20) / w2 ……………… (2)

wherein image_x2 is the abscissa of the target object in the second image, w2 is the frame width of the second image, (θ21 − θ20) indicates the shooting angle range of the second image acquisition device, and θ20 and θ21 are respectively the start angle and the end angle of the shooting angle range of the second image acquisition device.

Preferably, θ10 and θ11 are absolute angles relative to a preset initial position of the first image acquisition device, or relative angles relative to the current pointing direction of the first image acquisition device; and θ20 and θ21 are absolute angles relative to a preset initial position of the second image capturing device, or relative angles relative to the current pointing direction of the second image capturing device.

Preferably, θ10 and θ11 are fixed values, or variable values adjusted according to the depth of field or focal length of the first image acquisition device; θ20 and θ21 are fixed values, or variable values adjusted according to the depth of field or focal length of the second image capturing device.

Preferably, step C includes: obtaining a third distance and a third angle of a connecting line between the first image acquisition device and the second image acquisition device according to the position information of the first image acquisition device and the second image acquisition device; and obtaining the position of the target object according to the third distance, the third angle, the first offset angle and the second offset angle.

Preferably, step C specifically includes: obtaining, according to the triangulation principle, the triangular geometric relations given by formulas (3) to (7) below;

d1 is calculated according to formula (3), then X0 is calculated according to formula (4) and Y0 according to formula (6); or d2 is calculated according to formula (3), then X0 is calculated according to formula (5) and Y0 according to formula (7);

wherein the position coordinates of the first image acquisition device are expressed as (x1, y1), the position coordinates of the second image acquisition device as (x2, y2), (X0, Y0) represents the position coordinates of the target object, d1 represents the distance from the first image acquisition device to the target object, d2 represents the distance from the second image acquisition device to the target object, θ1 represents the first offset angle of the target object relative to the first image acquisition device, and θ2 represents the second offset angle of the target object relative to the second image acquisition device.

Preferably, the position information of the first image capturing device includes a first GPS position and first orientation information, and the position information of the second image capturing device includes a second GPS position and second orientation information.

Based on the above object, the present invention further provides a positioning system, comprising: a monitoring device and at least two image acquisition devices, the monitoring device being communicatively connected with the at least two image acquisition devices; the at least two image acquisition devices comprise a first image acquisition device and a second image acquisition device;

the first image acquisition device is used for acquiring a first image of a target object;

the second image acquisition device is used for acquiring a second image of the target object;

the monitoring device is used for acquiring a first offset angle of the target relative to the first image acquisition device according to the relative position of the target in the first image and the shooting angle range of the first image acquisition device, and acquiring a second offset angle of the target relative to the second image acquisition device according to the relative position of the target in the second image and the shooting angle range of the second image acquisition device; the monitoring device is further used for obtaining the position of the target object according to the position information or the relative position information of the first image acquisition device, the second image acquisition device and the target object, the first offset angle and the second offset angle.

The above positioning method and positioning system can perform positioning monitoring of an object in the monitored area based on the positioning information of the first and second image acquisition devices and the images they respectively capture. By adjusting the parameters of the first and second image acquisition devices, their field of view, monitoring range and monitored area can also be adjusted according to the captured object; the monitored area can be adjusted as the monitored object moves, so that a specific object can be tracked and captured directionally, realizing real-time tracking within a certain area. The invention can perform positioning monitoring of a target object without obtaining the target object's authorization, establishing a dedicated communication protocol, or installing a positioning device such as a GPS receiver on the target object, and is therefore not limited by the communication scenario or by the target object's permission.

Drawings

Fig. 1 is a partial flowchart of a positioning method according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the relationship between an image capturing device and a target object in a captured image according to an embodiment of the present invention;

FIG. 3 is a schematic diagram illustrating a relative position relationship between a first image capturing device, a second image capturing device and a target object according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of a relative position relationship between a first image capturing device, a second image capturing device and a target object according to another embodiment of the present invention;

fig. 5 is a partial schematic view of an application scenario of the positioning system according to an embodiment of the invention.

Detailed Description

In order to further understand the objects, structures, features and functions of the present invention, the following embodiments are described in detail.

Certain terms are used throughout the description and the following claims to refer to particular components. As one of ordinary skill in the art will appreciate, manufacturers may refer to a component by different names. This specification and the claims do not distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should therefore be interpreted to mean "including, but not limited to".

As shown in fig. 1, a flow chart of a positioning method according to a first embodiment of the present invention is disclosed, which includes the following steps:

step A, a first image acquisition device 10 acquires a first image I1 of an object 30, and a second image acquisition device 20 acquires a second image of the object 30;

step B, obtaining a first offset angle of the object 30 relative to the first image capturing device 10 according to the relative position of the object 30 in the first image I1 and the shooting angle range of the first image capturing device 10, and obtaining a second offset angle of the object 30 relative to the second image capturing device 20 according to the relative position of the object 30 in the second image and the shooting angle range of the second image capturing device 20;

and step C, obtaining the position of the target object 30 according to the position information or the relative position information of the first image acquisition device 10, the second image acquisition device 20 and the target object 30, and the first offset angle and the second offset angle.

In a preferred embodiment, in step A, the first image I1 and the second image I2 are motion picture frames or still pictures acquired by the first image capturing device 10 and the second image capturing device 20 at the same time, respectively.

Further, in step A of this embodiment, an object common to the first image I1 and the second image I2 is identified as the target object 30. Image recognition techniques may be used for this, such as recognizing the license plate number of a moving vehicle, or extracting features of the target object 30 for feature matching.
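The patent does not prescribe a particular recognition algorithm. As an illustration only, the following minimal sketch (assuming Python with OpenCV; the function name and the matching threshold are hypothetical) checks whether two frames appear to share a common object by ORB feature matching:

```python
# Illustrative sketch only: ORB feature matching with OpenCV (assumed available).
# The function name and min_matches threshold are hypothetical, not from the patent.
import cv2

def share_common_object(frame1, frame2, min_matches=10):
    """Return True if frame1 and frame2 appear to contain a common object."""
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return len(matches) >= min_matches
```

In practice, license-plate recognition or any other feature-matching scheme mentioned above could replace this check.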

Further, in the present embodiment, the position information of the first image capturing device 10 includes: a first GPS location and first orientation information. The position information of the second image capturing device 20 includes: a second GPS location and second orientation information.

Preferably, the photographing angle range of the first image pickup device 10 or the photographing angle range of the second image pickup device 20 in the present embodiment is a horizontal angle range or a horizontal angle of view of the photographing range.

The offset angle in the present embodiment is the relative offset angle of the target object 30 with respect to the field angle of the image pickup device, i.e. with respect to the start angle or the central angle of the shooting angle range.

The image acquisition device of this embodiment may be a camera that acquires images continuously (dynamic images) or intermittently (static pictures). In a system with multiple cameras, when an intersection camera captures a target object 30 entering a new road section, the image acquisition operation of the other cameras on that road section can be triggered; likewise, when an entrance camera captures a target object 30 entering a site, the image acquisition operation of the other cameras in the site, or in a specific area of the site, can be triggered. The specific acquisition mode is set according to actual needs.

Further, in the present embodiment, the position information of the first image capturing device 10 and the second image capturing device 20 may be manually recorded in the system when the first image capturing device 10 and the second image capturing device 20 are installed, or may be obtained according to the positioning devices of the first image capturing device 10 and the second image capturing device 20. The positioning device may be built in the image acquisition device or may be externally provided, and is mainly used to acquire position information such as GPS information. The positioning device may also be a built-in or external gyroscope for acquiring the orientation information of the image capturing device, such as the rotation angle, but the invention is not limited thereto.

Before the position of the target object 30 is calculated in step C, the position information of the first image acquisition device 10 and the second image acquisition device 20 is acquired. For example, if the first image acquisition device 10 and the second image acquisition device 20 are fixed, their position information may be pre-stored in a database and retrieved and parsed according to the identification information of the two devices. As another example, whether the first image acquisition device 10 and the second image acquisition device 20 are fixed or mobile, they may upload, together with the images, the GPS information and/or orientation information obtained from the GPS and orientation devices built into or associated with them.
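As a hedged illustration of how such position information might be organized (the record fields, identifiers and numeric values below are hypothetical, not part of the patent), a fixed device's entry could be pre-stored and looked up by its identification information:

```python
# Hypothetical schema for pre-stored capture-device records; field names are illustrative.
from dataclasses import dataclass

@dataclass
class CameraInfo:
    camera_id: str       # identification information of the device
    x: float             # GPS-derived or surveyed x coordinate
    y: float             # GPS-derived or surveyed y coordinate
    pointing_deg: float  # current pointing angle of the optical axis, degrees
    hfov_deg: float      # horizontal shooting angle range (field angle), degrees

# Fixed devices: records pre-stored in a database-like mapping.
# Mobile devices would instead upload GPS/orientation data with each image.
CAMERA_DB = {
    "cam1": CameraInfo("cam1", x=0.0, y=0.0, pointing_deg=45.0, hfov_deg=60.0),
    "cam2": CameraInfo("cam2", x=30.0, y=0.0, pointing_deg=-63.0, hfov_deg=60.0),
}

def lookup_camera(camera_id: str) -> CameraInfo:
    return CAMERA_DB[camera_id]
```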

The positioning method of this embodiment can position a stationary object as well as an object in motion. Preferably, the positioning method of this embodiment monitors a specific object by positioning it while it moves, or monitors moving objects in a specific area or scene in real time; a specific object can be monitored within a certain area, and the shooting angle range or field angle, field of view, scene and shooting area can be adjusted by adjusting the focal length and lens orientation of the image acquisition device. Examples include real-time monitoring of vehicles travelling on roads, or of people entering and leaving venues such as meeting halls and stations.

Further, the size of the first image I1 or the second image I2 of this embodiment is expressed as width × height. The width (w1 or w2) of the first image I1 or the second image I2, the first target image coordinates (image_x1, image_y1) of the target object 30 in the first image I1, and the second target image coordinates (image_x2, image_y2) of the target object 30 in the second image I2 may be obtained from the number of image pixels or from measured dimensions. The above coordinates may refer to a specific feature point of the target image in the image, such as its center position, center of gravity, leftmost edge or rightmost edge, and the invention is not limited thereto.

In the present embodiment, the relative position of the target object 30 in the first image I1 is determined based on the first target image abscissa image_x1 and the width w1 of the first image I1. The relative position of the target object 30 in the second image I2 is determined in the same way and is not described again here.

Preferably, the shooting angle range or field angle in this embodiment is the horizontal shooting angle range or horizontal field angle. Images of a stationary or moving object, such as vehicles travelling on a road, are collected according to the horizontal shooting angle range or horizontal field angle projected downwards by the first image capturing device 10 and the second image capturing device 20, so as to obtain the relative position of the target object 30 in each captured image.

The shooting angle range, or field angle, of this embodiment is determined by the image sensor (e.g. CCD) size and the lens focal length: horizontal field angle = 2 × arctan(w'/2f); vertical field angle = 2 × arctan(h'/2f); diagonal field angle = 2 × arctan(d'/2f), where w' is the width of the image sensor (e.g. CCD), h' is its height, d' is its diagonal length, and f is the lens focal length.
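For instance, the field-angle relation above can be evaluated directly. The sketch below (hypothetical function name; the sensor dimensions are illustrative, not taken from the patent) computes the horizontal and vertical field angles from the sensor size and focal length:

```python
import math

def field_angle_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Field angle = 2 * arctan(sensor dimension / (2 * focal length)), in degrees."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Illustrative example: a sensor roughly 5.37 mm x 4.04 mm behind a 4 mm lens
horizontal_fov = field_angle_deg(5.37, 4.0)  # ~67.8 degrees (uses sensor width w')
vertical_fov = field_angle_deg(4.04, 4.0)    # ~53.6 degrees (uses sensor height h')
```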

Further, a first offset angle θ1' of the target object 30 relative to the start line of the shooting angle range (or field angle) of the first image acquisition device 10 is estimated according to the relative position of the target object 30 in the first image I1 and the shooting angle range or field angle of the first image acquisition device 10. Similarly, a second offset angle θ2' of the target object 30 relative to the start side/start line of the shooting angle range (or field angle) of the second image acquisition device 20 is estimated according to the relative position of the target object 30 in the second image I2 and the shooting angle range or field angle of the second image acquisition device 20.

As shown in fig. 2, 3 and 4, further, in step B:

The first offset angle θ1' is: θ1' = (image_x1) × (θ11 − θ10) / w1 …………………… (1)

wherein (image_x1)/w1 is the relative position of the target object 30 in the first image I1, image_x1 is the abscissa of the target object 30 in the first image I1, w1 is the frame width of the first image I1, (θ11 − θ10) indicates the shooting angle range or field angle of the first image acquisition device 10, and θ10 and θ11 are respectively the start angle and the end angle of the shooting angle range of the first image acquisition device 10.

The second offset angle θ2' is: θ2' = (image_x2) × (θ21 − θ20) / w2 …………………… (2)

wherein (image_x2)/w2 is the relative position of the target object 30 in the second image I2, image_x2 is the abscissa of the target object 30 in the second image I2, w2 is the frame width of the second image I2, (θ21 − θ20) indicates the shooting angle range or field angle of the second image acquisition device 20, and θ20 and θ21 are respectively the start angle and the end angle of the shooting angle range of the second image acquisition device 20.
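A minimal sketch of formulas (1) and (2) follows (hypothetical function name; angles in degrees). The same routine serves both devices, since the two formulas have identical form:

```python
def offset_angle(image_x: float, frame_width: float,
                 theta_start: float, theta_end: float) -> float:
    """Offset angle per formulas (1)/(2): theta' = image_x * (theta_end - theta_start) / w."""
    return image_x * (theta_end - theta_start) / frame_width

# Example: a 1920-pixel-wide frame covering a 60-degree shooting angle range;
# a target at pixel column 480 (a quarter of the way across) is offset 15 degrees
# from the start angle of that range.
theta1_prime = offset_angle(image_x=480, frame_width=1920, theta_start=0.0, theta_end=60.0)
```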

In the present embodiment, θ10 and θ11 may be relative angles with respect to the current pointing direction of the first image capturing device 10, and θ20 and θ21 may be relative angles with respect to the current pointing direction of the second image capturing device 20. As shown in Fig. 2 and Fig. 3, the angles associated with the first image capturing device 10 are then defined relative to its current pointing angle θ12, and the angles associated with the second image capturing device 20 relative to its current pointing angle θ22. In this case θ10 and θ11 are generally equal in magnitude, as are θ20 and θ21, and the fields of view of the image capturing devices 10 and 20 are generally equal or symmetric about their optical axes.

For example, when the target image 31 or 32 lies to the left of the central axis of image I1 or I2 in Fig. 2, the offset relative to the central axis is a negative value and the resulting θ1' is negative; when the target image 31 or 32 lies to the right of the central axis, the offset is positive and the resulting θ1' is positive. The line from the first image capturing device 10 to the target object 30 then forms an angle θ1 with the x-axis (the normalized first offset angle), and the line from the second image capturing device 20 to the target object 30 forms an angle θ2 with the x-axis (the normalized second offset angle).

Based on this coordinate relationship, θ1 = θ12 + θ1' − (θ11 − θ10)/2 and θ2 = θ22 − ((θ21 − θ20)/2 − θ2').
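The relation above amounts to adding the offset angle to the pointing angle and subtracting half the field angle, for either device. A small sketch (hypothetical names, angles in degrees):

```python
def line_of_sight_angle(pointing_deg: float, offset_deg: float, fov_deg: float) -> float:
    """Angle of the device-to-target line, per theta1 = theta12 + theta1' - (theta11 - theta10)/2
    (and the equivalent expression for the second device)."""
    return pointing_deg + offset_deg - fov_deg / 2.0

# Example: pointing angle 45 degrees, offset 15 degrees from the start of a 60-degree range
theta1 = line_of_sight_angle(45.0, 15.0, 60.0)  # 30.0 degrees
```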

Further, in this embodiment θ10 and θ11 may be fixed values, or variable values adjusted according to the depth of field or focal length of the first image capturing device 10; likewise, θ20 and θ21 may be fixed values, or variable values adjusted according to the depth of field or focal length of the second image capturing device 20. For example, when an image capturing device adjusts its lens to perform optical or digital zooming, the image of the object is enlarged within the field of view and the field angle (θ11 − θ10) or (θ21 − θ20) becomes smaller. In other words, the image capture device 10/20 may be a fixed-focus device with a fixed field of view, or a variable-focus device whose field of view varies as the focal length is adjusted. Adjustment of the focal length includes, but is not limited to, changing the lens, optical zooming and digital zooming.

In another embodiment, θ10 and θ11 may also be absolute angles relative to a preset initial position of the first image capturing device 10, and θ20 and θ21 absolute angles relative to a preset initial position of the second image capturing device 20. As shown in Fig. 4, the angles associated with the first image capturing device 10, namely the first offset angle θ1', the pointing angle θ12 of the first image capturing device 10 (the lens central axis, i.e. optical axis S1), and the start angle θ10 and end angle θ11 of its shooting angle range, are all relative to a first coordinate system established with the first image capturing device 10 as the origin. The angles associated with the second image capturing device 20 are all relative to a second coordinate system established with the second image capturing device 20 as the origin, namely the second offset angle θ2', the pointing angle θ22 of the second image capturing device 20 (the lens central axis, i.e. optical axis S2), and the start angle θ20 and end angle θ21 of its shooting angle range. The 0-degree direction and the rotation direction of the two coordinate systems are the same. θ1 and θ2 of the previous embodiment can then be obtained by similar reasoning, which is not repeated here.

In one embodiment, step C comprises: obtaining a third distance and a third angle of the line connecting the first image acquisition device and the second image acquisition device according to the position information of the first and second image acquisition devices; and obtaining the position of the target object according to the third distance, the third angle, the first offset angle and the second offset angle.

In another embodiment, assume that the position coordinates of the target object 30 are (X0, Y0), the position coordinates of the first image capturing device 10 are (x1, y1), and the position coordinates of the second image capturing device 20 are (x2, y2); these position coordinates may be GPS coordinates. Let d represent the distance between the first and second image capturing devices, d1 the distance from the first image capturing device 10 to the target object 30, d2 the distance from the second image capturing device 20 to the target object 30, θ1 the first offset angle of the target object 30 relative to the first image capturing device 10, and θ2 the second offset angle of the target object 30 relative to the second image capturing device 20. Step C comprises the following geometric relations, according to the triangulation principle:

X0=x1+d1×sinθ1…………………………………(4)

X0=x2+d2×sinθ2…………………………………(5)

Y0=y1+d1×cosθ1…………………………………(6)

Y0=y2+d2×cosθ2…………………………………(7)

Solving equations (4), (5), (6) and (7) above simultaneously for d1 and d2 gives equation (3):

d1 = [(x2 − x1) × cosθ2 − (y2 − y1) × sinθ2] / sin(θ1 − θ2), d2 = [(x2 − x1) × cosθ1 − (y2 − y1) × sinθ1] / sin(θ1 − θ2) ……………(3)

Since x1, y1, x2 and y2 have been obtained in the foregoing manner, d1 and d2 can be calculated according to equation (3); X0 is then calculated according to equation (4) or (5), and Y0 according to equation (6) or (7), i.e. the position coordinates of the target object 30 are obtained.
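A compact sketch of this calculation follows (the closed form for d1 is the one obtained by solving equations (4) to (7) simultaneously; function and variable names are illustrative, angles in degrees):

```python
import math

def locate_target(x1, y1, theta1_deg, x2, y2, theta2_deg):
    """Triangulate the target position from the two device positions and the angles
    theta1, theta2 of their lines of sight, following X0 = x + d*sin(theta),
    Y0 = y + d*cos(theta) as in equations (4)-(7)."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    denom = math.sin(t1 - t2)
    if abs(denom) < 1e-9:
        raise ValueError("lines of sight are (nearly) parallel; target position is not unique")
    dx, dy = x2 - x1, y2 - y1
    d1 = (dx * math.cos(t2) - dy * math.sin(t2)) / denom  # equation (3)
    x0 = x1 + d1 * math.sin(t1)                           # equation (4)
    y0 = y1 + d1 * math.cos(t1)                           # equation (6)
    return x0, y0

# Example: devices at (0, 0) and (30, 0); lines of sight at 45 and about -63.43 degrees
# intersect near (10, 10).
x0, y0 = locate_target(0.0, 0.0, 45.0, 30.0, 0.0, -63.43)
```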

The positioning system provided by the present invention may include a plurality of image capturing devices, in which two adjacent devices monitor and cover the same area; as shown in Fig. 5, the positioning method described above is then applied so that the whole monitoring area is covered.

The lenses of the first image capturing device 10 and the second image capturing device 20 of this embodiment may be adjusted, or controlled to be adjusted, as needed, including their orientation, angle and other parameters. The coordinate plane may be established by selecting three corresponding points for the first image capturing device 10, the second image capturing device 20 and the target object 30, or the coordinate system may be established by projecting the first image capturing device 10, the second image capturing device 20 and the target object 30 onto one plane for the related calculation.

In this embodiment, the image capturing devices may be placed at different positions along a road, and may be staggered on the two sides of the road to enlarge the monitoring range, so as to carry out positioning monitoring of moving objects or vehicles on the road, or targeted monitoring of a specific object or vehicle; the devices may also be controlled to rotate their lenses, adjust their focal lengths and so on in order to position and monitor an object. The positioning method of this embodiment is not limited to straight roads and is equally applicable to curved roads or trails: the first image capturing device 10 and the second image capturing device 20 are arranged in a staggered manner on different sides relative to the target object 30 to be positioned, and each device forms, with an adjacent device on the opposite side, a first/second image capturing device pair that positions and monitors the object. "Opposite sides" here refers only to sides in different directions from the target object 30, or to the two sides of the road, and is not limited to symmetrically placed sides; the devices may be placed on the two sides of a curved section, of multiple curved sections, or of a continuously turning lane.

Referring to FIG. 5, a schematic diagram of an embodiment of a positioning system of the present invention is disclosed, which includes: a monitoring device and at least two image acquisition devices (10, 20), wherein the monitoring device is connected with the at least two image acquisition devices in a communication way; wherein, the at least two image acquisition devices comprise a first image acquisition device 10 and a second image acquisition device 20;

the first image acquisition device 10 is used for acquiring a first image I1 of the object 30;

the second image capturing device 20 is used for acquiring a second image I2 of the object 30;

the monitoring device obtains a first offset angle of the object 30 relative to the first image capturing device 10 according to the relative position of the object 30 in the first image I1 and the shooting angle range of the first image capturing device 10, obtains a second offset angle of the object 30 relative to the second image capturing device 20 according to the relative position of the object 30 in the second image I2 and the shooting angle range of the second image capturing device 20, and obtains the position of the object 30 according to the position information or the relative position information of the first image capturing device 10, the second image capturing device 20 and the object 30, and the first offset angle and the second offset angle.
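As an illustration of how the monitoring device could chain steps B and C, the following sketch reuses the hypothetical helpers from the earlier sketches (CameraInfo, lookup_camera, offset_angle, line_of_sight_angle, locate_target); it is a sketch under those assumptions, not the patented implementation itself:

```python
def monitor_target(cam1: CameraInfo, cam2: CameraInfo,
                   image_x1: float, w1: float,
                   image_x2: float, w2: float):
    """Locate the target from its pixel abscissa in each device's frame."""
    # Step B: offset angles from the relative positions in the two frames
    # (the shooting angle range of each device is taken as [0, hfov_deg]).
    off1 = offset_angle(image_x1, w1, 0.0, cam1.hfov_deg)
    off2 = offset_angle(image_x2, w2, 0.0, cam2.hfov_deg)
    theta1 = line_of_sight_angle(cam1.pointing_deg, off1, cam1.hfov_deg)
    theta2 = line_of_sight_angle(cam2.pointing_deg, off2, cam2.hfov_deg)
    # Step C: triangulate from the two device positions and line-of-sight angles.
    return locate_target(cam1.x, cam1.y, theta1, cam2.x, cam2.y, theta2)

# Example with the database records sketched earlier:
target_xy = monitor_target(lookup_camera("cam1"), lookup_camera("cam2"),
                           image_x1=1440, w1=1920, image_x2=960, w2=1920)
```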

Further, in the present embodiment, the position information of the first image capturing device 10 includes: a first GPS location and first orientation information. The position information of the second image capturing device 20 includes: a second GPS location and second orientation information.

Preferably, the photographing angle range of the first image pickup device 10 or the photographing angle range of the second image pickup device 20 in the present embodiment is a horizontal angle range or a horizontal angle of view of the photographing range.

The monitoring device of this embodiment may be a module integrated into one of the image acquisition devices, an independent data processing device, a server, a monitoring center, and the like. The operations of the first image capturing device 10 and the second image capturing device 20 may also be controlled via the monitoring device, for example to switch their operating state, control the lens field of view or capture area, or select a target object. Each image acquisition device can be communicatively connected to the monitoring device by wired or wireless communication to upload the corresponding image information, and can receive control information from the monitoring device so as to adjust its own working state accordingly.

In addition, the positioning system can also comprise a database to store and backup the position information of each image acquisition device and the acquired images so as to facilitate subsequent online or offline analysis and processing.

The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The present invention has been described in relation to the above embodiments, which are only exemplary of the implementation of the present invention. It should be noted that the disclosed embodiments do not limit the scope of the invention. Rather, it is intended that all such modifications and variations be included within the spirit and scope of this invention.
