Method for acquiring rotating speed of spherical motor by using single optical flow sensor


Reading note: This technology, "Method for acquiring rotating speed of spherical motor by using single optical flow sensor" (一种利用单个光流传感器获取球形电机旋转速度的方法), was designed and created by 徐志科, 刘呈磊, 金龙, 冷静雯 and 朱星星 on 2021-10-13. Abstract: The invention discloses a method for acquiring the rotating speed of a spherical motor by using a single optical flow sensor, which comprises the following steps: S1, determining the position relation between the optical flow sensor and the spherical motor rotor; S2, acquiring a surface image of the spherical motor rotor with the optical flow camera; S3, performing data processing on the spherical motor rotor image; S4, selecting feature points of the spherical motor rotor image; S5, performing L-K optical flow computation on each feature point; S6, calculating the optical flow of the complete image from the optical flow of each feature point; S7, obtaining the image projection speed from the optical flow of the image; and S8, solving the speed of the spherical rotor using the position relation between the spherical motor and the sensor. Compared with traditional methods for identifying the rotating speed of a spherical rotor, the method is contactless, which reduces wear; it uses only a single camera, and the speed magnitude and direction are calculated directly from the spatial position relation of that single optical flow camera.

1. A method for acquiring a rotation speed of a spherical motor by using a single optical flow sensor, comprising the steps of:

S1, determining the position relation between the optical flow sensor and the spherical motor rotor;

S2, acquiring a surface image of the spherical motor rotor by the optical flow camera;

S3, processing the data of the rotor image of the spherical motor;

S4, selecting the image characteristic points of the spherical motor rotor;

S5, performing L-K optical flow operation on each feature point;

S6, calculating the optical flow of the complete image according to the optical flow of each feature point;

S7, acquiring the image projection speed by using the optical flow of the image;

and S8, solving the speed of the spherical rotor by using the position relation between the spherical motor and the sensor.

2. The method of claim 1, wherein the optical flow sensor is mounted on a spherical motor stator housing in S1, and a vertical distance between a sensor camera reference plane and a spherical motor rotor surface is within a camera working range.

3. The method according to claim 1, wherein the resolution of the camera for taking the picture in S2 is above 500, and the external lighting conditions are kept constant or changed slowly during shooting.

4. The method according to claim 1 or 2, wherein the S3 specifically comprises the steps of:

S31, carrying out gray scale calculation on the RGB image data;

and S32, converting the gray value into a double-precision value for computer processing.

5. The method according to claim 1, wherein the S4 specifically comprises the following steps:

S41, calculating the gradient matrix G of each pixel point in the image and its minimum eigenvalue λ_m:

where p_x, p_y are the coordinates of the pixel point being evaluated; w_x, w_y define the size of the integration window, chosen in the range of 2-7 pixels; and I_x, I_y are the partial derivatives of the pixel gray values in the x and y directions;

S42, finding the maximum value λ_max of all the minimum eigenvalues in the image;

S43, retaining the pixel points whose minimum eigenvalue is greater than 10% of λ_max;

S44, retaining the pixels at which λ_m is a local maximum, i.e. the λ_m of a retained pixel is larger than that of any other pixel within its 3 × 3 neighborhood.

6. The method according to claim 1, wherein the S5 specifically comprises the following steps:

S51, establishing an image hierarchical pyramid:

where L is the layer index and I is the gray value of the pixel at a given position; the acquired image is taken as layer 0 of the pyramid, layer 1 is computed from layer 0, layer 2 from layer 1, and so on, so that the image pyramid is obtained recursively; x and y are the coordinates of an image pixel, with the upper-left corner of the image at (0, 0);

S52, initializing the optical flow of the feature points at the topmost layer, for which no reliable initial estimate is available:

where g^L at the topmost layer denotes the initial optical flow of that layer;

S53, iteratively calculating the optical flow values of the feature points at the next layer down:

where d^L is the residual displacement vector of the L-th layer;

and S54, when the iteration finally converges, obtaining the final optical flow value:

where d is the optical flow computed for each feature point, taken as the final solution of that feature point's optical flow.

7. The method according to claim 1, wherein in S6 the optical flow of the complete image is calculated from the optical flow of each feature point:

where d_i is the optical flow of the i-th feature point, n is the number of feature points, and the result is the optical flow of the image.

8. The method according to claim 1, wherein said S7 calculates the projection velocity represented by the optical flow of the image from the resolution of the optical flow sensor camera:

where ppi is the resolution of the camera, Δt is the time interval between the two successive frames, and the result is expressed in cm/s.

9. The method according to claim 1, wherein the S8 specifically comprises the following steps:

S81, calculating the distance from the camera to the imaging surface of the spherical rotor from the image size to obtain:

where d is the distance from the camera to the imaging surface of the spherical rotor, f is the focal length of the camera, R is the radius of the spherical rotor, d' is the distance from the camera to the surface of the spherical rotor, and A and B are the dimensions of the image;

and S82, the position relation between the sensor camera and the spherical rotor is as follows:

where the result is the actual rotation speed in cm/s, its direction being opposite to that of the optical flow.

Technical Field

The invention relates to the field of speed sensors and computer image processing, in particular to a method for acquiring the rotating speed of a spherical motor by using a single optical flow sensor.

Background

In emerging high-tech fields such as aerospace attitude control, robotics and numerically controlled machine tools, the spherical motor, which can output torque about any axis and, compared with a traditional motor, has lower mass and smaller volume, has broad application prospects. To achieve high-precision control of the motor, closed-loop control of the spherical motor is required, and it is therefore necessary to acquire the rotating speed of the spherical motor.

Measuring the rotating speed of a spherical motor differs from measuring that of a traditional motor in that both the magnitude and the direction of rotation must be determined. At present, the rotating speed and rotating direction of a spherical motor are usually obtained with several cameras working in cooperation, or with methods based on Hall sensors, photoelectric encoders and the like; to measure the rotating speed of a spherical motor with multiple degrees of freedom, these methods all require coordination among multiple sensors.

Disclosure of Invention

The invention aims to provide a method for acquiring the rotating speed of a spherical motor by using a single optical flow sensor which, by fixing the relative position of the sensor and the spherical rotor, can measure the rotating speed of the spherical motor with a single camera.

The purpose of the invention can be realized by the following technical scheme:

a method for acquiring a rotation speed of a spherical motor by using a single optical flow sensor, comprising the steps of:

S1: determining the position relation between the optical flow sensor and the spherical motor rotor;

S2: the optical flow camera acquires a surface image of the spherical motor rotor;

S3: processing data of the spherical motor rotor image;

S4: selecting image characteristic points of the spherical motor rotor;

S5: performing L-K optical flow operation on each feature point;

S6: calculating the optical flow of the complete image from the optical flow of each feature point;

S7: acquiring an image projection speed by using the optical flow of the image;

S8: solving the speed of the spherical rotor by using the position relation between the spherical motor and the sensor.

Further, in S1, the optical flow sensor is mounted on the spherical motor stator housing, and the vertical distance between the reference plane of the sensor camera and the surface of the spherical motor rotor is within the working range of the camera.

Further, the resolution of the camera used to take the picture in S2 is above 500, and the external lighting conditions should remain constant or change only slowly during shooting.

Further, the S3 specifically includes the following steps:

S31: carrying out gray level calculation on the RGB image data;

S32: converting the gray values into double-precision values to facilitate computer processing.

Further, the S4 specifically includes the following steps:

S41: calculating the gradient matrix G of each pixel point in the image and its minimum eigenvalue λ_m:

where p_x, p_y are the coordinates of the pixel point being evaluated; w_x, w_y define the size of the integration window, chosen in the range of 2-7 pixels; and I_x, I_y are the partial derivatives of the pixel gray values in the x and y directions;

S42: finding the maximum value λ_max of all the minimum eigenvalues in the image;

S43: retaining the pixel points whose minimum eigenvalue is greater than 10% of λ_max;

S44: retaining the pixels at which λ_m is a local maximum, i.e. the λ_m of a retained pixel is larger than that of any other pixel within its 3 × 3 neighborhood.

Further, the S5 specifically includes the following steps:

S51: establishing an image hierarchical pyramid:

where L is the layer index and I is the gray value of the pixel at a given position; the acquired image is taken as layer 0 of the pyramid, layer 1 is computed from layer 0, layer 2 from layer 1, and so on, so that the image pyramid is obtained recursively; x and y are the coordinates of an image pixel, with the upper-left corner of the image at (0, 0);

S52: initializing the optical flow of the feature points at the topmost layer, for which no reliable initial estimate is available:

where g^L at the topmost layer denotes the initial optical flow of that layer;

S53: iteratively calculating the optical flow values of the feature points at the next layer down:

where d^L is the residual displacement vector of the L-th layer;

S54: when the iteration finally converges, obtaining the final optical flow value:

where d is the optical flow computed for each feature point, taken as the final solution of that feature point's optical flow.

Further, in S6 the optical flow of the complete image is calculated from the optical flow of each feature point:

where d_i is the optical flow of the i-th feature point, n is the number of feature points, and the result is the optical flow of the image.

Further, the S7 calculates the projection speed represented by the optical flow of the image according to the resolution of the optical flow sensor camera:

where ppi is the resolution of the camera, Δt is the time interval between the two successive frames, and the result is expressed in cm/s.

Further, the S8 specifically includes the following steps:

S81: the distance from the camera to the imaging surface of the spherical rotor is obtained from the image size:

where d is the distance from the camera to the imaging surface of the spherical rotor, f is the focal length of the camera, R is the radius of the spherical rotor, d' is the distance from the camera to the surface of the spherical rotor, and A and B are the dimensions of the image;

S82: the position relation between the sensor camera and the spherical rotor is as follows:

where the result is the actual rotation speed in cm/s, its direction being opposite to that of the optical flow.

The invention has the beneficial effects that:

Compared with traditional methods for identifying the rotating speed of a spherical rotor, the method of the invention is contactless, which reduces wear and loss; it uses only a single camera, calculating the speed magnitude and direction directly from the spatial position relation of that single optical flow camera, whereas other non-contact identification methods generally require two or more cameras working cooperatively.

Drawings

FIG. 1 is a schematic view of an optical flow sensor of the present invention acquiring a photograph of a spherical motor rotor;

FIG. 2 is a schematic representation of the spherical rotor spatial coordinate system of the present invention;

FIG. 3 is a schematic diagram of rotor image pyramid layering and optical flow calculation according to the present invention;

fig. 4 is a schematic diagram of the relative position of the spherical motor rotor and the camera according to the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

A method for acquiring the rotation speed of a spherical motor using a single optical flow sensor comprises the following steps:

S1: a photograph of the spherical motor rotor is obtained with the camera on the optical flow sensor, as shown in FIG. 1. Spherical motor applications do not require high-speed output, so the low speed keeps the difference between two successive frames small; the working environment of the motor generally does not change, and any change in the external illumination is slow and uniform.

The two acquired 2D frames, the earlier and the later, are denoted image I and image J respectively.

S2: the gray level processing of image data processing depends on a gray level function to convert an RGB image into a gray level image; the gray-scale image is converted into double-precision data, so that the processing of a computer is facilitated. Gray scale function:

E′y=0.299R+0.587G+0.114B

in the formula, R, G and B are values of pixel points of the RGB image, and E' y is a gray value of the pixel points;

the gray values of the two images I and J are I (x, y) and J (x, y), respectively, and the pixel coordinate of the top left vertex of the images is [ 00 ]]T。nx、nyRespectively the width and height of the image.

S3: image feature point selection, comprising:

s31: calculating the gradient matrix G of each pixel point in the image 1 and the minimum characteristic value lambda thereofm

In the formula px,pyTo calculate the position of a pixel point, wx,wySelecting the size of a window for synthesis, wherein the selection range is within the range of 2-7 pixels; i isx,IyThe gray value of the pixel point is x,yThe deviation of the direction.

S32: find the maximum lambda of all the minimum eigenvalues in image 1max

S33: retain minimum eigenvalues greater than 10% lambdamaxThe pixel point of (2);

s34: retention of lambdamLocal maximum pixel point, reserving lambda of pixel pointmLarger than any pixel in the neighborhood (size 3 x 3);

the feature point of the first image I is obtained as μ ═ μx μy]TIn the second image J, the corresponding feature point is v ═ vx vy]T
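A sketch of the selection in S31-S34, continuing the previous sketch, is given below. It uses OpenCV's minimum-eigenvalue map (cv2.cornerMinEigenVal) instead of forming G explicitly, and performs the 3 × 3 non-maximum suppression with a dilation comparison; the window size is an assumed value within the 2-7 pixel range mentioned above.

```python
import cv2
import numpy as np

def select_feature_points(gray, block_size=5):
    """Keep pixels whose minimum eigenvalue lambda_m of the gradient matrix G
    exceeds 10% of the global maximum and is a local maximum in a 3x3 neighborhood."""
    gray32 = gray.astype(np.float32)
    lam = cv2.cornerMinEigenVal(gray32, blockSize=block_size)   # lambda_m per pixel (S31)

    lam_max = lam.max()                                          # S32
    strong = lam > 0.10 * lam_max                                # S33

    local_max = lam == cv2.dilate(lam, np.ones((3, 3), np.uint8))  # S34: 3x3 local maxima
    ys, xs = np.nonzero(strong & local_max)
    return np.stack([xs, ys], axis=1).astype(np.float32)         # feature points (x, y)

points_i = select_feature_points(I)
```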

S4: the optical flow calculation is based on the gray level consistency assumption, namely:

the optical flow d minimizes the residual equation described above.
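The residual itself is not reproduced above; a conventional Lucas-Kanade form of it, written with the feature point μ = [μ_x μ_y]^T and the window half-widths w_x, w_y already defined (an assumed but standard formulation rather than the patent's exact expression), is:

\[
\varepsilon(d_x, d_y) \;=\; \sum_{x=\mu_x-w_x}^{\mu_x+w_x}\ \sum_{y=\mu_y-w_y}^{\mu_y+w_y} \bigl( I(x, y) - J(x + d_x,\, y + d_y) \bigr)^2
\]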

Performing L-K optical flow operation on the feature points, specifically comprising:

S41: as shown in FIG. 3, an image hierarchical pyramid is built:

To satisfy the above formula, some virtual (padding) elements are defined around the image:

I^(L-1)(-1, y) = I^(L-1)(0, y)

I^(L-1)(x, -1) = I^(L-1)(x, 0)

The width n_x^L and height n_y^L of image I^L satisfy the following two conditions:

An image pyramid is thus established, and each of the two frames yields its own pyramid sequence; for example, for an image of size 640 × 480, the first, second and third layers have sizes 320 × 240, 160 × 120 and 80 × 60.
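A minimal sketch of the pyramid construction is shown below. It halves each layer with a simple 2 × 2 block average; the exact low-pass kernel of the original formula is not reproduced here, so this averaging rule is an assumption. Starting from a 640 × 480 layer 0, it produces the 320 × 240, 160 × 120 and 80 × 60 layers of the example above.

```python
import numpy as np

def build_pyramid(gray, levels=3):
    """Return [layer 0, layer 1, ...] where each layer halves the previous one
    by averaging 2x2 blocks (a stand-in for the original low-pass kernel)."""
    pyramid = [gray]
    for _ in range(levels):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2   # crop to even size
        blocks = prev[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.mean(axis=(1, 3)))                # 2x2 average per block
    return pyramid

pyr_I = build_pyramid(I)
pyr_J = build_pyramid(J)
```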

S42: gray level consistency assumption for the L-th layer:

in the formula gLIs the initial luminous flux of the L-th layer, dLIs a residual displacement vector.

The optical flow value of the uppermost feature point has no reliable initial estimation value:

in the formulaThe optical flow value of the maximum layer;

and (3) iteratively calculating the optical flow value of the next layer of feature points:

gL-1=2(gL+dL)

in the formula dLIs the L-th layer residual error displacement vector;

if the convergence is finally achieved, a final optical flow value is obtained:

d=g0+d0

where d is the final solution to the optical flow.
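The coarse-to-fine recursion g^(L-1) = 2(g^L + d^L), d = g^0 + d^0 can be sketched as follows; solve_lk_at_layer is a hypothetical placeholder for the per-layer L-K solve described in S43, not a function defined in the original.

```python
import numpy as np

def pyramidal_flow(pyr_I, pyr_J, point, solve_lk_at_layer):
    """Propagate the flow estimate from the top pyramid layer down to layer 0:
    g is initialised to zero at the top (no reliable prior), and at each layer
    the residual d^L refines it via g^(L-1) = 2 (g^L + d^L)."""
    g = np.zeros(2)                                   # initial guess at the topmost layer
    top = len(pyr_I) - 1
    for L in range(top, -1, -1):
        p_L = point / (2.0 ** L)                      # feature position at layer L
        d_L = solve_lk_at_layer(pyr_I[L], pyr_J[L], p_L, g)   # residual displacement d^L
        if L > 0:
            g = 2.0 * (g + d_L)                       # pass the estimate to layer L-1
    return g + d_L                                    # final flow d = g^0 + d^0
```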

S43: the L-K optical flow iterative algorithm is an algorithm which needs to be run once for each pyramid layer, and the image gradient is calculated by using a central difference method:

and (3) obtaining an optical flow vector:

in the formulaFor an optimal optical flow vector, which minimizes the error function, G is a gradient matrix,comprises the following steps:

where k represents the number of iterations.

The optical flow of the image is computed with the pyramid-based L-K iterative calculation. The image pyramid generally has 3-5 layers, and the L-K iteration for each window at each pyramid layer generally finishes within 5 iterations.
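In practice the whole pyramidal L-K procedure is available as a single OpenCV call. The sketch below, continuing the earlier sketches, tracks the feature points from image I to image J; the 15 × 15 window size is an assumed value, while the pyramid depth and iteration cap follow the 3-5 layer and at-most-5-iteration guidance above.

```python
import cv2
import numpy as np

# calcOpticalFlowPyrLK expects 8-bit images and float32 point arrays.
img_i = np.clip(I, 0, 255).astype(np.uint8)
img_j = np.clip(J, 0, 255).astype(np.uint8)
pts_i = points_i.reshape(-1, 1, 2)

lk_params = dict(
    winSize=(15, 15),                 # integration window size (assumed value)
    maxLevel=3,                       # pyramid layers above layer 0
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 5, 0.03),  # <= 5 iterations
)

pts_j, status, err = cv2.calcOpticalFlowPyrLK(img_i, img_j, pts_i, None, **lk_params)

# Per-feature optical flow vectors d_i (in pixels), keeping only tracked points.
ok = status.ravel() == 1
flows = (pts_j - pts_i).reshape(-1, 2)[ok]
```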

S5: solving for the speed of the spherical rotor using the positional relationship, comprising:

s51: calculating the optical flow of the whole image from the optical flow of each feature point:

in the formulaIs the optical flow of the ith feature point, n is the number of feature points,is the optical flow of the image.

S52: the projection speed represented by the optical flow is calculated from the resolution of the optical flow sensor camera:

in the formulaAnd ppi is the resolution of the camera, delta t is the interval time of two frames of images before and after, and the unit of the calculation result is cm/s.
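A sketch of S51-S52, continuing the sketch above, averages the per-feature flows and converts the pixel displacement between frames into cm/s. The inch-to-centimeter factor of 2.54 and the example values of ppi and Δt are assumptions, since the patent's exact conversion formula is not reproduced here.

```python
import numpy as np

def projection_speed_cm_per_s(flows, ppi, dt):
    """Mean optical flow of the image (pixels/frame) -> projected speed in cm/s.
    flows: (n, 2) per-feature flow vectors; ppi: camera resolution in pixels per inch;
    dt: time between the two frames in seconds."""
    mean_flow = flows.mean(axis=0)                    # optical flow of the whole image
    pixels_per_frame = np.linalg.norm(mean_flow)      # magnitude in pixels
    inches = pixels_per_frame / ppi                   # pixels -> inches via ppi
    return 2.54 * inches / dt                         # inches -> cm, per second

v_proj = projection_speed_cm_per_s(flows, ppi=500, dt=1 / 30)   # example values
```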

S53: as shown in fig. 2, the distance from the camera to the imaging surface of the spherical rotor can be calculated by the image size:

wherein d is the distance from the camera to the image pickup surface of the spherical rotor, f is the focal length of the camera, R is the radius of the spherical rotor, d' is the distance from the camera to the surface of the spherical rotor, and A, B is the image size.
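The distance formula itself is not reproduced above; the underlying idea, under a simple pinhole-camera assumption rather than the patent's exact geometry, is that an object of physical size X at distance d images to a size x proportional to the focal length f, so the known rotor radius R and the measured image dimensions A, B allow d to be recovered:

\[
x = \frac{f\,X}{d} \quad\Longrightarrow\quad d = \frac{f\,X}{x}
\]

The patent's own expression additionally relates d to d', the distance from the camera to the rotor surface.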

S54: as shown in fig. 4, the position relationship between the sensor camera and the spherical rotor is as follows:

is the actual rotation speed in cm/s, direction and light flowThe direction is opposite.

In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed.
