Method and device for processing data

Document No.: 799594    Publication date: 2021-04-13

Note: This technology, "Method and device for processing data" (处理数据的方法和装置), was designed and created by Ye Aixue, Li Jianfei and Wen Feng on 2019-10-10. Abstract: The application relates to the field of artificial intelligence, and in particular provides a method and an apparatus that can eliminate laser point cloud distortion caused by the motion of a smart device, so as to improve the accuracy of the environmental information perceived by the smart device. In the technical scheme provided by the application, it is assumed that within one scanning period of the lidar the linear acceleration, front wheel steering angle, and orientation angle of the smart device remain unchanged; a planar motion model of the smart device is established according to this assumption; motion data of the smart device at a reference time and at a point cloud time are determined according to the planar motion model; and despinning processing is performed on the point cloud data of the smart device at the point cloud time according to the motion data of the smart device at the reference time and the motion data of the smart device at the point cloud time.

1. A method of processing data, comprising:

the method comprises the steps of obtaining a plurality of motion data of the intelligent device, wherein each motion data comprises pose data, and the motion data are collected by a motion data collection unit of the intelligent device at a plurality of moments in the same scanning period of the laser radar;

determining values of parameters in a plane motion model of the intelligent device according to the motion data, wherein the plane motion model is a polynomial-based kinematic model;

according to the plane motion model with the determined parameter values, determining motion data of the intelligent device at a reference moment, wherein the motion data of the intelligent device at the reference moment comprises pose data of the intelligent device at the reference moment;

determining motion data of the intelligent device at a target moment according to the plane motion model with the determined parameter values, wherein the motion data of the intelligent device at the target moment comprises pose data of the intelligent device at the target moment, and the target moment and the reference moment are located in the same scanning period;

and performing despinning processing on the point cloud data of the intelligent equipment at the target time according to the motion data of the intelligent equipment at the reference time and the motion data of the intelligent equipment at the target time, wherein the point cloud data obtained through the despinning processing is used for determining the environmental information of the intelligent equipment.

2. The method of claim 1, wherein the planar motion model, the position data of the smart device and the pose data of the smart device satisfy the following relationship:

x_t = a_x·t² + b_x·t + c_x

y_t = a_y·t² + b_y·t + c_y

θ_t = a_θ·t² + b_θ·t + c_θ

where x_t represents position data of the smart device in a first direction at time t; y_t represents position data of the smart device in a second direction at time t, and the first direction is perpendicular to the second direction; θ_t represents pose data of the smart device at time t; a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent parameters in the planar motion model.

3. The method of claim 1 or 2, wherein each of the plurality of motion data further comprises linear velocity data.

4. The method of claim 3, wherein in the planar motion model, the linear velocity data satisfies the following relationship:

v_t·cos(θ_t) = 2·a_x·t + b_x

v_t·sin(θ_t) = 2·a_y·t + b_y

where v_t represents linear velocity data of the smart device at time t; θ_t represents pose data of the smart device at time t; a_x, b_x, a_y, b_y represent the parameters in the planar motion model.

5. The method of any of claims 2 to 4, wherein each of the plurality of motion data further comprises angular velocity data.

6. The method of claim 5, wherein, in the planar motion model, the angular velocity data satisfies the following relation:

ω_t = 2·a_θ·t + b_θ

where ω_t represents angular velocity data of the smart device at time t; a_θ and b_θ represent the parameters in the planar motion model.

7. The method according to any one of claims 1 to 6, wherein the despinning the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time comprises:

determining a transformation relation between the motion data of the intelligent device at the target moment and the motion data of the intelligent device at the reference moment according to the motion data of the intelligent device at the reference moment and the motion data of the intelligent device at the target moment;

and performing despinning processing on the point cloud data of the intelligent equipment at the target moment according to the transformation relation.

8. An apparatus for processing data, comprising:

an acquisition unit, configured to acquire a plurality of motion data of the intelligent device, wherein each motion data in the plurality of motion data comprises pose data, and the plurality of motion data are acquired by a motion data acquisition unit of the intelligent device at a plurality of moments in the same scanning period of a laser radar of the intelligent device;

the determining unit is used for determining the value of a parameter in a plane motion model of the intelligent equipment according to the motion data, wherein the plane motion model is a polynomial-based kinematic model;

the determining unit is further used for determining motion data of the intelligent device at a reference moment according to the plane motion model of which the parameter value is determined, wherein the motion data of the intelligent device at the reference moment comprises pose data of the intelligent device at the reference moment;

the determining unit is further configured to determine, according to the plane motion model with the determined parameter values, motion data of the intelligent device at a target time, where the motion data of the intelligent device at the target time includes pose data of the intelligent device at the target time, and the target time and the reference time are located in the same scanning cycle;

and the despinning unit is used for performing despinning processing on the point cloud data of the intelligent equipment at the target time according to the motion data of the intelligent equipment at the reference time and the motion data of the intelligent equipment at the target time, and the point cloud data obtained through the despinning processing is used for determining the environmental information of the intelligent equipment.

9. The apparatus of claim 8, wherein, in the planar motion model, the position data of the smart device and the pose data of the smart device satisfy the following relation:

x_t = a_x·t² + b_x·t + c_x

y_t = a_y·t² + b_y·t + c_y

θ_t = a_θ·t² + b_θ·t + c_θ

where x_t represents position data of the smart device in a first direction at time t; y_t represents position data of the smart device in a second direction at time t, and the first direction is perpendicular to the second direction; θ_t represents pose data of the smart device at time t; a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent the parameters in the planar motion model.

10. The apparatus of claim 8 or 9, wherein each of the plurality of motion data further comprises linear velocity data.

11. The apparatus of claim 10, wherein in the planar motion model, the linear velocity data satisfies the following relationship:

v_t·cos(θ_t) = 2·a_x·t + b_x

v_t·sin(θ_t) = 2·a_y·t + b_y

where v_t represents linear velocity data of the smart device at time t; θ_t represents pose data of the smart device at time t; a_x, b_x, a_y, b_y represent the parameters in the planar motion model.

12. The apparatus of any of claims 8 to 11, wherein each of the plurality of motion data further comprises angular velocity data.

13. The apparatus of claim 12, wherein, in the planar motion model, the angular velocity data satisfies the following relation:

ω_t = 2·a_θ·t + b_θ

where ω_t represents angular velocity data of the smart device at time t; a_θ and b_θ represent the parameters in the planar motion model.

14. The apparatus according to any one of claims 8 to 13, wherein the despinning unit is specifically configured to:

determining a transformation relation between the motion data of the intelligent device at the target moment and the motion data of the intelligent device at the reference moment according to the motion data of the intelligent device at the reference moment and the motion data of the intelligent device at the target moment;

and performing despinning processing on the point cloud data of the intelligent equipment at the target moment according to the transformation relation.

15. An apparatus for processing data, comprising a processor and a memory, the memory for storing program instructions, the processor for invoking the program instructions to perform the method of any of claims 1-7.

16. A computer-readable storage medium, characterized in that the computer-readable medium stores program code for execution by a device, the program code comprising instructions for performing the method of any of claims 1 to 7.

17. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 1 to 7.

Technical Field

The present application relates to the field of artificial intelligence, and more particularly, to a method and apparatus for processing data.

Background

The lidar (light detection and ranging) can capture the basic shape features and abundant local details of a target, offers high reliability and measurement accuracy, and is now widely used for environment perception by smart devices (such as unmanned vehicles and robots).

A scanning lidar, for example, arranges a plurality of lasers in a vertical column and rotates them 360 degrees around an axis; each laser scans a plane, and the planes stacked vertically form a three-dimensional image. Specifically, the lidar detects a target by emitting a laser beam and acquires point cloud data by collecting the reflected beam. From these point cloud data an accurate three-dimensional image can be generated.

Typically, a single frame of laser point cloud data is not acquired instantaneously. Most lidars currently scan at frequencies from 5 hertz (Hz) to 20 Hz. Taking a scanning frequency of 10 Hz as an example, the acquisition time of a single frame of point cloud data is 100 milliseconds (ms). During the acquisition of that single frame, however, the smart device may move, and this motion can distort the acquired laser point cloud data; for example, the linear velocity and angular velocity of the smart device both contribute to the distortion.
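For concreteness, the relation between scanning frequency and per-point capture time can be sketched as follows. This is an illustrative calculation, not part of the application; the function names are invented, and it assumes points are emitted uniformly over one revolution.

```python
# Illustrative sketch: per-point acquisition times within one lidar scan,
# assuming points are captured uniformly over a revolution.

def scan_period_ms(frequency_hz: float) -> float:
    """Duration of one full revolution in milliseconds."""
    return 1000.0 / frequency_hz

def point_timestamps(t_start_ms: float, frequency_hz: float, num_points: int):
    """Timestamps (ms) at which each point of one frame is captured."""
    period = scan_period_ms(frequency_hz)
    return [t_start_ms + period * i / num_points for i in range(num_points)]

# A 10 Hz scan spans 100 ms, so the first and last points of a frame are
# captured almost 100 ms apart -- at 20 m/s the vehicle moves about 2 m
# in that window, which is the source of the distortion described above.
```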

Distortion of the laser point cloud data may reduce the accuracy of the sensed environmental information, so it is necessary to eliminate the laser point cloud data distortion caused by the movement of the smart device.

Therefore, how to eliminate the laser point cloud data distortion caused by the movement of the intelligent device becomes a problem to be solved urgently.

Summary of the application

The application provides a method and a device for processing data, which can eliminate laser point cloud data distortion caused by movement of intelligent equipment and improve accuracy of environmental information perceived by the intelligent equipment.

In a first aspect, the present application provides a method of processing data, the method comprising: the method comprises the steps of obtaining a plurality of motion data of the intelligent device, wherein each motion data comprises pose data, and the motion data are collected by a motion data collection unit of the intelligent device in the same scanning period of the laser radar; determining values of parameters in a planar motion model of the smart device according to the plurality of motion data, wherein the planar motion model is a polynomial-based kinematic model; according to the plane motion model with the determined parameter values, determining motion data of the intelligent device at a reference moment, wherein the motion data of the intelligent device at the reference moment comprises pose data of the intelligent device at the reference moment; determining motion data of the intelligent device at the target moment according to the plane motion model with the determined parameter values, wherein the motion data of the intelligent device at the target moment comprises pose data of the intelligent device at the target moment, and the target moment and the reference moment are located in the same scanning period; and performing despinning processing on the point cloud data of the intelligent equipment at the target time according to the motion data of the intelligent equipment at the reference time and the motion data of the intelligent equipment at the target time, wherein the point cloud data obtained through the despinning processing is used for determining the environmental information of the intelligent equipment.

In this method, the values of the parameters in the planar motion model are determined from motion data collected by other sensors on the smart device. The motion data of the smart device at the reference time and at the target time can then be determined from the planar motion model with its parameter values fixed, and the point cloud data of the smart device at the target time can be despun according to that motion data. This ultimately eliminates the laser point cloud distortion caused by the motion of the smart device and improves the accuracy of the environmental information perceived by the smart device.

In addition, because the planar motion model of the smart device is polynomial-based, the model itself is simpler, which reduces the amount of computation needed to determine its parameters and to compute the motion data at the reference time and the target time from it, and in turn saves computation latency.

With reference to the first aspect, in a first possible implementation manner, in the planar motion model, the position data of the smart device and the pose data of the smart device satisfy the following relation:

x_t = a_x·t² + b_x·t + c_x

y_t = a_y·t² + b_y·t + c_y

θ_t = a_θ·t² + b_θ·t + c_θ

where x_t represents position data of the smart device in a first direction at time t; y_t represents position data of the smart device in a second direction at time t, and the first direction is perpendicular to the second direction; θ_t represents pose data of the smart device at time t; a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent the parameters in the planar motion model.
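As an illustration only (the helper and its parameter names are invented, not from the application), the quadratic model above can be evaluated directly to obtain the pose at any time t within the scanning period:

```python
# Hypothetical helper: evaluate the quadratic planar motion model.
# The nine coefficients correspond to a_x, b_x, c_x, a_y, b_y, c_y,
# a_theta, b_theta, c_theta in the relations above.

def pose_at(t, ax, bx, cx, ay, by, cy, a_th, b_th, c_th):
    """Return (x_t, y_t, theta_t) of the model at time t."""
    x = ax * t ** 2 + bx * t + cx
    y = ay * t ** 2 + by * t + cy
    theta = a_th * t ** 2 + b_th * t + c_th
    return x, y, theta
```

Because each coordinate is a plain quadratic in t, evaluating the pose at the reference time and at each point cloud time is a constant-time operation.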

With reference to the first aspect, in a second possible implementation manner, each of the plurality of motion data further includes linear velocity data.

With reference to the second possible implementation manner, in a third possible implementation manner, in the planar motion model, the linear velocity data of the smart device satisfies the following relation:

v_t·cos(θ_t) = 2·a_x·t + b_x

v_t·sin(θ_t) = 2·a_y·t + b_y

where v_t represents linear velocity data of the smart device at time t; θ_t represents pose data of the smart device at time t; a_x, b_x, a_y, b_y represent the parameters in the planar motion model.
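A short numeric sketch (an observation of this edit, not stated in the application; the function names are invented) shows that this relation is simply the time derivative of the position polynomial x_t = a_x·t² + b_x·t + c_x, since dx/dt is the velocity component v_t·cos(θ_t) in the first direction:

```python
# Check that 2*a_x*t + b_x matches the numeric derivative of
# x(t) = a_x*t^2 + b_x*t + c_x via a central finite difference.

def dx_dt_numeric(ax, bx, cx, t, h=1e-6):
    x = lambda s: ax * s ** 2 + bx * s + cx
    return (x(t + h) - x(t - h)) / (2 * h)

def dx_dt_model(ax, bx, t):
    return 2 * ax * t + bx
```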

With reference to the first aspect or any one of the foregoing possible implementation manners, in a fourth possible implementation manner, each of the plurality of pieces of motion data further includes angular velocity data.

With reference to the fourth possible implementation manner, in a fifth possible implementation manner, in the plane motion model, the angular velocity of the smart device satisfies the following relation:

ω_t = 2·a_θ·t + b_θ

where ω_t represents the angular velocity data of the smart device at time t; a_θ and b_θ represent the parameters in the planar motion model.

With reference to the first aspect or any one of the foregoing possible implementation manners, in a sixth possible implementation manner, the performing despinning on the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time includes: determining a transformation relation between the motion data of the smart device at the target time and the motion data of the smart device at the reference time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time; and performing despinning processing on the point cloud data of the smart device at the target time according to the transformation relation.
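For illustration, the two steps above can be sketched as a planar (SE(2)) change of frames: first map a point measured in the sensor frame at the target time into the world frame using the target-time pose, then map it into the sensor frame at the reference time using the reference pose. This is an assumed formulation with invented names, not necessarily the application's exact computation:

```python
import math

def despin_point(p, pose_target, pose_ref):
    """Despin one planar point.

    p: (x, y) measured in the sensor frame at the target time.
    pose_target, pose_ref: (x, y, theta) poses from the motion model.
    Returns the point expressed in the sensor frame at the reference time.
    """
    xt, yt, tht = pose_target
    xr, yr, thr = pose_ref
    # target-time frame -> world frame (rotate by theta_t, then translate)
    wx = xt + p[0] * math.cos(tht) - p[1] * math.sin(tht)
    wy = yt + p[0] * math.sin(tht) + p[1] * math.cos(tht)
    # world frame -> reference-time frame (inverse rigid transform)
    dx, dy = wx - xr, wy - yr
    return (dx * math.cos(thr) + dy * math.sin(thr),
            -dx * math.sin(thr) + dy * math.cos(thr))
```

Applying this to every point of a frame, each with the pose evaluated at its own capture time, yields the despun point cloud.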

In a second aspect, the present application provides an apparatus for processing data, the apparatus comprising: an acquisition unit, configured to acquire a plurality of motion data of the smart device, wherein each of the plurality of motion data comprises pose data, and the plurality of motion data are collected by a motion data collection unit of the smart device in the same scanning period of the lidar; a determining unit, configured to determine values of parameters in a planar motion model of the smart device according to the plurality of motion data, wherein the planar motion model is a polynomial-based kinematic model; the determining unit is further configured to determine motion data of the smart device at a reference time according to the planar motion model with the determined parameter values, wherein the motion data of the smart device at the reference time comprises pose data of the smart device at the reference time; the determining unit is further configured to determine motion data of the smart device at a target time according to the planar motion model with the determined parameter values, wherein the motion data of the smart device at the target time comprises pose data of the smart device at the target time, and the target time and the reference time are located in the same scanning period; and a despinning unit, configured to perform despinning processing on the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time, wherein the point cloud data obtained through the despinning processing is used for determining the environmental information of the smart device.

In this apparatus, the values of the parameters in the planar motion model are determined from motion data collected by other sensors on the smart device. The motion data of the smart device at the reference time and at the target time can then be determined from the planar motion model with its parameter values fixed, and the point cloud data of the smart device at the target time can be despun according to that motion data. This ultimately eliminates the laser point cloud distortion caused by the motion of the smart device and improves the accuracy of the environmental information perceived by the smart device.

In addition, because the planar motion model of the smart device is polynomial-based, the model itself is simpler, which reduces the amount of computation needed to determine its parameters and to compute the motion data at the reference time and the target time from it, and in turn saves computation latency.

With reference to the second aspect, in a first possible implementation manner, in the planar motion model, the position data of the smart device and the pose data of the smart device satisfy the following relation:

x_t = a_x·t² + b_x·t + c_x

y_t = a_y·t² + b_y·t + c_y

θ_t = a_θ·t² + b_θ·t + c_θ

where x_t represents position data of the smart device in a first direction at time t; y_t represents position data of the smart device in a second direction at time t, and the first direction is perpendicular to the second direction; θ_t represents pose data of the smart device at time t; a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent the parameters in the planar motion model.

With reference to the second aspect, in a second possible implementation manner, each of the plurality of motion data further includes linear velocity data.

With reference to the second possible implementation manner, in a third possible implementation manner, in the planar motion model, the linear velocity data of the smart device satisfies the following relation:

v_t·cos(θ_t) = 2·a_x·t + b_x

v_t·sin(θ_t) = 2·a_y·t + b_y

where v_t represents linear velocity data of the smart device at time t; θ_t represents pose data of the smart device at time t; a_x, b_x, a_y, b_y represent the parameters in the planar motion model.

With reference to the second aspect or any one of the foregoing possible implementation manners, in a fourth possible implementation manner, each of the plurality of pieces of motion data further includes angular velocity data.

With reference to the fourth possible implementation manner, in a fifth possible implementation manner, in the plane motion model, the angular velocity data of the smart device satisfies the following relation:

ω_t = 2·a_θ·t + b_θ

where ω_t represents the angular velocity data of the smart device at time t; a_θ and b_θ represent the parameters in the planar motion model.

With reference to the second aspect or any one of the foregoing possible implementation manners, in a sixth possible implementation manner, the despinning unit is specifically configured to: determine a transformation relation between the motion data of the smart device at the target time and the motion data of the smart device at the reference time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time; and perform despinning processing on the point cloud data of the smart device at the target time according to the transformation relation.

In a third aspect, an apparatus for processing data is provided, the apparatus comprising: a memory for storing a program; a processor configured to execute the program stored in the memory, and when the program stored in the memory is executed, the processor is configured to perform the method in any one of the implementations of the first aspect.

In a fourth aspect, a computer-readable medium is provided that stores instructions for execution by an apparatus for processing data, the instructions being for implementing the method in any one of the implementations of the first aspect.

In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the method of any one of the implementations of the first aspect.

In a sixth aspect, a chip is provided, where the chip includes a processor and a data interface, and the processor reads instructions stored in a memory through the data interface to execute the method in any one of the implementation manners in the first aspect.

Optionally, as an implementation manner, the chip may further include a memory, where instructions are stored in the memory, and the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to execute the method in any one of the implementation manners of the first aspect.

The chip may be a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).

In a seventh aspect, there is provided a smart device comprising the apparatus for processing data in any one of the second or third aspects.

In an eighth aspect, there is provided a server comprising the apparatus for processing data of any one of the second or third aspects.

Drawings

Fig. 1 is a schematic diagram of an application scenario of the technical solution of the embodiment of the present application.

FIG. 2 is a schematic flow chart diagram of a method of processing data according to one embodiment of the present application.

FIG. 3 is a schematic block diagram of an apparatus for processing data according to an embodiment of the present application.

Detailed Description

The technical solution in the present application will be described below with reference to the accompanying drawings.

The method of the embodiments of the present application can be used in any scenario in which ambient environment information needs to be perceived through sensors. For example, when a smart device enters an unknown working environment, it can use sensor information to efficiently and accurately build a map of the surrounding environment (mapping) and obtain its own position and attitude in that space (localization).

The smart device in the embodiments of the present application may be a robot, an autonomous vehicle, an unmanned aerial vehicle, a smart home device, a mobile phone terminal, and so on; the present application places no limitation on the smart device.

Fig. 1 is a schematic diagram of an application scenario to which the method and apparatus of the embodiments of the present application may be applied. The lidar 120 on the vehicle 100 arranges multiple laser beams in a vertical column and rotates them 360 degrees around an axis; each beam scans a plane, and the planes stacked vertically present a three-dimensional image.

Specifically, the laser radar 120 emits a laser beam to detect a target, and acquires point cloud data by collecting the reflected beam. These point cloud data can generate an accurate three-dimensional stereo image.

The duration of the 360-degree rotation of the laser radar 120 around the axis may be referred to as a scanning period, and the point cloud data acquired in one scanning period may be referred to as a frame of point cloud data.

While the lidar 120 acquires point cloud data, the motion data acquisition unit 140 on the vehicle 100 acquires motion data of the vehicle. The motion data may include pose data of the vehicle, and may further include linear velocity data and angular velocity data of the vehicle. The pose data of the vehicle includes position data and attitude data: the position data is the position of the vehicle in the global coordinate system, and the attitude data is the orientation angle of the vehicle.
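A minimal container for one such motion-data sample might look as follows. The field names are invented for illustration and are not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One motion-data sample collected within a scanning period."""
    t: float            # timestamp within the scanning period
    x: float            # position in the global frame, first axis
    y: float            # position in the global frame, second axis
    theta: float        # orientation angle (attitude data)
    v: float = 0.0      # optional linear velocity
    omega: float = 0.0  # optional angular velocity
```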

The motion data acquisition unit 140 may include a wheel speed meter, an inertial measurement unit (IMU), a real-time kinematic (RTK) carrier-phase differential positioning unit, and the like.

It should be understood that the vehicle 100 shown in fig. 1 may be replaced with any other intelligent device having a need for awareness of environmental information.

FIG. 2 is a schematic flow chart of a method of processing data according to one embodiment of the present application. It should be understood that fig. 2 shows steps or operations of the method, but these are only examples; embodiments of the present application may perform other operations or variations of the operations in fig. 2, not all of the steps need be performed, and the steps may be performed in other orders. The method shown in fig. 2 may include S210 to S260.

The method of fig. 2 may be performed by a smart device; that is, by a processing unit on the smart device. The processing unit may be a chip, a chip system, a controller, or a processor in the smart device. It should be understood that the processing unit of the smart device may have other names, which are not limited herein.

Alternatively, the method of fig. 2 may be performed by a server.

S210, acquiring a plurality of motion data of the intelligent device, wherein each motion data in the plurality of motion data comprises pose data, and the plurality of motion data are acquired in the same scanning period of the laser radar of the intelligent device.

The pose data of the intelligent device comprises position data and posture data of the intelligent device.

The plurality of motion data are collected in the same scanning cycle of the lidar of the smart device, which can be understood as: the plurality of motion data of the intelligent device are acquired by a sensor on the intelligent device in one period of the laser radar scanning surrounding environment information.

For example, the motion data acquisition unit 140 in fig. 1 acquires the plurality of motion data during one period in which the lidar 120 acquires point cloud data.

When the method illustrated in fig. 2 is performed by a smart device, acquiring a plurality of motion data of the smart device may include: the processing unit of the smart device obtains the plurality of motion data from the sensor of the smart device.

When the method shown in fig. 2 is executed by a server, acquiring a plurality of motion data of a smart device may include: the server receives the plurality of motion data from the smart device.

And S220, determining the values of parameters in a plane motion model of the intelligent equipment according to the motion data, wherein the plane motion model is a kinematic model based on a polynomial.

Determining the values of the parameters in the planar motion model from the plurality of motion data can be understood as: substituting the plurality of motion data into the planar motion model of the smart device and solving for the values of the parameters in the model.

It should be noted that when the linear acceleration, front wheel steering angle, and orientation angle of the smart device appear in the planar motion model, the values of these three variables can be treated as constants. This is because, in general, the linear acceleration, front wheel steering angle, and orientation angle of the smart device can be regarded as unchanged within one motion data acquisition cycle.

When the linear acceleration, front wheel steering angle, and orientation angle of the smart device appear in the planar motion model and their values are fixed as constants, the planar motion model can be simplified into a polynomial-based planar motion model. This reduces the complexity of computing the parameters of the model and the amount of computation required when using the model, which in turn saves computation latency. It should be understood that a polynomial-based planar motion model in the embodiments of the present application means that the one or more kinematic equations included in the model contain only polynomials.

In an exemplary planar motion model, the position data of the smart device and the attitude data of the smart device may satisfy formula (1):

x_t = a_x·t² + b_x·t + c_x
y_t = a_y·t² + b_y·t + c_y
θ_t = a_θ·t² + b_θ·t + c_θ    (1)

where x_t represents the position data of the smart device in a first direction at time t; y_t represents the position data of the smart device in a second direction at time t, the first direction being perpendicular to the second direction; θ_t represents the attitude data of the smart device at time t; and a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent the parameters in the planar motion model.

One example of the first direction is a direction directly in front of the movement of the smart device and one example of the second direction is a direction to the left of the smart device.

Since the position data and the attitude data in formula (1) are only quadratic functions of time t, substituting the position data and attitude data of the smart device at a plurality of times into formula (1) allows the parameters a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ of the planar motion model to be calculated quickly and simply.

Alternatively, formula (1) may be linearly transformed as required. Any expression obtained by a linear transformation of formula (1) also falls within the scope of the embodiments of the present application.

The derivation of formula (1) is described below, to explain how the planar motion model of the smart device comes to be represented by formula (1) in the present scheme so as to reduce the computational complexity.

In the method provided by the present application, the smart device is considered to perform planar motion while the lidar collects one frame of point cloud data. When the smart device performs planar motion, the corresponding plane motion model is shown as formula (2):

x_t = x_0 + ∫₀ᵗ v(τ)·cos θ(τ) dτ
y_t = y_0 + ∫₀ᵗ v(τ)·sin θ(τ) dτ
θ_t = θ_0 + ∫₀ᵗ ω(τ) dτ    (2)

where x_0, y_0, and θ_0 respectively represent the x-axis coordinate, the y-axis coordinate, and the orientation angle of the smart device at the initial time t_0, and x_0, y_0, θ_0 may be collectively referred to as the pose of the smart device at the initial time t_0; x_t, y_t, θ_t respectively represent the x-axis coordinate, the y-axis coordinate, and the orientation angle of the smart device at time t, and may be collectively referred to as the pose of the smart device at time t.

According to the Ackermann steering principle, the angular velocity of the smart device is:

ω(τ) = v(τ) / γ(τ) = v(τ)·tan(φ) / L    (3)

where φ is the steering angle of the front wheel of the smart device; ω(τ) represents the angular velocity of the smart device at time τ; v(τ) represents the velocity of the smart device at time τ; γ(τ) represents the turning radius of the smart device at time τ; and L represents the distance between the center point of the front wheel axle and the center point of the rear wheel axle of the smart device.

Substituting formula (3) into formula (2), the plane motion model of the smart device becomes formula (4):

x_t = x_0 + ∫₀ᵗ v(τ)·cos θ(τ) dτ
y_t = y_0 + ∫₀ᵗ v(τ)·sin θ(τ) dτ
θ_t = θ_0 + ∫₀ᵗ v(τ)·tan(φ)/L dτ    (4)

Generally, the acceleration of the smart device does not change within one scanning period of the lidar; even if it does change, the amount of change is usually very small. Therefore, the smart device can be considered to undergo uniformly accelerated motion within one scanning period of the lidar, that is, the acceleration is constant, and the velocity of the smart device satisfies formula (5):

v(τ) = v_0 + a·τ    (5)

where v_0 is the velocity of the smart device at time t_0, and a is the acceleration of the smart device.

Formula (6) can be obtained by substituting formula (5) into formula (4):

x_t = x_0 + ∫₀ᵗ (v_0 + a·τ)·cos θ(τ) dτ
y_t = y_0 + ∫₀ᵗ (v_0 + a·τ)·sin θ(τ) dτ
θ_t = θ_0 + ∫₀ᵗ (v_0 + a·τ)·tan(φ)/L dτ    (6)

The pose equations of the smart device in formula (6) are functions of time t, but they contain trigonometric functions and integrals, have many parameters, and take a complex form, making the parameters difficult to estimate. The pose equations can therefore be further simplified.

Generally, when the smart device moves at high speed, the planning-and-control characteristics dictate that an excessive change of the front wheel steering angle would easily cause the smart device to roll over. That is, the front wheel steering angle of the smart device typically changes very little during motion. Accordingly, the front wheel steering angle can be considered unchanged within one scanning period of the lidar. This allows the orientation angle θ_t of the smart device to be simplified; evaluating the integral in formula (6) with constant φ gives formula (7):

θ_t = θ_0 + tan(φ)/L·(v_0·t + (1/2)·a·t²)    (7)

As can be seen from formula (7), the orientation angle θ_t of the smart device becomes a quadratic function of time t, which has few parameters and a simple form and is therefore easy to compute.

In general, when the smart device performs planar motion, particularly high-speed planar motion, the change of its orientation angle within one scanning period of the lidar is very small. Therefore, the orientation angle θ(τ) of the smart device can be considered constant within one scanning period. With this assumption, the expressions for the position of the smart device simplify to formula (8):

x_t = x_0 + cos(θ_0)·(v_0·t + (1/2)·a·t²)
y_t = y_0 + sin(θ_0)·(v_0·t + (1/2)·a·t²)    (8)
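Under the stated assumptions (constant acceleration and constant orientation angle within one scanning period), the simplified closed form of formula (8) can be checked numerically against the integral model. The sketch below uses hypothetical motion values, not values from the embodiment:

```python
import math

# Hypothetical motion within one lidar scan (~0.1 s):
v0, a = 5.0, 0.5          # initial speed (m/s), constant acceleration (m/s^2)
theta0 = 0.2              # orientation angle, held constant per the assumption
x0, y0 = 1.0, 2.0         # initial position

def pose_closed_form(t):
    # Formula (8): x_t = x_0 + cos(theta_0)*(v0*t + a*t^2/2), similarly for y.
    s = v0 * t + 0.5 * a * t * t   # distance travelled
    return x0 + math.cos(theta0) * s, y0 + math.sin(theta0) * s

def pose_integrated(t, steps=10000):
    # Midpoint-rule integration of the integral model with theta(tau) = theta0.
    x, y = x0, y0
    dt = t / steps
    for i in range(steps):
        tau = (i + 0.5) * dt
        v = v0 + a * tau
        x += v * math.cos(theta0) * dt
        y += v * math.sin(theta0) * dt
    return x, y

xc, yc = pose_closed_form(0.1)
xi, yi = pose_integrated(0.1)
assert abs(xc - xi) < 1e-6 and abs(yc - yi) < 1e-6
```

The two agree to numerical precision, since with constant acceleration and constant heading the integral can be evaluated exactly.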

Through the above analysis, in the plane motion model of the smart device, the position data of the smart device and the attitude data of the smart device may satisfy formula (1), that is, the following relations:

x_t = a_x·t² + b_x·t + c_x
y_t = a_y·t² + b_y·t + c_y
θ_t = a_θ·t² + b_θ·t + c_θ    (1)

it should be understood that equation (2) is only an exemplary expression of the planar motion model of the smart device, and therefore equation (1) obtained according to equation (2) is also only an exemplary expression of the simplified planar motion model. The planar motion model of the intelligent device is not limited in the embodiment of the present application, and the planar motion model can be included in the protection scope of the present application as long as the linear acceleration, the front wheel rudder angle and the heading angle involved in establishing the planar motion model are all fixed as constants.

When estimating the values of the parameters in the planar motion model from the plurality of motion data, formula (1) may be written in the matrix form of formula (9):

A·x = b    (9)

where x = (a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ)ᵀ is the parameter vector; b is the vector stacking the pose data (x_{t_i}, y_{t_i}, θ_{t_i}) observed at the sampling times t_i; and A is the coefficient matrix whose rows place the basis (t_i², t_i, 1) in the columns of the corresponding parameters.

Using the least squares method, A·x = b can be solved as x = (AᵀA)⁻¹·Aᵀ·b. That is, substituting A and b into x = (AᵀA)⁻¹·Aᵀ·b yields x.

In the embodiment of the application, the estimation of the parameter value of the plane motion model of the intelligent equipment is realized by using a least square method, and the influence of the noise of the sensor on the calculation precision of the parameter can be effectively reduced.
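The least-squares fit described above can be sketched as follows. The parameter names, sample values, and noise level are illustrative only, not taken from the embodiment; the samples are generated from a known quadratic model so that the recovered parameters can be checked:

```python
import numpy as np

# Hypothetical pose samples (t_i, x_i, y_i, theta_i) within one scan period,
# generated from a known quadratic model plus small noise for illustration.
rng = np.random.default_rng(0)
true = {"ax": 0.1, "bx": 4.0, "cx": 1.0,
        "ay": 0.05, "by": 1.0, "cy": 2.0,
        "at": 0.02, "bt": 0.3, "ct": 0.2}
ts = np.linspace(0.0, 0.1, 20)
xs = true["ax"]*ts**2 + true["bx"]*ts + true["cx"] + rng.normal(0, 1e-4, ts.size)
ys = true["ay"]*ts**2 + true["by"]*ts + true["cy"] + rng.normal(0, 1e-4, ts.size)
th = true["at"]*ts**2 + true["bt"]*ts + true["ct"] + rng.normal(0, 1e-4, ts.size)

# Build A and b: one row per observed quantity, parameters ordered
# (a_x, b_x, c_x, a_y, b_y, c_y, a_theta, b_theta, c_theta).
rows, b = [], []
for t, x, y, q in zip(ts, xs, ys, th):
    basis = [t*t, t, 1.0]
    rows.append(basis + [0]*6)
    b.append(x)
    rows.append([0]*3 + basis + [0]*3)
    b.append(y)
    rows.append([0]*6 + basis)
    b.append(q)
A = np.array(rows)
b = np.array(b)

# x = (A^T A)^{-1} A^T b; np.linalg.lstsq is the numerically stable equivalent.
params, *_ = np.linalg.lstsq(A, b, rcond=None)
assert abs(params[2] - true["cx"]) < 1e-3  # recovered c_x near ground truth
```

Using `np.linalg.lstsq` instead of forming (AᵀA)⁻¹ explicitly avoids the ill-conditioning that squaring A can introduce.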

It should be understood that other methods may also be used in the embodiments of the present application to calculate the values of the parameters in the planar motion model, for example, optimization methods such as the gradient descent method, Newton's method, quasi-Newton methods, and the conjugate gradient method.

After the values of the parameters in the plane motion model are obtained through estimation according to a plurality of motion data of the intelligent equipment, the estimated values of the parameters are substituted into the plane motion model, and the values of the parameters are updated, so that the plane motion model with the determined parameter values is obtained.

Taking the plane motion model represented by formula (1) as an example, the values of a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ calculated in S220 are substituted into the model shown in formula (1), thereby obtaining a plane motion model with determined parameter values.

And S230, determining motion data of the intelligent device at a reference moment according to the plane motion model with the determined parameter values, wherein the motion data of the intelligent device at the reference moment can include pose data of the intelligent device at the reference moment.

Specifically, the time information of the reference time is substituted into the plane motion model of which the parameter value is determined, and the motion data of the intelligent device at the reference time is calculated.

Taking the plane motion model represented by formula (1) as an example, the values of a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ calculated in S220 are substituted into the model shown in formula (1), and the reference time is substituted into the model as t; the resulting x_t, y_t, and θ_t are the pose data of the smart device at the reference time.
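The evaluation step can be sketched as below; the parameter names and values are hypothetical stand-ins for the fitted model, and the reference time is taken as the start of the scanning period:

```python
def pose_at(t, p):
    """Evaluate the fitted planar motion model (formula (1)) at time t.

    p holds the nine fitted parameters (illustrative names, not from the text).
    """
    x = p["ax"] * t**2 + p["bx"] * t + p["cx"]
    y = p["ay"] * t**2 + p["by"] * t + p["cy"]
    theta = p["at"] * t**2 + p["bt"] * t + p["ct"]
    return x, y, theta

# Reference time chosen as the start of the scanning period (t = 0):
p = {"ax": 0.1, "bx": 4.0, "cx": 1.0,
     "ay": 0.05, "by": 1.0, "cy": 2.0,
     "at": 0.02, "bt": 0.3, "ct": 0.2}
x_ref, y_ref, th_ref = pose_at(0.0, p)
assert (x_ref, y_ref, th_ref) == (1.0, 2.0, 0.2)
```

The same function evaluated at each point cloud timestamp yields the target-time poses used in S240.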

In general, the reference time is a time within the same scanning period.

For example, the reference time may be a start time of one scanning cycle of the laser radar, or may be an end time of one scanning cycle.

And S240, determining motion data of the smart device at a target time according to the plane motion model with the determined parameter values, where the motion data of the smart device at the target time may include pose data of the smart device at the target time, and the target time and the reference time are located in the same scanning period.

Specifically, the time information of the target time is substituted into the plane motion model of which the parameter value is determined, and the motion data of the intelligent device at the target time is calculated.

Taking the plane motion model represented by formula (1) as an example, the values of a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ calculated in S220 are substituted into the model shown in formula (1), and the target time is substituted into the model as t; the resulting x_t, y_t, and θ_t are the pose data of the smart device at the target time.

The target time corresponds to the timestamp of one point cloud datum in the point cloud data set collected by the lidar. That is, the pose data of the smart device at the time of that timestamp is calculated. The target time may also be referred to as a point cloud time.

In general, the target time and the reference time are different times within the same scanning period.

And S250, performing despin processing on the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time, where the point cloud data obtained through the despin processing is used for determining the environment information of the smart device.

That is, the object information in the environment where the smart device is located may be determined from the point cloud data obtained through the despin processing in S250. For implementations of determining the object information in the environment of the smart device from point cloud data, reference may be made to the prior art, which is not described herein again.

According to the method, the parameter values of the plane motion model are determined from the motion data collected by the other sensors on the smart device. The motion data of the smart device at the reference time and at the target time can then be determined from the plane motion model with determined parameter values, and the point cloud data of the smart device at the target time can be despun accordingly. This finally eliminates the distortion of the laser point cloud data caused by the motion of the smart device and improves the accuracy of the environment information perceived by the smart device.

In general, for each point cloud datum collected by the lidar within one scanning period, a corresponding pose datum can be calculated in S240, so that pose data and point cloud data are in one-to-one correspondence. In this way, in S250, the point cloud data at each target time can be despun based on the pose data calculated in S230 and each pose datum calculated in S240. For specific implementations of the despin processing, reference may be made to the prior art.

In some possible implementations, performing despin processing on the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time may include: determining, from those motion data, a transformation between the motion data of the smart device at the target time and the motion data of the smart device at the reference time; and performing despin processing on the point cloud data of the smart device at the target time according to the transformation.

If the motion data of the smart device at the reference time is denoted T_base and the motion data of the smart device at the target time is denoted T_t, the transformation between the motion data at the target time and the motion data at the reference time satisfies formula (10):

T = T_base⁻¹·T_t    (10)

If the extrinsic parameters of the lidar are denoted T_{L-to-V}, the despin transformation T of the lidar point cloud data is shown in formula (11).

Denoting the point cloud data of the lidar at all target times before despinning as P_ori and the point cloud data at all times after despinning as P_untwist, the despin expression based on the despin transformation is shown in formula (12):

P_untwist = T·P_ori    (12)
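A minimal planar (SE(2)) sketch of applying the transformation between the reference-time and target-time poses to points. It assumes the points are already expressed in the smart device frame, so the lidar extrinsics T_{L-to-V} are omitted for brevity; all pose values are illustrative:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D pose matrix for (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

T_base = se2(1.0, 2.0, 0.2)    # pose at the reference time
T_t = se2(1.5, 2.1, 0.21)      # pose at one point cloud (target) time

# Formula (10): transform taking target-time coordinates into the
# reference-time frame, T = T_base^{-1} * T_t.
T = np.linalg.inv(T_base) @ T_t

# Formula (12): P_untwist = T * P_ori for each point (homogeneous coordinates,
# one point per column).
P_ori = np.array([[3.0, 0.5, 1.0],
                  [2.0, -1.0, 1.0]]).T
P_untwist = T @ P_ori
assert P_untwist.shape == (3, 2)
```

With the extrinsics included, the same pattern applies with points in the lidar frame and T conjugated by T_{L-to-V}, as formula (11) indicates.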

Generally, the raw point cloud data obtained by lidar scanning mainly includes: the distance from the scanning point to the lidar; the pitch angle of the scan line on which the scanning point lies, i.e., the angle of the scan line in the vertical direction; and the heading angle of the scan line in the horizontal direction.

In this case, when despinning the point cloud, the raw point cloud data can first be converted into the lidar Cartesian coordinate system and then into the smart device coordinate system, where the smart device coordinate system usually takes the center point of the rear wheel axle as the coordinate origin, with the x direction horizontal forward, the y direction horizontal to the left, and the z direction vertical upward. The despin processing of formula (12) is then performed. The lidar Cartesian coordinate system is referred to as the lidar coordinate system for short.

The origin of the lidar coordinate system is usually the center of the lidar, and the x-axis direction of the lidar coordinate system usually points to the opposite direction of an output cable of the lidar; if the laser radar is installed in a manner of pointing to the front of the automobile, the y-axis direction of the laser radar coordinate system usually points to the left side of the automobile; the z-axis of the lidar coordinate system is typically pointed skyward.

The origin of the intelligent device coordinate system is usually the center of the intelligent device, and the x-axis direction of the intelligent device coordinate system usually points to the motion direction of the intelligent device; the y-axis direction of the smart device coordinate system is generally directed to the left side of the automobile; the z-axis of the smart device coordinate system is typically pointed skyward.

For example, the three-dimensional raw point cloud data collected by the lidar may be converted to the lidar Cartesian coordinate system by formula (13):

x = ρ·cos α·cos θ
y = ρ·cos α·sin θ
z = ρ·sin α    (13)

where ρ is the distance from the scanning point corresponding to the point cloud datum to the lidar; α is the pitch angle of the scan line on which the scanning point lies, i.e., the angle of the scan line in the vertical direction; and θ is the heading angle of the scanning point in the horizontal direction.
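Assuming formula (13) is the usual spherical-to-Cartesian conversion consistent with the definitions above, the conversion can be sketched as:

```python
import math

def raw_to_cartesian(rho, alpha, theta):
    """Convert a raw lidar return to lidar Cartesian coordinates (formula (13)):
    rho   - range from the scanning point to the lidar,
    alpha - pitch angle of the scan line (vertical direction),
    theta - heading angle in the horizontal direction."""
    x = rho * math.cos(alpha) * math.cos(theta)
    y = rho * math.cos(alpha) * math.sin(theta)
    z = rho * math.sin(alpha)
    return x, y, z

# A return straight ahead in the horizontal plane lands on the x axis:
x, y, z = raw_to_cartesian(10.0, 0.0, 0.0)
assert abs(x - 10.0) < 1e-12 and abs(y) < 1e-12 and abs(z) < 1e-12
```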

In the embodiments of the present application, because the linear acceleration, the front wheel steering angle, and the orientation angle of the smart device are kept constant in the plane motion model, the model can be simplified, which reduces the complexity of calculating the motion data of the smart device at the point cloud times, reduces the amount of computation, and lowers the latency of determining the environment information.

In an embodiment of the present application, optionally, each of the plurality of motion data of the smart device may further include linear velocity data and/or angular velocity data. That is, the point cloud despin transformation of the smart device is determined not only from the pose data of the smart device but also from its linear velocity and/or angular velocity. This can greatly improve the accuracy of the despun point cloud data and, in turn, the accuracy of the environment information.

For example, the relationship between the linear velocity data of the smart device and time may be added to the planar motion model shown in formula (1). In that case, the linear velocity data of the smart device and time satisfy the relations in formula (14):

v_t·cos(θ_t) = 2·a_x·t + b_x
v_t·sin(θ_t) = 2·a_y·t + b_y    (14)

where v_t represents the linear velocity data of the smart device at time t, and a_x, b_x, a_y, b_y represent parameters in the planar motion model.

Accordingly, the plane motion model extended with formula (14) can also be written in the matrix form A·x = b of formula (15), where x is the same parameter vector as in formula (9), b additionally stacks the measured quantities v_{t_i}·cos(θ_{t_i}) and v_{t_i}·sin(θ_{t_i}), and A gains the corresponding rows with basis (2·t_i, 1, 0) in the columns of the respective parameters.

Alternatively, x in formula (15) can be solved by the least squares method, i.e., x = (AᵀA)⁻¹·Aᵀ·b, which yields the parameters of the planar motion model. Using the least squares method for parameter estimation of the kinematic model can effectively reduce the influence of sensor noise on the calculation accuracy and thus improve the accuracy of the environment information.
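A hypothetical sketch of the augmented system of formula (15): since v_t and θ_t are measured, the left-hand sides of formula (14) are known constants and contribute two extra linear rows per sample. The synthetic data below follow the model's simplifying assumptions (constant acceleration, constant orientation angle) so the fit can be checked; all values are illustrative:

```python
import numpy as np

# Consistent synthetic motion within one scan: constant acceleration a,
# constant orientation angle theta0 (the model's simplifying assumptions).
v0, a, theta0 = 5.0, 0.5, 0.2
x0, y0 = 1.0, 2.0
ts = np.linspace(0.0, 0.1, 10)
s = v0 * ts + 0.5 * a * ts**2          # distance travelled
xs = x0 + np.cos(theta0) * s
ys = y0 + np.sin(theta0) * s
th = np.full_like(ts, theta0)
vs = v0 + a * ts                       # measured speed v_t

# Rows: three pose rows per sample as in formula (9), plus two velocity rows
# from formula (14): v_t*cos(theta_t) = 2*a_x*t + b_x and
#                    v_t*sin(theta_t) = 2*a_y*t + b_y.
rows, b = [], []
for t, x, y, q, v in zip(ts, xs, ys, th, vs):
    basis = [t*t, t, 1.0]
    rows.append(basis + [0]*6)
    b.append(x)
    rows.append([0]*3 + basis + [0]*3)
    b.append(y)
    rows.append([0]*6 + basis)
    b.append(q)
    rows.append([2*t, 1.0, 0.0] + [0]*6)
    b.append(v * np.cos(q))
    rows.append([0]*3 + [2*t, 1.0, 0.0] + [0]*3)
    b.append(v * np.sin(q))

params, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
assert abs(params[1] - v0 * np.cos(theta0)) < 1e-6  # b_x = v0*cos(theta0)
```

The extra rows constrain the derivative of the position polynomials, which tightens the estimates of a_x, b_x, a_y, b_y beyond what pose samples alone provide.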

For example, the relationship between the angular velocity data of the smart device and time may also be added to the planar motion model shown in formula (1). In that case, one set of relations among the pose data, the linear velocity data, the angular velocity data of the smart device, and time is shown in formula (16), which extends formula (15) with the row ω_t = 2·a_θ·t + b_θ, where ω_t represents the angular velocity data of the smart device at time t.

Alternatively, the matrix form of formula (16) may also be expressed as A·x = b, where x is the same parameter vector as in formula (9), b additionally stacks the measured angular velocities ω_{t_i}, and A gains the corresponding rows with basis (2·t_i, 1, 0) in the columns of a_θ and b_θ.

Alternatively, x in formula (16) can be solved by the least squares method, i.e., x = (AᵀA)⁻¹·Aᵀ·b, which yields the parameters of the planar motion model. Using the least squares method for parameter estimation of the kinematic model can effectively reduce the influence of sensor noise on the calculation accuracy and thus improve the accuracy of the environment information.

It should be understood that the specific examples in the embodiments of the present application are for the purpose of promoting a better understanding of the embodiments of the present application and are not intended to limit the scope of the embodiments of the present application.

It should also be understood that the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic thereof, and should not constitute any limitation to the implementation process of the embodiments of the present application.

It should also be understood that in the embodiment of the present application, "pre-configuring", "pre-storing" may be implemented by pre-saving corresponding codes, tables, or other manners that may be used to indicate related information in a device (for example, including a smart device and a cloud server), and the present application is not limited to a specific implementation manner thereof.

It is also to be understood that the terminology and/or descriptions of the various embodiments herein are consistent with one another and may be mutually referenced unless otherwise specified or logically conflicting, and that the technical features of different embodiments may be combined to form new embodiments based on their inherent logical relationships.

It is understood that, in the embodiments of the present application, the method implemented by the smart device or the cloud server may also be implemented by a component (e.g., a chip or a circuit) that can be configured in the smart device or the cloud server.

The method provided by the embodiment of the present application is described in detail above with reference to fig. 1 and fig. 2. Hereinafter, the apparatus provided in the embodiment of the present application will be described in detail with reference to fig. 3. It should be understood that the description of the apparatus embodiment and the description of the method embodiment correspond to each other, and therefore, for the sake of brevity, some contents that are not described in detail may be referred to as the above method embodiment.

Fig. 3 is a schematic structural diagram of an apparatus 300 for processing data according to an embodiment of the present application. It should be understood that the apparatus 300 may implement the method illustrated in fig. 2. The apparatus may be a smart device or a server, or may be a component (e.g., a chip or a circuit) configurable in the smart device or the server.

The apparatus 300 may include means for performing various ones of the operations in the preceding method embodiments. And, each unit in the apparatus 300 is for implementing a corresponding flow of any of the aforementioned methods.

In one design, the apparatus 300 includes: an acquisition unit 310, a determination unit 330, and a despin unit 340.

An obtaining unit 310, configured to obtain multiple pieces of motion data of the smart device, where each piece of motion data in the multiple pieces of motion data includes pose data, and the multiple pieces of motion data are collected by a motion data collection unit of the smart device in a same scanning cycle of the lidar.

A determining unit 330, configured to determine, according to the plurality of motion data, a value of a parameter in a planar motion model of the smart device, where the planar motion model is a polynomial-based kinematic model.

The determining unit 330 is further configured to determine, according to the plane motion model with the determined parameter values, motion data of the smart device at a reference time, where the motion data of the smart device at the reference time includes pose data of the smart device at the reference time.

The determining unit 330 is further configured to determine, according to the plane motion model with the determined parameters, motion data of the smart device at a target time, where the motion data of the smart device at the target time includes pose data of the smart device at the target time, and the target time and the reference time are located in the same scanning cycle.

A despin unit 340, configured to perform despin processing on the point cloud data of the smart device at the target time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time, where the point cloud data obtained through the despin processing is used to determine the environment information of the smart device.

Optionally, in the planar motion model, the position data of the smart device and the pose data of the smart device satisfy the following relations:

x_t = a_x·t² + b_x·t + c_x

y_t = a_y·t² + b_y·t + c_y

θ_t = a_θ·t² + b_θ·t + c_θ

where x_t represents the position data of the smart device in a first direction at time t; y_t represents the position data of the smart device in a second direction at time t, the first direction being perpendicular to the second direction; θ_t represents the pose data of the smart device at time t; and a_x, b_x, c_x, a_y, b_y, c_y, a_θ, b_θ, c_θ represent the parameters in the planar motion model.

Optionally, each of the plurality of motion data further comprises linear velocity data.

When each piece of motion data in the plurality of pieces of motion data further includes linear velocity data, optionally, in the planar motion model, the linear velocity data of the smart device satisfies the following relations:

v_t·cos(θ_t) = 2·a_x·t + b_x

v_t·sin(θ_t) = 2·a_y·t + b_y

where v_t represents the linear velocity data of the smart device at time t; θ_t represents the pose data of the smart device at time t; and a_x, b_x, a_y, b_y represent the parameters in the planar motion model.

Optionally, each of the plurality of motion data further comprises angular velocity data.

When each of the plurality of motion data further includes angular velocity data, optionally, in the planar motion model, the angular velocity data satisfies the following relation:

ω_t = 2·a_θ·t + b_θ

where ω_t represents the angular velocity data of the smart device at time t, and a_θ, b_θ represent the parameters in the planar motion model.

Optionally, the despin unit 340 is specifically configured to: determine a transformation relation between the motion data of the smart device at the target time and the motion data of the smart device at the reference time according to the motion data of the smart device at the reference time and the motion data of the smart device at the target time; and perform despin processing on the point cloud data of the smart device at the target time according to the transformation relation.

It should be understood that the specific processes of the units for executing the corresponding steps are already described in detail in the above method embodiments, and therefore, for brevity, detailed descriptions thereof are omitted.

It should also be understood that when the apparatus 300 is a server, a communication unit may be further included for transmitting the environment information to the smart device.

It should also be understood that the apparatus 300 may further include a storage unit for storing various data.

It should also be understood that, when the apparatus 300 is a smart device, or a chip or chip system configured in a smart device, the obtaining unit 310 in the apparatus 300 may be a data transmission interface, an interface circuit, a data transmission circuit, or a pin; the determining unit 330 and the despin unit 340 may be a processor, a processing circuit, or a logic circuit; and the storage unit may be a memory or a storage circuit.

It should also be understood that, when the apparatus 300 is a server, the obtaining unit 310 in the apparatus 300 may be a receiver or a transceiver, the determining unit 330 and the despin unit 340 may be a processor, a processing circuit, or a logic circuit, and the storage unit may be a memory or a storage circuit.

It should be understood that, when the apparatus 300 is a chip, the chip may be a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Micro Controller Unit (MCU), a Programmable Logic Device (PLD), or other integrated chips.

It should be understood that each unit in the present application may also be referred to as a corresponding module; for example, the obtaining unit may also be referred to as an obtaining module, the determining unit as a determining module, and the despin unit as a despin module.

In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here.

It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described above may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.

It will be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate SDRAM, enhanced SDRAM, SLDRAM, Synchronous Link DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.

According to the method provided by the embodiment of the present application, the present application further provides a computer program product, which includes: computer program code which, when run on a computer, causes the computer to perform the method of any of the preceding method embodiments.

According to the method provided by the embodiment of the present application, the present application further provides a computer-readable medium, which stores instructions that, when executed on a computer, cause the computer to perform the method in any one of the method embodiments.

According to the method provided by the embodiment of the present application, the present application also provides a system, which includes the aforementioned apparatus 300.

As used in this specification, the terms "unit," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a unit may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer.

Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

In the above embodiments, the functions of the functional units may be fully or partially implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the functions may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
