Robot indoor positioning method, device, equipment and storage medium

Document No.: 1937584  Publication date: 2021-12-07

Reading note: This technology, "Robot indoor positioning method, device, equipment and storage medium", was designed and created by 沈维国 on 2021-08-19. The invention discloses a robot indoor positioning method, device, equipment and storage medium. The method comprises: acquiring a current positioning image collected by the robot, where the current positioning image comprises at least two identifiable characteristic images; identifying the current positioning image to obtain current image position information among the identifiable characteristic images; and matching the characteristic image corresponding to the current image position information using a pre-stored image position map to obtain the current position coordinates of the robot, where the pre-stored image position map stores the image position information of a plurality of characteristic images in a preset walking space. By obtaining the current positioning image of the target robot and matching the corresponding characteristic images in the image position map according to the image position information of a plurality of identifiable characteristic images in the current positioning image, the invention obtains the position coordinates of the target robot and improves the accuracy and efficiency of robot indoor positioning.

1. A robot indoor positioning method is characterized in that a robot moves in a preset walking space, and a plurality of characteristic images are fixedly arranged in the preset walking space; the robot indoor positioning method comprises the following steps:

acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image;

identifying the current positioning image to obtain current image position information among a plurality of identifiable characteristic images;

matching the characteristic image corresponding to the current image position information by using a pre-stored image position map so as to obtain the current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

2. The robot indoor positioning method according to claim 1, wherein the step of performing recognition processing on the current positioning image to obtain current image position information between the plurality of recognizable feature images specifically includes:

performing gradient calculation on the current positioning image to obtain a pixel gradient value of the current positioning image;

extracting contour points in the current positioning image according to the pixel gradient values;

and determining the position information and the angle information of the identifiable characteristic image corresponding to the contour point in the current positioning image, and obtaining the current image position information among a plurality of identifiable characteristic images.

3. The robot indoor positioning method according to claim 2, wherein the identifiable feature image is a polygonal image;

the step of determining the position information and the angle information of the identifiable feature image corresponding to the contour point in the current positioning image to obtain the current image position information among a plurality of identifiable feature images specifically includes:

performing linear regression fitting on the contour points to obtain fitted characteristic image lines;

matching polygonal images corresponding to the characteristic image lines according to the characteristic image lines;

and obtaining image position information among the plurality of identifiable characteristic images according to the position information and the angle information of the polygonal image in the current positioning image.

4. The robot indoor positioning method according to claim 1, wherein before the step of matching the feature image corresponding to the current image position information using a pre-stored image position map to obtain the current position coordinates of the robot, the method further comprises:

acquiring position coordinates acquired by the robot in a preset walking space and image position information of a plurality of characteristic images corresponding to the position coordinates;

and establishing an image position map according to the position coordinates and the image position information of the characteristic images.

5. The robot indoor positioning method according to claim 4, wherein the step of acquiring the position coordinates of the robot collected in the preset walking space and the image position information of the plurality of feature images corresponding to the position coordinates specifically includes:

acquiring position coordinates acquired by the robot in a preset walking space and position information and angle information of each characteristic image corresponding to the position coordinates;

obtaining, according to the position information and the angle information, the image position information among the plurality of characteristic images corresponding to the position coordinates;

and adjusting the position of the robot in the preset walking space, judging whether the position coordinate of the adjusted position has not yet been collected, and if so, returning to the step of acquiring the position coordinates collected by the robot in the preset walking space and the position information and the angle information of each characteristic image corresponding to the position coordinates.

6. The robot indoor positioning method according to claim 5, wherein the step of acquiring the position coordinates of the robot collected in the preset walking space and the position information and the angle information of each feature image corresponding to the position coordinates specifically includes:

acquiring position coordinates acquired by the robot in a preset walking space;

and acquiring the position information and the angle information of each characteristic image of the position coordinates according to the conversion relation between the image coordinates and the position coordinates.

7. The robot indoor positioning method according to claim 6, wherein the expression of the conversion relationship of the image coordinates and the position coordinates is:

s×Px=K×RT×Pw

where s is the depth with respect to the image acquisition device of the robot, Px is the image coordinate, K is the internal parameter matrix of the image acquisition device of the robot, RT is the external parameter matrix of the image acquisition device of the robot, and Pw is the position coordinate.

8. A robot indoor positioning device, characterized by comprising:

the acquisition module is used for acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image;

the identification module is used for identifying the current positioning image to obtain current image position information among the identifiable characteristic images;

the matching module is used for matching the characteristic image corresponding to the current image position information by utilizing a pre-stored image position map so as to obtain the current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

9. A robot indoor positioning apparatus, characterized by comprising: memory, a processor and a robot indoor positioning program stored on the memory and executable on the processor, the robot indoor positioning program when executed by the processor implementing the steps of the robot indoor positioning method according to any one of claims 1 to 7.

10. A storage medium having stored thereon a robot indoor positioning program which, when executed by a processor, implements the steps of the robot indoor positioning method according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of visual navigation, in particular to an indoor positioning method, device, equipment and storage medium for a robot.

Background

With the development of intelligent industry and intelligent logistics, warehouse management tends more and more toward unmanned operation, and intelligent robots and unmanned forklifts are used more and more widely. The positioning algorithm occupies an important position in an unmanned forklift system. At present, unmanned forklift positioning is mostly divided into two types, laser navigation and visual navigation; some laser navigation systems place reflector panels indoors, while visual navigation systems place two-dimensional codes to assist positioning.

However, in the existing indoor positioning methods, the reflector panels used by laser navigation may be blocked, and the recognition rate of the two-dimensional codes used by visual navigation decreases as the distance increases. The existing robot indoor positioning methods therefore have low recognition accuracy and efficiency, and how to improve the accuracy and efficiency of indoor positioning of the robot is a technical problem which urgently needs to be solved.

The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.

Disclosure of Invention

The invention mainly aims to provide a robot indoor positioning method, device, equipment and storage medium, and aims to solve the technical problems of low accuracy and efficiency of robot indoor positioning.

In order to achieve the above object, the present invention provides an indoor positioning method for a robot, wherein the robot moves in a preset walking space, and a plurality of characteristic images are fixedly arranged in the preset walking space; the robot indoor positioning method comprises the following steps:

acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image;

identifying the current positioning image to obtain current image position information among a plurality of identifiable characteristic images;

matching the characteristic image corresponding to the current image position information by using a pre-stored image position map so as to obtain the current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

Optionally, the step of performing identification processing on the current positioning image to obtain current image position information between the multiple identifiable feature images specifically includes:

performing gradient calculation on the current positioning image to obtain a pixel gradient value of the current positioning image;

extracting contour points in the current positioning image according to the pixel gradient values;

and determining the position information and the angle information of the identifiable characteristic image corresponding to the contour point in the current positioning image, and obtaining the current image position information among a plurality of identifiable characteristic images.
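The first two recognition steps above (gradient calculation and contour-point extraction) can be sketched as follows. This is a minimal illustration, not the patented implementation: the central-difference gradient, the threshold value, and the toy image are all assumptions made for the example.

```python
def pixel_gradients(image):
    """Central-difference gradient magnitude for a grayscale image (list of rows)."""
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def contour_points(image, threshold):
    """Pixels whose gradient magnitude exceeds the threshold become contour points."""
    mag = pixel_gradients(image)
    return [(x, y) for y, row in enumerate(mag)
                   for x, m in enumerate(row) if m > threshold]

# a 5x5 toy image with a bright 3x3 square in the middle
img = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = 255
pts = contour_points(img, threshold=60.0)
```

With this toy image, the boundary pixels of the bright square exceed the threshold while its interior (where neighboring pixels are equal) does not, which is exactly the separation the contour-extraction step relies on.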

Optionally, the recognizable feature image is a polygonal image;

the step of determining the position information and the angle information of the identifiable feature image corresponding to the contour point in the current positioning image to obtain the current image position information among a plurality of identifiable feature images specifically includes:

performing linear regression fitting on the contour points to obtain fitted characteristic image lines;

matching polygonal images corresponding to the characteristic image lines according to the characteristic image lines;

and obtaining image position information among the plurality of identifiable characteristic images according to the position information and the angle information of the polygonal image in the current positioning image.
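The line-fitting and polygon-matching steps above can be sketched as follows. The helpers `fit_line` and `classify_polygon` are hypothetical names for illustration; a real system would first group contour points per edge and would use a line parameterization that can also represent vertical edges.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b through a set of contour points.

    Note: this simple slope/intercept form cannot represent vertical lines;
    a fuller implementation would fit the line in normal form instead.
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def classify_polygon(side_count):
    """Map the number of fitted characteristic-image lines to a polygon label."""
    return {3: "triangle", 4: "quadrilateral", 5: "pentagon"}.get(side_count, "other")

# contour points lying on the line y = 2x + 1
a, b = fit_line([(0, 1), (1, 3), (2, 5)])
```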

Optionally, before the step of matching the feature image corresponding to the current image position information by using a pre-stored image position map to obtain the current position coordinate of the robot, the method further includes:

acquiring position coordinates acquired by the robot in a preset walking space and image position information of a plurality of characteristic images corresponding to the position coordinates;

and establishing an image position map according to the position coordinates and the image position information of the characteristic images.
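A minimal sketch of such an image position map, together with a nearest-match lookup, might look as follows. The data layout (a feature identifier plus image position and angle stored per calibrated robot coordinate) and the mean-squared-error matching rule are assumptions made for illustration, not the patent's method.

```python
class ImagePositionMap:
    """Hypothetical map: robot position coordinate -> characteristic-image observations."""

    def __init__(self):
        self.entries = {}   # (x, y) -> list of (feature_id, u, v, angle)

    def add(self, position, observations):
        self.entries[position] = list(observations)

    def match(self, observations):
        """Return the stored position whose observations best match the query."""
        best, best_err = None, float("inf")
        obs = {fid: (u, v, a) for fid, u, v, a in observations}
        for pos, stored in self.entries.items():
            err, common = 0.0, 0
            for fid, u, v, a in stored:
                if fid in obs:
                    du, dv = obs[fid][0] - u, obs[fid][1] - v
                    err += du * du + dv * dv
                    common += 1
            # positions sharing no features with the query are skipped
            if common and err / common < best_err:
                best, best_err = pos, err / common
        return best

# build a tiny map from two calibration positions
m = ImagePositionMap()
m.add((0.0, 0.0), [("f1", 10, 10, 0.0), ("f2", 50, 10, 90.0)])
m.add((2.0, 0.0), [("f1", 200, 10, 0.0), ("f2", 240, 10, 90.0)])
# observations extracted from the current positioning image
matched = m.match([("f1", 12, 11, 0.0), ("f2", 51, 9, 90.0)])
```

The query observations sit within a few pixels of the first calibration entry, so the lookup returns that stored position coordinate.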

Optionally, the step of acquiring the position coordinates collected by the robot in the preset walking space and the image position information of the plurality of characteristic images corresponding to the position coordinates specifically includes:

acquiring position coordinates acquired by the robot in a preset walking space and position information and angle information of each characteristic image corresponding to the position coordinates;

obtaining, according to the position information and the angle information, the image position information among the plurality of characteristic images corresponding to the position coordinates;

and adjusting the position of the robot in the preset walking space, judging whether the position coordinate of the adjusted position has not yet been collected, and if so, returning to the step of acquiring the position coordinates collected by the robot in the preset walking space and the position information and the angle information of each characteristic image corresponding to the position coordinates.

Optionally, the step of obtaining the position coordinates acquired by the robot in the preset walking space and the position information and the angle information of each feature image corresponding to the position coordinates specifically includes:

acquiring position coordinates acquired by the robot in a preset walking space;

and acquiring the position information and the angle information of each characteristic image of the position coordinates according to the conversion relation between the image coordinates and the position coordinates.

Optionally, the expression of the conversion relationship between the image coordinate and the position coordinate is as follows:

s×Px=K×RT×Pw

where s is the depth with respect to the image acquisition device of the robot, Px is the image coordinate, K is the internal parameter matrix of the image acquisition device of the robot, RT is the external parameter matrix of the image acquisition device of the robot, and Pw is the position coordinate.
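The conversion relation is the standard pinhole projection and can be illustrated with a small sketch; the intrinsic matrix K, the pose R, t, and the sample point below are illustrative values, not parameters from the patent.

```python
def project(K, R, t, Pw):
    """Pinhole projection s * Px = K * RT * Pw, with RT = [R | t].

    K and R are 3x3 matrices given as lists of rows; t and Pw are 3-vectors.
    Returns the image coordinate (u, v) and the depth s.
    """
    # world frame -> camera frame: Pc = R * Pw + t
    Pc = [sum(R[i][j] * Pw[j] for j in range(3)) + t[i] for i in range(3)]
    s = Pc[2]                       # depth along the optical axis
    # apply the intrinsics and divide by depth to recover pixel coordinates
    uvw = [sum(K[i][j] * Pc[j] for j in range(3)) for i in range(3)]
    return (uvw[0] / s, uvw[1] / s), s

K = [[100, 0, 50], [0, 100, 50], [0, 0, 1]]   # illustrative intrinsics
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]         # identity rotation
t = [0.0, 0.0, 0.0]                           # zero translation
(u, v), s = project(K, R, t, [0.0, 0.0, 2.0]) # a point 2 m in front of the camera
```

A point on the optical axis projects to the principal point (50, 50) with depth s = 2; inverting this relation for known markers is what lets the map-building step recover the position and angle of each characteristic image from its position coordinates.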

In order to achieve the above object, the present invention also provides a robot indoor positioning device including:

the acquisition module is used for acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image;

the identification module is used for identifying the current positioning image to obtain current image position information among the identifiable characteristic images;

the matching module is used for matching the characteristic image corresponding to the current image position information by utilizing a pre-stored image position map so as to obtain the current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

Optionally, the identification module is further configured to perform gradient calculation on the current positioning image to obtain a pixel gradient value of the current positioning image; extracting contour points in the current positioning image according to the pixel gradient values; and determining the position information and the angle information of the identifiable characteristic image corresponding to the contour point in the current positioning image, and obtaining the current image position information among a plurality of identifiable characteristic images.

Optionally, the identification module is further configured to perform linear regression fitting on the contour points to obtain fitted feature image lines; matching polygonal images corresponding to the characteristic image lines according to the characteristic image lines; and obtaining image position information among the plurality of identifiable characteristic images according to the position information and the angle information of the polygonal image in the current positioning image.

Optionally, the robot indoor positioning device further includes an establishing module, and the establishing module is further configured to obtain a position coordinate acquired by the robot in a preset walking space and image position information of the plurality of feature images corresponding to the position coordinate; and establishing an image position map according to the position coordinates and the image position information of the characteristic images.

Optionally, the establishing module is further configured to acquire the position coordinates collected by the robot in the preset walking space and the position information and the angle information of each characteristic image corresponding to the position coordinates; obtain, according to the position information and the angle information, the image position information among the plurality of characteristic images corresponding to the position coordinates; and adjust the position of the robot in the preset walking space, judge whether the position coordinate of the adjusted position has not yet been collected, and if so, return to the step of acquiring the position coordinates collected by the robot in the preset walking space and the position information and the angle information of each characteristic image corresponding to the position coordinates.

Optionally, the establishing module is further configured to acquire position coordinates acquired by the robot in a preset walking space; and acquiring the position information and the angle information of each characteristic image of the position coordinates according to the conversion relation between the image coordinates and the position coordinates.

Optionally, the expression of the conversion relationship between the image coordinate and the position coordinate is as follows:

s×Px=K×RT×Pw

where s is the depth with respect to the image acquisition device of the robot, Px is the image coordinate, K is the internal parameter matrix of the image acquisition device of the robot, RT is the external parameter matrix of the image acquisition device of the robot, and Pw is the position coordinate.

In addition, in order to achieve the above object, the present invention also provides a robot indoor positioning apparatus including: a memory, a processor and a robot indoor positioning program stored on the memory and executable on the processor, the robot indoor positioning program being configured to implement the steps of the robot indoor positioning method as described above.

Further, in order to achieve the above object, the present invention also provides a storage medium having stored thereon a robot indoor positioning program which, when executed by a processor, implements the steps of the robot indoor positioning method as described above.

The embodiment of the invention provides a robot indoor positioning method, device, equipment and storage medium, wherein the method comprises acquiring a current positioning image collected by the robot; the current positioning image comprises at least two identifiable characteristic images; identifying the current positioning image to obtain current image position information among a plurality of identifiable characteristic images; and matching the characteristic image corresponding to the current image position information using a pre-stored image position map to obtain the current position coordinates of the robot; the pre-stored image position map stores the image position information of a plurality of characteristic images in the preset walking space. According to the embodiment of the invention, the current positioning image of the target robot is obtained, and the corresponding characteristic images in the image position map are matched according to the image position information of the plurality of identifiable characteristic images in the current positioning image, so that the position coordinates of the target robot are obtained and the accuracy and efficiency of robot indoor positioning are improved.

Drawings

Fig. 1 is a schematic structural diagram of an indoor positioning device of a robot in an embodiment of the present invention;

FIG. 2 is a schematic flow chart illustrating a first exemplary indoor robot positioning method according to the present invention;

FIG. 3 is a flowchart illustrating a second exemplary embodiment of an indoor positioning method for a robot according to the present invention;

FIG. 4 is a flowchart illustrating a robot indoor positioning method according to a third embodiment of the present invention;

fig. 5 is a schematic structural diagram of an indoor positioning device of a robot in an embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

With the development of intelligent industry and intelligent logistics, warehouse management tends more and more toward unmanned operation, and intelligent robots and unmanned forklifts are used more and more widely. The positioning algorithm occupies an important position in an unmanned forklift system. At present, unmanned forklift positioning is mostly divided into two types, laser navigation and visual navigation; some laser navigation systems place reflector panels indoors, while visual navigation systems place two-dimensional codes to assist positioning. However, in the existing indoor positioning methods, the reflector panels used by laser navigation may be blocked, and the recognition rate of the two-dimensional codes used by visual navigation decreases as the distance increases. The existing robot indoor positioning methods therefore have low recognition accuracy and efficiency, and how to improve the accuracy and efficiency of indoor positioning of the robot is a technical problem which urgently needs to be solved.

To solve this problem, various embodiments of the robot indoor positioning method of the present invention are proposed. The robot indoor positioning method provided by the invention acquires the current positioning image of the target robot and matches the corresponding characteristic images in the image position map according to the image position information of a plurality of identifiable characteristic images in the current positioning image, thereby obtaining the position coordinates of the target robot.

Referring to fig. 1, fig. 1 is a schematic structural diagram of an indoor positioning device of a robot according to an embodiment of the present invention.

The device may be a User Equipment (UE) such as a mobile phone, smartphone, laptop, digital broadcast receiver, personal digital assistant (PDA), tablet computer (PAD), handheld device, vehicle-mounted device, wearable device, computing device or other processing device connected to a wireless modem, mobile station (MS), or the like. The device may also be referred to as a user terminal, portable terminal, desktop terminal, etc.

Generally, the apparatus comprises: at least one processor 301, a memory 302, and a robot indoor positioning program stored on the memory and executable on the processor, the robot indoor positioning program configured to implement the steps of the robot indoor positioning method as described above.

The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor, also called a Central Processing Unit (CPU), is a processor for processing data in the awake state, and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. The processor 301 may further include an AI (Artificial Intelligence) processor for processing information about the robot indoor positioning operation, so that the robot indoor positioning model can train and learn autonomously, thereby improving efficiency and accuracy.

Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the robot indoor positioning method provided by the method embodiments herein.

In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.

The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. The communication interface 303 is used for receiving the movement tracks of the plurality of mobile terminals uploaded by the user and other data through the peripheral device. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited in this embodiment.

The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with a communication network and other communication devices through electromagnetic signals, so as to obtain the movement tracks and other data of a plurality of mobile terminals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.

The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, disposed on the front panel of the electronic device; in other embodiments, there may be at least two display screens 305, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the electronic device. The display screen 305 may even be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The display screen 305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).

The power supply 306 is used to power various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.

Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the robot indoor positioning apparatus and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.

An embodiment of the present invention provides an indoor robot positioning method, and referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of the indoor robot positioning method according to the present invention.

In this embodiment, the robot indoor positioning method includes the following steps:

step S100, acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image.

In practical application, the robot may be any robot that moves in a preset walking space, such as an intelligent robot or an unmanned forklift, and the preset walking space may be the space, such as a warehouse or a factory, in which the robot needs to walk to execute a corresponding task, which is not limited in this embodiment.

Specifically, a current positioning image acquired while the robot moves in the preset walking space is obtained. The current positioning image is a positioning image acquired at the current position in a preset direction of the robot and contains at least two identifiable feature images. A plurality of feature images are arranged in the preset direction of the robot; a feature image is the image information of a marker used for positioning the robot, and an identifiable feature image is a feature image, among those acquired by the robot, whose position information and angle information can be identified.

For ease of understanding, the present embodiment specifically describes an example of acquiring a current positioning image acquired by a robot.

For example, in the automatic operation of an intelligent logistics warehouse, a robot may be used to transfer corresponding goods in the warehouse according to order information, and after obtaining the order information, the robot is driven and controlled to move to a corresponding coordinate position to transfer the goods according to the coordinates of the goods storage position corresponding to the order information stored in the system. At this time, the robot indoor positioning method of the embodiment may be used to obtain a current positioning image acquired by the robot, where the current positioning image may be an image of a warehouse top acquired by the robot, the warehouse top is provided with a plurality of markers for positioning the robot, and the obtained current positioning image further includes feature images of the markers. By analyzing and processing the current positioning image, the current position coordinate of the robot can be obtained, the robot is positioned in the warehouse, and the automatic operation of the intelligent logistics warehouse is completed.

In addition, when the robot collects the current positioning image in the preset direction, the limited viewing angle of the robot's image acquisition device often prevents it from capturing all the feature images. In this case, the feature images whose position information and angle information can be identified are selected as identifiable feature images, and the position coordinates of the robot are obtained by using these identifiable feature images.

It is easy to understand that the robot collects the feature images of the markers arranged in the preset direction while moving in the preset walking space, obtains identifiable feature images, and positions itself through these identifiable feature images. By applying the visual navigation principle, the indoor positioning device is simplified, which reduces the cost of positioning the robot.

Step S200, carrying out identification processing on the current positioning image to obtain current image position information among a plurality of identifiable characteristic images.

Specifically, after the current positioning image acquired by the robot is obtained, in order to obtain the identifiable feature images in the current positioning image for locating the position coordinates of the robot, the current positioning image needs to be identified, and the current image position information among all identifiable feature images in the current positioning image is obtained. The current position coordinates of the robot can then be obtained by using this current image position information.

It should be noted that, the current positioning image is subjected to identification processing, that is, a feature image in the current positioning image is identified, where the feature image is a feature image of a marker disposed in a preset walking space in a preset direction, and by identifying all feature images in the current positioning image, current image position information between a plurality of identifiable feature images is obtained by using all feature images.

The feature images in the current positioning image can be identified through image recognition technology: if a feature image exists in the acquired current positioning image, its position information and angle information in the current positioning image are extracted by using the pre-stored feature information of that feature image. After the position information and angle information of all feature images in the current positioning image are obtained, the current image position information among the plurality of identifiable feature images is obtained from the position information and angle information of each feature image.

It is easy to understand that, when positioning the robot, after the current image position information among the plurality of identifiable feature images is obtained, it can be matched against the pre-stored image position information among all feature images to determine where the robot currently lies within that image position information, and the current position coordinates of the robot are then obtained accordingly.

Step S300, matching a feature image corresponding to the current image position information by using a pre-stored image position map so as to obtain the current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

It should be noted that, after the current image position information among the multiple identifiable feature images is obtained, the current image position information contained in the current positioning image collected at the current position is matched and searched in a pre-stored image position map, so as to obtain the feature images corresponding to the current image position information; the position coordinate corresponding to those feature images is the current position coordinate of the robot.

Specifically, the image position map stores image position information of all feature images in the whole preset walking space, and if the current image position information is acquired, the current image position information can be positioned in the image position information of all feature images, so that the position coordinates of the current robot in the preset walking space are obtained, and the accurate positioning of the robot is realized.
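The matching step described above can be sketched as a nearest-neighbour lookup. The representation below is an assumption for illustration only: each map entry pairs a position coordinate with a descriptor vector summarising the relative image positions among the identifiable feature images visible there.

```python
import numpy as np

# Hypothetical sketch: the image position map is assumed to be a list of
# (position_coordinate, descriptor) pairs, where the descriptor encodes the
# current image position information among the identifiable feature images.
image_position_map = [
    ((0.0, 0.0), np.array([1.0, 2.0, 2.2])),
    ((1.0, 0.0), np.array([1.4, 1.8, 2.5])),
    ((1.0, 1.0), np.array([0.9, 2.6, 2.1])),
]

def match_position(current_descriptor):
    """Return the stored position whose descriptor is nearest to the query."""
    best_pos, best_dist = None, float("inf")
    for pos, desc in image_position_map:
        dist = np.linalg.norm(desc - current_descriptor)
        if dist < best_dist:
            best_pos, best_dist = pos, dist
    return best_pos

current = np.array([1.35, 1.85, 2.45])   # descriptor from the current image
located = match_position(current)
```

In this sketch the query descriptor is closest to the entry stored at (1.0, 0.0), so that coordinate would be reported as the robot's current position.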

The image position information of all feature images in the entire preset walking space stored in the image position map is the image position information of each position coordinate, collected by moving the robot through the preset walking space before indoor positioning is performed. After a sufficiently large amount of image position information has been collected, the repeated image position information collected at adjacent coordinate positions is removed, yielding the image position information of all feature images in the entire preset walking space, namely the image position map. After the image position map is established, once the image position information of any position is obtained, the position coordinates of the corresponding position can be obtained by matching that image position information in the image position map, realizing accurate positioning of the indoor robot during its moving operation.

In this embodiment, the current positioning image of the target robot is acquired, and the corresponding feature images in the image position map are matched according to the image position information of the plurality of identifiable feature images in the current positioning image, so as to obtain the position coordinates of the target robot, which improves the accuracy and efficiency of indoor positioning of the robot.

For easy understanding, referring to fig. 3, fig. 3 is a schematic flow chart of a second embodiment of the robot indoor positioning method of the present invention. Based on the first embodiment of the robot indoor positioning method shown in fig. 2, this embodiment provides a specific implementation scheme for performing recognition processing on the current positioning image to obtain current image position information between a plurality of recognizable feature images, which is specifically as follows:

step S201, performing gradient calculation on the current positioning image to obtain a pixel gradient value of the current positioning image.

Specifically, in the present embodiment, a method is provided for performing recognition processing on the current positioning image to obtain current image position information between a plurality of recognizable feature images. When the characteristic image in the current positioning image is identified, an image processing and identification technology is adopted, gradient calculation is carried out on the obtained current positioning image to obtain a pixel gradient value of the current positioning image, and the current image position information in the current positioning image is identified according to the pixel gradient value.

It should be noted that the markers set in the preset direction may be objects with a larger color difference from the background, and the feature images of the multiple markers can be easily identified by acquiring the current positioning image. Therefore, in this embodiment, gradient calculation is performed by using the obtained current positioning image to obtain a pixel gradient value of the current positioning image, and then the feature image of the marker with a large color difference in the current positioning image can be extracted according to the pixel gradient value.

In some embodiments, before the gradient calculation is performed on the current positioning image, noise reduction may be performed on the acquired current positioning image through Gaussian blurring to improve the positioning accuracy.
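The gradient calculation in step S201 can be sketched as follows. This is a minimal illustration using central differences; a practical implementation might instead use Sobel kernels after the Gaussian blurring mentioned above.

```python
import numpy as np

def pixel_gradient(img):
    """Approximate the pixel gradient value of an image with central
    differences (a sketch; Sobel kernels are a common alternative)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal gradient
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical gradient
    return np.hypot(gx, gy)                          # gradient magnitude

# A dark marker on a light background yields high gradients at its border
# and near-zero gradients in uniform regions.
img = np.full((8, 8), 255.0)   # light background
img[2:6, 2:6] = 0.0            # dark square marker
grad = pixel_gradient(img)
```

As the text notes, the marker's strong color contrast with the background is what makes its border stand out in the gradient values: `grad` is large along the square's edge and zero both inside the marker and in the background.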

And S202, extracting contour points in the current positioning image according to the pixel gradient values.

Specifically, in this embodiment, after the pixel gradient values of the current positioning image are obtained, in view of the large color difference between the marker and the background, the pixels with large gradient values are extracted as the contour points of the marker, and the position information and angle information of the feature images in the current positioning image can then be determined by using these contour points.

It is easy to understand that after the contour points corresponding to each pixel are obtained, the recognized shape can be generated from all the contour points, and whether this shape is the shape preset for the marker is judged; in this way, all the feature images in the current positioning image can be located, thereby realizing the position positioning of the robot.

In some embodiments, after extracting contour points in the current positioning image, clustering processing may be performed on neighboring points to obtain a feature image with a more regular shape, so as to improve the positioning accuracy.
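Step S202 can be sketched as thresholding the gradient values; the threshold value below is an assumed illustration, not a parameter given in the source.

```python
import numpy as np

def extract_contour_points(grad, threshold=50.0):
    """Extract contour points as the pixels whose gradient value exceeds
    a threshold (the threshold here is an assumed illustrative value)."""
    ys, xs = np.nonzero(grad > threshold)
    return sorted(zip(xs.tolist(), ys.tolist()))   # (x, y) pixel coordinates

# Two strong edge responses in an otherwise flat gradient image.
grad = np.zeros((5, 5))
grad[1, 1] = 120.0
grad[3, 2] = 80.0
points = extract_contour_points(grad)
```

The clustering of neighbouring points mentioned above would then be applied to `points` to group them into per-marker contours before shape checking.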

Step S203, determining the position information and the angle information of the identifiable characteristic image corresponding to the contour point in the current positioning image, and obtaining the current image position information among a plurality of identifiable characteristic images.

Specifically, after extracting contour points in the current positioning image, generating an identified shape by using all contour points, judging whether the identified shape is a shape preset by a marker, if so, determining the shape formed by the contour points to be a feature image, and at the moment, determining the position of the feature image to obtain the position information and the angle information of each feature image. After the position information and the angle information of all the characteristic images in the current positioning image are obtained, the position information and the angle information of the current image among the plurality of identifiable characteristic images are obtained according to the position information and the angle information of each characteristic image.

In some embodiments, the recognizable feature images may be set as polygon images, and therefore, after extracting the contour points in the current positioning image, linear regression fitting may be performed on the obtained contour points to obtain fitted feature image lines, then the polygon images corresponding to the feature image lines are matched according to the feature image lines, and finally, image position information between a plurality of recognizable feature images is obtained according to position information and angle information of the polygon images in the current positioning image.
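The linear regression fitting of contour points to feature image lines can be sketched with an ordinary least-squares fit; the sample edge points below are illustrative, not data from the source.

```python
import numpy as np

def fit_edge(points):
    """Fit y = a*x + b to one edge's contour points by linear regression,
    yielding a fitted feature image line (a sketch of the fitting step)."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    a, b = np.polyfit(xs, ys, 1)   # least-squares line fit
    return a, b

# Noisy contour points lying roughly on one edge of a polygon marker.
edge = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 2.9)]
a, b = fit_edge(edge)
```

Repeating the fit for each clustered edge and counting the resulting lines (four lines for a quadrilateral marker, for instance) is one way the fitted feature image lines could then be matched to the preset polygon images.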

In this embodiment, a method for performing recognition processing on the current positioning image to obtain current image position information between a plurality of recognizable feature images is provided. Through image processing and image recognition technology, feature images in the current positioning image are recognized and extracted, current image position information among a plurality of recognizable feature images is obtained, and then the current position coordinates of the robot are positioned.

For easy understanding, referring to fig. 4, fig. 4 is a schematic flow chart of a third embodiment of the robot indoor positioning method of the present invention. Based on the second embodiment of the robot indoor positioning method shown in fig. 3, this embodiment provides a specific implementation scheme for establishing an image location map before the step of matching the feature image corresponding to the current image location information with a pre-stored image location map to obtain the current location coordinates of the robot, which is specifically as follows:

and S001, acquiring position coordinates acquired by the robot in a preset walking space and image position information of a plurality of characteristic images corresponding to the position coordinates.

In this embodiment, a method for establishing an image position map before matching the feature images corresponding to the current image position information with a pre-stored image position map to obtain the current position coordinates of the robot is provided. Specifically, when the image position map is established, the image position information of the plurality of feature images corresponding to each position coordinate in the entire preset walking space needs to be collected; therefore, the information required for establishing the image position map is the position coordinates collected by the robot in the preset walking space and the image position information of the plurality of feature images corresponding to each position coordinate.

As is easy to understand, in order to ensure that the robot collects image position information for all position coordinates of the preset walking space, the collection process proceeds as follows: the position coordinate collected by the robot in the preset walking space and the position information and angle information of each feature image corresponding to that position coordinate are obtained; the image position information among the plurality of feature images corresponding to the position coordinate is obtained from the position information and angle information; the position of the robot in the preset walking space is then adjusted, and it is judged whether any position coordinates remain uncollected; if so, the step of obtaining the position coordinate collected by the robot in the preset walking space and the position information and angle information of each feature image corresponding to that position coordinate is executed again.

It should be noted that, when the position coordinate collected by the robot in the preset walking space and the position information and angle information of each feature image corresponding to that position coordinate are obtained, the position coordinate collected by the robot in the preset walking space is acquired first, and the position information and angle information of each feature image at that position coordinate are then obtained according to the conversion relationship between image coordinates and position coordinates. The expression of the conversion relationship between image coordinates and position coordinates is as follows:

s × Px = K × RT × Pw

where s is the depth of the image acquisition device of the robot; Px is the homogeneous image coordinate, u being the horizontal coordinate and v the vertical coordinate of a feature image point; K is the intrinsic parameter matrix of the image acquisition device of the robot, determined by u0 (the horizontal coordinate of the image center point), v0 (the vertical coordinate of the image center point), and f (the focal length of the image acquisition device of the robot); RT is the extrinsic parameter matrix of the image acquisition device of the robot, consisting of the rotation parameters ri (i = 1, ..., 9) and the translation parameters tj (j = 1, ..., 3); and Pw is the position coordinate.

As can be easily understood, after the position information and the angle information of each feature image of all the position coordinates in the preset walking space are obtained, the image position information of each position coordinate in the preset walking space can be obtained.
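The conversion relation s × Px = K × RT × Pw can be sketched numerically as follows. The intrinsic values f, u0, v0 and the extrinsics RT below are illustrative placeholders, not calibrated parameters from the source.

```python
import numpy as np

# Illustrative intrinsics: focal length f and image center (u0, v0).
f, u0, v0 = 500.0, 320.0, 240.0
K = np.array([[f, 0.0, u0],
              [0.0, f, v0],
              [0.0, 0.0, 1.0]])

# Extrinsics RT = [r1..r9 | t1..t3]: here an identity rotation and zero
# translation, chosen purely for illustration.
RT = np.hstack([np.eye(3), np.zeros((3, 1))])

def project(Pw):
    """Map a homogeneous position coordinate Pw = (x, y, z, 1) to the image
    coordinate (u, v) via s * Px = K * RT * Pw."""
    p = K @ RT @ np.asarray(Pw, dtype=float)   # equals s * Px
    s = p[2]                                   # depth of the point
    return p[0] / s, p[1] / s

u, v = project([1.0, 0.5, 2.0, 1.0])
```

With these placeholder parameters, a point 2 units in front of the camera projects to (570, 365): the world offsets are scaled by f/s and added to the image center, which is exactly the role of K and the depth s in the equation above.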

And step S002, establishing an image position map according to the position coordinates and the image position information of the plurality of characteristic images.

Specifically, after the position information and the angle information of each feature image of all position coordinates in the preset walking space are obtained, the image position information of each position coordinate in the preset walking space can be obtained, and an image position map in which the image position information of a plurality of feature images in the preset walking space is stored is established.

It is easy to understand that the image position information of all feature images in the entire preset walking space stored in the image position map is the image position information of each position coordinate, collected by moving the robot through the preset walking space before indoor positioning is performed. After a sufficiently large amount of image position information has been collected, the repeated image position information collected at adjacent coordinate positions is removed, yielding the image position information of all feature images in the entire preset walking space, namely the image position map. After the image position map is established, once the image position information of any position is obtained, the position coordinates of the corresponding position can be obtained by matching that image position information in the image position map, realizing accurate positioning of the indoor robot during its moving operation.
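The map-building step, including the removal of repeated image position information collected at adjacent coordinates, can be sketched as follows. The string descriptors are an assumed stand-in for the real image position information.

```python
# Sketch: build the image position map from collected samples, dropping
# repeated image position information gathered at adjacent coordinates.
# Each sample pairs a position coordinate with its image position info
# (represented here by hypothetical descriptor strings).
samples = [
    ((0, 0), "A-B:1.0"),
    ((0, 1), "A-B:1.0"),   # duplicate from a neighbouring position, dropped
    ((1, 0), "A-C:2.0"),
]

def build_map(samples):
    image_position_map = {}
    seen = set()
    for pos, info in samples:
        if info in seen:       # repeated info from an adjacent coordinate
            continue
        seen.add(info)
        image_position_map[info] = pos   # info -> position lookup
    return image_position_map

image_position_map = build_map(samples)
```

Keying the map by image position information (rather than by coordinate) matches how it is later used: given the info extracted from a current positioning image, the map returns the corresponding position coordinate directly.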

In this embodiment, a method for establishing an image position map before matching the feature images corresponding to the current image position information with a pre-stored image position map to obtain the current position coordinates of the robot is provided. Before the robot performs indoor positioning, position coordinates in the preset walking space and the image position information corresponding to each position coordinate are collected, and an image position map storing the image position information of a plurality of feature images in the preset walking space is constructed. When the robot performs a task, the current image position information can be extracted from the current positioning image obtained at the current position and used to search for the corresponding position coordinate in the image position map, thereby implementing indoor positioning of the robot.

Referring to fig. 5, fig. 5 is a block diagram illustrating a first embodiment of an indoor positioning device for a robot according to the present invention.

As shown in fig. 5, an indoor robot positioning device according to an embodiment of the present invention includes:

the acquisition module 10 is used for acquiring a current positioning image acquired by the robot; the current positioning image comprises at least two identifiable characteristic images, and the identifiable characteristic images are characteristic images capable of identifying position information and angle information in the current positioning image;

the identification module 20 is configured to perform identification processing on the current positioning image to obtain current image position information between a plurality of identifiable feature images;

the matching module 30 is configured to match a feature image corresponding to the current image position information by using a pre-stored image position map, so as to obtain a current position coordinate of the robot; the pre-stored image position map stores image position information of a plurality of characteristic images in the preset walking space.

The robot indoor positioning device provided by this embodiment acquires the current positioning image of the target robot and matches the corresponding feature images in the image position map according to the image position information of the plurality of identifiable feature images in the current positioning image, so as to obtain the position coordinates of the target robot, improving the accuracy and efficiency of indoor positioning of the robot.

Based on the first embodiment of the indoor robot positioning device of the present invention, a second embodiment of the indoor robot positioning device of the present invention is provided. In this embodiment, the identification module 20 is further configured to perform gradient calculation on the current positioning image to obtain a pixel gradient value of the current positioning image; extracting contour points in the current positioning image according to the pixel gradient values; and determining the position information and the angle information of the identifiable characteristic image corresponding to the contour point in the current positioning image, and obtaining the current image position information among a plurality of identifiable characteristic images.

In one embodiment, the recognition module 20 is further configured to perform a linear regression fitting on the contour points to obtain fitted feature image lines; matching polygonal images corresponding to the characteristic image lines according to the characteristic image lines; and obtaining image position information among the plurality of identifiable characteristic images according to the position information and the angle information of the polygonal image in the current positioning image.

A third embodiment of the robot indoor positioning device of the present invention is proposed based on the first and second embodiments of the robot indoor positioning device of the present invention described above. In this embodiment, the robot indoor positioning device further includes an establishing module 40, and the establishing module 40 is further configured to obtain a position coordinate acquired by the robot in a preset walking space and image position information of the plurality of feature images corresponding to the position coordinate; and establishing an image position map according to the position coordinates and the image position information of the characteristic images.

As an implementation manner, the establishing module 40 is further configured to obtain a position coordinate collected by the robot in the preset walking space and the position information and angle information of each feature image corresponding to that position coordinate; obtain, from the position information and angle information, the image position information among the plurality of feature images corresponding to the position coordinate; and adjust the position of the robot in the preset walking space, judge whether any position coordinates remain uncollected, and if so, execute again the step of obtaining the position coordinate collected by the robot in the preset walking space and the position information and angle information of each feature image corresponding to that position coordinate.

As an embodiment, the establishing module 40 is further configured to obtain position coordinates acquired by the robot in a preset walking space; and acquiring the position information and the angle information of each characteristic image of the position coordinates according to the conversion relation between the image coordinates and the position coordinates.

Other embodiments or specific implementation manners of the indoor positioning device of the robot of the present invention may refer to the above method embodiments, and are not described herein again.

Furthermore, an embodiment of the present invention further provides a storage medium, where the storage medium stores a robot indoor positioning program, and the robot indoor positioning program, when executed by a processor, implements the steps of the robot indoor positioning method described above; a detailed description is therefore omitted here, and the beneficial effects shared with the method are likewise not repeated. For technical details not disclosed in the embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of the method embodiments of the present application. By way of example, the program instructions may be deployed to be executed on one computing device, on multiple computing devices at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.

Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus necessary general hardware, and may also be implemented by special hardware including special integrated circuits, special CPUs, special memories, special components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and the specific hardware structures for implementing the same function may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present invention, implementation as a software program is the more preferable embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, where the computer software product is stored in a readable storage medium, such as a floppy disk, a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
