Indoor positioning method based on WiFi and visual fusion

Document No. 90142 · Published 2021-10-08

Reading note: This technology, an indoor positioning method based on WiFi and visual fusion, was created by Sun Wei and Tang Chenjun on 2021-05-08. Its main content is as follows: The invention provides an indoor positioning method integrating WiFi and vision, comprising the following steps: setting fingerprint points in an indoor positioning area, simultaneously acquiring WiFi data and image data, and constructing an offline fingerprint database; setting a point to be measured at any position in the indoor positioning area, and simultaneously acquiring WiFi data and image data of that point; screening out a WiFi candidate point set with a WiFi positioning algorithm; taking the WiFi candidate point set as a limiting range and screening out an image candidate point set with an image positioning algorithm; and fusing the WiFi estimated position and the image estimated position with a fusion positioning algorithm, thereby realizing indoor positioning.

1. An indoor positioning method integrating WiFi and vision, characterized by comprising the following steps:

s1: establishing grid fingerprint points in an indoor positioning area, simultaneously acquiring WiFi data and image data at the grid fingerprint points, preprocessing the WiFi data and the image data, and constructing an offline fingerprint database;

s2: taking any position in the positioning area as a point to be measured, collecting WiFi data and image data of the point to be measured, and simultaneously recording the real position of the point to be measured;

s3: matching the WiFi data of the point to be measured against the WiFi data in the offline fingerprint database to obtain a plurality of WiFi candidate points whose similarity to the WiFi data of the point to be measured is greater than or equal to a preset threshold;

s4: matching the image of the point to be measured with the offline image data within the constraint, taking the WiFi candidate points as the constraint condition; before image matching, removing moving obstacles from the image of the point to be measured; and screening out image candidate points with an image positioning algorithm;

s5: assigning corresponding weight values to WiFi positioning and image positioning respectively, and assigning corresponding weight values to the WiFi candidate points and the image candidate points, so that the indoor positioning result is finally obtained from the double-layer combined weight values and the position points mapped by the WiFi and image candidate points.

2. The WiFi and vision fusion based indoor positioning method according to claim 1, wherein the step S1 includes the following steps:

s11: setting grid fingerprint points of estimated size in the indoor positioning area, with a total of N_s fingerprint points and N_AP APs in the area; collecting M WiFi samples and image data simultaneously at each fingerprint point, the image data coming from different directions of the same point; the set of indoor fingerprint points is therefore represented as L = {L_1, L_2, ..., L_{N_s}}, and each fingerprint point as L_i = {RSS_i, I_i, o_i} (i = 1, 2, ..., N_s), where RSS_i = {rss_i^1, ..., rss_i^{N_AP}}, I_i = {I_i^1, ..., I_i^M} and o_i = {x_i, y_i}; RSS_i, I_i and o_i respectively denote the WiFi features, image features and real position information contained in each fingerprint point; rss_i^j denotes the WiFi feature of the j-th AP at the i-th fingerprint point; I_i^j denotes the j-th image feature of the i-th fingerprint point; x_i and y_i (i = 1, 2, ..., N_s) respectively denote the X-axis and Y-axis coordinates of the position in the world coordinate system;

s12: preprocessing the acquired WiFi data and the image data: removing bad values in the WiFi data, and removing images with similarity greater than a preset threshold value in the image data;

s13: converting the preprocessed WiFi data and image data into WiFi features and image features respectively, where the WiFi features may be expressed as mean values and the image features with a visual bag-of-words technique, and constructing the offline fingerprint database from these features.

3. The indoor positioning method based on WiFi and visual fusion of claim 2, wherein the WiFi features are obtained by taking the mean of the WiFi data; the image features are obtained by expressing the image data with a visual bag-of-words technique; the neural network includes, but is not limited to: R-CNN, Fast R-CNN; and the moving obstacles include indoor persons.

4. The WiFi and vision fusion based indoor positioning method according to claim 2, wherein the step S2 specifically is:

setting a point to be measured at any position in the indoor positioning environment, and collecting several WiFi samples and image data at the point to be measured for preprocessing, obtaining the WiFi data and image data of the point to be measured; the real position of the point to be measured is retained for subsequent positioning accuracy error analysis.

5. The indoor positioning method based on WiFi and visual fusion as claimed in claim 4, wherein in step S3, the WiFi data of the point to be measured is converted into WiFi characteristics, the WiFi characteristics of the point to be measured are then matched with the off-line fingerprint database, and K WiFi candidate points are selected by using a WiFi positioning algorithm, wherein the WiFi positioning algorithm includes but is not limited to: KNN algorithm, RF algorithm, SVM algorithm, etc.

6. The WiFi-based and vision-fusion-based indoor positioning method according to claim 5, wherein the step S4 specifically comprises:

s41: and screening candidate points of the image by taking the WiFi candidate points as constraint conditions,

|Lx_{j1} - x_i| + |Ly_{j1} - y_i| ≤ γ  (i = 1, 2, ..., N_s; j1 = 1, 2, ..., K)

where (Lx_{j1}, Ly_{j1}) and (x_i, y_i) respectively denote the position of a WiFi candidate point and the position of a fingerprint point, and γ is the condition threshold; Lx_{j1} is the X-axis coordinate of the j1-th WiFi candidate point and Ly_{j1} its Y-axis coordinate; x_i is the X-axis coordinate of the i-th fingerprint point and y_i its Y-axis coordinate;

s42: carrying out target recognition and segmentation on the image data of the point to be measured, recognizing and segmenting the moving obstacle, and removing the moving obstacle in the image data of the point to be measured;

s43: after recognition and segmentation, matching the image features of the point to be measured with the image features of the constrained fingerprint points, and selecting the S points with the highest similarity as image candidate points.

7. The WiFi and visual fusion based indoor positioning method of claim 5, wherein the step S5 comprises the following steps:

s51: in the training stage, assigning different weight values to the WiFi positioning result and the image positioning result: collecting information of N_T points to be measured; for each point to be measured, obtaining a WiFi positioning result with a WiFi positioning algorithm, including but not limited to: the KNN algorithm, RF algorithm, SVM algorithm, etc.; and simultaneously obtaining an image positioning result for each point with an image positioning algorithm, including but not limited to: the visual bag-of-words technique, feature matching technique, etc.; letting the WiFi positioning result of the i-th point to be measured be loc_w^i = (X_W^i, Y_W^i), the image positioning result be loc_p^i = (X_P^i, Y_P^i), the WiFi weight value be W_w and the image weight value be W_p; and satisfying the objective function:

min_{W_w, W_p} Σ_{i=1}^{N_T} || W_w · loc_w^i + W_p · loc_p^i - loc_i ||

the constraint conditions are as follows:

W_w + W_p = 1,  W_w, W_p ∈ R

where loc_i is the real position of the i-th point to be measured; R denotes the set of real numbers; solving the objective yields the values of W_w and W_p;

s52: in the online positioning stage, collecting WiFi data and image data at an unknown position; through the above steps S3 and S4, the K WiFi candidate points and S image candidate points are assigned different weight values, where the manner of assignment includes but is not limited to: similarity-based weight values, unsupervised learning methods and supervised learning methods; the WiFi weight value W_w, the image weight value W_p, the weight value of each candidate point and the fingerprint position mapped by each candidate point jointly determine the indoor positioning result.

Technical Field

The invention relates to the technical field of indoor positioning, in particular to an indoor positioning method based on WiFi and visual fusion.

Background

In the 21st century, WiFi has developed rapidly as a new technology. In all kinds of public places, WiFi signals are visible everywhere and convenient to use. At present, WiFi signals are mainly applied to data transmission and communication, and positioning through WiFi signals is rare. However, WiFi has almost no deployment cost: only a receiving device is needed to obtain its information. WiFi is not a visual measurement, so there is no visual aliasing, and WiFi signals have a certain tolerance to complex indoor environments, so they have very high research value when applied to indoor positioning technology. However, the indoor environment information acquired from WiFi signals alone lacks richness, so positioning accuracy and positioning stability need to be enhanced. Because the indoor environment is complicated and changeable, WiFi signals fluctuate with factors such as human motion, humidity, temperature and multipath propagation; WiFi positioning therefore cannot achieve centimeter-level accuracy, and the positioning accuracy of WiFi signals alone is about 2-3 meters.

While WiFi positioning technology is popular, visual positioning is also one of the key technologies for indoor positioning. Visual positioning requires no equipment deployment in indoor or outdoor environments, so its equipment cost is low, the environmental information it obtains is abundant, and its positioning accuracy is high; for these reasons it has received wide attention and developed rapidly. However, visual positioning has two fatal disadvantages: visual aliasing and computational expense. Particularly in large indoor environments such as airports, libraries and museums, the computational demand is a significant challenge. A positioning system that is low in cost, strong in positioning robustness, high in positioning accuracy and low in computational cost in large indoor environments is therefore needed.

Therefore, it is necessary to provide an indoor positioning method based on WiFi and visual fusion to solve the above problems.

Explanation of terms:

AP: i.e. a wireless access point.

Disclosure of Invention

Aiming at the above technical problems, the invention provides an efficient indoor positioning method integrating WiFi and vision.

The invention provides an indoor positioning method based on WiFi and visual fusion, which comprises the following steps:

s1: establishing grid fingerprint points in an indoor positioning area, simultaneously acquiring WiFi data and image data at the fingerprint points, preprocessing the WiFi data and the image data, and constructing an offline fingerprint database;

s2: taking any position in the positioning area as a point to be measured, collecting WiFi data and image data of the point, and simultaneously recording the real position of the point to be measured;

s3: matching WiFi data of the point to be measured with WiFi in an offline fingerprint library, and screening out a plurality of WiFi candidate points;

s4: matching the image of the point to be measured with the offline image data within the constraint, taking the WiFi candidate points as the constraint condition. Before image matching, moving obstacles such as pedestrians are removed from the images. Image candidate points are then screened out with an image positioning algorithm.

S5: corresponding weight values are respectively given to WiFi positioning and image positioning, then corresponding weight values are also given to WiFi candidate points and image candidate points, and indoor positioning is achieved through double-layer combined weight values.

Preferably, the step S1 includes the following steps:

s11: Grid fingerprint points of estimated size are set in the indoor positioning area, with a total of N_s fingerprint points and N_AP APs in the area. M WiFi samples and image data are collected simultaneously at each fingerprint point, the image data coming from different directions of the same point. Thus the set of indoor fingerprint points can be expressed as L = {L_1, L_2, ..., L_{N_s}}, and each fingerprint point as L_i = {RSS_i, I_i, o_i} (i = 1, 2, ..., N_s), where RSS_i = {rss_i^1, ..., rss_i^{N_AP}}, I_i = {I_i^1, ..., I_i^M} and o_i = {x_i, y_i} respectively denote the WiFi features, image features and real position information contained in each fingerprint point. rss_i^j denotes the WiFi feature of the j-th AP at the i-th fingerprint point. I_i^j denotes the j-th image feature of the i-th fingerprint point. x_i, y_i (i = 1, 2, ..., N_s) denote the position in the world coordinate system.

S12: processing the acquired WiFi data and the image data, removing bad values in the WiFi data, and removing images with high similarity in the image data.

S13: and converting the preprocessed WiFi data and image data into corresponding features for expression, wherein the WiFi features can be expressed by adopting a mean value, and the image features can be expressed by adopting a visual bag-of-words technology, so that an offline fingerprint database is constructed. Simultaneously training off-line image data and some public indoor image data sets to realize target detection, wherein the training algorithm comprises but is not limited to: R-CNN, fast R-CNN, and the like.

Preferably, the step S2 includes the following steps:

A point to be measured is set at any position in the indoor positioning environment; it may coincide with a fingerprint point or not. Several WiFi samples and image data are collected simultaneously and preprocessed in the same way to obtain the WiFi data and image data of the point to be measured. The real position of the point is also retained for subsequent positioning accuracy error analysis.

Preferably, the step S3 specifically includes:

s31: The WiFi data of the point to be measured are converted into WiFi features, the WiFi features of the point to be measured are then matched against the offline fingerprint database, and K WiFi candidate points are screened out with a WiFi positioning algorithm.

Preferably, the step S4 specifically includes:

s41: and screening candidate points of the image by taking the WiFi candidate points as constraint conditions,

|Lx_i - x_j| + |Ly_i - y_j| ≤ γ  (i = 1, 2, ..., K; j = 1, 2, ..., N_s)

where (Lx_i, Ly_i) and (x_j, y_j) respectively denote the position of a WiFi candidate point and the position of a fingerprint point, and γ is the condition threshold.

S42: because people flow in indoor positioning often, before image matching, indoor people are identified and segmented by the image data of the point to be measured.

S43: after identification and segmentation, matching the image characteristics of the points to be measured with the image characteristics of the constrained fingerprint points, and screening the first S points with the maximum similarity as image candidate points.

Preferably, the step S5 includes the following steps:

s51: Since WiFi positioning accuracy differs from visual positioning accuracy, different weight values are assigned to the WiFi positioning result and the image positioning result. Information of N_T points to be measured is collected and positioned with a WiFi positioning algorithm and an image positioning algorithm respectively; the WiFi positioning result is loc_w = (X_W, Y_W), the image positioning result is loc_p = (X_P, Y_P), the WiFi weight value is W_w and the image weight value is W_p, satisfying the objective function:

min_{W_w, W_p} Σ_{i=1}^{N_T} || W_w · loc_w^i + W_p · loc_p^i - loc_i ||

the constraint conditions are as follows:

W_w + W_p = 1

where loc_i is the real position of the i-th point to be measured; the weight values of WiFi positioning and image positioning can then be obtained by various methods.

S52: and then, corresponding weight values are given to the screened K WiFi candidate points and the S image candidate points, and an unsupervised learning method and a supervised learning method can be adopted, so that a final positioning result is realized.

Compared with the prior art, the indoor positioning method based on WiFi and vision fusion provided by the invention has the advantages that the image is identified and segmented by applying deep learning, the artificial interference in the indoor environment is effectively processed, and meanwhile, the WiFi and the vision are effectively fused by adopting a double-layer combined weight value to realize final positioning. The method has strong competitiveness in positioning robustness and positioning precision.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The invention provides an indoor positioning method based on WiFi and visual fusion, which comprises the following steps:

s1: establishing grid fingerprint points in an indoor positioning area, simultaneously acquiring WiFi data and image data at the fingerprint points, preprocessing the WiFi data and the image data, and constructing an offline fingerprint database.

Grid fingerprint points are set in the indoor positioning area, and WiFi data and image data are collected at every grid fingerprint point. A rotating collection mode is adopted so that the collected images come from different directions of each point, and the data are preprocessed accordingly to improve the quality of the input signals.

The step S1 specifically includes:

s11: setting grid fingerprint points with estimated sizes in the indoor positioning area, and setting a total of N in the indoor positioning areasA fingerprint point, NAPAnd the AP simultaneously acquires M WiFi data and image data at each fingerprint point, and the image data are derived from different directions of the same point. Thus, the set of indoor fingerprint points can be expressed as L ═ L1,L2,...,LNs}. Each fingerprint point may be represented as Li={RSSi,Ii,oi}(i=1,2,...,Ns). WhereinAnd oi={xi,yiAnd respectively representing WiFi characteristics, image characteristics and real position information contained in each fingerprint point.WiFi features of jth AP representing the ith fingerprint point collected.Representing the j image feature of the i fingerprint point acquired. x is the number ofi,yi(i=1,2,...,Ns) Representing the position in the world coordinate system.

S12: processing the acquired WiFi data and the image data, removing bad values in the WiFi data, and removing images with high similarity in the image data.

S13: and converting the preprocessed WiFi data and image data into corresponding features for expression, wherein the WiFi features can be expressed by adopting a mean value, and the image features can be expressed by adopting a visual bag-of-words technology, so that an offline fingerprint database is constructed. Simultaneously training off-line image data and some public indoor image data sets to realize target detection, wherein the training algorithm comprises but is not limited to: R-CNN, fast R-CNN, and the like.

S2: and taking any position in the positioning area as a point to be measured, acquiring WiFi data and image data of the point to be measured, and recording the real position of the point.

Information of a plurality of points to be tested needs to be collected, wherein one part of the information is used as a training set for acquiring WiFi positioning and image positioning weight values, and the other part of the information is used as a test set for testing the positioning performance of the experimental method.

The step S2 specifically includes:

the method comprises the steps of setting a point to be measured at any position in an indoor positioning environment, wherein the point to be measured can be a fingerprint point position or a non-fingerprint point position, simultaneously collecting a plurality of WiFi data and image data, and obtaining the WiFi data and the image data of the point to be measured through the same preprocessing mode. And the real data of the point to be measured is also reserved for subsequent positioning accuracy error analysis.

S3: and matching the WiFi data of the point to be measured with WiFi in an offline fingerprint database, and screening a plurality of WiFi candidate points.

In the WiFi positioning stage, on one hand, the computational cost of WiFi positioning is low, so its result can serve as a constraint condition for image positioning, reducing the computational cost of image positioning and shortening indoor positioning time. On the other hand, the WiFi candidate points obtained provide the basis for subsequent fusion positioning.

The step S3 specifically includes:

s31: The WiFi data of the point to be measured are converted into WiFi features, the WiFi features of the point to be measured are then matched against the offline fingerprint database, and K WiFi candidate points are screened out with a WiFi positioning algorithm.
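Step S31 can be illustrated with a plain KNN screen over mean-RSS vectors. The function below is a hypothetical sketch, not the exact algorithm of the invention, which may equally use an RF or SVM model.

```python
import numpy as np

def wifi_candidates(query_rss, fingerprint_rss, k=3):
    """Return indices of the K fingerprint points whose mean-RSS vectors
    are closest (Euclidean distance) to the query's mean-RSS vector."""
    fp = np.asarray(fingerprint_rss, dtype=float)
    d = np.linalg.norm(fp - np.asarray(query_rss, dtype=float), axis=1)
    return np.argsort(d)[:k]
```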

S4: and matching the image of the point to be measured with the offline image data in the constraint condition by taking the WiFi candidate point as the constraint condition. Before image matching, moving obstacles such as pedestrians in the images are removed. And then screening out image candidate points by adopting an image positioning algorithm.

And in the image positioning stage, the interference of artificial movement in indoor positioning is eliminated by adopting a deep learning method, the robustness of the indoor positioning is improved, and meanwhile, image candidate points are screened out from the constrained WiFi candidate points by adopting a feature matching algorithm.

The step S4 includes the following steps:

s41: and screening candidate points of the image by taking the WiFi candidate points as constraint conditions,

|Lx_i - x_j| + |Ly_i - y_j| ≤ γ  (i = 1, 2, ..., K; j = 1, 2, ..., N_s)

where (Lx_i, Ly_i) and (x_j, y_j) respectively denote the position of a WiFi candidate point and the position of a fingerprint point, and γ is the condition threshold.
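The constraint above can be applied as a simple filter. The sketch below assumes candidate and fingerprint positions are (x, y) tuples and keeps any fingerprint point within Manhattan distance γ of at least one WiFi candidate.

```python
def constrained_fingerprints(candidate_pos, fingerprint_pos, gamma):
    """Keep the indices of fingerprint points whose Manhattan distance to
    ANY WiFi candidate point is within gamma (the condition threshold)."""
    kept = []
    for j, (xj, yj) in enumerate(fingerprint_pos):
        for (lx, ly) in candidate_pos:
            if abs(lx - xj) + abs(ly - yj) <= gamma:
                kept.append(j)
                break  # one satisfying candidate is enough to keep point j
    return kept
```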

S42: because people flow in the indoor positioning process, before image matching, the image data of the point to be measured is subjected to target recognition and segmentation, and indoor people are recognized and segmented.

S43: after identification and segmentation, matching the image characteristics of the points to be measured with the image characteristics of the constrained fingerprint points, and screening the first S points with the maximum similarity as image candidate points.

S5: corresponding weight values are respectively given to WiFi positioning and image positioning, then corresponding weight values are also given to WiFi candidate points and image candidate points, and indoor positioning is achieved through double-layer combined weight values.

And the WiFi and the image candidate points are effectively fused, so that the indoor positioning precision is improved.

The step S5 includes the following steps:

s51: Since WiFi positioning accuracy differs from visual positioning accuracy, different weight values are assigned to the WiFi positioning result and the image positioning result. Information of N_T points to be measured is collected and positioned with a WiFi positioning algorithm and an image positioning algorithm respectively; the WiFi positioning result is loc_w = (X_W, Y_W), the image positioning result is loc_p = (X_P, Y_P), the WiFi weight value is W_w and the image weight value is W_p, satisfying the objective function:

min_{W_w, W_p} Σ_{i=1}^{N_T} || W_w · loc_w^i + W_p · loc_p^i - loc_i ||

the constraint conditions are as follows:

W_w + W_p = 1

where loc_i is the real position of the i-th point to be measured; the weight values of WiFi positioning and image positioning can then be obtained by various methods.
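With the constraint W_p = 1 - W_w substituted into the objective, the training-stage weights reduce to a one-dimensional least-squares problem with a closed form. The sketch below illustrates one of the "various methods" mentioned above; it is an assumed formulation, not the invention's prescribed solver.

```python
import numpy as np

def fit_weights(wifi_est, img_est, truth):
    """Solve min sum_i ||W_w*loc_w^i + W_p*loc_p^i - loc_i||^2 subject to
    W_w + W_p = 1, by substituting W_p = 1 - W_w (closed-form 1-D fit)."""
    w = np.asarray(wifi_est, dtype=float)   # (N_T, 2) WiFi estimates
    p = np.asarray(img_est, dtype=float)    # (N_T, 2) image estimates
    t = np.asarray(truth, dtype=float)      # (N_T, 2) real positions
    a = w - p  # direction along which W_w trades off the two estimates
    b = t - p
    ww = float((a * b).sum() / (a * a).sum())
    return ww, 1.0 - ww
```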

S52: and then, corresponding weight values are given to the screened K WiFi candidate points and the S image candidate points, and an unsupervised learning method and a supervised learning method can be adopted, so that a final positioning result is realized.

Compared with the prior art, the indoor positioning method based on WiFi and vision fusion provided by the invention has the advantages that the image is identified and segmented by applying deep learning, the artificial interference in the indoor environment is effectively processed, and meanwhile, the WiFi and the vision are effectively fused by adopting a double-layer combined weight value to realize final positioning. The method has strong competitiveness in positioning robustness and positioning precision.

While the foregoing is directed to embodiments of the present invention, it will be understood by those skilled in the art that various changes may be made without departing from the spirit and scope of the invention.
