Navigation method, device, equipment and storage medium

Document No.: 499138. Published: 2022-01-07.

Note: This technology, "Navigation method, device, equipment and storage medium", was designed and created by Xiao Ling and Mei Huaibo on 2021-09-30. Abstract: The present disclosure provides a navigation method, apparatus, device, and storage medium, relating to the field of artificial intelligence technology, in particular to intelligent transportation and deep learning, and applicable to map navigation scenarios. The method comprises: generating a navigation route based on a navigation start position and a navigation end position; acquiring the navigation scene in which a terminal device is located while navigating based on the navigation route; in response to determining that the navigation scene is a preset target navigation scene, acquiring special effect data of a virtual navigator matching the target navigation scene; and displaying the special effect data of the virtual navigator through the terminal device. The present disclosure thereby provides a more vivid and intuitive navigation method, improving navigation efficiency and accuracy.

1. A navigation method, comprising:

generating a navigation route based on a navigation start position and a navigation end position;

acquiring a navigation scene in which a terminal device is located while navigating based on the navigation route;

in response to determining that the navigation scene is a preset target navigation scene, acquiring special effect data of a virtual navigator matching the target navigation scene;

and displaying the special effect data of the virtual navigator through the terminal device.

2. The method of claim 1, further comprising:

acquiring a real-time position of the terminal device;

acquiring a current road image collected by a sensor in the terminal device;

and displaying, on the terminal device, the navigation route combined with a live road view based on the real-time position, the current road image, and a pre-constructed map.

3. The method according to claim 2, wherein the acquiring of the navigation scene in which the terminal device is located while navigating based on the navigation route comprises:

determining a navigation position to which the real-time position corresponds in the navigation route displayed by the terminal device;

and determining the navigation scene based on the navigation position.

4. The method of any of claims 1-3, wherein the target navigation scene comprises at least one of: a navigation start scene, a straight-ahead scene, a scene of reaching an inflection point position in the navigation route, a scene of reaching a preset facility in the navigation route, and a navigation end scene.

5. The method of claim 4, wherein, in a case where the target navigation scene is the scene of reaching an inflection point position in the navigation route, the method further comprises:

determining a landmark building at the inflection point position;

determining a target position of the landmark building in the navigation route displayed by the terminal device;

highlighting the target position; and

wherein the displaying of the special effect data of the virtual navigator through the terminal device comprises:

displaying the special effect data of the virtual navigator at the target position.

6. The method of claim 4, wherein the acquiring of special effect data of a virtual navigator matching the target navigation scene in response to determining that the navigation scene is a preset target navigation scene comprises at least one of:

in response to the navigation scene being the navigation start scene, acquiring an entrance animation of the virtual navigator;

in response to the navigation scene being the straight-ahead scene, acquiring a forward animation of the virtual navigator;

in response to the navigation scene being the scene of reaching an inflection point position in the navigation route, acquiring an inflection point guidance animation of the virtual navigator;

in response to the navigation scene being the scene of reaching a preset facility in the navigation route, acquiring a facility guidance animation of the virtual navigator;

and in response to the navigation scene being the navigation end scene, acquiring an exit animation of the virtual navigator.

7. A navigation device, comprising:

a generation module configured to generate a navigation route based on a navigation start position and a navigation end position;

a first acquisition module configured to acquire a navigation scene in which a terminal device is located while navigating based on the navigation route;

a second acquisition module configured to, in response to determining that the navigation scene is a preset target navigation scene, acquire special effect data of a virtual navigator matching the target navigation scene;

and a first display module configured to display the special effect data of the virtual navigator through the terminal device.

8. The apparatus of claim 7, further comprising:

a third acquisition module configured to acquire a real-time position of the terminal device;

a fourth acquisition module configured to acquire a current road image collected by a sensor in the terminal device;

and a second display module configured to display, on the terminal device, the navigation route combined with a live road view based on the real-time position, the current road image, and a pre-constructed map.

9. The apparatus of claim 8, wherein the first acquisition module comprises:

a first determination submodule configured to determine a navigation position to which the real-time position corresponds in the navigation route displayed by the terminal device;

and a second determination submodule configured to determine the navigation scene based on the navigation position.

10. The apparatus of any of claims 7-9, wherein the target navigation scene comprises at least one of: a navigation start scene, a straight-ahead scene, a scene of reaching an inflection point position in the navigation route, a scene of reaching a preset facility in the navigation route, and a navigation end scene.

11. The apparatus of claim 10, wherein, in a case where the target navigation scene is the scene of reaching an inflection point position in the navigation route, the apparatus further comprises:

a first determination module configured to determine a landmark building at the inflection point position;

a second determination module configured to determine a target position of the landmark building in the navigation route displayed by the terminal device;

a display module configured to highlight the target position; and

wherein the first display module comprises:

a display submodule configured to display the special effect data of the virtual navigator at the target position.

12. The apparatus of claim 10, wherein the second acquisition module comprises at least one of:

a first acquisition submodule configured to acquire an entrance animation of the virtual navigator in response to the navigation scene being the navigation start scene;

a second acquisition submodule configured to acquire a forward animation of the virtual navigator in response to the navigation scene being the straight-ahead scene;

a third acquisition submodule configured to acquire an inflection point guidance animation of the virtual navigator in response to the navigation scene being the scene of reaching an inflection point position in the navigation route;

a fourth acquisition submodule configured to acquire a facility guidance animation of the virtual navigator in response to the navigation scene being the scene of reaching a preset facility in the navigation route;

and a fifth acquisition submodule configured to acquire an exit animation of the virtual navigator in response to the navigation scene being the navigation end scene.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.

14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.

15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.

Technical Field

The present disclosure relates to the field of artificial intelligence technologies, in particular to intelligent transportation and deep learning technologies, and more particularly to a navigation method, apparatus, device, and storage medium, which can be applied to map navigation scenarios.

Background

Current navigation schemes generally perform a voice broadcast at preset maneuver points, and the broadcast timing depends heavily on satellite signals. This approach suffers from poor intuitiveness and limited navigation information, among other drawbacks, resulting in low navigation efficiency.

Disclosure of Invention

The disclosure provides a navigation method, apparatus, device and storage medium.

According to a first aspect of the present disclosure, there is provided a navigation method comprising: generating a navigation route based on a navigation start position and a navigation end position; acquiring a navigation scene in which a terminal device is located while navigating based on the navigation route; in response to determining that the navigation scene is a preset target navigation scene, acquiring special effect data of a virtual navigator matching the target navigation scene; and displaying the special effect data of the virtual navigator through the terminal device.

According to a second aspect of the present disclosure, there is provided a navigation device comprising: a generation module configured to generate a navigation route based on a navigation start position and a navigation end position; a first acquisition module configured to acquire a navigation scene in which a terminal device is located while navigating based on the navigation route; a second acquisition module configured to, in response to determining that the navigation scene is a preset target navigation scene, acquire special effect data of a virtual navigator matching the target navigation scene; and a first display module configured to display the special effect data of the virtual navigator through the terminal device.

According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.

According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.

According to a fifth aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method as described in any of the implementations of the first aspect.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is an exemplary system architecture diagram to which the present disclosure may be applied;

FIG. 2 is a flow diagram of one embodiment of a navigation method according to the present disclosure;

FIG. 3 is a flow chart of another embodiment of a navigation method according to the present disclosure;

FIG. 4 is a flow chart of yet another embodiment of a navigation method according to the present disclosure;

FIG. 5 is a schematic structural diagram of one embodiment of a navigation device according to the present disclosure;

FIG. 6 is a block diagram of an electronic device for implementing a navigation method of an embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 shows an exemplary system architecture 100 to which embodiments of the navigation method or navigation device of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.

A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or transmit information or the like. Various client applications may be installed on the terminal devices 101, 102, 103.

The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices described above and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. This is not specifically limited herein.

The server 105 may provide various services. For example, the server 105 may analyze and process the navigation start position and the navigation end position acquired from the terminal devices 101, 102, 103, and generate a processing result (e.g., a navigation route).

The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. This is not specifically limited herein.

It should be noted that the navigation method provided by the embodiment of the present disclosure is generally executed by the server 105, and accordingly, the navigation device is generally disposed in the server 105.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to FIG. 2, a flow 200 of one embodiment of a navigation method according to the present disclosure is shown. The navigation method comprises the following steps:

Step 201, generating a navigation route based on the navigation start position and the navigation end position.

In this embodiment, the execution subject of the navigation method (for example, the server 105 shown in fig. 1) may generate the navigation route based on the navigation start position and the navigation end position. First, the execution subject acquires the navigation start position and the navigation end position. The navigation start position may be the current position point of the terminal device, in which case the execution subject acquires the current position point of the terminal device and uses it as the start of the navigation route; the navigation start position may also be a start position point input by the user and received through the terminal device. The navigation end position may be an end position point input by the user and received through the terminal device.

Then, the execution subject generates the navigation route based on the acquired navigation start position and navigation end position. The specific method for generating the navigation route can be implemented with existing techniques and is not described here again.

Step 202, acquiring the navigation scene in which the terminal device is located while navigating based on the navigation route.

In this embodiment, the execution subject may acquire the navigation scene in which the terminal device is located during navigation based on the navigation route. After the navigation route is generated in step 201, the execution subject sends the navigation route to the terminal device, so that the terminal device displays the navigation route and navigates based on it.

The execution subject may then acquire the navigation scene in which the terminal device is located during this navigation. For example, the execution subject may obtain, in real time, position information collected by a GPS (Global Positioning System) module in the terminal device and map that real-time position onto the navigation route, that is, display it at the corresponding location in the route. In this way, the navigation scene of the terminal device, i.e., its navigation state during navigation (for example, navigation starting or navigation ending), can be determined.
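The disclosure leaves the mapping from a real-time GPS fix to a navigation scene unspecified. The following is a minimal Python sketch, assuming the navigation route is a polyline of (lat, lon) points and the scene is derived from the index of the nearest route point; the function names, scene labels, and planar distance metric are illustrative assumptions, not the patent's method.

    import math

    def _planar_dist(p, q):
        # Crude planar distance between two (lat, lon) points; a real
        # system would use a geodesic distance instead.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def match_to_route(realtime_pos, route_points):
        """Snap the device's real-time GPS fix to the nearest route point
        and return its index -- the 'navigation position' on the route."""
        return min(range(len(route_points)),
                   key=lambda i: _planar_dist(realtime_pos, route_points[i]))

    def determine_scene(nav_index, route_points, inflection_indices):
        """Derive a coarse navigation scene from the navigation position."""
        if nav_index == 0:
            return "navigation_start"
        if nav_index == len(route_points) - 1:
            return "navigation_end"
        if nav_index in inflection_indices:
            return "inflection_point"
        return "straight_ahead"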

Step 203, in response to determining that the navigation scene is the preset target navigation scene, special effect data of the virtual navigator matched with the target navigation scene is acquired.

In this embodiment, when determining that the navigation scene is a preset target navigation scene, the execution subject may acquire special effect data of the virtual navigator matching that scene. Target navigation scenes, i.e., navigation scenes commonly encountered during navigation, such as a navigation start scene, a navigation end scene, and a turning scene, are set in advance according to actual conditions, and corresponding special effect data of the virtual navigator is preset for each target navigation scene. The virtual navigator is a preset virtual character that guides the user forward along the navigation route, and the special effect data may be special effect animations or voice information. When determining that the navigation scene is a preset target navigation scene, the execution subject acquires the special effect data of the virtual navigator corresponding to that scene.

As an example, when determining that the navigation scene is the preset navigation start scene, the execution subject acquires the entrance animation of the virtual navigator matching the navigation start scene. Optionally, the special effect data in this scene may further include the voice prompt data: "Please follow me, let's go!"

As another example, when determining that the navigation scene is the preset navigation end scene, the execution subject acquires the exit animation of the virtual navigator matching the navigation end scene. Optionally, the special effect data in this scene may further include the voice prompt data: "You have arrived at your destination. Navigation has ended; thank you for using it."
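One plausible way to realize the preset association between target navigation scenes and special effect data described above is a simple lookup table. The registry below is a sketch under that assumption; the keys, animation identifiers, and data structure are illustrative, not the patent's actual data model.

    # Hypothetical registry pairing each preset target navigation scene
    # with the virtual navigator's effect data (animation id, optional voice).
    SCENE_EFFECTS = {
        "navigation_start": {"animation": "navigator_entrance",
                             "voice": "Please follow me, let's go!"},
        "straight_ahead":   {"animation": "navigator_forward",
                             "voice": "Please keep going straight."},
        "inflection_point": {"animation": "navigator_turn_guide"},
        "preset_facility":  {"animation": "navigator_facility_guide"},
        "navigation_end":   {"animation": "navigator_exit",
                             "voice": "You have arrived at your destination. "
                                      "Navigation has ended; thank you."},
    }

    def get_effect_data(scene):
        # Non-target scenes return None, i.e., no special effect is shown.
        return SCENE_EFFECTS.get(scene)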

Step 204, displaying the special effect data of the virtual navigator through the terminal device.

In this embodiment, the execution subject may display the special effect data acquired in step 203 through the terminal device, so as to present the navigation route and navigation state more intuitively and vividly. For example, at the start of navigation, the entrance animation of the virtual navigator is shown at the start point of the navigation route; when navigation ends, the exit animation of the virtual navigator is shown at the end point of the navigation route; and while the user proceeds along the navigation route, the special effect data of the virtual navigator is shown at the corresponding position of the route based on the real-time position information of the terminal device.

The navigation method provided by this embodiment of the present disclosure first generates a navigation route based on the navigation start position and the navigation end position; then acquires the navigation scene in which the terminal device is located during navigation based on the route; then, in response to determining that the navigation scene is a preset target navigation scene, acquires special effect data of the virtual navigator matching that scene; and finally displays the special effect data of the virtual navigator through the terminal device. Because the virtual navigator guides the user forward along the navigation route on the terminal device, the user does not need to work out the navigation direction or position and only needs to follow the virtual navigator. This improves the visibility of navigation and thus navigation efficiency and accuracy.

In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved comply with the provisions of relevant laws and regulations and do not violate public order and good morals.

With continued reference to fig. 3, fig. 3 illustrates a flow 300 of another embodiment of a navigation method according to the present disclosure. The navigation method comprises the following steps:

Step 301, generating a navigation route based on the navigation start position and the navigation end position.

Step 301 is substantially the same as step 201 in the foregoing embodiment; for its specific implementation, refer to the foregoing description of step 201, which is not repeated here.

Step 302, acquiring a real-time position of the terminal device.

In this embodiment, the execution subject of the navigation method (for example, the server 105 shown in fig. 1) may acquire the real-time position of the terminal device, for example, from real-time position information collected by a GPS module built into the terminal device. The real-time position may also be obtained in other ways, which is not specifically limited in this embodiment.

Step 303, acquiring the current road image collected by a sensor in the terminal device.

In this embodiment, the execution subject may further acquire the current road image collected by a sensor built into the terminal device. The built-in sensor can collect the current road image in real time, and the execution subject can acquire that image from the sensor in real time.

Step 304, displaying the navigation route combined with the live road view on the terminal device based on the real-time position, the current road image, and the pre-constructed map.

In this embodiment, the execution subject may display the navigation route combined with the live road view on the terminal device based on the real-time position obtained in step 302, the current road image obtained in step 303, and the pre-constructed map. For example, the real-time position is matched against points in the pre-constructed map to determine the corresponding position point in the map, and the heading is then identified based on the current road image, so that the route to be traveled (i.e., the navigation route) is overlaid on the live view on the terminal device. In this way, the navigation route is displayed together with the live road view, providing the user with more intuitive and vivid navigation information.

Step 305, determining the navigation position to which the real-time position corresponds in the navigation route displayed by the terminal device.

In this embodiment, the execution subject may determine the navigation position in the displayed navigation route to which the real-time position of the terminal device corresponds; that is, the execution subject may match the real-time position of the terminal device against position points in the pre-constructed map, thereby determining the corresponding navigation position in the displayed navigation route.

Step 306, determining a navigation scene based on the navigation position.

In this embodiment, the execution subject may determine the navigation scene based on the navigation position determined in step 305. For example, if the determined navigation position is the start position point of the navigation route, the navigation scene may be determined to be the navigation start scene; if the determined navigation position is the end position point of the navigation route, the navigation scene may be determined to be the navigation end scene. Determining the navigation scene in this way ensures its timeliness and accuracy.

Step 307, in response to determining that the navigation scene is the preset target navigation scene, obtaining special effect data of the virtual navigator matched with the target navigation scene.

In this embodiment, when determining that the navigation scene is a preset target navigation scene, the execution subject may acquire special effect data of the virtual navigator matching that scene. Step 307 is substantially the same as step 203 in the foregoing embodiment; for its specific implementation, refer to the foregoing description of step 203, which is not repeated here.

In some optional implementations of this embodiment, the target navigation scene includes, but is not limited to, at least one of: a navigation start scene, a straight-ahead scene, a scene of reaching an inflection point position in the navigation route, a scene of reaching a preset facility in the navigation route, and a navigation end scene.

Specifically, the navigation start scene is the scene when the user starts to navigate based on the navigation route. For example, in response to the user triggering a navigation start event, it may be determined that the terminal device is in the navigation start scene. As another example, in response to detecting that the position of the terminal device is the start position of the navigation route, it may also be determined that the terminal device is in the navigation start scene.

The straight-ahead scene is a scene in which the user keeps going straight along the navigation route.

The scene of reaching an inflection point position in the navigation route is the scene in which the user reaches an inflection point of the route. For example, based on the current position of the terminal device and the position of an inflection point included in the generated navigation route, it may be determined whether the distance between the current position and the inflection point position is less than a preset distance threshold; if it is, the terminal device can be determined to be in the scene of reaching an inflection point position in the navigation route (a minimal sketch of this check follows these scene descriptions).

The scene of reaching a preset facility in the navigation route is the scene in which the user reaches a preset facility on the route, where the preset facility may be a bridge, a slope, an elevator, an escalator, or the like.

The navigation end scene is the scene when the user reaches the end point of the navigation route.
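The inflection-point condition described above compares the device's distance to the inflection point against a preset threshold. Below is a minimal sketch of that check; the great-circle distance metric and the 20-metre default threshold are assumptions, since the disclosure specifies neither.

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(p, q):
        # Great-circle distance in metres between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))

    def at_inflection_point(current_pos, inflection_pos, threshold_m=20.0):
        # The device has "reached" the inflection point when its distance
        # to that point falls below the preset threshold.
        return haversine_m(current_pos, inflection_pos) < threshold_m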

In some optional implementations of this embodiment, step 307 includes, but is not limited to, at least one of: in response to the navigation scene being the navigation start scene, acquiring the entrance animation of the virtual navigator; in response to the navigation scene being the straight-ahead scene, acquiring the forward animation of the virtual navigator; in response to the navigation scene being the scene of reaching an inflection point position in the navigation route, acquiring the inflection point guidance animation of the virtual navigator; in response to the navigation scene being the scene of reaching a preset facility in the navigation route, acquiring the facility guidance animation of the virtual navigator; and in response to the navigation scene being the navigation end scene, acquiring the exit animation of the virtual navigator.

In this implementation, since corresponding special effect data is preset for each target navigation scene, when the execution subject determines that the navigation scene is a preset target navigation scene, it acquires the special effect data of the virtual navigator matching that scene.

When the navigation scene is determined to be the navigation start scene, the entrance animation of the virtual navigator can be acquired, which may further include the voice prompt data: "Please follow me, let's go!"

When the navigation scene is determined to be the straight-ahead scene, the forward animation of the virtual navigator is acquired. At this time, the virtual navigator can walk along the navigation route, its motion may be a flying or jumping gesture, and the forward animation may further include the voice prompt data: "Please keep going straight."

When the navigation scene is determined to be the scene of reaching an inflection point position in the navigation route, the inflection point guidance animation of the virtual navigator is acquired.

When the navigation scene is determined to be the scene of reaching a preset facility in the navigation route, the facility guidance animation of the virtual navigator is acquired.

When the navigation scene is determined to be the navigation end scene, the exit animation of the virtual navigator can be acquired, which may also include the voice prompt data: "You have arrived at your destination. Navigation has ended; thank you for using it."

By acquiring special effect data of the virtual navigator matching the target navigation scene and displaying it through the terminal device, richer and more intuitive navigation information can be provided to the user, improving navigation efficiency.

Step 308, displaying the special effect data of the virtual navigator through the terminal device.

Step 308 is substantially the same as step 204 in the foregoing embodiment; for its specific implementation, refer to the foregoing description of step 204, which is not repeated here.

As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the navigation method in this embodiment first generates a navigation route based on the navigation start position and the navigation end position; it then acquires the real-time position of the terminal device and the current road image collected by a sensor in the terminal device, and displays the navigation route combined with the live road view on the terminal device based on the real-time position, the current road image, and the pre-constructed map. Next, it determines the navigation position to which the real-time position corresponds in the displayed navigation route and determines the navigation scene based on that position. Finally, in response to determining that the navigation scene is a preset target navigation scene, it acquires special effect data of the virtual navigator matching that scene and displays the special effect data through the terminal device. By acquiring and displaying special effect data matched to the target navigation scene, this method provides the user with richer, more intuitive navigation information and improves navigation efficiency.

With continued reference to fig. 4, fig. 4 illustrates a flow 400 of yet another embodiment of a navigation method according to the present disclosure. The navigation method comprises the following steps:

Step 401, generating a navigation route based on the navigation start position and the navigation end position.

Step 402, acquiring the navigation scene in which the terminal device is located while navigating based on the navigation route.

Steps 401-402 are substantially the same as steps 201-202 in the foregoing embodiment; for their specific implementation, refer to the foregoing descriptions of steps 201-202, which are not repeated here.

Step 403, in response to determining that the navigation scene is the scene of reaching an inflection point position in the navigation route, determining a landmark building at the inflection point position.

In this embodiment, when determining that the navigation scene is the scene of reaching an inflection point position in the navigation route, the execution subject of the navigation method (for example, the server 105 shown in fig. 1) may obtain a landmark building at the inflection point position from the pre-constructed map, in which a number of landmark buildings are marked, and use that building as the landmark for the inflection point position. For example, if the execution subject finds a "KFC" at the inflection point position in the pre-constructed map, the "KFC" is taken as the landmark building at the inflection point position.

Step 404, determining the target position of the landmark building in the navigation route displayed by the terminal device.

In this embodiment, after determining the landmark building at the inflection point position, the execution subject determines the position of the landmark building in the navigation route displayed by the terminal device and records it as the target position.

Step 405, highlighting the target position.

In this embodiment, the execution subject highlights the target position determined in step 404. For example, if the landmark building is the "KFC", the execution subject determines the target position of the "KFC" in the navigation route displayed by the terminal device and highlights that position.

Step 406, acquiring special effect data of the virtual navigator matching the scene of reaching the inflection point position in the navigation route.

In this embodiment, since the target navigation scene is determined to be the scene of reaching an inflection point position in the navigation route, the execution subject may acquire special effect data of the virtual navigator matching this scene. The special effect data may be a waving animation of the virtual navigator and may further include the voice prompt: "In 100 meters, turn left at the KFC."

Step 407, displaying the special effect data of the virtual navigator at the target position.

In this embodiment, the special effect data of the virtual navigator can be displayed at the target position of the navigation route on the terminal device. For example, the waving animation of the virtual navigator is shown beside the "KFC" in the navigation route, and the voice prompt is broadcast.
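Taken together, steps 403-407 can be sketched as a single routine that looks up the landmark marked at the inflection point in the pre-constructed map and produces the display actions; the landmark index, action dictionary, and prompt wording below are illustrative assumptions.

    def inflection_guidance(inflection_pos, landmark_index, distance_m, turn):
        """Sketch of steps 403-407: find the landmark building at the
        inflection point, then highlight its target position, show the
        navigator's waving animation there, and broadcast a left/right
        voice prompt naming the landmark."""
        landmark = landmark_index.get(inflection_pos)  # e.g. "KFC"
        if landmark is None:
            return None  # no landmark marked at this inflection point
        return {
            "highlight": inflection_pos,    # target position to highlight
            "animation": "navigator_wave",  # effect shown beside the landmark
            "voice": f"In {distance_m} meters, turn {turn} at the {landmark}.",
        }

    # Example usage with a one-entry landmark index from the map data:
    actions = inflection_guidance((39.915, 116.404),
                                  {(39.915, 116.404): "KFC"}, 100, "left")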

As can be seen from fig. 4, compared with the embodiment corresponding to fig. 3, the navigation method in this embodiment highlights the step of guiding the user forward based on the landmark building at the inflection point position. Because the broadcast is made at the inflection-point landmark building, the user only needs to remember the landmark building and does not need to study the detailed navigation route or listen to detailed voice broadcasts. In addition, the navigation method in this embodiment does not depend on satellite signals for broadcast timing, and directions are broadcast in terms of left and right, making the broadcast information more accurate and improving navigation efficiency.

With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a navigation device, which corresponds to the method embodiment shown in fig. 2 and is particularly applicable to various electronic devices.

As shown in fig. 5, the navigation device 500 of this embodiment includes: a generation module 501, a first acquisition module 502, a second acquisition module 503, and a first display module 504. The generation module 501 is configured to generate a navigation route based on a navigation start position and a navigation end position; the first acquisition module 502 is configured to acquire a navigation scene in which the terminal device is located while navigating based on the navigation route; the second acquisition module 503 is configured to, in response to determining that the navigation scene is a preset target navigation scene, acquire special effect data of a virtual navigator matching the target navigation scene; and the first display module 504 is configured to display the special effect data of the virtual navigator through the terminal device.

For the specific processing and technical effects of the generation module 501, the first acquisition module 502, the second acquisition module 503, and the first display module 504 in the navigation device 500, reference may be made to the descriptions of steps 201-204 in the embodiment corresponding to fig. 2, which are not repeated here.

In some optional implementations of this embodiment, the navigation device 500 further includes: a third acquisition module configured to acquire the real-time position of the terminal device; a fourth acquisition module configured to acquire the current road image collected by a sensor in the terminal device; and a second display module configured to display, on the terminal device, the navigation route combined with the live road view based on the real-time position, the current road image, and the pre-constructed map.

In some optional implementations of this embodiment, the first acquisition module includes: a first determination submodule configured to determine the navigation position to which the real-time position corresponds in the navigation route displayed by the terminal device; and a second determination submodule configured to determine the navigation scene based on the navigation position.

In some optional implementations of this embodiment, the target navigation scene includes at least one of: a navigation start scene, a straight-ahead scene, a scene of reaching an inflection point position in the navigation route, a scene of reaching a preset facility in the navigation route, and a navigation end scene.

In some optional implementations of this embodiment, in a case where the target navigation scene is the scene of reaching an inflection point position in the navigation route, the navigation device 500 further includes: a first determination module configured to determine a landmark building at the inflection point position; a second determination module configured to determine the target position of the landmark building in the navigation route displayed by the terminal device; and a display module configured to highlight the target position. The first display module includes: a display submodule configured to display the special effect data of the virtual navigator at the target position.

In some optional implementations of this embodiment, the second acquisition module includes at least one of: a first acquisition submodule configured to acquire the entrance animation of the virtual navigator in response to the navigation scene being the navigation start scene; a second acquisition submodule configured to acquire the forward animation of the virtual navigator in response to the navigation scene being the straight-ahead scene; a third acquisition submodule configured to acquire the inflection point guidance animation of the virtual navigator in response to the navigation scene being the scene of reaching an inflection point position in the navigation route; a fourth acquisition submodule configured to acquire the facility guidance animation of the virtual navigator in response to the navigation scene being the scene of reaching a preset facility in the navigation route; and a fifth acquisition submodule configured to acquire the exit animation of the virtual navigator in response to the navigation scene being the navigation end scene.

Embodiments of the present disclosure also provide an electronic device, a readable storage medium, and a computer program product.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the methods and processes described above, such as the navigation method. For example, in some embodiments, the navigation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the navigation method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the navigation method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
