Image output method and apparatus for navigation, medium, device, and vehicle

Document No.: 65631    Published: 2021-10-01

Reading note: This technology, "Image output method and apparatus for navigation, medium, device, and vehicle", was designed and created by 王永力, 王平原, and 唐俊 on 2020-03-31. Its main content is as follows: The present disclosure relates to an image output method and apparatus for navigation, a medium, a device, and a vehicle. The method comprises the following steps: acquiring positioning information of a predetermined object, and determining an environment image of the predetermined object according to the positioning information; acquiring the distance between the predetermined object and a measured object around the predetermined object; determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time; and marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object. In this way, moving environmental information beyond fixed buildings can be output in real time in the navigation electronic map, providing the driver with accurate driving guidance and thereby reducing driving accidents.

1. An image output method for navigation, characterized in that the method comprises:

acquiring positioning information of a predetermined object, and determining an environment image of the predetermined object according to the positioning information;

acquiring the distance between the predetermined object and a measured object around the predetermined object;

determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time;

and marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object.

2. The image output method for navigation according to claim 1, wherein the correspondence is obtained by:

acquiring the distances between the measured object and the predetermined object at a plurality of consecutive time points to obtain a plurality of data sets, wherein each data set comprises a time point and the corresponding distance;

and determining the correspondence from the plurality of data sets.

3. The image output method for navigation according to claim 1, wherein determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time comprises:

inputting the correspondence of the distance between the predetermined object and the measured object over time into a preset neural network model to obtain the type of the measured object.

4. The image output method for navigation according to claim 1, wherein the distance between the predetermined object and the measured object is detected by a plurality of ultrasonic sensors, and correspondingly, marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object, comprises:

determining the position of the measured object according to the distances between the predetermined object and the measured object detected by the plurality of ultrasonic sensors and the positions of the plurality of ultrasonic sensors;

and marking the measured object in the environment image according to the position and the type of the measured object, and outputting the environment image marked with the measured object.

5. The image output method for navigation according to claim 4, wherein the measured object is marked in the form of an icon, and different types correspond to different icons.

6. The image output method for navigation according to any one of claims 1 to 5, wherein the predetermined object is a vehicle or a drone, and the types include a vehicle, a bicycle, and a pedestrian.

7. An image output apparatus for navigation, characterized in that the apparatus comprises:

a first determining module, configured to acquire positioning information of a predetermined object and determine an environment image of the predetermined object according to the positioning information;

an acquiring module, configured to acquire the distance between the predetermined object and a measured object around the predetermined object;

a second determining module, configured to determine the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time;

and an output module, configured to mark the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and to output the environment image marked with the measured object.

8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the image output method for navigation according to any one of claims 1 to 6.

9. An electronic device, comprising:

a memory having a computer program stored thereon;

a processor for executing the computer program in the memory to implement the steps of the image output method for navigation of any one of claims 1 to 6.

10. A vehicle, characterized by comprising a display and a controller configured to perform the steps of the image output method for navigation according to any one of claims 1 to 6.

Technical Field

The present disclosure relates to the field of driving assistance, and in particular, to an image output method and apparatus, medium, device, and vehicle for navigation.

Background

Most existing three-dimensional navigation electronic maps are produced by superimposing an orthophoto map on three-dimensional landmark building models with a stereoscopic visual effect, placing a three-dimensional simulated navigation view of the building models on a two-dimensional navigation electronic map. However, such maps do not express key road condition elements (such as nearby vehicles, pedestrians at the roadside, and unexpected road events) in real time, while in actual driving, drivers care more about the influence of road conditions on driving.

The amount of information in conventional three-dimensional navigation is limited, and a vision system can compensate for this shortcoming. However, a vision processing system extracts three-dimensional world information from two-dimensional images, which involves an extremely large data volume and places high demands on hardware processing performance. Moreover, cameras are strongly affected by the environment: imaging quality degrades in rain, fog, at night, and in other adverse conditions, so that objects around the vehicle may be misjudged or not identified at all, posing a serious hidden danger to safe driving.

Disclosure of Invention

An object of the present disclosure is to provide a reliable and practical image output method and apparatus for navigation, a medium, a device, and a vehicle.

In order to achieve the above object, the present disclosure provides an image output method for navigation, the method including:

acquiring positioning information of a predetermined object, and determining an environment image of the predetermined object according to the positioning information;

acquiring the distance between the predetermined object and a measured object around the predetermined object;

determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time;

and marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object.

Optionally, the correspondence is obtained by:

acquiring the distances between the measured object and the predetermined object at a plurality of consecutive time points to obtain a plurality of data sets, wherein each data set comprises a time point and the corresponding distance;

and determining the correspondence from the plurality of data sets.

Optionally, determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time comprises:

inputting the correspondence of the distance between the predetermined object and the measured object over time into a preset neural network model to obtain the type of the measured object.

Optionally, the distance between the predetermined object and the measured object is detected by a plurality of ultrasonic sensors, and correspondingly, marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object, comprises:

determining the position of the measured object according to the distances between the predetermined object and the measured object detected by the plurality of ultrasonic sensors and the positions of the plurality of ultrasonic sensors;

and marking the measured object in the environment image according to the position and the type of the measured object, and outputting the environment image marked with the measured object.

Optionally, the measured object is marked in the form of an icon, and different types correspond to different icons.

Optionally, the predetermined object is a vehicle or a drone, and the types include vehicles, bicycles, and pedestrians.

The present disclosure also provides an image output apparatus for navigation, the apparatus including:

a first determining module, configured to acquire positioning information of a predetermined object and determine an environment image of the predetermined object according to the positioning information;

an acquiring module, configured to acquire the distance between the predetermined object and a measured object around the predetermined object;

a second determining module, configured to determine the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time;

and an output module, configured to mark the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and to output the environment image marked with the measured object.

The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described image output method for navigation provided by the present disclosure.

The present disclosure also provides an electronic device, comprising:

a memory having a computer program stored thereon;

a processor for executing the computer program in the memory to implement the steps of the image output method for navigation provided by the present disclosure.

The present disclosure also provides a vehicle including a display and a controller for performing the steps of the above-described image output method for navigation provided by the present disclosure.

Through the above technical solution, the distance between the predetermined object and the measured object is acquired, the measured object is classified according to how that distance varies over time, and the measured object is then marked, according to its type, in the environment image around the predetermined object. The time variation of the measured object's distance is thereby embodied in the environment image. In this way, moving environmental information beyond fixed buildings can be output in real time in the navigation electronic map, providing the driver with accurate driving guidance and thereby reducing driving accidents.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows.

Drawings

The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:

FIG. 1 is a flow diagram of an image output method for navigation provided by an exemplary embodiment;

FIG. 2 is a distance versus time graph of a bicycle as the measured object provided by an exemplary embodiment;

FIG. 3 is a distance versus time graph of a pedestrian as the measured object provided by an exemplary embodiment;

FIG. 4 is a schematic illustration of an environmental image provided by an exemplary embodiment;

FIG. 5 is a schematic illustration of an environmental image marked with pedestrians and vehicles provided by an exemplary embodiment;

FIG. 6 is a block diagram of an image output device for navigation provided by an exemplary embodiment;

FIG. 7 is a block diagram of an electronic device, shown in an example embodiment.

Detailed Description

The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.

The solution of the present disclosure is applicable to image output by electronic navigation equipment in the field of assisted driving for vehicles or drones. Electronic navigation image output in a vehicle is described in detail below as an example.

Fig. 1 is a flowchart of an image output method for navigation according to an exemplary embodiment. As shown in fig. 1, the image output method for navigation may include the following steps.

Step S11, acquiring positioning information of the predetermined object, and determining an environment image of the predetermined object according to the positioning information.

Step S12, acquiring the distance between the predetermined object and a measured object around the predetermined object.

Step S13, determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time.

Step S14, marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object.

The predetermined object may be a vehicle or a drone, and the types may include a vehicle, a bicycle, and a pedestrian. Hereinafter, a vehicle is taken as an example, i.e., the predetermined object is a vehicle. The vehicle can be positioned by a GPS positioning device mounted on it to obtain positioning information, from which an environment image can be produced within a three-dimensional landmark building model with a stereoscopic visual effect. The production of such an environment image is well known to those skilled in the art and is not described in detail here.

One or more ultrasonic sensors may be installed around the periphery of the vehicle to detect, by the principle of ultrasonic ranging, the distance between the vehicle and objects around it (the measured objects). When a plurality of ultrasonic sensors are provided, the position of the measured object relative to the vehicle can also be calculated.

From the distances detected by the ultrasonic sensor together with clock data, the correspondence of the distance between the measured object and the vehicle over a period of time can be obtained. Specifically, the correspondence may be obtained as follows: the distances between the measured object and the predetermined object are acquired at a plurality of consecutive time points to obtain a plurality of data sets, each comprising a time point and the corresponding distance, and the correspondence is determined from these data sets. That is, the vehicle tracks the distance of the measured object and generates a series of data sets, each containing a time point and the distance detected at that time point. After tracking for a period of time, a smooth curve of distance over time, i.e. the correspondence, can be obtained from the accumulated data sets, for example by fitting.
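
As a minimal sketch of how a correspondence might be determined from such data sets, the (time point, distance) pairs can be fitted with a least-squares line; the sample values and the choice of a linear fit are illustrative assumptions, not part of the disclosure.

```python
def fit_distance_curve(samples):
    """Least-squares linear fit of distance vs. time over the (time point,
    distance) data sets; returns the rate of change, the intercept, and a
    callable giving the fitted distance at any time t."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_d = sum(d for _, d in samples) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    rate = num / den                      # cm/s; negative means approaching
    intercept = mean_d - rate * mean_t
    return rate, intercept, lambda t: rate * t + intercept

# Illustrative data sets: one reading every 0.5 s from an approaching object.
samples = [(0.0, 400.0), (0.5, 380.0), (1.0, 355.0), (1.5, 335.0), (2.0, 310.0)]
rate, intercept, distance_at = fit_distance_curve(samples)
```

A curved (e.g. polynomial or spline) fit could be substituted without changing the surrounding logic; the fitted curve is what the later classification step consumes.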

Because the way the measured object's distance changes over time reflects the degree to which it influences the vehicle's driving, marking measured objects in real time in the navigation image output by the vehicle navigation provides more comprehensive road condition information.

The pattern of the correspondence of the distance between each type of measured object and the vehicle over time during driving can be determined in advance through testing. In practical application, measured objects are classified according to these predetermined patterns. For example, if the acquired correspondence of distance over time matches the pattern of a pedestrian, the type of the measured object is determined to be a pedestrian.

The measured object can be marked in the environment image according to its distance from the vehicle. For example, the greater the distance between the measured object and the vehicle, the higher in the image the measured object is marked, intuitively representing that it is farther from the vehicle.
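
One minimal way to realize this "farther is marked higher in the image" rule is a linear mapping from distance to a vertical pixel coordinate; the maximum range and image height below are hypothetical values, not taken from the disclosure.

```python
def distance_to_y(distance_cm, max_range_cm=500.0, image_height_px=480):
    """Map a measured distance to a vertical pixel row in the environment
    image: nearer objects land lower (larger y), farther objects higher."""
    ratio = min(max(distance_cm / max_range_cm, 0.0), 1.0)  # clamp to [0, 1]
    return int((1.0 - ratio) * (image_height_px - 1))

# A close object (50 cm) is drawn near the bottom of the image,
# a far one (450 cm) near the top.
```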

Meanwhile, the mark can reflect the type of the measured object, i.e. the mark contains information indicating the type of the measured object; for example, the number or name of the type to which the measured object belongs may be marked. The position and type information of the measured object is thereby spatially fused with the positioning information of the vehicle, so that the driver obtains real-time driving road condition information.

For example, the distance between the measured object and the vehicle may be detected by an ultrasonic sensor provided on the vehicle. The ultrasonic sensor first emits an ultrasonic pulse; when the pulse strikes an object it is reflected back as an echo. By detecting the emission and return of the pulse, the detected distance can be calculated:

s = ct/2

where s is the distance between the ultrasonic sensor and the measured object, c is the speed of sound, and t is the time from the emission of the ultrasonic pulse to the reception of its echo. Detection data of the ultrasonic sensor are obtained on this principle; a single-chip microcomputer system identifies the type and distance of the measured object from the distance-time data sets and outputs this information to a main control computer. GPS navigation obtains the position and street information of the vehicle and outputs it to the main control computer, which fuses the two parts of information and outputs a digital image signal. A display system then shows an environment image that changes in real time around the vehicle and is marked with the measured objects, so that the driver can issue correct driving commands to the vehicle in time.
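
The relation s = ct/2 can be sketched directly in code; the speed-of-sound constant is an assumed value for air at roughly 20 °C, and t is the round-trip time because the pulse travels to the object and back.

```python
SPEED_OF_SOUND_CM_PER_S = 34_300.0  # assumed: ~343 m/s in air at 20 degrees C

def echo_to_distance_cm(round_trip_s):
    """Distance from the ultrasonic time of flight: s = c * t / 2,
    halved because t covers the pulse's trip out to the object and back."""
    return SPEED_OF_SOUND_CM_PER_S * round_trip_s / 2.0

# A 10 ms round trip corresponds to an object about 171.5 cm away.
```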

Compared with acquiring road condition information with a camera, ultrasonic ranging involves a small data volume, fast processing, and a low error rate, and the navigation image is not affected by environmental conditions such as weather.

Through the above technical solution, the distance between the predetermined object and the measured object is acquired, the measured object is classified according to how that distance varies over time, and the measured object is then marked, according to its type, in the environment image around the predetermined object. The time variation of the measured object's distance is thereby embodied in the environment image. In this way, moving environmental information beyond fixed buildings can be output in real time in the navigation electronic map, providing the driver with accurate driving guidance and thereby reducing driving accidents.

In another embodiment, on the basis of fig. 1, the step of determining the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time (step S13) may include:

inputting the correspondence of the distance between the predetermined object and the measured object over time into a preset neural network model to obtain the type of the measured object.

Ultrasonic data sets collected with different types of objects as the measured object can be used to train a machine-learning-based neural network for recognition; through repeated learning, the type of an object can be judged accurately. For example, more than 1000 sets of ultrasonic data may be collected in advance for samples of each type of measured object, and these data sets, which reflect the different types, may be classified and trained with a deep learning neural network. The neural network extracts and selects effective features from the ultrasonic data sets of the measured objects to obtain a correct neural network model.

During identification, the ultrasonic data of the measured object can be input directly, and the network outputs the type to which the measured object belongs.
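
As a hedged stand-in for the trained neural network model (whose architecture the disclosure does not specify), the following nearest-template classifier illustrates the core idea of typing an object from its distance-time curve; the template curves are hypothetical values sampled at common time points.

```python
def classify_measured_object(distance_seq, templates):
    """Assign the type whose template distance-time sequence is closest
    (by sum of squared differences) to the measured sequence. A simple
    stand-in for the trained neural network model described above."""
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: sse(distance_seq, templates[label]))

# Hypothetical template curves (cm) sampled at the same five time points.
templates = {
    "pedestrian": [400, 395, 390, 385, 380],   # slow, steady approach
    "bicycle":    [400, 380, 360, 340, 320],   # faster approach
    "vehicle":    [400, 360, 320, 280, 240],   # fastest approach
}
```

A real implementation would replace the template comparison with a forward pass through the trained model, but the interface, a distance sequence in and a type label out, stays the same.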

FIG. 2 is a distance versus time graph of a bicycle as the measured object, according to an exemplary embodiment. FIG. 3 is a distance versus time graph of a pedestrian as the measured object, provided by an exemplary embodiment. In fig. 2 and 3, the abscissa is time (t/s) and the ordinate is the distance (cm) measured by the ultrasonic sensor. In the embodiment of fig. 2 and 3 there are a plurality of ultrasonic sensors, and each curve represents the data detected by one of them. The data in fig. 2 and 3 may be obtained experimentally in advance for training the neural network model.

In this embodiment, the type of the measured object can be identified by applying the pre-trained neural network model; no complex image analysis algorithm is needed, the method is simple, and identification is fast.

In the above embodiment, the marked measured object carries distance information, and when there are a plurality of ultrasonic sensors, the position of the measured object relative to the vehicle can also be calculated. In yet another embodiment, the distance between the predetermined object and the measured object is detected by a plurality of ultrasonic sensors. Correspondingly, the step of marking the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and outputting the environment image marked with the measured object (step S14), may include:

determining the position of the measured object according to the distances between the predetermined object and the measured object detected by the plurality of ultrasonic sensors and the positions of the plurality of ultrasonic sensors; and marking the measured object in the environment image according to the position and the type of the measured object, and outputting the environment image marked with the measured object.

That is, the marked measured object carries not only distance information but also position information; for example, the measured object is a pedestrian located on the right side of the vehicle. Calculating the specific position of the target from the distances detected by a plurality of distance sensors and the positions of those sensors is well known to those skilled in the art and is not described here.
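
The position calculation alluded to above can be sketched as planar trilateration: with three sensors at known positions and their measured distances, the circle equations are linearized by subtraction and solved as a 2x2 linear system. The sensor coordinates in the usage note are hypothetical.

```python
def trilaterate(sensors, distances):
    """Locate the measured object in the vehicle's plane from three
    ultrasonic sensors at known (x, y) positions and their measured
    distances, by subtracting circle equations to get two linear ones."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    d1, d2, d3 = distances
    # Two linear equations a*x + b*y = e obtained by subtracting circles.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    e1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    e2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1       # zero if the sensors are collinear
    return (e1 * b2 - e2 * b1) / det, (a1 * e2 - a2 * e1) / det
```

For example, with sensors at (0, 0), (100, 0) and (0, 100) cm on the vehicle body, consistent distance readings pin the object to a single point; noisy readings would call for a least-squares variant over more sensors.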

After the position of the measured object is determined, it can be marked at the appropriate position in the environment image, where the position of the mark represents the actual relative position of the measured object. The driver can thus grasp the real-time road conditions more intuitively from the output image.

In addition, the measured object can be marked in the form of an icon, where different types correspond to different icons.

That is, in addition to marking with numbers and names as described above, marking may be performed with icons. The icon chosen may be one that visually represents the type of the measured object: for example, the pedestrian type corresponds to a pedestrian icon and the vehicle type to a vehicle icon. When the measured object is a vehicle, an icon of the vehicle's rear view can be applied; when the measured object detected by an ultrasonic sensor arranged on the right side of the vehicle is a vehicle, an icon of the vehicle's left view can be applied.
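
The icon choice by type and viewing direction described above can be sketched as a simple lookup table; the icon file names and side labels are hypothetical, not taken from the disclosure.

```python
# Hypothetical icon assets keyed by (object type, side of the vehicle the
# detecting sensor faces): a rear sensor sees a vehicle's rear, a sensor on
# the right side sees the neighbouring vehicle's left flank, and so on.
ICONS = {
    ("vehicle", "rear"):   "car_rear_view.png",
    ("vehicle", "right"):  "car_left_view.png",
    ("bicycle", "rear"):   "bicycle_rear_view.png",
    ("pedestrian", "any"): "pedestrian.png",
}

def icon_for(object_type, sensor_side):
    """Pick the icon matching the measured object's type and the viewing
    direction implied by which sensor detected it; fall back to a
    side-independent icon when no side-specific one exists."""
    return ICONS.get((object_type, sensor_side)) or ICONS.get((object_type, "any"))
```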

FIG. 4 is a schematic illustration of an environment image provided by an exemplary embodiment. As shown in fig. 4, when the positioning information indicates that the vehicle is currently traveling on a straight national road, the environment image may be a preset image of a national road with a stereoscopic visual effect. When measured objects are marked in the environment image according to the method of the present disclosure described above, the icons of the detected measured objects are added to the environment image of fig. 4 in the finally output navigation image. FIG. 5 is a schematic illustration of an environment image marked with pedestrians and vehicles provided by an exemplary embodiment. As shown in fig. 5, in this embodiment two measured objects are detected, one of the vehicle type and the other of the bicycle type. The two measured objects are marked in the environment image in the form of vehicle and bicycle icons, respectively, and output together with the environment image.

In this embodiment, the measured objects are marked in the environment image with graphics matched to their types, making the navigation image more intuitive and giving the driver a feeling of being on the scene. Driving according to the navigation therefore becomes easier and less fatiguing, and the accident rate is reduced.

It will be appreciated by those skilled in the art that the foregoing describes the inventive concept of the present disclosure, which can be applied in various scenarios. For example, the multi-radar detection and recognition function can be combined with a voice module that announces the type and position of a detected object, for road condition recognition and guidance for bicycles, motorcycles, and blind people, so that a driver or a blind person can perceive and predict in advance and be assisted in avoiding obstacles; this is particularly suitable for the image output of electronic navigation equipment in the fields of autonomous driving and assisted driving.

As another example, the output image of the present disclosure may be sent to a mobile terminal (e.g., a mobile phone or smart bracelet) via Bluetooth or another wireless method, so that the terminal's display can show, in real time, the environmental information around the device carrying the ultrasonic sensors. For example, the environment image marked with measured objects shown on the in-vehicle display can be sent over the Internet of Vehicles to a mobile phone bound to the vehicle, letting a user remotely watch the real-time road conditions around the vehicle while it is driving.

If the solution is combined with an imaging sensor, it can also be used for rapid detection of geographic morphology to obtain more detailed object type and three-dimensional information. For example, for detection and reconnaissance by a drone operating in bad weather or at night, applying this image output method avoids the influence of adverse weather and environment.

Fig. 6 is a block diagram of an image output apparatus for navigation according to an exemplary embodiment. As shown in fig. 6, the image output apparatus 10 for navigation may include a first determining module 11, an acquiring module 12, a second determining module 13, and an output module 14.

The first determining module 11 is configured to acquire positioning information of a predetermined object and determine an environment image of the predetermined object according to the positioning information.

The acquiring module 12 is configured to acquire the distance between the predetermined object and a measured object around the predetermined object.

The second determining module 13 is configured to determine the type of the measured object according to the correspondence of the distance between the predetermined object and the measured object over time.

The output module 14 is configured to mark the measured object in the environment image according to the type of the measured object and the current distance between the predetermined object and the measured object, and to output the environment image marked with the measured object.

Optionally, the correspondence is obtained by:

acquiring the distances between the measured object and the predetermined object at a plurality of consecutive time points to obtain a plurality of data sets, wherein each data set comprises a time point and the corresponding distance; and determining the correspondence from the plurality of data sets.

Optionally, the second determination module 13 may include a first determination sub-module.

The first determining sub-module is configured to input the correspondence relationship, describing how the distance between the predetermined object and the object to be measured changes over time, into a preset neural network model to obtain the type of the object to be measured.
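The patent leaves the network architecture unspecified; as a stand-in that illustrates the underlying principle, the sketch below classifies the object from the slope of the distance-over-time series instead of a neural network. It assumes (hypothetically) that the object lies roughly ahead of the predetermined object and that the speed thresholds shown separate pedestrians, bicycles, and vehicles:

```python
def classify_by_relative_speed(series, own_speed_mps):
    """Illustrative stand-in for the preset neural network classifier.

    `series` is a list of (time_point, distance) pairs; `own_speed_mps` is
    the predetermined object's own speed. The object's absolute speed is
    estimated from how the distance changes over time, then bucketed into
    a type. The thresholds are illustrative assumptions, not from the
    patent, and the geometry assumes the object is roughly ahead.
    """
    (t0, d0), (t1, d1) = series[0], series[-1]
    closing_speed = (d0 - d1) / (t1 - t0)  # m/s, positive when approaching
    object_speed = abs(own_speed_mps - closing_speed)
    if object_speed < 2.5:
        return "pedestrian"
    if object_speed < 8.0:
        return "bicycle"
    return "vehicle"
```

A trained neural network would replace the hand-set thresholds with a mapping learned from labeled distance-over-time series.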

Optionally, the distance between the predetermined object and the object to be measured is detected by a plurality of ultrasonic sensors, and correspondingly, the output module 14 includes a second determining sub-module and a first output sub-module.

The second determining sub-module is configured to determine the position of the object to be measured according to the distances between the predetermined object and the object to be measured detected by the plurality of ultrasonic sensors and the positions of the plurality of ultrasonic sensors.

The first output sub-module is configured to mark the object to be measured in the environment image according to the position and the type of the object to be measured, and to output the environment image marked with the object to be measured.
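For two ultrasonic sensors mounted at known points on the front of the predetermined object, the position determination reduces to intersecting two range circles; the sketch below is a minimal two-sensor version under that assumed geometry (with more sensors, the position would instead be fitted by, e.g., least squares):

```python
import math

def locate_object(s1, r1, s2, r2):
    """Estimate the object's (x, y) position in the vehicle frame.

    s1 = (x1, 0) and s2 = (x2, 0) are the known mounting positions of two
    ultrasonic sensors on the bumper line; r1 and r2 are the distances they
    each detect to the object. Intersecting the two range circles
    (x - xi)^2 + y^2 = ri^2 and taking the solution ahead of the bumper
    (y >= 0) gives the object's position.
    """
    x1, x2 = s1[0], s2[0]
    x = (r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2) / (2.0 * (x2 - x1))
    y_sq = r1 ** 2 - (x - x1) ** 2
    if y_sq < 0:
        raise ValueError("ranges are inconsistent with the sensor geometry")
    return x, math.sqrt(y_sq)
```

With the position in the vehicle frame known, the first output sub-module only needs to project it into the environment image.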

Optionally, the object to be measured is marked in the form of an icon, wherein different types correspond to different icons.
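A minimal illustration of the icon marking, with a toy character grid standing in for the environment image; the particular icon glyphs are assumptions, since the patent only requires that different types correspond to different icons:

```python
# Illustrative type-to-icon mapping; the patent does not specify the glyphs.
TYPE_ICONS = {"vehicle": "C", "bicycle": "B", "pedestrian": "P"}

def mark_object(env_image_grid, position_cell, obj_type):
    """Place the icon for `obj_type` at `position_cell` (row, col) in a
    toy grid standing in for the environment image."""
    row, col = position_cell
    env_image_grid[row][col] = TYPE_ICONS.get(obj_type, "?")
    return env_image_grid
```

In a real implementation the grid cell would be a pixel position obtained by projecting the measured position into the environment image, and the icon a rendered sprite.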

Optionally, the predetermined object is a vehicle or an unmanned aerial vehicle, and the types include vehicles, bicycles, and pedestrians.

With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

By the above technical solution, the distance between the predetermined object and the object to be measured is obtained, the object to be measured is classified according to the correspondence relationship describing how that distance changes over time, and the object to be measured is finally marked in the environment image around the predetermined object according to its type. In effect, the way the distance to the object to be measured changes over time is embodied in the environment image. In this way, moving environmental information beyond fixed buildings can be output in real time in the navigation electronic map, providing the driver with correct driving guidance and thereby reducing the occurrence of driving accidents.

The present disclosure also provides an electronic device comprising a memory and a processor.

The memory has a computer program stored thereon; the processor is used for executing the computer program in the memory to realize the steps of the above method provided by the present disclosure.

Fig. 7 is a block diagram of an electronic device 700, shown in an exemplary embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.

The processor 701 is configured to control the overall operation of the electronic device 700, so as to complete all or part of the steps in the image output method described above. The memory 702 is configured to store various types of data to support operation at the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, for example contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.

The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.

In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the image output method described above.

In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the image output method described above. For example, the computer readable storage medium may be the memory 702 described above including program instructions that are executable by the processor 701 of the electronic device 700 to perform the image output method described above.

The present disclosure also provides a vehicle including a display and a controller configured to perform the steps of the above image output method for navigation provided by the present disclosure.

The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.

It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.

In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.
