Vehicle and control method for controlling image on vehicle-mounted combination instrument panel


Note: This technology, "Vehicle and control method for controlling image on vehicle-mounted combination instrument panel", was designed and created by 金成云 on 2018-11-28. Abstract: The present invention relates to a vehicle and a control method for controlling an image on a vehicle-mounted cluster panel. The vehicle includes: an image sensor configured to acquire a front image; a navigation system configured to provide object information indicating information about one or more objects in front of the vehicle; a controller configured to generate a composite image of the acquired front image having the object information when the acquired front image includes at least one of the one or more objects; and a cluster panel configured to output the generated composite image.

1. A vehicle, comprising:

an image sensor configured to acquire a front image;

a navigation system configured to provide object information indicating information about one or more objects in front of the vehicle;

a controller configured to generate a composite image of the acquired front image having the object information when the acquired front image includes at least one object of the one or more objects; and

a cluster panel configured to output the generated composite image.

2. The vehicle according to claim 1, wherein

the object information includes at least one of a name of a building, an address of the building, usage information of the building, or information on a business in the building.

3. The vehicle according to claim 1, wherein

the controller modifies at least a portion of the acquired front image so as to correspond to the shape of the cluster panel.

4. The vehicle according to claim 1, wherein

the controller converts the acquired front image from an actual image into a virtual image.

5. The vehicle according to claim 3, wherein

the cluster panel outputs at least one of an actual image and a virtual image according to an input of a driver.

6. The vehicle according to claim 1, wherein

the controller generates the composite image by superimposing the object information on the one or more objects in the front image.

7. The vehicle according to claim 1, wherein

the controller generates the composite image having indicators indicating the object information in regions adjacent to the one or more objects, respectively, on the cluster panel.

8. The vehicle according to claim 1, wherein

the image sensor adjusts an acquisition period of the front image according to an input of the driver.

9. The vehicle according to claim 7, wherein

the controller generates an image in which object information is given to at least one object included in a first front image and a second front image, wherein the second front image is acquired after the first front image is acquired,

the vehicle further includes a storage device configured to store the first front image and the second front image.

10. The vehicle according to claim 8, wherein

the cluster panel adjusts a speed at which the first front image is changed into the second front image to be proportional to a traveling speed of the vehicle.

11. The vehicle according to claim 1, wherein

the controller selects a type of object for which the object information is displayed according to an input of the driver, and

the cluster panel outputs a composite image displaying the selected type of object.

12. A control method of a cluster panel of a vehicle, comprising the steps of:

acquiring a front image of the vehicle through an image sensor;

receiving, by a controller, object information indicating information on one or more objects located in front of the vehicle;

generating, by the controller, a composite image of the acquired front image having the object information when at least one object of the one or more objects is included in the acquired front image; and

outputting, by the controller, the generated composite image on the cluster panel.

13. The control method according to claim 12, wherein

the acquiring step includes a step of modifying at least a portion of the acquired front image so as to correspond to the shape of the cluster panel.

14. The control method according to claim 12, wherein

the acquiring step includes a step of converting the acquired front image from an actual image into a virtual image.

15. The control method according to claim 14, wherein

the outputting step includes the step of outputting at least one of the actual image and the virtual image in accordance with an input of the driver.

16. The control method according to claim 12, wherein

the generating step includes the step of generating a composite image having object information superimposed on one or more objects.

17. The control method according to claim 12, wherein

the generating step includes the step of generating a composite image having indicators indicating the object information in regions adjacent to the one or more objects, respectively, on the cluster panel.

18. The control method according to claim 12, wherein

the acquiring step includes the step of adjusting an acquisition period of the front image in accordance with an input by the driver.

19. The control method according to claim 18, wherein

the adjusting step includes the steps of:

generating an image in which object information is given to at least one object contained in a first front image and a second front image acquired after acquiring the first front image;

the first front image and the second front image are stored.

20. The control method according to claim 19, wherein

the outputting step includes the step of adjusting a speed at which the first front image is changed into the second front image to be proportional to a traveling speed of the vehicle.

21. The control method according to claim 12, wherein

the outputting step includes the steps of:

selecting a type of object for which the object information is displayed according to an input of the driver; and

outputting a composite image displaying the selected type of object.

Technical Field

The present invention relates to a vehicle capable of controlling an image output on a cluster panel and a control method of the vehicle.

Background

Generally, an instrument cluster (instrument panel) provides information on the condition of a vehicle to a driver through various display devices.

Recently, with the spread of vehicles having Advanced Driver Assistance Systems (ADAS), instrument cluster panels provide various information in the form of images using data acquired by radar or a camera mounted on the vehicle.

For instrument cluster panels, electronic systems implemented with a liquid crystal display (LCD) are widely used. For example, an LCD-type cluster panel displays a warning lamp, a turn signal indicator, a speedometer, a tachometer, and a temperature indicator by using various screen configurations.

In addition to the above-described screen configuration, a background image may be displayed on the cluster screen. In general, however, since the background image is set as a fixed image, it gives the driver nothing new to look at and limits the effective delivery of various kinds of information.

Disclosure of Invention

An aspect of the present invention provides a cluster panel capable of outputting an image in which information on an object is displayed on an actual image acquired by recording the scene in front of a vehicle.

Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.

According to one aspect of the present invention, a vehicle includes: an image sensor configured to acquire a front image; a navigation system configured to provide object information indicating information about one or more objects in front of a vehicle; a controller configured to generate a composite image of the acquired front image having the object information when the acquired front image includes at least one object of the one or more objects; and a cluster panel configured to output the generated composite image.

The object information may include at least one of a name of a building, a building address, a use of the building, and merchant name information.

The controller may generate a mapped image in which at least one region of the acquired front image is displayed on an output region of the cluster panel.

The controller may convert the acquired front image from an actual image to a virtual image.

The cluster panel may output at least one of the actual image and the virtual image according to an input of the driver.

The controller may generate a front image having the object information superimposed on the object.

The controller may generate a front image in which an indicator indicating the object information is marked to an area adjacent to the object.

The image sensor may adjust the acquisition period of the front image according to the input of the driver.

The controller may generate an image in which object information is given to at least one object included in the first front image and the second front image, wherein the second front image may be acquired after a time point at which the first front image is acquired.

The vehicle may further include a storage device configured to store the first front image and the second front image.

The cluster panel may adjust a speed at which the first front image is changed into the second front image to be proportional to a traveling speed of the vehicle.

The controller may select a type of object for which the object information is displayed according to an input of the driver, and the cluster panel may output a front image displaying the selected type of object.

According to another aspect of the present invention, a control method of a cluster panel of a vehicle includes: acquiring a front image of the vehicle through an image sensor; receiving, by a controller, object information indicating information on one or more objects located in front of the vehicle; generating, by the controller, a composite image of the acquired front image having the object information when at least one object of the one or more objects is included in the acquired front image; and outputting, by the controller, a front image displaying the object information on the cluster panel. Acquiring the front image of the vehicle may include: a mapped image is generated in which at least one region of the acquired front image is displayed on an output region of the cluster panel.

Acquiring the front image of the vehicle may include: the acquired front image is converted from an actual image to a virtual image.

The outputting of the front image displaying the object information may include: at least one of the actual image and the virtual image is output according to an input of the driver.

Generating the image in which the object information is given to the object may include: a front image is generated in which object information is superimposed on an object.

Generating the image in which the object information is given to the object may include: a front image is generated in which an indicator indicating the object information is marked to an area adjacent to the object.

Acquiring the front image of the vehicle may include: the acquisition period of the front image is adjusted according to the input of the driver.

Adjusting the acquisition period of the front image according to the input of the driver may include: an image is generated in which object information is given to at least one object contained in a first front image and a second front image, which is acquired after a time point at which the first front image is acquired, and the first front image and the second front image are stored.

The outputting of the front image displaying the object information may include: the speed at which the first front image is changed into the second front image is adjusted to be proportional to the traveling speed of the vehicle.

The outputting of the front image displaying the object information may include: a type of object for which the object information is displayed is selected according to an input of the driver, and a front image displaying the selected object type is output.

Drawings

These and/or other aspects of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a schematic illustration of an interior of a vehicle according to an exemplary embodiment of the present invention;

FIG. 2 is a control block diagram of a vehicle according to an exemplary embodiment of the present invention;

FIG. 3 is a schematic diagram of an output screen of a cluster panel according to an exemplary embodiment of the present invention;

FIG. 4 is a schematic illustration of an output screen of a cluster panel according to another exemplary embodiment of the present invention;

FIG. 5 is a schematic diagram illustrating the generation of a virtual image according to an exemplary embodiment of the present invention;

FIG. 6 is a schematic view illustrating an object on which object information is superimposed according to an exemplary embodiment of the present invention;

FIG. 7 is a diagram illustrating an object marked with an indicator according to another exemplary embodiment of the present invention; and

FIG. 8 is a flowchart of a control method of a vehicle according to an exemplary embodiment of the present invention.

Detailed Description

The embodiments described in the present invention and the configurations shown in the drawings are merely examples of embodiments of the present invention, and various modifications capable of replacing the embodiments and drawings of the present invention may exist at the time of filing this application.

Further, the same reference numerals or symbols shown in the drawings of the present invention refer to elements or components that perform substantially the same function.

The terminology used herein is for the purpose of describing embodiments and is not intended to be limiting and/or restrictive of the invention. The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In the present invention, the terms "comprises," "comprising," "has," "having," and the like, are used to specify the presence of stated features, values, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, or combinations thereof.

It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and a second element could be termed a first element, without departing from the scope of the present invention. The term "and/or" includes a plurality of combinations of related terms or any one of a plurality of related terms.

Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings.

Fig. 1 is a schematic view of an interior of a vehicle according to an embodiment.

Referring to fig. 1, the vehicle interior 2 is provided with a driver seat 201, a front passenger seat 202, an instrument panel 210, a steering wheel 220, and a cluster panel 140.

Further, the vehicle interior 2 may include: an accelerator pedal 250 that is pressed by the driver to accelerate according to the driver's intention; and a brake pedal 260 that is pressed by the driver to brake according to the driver's intention.

The instrument panel 210 refers to a panel that divides the interior of the vehicle 1 from the engine compartment and provides space for mounting the components required for various operations. The instrument panel 210 is disposed in front of the driver seat 201 and the front passenger seat 202. The instrument panel 210 may include an upper panel, a center dash panel 211, and a gearbox 215.

The display portion 322 may be mounted on the center dash panel of the instrument panel 210. The display portion 322 may provide various information to the driver or passenger of the vehicle 1 in the form of images. For example, the display portion 322 provides various visual information (such as a map, weather, news, and various moving or still images), as well as various information related to the state or operation of the vehicle 1. In particular, the display portion 322 may provide information related to the air conditioner. Further, the display portion 322 may be implemented using a commonly used navigation system (not shown).

The display portion 322 may be provided in a case integrally formed with the instrument panel 210 such that only the display panel is exposed to the outside. In addition, the display portion 322 may be installed at the middle or lower end of the center dash panel 211. Alternatively, the display portion 322 may be mounted on an inner surface of a windshield (not shown) or an upper surface of the instrument panel 210 by using a separate support member (not shown). In addition, the display portion 322 may be installed at various positions that the designer may consider.

Various devices such as a processor, a communication device, a global positioning system, and a storage device may be installed inside the instrument panel 210. The processor installed in the vehicle may be configured to control various electronic devices installed in the vehicle 1, and to perform the functions of the controller. The above-described apparatus may be implemented using various components, such as a semiconductor chip, a switch, an integrated circuit, a resistor, a volatile or non-volatile memory (not shown), or a printed circuit board.

The center dash panel 211 may be installed in the middle of the instrument panel 210, and may include input devices 330a to 330c for receiving various instructions related to the vehicle. The input devices 330a to 330c may be implemented using physical buttons, knobs, touch pads, touch screens, stick-type operating devices, or trackballs. The driver can control various operations of the vehicle 1 by operating the input devices 330a to 330c.

The gearbox 215 is disposed below the center dash panel 211, between the driver seat 201 and the front passenger seat 202. The gearbox 215 may be provided with a gearshift lever 216, a storage box 217, and various input devices 330d to 330e. The input devices 330d to 330e may be implemented using physical buttons, knobs, touch pads, touch screens, stick-type operating devices, or trackballs.

When the driver selects the navigation function, the input devices 330a to 330f may receive information about a destination and transmit the input destination information to the navigation system. Further, the driver may input the position where the vehicle is currently located as one of a first position and a second position through an additional input to the input device 330. The first position and the second position input through the input device 330 are positions at which the automatic locking function or the automatic unlocking function performed in a conventional manner by the smart key system is not performed, the automatic locking function being performed when the driver moves away from the vehicle and the automatic unlocking function being performed when the driver approaches the vehicle. Thus, the controller may be configured such that the control process according to an embodiment operates by using geographic information about the current location of the vehicle without receiving additional location information from the navigation system.

The steering wheel 220 and the cluster panel 140 are disposed on the instrument panel 210 in the direction of the driver's seat.

The steering wheel 220 is configured to be rotatable in a predetermined direction according to the operation of the driver. Since the front or rear wheels of the vehicle 1 turn according to the rotation direction of the steering wheel 220, the vehicle 1 can be steered. A spoke 221 connected to the rotation shaft and a steering handle 222 coupled to the spoke 221 are provided in the steering wheel 220. An input device configured to receive various instructions may be provided in the spoke 221, and the input device may be implemented by a physical button, a knob, a touch pad, a touch screen, a stick-type operating device, or a trackball. The steering handle 222 may have a circular shape for the convenience of the driver, but the shape of the steering handle 222 is not limited thereto. In addition, an input device 330f for a direction indicator may be installed at the rear of the steering wheel 220. While driving the vehicle 1, the driver can input a signal indicating the traveling direction or a change of direction through the direction indicator input device 330f.

The cluster panel 140 is configured to provide the driver with various information related to the vehicle, including the traveling speed, the engine revolutions per minute (RPM), the remaining amount of fuel, the oil temperature, whether the direction indicator is turned on or off, and the traveling distance of the vehicle 1. According to an embodiment, the cluster panel 140 may be implemented using illumination lamps, dials, or a display panel. When the cluster panel 140 is implemented using a display panel, the cluster panel 140 may provide the driver with a wider variety of information, such as fuel consumption and whether various functions provided in the vehicle 1 are being performed, in addition to the above-described information.

The display panel of the cluster panel 140 may be implemented as a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display panel (PDP), an organic light emitting diode (OLED) display, or a cathode ray tube (CRT). The output screen of the cluster panel 140 may be implemented in various shapes. For example, the output screen of the cluster panel 140 may be set so that information is displayed on a rectangular display panel. Alternatively, the output screen of the cluster panel 140 may have an arched shape, thereby enabling the driver to monitor the entire area through the empty space of the steering wheel handle.

The controller 130 may include: at least one storage device 150 storing a program for performing the operations described later; and at least one processor (not shown) for executing the stored program. When a plurality of storage devices 150 and processors are provided, they may be integrated on one chip, or they may be provided at physically separate locations. For example, the controller 130 may be implemented as an electronic control unit (ECU), and the storage device 150 may be implemented as a memory.

Fig. 2 is a control block diagram of a vehicle according to an embodiment. This is merely an example, and it should be understood that components may be added or omitted. Hereinafter, the configuration and operation of the control block diagram according to the embodiment will be described with reference to fig. 2.

According to an embodiment, a vehicle includes: an image sensor 110 that acquires a front image of the vehicle; a navigation system 120 that provides information on objects located in front of the vehicle; a controller 130 that generates an image in which object information is given to an object contained in the front image acquired by the image sensor 110; and a cluster panel 140 that outputs a composite image indicating the object information on the object. The front image represents a scene in front of the vehicle taken in the traveling direction of the vehicle, and an object represents a structure, such as a building or facility, shown in the front image and having associated information. Further, the composite image represents an image in which information about such buildings and facilities is given to the acquired front image.

The image sensor 110 is provided in the vehicle, and is configured to record the scene in front of the vehicle with respect to its traveling direction and to acquire the recorded image. For example, the image sensor 110 may include a camera and a radar. Further, the image sensor 110 may be a black box device. When the image sensor 110 is implemented by a black box device, according to an embodiment, an image to which object information is given may be output by using image data received from the black box device.

Further, the image sensor 110 may adjust the acquisition period of the front image according to the input of the driver. The image sensor 110 may continuously record the front of the vehicle, thereby enabling the cluster panel 140 to output a composite image in real time. However, when the image sensor 110 records continuously and the object information is then given to the objects contained in the continuous front images, the amount of computation for image processing and data processing may increase. Accordingly, the image sensor 110 may acquire the front image discontinuously, at specific time points according to a specific period that varies according to the input of the driver.
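
For illustration only, a minimal sketch of such period-based acquisition is shown below; the `capture_frame` callable, the period value, and the frame count are assumptions rather than elements of the disclosure.

```python
import time

def acquire_front_images(capture_frame, period_s=1.0, num_frames=10):
    """Acquire front images at a driver-selected period instead of continuously.

    capture_frame: callable returning the current front image (assumed placeholder).
    period_s:      acquisition period in seconds, adjustable from the driver's input.
    num_frames:    how many periodic frames to collect in this example.
    """
    frames = []
    for _ in range(num_frames):
        frames.append(capture_frame())  # record one frame per period
        time.sleep(period_s)            # skip intermediate frames to reduce processing load
    return frames
```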

The navigation system 120 is provided in the vehicle, and is configured to provide information related to a specific location to the driver in the form of a map, text, or various symbols. Further, the navigation system 120 may provide information about objects contained in the image acquired by the image sensor 110, where the objects may represent buildings and facilities acquired from the image.

It should be understood that the navigation system 120 need not be installed in the vehicle 1. For example, the navigation system 120 may be a smart device in which a navigation application is installed, or the navigation system 120 may be various devices configured to provide location information and object information via a communication module (not shown) connecting the vehicle to an external device.

The controller 130 receives the object information from the navigation system 120 and gives the object information to the front image provided from the image sensor 110. The controller 130 may transmit the image data to which the object information is given to the cluster panel 140. The image data may include data related to an actual image indicating object information on each object, and data related to a virtual image indicating object information on each object.

The actual image refers to the unchanged actual travel image obtained by recording the scene in front of the vehicle with the image sensor 110. The virtual image refers to an image in which unnecessary image data elements are removed from the actual image data received through the image sensor 110. According to an embodiment, the controller 130 may convert an actual image acquired by the image sensor 110 into a virtual image. Specifically, when the cluster panel 140 cannot implement a specific color due to the characteristics of its display panel, the controller 130 may convert the specific color into another color that can be implemented. In addition, the controller 130 may generate a virtual image that is simplified to be output in a small number of colors, in gray tones, or in a black-and-white mode, so that the kinds of colors implemented on the cluster panel 140 are limited.
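
The disclosure does not specify the conversion algorithm; a rough sketch of one possible simplification, assuming a standard luma conversion and a selectable number of gray levels, could look like this.

```python
import numpy as np

def to_virtual_image(actual_rgb, levels=2):
    """Simplify an actual front image into a 'virtual' image with a limited palette.

    actual_rgb: HxWx3 uint8 array from the image sensor.
    levels:     number of gray levels kept (2 approximates a black-and-white mode).
    """
    # Reduce to a gray tone (standard luma approximation; an assumption here).
    gray = (0.299 * actual_rgb[..., 0]
            + 0.587 * actual_rgb[..., 1]
            + 0.114 * actual_rgb[..., 2]).astype(np.uint8)
    # Quantize so the cluster panel only needs a few displayable colors.
    step = 256 // levels
    quantized = (gray // step) * step
    return np.stack([quantized] * 3, axis=-1)  # keep 3 channels for later compositing steps
```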

According to an embodiment, the controller 130 may generate a mapped image such that at least a portion of the front image acquired by the image sensor 110 is displayed on the output area of the screen of the cluster panel 140. The output area of the cluster panel 140 may be determined by the shape of the display panel that is a component of the cluster panel 140. For example, the output area of the cluster panel 140 may be selected according to the shape of a conventional instrument cluster, and thus may have an arched shape or an oval shape. The cluster panel 140 is not required to output the unchanged image acquired by the image sensor 110, and unnecessary areas need to be removed from the image. Accordingly, the controller 130 may perform a process of correcting the area of the front image acquired by the image sensor 110. A detailed process of correcting the region of the front image will be described with reference to fig. 4 and 5.
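
As a minimal sketch of this correction step, the front image could be masked to the panel's output region; the oval region below is only an assumed example geometry.

```python
import numpy as np

def mask_to_output_area(frame, mask):
    """Keep only the part of the front image that fits the cluster panel's output area.

    frame: HxW or HxWxC image already resized to the panel resolution.
    mask:  HxW boolean array, True inside the (e.g., arched or oval) output region.
    """
    out = np.zeros_like(frame)
    out[mask] = frame[mask]          # pixels outside the output area are blanked
    return out

def elliptical_mask(height, width):
    """Example oval output region covering the panel (an assumed shape)."""
    y, x = np.ogrid[:height, :width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    return ((y - cy) / (height / 2.0)) ** 2 + ((x - cx) / (width / 2.0)) ** 2 <= 1.0
```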

The cluster panel 140 may output the unchanged actual front image acquired by the image sensor 110. Alternatively, the cluster panel 140 may output a composite image displaying the object information by receiving data related to that composite image from the controller 130. The cluster panel 140 may also output a virtual image by receiving data related to a virtual image displaying the object information.

According to an embodiment, the cluster panel 140 may output at least one of the actual image and the virtual image according to the input of the driver. When the driver wants the cluster panel 140 to show a background image simpler than the actual image, the cluster panel 140 may output an image generated by displaying the object information on the virtual image instead of on the actual image.

The composite image output from the cluster panel 140 may depend on the acquisition period of the front image of the image sensor 110, which is adjusted according to the input of the driver. The front images acquired in time series according to this period may include a first front image and a second front image. For example, during the traveling of the vehicle, the front image of the vehicle may be changed from the first front image to the second front image. The cluster panel 140 may output the front image by adjusting the speed at which the first front image is changed into the second front image to be proportional to the traveling speed of the vehicle. Further, the cluster panel 140 may adjust the speed of changing the first front image into the second front image according to a setting input by the driver.
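
One way to realize such a speed-proportional transition, sketched here under the assumption of a simple linear cross-fade and an arbitrary tuning gain, is shown below.

```python
def transition_alpha(elapsed_s, vehicle_speed_kph, gain=0.02):
    """Blend factor for switching from the first front image to the second.

    The transition rate grows with the vehicle speed, so the displayed image
    keeps up with faster driving; gain is an assumed tuning constant.
    """
    rate = gain * vehicle_speed_kph    # transition speed proportional to traveling speed
    return min(1.0, elapsed_s * rate)  # 0.0 shows the first image, 1.0 shows the second

def blend(first_img, second_img, alpha):
    """Linear cross-fade between the two stored front images (NumPy arrays assumed)."""
    return ((1.0 - alpha) * first_img + alpha * second_img).astype(first_img.dtype)
```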

The storage device 150 may store the plurality of front images acquired by the image sensor 110. Accordingly, when the vehicle passes a route on which it previously traveled, the storage device 150 may provide the stored front images to the cluster panel 140 with reference to the position information provided from the navigation system 120, so that the cluster panel 140 outputs the composite image. The composite image may be either an actual image or a virtual image to which the object information is given.

Fig. 3 is a schematic diagram of an output screen of the cluster panel 140 according to an embodiment.

When the image sensor 110 acquires the front image 1100, the controller 130 may modify the front image to correspond to the output area such that the front image is displayed on the output area of the cluster panel 140. Specifically, the controller 130 may align the front image with the edge of the output region 1410 of the cluster panel 140 or remove an unnecessary region from the front image.

As shown in fig. 3, the modified front image is output on the output area 1410 of the cluster panel 140. The front image 1100 may be an actual image on which no image preprocessing has been performed, or a virtual image from which unnecessary image data has been removed and which has then been simplified.

As shown in fig. 4, a tachometer screen, a speedometer screen, and a front image may be simultaneously output on the output screen 1420 of the cluster panel 140. The arrangement of the tachometer screen, the speedometer screen and the front image may be changed according to the settings input by the driver.

Fig. 5 is a schematic diagram illustrating generation of a virtual image according to an embodiment.

The virtual image 1102 represents an image in which unnecessary elements are removed from the actual image 1101 received by the image sensor 110. Specifically, when the cluster panel 140 cannot implement a particular color, that color may be converted into an alternative color. The virtual image 1102 may be generated to be output in a small number of colors, in gray tones, or in a black-and-white mode, so that the kinds of colors implemented on the cluster panel 140 are limited.

According to an embodiment, the controller 130 may convert the front image acquired by the image sensor 110 into a virtual image 1102 and transmit the virtual image 1102 to the cluster panel 140. Therefore, the cluster panel 140 can use the virtual image 1102, in addition to the actual image 1101, as the image on which the composite image is output. The controller 130 may control the cluster panel 140 such that the cluster panel 140 outputs at least one of the actual image 1101 and the virtual image 1102 according to the driver's input.

According to the embodiment, by displaying object information for the objects (such as buildings and facilities) included in the composite image output on the cluster panel 140, the driver may be provided with visual elements that improve information perception.

Hereinafter, a method of displaying object information on the output composite image will be described in detail.

Fig. 6 is a schematic diagram illustrating an object on which object information is superimposed according to an embodiment.

As shown in fig. 6, the image output on the output area 1410 of the cluster panel 140 includes a first object 1421, a second object 1422, a third object 1423, a fourth object 1424, a fifth object 1425, and a sixth object 1426. Various types of buildings or facilities may be captured in the composite image acquired while the vehicle is traveling on the road, and the plurality of objects is displayed so that the various buildings are classified and then included in the composite image.

The object information may be superimposed on the plurality of objects so as to distinguish them from other objects. The object information serves as a means of distinguishing features, such as buildings and facilities, contained in the front image. For example, the object information may be associated with the name, address, and business name of a building.

The object information may be displayed differently for each object by using various visual elements. For example, when the object is a hospital, the object may be displayed in green, and when the object is a gas station, the object may be displayed in red. Thus, the color may be one item of the object information. As another example, different transparencies may be applied to distinguish different objects; in this case, the transparency may be one item of the object information. For convenience of description, color and transparency are described as examples, but the object information is not limited thereto. Various image synthesis methods can be used as long as the visual quality of the composite image is not impaired.
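
A minimal sketch of such a color-and-transparency overlay is given below; the palette, field names, and alpha value are assumptions for illustration, not values taken from the disclosure.

```python
import numpy as np

# Assumed example palette: each object type gets a distinguishing color (RGB).
TYPE_COLORS = {"hospital": (0, 200, 0), "gas_station": (220, 0, 0)}

def overlay_object(frame, region_mask, obj_type, alpha=0.4):
    """Superimpose a semi-transparent color over one object's region in the front image.

    frame:       HxWx3 uint8 composite image being built.
    region_mask: HxW boolean mask covering the object (e.g., a building footprint).
    obj_type:    key into TYPE_COLORS; color and transparency carry the object information.
    """
    # Fallback color for types not in the assumed palette.
    color = np.array(TYPE_COLORS.get(obj_type, (255, 255, 0)), dtype=np.float32)
    blended = (1.0 - alpha) * frame[region_mask] + alpha * color
    out = frame.copy()
    out[region_mask] = blended.astype(np.uint8)
    return out
```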

According to an embodiment, the controller 130 may generate a composite image in which the object information is superimposed on each object. The composite image may be based on either the actual image acquired by the image sensor 110 or the virtual image converted from the actual image. According to the embodiment, by emphasizing the main building images displayed in the composite image using the superimposition method, the driver can acquire, through the output image of the cluster panel 140, information that is not easily recognized by the naked eye during driving.

According to an embodiment, the controller 130 may control the object information so that the object information is selectively displayed according to the position of the object. For example, when the first object 1421 and the second object 1422 are located at a closer distance from the current position of the vehicle, and the third object 1423 and the fourth object 1424 are located at a greater distance from the current position of the vehicle, the controller 130 may generate an image in which the object information is superimposed on the first object 1421 and the second object 1422 but not on the third object 1423 and the fourth object 1424.
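
Such position-based gating could be sketched as a simple distance filter; the coordinate representation and the threshold are assumptions.

```python
def objects_within_display_range(objects, vehicle_position, max_distance_m=150.0):
    """Keep object information only for objects close enough to the vehicle.

    objects:          iterable of dicts with an assumed "position" field of (x, y)
                      map coordinates in meters.
    vehicle_position: (x, y) current vehicle position in the same coordinates.
    """
    vx, vy = vehicle_position

    def distance(obj):
        ox, oy = obj["position"]
        return ((ox - vx) ** 2 + (oy - vy) ** 2) ** 0.5

    return [obj for obj in objects if distance(obj) <= max_distance_m]
```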

Fig. 7 is a schematic diagram illustrating an object marked with an indicator according to another embodiment.

According to another embodiment, the indicator represents a display means for displaying the object information using text, symbols, or marks. According to another embodiment, the object information may be displayed by marking an indicator to an area adjacent to the object.

As shown in fig. 7, the image output on the output area 1410 of the cluster panel 140 includes a first object 1431, a second object 1432, a third object 1433, a fourth object 1434, a fifth object 1435, and a sixth object 1436. An indicator displaying text is marked to each object, where the text indicates information about the building.

According to another embodiment, the controller 130 may generate a composite image in which an indicator indicating the object information is marked to an area adjacent to the object. According to another embodiment, since the main building image is used without change in the composite image and an indicator is displayed in an area adjacent to it, the driver can acquire information about each building through the output image.
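
As a small sketch of placing such an indicator next to an object (the bounding-box representation, offset, and placement rule are assumptions; the actual text-drawing call is omitted):

```python
def indicator_position(obj_bbox, panel_size, offset=8):
    """Pick a point adjacent to an object where its text indicator can be drawn.

    obj_bbox:   (x0, y0, x1, y1) bounding box of the object in panel coordinates.
    panel_size: (width, height) of the cluster panel output area.
    The indicator is placed just above the object, or below it when there is no room.
    """
    x0, y0, x1, y1 = obj_bbox
    width, height = panel_size
    x = min(max(x0, 0), width - 1)
    y = y0 - offset if y0 - offset >= 0 else min(y1 + offset, height - 1)
    return x, y
```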

According to an embodiment, the controller 130 may select a type of object for which the object information is displayed according to an input of the driver. For example, when the driver needs to acquire information about gas stations during driving, the controller 130 may enable the object information to be displayed on the objects corresponding to gas stations according to a setting input by the driver. The cluster panel 140 may output a composite image displaying the selected object type according to the input of the driver selecting the object type.
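
A sketch of this type selection, with the object dictionary fields assumed as before:

```python
def filter_objects_by_type(objects, selected_types):
    """Keep only the objects whose type the driver chose to display.

    objects:        iterable of dicts such as {"name": ..., "type": "gas_station", ...}
                    (the field names are assumptions for this sketch).
    selected_types: set of type strings taken from the driver's input.
    """
    return [obj for obj in objects if obj.get("type") in selected_types]
```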

Hereinafter, a control method of the cluster panel of a vehicle according to an embodiment will be described.

Fig. 8 is a flowchart of a control method of a vehicle according to an embodiment. This is merely an example, and it should be understood that steps may be added or omitted. In addition, the subject performing each step may be omitted as needed.

The vehicle receives a front image of the vehicle (S901).

The vehicle generates a virtual image by simplifying the front image (S903). The virtual image refers to image data from which unnecessary image data elements are removed based on received actual image data.

The vehicle modifies the image so that at least a part of the front image is displayed on the output screen of the cluster panel (S905). Specifically, the vehicle may map the front image by aligning the front image with an edge of the output screen of the cluster panel or by removing an unnecessary area from the front image.

The vehicle receives object information about objects near the vehicle (S907). The object information refers to information on an object contained in the composite image. The object information may be transmitted from the navigation system.

When receiving the information on the object from the navigation system, the vehicle gives the object information to the object contained in the front image (S909). As shown in fig. 6 and 7, when the vehicle gives the object information to the object, the vehicle indicates information about the object on the output composite image by using a superimposition method or an indicator mark method.

When the object information is given to at least one object, the vehicle transmits data related to the composite image displaying the object information to the cluster panel (S911). In this case, the cluster panel may output the composite image showing the object information. The front image may be output as at least one of an actual image and a virtual image according to an input of the driver, and the method of displaying the object information may be changed according to a setting input by the driver.
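
Putting the steps S901 to S911 together, a compact sketch of one update pass is shown below; it reuses the helper sketches above, and the component interfaces (`capture`, `nearby_objects`, `show`) and the per-object `mask`/`type` fields are assumptions.

```python
def update_cluster_image(image_sensor, navigation, cluster_panel,
                         use_virtual=False, selected_types=None):
    """One pass of the S901-S911 flow, with all component interfaces assumed."""
    frame = image_sensor.capture()                      # S901: acquire the front image
    if use_virtual:
        frame = to_virtual_image(frame)                 # S903: simplify into a virtual image
    mask = elliptical_mask(*frame.shape[:2])
    frame = mask_to_output_area(frame, mask)            # S905: fit the panel output area
    objects = navigation.nearby_objects()               # S907: receive object information
    if selected_types is not None:
        objects = filter_objects_by_type(objects, selected_types)
    for obj in objects:                                 # S909: give object information
        frame = overlay_object(frame, obj["mask"], obj["type"])
    cluster_panel.show(frame)                           # S911: output the composite image
```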

As apparent from the above description, the vehicle and the control method of the cluster panel of the vehicle improve the driver's concentration on driving and the driver's recognition of additional information.

Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
