Method for visualizing sensor data and/or measurement data

The invention relates to a method for visualizing sensor data from the vehicle surroundings and/or measurement data from the vehicle by means of a light module (3) in a vehicle interior (1). The invention is characterized in that the sensor data are acquired as video data, which are subsequently analyzed with respect to identifiable key structures; the key structures are then converted into video sequences having a format adapted to the respective light module (3), and/or sensor data and/or measurement data not acquired as video data are converted into video sequences by means of an algorithm; the video sequences from the different data are then superimposed and displayed on the light module (3). A device for carrying out the method is also specified.

1. A method for visualizing sensor data from the vehicle surroundings and/or measurement data from the vehicle by means of light modules (3) in a vehicle interior (1), characterized in that the sensor data are acquired as video data, said video data are subsequently analyzed with respect to identifiable key structures, said key structures are then converted into video sequences having a format adapted to the respective light module (3), and/or sensor data and/or measurement data not acquired as video data are converted into video sequences by means of an algorithm, and the video sequences from the different data are subsequently superimposed and displayed on said light modules (3).

2. A method according to claim 1, characterized in that said video sequence is displayed as real-time data on said light module (3).

3. A method according to claim 1 or 2, characterized in that, in addition to said light module (3), ambience lighting (4) is provided in the vehicle, said ambience lighting being controlled as a function of the video sequence displayed on said light module (3).

4. A method according to claim 1, 2 or 3, characterized in that the superposition of the video sequences is carried out according to the priority of the data sources on which the video sequences are based.

5. A method according to one of claims 1 to 4, characterized in that a pre-stored video sequence associated with predetermined sensor data and/or measured values is retrieved from a memory (17) as a supplement and is superimposed with the other video sequences.

6. A method according to claim 5, characterized in that a video sequence generated or calculated by the algorithm from acquired sensor data and/or measured values is stored in the memory (17) as a pre-stored video sequence when said video sequence is based on characteristic sensor data and/or measured values that are expected to recur.

7. A method according to one of claims 1 to 6, characterized in that the light module (3) is controlled by means of a converter (16).

8. A device for carrying out the method according to one of claims 1 to 7, having: a vehicle ambience lighting system comprising at least one light module (3) within the vehicle; and a central control device (9) designed to generate video sequences from video data, to calculate video sequences from sensor data and/or measurement data, and to superimpose video sequences from different sources, wherein the central control device (9) is connected to said light modules (3), directly or indirectly, via a separate high-speed bus (13).

9. A device according to claim 8, characterized in that the light module (3) is designed as an annular light strip (2) in the vehicle.

10. A device according to claim 8 or 9, characterized in that the vehicle ambience lighting system comprises, in addition to the light modules (3), ambience lighting elements (4) which are connected to a base control device (11) having a data communication connection (10) to the central control device (9).

11. A device according to claim 8, 9 or 10, characterized in that a memory (17) for pre-stored video sequences is provided in the central control device (9), or the memory (17) is connected directly to the central control device (9).

Technical Field

The invention relates to a method for visualizing sensor data from the surroundings of a vehicle and/or measurement data from the vehicle by means of at least one light module in the interior of the vehicle. The invention also relates to a device for carrying out the method.

Background

DE 102016004175 A1 describes a driver assistance system for a vehicle. Its basis is the processing of sensor data and a display device for displaying the processed sensor data, implemented essentially in the form of a light strip running around the entire vehicle interior. The light strip can, for example, be part of a so-called ambience interior lighting of the vehicle, in which the interior is illuminated by means of individual lighting elements. In a manner known per se, such interior lighting can be adjusted with regard to brightness, color and the like, or can be programmed by the person using the vehicle according to his or her mood. Adjusting the interior lighting on the basis of, for example, the type of music track being played is also well known and feasible.

When light modules are used, for example in the form of light strips as described in the prior art, even more complex information symbolizing the respective measured values or sensor data can be displayed.

DE 102015221180 A1 describes the use of a central light control device for a plurality of peripheral light control devices distributed in the vehicle, each of which controls a plurality of light sources of the ambience lighting. In this configuration, the peripheral light control devices distributed in the vehicle use patterns stored locally in order to display certain data and states. The central control device then only has to transmit the code for the respective pattern to the peripheral light control devices, which can be done simply, efficiently and quickly via a common bus connection. The previously stored data set is then retrieved in the peripheral control device and displayed accordingly by the ambience interior lighting. A disadvantage of this configuration is that it can only operate with pre-stored lighting programs and can therefore respond only to a very limited extent to the different situations occurring in traffic and in the vehicle, since it is never possible to anticipate all conceivable situations with a lighting pattern adapted to them and stored in the peripheral light control devices. The system therefore cannot actually react to events occurring at the moment.

Disclosure of Invention

The object of the present invention is to further develop a method for visualizing sensor data and/or measurement data by means of light modules in a vehicle interior, and to provide a device suitable for carrying out the method, which allow simple and efficient visualization on the light modules.

According to the invention, this object is achieved by a method having the features of claim 1. Advantageous embodiments and developments emerge from the dependent claims. The object is also achieved by a device for carrying out the method having the features of claim 8. Advantageous embodiments and developments of the device likewise emerge from the dependent device claims.

In the method according to the invention, in order to visualize the sensor data or measurement data by means of the light modules in the vehicle interior, the sensor data are acquired as video data, for example by a surroundings camera of the vehicle. This camera, like other sources of sensor data (for example radar targets, vehicle states such as a regeneration process, or voice input via the speech dialog system), may already be used for the driver assistance systems themselves or may be provided in the vehicle additionally for the method according to the invention. A single camera or a plurality of cameras may be used; individually or together, they can capture an image of the vehicle surroundings, in particular a 360° image of the entire surroundings of the vehicle. The captured video data are then analyzed with respect to identifiable key structures. This analysis makes it possible to reduce the respective image sequences to the information that is actually needed for controlling the light modules. In particular, content of interest to the vehicle user can be highlighted here, for example objects moving toward the vehicle. For this purpose, a wide variety of image-processing algorithms may be employed which identify key structures on the basis of, for example, color, image segmentation, speed and/or contrast.
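
Purely by way of illustration, the following sketch shows how such a key-structure analysis might look in principle. It assumes frames arriving as NumPy arrays and uses simple motion and contrast cues; the function name detect_key_structures and the threshold values are hypothetical and not taken from the patent.

    import numpy as np

    def detect_key_structures(prev_frame: np.ndarray, frame: np.ndarray,
                              motion_thresh: float = 25.0,
                              contrast_thresh: float = 40.0) -> np.ndarray:
        """Return a boolean mask marking image regions treated as 'key
        structures': strong motion between consecutive frames or high
        contrast. Frames are H x W x 3 uint8 RGB images."""
        gray = frame.mean(axis=2)
        gray_prev = prev_frame.mean(axis=2)

        # Motion cue: absolute per-pixel difference between consecutive frames.
        motion = np.abs(gray - gray_prev) > motion_thresh

        # Contrast cue: deviation of each pixel from the mean image brightness.
        contrast = np.abs(gray - gray.mean()) > contrast_thresh

        return motion | contrast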

The key structures are then converted into a video sequence having a format adapted to the respective light module. Depending on the light module used (here, in particular, light strips in the vehicle interior can be used, similar to the prior art described above), the resolution typically amounts to only a few individual light points, for example in the height direction of the light module. The video data therefore cannot simply be played back directly as a "film". Instead, the highlighted key structures are converted into a format adapted to the respective light module, via which these characteristic structures are then displayed.
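
A minimal sketch of such a format conversion, assuming the target light module is a single-row display with a given number of columns; the helper to_light_strip and the column-averaging strategy are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def to_light_strip(frame: np.ndarray, mask: np.ndarray,
                       columns: int = 100) -> np.ndarray:
        """Reduce an H x W x 3 camera frame to a single row of `columns`
        RGB values, keeping only pixels flagged as key structures."""
        h, w, _ = frame.shape
        strip = np.zeros((columns, 3), dtype=np.uint8)
        edges = np.linspace(0, w, columns + 1).astype(int)
        for i in range(columns):
            band = frame[:, edges[i]:edges[i + 1]]
            band_mask = mask[:, edges[i]:edges[i + 1]]
            if band_mask.any():
                # Average colour of the highlighted pixels in this column band.
                strip[i] = band[band_mask].mean(axis=0).astype(np.uint8)
        return strip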

Alternatively or, in particular, in addition to this, sensor data which are not acquired as video data by the vehicle surroundings sensors, but rather by interior sensors or by sensors which measure, for example, the current settings of the air-conditioning system, can also be processed. As an alternative or in addition to such sensor data, it is also possible to process measurement data, which may originate, for example, from the domain of the vehicle telematics systems. The sensor data and the measurement data may partially overlap in terms of content, which is of minor importance for the invention. From sensor data and/or measurement data which are not present as video data, a video sequence adapted to the respective light module in terms of format and resolution can likewise be calculated by means of a suitable algorithm. In this way, certain measurement data, for example from a vehicle telematics system or a driver assistance system, can be converted by suitable algorithms into a corresponding video sequence that presents the data in a characteristic manner.
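
The following sketch illustrates one conceivable algorithm of this kind, converting a single scalar measured value, for example a climate-control setpoint, into a short strip-format video sequence; the name measurement_to_sequence, the color mapping and the animation are assumptions for illustration only.

    import numpy as np

    def measurement_to_sequence(value: float, v_min: float, v_max: float,
                                columns: int = 100, frames: int = 30) -> np.ndarray:
        """Turn a scalar measurement (e.g. a climate-control setpoint) into
        a short video sequence of shape (frames, columns, 3): a bar that
        fills the strip proportionally to the value, blue (low) to red (high)."""
        level = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
        lit = int(level * columns)
        colour = np.array([255 * level, 0, 255 * (1 - level)], dtype=np.uint8)

        seq = np.zeros((frames, columns, 3), dtype=np.uint8)
        for f in range(frames):
            # Animate the bar filling up over the duration of the sequence.
            fill = int(lit * (f + 1) / frames)
            seq[f, :fill] = colour
        return seq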

The video sequences from the video data on the one hand and from the measurement data and/or sensor data on the other hand are then superimposed on one another and displayed as an overall video sequence on the light module. The current situation is thus always represented symbolically by the data displayed on the light module and can be grasped intuitively by the vehicle user.

In contrast to the prior art described in the introduction, the data used here originate from direct measurements by sensors or from directly determined measurement data and are processed and converted into a video sequence virtually in real time. The visualization of the video sequence on the light module therefore reproduces the current situation; in particular, a situation that is occurring now but has never occurred before can also be displayed, for which no pre-stored video sequence could be used.

According to an advantageous development of the concept, the video sequences are displayed as real-time data on the light modules. Such a realization of the video sequences as real-time data requires comparatively high processing power for the video data processing and for the calculation of the video sequences from sensor data and/or measurement data. However, the "real-time" visualization of the acquired data on the light modules of the vehicle is an important gain for the vehicle user, since it accompanies his own perception of the surroundings while the visualization is shown on the light modules; this is very beneficial for sharpening perception and focusing attention, whereas a time offset would rather distract or confuse the vehicle user.

The use of real-time data also has the following advantages:

This allows flexible handling of the measurement data even if the control system is changed or has been modified by the customer, for example through a purchase or upgrade.

Furthermore, the system and its interfaces can then also be designed flexibly and variably.

A further advantageous embodiment of the method according to the invention provides that, in addition to the light module, ambience lighting is provided in the vehicle, which is controlled as a function of the video sequence displayed on the light module. The light module is in particular only part of a so-called ambience interior lighting of the vehicle. The vehicle is provided with further lighting elements for the ambience interior lighting, for example light-emitting diodes whose color and intensity can be controlled, which are distributed in large numbers in different regions of the vehicle and are used in particular for indirect lighting. According to this advantageous development of the method according to the invention, these lighting elements can now also be controlled as a function of the video sequence, so that a coordinated overall lighting impression is obtained, which allows favorable perceptibility both for the person driving the vehicle and for occupants travelling as passengers.
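
As an illustration of how the ambience lighting elements could follow the video sequence shown on the light module, the following sketch derives dimmed colors for the individually addressable elements from the current strip frame; the position mapping and the dimming factor are assumptions.

    import numpy as np

    def ambience_from_strip(strip_frame: np.ndarray, n_elements: int = 150,
                            dim: float = 0.3) -> np.ndarray:
        """Derive colours for the individually addressable ambience LEDs
        from the frame currently shown on the light strip, dimmed so the
        ambience lighting follows the strip without competing with it."""
        columns = strip_frame.shape[0]
        # Map each ambience element to the nearest strip column by position.
        idx = np.arange(n_elements) * columns // n_elements
        return (strip_frame[idx] * dim).astype(np.uint8)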

A very advantageous embodiment of the method according to the invention further provides that the superposition of the video sequences is carried out according to the priority of the data sources on which they are based. The video sequences can thus be superimposed with corresponding priorities or weights, so that certain video sequences receive a higher weight and are more strongly emphasized or more clearly perceptible in the superimposed overall image, for example through a higher light intensity or a higher contrast. This has the advantage that the display of the video sequences becomes dynamic and can react spontaneously to changes in the surroundings. This can be done, in particular, depending on the data source: if a video sequence is based on data generated by safety-relevant sensors or measuring devices, for example on an object (such as an obstacle) detected in the surroundings of the vehicle, it is weighted higher than, for example, measurement data or sensor data from the vehicle comfort controls, i.e. from the domain of the air conditioning, the media playback devices or the like.
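
A minimal sketch of such a priority-controlled superposition, assuming all video sequences have already been brought to the same strip format; the weighting scheme and weights shown are illustrative assumptions rather than the patented prioritization.

    import numpy as np

    def superimpose(sequences: list[tuple[np.ndarray, float]]) -> np.ndarray:
        """Superimpose several strip frames of identical shape, each paired
        with a priority weight. Safety-related sources get higher weights
        and therefore dominate the resulting overall frame."""
        total = np.zeros_like(sequences[0][0], dtype=float)
        weight_sum = sum(w for _, w in sequences)
        for frame, weight in sequences:
            total += frame.astype(float) * (weight / weight_sum)
        return np.clip(total, 0, 255).astype(np.uint8)

    # Example: an obstacle warning (weight 3.0) outweighs climate feedback (1.0).
    # overall = superimpose([(warning_frame, 3.0), (climate_frame, 1.0)])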

For certain situations that recur in exactly the same way, in particular from the domain of the comfort controls or the telematics systems, it is also possible to retrieve a pre-stored video sequence from a memory in addition to the currently generated video sequences and to superimpose it with the other video sequences. For such recurring situations, video sequences that are pre-stored, or that can be updated through purchases by the user, can thus be used in order to reduce the computing effort. These optional pre-stored video sequences can be deposited in particular for situations that are detected from the sensor data or measurement data and always occur in exactly the same way. They may relate, for example, to warning signals from the vehicle, such as a seat belt not being fastened, a door not being closed, and the like. Likewise, this is conceivable, for example, for visualizing the temperature, certain settings of the air-conditioning system, and so on. For situations detected from sensor data, in particular video data, of the vehicle surroundings, such storage is of little practical importance, since these situations will hardly ever recur in exactly the same way.

The method according to the invention can now also provide that, where pre-stored video sequences for given sensor data and/or measurement data are retrieved from a memory, a new video sequence, generated or calculated by the algorithm from acquired sensor data and/or measurement data, is stored in the memory as a pre-stored video sequence if it is based on characteristic sensor data and/or measurement data that are expected to recur. By way of the memory, the current video sequence can thus also be stored as a pre-stored video sequence for future use. Existing video sequences may be overwritten and new video sequences deposited if certain situations are expected to occur frequently in the same form. The system can thus, to a certain extent, "learn" certain events in order to save computing effort when the same situation occurs again in the future, in which case the newly pre-stored video sequence can simply be retrieved.
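
The following sketch outlines how such a "learning" memory of pre-stored video sequences could be organized, keyed by a characteristic signature of the sensor data and/or measured values; the class name SequenceMemory and the string-based signature are hypothetical.

    import numpy as np

    class SequenceMemory:
        """Cache of pre-stored video sequences keyed by a characteristic
        sensor/measurement signature (e.g. 'seat_belt_open'). A known
        signature returns the stored sequence without recomputation; a new
        one stores the freshly computed sequence for future reuse."""

        def __init__(self) -> None:
            self._store: dict[str, np.ndarray] = {}

        def get_or_compute(self, signature: str, compute) -> np.ndarray:
            if signature in self._store:
                return self._store[signature]
            sequence = compute()
            # 'Learn' the new situation: overwrite or add the pre-stored entry.
            self._store[signature] = sequence
            return sequence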

The video sequences for controlling the light modules, i.e. for display on them, can then be transmitted to the light modules directly or, according to a very advantageous configuration of the method according to the invention, by means of a converter for controlling the light modules. Such a converter, which in software terms can also be regarded as a mapping or mixing algorithm, allows an adaptation to light modules of different lengths, for example, so that the video sequence can always be handed over to the converter in the same way; the converter then adjusts the video sequence to the light module hardware. This ensures that comparable displays always appear on the light modules, even if they have different resolutions.
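
A minimal sketch of such a converter, assuming strip frames are handed over in a fixed reference resolution and resampled linearly to the pixel count of the installed light module; the function name map_to_module and the choice of linear interpolation are assumptions.

    import numpy as np

    def map_to_module(strip_frame: np.ndarray, module_columns: int) -> np.ndarray:
        """Resample a strip frame (N x 3) to the pixel count of a concrete
        light module, so the same input format can always be handed over
        and the converter adapts it to the installed hardware."""
        n = strip_frame.shape[0]
        src = np.linspace(0, n - 1, module_columns)
        lo = np.floor(src).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        frac = (src - lo)[:, None]
        # Linear interpolation between neighbouring source pixels.
        out = strip_frame[lo] * (1 - frac) + strip_frame[hi] * frac
        return out.astype(np.uint8)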

The device according to the invention for carrying out the method described above, in one or more of the variant embodiments described, provides that the vehicle ambience lighting system comprises at least one light module within the vehicle. An overall video sequence for the individual light modules is generated by superposition in a central control device, which is designed to generate video sequences from the video data, to calculate video sequences from the sensor data and/or measurement data, and to superimpose the video sequences. The central control device is connected to the light modules via a separate high-speed bus, either directly or, according to an advantageous development of the method, indirectly via the above-mentioned converter. The central control device thus processes the data and superimposes the video sequences on one another. The video sequences can then be transmitted to the light modules via an independent high-speed bus, for example a fast CAN-FD bus, so that a near-real-time display can be achieved. This makes it possible to display the video sequences derived from the sensor data and/or measurement data in real time, with the corresponding advantages described above.

As already explained, according to an advantageous development the light module can be designed as an annular light strip in the vehicle. Such an annular light strip, as is known in principle from DE 102016004175 A1, is particularly suitable for displaying to the vehicle user, and in particular to the person driving the vehicle, video sequences based on sensor data and/or measurement data that are critical for that person, for example in order to move along smoothly with the traffic surrounding the vehicle.

A very advantageous embodiment of the device according to the invention further provides that the vehicle ambience lighting system comprises, in addition to the light modules, ambience lighting elements which are connected to a base control device, the base control device having a data communication connection to the central control device. Such a base control device for the ambience lighting is in principle always present anyway. In this advantageous development of the device according to the invention, the base control device now has a data communication connection to the central control device. This allows data to be exchanged between the central control device and the base control device, as described above for the method. The lighting elements of the ambience lighting provided in addition to the light module can thus be adjusted according to the video sequence displayed on the light module, in order to obtain a harmonious overall impression of the ambience interior lighting of the vehicle.

In an advantageous development of the device according to the invention, provision can also be made for a memory for the pre-stored video sequences to be provided in the central control device, or for the memory to be connected directly to the central control device. The direct connection of the memory for the pre-stored video sequences ensures that these sequences are incorporated quickly, simply and efficiently into the superposition of the overall video sequence in the central control device, which can then transmit the video data correspondingly quickly via a separate high-speed bus provided solely for this data. This, too, helps to visualize the sensor data and/or measurement data by means of a video sequence without a time delay that is perceptible to the vehicle user.

Drawings

Further advantageous configurations of the concept emerge from the following description of an exemplary embodiment with reference to the drawings, in which:

FIG. 1 shows a schematic diagram of a vehicle interior with ambience interior lighting;

FIG. 2 shows the control architecture of a device for carrying out the method;

FIG. 3 shows a visualization of the program flow in the central control device.

Detailed Description

The illustration of fig. 1 shows the interior 1 of a vehicle, which is not shown in its entirety. The interior 1 of the vehicle is equipped with a vehicle ambience lighting system, i.e. an ambience interior lighting. In the exemplary embodiment shown here, this comprises a light strip, denoted by 2, which runs around the interior 1 of the vehicle. The light strip 2 can in particular be divided into a plurality of individual light modules, for example eight individual light modules, each of which is denoted here by 3. The ambience interior lighting of the vehicle further comprises a plurality of individual ambience lighting elements 4, which can be formed, for example, as light-emitting diodes for indirect illumination of the vehicle, for example of the footwell area. In the view of fig. 1, only some of these ambience lighting elements are provided with the reference numeral 4. In total, up to 150 individual light-emitting diodes, controllable with regard to their color and luminous intensity, can be provided as lighting elements 4. The vehicle further comprises various sensors and measured-value recorders, whose data are to be visualized, in particular, by means of the light modules 3 of the annular light strip 2.

The view of fig. 2 shows various sensor groups and measured-value recorders. All comfort-related sensors and measured-value recorders are represented generically by the box denoted 5. They detect, for example, settings in the vehicle interior 1, in particular the settings of the air conditioning, the audio system, the seat heating, the seat adjustment and the like. The box denoted 6 combines the sensors of the driver assistance systems, which, for example, detect objects in the surroundings of the vehicle, recognize other vehicles, detect the vehicle leaving its lane, and so on. The sensors combined in box 6 are supplemented by at least one camera, symbolically indicated by box 8, which can likewise be part of the sensor system of the driver assistance systems 6. Measured values and sensor data from the domain of the telematics systems are represented by box 7. All data are transmitted to the central control device 9 via a data communication connection, for example an Ethernet bus, shown here and denoted by 10.

The central control device 9 is in turn connected via the Ethernet bus 10 to a base control device 11 for the vehicle ambience lighting. Via a linear bus 12 for up to 150 lighting elements 4, for example, the base control device 11 can address each lighting element 4 individually and control it with respect to color and luminous intensity depending on the position at which it is installed, in order to provide the ambience lighting of the vehicle interior 1.

In addition, the central control device 9 is connected to the light modules 3, i.e. here the eight light modules 3 of the light strip 2, via separate high-speed CAN-FD buses 13 serving as video links, whereby, for example, up to 100 LEDs per light module 3 can be controlled very quickly as a single-row or multi-row video display.
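
To illustrate why a high-speed bus such as CAN-FD lends itself to this video link, the following sketch splits the RGB data of one light module (for example 100 LEDs, i.e. 300 bytes per frame) into payloads of at most 64 bytes, the maximum CAN-FD payload size; the two-byte header is a purely hypothetical framing, not a protocol defined in the patent.

    import numpy as np

    def pack_canfd_frames(module_id: int, strip_frame: np.ndarray,
                          payload_size: int = 64) -> list[bytes]:
        """Split the RGB data of one light module into CAN-FD payloads of
        at most 64 bytes. The first two bytes of each payload carry the
        module id and a chunk index (illustrative framing only)."""
        data = strip_frame.astype(np.uint8).tobytes()
        chunk = payload_size - 2
        frames = []
        for i in range(0, len(data), chunk):
            frames.append(bytes([module_id, i // chunk]) + data[i:i + chunk])
        return frames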

The central control device 9 is now essentially responsible for the three different tasks shown schematically in the view of fig. 3. Via the Ethernet bus 10, the data arrive in the central control device 9 and here first in the video processing unit, which is divided into the two blocks 14.1 and 14.2. The data of the at least one camera 8, for example, are processed in this unit 14, which can also be referred to as a video processor, in order to analyze the video data for identifiable key structures on the basis of color, image segmentation, speed and contrast, and to extract the relevant information from the temporal sequence of the individual frames. These data are then prepared so that they can be used to control the light modules 3, in particular adapted to the display format of the respective light module 3, for example a single-row video display with up to 100 columns. The content of interest in the captured video data is thus ultimately highlighted in block 14.1 of the video processor 14.

As already described, data from the domains of comfort control (5), driver assistance (6) and telematics (7) are also fed in via the Ethernet bus 10. These data can likewise be processed by algorithms, as required, into video sequences, each formatted to suit the control of the light modules 3 of the light strip 2. The video sequences from the video processor 14 then arrive at the video parser 15, where they are superimposed. An overall superposition of videos, for example of up to five separate videos compiled from different data sources in blocks 14.1 and/or 14.2 of the video processor 14, can be combined into one overall video sequence by prioritization. The priority control makes sense here, for example, in order to prioritize and weight the safety-relevant information over the comfort-relevant information. This results in a total video in which, in principle, all information can be recognized, but in which the information that is more important to the vehicle user has a higher priority and is therefore more easily recognized, through a corresponding choice of light intensity and contrast. The data of the total video are then transmitted to the eight light modules 3 directly or, as in the embodiment shown in fig. 3, by means of a converter 16, also called a mapper. The separate high-speed CAN-FD bus mentioned can for this purpose be designed, for example, in the form of four CAN-FD buses, each of which controls two of the light modules 3. It is thereby possible to transmit a video sequence to the light modules 3 approximately in real time, so that a real-time display of the acquired sensor data and measured values is obtained in the light modules 3 of the light strip 2 by means of the video sequence.

The converter or mapper 16, so to speak, "maps" the individual video pixels of the total video from the video parser 15 onto the light modules 3 or the CAN-FD buses 13 assigned to them. In this way, light modules of different lengths can also be controlled without this having to be taken into account when the video sequence is generated. Since the central control device 9 with the mapper 16 prepares the video pixels of the total video directly for the light modules 3, the light modules 3 themselves can be of particularly simple design.
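
The following sketch illustrates this mapping step in simplified form: one frame of the total video is split into per-module segments and grouped by the bus that serves the respective modules; the even split and the two-modules-per-bus grouping mirror the example architecture described above and are otherwise assumptions.

    import numpy as np

    def map_total_video(total_frame: np.ndarray, n_modules: int = 8,
                        n_buses: int = 4) -> dict[int, list[np.ndarray]]:
        """Split one frame of the total video (N x 3) into per-module
        segments and group them by CAN-FD bus (two modules per bus in the
        example architecture). Returns {bus_index: [module segments]}."""
        segments = np.array_split(total_frame, n_modules, axis=0)
        per_bus: dict[int, list[np.ndarray]] = {b: [] for b in range(n_buses)}
        for module_idx, segment in enumerate(segments):
            per_bus[module_idx * n_buses // n_modules].append(segment)
        return per_bus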

Optionally, a memory 17 can also be provided, which is arranged in the central control device 9 or connected directly to it. This memory 17 can contain pre-stored video sequences which allow a meaningful visualization of the data in certain situations that can be detected from the sensor data and/or measurement data. In such cases, the computing effort in block 14.2 of the video processor 14 can be saved. The videos from the memory 17 are likewise superimposed on the other videos in the video parser 15, as shown correspondingly in the view of fig. 3.

In addition, the newly generated video sequence may be stored in the memory 17 by the video parser 15 or also by the video processor 14 so that it can be used as a pre-stored video sequence at a future time.

The central control device 9 is also connected to the base control device 11 via the Ethernet bus 10, as can be seen from the illustration in fig. 2. Data from the central control device 9, in particular data relating to the total video from the video parser 15, can thus be transmitted to the base control device 11. The lighting elements 4 of the ambience interior lighting can then be adapted in their color and light intensity at the respective location to the overall video sequence running in the region of the light strip 2, in order to obtain a harmonious overall impression of the interior lighting and to convey the required information intuitively to the person using the vehicle.

The mapper 16 can of course also be dispensed with if the corresponding preparation of the video sequence already takes place in the video processor 14 and in the total video of the video parser 15. The data can then be transmitted directly from the video parser 15 to the light modules 3, in particular if the light modules are of identical construction and have the same size and pixel resolution.
