Occlusion information display method and device, electronic equipment and storage medium

Document No.: 125161 · Publication date: 2021-10-22 · Views: 26 · Original language: Chinese

Reading note: This technology, "Occlusion information display method and device, electronic equipment and storage medium", was designed and created by Yang Gaolei on 2021-09-07. Abstract: The disclosure relates to the technical field of unmanned driving and provides an occlusion information display method and device, an electronic device, and a storage medium. The method is applied to an unmanned vehicle, i.e. an unmanned or autonomous driving device, and includes: acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle; determining, based on the second video image, whether a running vehicle exists in a preset area behind the current vehicle; when a running vehicle exists in the preset area, determining, based on the positional relationship between the current vehicle and the running vehicle, an occlusion area in which the running vehicle is occluded by the current vehicle; and displaying, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area. The disclosure can display the road information occluded by the current vehicle to the running vehicle behind it in real time in video form, so that the running vehicle can learn the road conditions ahead in time and take correct driving actions, thereby improving driving safety.

1. A method for displaying occlusion information, comprising:

acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle;

determining whether a running vehicle exists in a preset area behind the current vehicle or not based on the second video image;

under the condition that the running vehicle exists in the preset area, determining, based on the positional relationship between the current vehicle and the running vehicle, an occlusion area in which the running vehicle is occluded by the current vehicle;

and displaying, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area.

2. The method of claim 1, wherein the acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle comprises:

acquiring a road image in front of the current vehicle as the first video image by using a first camera device arranged at the top of the head of the current vehicle;

and acquiring a road image behind the current vehicle as the second video image by using a second camera device arranged at the top of the tail of the current vehicle.

3. The method of claim 1, wherein determining whether a driving vehicle is present in a preset area behind the current vehicle based on the second video image comprises:

performing image processing on the second video image to obtain lane line information and position information of the running vehicle relative to the current vehicle;

and determining the preset area based on the lane line information, and determining whether the running vehicle exists in the preset area based on the position information.

4. The method according to claim 3, wherein the determining, in the case where the running vehicle exists within the preset area, an occlusion area where the running vehicle is occluded by the current vehicle based on a positional relationship between the current vehicle and the running vehicle includes:

and under the condition that the running vehicle exists in the preset area, determining an occlusion area in which the running vehicle is occluded by the current vehicle based on the position information of the running vehicle relative to the current vehicle.

5. The method of claim 1, wherein the displaying, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area comprises:

extracting the video image corresponding to the occlusion area from the first video image;

and displaying at least the video image to the running vehicle with a display device installed at the rear of the current vehicle.

6. The method of claim 1, further comprising:

and stopping displaying the first video image when the running vehicle does not exist in the preset area.

7. The method of any one of claims 1 to 6, wherein the current vehicle comprises an autonomous vehicle or an unmanned vehicle.

8. An occlusion information display device, comprising:

an acquisition module configured to acquire a first video image in front of a current vehicle traveling on a current lane and a second video image behind the current vehicle;

a first determination module configured to determine whether a running vehicle exists in a preset area behind the current vehicle based on the second video image;

a second determination module configured to determine, when the running vehicle exists in the preset area, an occlusion area where the running vehicle is occluded by the current vehicle based on a positional relationship between the current vehicle and the running vehicle;

a display module configured to display, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area.

9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of unmanned driving technologies, and in particular, to a method and an apparatus for displaying occlusion information, an electronic device, and a computer-readable storage medium.

Background

An unmanned vehicle, also called an autonomous vehicle, a driverless vehicle, or a wheeled mobile robot, is an integrated, intelligent new-era technical product combining elements such as environment perception, path planning, state recognition, and vehicle control. Unmanned driving can be achieved by performing cloud control, through a remote driving end, on vehicles equipped with intelligent software and various sensing devices.

In unmanned driving technology, when an unmanned vehicle runs on a road, important road information, such as traffic lights and road signs ahead, is sometimes blocked because some leading vehicle types (such as large and medium-sized trucks and buses) are too tall, too long, or too wide. In addition, when the distance between the unmanned vehicle and the vehicle ahead is too short, sudden road conditions ahead cannot be perceived and responded to in time, which creates a potential safety hazard for the unmanned vehicle; moreover, the sight of the vehicles behind the unmanned vehicle is also limited, creating a potential safety hazard for those vehicles as well.

In the prior art, the unmanned vehicle generally displays road information to the vehicle behind in text form, so the driver of that vehicle must read the text to understand the meaning of the displayed road information. This increases the burden on the driver of the vehicle behind and reduces driving safety.

Disclosure of Invention

In view of this, the embodiments of the present disclosure provide an occlusion information display method and apparatus, an electronic device, and a computer-readable storage medium, to solve the prior-art problem that an unmanned vehicle usually displays road information to the vehicle behind in text form, forcing that vehicle's driver to read the text to understand its meaning, which increases the driver's burden and reduces driving safety.

In a first aspect of the embodiments of the present disclosure, an occlusion information display method is provided, including: acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle; determining, based on the second video image, whether a running vehicle exists in a preset area behind the current vehicle; when a running vehicle exists in the preset area, determining, based on the positional relationship between the current vehicle and the running vehicle, an occlusion area in which the running vehicle is occluded by the current vehicle; and displaying, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area.

In a second aspect of the embodiments of the present disclosure, an occlusion information display device is provided, including: an acquisition module configured to acquire a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle; a first determination module configured to determine, based on the second video image, whether a running vehicle exists in a preset area behind the current vehicle; a second determination module configured to determine, when the running vehicle exists in the preset area, an occlusion area in which the running vehicle is occluded by the current vehicle, based on the positional relationship between the current vehicle and the running vehicle; and a display module configured to display, to the running vehicle, at least the video image in the first video image that corresponds to the occlusion area.

In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.

In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.

Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle are acquired; whether a running vehicle exists in a preset area behind the current vehicle is determined based on the second video image; when a running vehicle exists in the preset area, an occlusion area in which the running vehicle is occluded by the current vehicle is determined based on the positional relationship between the two vehicles; and at least the video image in the first video image that corresponds to the occlusion area is displayed to the running vehicle. The road information occluded by the current vehicle can thus be displayed in real time, in video form, to the running vehicle behind, so that the running vehicle can learn the road conditions ahead in time and take correct driving actions, which reduces the burden on the driver of the running vehicle and improves driving safety.

Drawings

To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without inventive effort.

FIG. 1 is a scenario diagram of an application scenario of an embodiment of the present disclosure;

fig. 2 is a schematic flowchart of a method for displaying occlusion information according to an embodiment of the present disclosure;

fig. 3 is a schematic flowchart of another occlusion information display method provided in the embodiment of the present disclosure;

fig. 4 is a schematic structural diagram of an occlusion information display device according to an embodiment of the present disclosure;

fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.

Detailed Description

In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.

A method and an apparatus for displaying occlusion information according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.

Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scene may include the vehicle 1, the first camera device 11, the second camera device 12 and the display device 13 mounted on the vehicle 1, the shooting area 111 of the first camera device 11, the shooting area 121 of the second camera device 12, the vehicle 2, the blocked area 21 of the vehicle 2, the shooting area 211 of the first camera device 11 corresponding to the blocked area 21, the vehicle 3, the lane 4 and the lane line 41.

Specifically, the vehicle 1 may be a vehicle that supports any one of smart driving, automatic driving, unmanned driving, and remote driving. Further, the vehicle 1 may be any of various devices that enable unmanned driving, such as an unmanned delivery vehicle or an unmanned vending vehicle; it may also be a vehicle with an automatic cruise control function, such as a car, a caravan, a truck, an off-road vehicle, a sport utility vehicle (SUV), an electric vehicle, or a bicycle, which is not limited by the embodiments of the present disclosure.

The first camera device 11 and the second camera device 12 may be various devices for capturing video images in front of and behind the vehicle 1 traveling on the current lane, including but not limited to a wide-angle camera, a binocular camera, a charge-coupled device (CCD) camera, a wireless camera, a zoom camera, a bullet camera, a dome camera, a wide-dynamic-range camera, and the like. The first camera device 11 and the second camera device 12 may be installed at any position on the vehicle 1, for example, the head, the body, or the tail, which is not limited by the embodiments of the present disclosure. Preferably, in the embodiments of the present disclosure, the first camera device 11 is installed on the top of the head of the vehicle 1 and is used for capturing an image of the road in front of the vehicle 1; the area captured by the first camera device 11 is the shooting area 111. The second camera device 12 is installed on the top of the tail of the vehicle 1 and is used for capturing an image of the road behind the vehicle 1; the area captured by the second camera device 12 is the shooting area 121. Further, wireless communication modules are provided in the first camera device 11 and the second camera device 12 to transmit the captured video images to a server via a network.

The display device 13 is used for displaying the video data captured by the first camera device 11; it is a display tool that presents an electronic file on a screen, via specific transmission hardware, for the human eye. The display device 13 may be a liquid crystal display (LCD). It may be mounted at any position on the vehicle 1, for example, the left side of the body, the right side of the body, or the tail, which is not limited by the embodiments of the present disclosure.

The vehicles 2 and 3 may be motor vehicles such as cars, caravans, trucks, off-road vehicles, and sport utility vehicles, electric vehicles, or bicycles, but the embodiments of the present disclosure are not limited thereto. Further, the vehicles 2 and 3 may also be vehicles that support any of the smart driving, automatic driving, unmanned driving, and remote driving functions described above. The occluded area 21 of the vehicle 2 refers to the area in which the view of the vehicle 2 is blocked by the vehicle 1 traveling ahead of it.

The lane 4 may be a road on which the vehicles 1, 2, and 3 travel; it covers both general roads and expressways whose traffic lanes follow legal rules, such as carriageways and passing lanes. The lane line 41 refers to a marking of the lane 4, including but not limited to white dashed and solid lines, yellow dashed and solid lines, no-stopping lines, speed-reduction markings, diversion lines, guidance indication lines, stop lines, optical-illusion markings, inter-vehicle distance confirmation lines, and the like.

The server may be a server that provides various services, for example, a backend server that receives a request sent by the vehicle 1, 2, or 3 with which a communication connection is established, and the backend server may receive and analyze the request sent by the vehicle 1, 2, or 3, and generate a processing result. The server may be one server, or may be a server cluster composed of a plurality of servers, or may also be one cloud computing service center or video cloud server, which is not limited in this disclosure. The server may be hardware or software. When the server is hardware, it may be various electronic devices that provide various services to the vehicle 1, the vehicle 2, or the vehicle 3. When the server is software, it may be a plurality of software or software modules for providing various services for the vehicle 1, the vehicle 2, or the vehicle 3, or may be a single software or software module for providing various services for the vehicle 1, the vehicle 2, or the vehicle 3, which is not limited in the embodiment of the present disclosure.

The network may be a wired network connected by coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects various communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which is not limited by the embodiments of the present disclosure.

The vehicle 1 and the vehicle 2 travel on the same lane 4, with the vehicle 1 in front of the vehicle 2; two vehicles 3 travel on another lane 4 adjacent to the vehicle 1, in the same traveling direction as the vehicle 1. The vehicle 1 captures an image of the road in front of it, i.e. the image within the shooting area 111, through the first camera device 11 mounted on the top of its head, and captures an image of the road behind it, i.e. the image within the shooting area 121, through the second camera device 12 mounted on the top of its tail. The vehicle 1 determines a preset area behind it based on the left and right lane lines 41 of the lane 4 and the maximum range captured by the second camera device 12, and, when the vehicle 2 exists within the preset area, determines the occluded area 21 in which the view of the vehicle 2 is blocked by the vehicle 1 based on the positional relationship between the two vehicles. Further, the image of the shooting area 211 corresponding to the occluded area 21 is extracted from the image within the shooting area 111 of the first camera device 11 and displayed to the vehicle 2 through the display device 13 installed at the rear of the vehicle 1, so that the driver of the vehicle 2 can learn the road conditions ahead in time and take correct driving actions, thereby reducing the driver's burden and improving driving safety.

It should be noted that the specific types, numbers and combinations of the vehicle 1, the vehicle 2, the vehicle 3, the first camera device 11, the second camera device 12, the display device 13, the server and the network may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this.

Fig. 2 is a schematic flow chart of a method for displaying occlusion information according to an embodiment of the present disclosure. The occlusion information display method of fig. 2 may be executed by a processor of the vehicle 1 of fig. 1. As shown in fig. 2, the occlusion information display method includes:

s201, acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle;

s202, determining whether a running vehicle exists in a preset area behind the current vehicle or not based on the second video image;

s203, under the condition that a running vehicle exists in the preset area, determining an occlusion area of the running vehicle occluded by the current vehicle based on the position relation between the current vehicle and the running vehicle;

and S204, at least displaying the video image corresponding to the shielding area in the first video image to the running vehicle.

Specifically, the processor acquires a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle, and determines whether the running vehicle exists in a preset area behind the current vehicle based on the second video image; under the condition that the running vehicle exists in the preset area, the processor determines an occlusion area where the running vehicle is occluded by the current vehicle based on the position relation between the current vehicle and the running vehicle, and at least displays a video image corresponding to the occlusion area in the first video image to the running vehicle.
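The per-frame flow of steps S201 to S204 can be sketched as a single function. This is a minimal illustrative sketch, not the patent's implementation: all names (`detect_rear_vehicle`, `compute_occlusion`, and so on) are assumptions, and the detection, geometry, and display steps are passed in as callables so the skeleton stays self-contained.

```python
PRESET_DISTANCE_M = 10.0  # example threshold from the 5-20 m range given below

def occlusion_display_step(front_frame, rear_frame,
                           detect_rear_vehicle, compute_occlusion,
                           crop, show, hide):
    """Run one iteration of the occlusion-information display loop."""
    # S202: look for a running vehicle within the preset area behind us.
    rear_vehicle = detect_rear_vehicle(rear_frame, PRESET_DISTANCE_M)
    if rear_vehicle is None:
        hide()                      # claim 6: stop displaying
        return None
    # S203: occlusion area from the relative position of the two vehicles.
    region = compute_occlusion(rear_vehicle)
    # S204: show at least the matching crop of the front image (S201).
    view = crop(front_frame, region)
    show(view)
    return view
```

The callables would be backed by the camera, image-processing, and display components described in the embodiments below.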

Here, the lane, also referred to as a carriageway or roadway, is the road on which a vehicle travels; it covers both general roads and expressways whose traffic lanes follow legal rules, such as carriageways and passing lanes. The number of lanes depends on the actual application scenario and may be, for example, 1, 2, 3, or 4, which is not limited by the embodiments of the present disclosure. For example, when the number of lanes is 3, there are 2 lanes in addition to the lane in which the vehicle is located; further, these 2 lanes may be located on the left and right sides of the vehicle's lane, or on the same side of it, which is not limited by the embodiments of the present disclosure.

The vehicle may be any one of a smart driving vehicle, an assisted driving vehicle, an autonomous vehicle, and an unmanned vehicle. Further, the vehicle may be any of various devices that enable unmanned driving, such as an unmanned delivery vehicle or an unmanned vending vehicle; it may also be a vehicle with an automatic cruise control function, such as a car, a caravan, a truck, an all-terrain vehicle, a sport utility vehicle, an electric vehicle, or a bicycle, which is not limited by the embodiments of the present disclosure. Preferably, in the embodiments of the present disclosure, the current vehicle may be an unmanned vehicle that senses its surroundings using on-board sensors and controls its steering and speed according to the road, vehicle position, and obstacle information obtained by this sensing, so that it can travel safely and reliably on the road. The running vehicle may be a vehicle operated by a driver.

A video image is a sequence of consecutive still images, which describes objective objects more intuitively and vividly. The first video image refers to the image of the road in front of the current vehicle, and the second video image refers to the image of the road behind the current vehicle. It should be understood that the first video image and the second video image may also include road images on the left and right sides of the current vehicle.

The preset area refers to an area range of a preset distance from the rear of the current vehicle to the current vehicle. The preset distance may be a distance threshold preset by the user according to empirical data, or may be a distance threshold obtained by adjusting the set distance threshold according to the requirement of the driver on the visual field, which is not limited in the embodiment of the present disclosure. For example, the preset distance may range from 5 meters to 20 meters. Preferably, in the disclosed embodiment, the preset distance is 10 meters.

The positional relationship between the current vehicle and the running vehicle refers to their relative positions, that is, the position of the running vehicle with respect to the current vehicle. The occlusion area refers to the area in which the view of the running vehicle is blocked by the current vehicle, and it may be determined based on the positional relationship between the current vehicle and the running vehicle.

According to the technical solution provided by the embodiments of the present disclosure, a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle are acquired; whether a running vehicle exists in a preset area behind the current vehicle is determined based on the second video image; when a running vehicle exists in the preset area, an occlusion area in which the running vehicle is occluded by the current vehicle is determined based on the positional relationship between the two vehicles; and at least the video image in the first video image that corresponds to the occlusion area is displayed to the running vehicle. The road information occluded by the current vehicle can thus be displayed in real time, in video form, to the running vehicle behind, so that the running vehicle can learn the road conditions ahead in time and take correct driving actions, which reduces the burden on the driver of the running vehicle and improves driving safety.

In some embodiments, acquiring a first video image in front of a current vehicle traveling in a current lane and a second video image behind the current vehicle includes: acquiring a road image in front of a current vehicle as a first video image by using a first camera device arranged at the top of the head of the current vehicle; and acquiring a road image behind the current vehicle as a second video image by using a second camera device arranged at the top of the tail of the current vehicle.

Specifically, the processor acquires a road image in front of the current vehicle as a first video image by using a first camera device installed at the top of the head of the current vehicle, and acquires a road image behind the current vehicle as a second video image by using a second camera device installed at the top of the tail of the current vehicle.

Here, the first camera device and the second camera device may be various devices for capturing road images along the vehicle's traveling direction, including but not limited to a wide-angle camera, a binocular camera, a charge-coupled device (CCD) camera, a wireless camera, a zoom camera, a bullet camera, a dome camera, a wide-dynamic-range camera, and the like. Preferably, lane lines, traffic markings, and traffic sign information are acquired by a camera, while vehicle and obstacle information may also be acquired by a lidar or the like. Further, wireless communication modules are provided in the first camera device and the second camera device to transmit the captured image information to the processor through a network.

In the embodiments of the present disclosure, the first camera device is installed on the top of the head of the vehicle and is used for capturing road images in front of the current vehicle; the second camera device is installed on the top of the tail of the vehicle and is used for capturing road images behind the current vehicle. Preferably, the first camera device is a multi-camera assembly comprising a 120-degree wide-angle camera with a detection distance of 60 meters and a 30-degree main-view camera with a detection distance of 500 meters; the wide-angle camera is mainly used for detecting the category and position of short-range, wide-field targets, and the main-view camera is mainly used for detecting the long-range drivable area in front of the vehicle and recognizing traffic signs.

The road image refers to an image of the surroundings of the current vehicle, including but not limited to images in front of, behind, and on the left and right sides of the current vehicle. The road image may include road information such as lanes, lane lines, traffic markings, traffic signs (e.g., traffic lights, speed limits, and traffic regulations), road users (e.g., cars, motorcycles, bicycles, and pedestrians), obstacles (e.g., pits, cones, and uncovered manholes), and so forth.

In some embodiments, determining whether there is a running vehicle in a preset area behind the current vehicle based on the second video image includes: performing image processing on the second video image to obtain lane line information and position information of a running vehicle relative to the current vehicle; a preset area is determined based on the lane line information, and whether a traveling vehicle exists in the preset area is determined based on the position information.

Specifically, the processor performs image processing on the second video image to obtain lane line information and position information of a running vehicle relative to the current vehicle; further, the processor determines a preset area based on the lane line information, and determines whether there is a traveling vehicle in the preset area based on the position information.

Here, image processing refers to techniques for analyzing an image with a computer to achieve a desired result. Image processing generally includes three parts: image compression; image enhancement and restoration; and image matching, description, and recognition. By performing image recognition on the second video image, the lane line information and vehicle information contained in it can be identified; based on the identified lane line information and the shooting range of the second camera device, the preset area behind the current vehicle can be accurately determined. Further, based on the recognized vehicle information, whether a running vehicle exists within the preset area can be accurately determined.
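As a rough illustration of the membership check described above, the sketch below assumes the image-processing stage has already produced the lane lines' lateral bounds and the detected vehicle's position in metric coordinates relative to the current vehicle. The coordinate convention and all names are assumptions, not the patent's method.

```python
def in_preset_area(vehicle_pos, lane_left_x, lane_right_x, preset_distance=10.0):
    """Return True if a detected vehicle lies in the preset area behind us.

    vehicle_pos is (lateral_x, longitudinal_y) in metres, relative to the
    current vehicle's tail, with y growing toward the rear; lane_left_x and
    lane_right_x are the lateral positions of the recognized lane lines.
    """
    x, y = vehicle_pos
    within_lane = lane_left_x <= x <= lane_right_x   # between our lane lines
    within_range = 0.0 < y <= preset_distance        # behind us, within range
    return within_lane and within_range
```

A vehicle in an adjacent lane, or one farther back than the preset distance, would be excluded even though it appears in the second video image.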

In some embodiments, in a case where there is a running vehicle within the preset area, determining an occlusion area where the running vehicle is occluded by the current vehicle based on a positional relationship between the current vehicle and the running vehicle includes: and under the condition that the running vehicle exists in the preset area, determining an occlusion area in which the running vehicle is occluded by the current vehicle based on the position information of the running vehicle relative to the current vehicle.

Specifically, when there is a running vehicle within the preset area, the processor calculates a relative position of the current vehicle and the running vehicle based on the recognized vehicle information, and determines an occlusion area where the running vehicle is occluded by the current vehicle based on the relative position.
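One simple way to bound such an occlusion area is a similar-triangles estimate: sight lines from the trailing driver past the current vehicle's edges widen with distance ahead. This is a hypothetical geometric sketch, not the patent's stated computation, and it assumes the trailing vehicle sits on the current vehicle's centerline.

```python
def occluded_width_at(current_vehicle_width: float,
                      gap_behind: float,
                      distance_ahead: float) -> float:
    """Width of road occluded by the current vehicle at a point
    `distance_ahead` meters in front of it, as seen from a trailing
    vehicle `gap_behind` meters behind it (similar triangles: the
    sight lines past the vehicle's edges diverge linearly)."""
    if gap_behind <= 0:
        raise ValueError("trailing vehicle must be behind the current vehicle")
    return current_vehicle_width * (gap_behind + distance_ahead) / gap_behind
```

At the current vehicle itself (`distance_ahead = 0`) the occluded width equals the vehicle width, and it grows the farther ahead one looks, which is why the trailing driver cannot see what is directly in front of the current vehicle.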

In some embodiments, displaying at least a video image corresponding to the occlusion region in the first video image to the moving vehicle includes: extracting a video image corresponding to the occlusion area from the first video image; at least the video image is displayed to the running vehicle with a display device installed behind the current vehicle.

Specifically, the processor crops the first video image to obtain the video image corresponding to the occluded area, and displays at least that video image to the running vehicle with a display device (for example, a liquid crystal display) installed behind the current vehicle. The driver of the running vehicle thus only needs to pay attention to the road information of the occluded area, rather than picking the required road information out of the full first video image, which reduces the burden on the driver of the running vehicle and improves driving safety.
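Extracting the occlusion region from the first video image amounts to cropping a rectangle out of each frame. A minimal sketch, representing a frame as a row-major list of pixel rows; the rectangle coordinates are assumed to come from the occlusion-area computation and are hypothetical here.

```python
def crop_occlusion_region(frame, top, left, height, width):
    """Crop the sub-image corresponding to the occlusion area out of a
    frame, where `frame` is a list of pixel rows and (top, left) is the
    rectangle's upper-left corner in pixel coordinates."""
    return [row[left:left + width] for row in frame[top:top + height]]
```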

It should be noted that the first video image may be displayed in full to a running vehicle behind the current vehicle, or the video image corresponding to the occlusion region and the first video image may be displayed simultaneously to the running vehicle behind the current vehicle in a picture-in-picture manner, which is not limited in this disclosure.
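The picture-in-picture option mentioned above can be sketched as a simple overlay: for instance, the cropped occlusion region shown full-screen with the complete first video image inset in a corner. Frames are again lists of pixel rows; the placement coordinates are illustrative assumptions.

```python
def compose_picture_in_picture(main_frame, inset_frame, top, left):
    """Overlay `inset_frame` onto a copy of `main_frame` with its
    upper-left corner at (top, left); the original frames are untouched."""
    out = [row[:] for row in main_frame]  # deep-copy the rows
    for r, row in enumerate(inset_frame):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out
```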

In some embodiments, the occlusion information display method further includes: and stopping displaying the first video image under the condition that no running vehicle exists in the preset area.

Specifically, when it is detected that there is no running vehicle in the preset area behind the current vehicle, the processor controls the display device to be turned off to stop displaying the first video image, thereby reducing resource consumption of the current vehicle.

All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.

Fig. 3 is a schematic flow chart of another occlusion information display method according to the embodiment of the present disclosure. The occlusion information display method of fig. 3 may be executed by a processor of the vehicle 1 of fig. 1. As shown in fig. 3, the occlusion information display method includes:

s301, acquiring a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle;

s302, performing image processing on the second video image to obtain lane line information and position information of a running vehicle relative to the current vehicle;

s303, determining a preset area based on the lane line information;

s304, determining whether a running vehicle exists in the preset area based on the position information; if yes, executing S305; otherwise, executing S308;

s305, determining an occlusion area of the running vehicle occluded by the current vehicle based on the position information of the running vehicle relative to the current vehicle;

s306, extracting a video image corresponding to the shielding area from the first video image;

s307, displaying at least the video image to the running vehicle by using a display device installed behind the current vehicle;

s308, stopping displaying the first video image.

Specifically, the processor acquires a road image in front of the current vehicle as the first video image with a first camera device arranged on top of the front of the current vehicle, and acquires a road image behind the current vehicle as the second video image with a second camera device arranged on top of the rear of the current vehicle. The processor performs image processing on the second video image to obtain lane line information and position information of a running vehicle relative to the current vehicle, and determines a preset area based on the lane line information. Further, in the case where there is a running vehicle within the preset area, the processor determines an occlusion area where the running vehicle is occluded by the current vehicle based on the position information of the running vehicle relative to the current vehicle, extracts the video image corresponding to the occlusion area from the first video image, and displays at least that video image to the running vehicle with a display device installed behind the current vehicle; in the case where there is no running vehicle within the preset area, the processor stops displaying the first video image.
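The S301–S308 flow can be condensed into a single control-flow sketch. The cameras, detector, occlusion geometry, cropping, and display are injected as stand-in callables (all hypothetical), so only the branching of Fig. 3 is modeled, including turning the display off (S308) when no trailing vehicle is present.

```python
def occlusion_display_step(front_frame, rear_frame, detect_trailing_vehicle,
                           compute_occlusion_rect, crop, display):
    """One iteration of the Fig. 3 flow: if a trailing vehicle is detected
    in the preset area (S302-S304), show it the cropped occluded region
    (S305-S307); otherwise stop displaying to save resources (S308)."""
    position = detect_trailing_vehicle(rear_frame)   # S302-S304
    if position is None:
        display(None)                                # S308: stop displaying
        return False
    rect = compute_occlusion_rect(position)          # S305
    display(crop(front_frame, rect))                 # S306-S307
    return True
```

In practice this step would run once per frame pair delivered by the two camera devices; the boolean return value indicates whether anything was shown.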

According to the technical solution provided by this embodiment of the present disclosure, the road information occluded by the current vehicle is displayed to the running vehicle behind it in real time in the form of video, so that the running vehicle can learn the road conditions ahead in time and adopt correct driving behaviors; this reduces the burden on the driver of the running vehicle and improves driving safety. Further, by stopping the display of the first video image when there is no running vehicle within the preset area, resource consumption of the current vehicle can be reduced.

The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.

Fig. 4 is a schematic structural diagram of an occlusion information display device according to an embodiment of the present disclosure. As shown in fig. 4, the occlusion information display device includes:

an acquisition module 401 configured to acquire a first video image in front of a current vehicle traveling in a current lane and a second video image behind the current vehicle;

a first determination module 402 configured to determine whether there is a traveling vehicle in a preset area behind the current vehicle based on the second video image;

a second determination module 403 configured to determine, in a case where there is a running vehicle within the preset area, an occlusion area where the running vehicle is occluded by the current vehicle, based on a positional relationship between the current vehicle and the running vehicle;

a display module 404 configured to display at least a video image corresponding to the occlusion region in the first video image to the traveling vehicle.

According to the technical solution provided by this embodiment of the present disclosure, a first video image in front of a current vehicle running on a current lane and a second video image behind the current vehicle are obtained; whether a running vehicle exists in a preset area behind the current vehicle is determined based on the second video image; in the case where a running vehicle exists in the preset area, an occlusion area where the running vehicle is occluded by the current vehicle is determined based on the positional relationship between the current vehicle and the running vehicle; and at least the video image corresponding to the occlusion area in the first video image is displayed to the running vehicle. The road information occluded by the current vehicle can thus be displayed to the running vehicle behind in real time in the form of video, so that the running vehicle can learn the road conditions ahead in time and adopt correct driving behaviors, which reduces the burden on the driver of the running vehicle and improves driving safety.

In some embodiments, the obtaining module 401 of fig. 4 uses a first camera mounted on top of the front of the current vehicle to capture an image of the road in front of the current vehicle as the first video image, and uses a second camera mounted on top of the rear of the current vehicle to capture an image of the road behind the current vehicle as the second video image.

In some embodiments, the first determining module 402 of fig. 4 performs image processing on the second video image to obtain lane line information and position information of the traveling vehicle relative to the current vehicle; a preset area is determined based on the lane line information, and whether a traveling vehicle exists in the preset area is determined based on the position information.

In some embodiments, in the case where there is a running vehicle within the preset area, the second determination module 403 of fig. 4 determines an occlusion area where the running vehicle is occluded by the current vehicle based on the position information of the running vehicle with respect to the current vehicle.

In some embodiments, the display module 404 of fig. 4 extracts a video image corresponding to the occlusion region from the first video image and displays at least the video image to the traveling vehicle using a display device installed behind the current vehicle.

In some embodiments, the occlusion information display device of fig. 4 further comprises: a stopping module 405 configured to stop displaying the first video image in a case where there is no traveling vehicle within the preset area.

In some embodiments, the current vehicle comprises an autonomous vehicle or an unmanned vehicle.

It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.

Fig. 5 is a schematic structural diagram of an electronic device 5 provided in the embodiment of the present disclosure. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: a processor 501, a memory 502 and a computer program 503 stored in the memory 502 and operable on the processor 501. The steps in the various method embodiments described above are implemented when the processor 501 executes the computer program 503. Alternatively, the processor 501 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 503.

Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 503 in the electronic device 5.

The electronic device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or another electronic device. The electronic device 5 may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of the electronic device 5 and does not constitute a limitation of it; the electronic device may include more or fewer components than shown, or combine certain components, or have different components, e.g., it may also include input-output devices, network access devices, buses, etc.

The Processor 501 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or a memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) equipped on the electronic device 5. Further, the memory 502 may also include both an internal storage unit and an external storage device of the electronic device 5. The memory 502 is used for storing the computer program and other programs and data required by the electronic device. The memory 502 may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, and multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be subject to appropriate additions or deletions in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media may not include electrical carrier signals or telecommunications signals.

The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.
