Information processing method, information processing device, information processing program, and information processing system

Document No.: 539049    Publication date: 2021-06-01    Original language: Chinese

Reading note: This technique (information processing method, information processing device, information processing program, and information processing system) was designed and created by 井本淳一, 庄田幸惠, and 岩崎正宏 on 2019-10-18. Its main content is as follows: A drive recorder (1) acquires video data from a camera that captures the surroundings of a vehicle, acquires sensing data including at least one of the acceleration, speed, and angular velocity of the vehicle, and assigns time information indicating the time of acquisition to the video data and the sensing data, respectively. When an event occurs in the vehicle, the drive recorder (1) transmits the sensing data to a data analysis server (2). The data analysis server (2) recognizes the content of the event occurring in the vehicle based on the sensing data and transmits the recognition result to the drive recorder (1). The drive recorder (1) judges whether the recognized event indicated by the recognition result is a predetermined event and, when it judges that the recognized event is the predetermined event, specifies the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition, and transmits the specified video data to a video storage server (3).

1. An information processing method of an information processing system, comprising the steps of:

acquiring video data from a camera that captures the surroundings of a vehicle;

acquiring sensing data including at least one of acceleration, speed, and angular velocity of the vehicle;

assigning time information indicating the time of acquisition to the video data and the sensing data, respectively;

recognizing the content of an event occurring in the vehicle based on the sensing data;

judging whether or not the recognized event is a predetermined event;

specifying, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition;

and transmitting the specified video data.

2. The information processing method according to claim 1,

in the recognition of the event, the content of the event is recognized by inputting the sensing data to a recognition model that is generated by machine learning and that takes the sensing data as input and outputs the content of the event.

3. The information processing method according to claim 1 or 2,

the predetermined event includes at least one of an event indicating dangerous driving, an event indicating a collision of the vehicle, an event indicating a maintenance condition of a road, and an event indicating a failure of the vehicle.

4. The information processing method according to any one of claims 1 to 3,

further determining whether or not the event has occurred based on the sensing data,

wherein, in the recognition of the event, when it is determined that the event has occurred, the content of the event occurring in the vehicle is recognized based on the sensing data.

5. The information processing method according to any one of claims 1 to 4,

the information processing system includes a portable terminal attachable to and detachable from the vehicle,

wherein, in the acquisition of the sensing data, the sensing data is acquired from the portable terminal mounted on the vehicle,

the posture of the portable terminal is also acquired,

and the acquired sensing data is corrected based on the acquired posture.

6. The information processing method according to any one of claims 1 to 5,

the information processing system includes a terminal device mounted on the vehicle and a server communicably connected to the terminal device,

the terminal device further transmits the acquired sensing data to the server,

the server further receives the sensing data,

in the recognition of the event, the server recognizes the content of an event occurring in the vehicle based on the sensing data,

the server further transmits the recognition result,

the terminal device further receives the recognition result transmitted by the server,

in the determination, the terminal device determines whether or not the recognized event indicated by the recognition result is a predetermined event.

7. An information processing apparatus characterized by comprising:

a video data acquisition unit that acquires video data from a camera that captures the surroundings of a vehicle;

a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle;

a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively;

a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data;

a determination unit configured to determine whether or not the recognized event is a predetermined event;

a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and

a transmitting unit configured to transmit the specified video data.

8. An information processing program for causing a computer to function as:

a video data acquisition unit that acquires video data from a camera that captures the surroundings of a vehicle;

a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle;

a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively;

a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data;

a determination unit configured to determine whether or not the recognized event is a predetermined event;

a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and

a transmitting unit configured to transmit the specified video data.

9. An information processing system comprising a terminal device mounted on a vehicle, a data analysis server communicably connected to the terminal device, and a video storage server communicably connected to the terminal device, wherein

the terminal device includes:

a video data acquisition unit that acquires video data from a camera that captures the surroundings of the vehicle;

a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle;

a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively; and

a sensing data transmitting unit that transmits the sensing data to the data analysis server when an event occurs in the vehicle,

the data analysis server includes:

a sensing data receiving unit that receives the sensing data transmitted by the terminal device;

a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data; and

a recognition result transmitting unit that transmits the recognition result,

the terminal device further includes:

a recognition result receiving unit that receives the recognition result transmitted by the data analysis server;

a determination unit configured to determine whether or not the recognized event indicated by the recognition result is a predetermined event;

a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and

a video data transmitting unit that transmits the specified video data to the video storage server,

the video storage server includes:

a video data receiving unit that receives the video data transmitted by the terminal device; and

a video data storage unit that stores the video data.

Technical Field

The present invention relates to a technique for processing sensing data acquired by a sensor provided in a vehicle.

Background

Conventionally, there is known a drive recorder that determines that a dangerous situation has occurred when the acceleration of a vehicle exceeds a predetermined threshold value, and stores video captured before and after the time when the dangerous situation occurred. However, the video stored by such a drive recorder may include scenes that were not actually dangerous.

For example, Patent Document 1 discloses an imaging device that inputs an image captured by an imaging unit to a neural network trained on images containing a processing target, determines, based on the output of the neural network, whether or not the processing target is included in the captured image, and transmits the captured image when it determines that the processing target is included.

However, in the above-described conventional art, the amount of image data handled by the imaging device is large, which increases the processing load of the imaging device, and further improvement is therefore required.

Documents of the prior art

Patent document

Patent Document 1: Japanese Patent Laid-Open Publication No. 2018-125777

Disclosure of Invention

The present invention has been made to solve the above-described problems, and an object of the present invention is to provide a technique capable of reducing the processing load of an information processing system.

An information processing method according to an embodiment of the present invention is an information processing method of an information processing system, including the steps of: acquiring video data from a camera that captures the surroundings of a vehicle; acquiring sensing data including at least one of acceleration, speed, and angular velocity of the vehicle; assigning time information indicating the time of acquisition to the video data and the sensing data, respectively; recognizing the content of an event occurring in the vehicle based on the sensing data; judging whether or not the recognized event is a predetermined event; specifying, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and transmitting the specified video data.

According to the present invention, since the content of an event occurring in a vehicle is recognized based on the sensing data, which has a smaller data amount than the video data, the processing load of the information processing system can be reduced. Further, since only the video data captured when the predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and data traffic and data communication costs can be reduced.

Drawings

Fig. 1 is a schematic diagram conceptually showing the overall configuration of an information processing system of the first embodiment.

Fig. 2 is a block diagram showing a configuration of the drive recorder according to the first embodiment.

Fig. 3 is a block diagram showing the configuration of the data analysis server according to the first embodiment.

Fig. 4 is a block diagram showing the configuration of the video storage server according to the first embodiment.

Fig. 5 is a first flowchart for explaining the processing of the information processing system of the first embodiment.

Fig. 6 is a second flowchart for explaining the processing of the information processing system of the first embodiment.

Fig. 7 is a schematic diagram showing the relationship between time and the accelerations in the X-axis, Y-axis, and Z-axis directions measured during sudden braking, which is one example of a dangerous driving event, in the first embodiment.

Fig. 8 is a schematic diagram showing the relationship between time and the accelerations in the X-axis, Y-axis, and Z-axis directions measured when the vehicle passes over a bump on the road in the first embodiment.

Fig. 9 is a schematic diagram conceptually showing the entire configuration of the information processing system of the second embodiment.

Fig. 10 is a block diagram showing the configuration of a mobile terminal according to the second embodiment.

Fig. 11 is a first flowchart for explaining the processing of the information processing system of the second embodiment.

Fig. 12 is a second flowchart for explaining the processing of the information processing system of the second embodiment.

Detailed Description

(basic knowledge of the invention)

Generally, the amount of image data is large. Therefore, in the above-described conventional technique, the amount of learning data required to achieve practical accuracy increases, and the learning time may also increase. In addition, when image data having a large amount of data is processed as input data, it is necessary to create a complicated model in order to achieve practical accuracy.

Therefore, in the above-described conventional technique, the amount of image data used by the imaging device is large, and the processing load of the imaging device increases.

In order to solve the above problem, an information processing method according to an embodiment of the present invention is an information processing method of an information processing system, including: acquiring video data from a camera that captures the surroundings of a vehicle; acquiring sensing data including at least one of acceleration, speed, and angular velocity of the vehicle; assigning time information indicating the time of acquisition to the video data and the sensing data, respectively; recognizing the content of an event occurring in the vehicle based on the sensing data; judging whether or not the recognized event is a predetermined event; specifying, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and transmitting the specified video data.

According to this configuration, since the content of the event occurring in the vehicle is recognized based on the sensing data, which has a smaller data amount than the video data, the processing load of the information processing system can be reduced. Further, since only the video data captured when the predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and data traffic and data communication costs can be reduced.

In the information processing method, in the recognition of the event, the content of the event may be recognized by inputting the sensing data to a recognition model that is generated by machine learning and that takes the sensing data as input and outputs the content of the event.

According to this configuration, the content of the event is recognized by inputting the sensing data to a recognition model generated by machine learning, which takes the sensing data as input and outputs the content of the event. When video data is used as the input data of the recognition model, the relatively large amount of video data means that the recognition model must be complicated in order to achieve practical accuracy, the amount of input data required for learning becomes large, and the learning time of the recognition model becomes long. On the other hand, when the sensing data is the input data of the recognition model, since the sensing data has a smaller data amount than the video data, the recognition model can be a simple model, the amount of input data required for learning can be reduced, and the learning time of the recognition model can be shortened.
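As an illustrative sketch only (the disclosure does not specify the model architecture, the event classes, or the window format), a recognition model that takes sensing-data windows as input and outputs the content of the event can be outlined as a minimal nearest-centroid classifier; all labels and sample values below are hypothetical:

```python
import math

def centroid(windows):
    """Element-wise mean of equal-length acceleration windows."""
    n = len(windows)
    return [sum(w[i] for w in windows) / n for i in range(len(windows[0]))]

def train(labeled_windows):
    """Build a nearest-centroid 'recognition model': label -> centroid."""
    return {label: centroid(ws) for label, ws in labeled_windows.items()}

def recognize(model, window):
    """Return the event label whose centroid is closest to the input window."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], window))

# Synthetic training data: sudden braking shows a strong deceleration spike,
# a road bump shows an oscillation; each window is 4 acceleration samples.
training = {
    "sudden_braking": [[0.0, -6.0, -8.0, -3.0], [0.0, -5.5, -7.5, -2.5]],
    "road_bump":      [[0.0,  4.0, -4.0,  3.0], [0.0,  3.5, -3.5,  2.5]],
}
model = train(training)
print(recognize(model, [0.0, -5.8, -7.9, -2.8]))  # a braking-like window
```

A per-sample window of a few scalars is orders of magnitude smaller than a video frame, which is the point the paragraph above makes about learning cost.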

In the information processing method, the predetermined event may include at least one of an event indicating dangerous driving, an event indicating a collision of the vehicle, an event indicating a maintenance condition of a road, and an event indicating a failure of the vehicle.

According to this configuration, at least one of the image data when an event indicating dangerous driving occurs, the image data when an event indicating a collision of the vehicle occurs, the image data when an event indicating a maintenance condition of the road occurs, and the image data when an event indicating a failure of the vehicle occurs can be stored.

In the information processing method, whether or not the event has occurred may be determined based on the sensing data, and in the recognition of the event, when it is determined that the event has occurred, the content of the event occurring in the vehicle may be recognized based on the sensing data.

According to this configuration, whether an event has occurred is first determined based on the sensing data. When it is determined that an event has occurred, the content of the event occurring in the vehicle is recognized based on the sensing data. Therefore, the content of the event is recognized not from all of the sensing data but only from the sensing data acquired when it is determined that the event has occurred, so the time required for the recognition processing can be shortened.
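The two-stage flow described above (a cheap occurrence check, followed by content recognition only when the check fires) can be sketched as follows; the threshold value, the sample format, and the stand-in recognizer are assumptions, and gravity is assumed to be already removed from the Z-axis values:

```python
THRESHOLD = 4.0  # m/s^2; illustrative value, not taken from the disclosure

def event_occurred(sample):
    """Stage 1: cheap check whether any axis exceeds the threshold."""
    ax, ay, az = sample
    return abs(ax) > THRESHOLD or abs(ay) > THRESHOLD or abs(az) > THRESHOLD

def process(stream, recognize):
    """Stage 2: run the recognizer only on samples that pass stage 1,
    so most samples never reach the recognition processing."""
    return [(t, recognize(s)) for t, s in stream if event_occurred(s)]

def toy_recognize(sample):
    """Stand-in for the server-side recognition model."""
    return "braking" if sample[1] < -THRESHOLD else "other"

# Timestamped (ax, ay, az) samples; only the middle one exceeds the threshold.
stream = [(0, (0.1, -0.2, 0.3)), (1, (0.3, -6.5, 0.4)), (2, (0.2, 0.1, 0.2))]
print(process(stream, toy_recognize))  # only t=1 is recognized
```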

In the information processing method, the information processing system may include a portable terminal that is attachable to and detachable from the vehicle; in the acquisition of the sensing data, the sensing data may be acquired from the portable terminal mounted on the vehicle, the posture of the portable terminal may be acquired, and the acquired sensing data may be corrected based on the acquired posture.

The posture of a portable terminal that is attachable to and detachable from the vehicle may change every time it is mounted on the vehicle. According to the above configuration, however, the posture of the portable terminal is acquired, and the acquired sensing data is corrected based on the acquired posture. Therefore, the portable terminal can be used to acquire the video data and the sensing data.
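The posture-based correction can be illustrated with a single-axis case: if the terminal is known to be pitched about the vehicle's left-right (X) axis, the measured acceleration can be rotated back into the vehicle frame. The frame conventions and the 30-degree example below are assumptions for illustration, not the embodiment's actual correction:

```python
import math

def correct_for_pitch(sample, pitch_rad):
    """Rotate a measured acceleration vector from the terminal's frame back
    into the vehicle's frame, assuming the terminal is tilted by pitch_rad
    about the vehicle's X (left-right) axis."""
    ax, ay, az = sample
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    # Inverse rotation about the X axis.
    return (ax, c * ay + s * az, -s * ay + c * az)

# A pure forward deceleration of -5 m/s^2 in the vehicle frame appears
# split across the Y and Z axes of a terminal pitched by 30 degrees.
pitch = math.radians(30)
measured = (0.0, -5.0 * math.cos(pitch), -5.0 * math.sin(pitch))
corrected = correct_for_pitch(measured, pitch)
print(tuple(round(v, 6) for v in corrected))
```

A full correction would use a complete rotation (roll, pitch, and yaw), but the one-axis case shows why uncorrected data from a tilted terminal would distort the thresholds and the recognition input.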

In the information processing method, the information processing system may include a terminal device mounted on the vehicle and a server communicably connected to the terminal device; the terminal device may further transmit the acquired sensing data to the server, the server may further receive the sensing data, the server may, in the recognition of the event, recognize the content of an event occurring in the vehicle based on the sensing data, the server may further transmit the recognition result, the terminal device may further receive the recognition result transmitted by the server, and in the determination, the terminal device may determine whether or not the recognized event indicated by the recognition result is a predetermined event.

According to this configuration, since the server on the network, rather than the terminal device mounted on the vehicle, recognizes the content of the event occurring in the vehicle based on the sensing data, the software necessary for the recognition processing can be updated easily, and the burden on the user can be reduced.

An information processing apparatus according to another embodiment of the present invention includes: a video data acquisition unit that acquires video data from a camera that captures the surroundings of a vehicle; a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle; a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively; a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data; a determination unit configured to determine whether or not the recognized event is a predetermined event; a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and a transmitting unit configured to transmit the specified video data.

According to this configuration, since the content of the event occurring in the vehicle is recognized based on the sensing data, which has a smaller data amount than the video data, the processing load of the information processing device can be reduced. Further, since only the video data captured when the predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and data traffic and data communication costs can be reduced.

An information processing program according to another embodiment of the present invention causes a computer to function as: a video data acquisition unit that acquires video data from a camera that captures the surroundings of a vehicle; a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle; a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively; a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data; a determination unit configured to determine whether or not the recognized event is a predetermined event; a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and a transmitting unit configured to transmit the specified video data.

According to this configuration, since the content of the event occurring in the vehicle is recognized based on the sensing data, which has a smaller data amount than the video data, the processing load of the computer can be reduced. Further, since only the video data captured when the predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and data traffic and data communication costs can be reduced.

An information processing system according to another embodiment of the present invention includes a terminal device mounted on a vehicle, a data analysis server communicably connected to the terminal device, and a video storage server communicably connected to the terminal device, wherein the terminal device includes: a video data acquisition unit that acquires video data from a camera that captures the surroundings of the vehicle; a sensing data acquisition unit that acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle; a time information providing unit configured to provide time information indicating the time of acquisition to the video data and the sensing data, respectively; and a sensing data transmitting unit that transmits the sensing data to the data analysis server when an event occurs in the vehicle; the data analysis server includes: a sensing data receiving unit that receives the sensing data transmitted by the terminal device; a recognition unit that recognizes the content of an event occurring in the vehicle based on the sensing data; and a recognition result transmitting unit configured to transmit the recognition result; the terminal device further includes: a recognition result receiving unit that receives the recognition result transmitted by the data analysis server; a determination unit configured to determine whether or not the recognized event indicated by the recognition result is a predetermined event; a specifying unit that specifies, when it is determined that the recognized event is the predetermined event, the video data to which the same time information is assigned as the time information assigned to the sensing data used for the recognition; and a video data transmitting unit that transmits the specified video data to the video storage server; and the video storage server includes: a video data receiving unit that receives the video data transmitted by the terminal device; and a video data storage unit that stores the video data.

According to this configuration, since the content of the event occurring in the vehicle is recognized based on the sensing data, which has a smaller data amount than the video data, the processing load of the information processing system can be reduced. Further, since only the video data captured when the predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and data traffic and data communication costs can be reduced.

Hereinafter, embodiments of the present invention will be described with reference to the drawings. The following embodiments are merely examples for embodying the present invention, and do not limit the technical scope of the present invention.

(first embodiment)

Fig. 1 is a schematic diagram conceptually showing the overall configuration of an information processing system of the first embodiment.

The information processing system shown in Fig. 1 includes a drive recorder 1, a data analysis server 2, and a video storage server 3. The drive recorder 1 is an example of a terminal device and is mounted on a vehicle. The data analysis server 2 is an example of a server and is communicably connected to the drive recorder 1 via a network 4. The network 4 is, for example, the Internet. The video storage server 3 is communicably connected to the drive recorder 1 via the network 4.

The drive recorder 1 acquires video data in which the traveling direction of the vehicle is captured. The drive recorder 1 acquires sensing data containing at least one of the acceleration, speed, and angular velocity of the vehicle. The drive recorder 1 assigns a time stamp to the video data and the sensing data, and stores the time-stamped video data and sensing data. When an event occurs, the drive recorder 1 transmits to the data analysis server 2 a set of sensing data spanning the interval from a predetermined time before the event occurrence time to a predetermined time after it. When the event that has occurred is determined to be a predetermined event, the drive recorder 1 transmits the stored set of video data to the video storage server 3.

The data analysis server 2 recognizes the content of an event occurring in the vehicle based on the sensing data received from the drive recorder 1. Specifically, the data analysis server 2 inputs the set of sensing data received from the drive recorder 1 to the recognition model generated by machine learning, and acquires the content of the event output from the recognition model as the recognition result. The data analysis server 2 transmits the recognition result to the drive recorder 1.

The video storage server 3 stores the video data transmitted by the drive recorder 1. Specifically, the video storage server 3 stores the set of video data captured when a predetermined event occurred.

The following describes the configuration of the drive recorder 1, the data analysis server 2, and the video storage server 3 in detail.

Fig. 2 is a block diagram showing a configuration of the drive recorder according to the first embodiment.

The drive recorder 1 is mounted on a front windshield or an instrument panel of a vehicle, for example. The drive recorder 1 includes a camera 11, an acceleration sensor 12, a position measuring unit 13, a processor 14, a memory 15, and a communication unit 16.

The camera 11 photographs the surroundings of the vehicle. Specifically, the camera 11 captures the front of the vehicle. The camera 11 may capture images of the left, right, and rear of the vehicle.

The acceleration sensor 12 is, for example, a 3-axis acceleration sensor, and measures accelerations in an X-axis direction indicating the left-right direction of the vehicle, a Y-axis direction indicating the front-rear direction of the vehicle, and a Z-axis direction indicating the vertical direction of the vehicle.

The position measurement unit 13 is, for example, a GPS (Global Positioning System) receiver, and measures the position information of the drive recorder 1 as the position information of the vehicle. The position information is expressed by latitude and longitude. The position information measured by the position measurement unit 13 is used to calculate the speed of the vehicle. For example, if the distance between a first position and a second position and the travel time from the first position to the second position are known, the average speed of the vehicle from the first position to the second position can be calculated.
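The speed calculation from two timestamped GPS fixes can be sketched with the haversine great-circle distance; the coordinates, timestamps, and the mean Earth radius below are illustrative values, not taken from the disclosure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speed_mps(pos1, t1, pos2, t2):
    """Average speed between two timestamped (lat, lon) fixes, in m/s."""
    d = haversine_m(pos1[0], pos1[1], pos2[0], pos2[1])
    return d / (t2 - t1)

# Two fixes one second apart, roughly 14 m apart in latitude: about 52 km/h.
v = speed_mps((35.68110, 139.76710), 0.0, (35.68123, 139.76710), 1.0)
print(round(v * 3.6, 1), "km/h")
```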

The processor 14 is, for example, a CPU (Central Processing Unit), and includes a video data acquisition unit 141, a sensing data acquisition unit 142, a time stamp assigning unit 143, an event occurrence detection unit 144, an event determination unit 145, and a video data specifying unit 146.

The memory 15 is, for example, a semiconductor memory, and includes a video data storage unit 151 and a sensing data storage unit 152.

The video data acquisition unit 141 acquires video data from the camera 11.

The sensing data acquisition unit 142 acquires sensing data including at least one of acceleration, speed, and angular velocity of the vehicle. The sensing data acquisition unit 142 acquires the acceleration of the vehicle from the acceleration sensor 12. Further, the sensing data acquisition unit 142 acquires the position information of the vehicle from the position measurement unit 13 and calculates the speed of the vehicle based on the acquired position information.

In the first embodiment, the drive recorder 1 may include a gyro sensor for measuring an angular velocity. The sensing data acquisition unit 142 may acquire an angular velocity from a gyro sensor.

In the first embodiment, the sensing data acquisition unit 142 acquires the acceleration and the speed of the vehicle, but the present invention is not particularly limited thereto; the sensing data acquisition unit 142 may acquire only the acceleration of the vehicle. The sensing data acquisition unit 142 may acquire any one of the accelerations in the X-axis, Y-axis, and Z-axis directions, but preferably acquires all three.

In the first embodiment, the sensing data acquisition unit 142 calculates the speed of the vehicle from the position information of the vehicle acquired from the position measurement unit 13, but the present invention is not particularly limited thereto, and the speed may be acquired directly from the vehicle. In this case, the drive recorder 1 may not include the position measuring unit 13.

The time stamp assigning unit 143 assigns time stamps (time information) indicating the time of acquisition to the acquired video data and sensing data. The time stamp assigning unit 143 stores the time-stamped video data in the video data storage unit 151 and the time-stamped sensing data in the sensing data storage unit 152.
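The role of the time stamps (assigning the same acquisition time to both kinds of data so that video can later be specified from the sensing data used for recognition) can be sketched as follows; the dictionary stores, epoch-second timestamps, and file names are simplifications, not the embodiment's actual storage format:

```python
# Stand-ins for the video data storage unit and the sensing data storage unit.
video_store = {}    # timestamp -> video frame/segment
sensing_store = {}  # timestamp -> sensing sample

def store(timestamp, video, sensing):
    """Assign the same acquisition timestamp to both kinds of data."""
    video_store[timestamp] = video
    sensing_store[timestamp] = sensing

def video_for_sensing(timestamps):
    """Specify the video data carrying the same time information as the
    sensing data that was used for recognition."""
    return [video_store[t] for t in timestamps if t in video_store]

store(100, "frame_100.jpg", (0.1, -0.2, 0.3))
store(101, "frame_101.jpg", (0.2, -6.0, 0.4))
print(video_for_sensing([101]))  # the frame acquired at the same time
```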

The video data storage unit 151 stores the video data acquired by the video data acquisition unit 141 and provided with a time stamp by the time stamp providing unit 143.

The sensing data storage unit 152 stores the sensing data acquired by the sensing data acquisition unit 142 and provided with a time stamp by the time stamp providing unit 143.

The event occurrence detection unit 144 determines whether an event has occurred based on the sensed data acquired by the sensed data acquisition unit 142. Specifically, the event occurrence detection unit 144 determines that an event has occurred when at least one of the X-axis acceleration, the Y-axis acceleration, and the Z-axis acceleration exceeds a threshold value.
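
The threshold check performed by the event occurrence detection unit 144 can be sketched minimally as follows. The threshold value of 0.4 g and the function name are illustrative assumptions, not values taken from the embodiment:

```python
# Assumed trigger threshold in units of g (not specified in the embodiment).
THRESHOLD_G = 0.4

def event_occurred(accel_x: float, accel_y: float, accel_z: float) -> bool:
    """Return True when at least one axis acceleration exceeds the threshold."""
    return any(abs(a) > THRESHOLD_G for a in (accel_x, accel_y, accel_z))
```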

When the event occurrence detection unit 144 determines that an event has occurred, the communication unit 16 transmits the sensing data to the data analysis server 2. At this time, the communication unit 16 transmits, to the data analysis server 2, the sensing data set acquired during a predetermined period defined with reference to the time when the event occurred. More specifically, the communication unit 16 transmits the sensing data set covering a first period extending a predetermined time before the time when the event occurred and a second period extending a predetermined time after that time. For example, the communication unit 16 transmits the sensing data set covering a first period of 10 seconds before the time when the event occurred and a second period of 5 seconds after that time.
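
Selecting the sensing data set for the period around the event time might be sketched as follows. The representation of a sample as a dict with a time stamp field "t" is an assumption for illustration:

```python
def select_window(samples, event_time, before_s=10.0, after_s=5.0):
    """Return the sensing data set spanning `before_s` seconds before and
    `after_s` seconds after the event time (10 s and 5 s in the example)."""
    return [s for s in samples
            if event_time - before_s <= s["t"] <= event_time + after_s]
```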

The communication unit 16 receives the recognition result of the content of the event transmitted from the data analysis server 2.

The event determination unit 145 determines whether or not the event identified by the data analysis server 2 is a predetermined event. The event determination unit 145 determines whether or not the event indicated by the recognition result received by the communication unit 16 is a predetermined event. Here, the predetermined event includes at least one of an event indicating dangerous driving, an event indicating a collision of the vehicle, an event indicating a maintenance condition of the road, and an event indicating a failure of the vehicle.

When the recognized event is determined to be a predetermined event, the video data specifying unit 146 specifies the video data to which the same time stamp (time information) as that assigned to the sensing data used for recognition is assigned. That is, when the event determination unit 145 determines that the event that occurred is a predetermined event, the video data specifying unit 146 specifies the video data set to which the same time stamps as those assigned to the sensing data set used for recognition are assigned.

The communication unit 16 transmits the video data specified by the video data specifying unit 146 to the video storage server 3. That is, the communication unit 16 transmits the video data set specified by the video data specifying unit 146 to the video storage server 3.
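
The time-stamp matching performed by the video data specifying unit 146 could look like the following sketch. Representing stored video frames as a dict keyed by time stamp is an assumption for illustration:

```python
# Illustrative sketch: pick out the video data sharing time stamps with
# the sensing data set used for recognition.
def match_video(video_index: dict, sensing_timestamps: list) -> list:
    """Return the video frames whose time stamps appear in the sensing set."""
    return [video_index[t] for t in sensing_timestamps if t in video_index]
```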

Fig. 3 is a block diagram showing the configuration of the data analysis server according to the first embodiment.

The data analysis server 2 includes a communication unit 21, a processor 22, and a memory 23.

The communication unit 21 receives the sensor data transmitted by the drive recorder 1. That is, the communication unit 21 receives the sensing data group transmitted by the drive recorder 1.

The processor 22 is, for example, a CPU and includes an event recognition unit 221.

The memory 23 is, for example, a semiconductor memory or a hard disk drive, and includes a recognition model storage unit 231.

The recognition model storage unit 231 stores in advance a recognition model generated by machine learning, with the sensing data as input and the contents of the event as output. In addition, for example, deep learning using a multilayer neural network is adopted as machine learning.

The event recognition unit 221 recognizes the content of an event occurring in the vehicle based on the sensed data. The event recognition unit 221 recognizes the content of the event by inputting the sensing data to the recognition model stored in the recognition model storage unit 231. The event recognition unit 221 inputs the sensing data set to the recognition model, and acquires the content of the event output from the recognition model as a recognition result.
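
The embodiment's recognition model is machine-learned; as an illustrative stand-in that keeps the same input/output contract (sensing data set in, event content out), the hand-written rules below loosely follow the waveforms of figs. 7 and 8. The labels, thresholds, and rules are assumptions, not the actual model:

```python
# Toy stand-in for the recognition model: per-axis acceleration sequences
# in, event label out. The rules ignore the Z axis for brevity.
def recognize_event(xs, ys, zs) -> str:
    peak_x = max(abs(v) for v in xs)
    peak_y = max(abs(v) for v in ys)
    if peak_x > 1.0 and peak_y > 1.0:
        return "collision"
    if peak_y > 0.6:
        return "sudden braking"   # cf. fig. 7
    return "normal"               # e.g. passing over a bump, cf. fig. 8
```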

In the first embodiment, the event recognition unit 221 recognizes the content of the event by inputting the sensing data set to the machine-learned recognition model, but the present invention is not limited to this; the memory 23 may store the content of each event and waveform data representing the waveform of the corresponding sensing data set in association with each other. In this case, the event recognition unit 221 may perform pattern matching between the waveform of the received sensing data set and the waveform data stored in the memory 23, and acquire the content of the event corresponding to the matched waveform data as the recognition result.
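
A minimal sketch of this pattern-matching alternative follows, picking the stored template waveform closest to the received waveform by mean squared distance. The distance measure and template format are assumptions for illustration:

```python
def match_event(waveform, templates):
    """templates: dict mapping event content -> reference waveform of the
    same length as `waveform`. Return the event with the closest template."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(templates, key=lambda name: mse(waveform, templates[name]))
```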

In the first embodiment, the recognition model storage unit 231 may store a plurality of recognition models according to the contents of the recognized events. For example, the recognition model storage unit 231 may store a first recognition model for recognizing a dangerous driving event and a second recognition model for recognizing a collision event.

The communication unit 21 transmits the recognition result of the event recognition unit 221 to the drive recorder 1.

Fig. 4 is a block diagram showing the configuration of the video storage server according to the first embodiment.

The video storage server 3 includes a communication unit 31, a processor 32, and a storage unit 33.

The communication unit 31 receives the video data transmitted from the drive recorder 1. The video data transmitted by the drive recorder 1 is video data acquired with reference to the time when a predetermined event occurs.

The processor 32 is, for example, a CPU, and includes a video data management unit 321.

The video data management unit 321 stores the video data received through the communication unit 31 in the video data storage unit 331. That is, the video data management unit 321 stores a video data set acquired based on the time when the predetermined event has occurred in the video data storage unit 331.

The storage unit 33 is, for example, a semiconductor memory or a hard disk drive, and includes a video data storage unit 331.

The video data storage unit 331 stores video data. The video data stored in the video data storage unit 331 is video data acquired with reference to the time when a predetermined event occurs. The video data is used for driving evaluation, dynamic management, assistance for safe driving, and the like.

Fig. 5 is a first flowchart for explaining the processing of the information processing system of the first embodiment, and fig. 6 is a second flowchart for explaining the processing of the information processing system of the first embodiment.

First, in step S1, the video data acquisition unit 141 of the drive recorder 1 acquires video data from the camera 11.

Next, in step S2, the sensing data acquisition unit 142 acquires sensing data including the acceleration and the speed of the vehicle. At this time, the sensing data acquisition unit 142 acquires the acceleration in the X-axis direction, the acceleration in the Y-axis direction, and the acceleration in the Z-axis direction from the acceleration sensor 12. Then, the sensing data acquisition unit 142 acquires the position of the vehicle from the position measurement unit 13, calculates the distance between the position acquired this time and the position acquired the previous time, and calculates the speed of the vehicle by dividing the calculated distance by the acquisition time interval.
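
The speed calculation from two successive position fixes can be sketched as follows. The embodiment does not specify the distance formula, so the use of the haversine great-circle distance here is an assumption:

```python
import math

def speed_from_positions(lat1, lon1, lat2, lon2, interval_s):
    """Estimate vehicle speed (m/s) from two position fixes taken
    `interval_s` seconds apart, using the haversine distance."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance = 2 * r * math.asin(math.sqrt(a))
    return distance / interval_s
```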

Next, in step S3, the time stamp providing unit 143 provides the video data acquired by the video data acquisition unit 141 and the sensing data acquired by the sensing data acquisition unit 142 with time stamps indicating the time of acquisition.

Next, in step S4, the time stamp providing unit 143 stores the video data to which the time stamp is provided in the video data storage unit 151.

Next, in step S5, the time stamp providing unit 143 stores the sensing data to which the time stamp is provided in the sensing data storage unit 152.

Next, in step S6, the event occurrence detection unit 144 determines whether an event has occurred based on the sensing data acquired by the sensing data acquisition unit 142. At this time, the event occurrence detection unit 144 determines whether or not at least one of the acceleration in the X-axis direction, the acceleration in the Y-axis direction, and the acceleration in the Z-axis direction exceeds a threshold value. The event occurrence detection unit 144 determines that no event has occurred when none of the accelerations in the X-axis, Y-axis, and Z-axis directions exceeds the threshold value, and determines that an event has occurred when at least one of them exceeds the threshold value.

Here, if it is determined that no event has occurred (no in step S6), the processing returns to step S1.

On the other hand, when determining that an event has occurred (yes in step S6), in step S7, the event occurrence detection unit 144 determines whether or not video data and sensing data have been acquired for a predetermined period. Here, the predetermined period combines a first period extending a predetermined time before the time when the event occurred and a second period extending a predetermined time after that time. For example, the event occurrence detection unit 144 determines whether or not video data and sensing data have been acquired for the 15 seconds that combine a first period of 10 seconds before the time when the event occurred and a second period of 5 seconds after that time.

Here, if it is determined that the video data and the sensor data are not acquired for the predetermined period (no in step S7), the process returns to step S1.

On the other hand, when determining that the video data and the sensing data have been acquired for the predetermined period (yes in step S7), the communication unit 16 transmits the sensing data set acquired during the predetermined period to the data analysis server 2 in step S8.

Next, in step S9, the communication unit 21 of the data analysis server 2 receives the sensing data group transmitted by the drive recorder 1.

Next, in step S10, the event recognition unit 221 recognizes the content of the event occurring in the vehicle based on the sensing data set received by the communication unit 21. At this time, the event recognition unit 221 inputs the sensing data set to the recognition model stored in the recognition model storage unit 231, and acquires the content of the event output from the recognition model as a recognition result.

Next, in step S11, the communication unit 21 transmits the result of recognition of the event content by the event recognition unit 221 to the drive recorder 1.

Next, in step S12, the communication unit 16 of the drive recorder 1 receives the recognition result of the event content transmitted from the data analysis server 2.

Next, in step S13, the event determination unit 145 determines whether or not the identified event is a predetermined event based on the identification result received by the communication unit 16. Here, the predetermined event differs depending on the kind of event for which the video storage server 3 is to collect video data.

For example, if the predetermined event is an event indicating dangerous driving, the event determination unit 145 determines whether or not the identified event is any one of an event indicating sudden braking, an event indicating sharp turning, and an event indicating sudden acceleration, which are events indicating dangerous driving.

For example, if the predetermined event is an event indicating a collision, the event determination unit 145 determines whether or not the identified event is an event indicating a collision.

Further, for example, if the predetermined event is an event indicating a maintenance condition of a road, the event determination unit 145 determines whether or not the identified event is an event indicating a bad road condition. When the predetermined event is an event indicating a maintenance condition of the road, it is preferable that the sensing data include position information. This makes it possible to identify the position where the road condition is poor.

Further, for example, if the predetermined event is an event indicating a failure of the vehicle, the event determination unit 145 determines whether or not the identified event is an event indicating a failure of the vehicle. A failure of the vehicle is, for example, a state in which the air pressure of the tires of the vehicle is low. In this case, the memory 23 may store in advance the positions of bumps on the road that other vehicles are recognized, based on their sensing data, to have passed over. The event recognition unit 221 may then determine that the tire air pressure of the vehicle is low when the vehicle is recognized as having passed over a bump on the road at a position where other vehicles did not register a bump.

Here, if it is determined that the recognized event is not the predetermined event (no in step S13), the process returns to step S1.

On the other hand, when it is determined that the recognized event is the predetermined event (yes in step S13), in step S14, the video data specifying unit 146 specifies, from the video data storage unit 151, the video data set to which the same time stamps as those assigned to the sensing data set used for recognition are assigned.

Next, in step S15, the communication unit 16 transmits the video data set specified by the video data specifying unit 146 and the sensor data set used for the recognition to the video storage server 3.

Next, in step S16, the communication unit 31 of the video storage server 3 receives the video data set and the sensor data set transmitted by the drive recorder 1.

Next, in step S17, the video data management unit 321 stores the video data set and the sensor data set received by the communication unit 31 in the video data storage unit 331.

According to the first embodiment, since the content of an event occurring in the vehicle is identified based on sensing data, which has a smaller data amount than video data, the processing load of the information processing system can be reduced. Further, since only the video data captured when a predetermined event occurs is transmitted, transmission of unnecessary video data can be prevented, and the data traffic and data communication cost can be reduced.

In the first embodiment, the information processing system includes the data analysis server 2 and the video storage server 3, but the present invention is not particularly limited thereto, and the system may include one server having the functions of both the data analysis server 2 and the video storage server 3. In this case, since the server has already acquired the sensing data set, the communication unit 16 of the drive recorder 1 transmits only the video data set in step S15. The communication unit 31 receives the video data set transmitted from the drive recorder 1, and the video data management unit 321 stores the received video data set and the sensing data set used for recognition in the video data storage unit 331.

In the first embodiment, the communication unit 16 of the drive recorder 1 transmits the video data set and the sensor data set to the video storage server 3, but the present invention is not particularly limited to this, and the event content, which is the recognition result of the event, may be transmitted to the video storage server 3 in addition to the video data set and the sensor data set. The video data management unit 321 of the video storage server 3 may store the video data set, the sensor data set, and the event recognition result received by the communication unit 31 in the video data storage unit 331.

Next, the event recognition in the first embodiment will be further described.

Fig. 7 is a schematic diagram showing the relationship, in the first embodiment, between time and the accelerations in the X-axis, Y-axis, and Z-axis directions measured when sudden braking, one of the dangerous driving events, is performed. In fig. 7, the vertical axis represents acceleration (G) and the horizontal axis represents time (sec). The solid line represents the acceleration in the X-axis direction, the broken line the acceleration in the Y-axis direction, and the dash-dot line the acceleration in the Z-axis direction. Time 0 (sec) represents the time at which it is determined that an event has occurred.

As shown in fig. 7, when sudden braking is performed, it is determined that the acceleration in the Y-axis direction exceeds the threshold value, and the sensing data group is transmitted to the data analysis server 2. As shown in fig. 7, when the sensing data set is input to the recognition model, it is recognized that a sudden braking event has occurred.

Fig. 8 is a schematic diagram showing the relationship, in the first embodiment, between time and the accelerations in the X-axis, Y-axis, and Z-axis directions measured when the vehicle passes over a bump on the road. In fig. 8, the vertical axis represents acceleration (G) and the horizontal axis represents time (sec). The solid line represents the acceleration in the X-axis direction, the broken line the acceleration in the Y-axis direction, and the dash-dot line the acceleration in the Z-axis direction. Time 0 (sec) represents the time at which it is determined that an event has occurred.

As shown in fig. 8, when the vehicle passes over a bump on the road, it is determined that the acceleration in the X-axis direction exceeds the threshold value, and the sensing data set is transmitted to the data analysis server 2. When this sensing data set is input to the recognition model, it is recognized that a normal event has occurred. The image captured when the vehicle passes over a bump on the road is not an image that should be stored in the video storage server 3. For this reason, when a sensing data set indicating that the vehicle has passed over a bump on the road is input to the recognition model, a normal event is recognized. A normal event is not a predetermined event. Therefore, the event determination unit 145 determines that the event is not a predetermined event when the event indicated by the recognition result received by the communication unit 16 is a normal event.

When the vehicle travels on a circular road, the steering wheel is kept turned in one direction, so it may be determined that the acceleration in the X-axis direction exceeds the threshold value, and the sensing data set is transmitted to the data analysis server 2. Even in this case, when the sensing data set indicating that the vehicle is traveling on a circular road is input to the recognition model, it can be recognized that a normal event has occurred.

Then, in the case of a vehicle collision, it is determined that the acceleration in the X-axis direction and the acceleration in the Y-axis direction exceed the threshold values, and the sensing data group is transmitted to the data analysis server 2. When the sensing data set is input to the recognition model, it is recognized that a collision event has occurred.

In the first embodiment, the event recognition unit 221 can recognize not only the occurrence of a collision event but also the degree of the collision. That is, the event recognition unit 221 may recognize the degree of the collision as one of "large", "medium", and "small" from the maximum values of the acceleration in the X-axis, Y-axis, and Z-axis directions.
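
The three-level classification of collision degree from the maximum acceleration might be sketched as below; the boundary values are illustrative assumptions not given in the embodiment:

```python
def collision_degree(peak_g: float) -> str:
    """Classify collision severity from the maximum absolute acceleration (g).
    The 3.0 g and 1.5 g boundaries are assumed values for illustration."""
    if peak_g >= 3.0:
        return "large"
    if peak_g >= 1.5:
        return "medium"
    return "small"
```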


In the first embodiment, the sensed data includes 3-axis acceleration and speed, but the present invention is not particularly limited to this, and may include data indicating the presence or absence of braking, data indicating the presence or absence of a right turn signal, and data indicating the presence or absence of a left turn signal.

Further, the information processing system according to the first embodiment includes the drive recorder 1, the data analysis server 2, and the video storage server 3, but the present invention is not particularly limited thereto, and only the drive recorder 1 and the video storage server 3 may be provided without the data analysis server 2. In this case, the drive recorder 1 may include an event recognition unit and a recognition model storage unit of the data analysis server 2. In this case, when the event occurrence detection unit 144 determines that an event has occurred, the event recognition unit recognizes the content of the event occurring in the vehicle based on the sensed data.

(second embodiment)

Whereas the information processing system of the first embodiment includes the drive recorder 1 fixed in advance to the vehicle, the information processing system of the second embodiment includes, instead of the drive recorder 1, a portable terminal that is detachable from the vehicle.

Fig. 9 is a schematic diagram conceptually showing the entire configuration of the information processing system of the second embodiment.

The information processing system shown in fig. 9 includes a mobile terminal 1A, a data analysis server 2, and a video storage server 3. The portable terminal 1A is an example of a terminal device, and is attachable to and detachable from a vehicle. The data analysis server 2 is communicably connected to the mobile terminal 1A via the network 4. The video storage server 3 is communicably connected to the mobile terminal 1A via the network 4. In the second embodiment, the same components as those in the first embodiment are denoted by the same reference numerals, and descriptions thereof are omitted.

Since the drive recorder 1 of the first embodiment is fixed to the vehicle, the posture of the drive recorder 1 with respect to the vehicle does not change. On the other hand, since the portable terminal 1A of the second embodiment is detachable, its posture with respect to the vehicle may change every time it is mounted on the vehicle. Therefore, the portable terminal 1A acquires its own posture and corrects the acquired sensing data based on that posture.

Fig. 10 is a block diagram showing the configuration of a mobile terminal according to the second embodiment.

The portable terminal 1A is, for example, a smartphone. The portable terminal 1A is mounted on, for example, a dashboard of a vehicle. The mobile terminal 1A includes a camera 11, an acceleration sensor 12, a position measuring unit 13, a processor 14A, a memory 15, a communication unit 16, and a gyro sensor 17.

The gyro sensor 17 detects the posture of the portable terminal 1A. The posture of the mobile terminal 1A is represented by the angular velocity of the mobile terminal 1A. The gyro sensor 17 is, for example, a 3-axis gyro sensor, and detects an angular velocity around the X axis, an angular velocity around the Y axis, and an angular velocity around the Z axis of the portable terminal 1A.

The processor 14A is, for example, a CPU, and includes a video data acquisition unit 141, a sensing data acquisition unit 142, a time stamp providing unit 143, an event occurrence detection unit 144, an event determination unit 145, a video data specifying unit 146, and a preprocessing unit 147.

The preprocessing unit 147 acquires the posture of the portable terminal 1A from the gyro sensor 17. That is, the preprocessing unit 147 acquires the angular velocity of the portable terminal 1A about the X axis, about the Y axis, and about the Z axis from the gyro sensor 17. The preprocessing unit 147 corrects the acquired sensing data based on the acquired posture. The portable terminal 1A has a reference mounting posture. Based on the acquired angular velocities about the X, Y, and Z axes, the preprocessing unit 147 calculates how far the portable terminal 1A is tilted about each of the X, Y, and Z axes relative to the reference posture. The preprocessing unit 147 then corrects the accelerations in the X-axis, Y-axis, and Z-axis directions detected by the acceleration sensor 12 based on the calculated tilt angles.
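
The tilt correction can be sketched as rotating the measured acceleration vector back into the vehicle frame. For brevity this sketch uses only roll and pitch angles and a fixed rotation order; the derivation of the angles from integrated angular velocity is omitted, and all names are assumptions:

```python
import math

def correct_acceleration(ax, ay, az, roll, pitch):
    """Rotate the measured acceleration back into the vehicle frame, given
    roll (about X) and pitch (about Y) tilt angles in radians. Yaw omitted."""
    # undo pitch (rotation about the Y axis)
    ax1 = ax * math.cos(pitch) + az * math.sin(pitch)
    az1 = -ax * math.sin(pitch) + az * math.cos(pitch)
    # undo roll (rotation about the X axis)
    ay1 = ay * math.cos(roll) + az1 * math.sin(roll)
    az2 = -ay * math.sin(roll) + az1 * math.cos(roll)
    return ax1, ay1, az2
```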

The time stamp providing unit 143 provides the video data acquired by the video data acquisition unit 141 and the sensing data corrected by the preprocessing unit 147 with time stamps indicating the time of acquisition.

Fig. 11 is a first flowchart for explaining the processing of the information processing system of the second embodiment, and fig. 12 is a second flowchart for explaining the processing of the information processing system of the second embodiment.

The processing of steps S21 and S22 shown in fig. 11 is the same as the processing of steps S1 and S2 shown in fig. 5, and therefore, the description thereof is omitted.

Next, in step S23, the preprocessing unit 147 of the portable terminal 1A acquires the posture of the portable terminal 1A from the gyro sensor 17. At this time, the preprocessing unit 147 acquires the angular velocity of the portable terminal 1A about the X axis, about the Y axis, and about the Z axis from the gyro sensor 17.

Next, in step S24, the preprocessing unit 147 corrects the sensing data acquired by the sensing data acquisition unit 142 based on the acquired posture of the portable terminal 1A. Specifically, the preprocessing unit 147 corrects the accelerations in the X-axis, Y-axis, and Z-axis directions acquired by the sensing data acquisition unit 142 based on the acquired angular velocities of the portable terminal 1A about the X, Y, and Z axes.

Next, in step S25, the time stamp providing unit 143 provides the video data acquired by the video data acquisition unit 141 and the sensing data corrected by the preprocessing unit 147 with time stamps indicating the time of acquisition.

The processing of step S26 shown in fig. 11 to step S39 shown in fig. 12 is the same as the processing of step S4 shown in fig. 5 to step S17 shown in fig. 6, and therefore, the description thereof is omitted.

In the present embodiment, the portable terminal 1A acquires the posture of the portable terminal 1A and corrects the sensing data each time the sensing data is acquired, but the present invention is not particularly limited thereto. The portable terminal 1A may acquire the posture of the portable terminal 1A once, before the sensing data is first acquired, and then use that initially acquired posture to correct the sensing data each time the sensing data is acquired.

In the above embodiments, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component. Each component may be realized by causing a program execution unit such as a CPU or a processor to read out and execute a software program stored in a storage medium such as a hard disk or a semiconductor memory.

A part or all of the functions of the apparatus according to the embodiments of the present invention may typically be realized as an LSI (Large Scale Integration) integrated circuit. These functions may be individually formed into chips, or a single chip may be formed so as to include some or all of them. The integrated circuit is not limited to an LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.

In addition, a part or all of the functions of the apparatus according to the embodiment of the present invention may be realized by causing a processor such as a CPU to execute a program.

Further, the numbers used in the above are examples given for specifically explaining the present invention, and the present invention is not limited to these exemplified numbers.

The order in which the steps shown in the flowcharts are executed is merely an example given for specifically explaining the present invention, and may be an order other than the above, as long as the same effects can be obtained. Moreover, some of the above steps may be performed concurrently with (or in parallel with) other steps.

Industrial applicability

The technique according to the present invention can reduce the processing load of the information processing system, and is useful as a technique for processing sensor data acquired by a sensor provided in a vehicle.
