Data processing method and apparatus, electronic device, and storage medium

Document No. 923965 · Published: 2021-03-02

Reading note: This invention, "Data processing method and apparatus, electronic device, and storage medium" (数据处理方法、装置、电子设备及存储介质), was designed and created by Jiang Hang (江航) on 2020-11-23. Its main content is as follows: the application provides a data processing method, an apparatus, an electronic device, and a storage medium. The method includes: receiving a video file sent by a video acquisition device, where positioning information is carried in the private information of the video encapsulation of the video file; the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and includes position information obtained through positioning and the time at which that position information was obtained; extracting the positioning information from the video file, storing it, and recording the correspondence between the positioning information and the video file; and displaying the video file based on the positioning information. The method provides data support for video retrieval based on position information.

1. A data processing method, comprising:

receiving a video file sent by a video acquisition device, wherein positioning information is carried in the private information of the video encapsulation of the video file, the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and the positioning information comprises position information obtained through positioning and time information indicating when the position information was obtained;

extracting the positioning information from the video file, storing the positioning information, and recording the correspondence between the positioning information and the video file;

and displaying the video file based on the positioning information.

2. The method of claim 1, wherein the displaying the video file based on the positioning information comprises:

displaying video file information in a target area of a video file display interface based on the positioning information corresponding to the video file, wherein the target area is associated with the geographic position represented by the positioning information, and the video file information comprises a video file icon and the acquisition start time or acquisition end time of the video file.

3. The method according to claim 1 or 2, wherein the displaying the video file based on the positioning information comprises:

when a selection instruction for a target video file is received, displaying, in a video file display interface, the motion track of the process of recording the target video file based on the positioning information corresponding to the target video file.

4. The method according to claim 3, wherein after displaying the motion track of the process of recording the target video file in the video file display interface, the method further comprises:

and when a selection instruction for a target track point in the motion track is detected, jumping the playback position of the target video file to the time corresponding to the target track point, wherein the target track point is any point on the motion track.

5. A data processing apparatus, comprising:

the receiving unit is used for receiving a video file sent by a video acquisition device, wherein positioning information is carried in the private information of the video encapsulation of the video file, the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and the positioning information comprises position information obtained through positioning and time information indicating when the position information was obtained;

the extraction unit is used for extracting the positioning information in the video file;

the storage unit is used for storing the positioning information and recording the correspondence between the positioning information and the video file;

and the display unit is used for displaying the video file based on the positioning information.

6. The apparatus of claim 5,

the display unit is specifically used for displaying video file information in a target area of a video file display interface based on the positioning information corresponding to the video file, wherein the target area is associated with the geographic position represented by the positioning information, and the video file information comprises a video file icon and the acquisition start time or acquisition end time of the video file.

7. The apparatus of claim 5 or 6,

the display unit is further used for displaying, in a video file display interface, the motion track of the process of recording a target video file based on the positioning information corresponding to the target video file when a selection instruction for the target video file is received.

8. The apparatus of claim 7, further comprising:

and the control unit is used for jumping the playback position of the target video file to the time corresponding to a target track point when a selection instruction for the target track point in the motion track is detected, wherein the target track point is any point on the motion track.

9. An electronic device, comprising:

a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the method steps of any one of claims 1-4.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.

Technical Field

The present application relates to the field of video retrieval technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.

Background

In current video retrieval schemes, video files are typically presented as a list sorted by time, and a retrieval person performs video retrieval based on time.

Practice shows that under such a scheme, when a video file associated with an event needs to be queried, all video files within the time window of the event must be reviewed to find those associated with it, which makes retrieval inefficient.

Disclosure of Invention

In view of the above, the present application provides a data processing method, an apparatus, an electronic device and a storage medium.

Specifically, the method is realized through the following technical scheme:

according to a first aspect of embodiments of the present application, there is provided a data processing method, including:

receiving a video file sent by a video acquisition device, wherein positioning information is carried in the private information of the video encapsulation of the video file, the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and the positioning information comprises position information obtained through positioning and time information indicating when the position information was obtained;

extracting the positioning information from the video file, storing the positioning information, and recording the correspondence between the positioning information and the video file;

and displaying the video file based on the positioning information.

According to a second aspect of embodiments of the present application, there is provided a data processing apparatus comprising:

the receiving unit is used for receiving a video file sent by a video acquisition device, wherein positioning information is carried in the private information of the video encapsulation of the video file, the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and the positioning information comprises position information obtained through positioning and time information indicating when the position information was obtained;

the extraction unit is used for extracting the positioning information in the video file;

the storage unit is used for storing the positioning information and recording the correspondence between the positioning information and the video file;

and the display unit is used for displaying the video file based on the positioning information.

According to a third aspect of embodiments of the present application, there is provided an electronic apparatus including:

a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the data processing method of the first aspect.

According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the data processing method of the first aspect.

The data processing method of the embodiments of the application receives a video file sent by a video acquisition device, where the private information of the video encapsulation of the video file carries positioning information; the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and includes position information obtained through positioning and the time at which that position information was obtained. The method extracts the positioning information from the video file, stores it, records the correspondence between the positioning information and the video file, and displays the video file based on the positioning information, thereby providing data support for video retrieval based on position information and improving the efficiency of retrieving video files associated with an event.

Drawings

Fig. 1 is a schematic flow chart diagram illustrating a data processing method according to an exemplary embodiment of the present application;

FIG. 2 is a diagram illustrating a video file recording a list of anchor points during an entire video capture process according to an exemplary embodiment of the present application;

FIG. 3 is a diagram illustrating a backend device storing a video file according to an exemplary embodiment of the present application;

FIG. 4 is a schematic diagram of a video file presentation interface shown in an exemplary embodiment of the present application;

fig. 5 is a schematic diagram illustrating a method for controlling a play time jump of a corresponding video file according to an exemplary embodiment of the present application;

FIG. 6 is a block diagram of a data processing apparatus according to an exemplary embodiment of the present application;

FIG. 7 is a block diagram of another data processing apparatus according to an exemplary embodiment of the present application;

fig. 8 is a schematic diagram of a hardware structure of the apparatus shown in fig. 6 or fig. 7 according to an exemplary embodiment of the present application.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.

Referring to fig. 1, a schematic flow chart of a data processing method according to an embodiment of the present disclosure is shown in fig. 1, where the data processing method may include the following steps:

Step S100, receiving a video file sent by a video acquisition device, wherein the private information of the video encapsulation of the video file carries positioning information, the positioning information is obtained by the video acquisition device through positioning at a preset time interval during video acquisition, and the positioning information comprises position information obtained through positioning and the time at which the position information was obtained.

In the embodiment of the application, a positioning chip may be disposed on the video acquisition device, and when video acquisition is performed, the video acquisition device may acquire position information (e.g., latitude and longitude information) through the positioning chip based on a preset time interval (which may be set according to an actual scene, for example, 1 second), and carry positioning information in a video file, where the positioning information may include the position information acquired through the positioning chip and time information when the position information is acquired.

For example, to avoid the influence of the positioning information on the decoding of the video file, the positioning information may be carried in private information of the encapsulation information of the video file.
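As an illustrative sketch only: the record layout below (a hypothetical `GPSL` tag followed by packed (timestamp, latitude, longitude) triples) is an assumption, since real container formats such as MP4 define their own private/user-data boxes, but it shows how a per-interval positioning list can be serialized into such a field without touching the video streams:

```python
import struct

# Hypothetical tag for the private (user-data) field that carries positioning
# records; real containers (e.g. MP4 'udta'/'uuid' boxes) define their own
# layouts, so this is only an illustrative sketch.
PRIVATE_TAG = b"GPSL"

def pack_positions(records):
    """Serialize (timestamp, lat, lon) records into a private payload.

    Each record: unix timestamp (float), latitude and longitude in degrees.
    """
    payload = struct.pack(">I", len(records))  # big-endian record count
    for ts, lat, lon in records:
        payload += struct.pack(">ddd", ts, lat, lon)
    return PRIVATE_TAG + payload

def unpack_positions(blob):
    """Parse a payload written by pack_positions back into records."""
    assert blob[:4] == PRIVATE_TAG
    (count,) = struct.unpack_from(">I", blob, 4)
    records, offset = [], 8
    for _ in range(count):
        ts, lat, lon = struct.unpack_from(">ddd", blob, offset)
        records.append((ts, lat, lon))
        offset += 24  # three 8-byte doubles per record
    return records

# One record per preset interval (e.g. 1 s) during capture.
recs = [(1600000000.0 + i, 30.25 + 0.001 * i, 120.15) for i in range(3)]
blob = pack_positions(recs)
assert unpack_positions(blob) == recs
```

Because the payload lives outside the audio/video tracks, a standard player that ignores unknown private fields can still decode the file, which matches the motivation stated above.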

Step S110, extracting the positioning information in the video file, storing the positioning information, and recording the corresponding relation between the positioning information and the video file.

In the embodiment of the application, when a video file sent by video acquisition equipment is received, the positioning information in the video file can be extracted, the positioning information is stored in a database, and the corresponding relation between the positioning information and the video file is recorded.
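A minimal sketch of this step, assuming a simple relational table (the application does not specify a database schema): the extracted records are saved keyed by the video file they came from, which records the correspondence in one pass:

```python
import sqlite3

# Back-end storage sketch: positioning records extracted from a received
# video file are saved, and the correspondence between each record and the
# file is recorded. The table layout is an assumption, not a specification.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE positioning (
    video_file TEXT, ts REAL, lat REAL, lon REAL)""")

def save_positioning(video_file, records):
    """records: iterable of (timestamp, lat, lon) parsed from private info."""
    db.executemany(
        "INSERT INTO positioning VALUES (?, ?, ?, ?)",
        [(video_file, ts, lat, lon) for ts, lat, lon in records])
    db.commit()

def trajectory(video_file):
    """Return the stored (timestamp, lat, lon) list for one video file."""
    cur = db.execute(
        "SELECT ts, lat, lon FROM positioning "
        "WHERE video_file = ? ORDER BY ts", (video_file,))
    return cur.fetchall()

save_positioning("sp1", [(0.0, 30.25, 120.15), (1.0, 30.251, 120.15)])
assert trajectory("sp1") == [(0.0, 30.25, 120.15), (1.0, 30.251, 120.15)]
```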

And step S120, displaying the video file based on the positioning information.

In the embodiment of the application, once the correspondence between the positioning information and the video file has been determined, the video file can be displayed based on the positioning information.

It can be seen that, in the method flow shown in fig. 1, the positioning information of the video acquisition process is acquired by the video acquisition device, carried in the private information of the encapsulation information of the video file, and sent to the back-end device. When the back-end device receives the video file, it can extract the positioning information, store it, and record the correspondence between the positioning information and the video file, so that the video file can be displayed based on the positioning information. This provides data support for video retrieval based on position information and can improve the efficiency of retrieving video files associated with an event.

As a possible embodiment, in step S120, the presenting the video file based on the positioning information may include:

displaying video file information in a target area of a video file display interface based on the positioning information corresponding to the video file; the target area is associated with the geographic position represented by the positioning information, and the video file information comprises a video file icon and the acquisition start time or acquisition end time of the video file.

For example, in the embodiment of the present application, a function of displaying a video file based on a geographical area may be provided so that a search person can know an area covered by the video file.

For example, a geographic location represented by the positioning information may be determined based on the positioning information corresponding to the video file, and the video file may be displayed in an area (referred to as a target area herein) associated with the geographic location in the video file display interface.
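One plausible way to place the target area on a map-style display interface is to project the position information with the Web Mercator projection used by common slippy maps. The application does not prescribe a projection, so this is an assumption:

```python
import math

def latlon_to_pixel(lat, lon, zoom, tile_size=256):
    """Web Mercator projection from (lat, lon) to map pixel coordinates.

    A common way to locate the target area on a map background; the
    projection choice here is illustrative, not mandated by the method.
    """
    scale = tile_size * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

# The map origin (0°, 0°) lands at the center of the zoom-0 world tile.
x, y = latlon_to_pixel(0.0, 0.0, 0)
assert (round(x), round(y)) == (128, 128)
```

The returned pixel coordinates would then anchor the video file icon and its capture time in the target area of the interface.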

For example, the video file information may include, but is not limited to, a video file icon, the capture start time or end time of the video file, and the like. That is, when video file information is presented in the target area of the video file presentation interface, the video file icon (identifying the presence of a video file in that area) may be shown together with the capture time of the video file.

The capture time is usually the capture start time or the capture end time of the video file; that is, a video file is typically displayed in the area associated with the geographic position where its capture started, or in the area associated with the geographic position where its capture ended.

It should be noted that, when the video file is displayed, an area corresponding to an intermediate time point (that is, an area associated with the geographical location where the video capture device is located when the system time is the intermediate time point) may also be determined based on any intermediate time point between the capture start time and the capture end time of the video file, and the video file icon and the intermediate time point are displayed in the area.

It should be noted that the video file information may further include a time length of the video file, and at this time, since the moving route and the average moving speed of the video capture device are known, when the capture start time or the capture end time of the video file is known and the capture time length (i.e., the time length of the video file) is known, the retriever may estimate the area covered by the video file based on the video file information displayed in the target area.

For example, assuming that the video file information includes the acquisition start time and duration of the video file, the retriever may estimate the area that the video file may cover based on the area (the area associated with the geographic location where the video acquisition device starts to acquire the video file) where the video file information is displayed in the video file display interface, the duration of the video file, the moving route of the video acquisition device, and the average moving speed of the video acquisition device.

For example, the average moving speed of the video capture device may be determined based on the type of video capture device.

For example, for a video capture device on board a vehicle, such as an on-board recorder, the average movement speed of the video capture device may be determined based on the average movement speed of the vehicle.
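The estimate described above reduces to simple arithmetic: duration times average speed bounds how far the device can have moved from the displayed start (or end) point. A small sketch, with the speed value purely illustrative:

```python
def estimated_coverage_radius_m(duration_s, avg_speed_mps):
    """Upper bound on how far the capture device may have travelled.

    With only the start (or end) position shown, duration x average speed
    bounds the area the video may cover; an estimate, not an exact extent.
    """
    return duration_s * avg_speed_mps

# e.g. a vehicle-mounted recorder averaging 10 m/s over a 5-minute file:
assert estimated_coverage_radius_m(300, 10.0) == 3000.0
```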

In one example, the step S120 of presenting the video file based on the positioning information may include:

and when a selection instruction for the target video file is received, displaying the motion track of the process of recording the target video file in a video file display interface based on the positioning information corresponding to the target video file.

For example, the target video file does not refer to a fixed video file, but may refer to any video file displayed in the video display interface.

For example, when a selection instruction, such as a click instruction, for a target video file is received, track information corresponding to the positioning information may be determined based on the positioning information corresponding to the target video file, and a motion track corresponding to the target video file may be displayed in a video file display interface, so that a search person may determine the motion track of a video capture device corresponding to the video file during recording of the target video file.

In an example, after displaying the motion track of the process of recording the target video file in the video file display interface, the method may further include:

and when a selection instruction for a target track point in the motion track is detected, jumping the playback position of the target video file to the time corresponding to the target track point, wherein the target track point is any point on the motion track.

Illustratively, the target track point does not refer to a fixed track point in the motion track corresponding to the target video file, but may refer to any track point in the motion track corresponding to the target video file.

Illustratively, when a selection instruction, such as a click instruction, for a target track point in a motion track corresponding to a target video file is detected, the playing time of the target video file may be skipped to the time corresponding to the target track point for playing based on the time information corresponding to the track point, so that a search person can quickly locate a video picture at a specific position.

In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, the technical solutions provided in the embodiments of the present application are described below with reference to specific embodiments.

In this embodiment, a video file map search (which may also be referred to as a video map search) is taken as an example, that is, in the video file display interface, the map is taken as a background, and the video file information is displayed on the map.

In this embodiment, the implementation flow of the video file map retrieval is as follows:

1. During video acquisition (the video recording process), the video acquisition device (also referred to as the video recording device) acquires the position information (e.g., longitude and latitude) of its current position through the deployed positioning chip at a preset time interval (e.g., every 1 second), and writes positioning information comprising the acquired position information and the time at which it was acquired (e.g., the current system time) into the private information of the video encapsulation, so that the positioning information does not prevent a player from playing the file as a standard video. A list of positioning points covering the whole acquisition process is thus recorded in the video file (also referred to as the video recording file); a schematic diagram of the positioning-point list is shown in fig. 2.

2. The backend device receives a video file carrying positioning information sent by the video acquisition device, extracts the positioning information by analyzing the private information of the video package, stores the positioning information in a database, and records the corresponding relationship between the positioning information and the video file, and a schematic diagram of the backend device can be as shown in fig. 3.

For example, taking the case where a video file is identified by its name, assume that the positioning information carried in the video file sp1 is: (T0, A0), (T0+1s, A1), …, (T0+Ns, AN), where A0 is the position information in the first positioning record of the video file and T0 is the time at which that position information was obtained. The backend device may then record the correspondence between sp1 and (T0, A0), (T0+1s, A1), …, (T0+Ns, AN).

3. The back-end device can provide a function of displaying video files based on geographic area, so that a retrieval person can see the motion track of a video file's recording process and decide whether the file needs to be watched in detail.

For example, as shown in fig. 4, a map is used as a background in the video file presentation interface, and for any video file, an area for presenting video file information may be determined based on the location information in the first positioning information corresponding to the video file, and the video file information corresponding to the video file may be presented in the area. The video file information includes a video file icon and a capture start time of the video file.

Still taking the above example, the backend device may determine the area corresponding to A0 on the map and present the video file information corresponding to the video file sp1 in that area.

4. When a click command (i.e., a selection instruction) for a video file icon is detected, the motion track of the process of recording the video file is displayed, so that a retrieval person can determine the motion track (which may be called a video track) of the video capture device during capture of the video file; a schematic diagram of the motion track is shown in fig. 4.

Still taking the above example, when the backend device detects an operation instruction clicking the video file icon corresponding to the video file sp1, it may determine the motion track of the video capture device during capture of sp1 based on the position information in the positioning records corresponding to sp1, that is, A0, A1, …, AN, and display the motion track on the map.

5. When a click instruction (namely a selection instruction) aiming at a track point in the motion track is detected, the playing time of the corresponding video file can be controlled to jump to the time corresponding to the track point for playing.

For example, the back-end device may calculate a play offset position of the video file based on time information included in the positioning information corresponding to the track point and the acquisition start time of the video file, and control the play time of the corresponding video file to jump to the time corresponding to the track point for playing, where a schematic diagram of the back-end device may be as shown in fig. 5.

Still taking the above example, assume that the positioning record corresponding to the selected track point is (T0+ks, Ak) and the acquisition start time of the video file sp1 is T′. The playback offset corresponding to the track point is then (T0+k−T′) s, and the back-end device may control playback of sp1 to jump to that time.
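The jump computation in this example can be sketched directly; the function below assumes the track-point timestamps and the capture start time are on a common clock, as in the (T0+ks, Ak) records above:

```python
def playback_offset_s(track_point_ts, capture_start_ts):
    """Offset (in seconds) into the video for a clicked track point.

    If the selected point's positioning record carries time T0 + k and the
    file's capture start time is T', playback jumps to (T0 + k) - T'.
    """
    offset = track_point_ts - capture_start_ts
    if offset < 0:
        raise ValueError("track point precedes the start of the recording")
    return offset

# Track point recorded at T0 + k = 105 s, capture started at T' = 100 s:
assert playback_offset_s(105.0, 100.0) == 5.0
```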

The methods provided herein are described above. The following describes the apparatus provided in the present application:

referring to fig. 6, a schematic structural diagram of a data processing apparatus according to an embodiment of the present application is shown in fig. 6, where the data processing apparatus may include:

a receiving unit 610, configured to receive a video file sent by a video acquisition device, where private information of a video package of the video file carries positioning information, the positioning information is obtained by the video acquisition device by positioning according to a preset time interval in a video acquisition process, and the positioning information includes position information obtained by positioning and time information obtained when the position information is obtained;

an extracting unit 620, configured to extract positioning information in the video file;

a saving unit 630, configured to save the positioning information, and record a corresponding relationship between the positioning information and the video file;

a display unit 640, configured to display the video file based on the positioning information.

In some embodiments, the display unit 640 is specifically configured to display video file information in a target area of a video file display interface based on the positioning information corresponding to the video file; the target area is associated with the geographic position represented by the positioning information, and the video file information comprises a video file icon, and the acquisition starting time or the acquisition ending time of the video file.

In some embodiments, the display unit 640 is further configured to, when a selection instruction for a target video file is received, display the motion track of the process of recording the target video file in a video file display interface based on the positioning information corresponding to the target video file.

In some embodiments, as shown in fig. 7, the data processing apparatus may further include:

and the control unit 650 is configured to skip the playing time of the target video file to a time corresponding to the target track point for playing when a selection instruction for the target track point in the motion track is detected, where the target track point is any point on the motion track.

Correspondingly, the application also provides a hardware structure of the device shown in fig. 6 or fig. 7. Referring to fig. 8, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.

Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.

The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.

It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
