Video viewfinding-location push method and system

Document No.: 1889372  Publication date: 2021-11-26  Views: 8  Language: Chinese

Reading note: This technology, "Video viewfinding-location push method and system" (视频取景地推送方法及系统), was designed and created by 季贇杰 and 戴永成 on 2020-05-22. Its main content is as follows. The application discloses a video viewfinding-location push method comprising: acquiring video data and extracting basic video information from it; matching the extracted basic information against a database, determining the video's associated viewfinding locations from the matching result, and configuring those locations for the video; and pushing the locations associated with the video to the user during playback according to the video's viewfinding-location configuration information, where that configuration information comprises push time points and the viewfinding-location information corresponding to each push time point. The application also discloses a video viewfinding-location push system, an electronic device, and a computer-readable storage medium. A video work can thus be associated with its actual viewfinding locations, which are pushed to users so that related information is easy to look up, raising the online discovery rate of these offline locations and improving the user experience.

1. A video viewfinding-location push method, the method comprising:

acquiring video data and extracting basic video information from the video data;

matching the extracted basic video information with a database, determining an associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video; and

pushing the viewfinding location associated with the video to a user when the video is played, according to the viewfinding-location configuration information of the video, wherein the viewfinding-location configuration information comprises push time points and the viewfinding-location information corresponding to each push time point.

2. The video viewfinding-location push method according to claim 1, wherein the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video comprises:

matching the extracted basic video information with the database, and judging whether any keyword is hit;

when a keyword is hit, indicating that the database mentions an associated viewfinding location of the video, matching key frames of the video with pictures of the viewfinding location mentioned in the database, and judging whether a key frame corresponding to the viewfinding location can be matched;

when a key frame corresponding to the viewfinding location can be matched, judging whether the database contains address information of the viewfinding location; and

when the database contains the address information of the viewfinding location, marking a push time point of the viewfinding location in the video according to the key frame corresponding to the viewfinding location, and configuring the viewfinding-location information corresponding to the push time point, wherein the viewfinding-location information comprises the name of the viewfinding location and the address information of the viewfinding location.

3. The video viewfinding-location push method according to claim 2, wherein the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video further comprises:

when the database does not contain the address information of the viewfinding location, marking the push time point of the viewfinding location in the video, and manually completing the viewfinding-location information corresponding to the push time point.

4. The video viewfinding-location push method according to claim 3, wherein the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video further comprises:

when no keyword is hit, calculating the probability that the video has an associated viewfinding location in a preset manner according to key information extracted from the video data, wherein the preset manner comprises presetting a weight coefficient corresponding to each kind of key information and calculating the probability according to whether each kind of key information appears and its weight coefficient;

judging whether the probability reaches a first threshold value; and

when the probability reaches the first threshold value, or when a keyword is hit but no key frame corresponding to the viewfinding location is matched, taking the video as a video to be configured and transferring it for manual completion of its viewfinding-location configuration.

5. The video viewfinding-location push method according to any one of claims 1 to 4, wherein before the acquiring video data and extracting basic video information from the video data, the method further comprises:

receiving video data uploaded by a user and any viewfinding-location configuration information of the video;

judging whether viewfinding-location configuration information was uploaded with the video;

when no viewfinding-location configuration information was uploaded with the video, executing the steps of acquiring the video data and extracting the basic video information from the video data; and

when viewfinding-location configuration information was uploaded with the video, directly executing the step of pushing the viewfinding location associated with the video to a user when the video is played according to the viewfinding-location configuration information of the video.

6. The video viewfinding-location push method according to claim 2, wherein the matching key frames of the video with the pictures of the viewfinding location mentioned in the database comprises:

extracting pictures from the video at equal time intervals, taking the extracted pictures as the key frames, and matching the key frames with the pictures of the viewfinding location.

7. The video viewfinding-location push method according to claim 2, wherein the matching key frames of the video with the pictures of the viewfinding location mentioned in the database comprises:

acquiring bullet-screen (danmaku) information corresponding to the video, calculating the frequency with which key information appears in the bullet-screen information in each unit time, judging whether the frequency reaches a second threshold value, and taking the video pictures corresponding to the unit times whose frequency reaches the second threshold value as the key frames to be matched with the pictures of the viewfinding location.

8. The video viewfinding-location push method of claim 1, wherein the pushing the viewfinding location associated with the video to a user when the video is played comprises:

when the video is played to a push time point, acquiring the viewfinding-location information corresponding to the push time point, and displaying the viewfinding-location information to the user on the playing page of the video.

9. The video viewfinding-location push method of claim 8, wherein the pushing the viewfinding location associated with the video to a user when the video is played further comprises:

displaying a video icon at each viewfinding-location coordinate on a map page, and playing the video or video segment corresponding to a video icon after receiving a user's click on that icon.

10. The video viewfinding-location push method of claim 1, wherein the pushing the viewfinding location associated with the video to a user when the video is played comprises:

determining, according to user information, which viewfinding-location information associated with the video is pushed to the user, and displaying that viewfinding-location information to the user on the playing page of the video when the video is played to the push time point corresponding to the determined viewfinding-location information.

11. A video viewfinding-location push system, the system comprising:

an extraction module, configured to acquire video data and extract basic video information from the video data;

a configuration module, configured to match the extracted basic video information with a database, determine an associated viewfinding location of the video according to the matching result, and configure the associated viewfinding location for the video; and

a push module, configured to push the viewfinding location associated with the video to a user when the video is played, according to the viewfinding-location configuration information of the video, wherein the viewfinding-location configuration information comprises push time points and the viewfinding-location information corresponding to each push time point.

12. An electronic device, comprising: a memory, a processor, and a video viewfinding-location push program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the video viewfinding-location push method of any one of claims 1 to 7.

13. A computer-readable storage medium, characterized in that a video viewfinding-location push program is stored on the computer-readable storage medium, and the program, when executed by a processor, implements the video viewfinding-location push method according to any one of claims 1 to 7.

Technical Field

The present application relates to the field of data processing technologies, and in particular, to a video viewfinding-location push method and system, an electronic device, and a computer-readable storage medium.

Background

The viewfinding locations (also called video-related "holy lands", i.e. pilgrimage sites) appearing in video works include the actual shooting locations of film and television works, as well as the highly similar real places on which production teams based scenes in animation and game works. Video works include, but are not limited to: dramas, variety shows, documentaries, animation, user-generated videos, and the like.

In the prior art, there is no effective way to push the viewfinding locations appearing in a video to users; such locations are mainly discovered spontaneously by some users and promoted to others through bullet screens, comments, or other community platforms. That is, existing video products lack any association with the viewfinding locations appearing in video works: within a single product, a user who has watched a video cannot promptly and conveniently learn the actual places behind the locations on screen. This raises the barrier for users who wish to visit the sites in person (the so-called "pilgrimage") or search for related information, and degrades the user experience.

It should be noted that the above-mentioned contents are not intended to limit the scope of protection of the application.

Disclosure of Invention

The application mainly aims to provide a video viewfinding-location push method, system, electronic device, and computer-readable storage medium, in order to solve the problem of how to associate a video work with the viewfinding locations appearing in it and push them to users in a timely manner.

In order to achieve the above object, an embodiment of the present application provides a video viewfinding-location push method, where the method includes:

acquiring video data and extracting basic video information from the video data;

matching the extracted basic video information with a database, determining an associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video; and

pushing the viewfinding location associated with the video to a user when the video is played, according to the viewfinding-location configuration information of the video, wherein the viewfinding-location configuration information comprises push time points and the viewfinding-location information corresponding to each push time point.

Optionally, the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video includes:

matching the extracted basic video information with the database, and judging whether any keyword is hit;

when a keyword is hit, indicating that the database mentions an associated viewfinding location of the video, matching key frames of the video with pictures of the viewfinding location mentioned in the database, and judging whether a key frame corresponding to the viewfinding location can be matched;

when a key frame corresponding to the viewfinding location can be matched, judging whether the database contains address information of the viewfinding location; and

when the database contains the address information of the viewfinding location, marking a push time point of the viewfinding location in the video according to the key frame corresponding to the viewfinding location, and configuring the viewfinding-location information corresponding to the push time point, wherein the viewfinding-location information comprises the name of the viewfinding location and the address information of the viewfinding location.

Optionally, the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video further includes:

when the database does not contain the address information of the viewfinding location, marking the push time point of the viewfinding location in the video, and manually completing the viewfinding-location information corresponding to the push time point.

Optionally, the matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video further includes:

when no keyword is hit, calculating the probability that the video has an associated viewfinding location in a preset manner according to key information extracted from the video data, wherein the preset manner comprises presetting a weight coefficient corresponding to each kind of key information and calculating the probability according to whether each kind of key information appears and its weight coefficient;

judging whether the probability reaches a first threshold value; and

when the probability reaches the first threshold value, or when a keyword is hit but no key frame corresponding to the viewfinding location is matched, taking the video as a video to be configured and transferring it for manual completion of its viewfinding-location configuration.

Optionally, before the acquiring video data and extracting basic video information from the video data, the method further includes:

receiving video data uploaded by a user and any viewfinding-location configuration information of the video;

judging whether viewfinding-location configuration information was uploaded with the video;

when no viewfinding-location configuration information was uploaded with the video, executing the steps of acquiring the video data and extracting the basic video information from the video data; and

when viewfinding-location configuration information was uploaded with the video, directly executing the step of pushing the viewfinding location associated with the video to a user when the video is played according to the viewfinding-location configuration information of the video.

Optionally, the matching key frames of the video with the pictures of the viewfinding location mentioned in the database includes:

extracting pictures from the video at equal time intervals, taking the extracted pictures as the key frames, and matching the key frames with the pictures of the viewfinding location.

Optionally, the matching key frames of the video with the pictures of the viewfinding location mentioned in the database includes:

acquiring bullet-screen (danmaku) information corresponding to the video, calculating the frequency with which key information appears in the bullet-screen information in each unit time, judging whether the frequency reaches a second threshold value, and taking the video pictures corresponding to the unit times whose frequency reaches the second threshold value as the key frames to be matched with the pictures of the viewfinding location.

Optionally, the pushing the viewfinding location associated with the video to a user when the video is played includes:

when the video is played to a push time point, acquiring the viewfinding-location information corresponding to the push time point, and displaying the viewfinding-location information to the user on the playing page of the video.

Optionally, the pushing the viewfinding location associated with the video to a user when the video is played further includes:

displaying a video icon at each viewfinding-location coordinate on a map page, and playing the video or video segment corresponding to a video icon after receiving a user's click on that icon.

Optionally, the pushing the viewfinding location associated with the video to a user when the video is played includes:

determining, according to user information, which viewfinding-location information associated with the video is pushed to the user, and displaying that viewfinding-location information to the user on the playing page of the video when the video is played to the push time point corresponding to the determined viewfinding-location information.

In addition, to achieve the above object, an embodiment of the present application further provides a video viewfinding-location push system, where the system includes:

an extraction module, configured to acquire video data and extract basic video information from the video data;

a configuration module, configured to match the extracted basic video information with a database, determine an associated viewfinding location of the video according to the matching result, and configure the associated viewfinding location for the video; and

a push module, configured to push the viewfinding location associated with the video to a user when the video is played, according to the viewfinding-location configuration information of the video, wherein the viewfinding-location configuration information comprises push time points and the viewfinding-location information corresponding to each push time point.

In order to achieve the above object, an embodiment of the present application further provides an electronic device, including: a memory, a processor, and a video viewfinding-location push program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the video viewfinding-location push method described above.

To achieve the above object, an embodiment of the present application further provides a computer-readable storage medium on which a video viewfinding-location push program is stored, the program, when executed by a processor, implementing the video viewfinding-location push method described above.

The video viewfinding-location push method, system, electronic device, and computer-readable storage medium described above can associate video works with their actual viewfinding locations and clearly indicate the viewfinding-location information at the corresponding time points on the video playing page, making it convenient for users to look up related information. This improves both the discovery rate of the offline viewfinding locations and the online experience of the users.

Drawings

FIG. 1 is a diagram of an application environment architecture in which various embodiments of the present application may be implemented;

fig. 2 is a flowchart of a video viewfinding-location push method according to a first embodiment of the present application;

FIG. 3 is a detailed flowchart of step S202 in FIG. 2;

FIG. 4 is a schematic illustration of a push approach of the present application;

fig. 5 is a flowchart of a video viewfinding-location push method according to a second embodiment of the present application;

fig. 6 is a schematic hardware architecture diagram of an electronic device according to a third embodiment of the present application;

fig. 7 is a block diagram of a video viewfinding-location push system according to a fourth embodiment of the present application;

fig. 8 is a block diagram of a video viewfinding-location push system according to a fifth embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that the descriptions relating to "first", "second", etc. in the embodiments of the present application are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present application.

Referring to fig. 1, fig. 1 is a diagram illustrating an application environment architecture for implementing various embodiments of the present application. The present application is applicable in application environments including, but not limited to, client 2, server 4, network 6.

The client 2 is configured to receive data such as videos uploaded by users and send them to the server 4, to obtain resources such as video data from the server 4 and play videos for users, and to receive user operations. In the embodiments of the present application, the client 2 is further configured to receive the viewfinding-location configuration information of a video uploaded by a user, or the viewfinding-location configuration information of a video from the server 4, and to push the viewfinding locations to the user according to that configuration information when the video is played. The client 2 may be a terminal device such as a PC (Personal Computer), a mobile phone, a tablet computer, a portable computer, or a wearable device.

The server 4 is configured to receive data such as videos uploaded from the client 2 and to provide resources such as video data to the client 2. In the embodiments of the present application, the server 4 is further configured to perform viewfinding-location association for videos and to send the corresponding viewfinding-location configuration information to the client 2. The server 4 may be a rack server, blade server, tower server, cabinet server, or other computing device; it may be an independent server or a server cluster formed by a plurality of servers.

The network 6 may be a wireless or wired network such as an Intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, or Wi-Fi. The server 4 and one or more clients 2 are connected through the network 6 for data transmission and interaction.

Example one

Fig. 2 is a flowchart of a video viewfinding-location push method according to a first embodiment of the present application. It is to be understood that the flowcharts in the embodiments of the present method are not intended to limit the order in which the steps are performed. In the present embodiment, the server 4 is taken as the execution subject for exemplary description.

The method comprises the following steps:

S200, acquiring video data and extracting basic video information from the video data.

A user may upload a video in the client 2, which sends it to the server 4; when uploading the video itself (the feature), the user may provide its title, synopsis, and so on. In addition, during playback, viewers may leave messages in bullet screens (danmaku, messages that pop up when the video plays to a specific time) or in comments. The video data includes, but is not limited to, the video feature together with its title, synopsis, bullet screens, and comments. The server 4 acquires the video data and extracts various items of basic video information from it. In this embodiment, the basic video information may include the title, synopsis, episode information, and the like. The server 4 may extract the basic video information by methods such as crawling the public information of a video website.
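The extraction in step S200 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record layout and field names (`title`, `synopsis`, `episode`) are assumptions for the example.

```python
def extract_basic_info(video_data):
    """Pull the searchable basic fields out of a raw video-data record.

    `video_data` is assumed to be a dict holding the uploaded feature's
    metadata alongside bullet screens and comments; only the basic
    information used for database matching is returned.
    """
    return {
        "title": video_data.get("title", ""),
        "synopsis": video_data.get("synopsis", ""),
        "episode": video_data.get("episode"),  # None when not provided
    }
```

The bullet screens and comments stay in the raw record; they are consumed later, in the key-frame and probability steps.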

S202, matching the extracted basic video information with a database, determining the associated viewfinding location of the video according to the matching result, and configuring the associated viewfinding location for the video.

The database includes, but is not limited to, articles that may mention the viewfinding locations of video works, such as travel columns, game columns, and travel notes, as well as other related information that may mention such locations. After the basic video information is extracted, the basic information (mainly the title) and viewfinding-related words (such as "viewfinding location", "shooting place", "holy land", and the like) are matched against the database as keywords. When a keyword is hit (that is, basic information such as the video's title is matched in a text in the database together with a keyword such as "viewfinding location", "shooting place", or "holy land"), the viewfinding-location information corresponding to the video is obtained (in other words, the database shows that the video has corresponding viewfinding-location information), and the video's viewfinding-location configuration is produced according to the matching result. The viewfinding-location configuration information comprises push time points (the video playback times at which a location is pushed to the user) and the viewfinding-location information corresponding to each time point (including the location's name, a brief description of its address, and so on).
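The keyword "hit" test described above can be sketched as a simple text check. This is an illustrative sketch only: the English keyword list stands in for the Chinese terms the patent names, and article texts are assumed to be plain strings rather than the actual database schema.

```python
# Assumed stand-ins for the viewfinding-related keywords the text lists
# ("viewfinding location", "shooting place", "holy land", ...).
LOCATION_KEYWORDS = ("filming location", "shooting place", "holy land")

def keyword_hit(video_title, article_text):
    """A keyword is 'hit' when an article mentions both the video's
    title and at least one viewfinding-related keyword."""
    text = article_text.lower()
    if video_title.lower() not in text:
        return False
    return any(kw in text for kw in LOCATION_KEYWORDS)
```

A real system would use full-text search over the article database rather than substring checks, but the hit condition is the same: title plus location keyword in one text.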

In this embodiment, the matching result and the corresponding processing may include the following cases:

(1) a key frame of the video can be matched to a viewfinding-location picture from an article text in the database, and the location's address information is also found in the article text: the viewfinding-location configuration information is generated automatically and then passed to manual review;

(2) a key frame can be matched but no address information for the location is found (for example, only the location's name is mentioned): the push time point is marked and the viewfinding-location information is then completed manually;

(3) a viewfinding location is matched but no corresponding key frame can be matched: the video is treated as a video to be configured and handled manually;

(4) no viewfinding location can be matched (no keyword is hit): the probability that the video has an associated location is calculated in another preset manner, and videos whose probability reaches a preset threshold are treated as videos to be configured and handled manually.

Specifically, refer further to fig. 3, which is a detailed flowchart of step S202. It is to be understood that the flowchart is not intended to limit the order in which the steps are performed, and steps may be added or removed as needed. In this embodiment, step S202 specifically includes:

s2020, matching the extracted basic information of the video with the database, and determining whether a keyword is hit (i.e. matching the basic information such as the title of the video in the text body in the database, and matching the keyword such as "view finder", "shooting place", and "holy land"). If the keyword is hit, it indicates that the database refers to the associated view finding place of the video, and step S2021 is performed; if the keyword is not hit, it indicates that the database does not refer to the associated view of the video, and steps S2025-S2026 are performed.

S2021, matching key frames of the video with the pictures of the viewfinding location mentioned in the database, and judging whether a key frame corresponding to the viewfinding location can be matched. If such a key frame can be matched, step S2022 is executed; if not, step S2027 is performed.

In this embodiment, after the viewfinding location mentioned in the database has been matched, key frames are further matched. Generally, when an article section or travel note mentions the viewfinding location of a video, it very often includes related pictures, such as screenshots of the video work (taken where the location appears) and/or real photographs of the location. Therefore, by matching these pictures against key frames of the video using picture-similarity comparison, the time point at which the viewfinding location appears in the video (that is, the push time point) can be determined. If a key frame corresponding to the viewfinding location is matched, the playback time of that key frame is the push time point corresponding to the location.
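One way the picture-similarity comparison above could work is an average-hash comparison. This is a hedged sketch under simplifying assumptions, not the patent's matcher: production systems would use perceptual hashing or learned image features, and frames here are plain 2D lists of grayscale values to keep the example self-contained.

```python
def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints): each bit is
    1 when the pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def frames_match(frame, reference, max_distance=2):
    """Treat a video key frame as matching a database picture of a
    viewfinding location when their hash distance is small."""
    return hamming_distance(average_hash(frame), average_hash(reference)) <= max_distance
```

The hash is robust to small brightness shifts, which suits comparing an article's screenshot against the corresponding video frame.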

It is to be noted that the present embodiment may determine the key frames in the video in the following ways:

(1) extracting pictures from the video at equal time intervals as the key frames. For example, a picture is extracted once per second, and each extracted picture is taken as a key frame of the video.

In addition, a web-page video progress bar displays a still image for the hovered moment when the mouse hovers over it; if the server 4 stores these still images, they can be used directly as the key frames to be matched against the pictures of the viewfinding location.

(2) acquiring the bullet-screen (danmaku) information corresponding to the video, calculating the frequency with which key information (keywords related to viewfinding locations, such as "viewfinding location", "shooting place", "holy land", "check-in spot", and the like) appears in the bullet-screen information in each unit time, judging whether the frequency reaches (is greater than or equal to) a preset threshold, and taking the video pictures corresponding to the unit times whose frequency reaches the threshold as the key frames to be matched against the pictures of the viewfinding location.
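The bullet-screen frequency selection of mode (2) can be sketched as bucketing danmaku by time and thresholding the per-bucket keyword count. The danmaku format (timestamp in seconds, text), bucket size, and English keyword list are all illustrative assumptions.

```python
# Assumed stand-ins for the viewfinding-related keywords listed above.
KEYWORDS = ("filming location", "shooting place", "holy land", "check-in")

def key_frame_times(danmaku, bucket_seconds=10, threshold=3):
    """Count keyword-bearing danmaku per unit-time bucket and return
    the start times (seconds) of buckets whose count reaches the
    threshold; frames at those times become key-frame candidates."""
    counts = {}
    for ts, text in danmaku:
        if any(kw in text for kw in KEYWORDS):
            bucket = int(ts // bucket_seconds) * bucket_seconds
            counts[bucket] = counts.get(bucket, 0) + 1
    return sorted(t for t, c in counts.items() if c >= threshold)
```

The intuition: when many viewers simultaneously post "this is the filming location!", the surrounding frames very likely show the location.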

S2022, judging whether the database includes the address information of the viewfinding location. If the address information is included, step S2023 is executed; if not, step S2024 is executed.

In this embodiment, after the viewfinding location of the video has been matched from a text in the database (for example, the text mentions the video's title and the location's name), it is further judged from the context of the text whether the address information of the location is included. The address information may be a textual description, a map, or the like.

In another embodiment, after the name of the viewfinding location has been matched (or after it is determined that the database does not contain its address information), the address information may be searched for automatically through other channels (including map applications such as Baidu Maps or Gaode Maps) to complete the configuration intelligently. If the address information is found, step S2023 is executed; otherwise step S2024 is executed.

S2023, marking a push time point of the view-finding place in the video, and configuring the view-finding place information corresponding to the push time point.

In this embodiment, a push time point of the view-finding place is marked on the time axis of the video according to the matched key frame corresponding to that view-finding place, and the view-finding place information corresponding to the push time point is configured, including the name of the view-finding place, a brief description of its address, and the like. After configuration is completed, the view-finding place configuration information of the video is handed over for manual review to further confirm the accuracy of the configuration result.

S2024, marking the push time point of the view-finding place in the video, and handing over to manual processing to complete the view-finding place information corresponding to the push time point.

In this embodiment, when the address information of the view-finding place is not available, a push time point of the view-finding place is marked on the time axis of the video according to the matched key frame corresponding to that view-finding place, and the view-finding place name corresponding to the push time point is configured. The configured information is then handed over to manual processing, and the address information of the view-finding place and so on is added manually, thereby completing the configuration of the view-finding place information corresponding to the video.

S2025, calculating the probability that the video has an associated view-finding place in a preset manner, according to the key information extracted from the video data.

When the view-finding place of the video cannot be matched against the database (the keywords are not hit), key information is extracted from the video data, including view-finding place information appearing in the video basic information (such as a place name in the brief introduction of the video) and view-finding-place-related key words (such as "filming location", "shooting site", "holy land", "check-in", and the like) contained in the barrage and/or comments of the video. The probability that the video has an associated view-finding place is then calculated in a preset manner from the key information. The preset manner may be to preset a weight coefficient for each kind of key information and to calculate the probability from whether that key information appears together with its weight coefficient.

For example, the weight coefficient corresponding to a place name appearing in the video introduction is set to 0.4, and the weight coefficient corresponding to the density of the key words appearing in the barrage is set to 0.6. The probability that the video has an associated view-finding place is then calculated as: probability = (whether a place name appears in the introduction, 0 or 1) × weight coefficient 1 (0.4) + barrage key-word density × weight coefficient 2 (0.6). If a place name is directly hit in the video introduction (such as "the center of the country"), the first term takes 1, otherwise 0; the barrage list of the video is read and the density of the key words within a unit time (such as 3 seconds) is calculated, where density = the number of barrage messages hitting a key word divided by the total number of barrage messages within the unit time. Substituting these values into the formula yields the probability. Assuming that "the center of the country" appears in the brief introduction of a certain video, and that of the 100 barrage messages in the playback interval 03:45 to 03:48, 80 contain the word "check-in" (a density of 0.8), the probability is 1 × 0.4 + 0.8 × 0.6 = 0.4 + 0.48 = 0.88.
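The worked example above can be expressed directly in code. This sketch uses the two weight coefficients from the example (0.4 and 0.6); the function name is a hypothetical stand-in:

```python
def association_probability(place_name_in_intro, danmaku_density,
                            w_intro=0.4, w_danmaku=0.6):
    """Weighted score from step S2025: an introduction place-name hit (0/1)
    plus the key-word density among barrage messages in the unit time."""
    return (1 if place_name_in_intro else 0) * w_intro + danmaku_density * w_danmaku

# Worked example from the text: name hit, 80 of 100 barrage messages
# contain "check-in", so density is 0.8 and the probability is 0.88.
p = association_probability(True, 80 / 100)
```

With no place-name hit the score falls back to the barrage term alone (0.8 × 0.6 = 0.48), which would miss the 0.5 threshold used in S2026.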

It is noted that the probability calculated from the density of the key words appearing in the barrage within a unit time is the probability that the video has an associated view-finding place within that unit time. That is, if it is determined from the probability that the video potentially has an associated view-finding place, the push time point corresponding to the view-finding place may be configured as that unit time.

S2026, determining whether the probability reaches (is greater than or equal to) a preset threshold. If the preset threshold is reached, executing step S2027; if the preset threshold value is not reached, the process is ended.

For example, the preset threshold is set to 0.5; if the calculated probability reaches 0.5, the video is considered to have the potential of an associated view-finding place. Otherwise, the video is considered not to have such potential, and the video and its related data are discarded (no further processing related to view-finding place configuration is performed).

S2027, taking the video as a video to be configured, and handing it over to manual processing to complete the view-finding place configuration of the video.

When the view-finding place of the video mentioned in the database is matched (the keywords are hit) but no key frame corresponding to the view-finding place can be matched, or when the keywords are not hit but the calculated probability that the video has an associated view-finding place reaches the preset threshold, the video is marked as a video to be configured and handed over to manual processing together with the matched view-finding place name (or a place name appearing in the brief introduction, etc.). The push time points of the view-finding place are then judged manually, the address information of the view-finding place is added, and so on, thereby completing the view-finding place configuration of the video.

This embodiment can realize automatic data association by matching the basic video information against a database (including but not limited to column articles and travel notes), and in some cases can match and confirm the view-finding place of a specific video segment. Moreover, the probability that a view-finding place appears in the video or a segment thereof can be calculated from the key information in the video data and used as the basis for marking the video as a video to be configured and handing it over to manual processing.

Returning to fig. 2, in step S204, the view-finding place associated with the video is pushed to the user when the video is played, according to the view-finding place configuration information of the video.

When the video plays to a push time point, the view-finding place information corresponding to that push time point is obtained and pushed to the user (or the information may be obtained a certain time in advance and pushed at the push time point). In this embodiment, the push may take the form of a textual description or map coordinates displayed on the playing page of the video. An entry may be added within the video player page to show the view-finding places related to the video, in forms including but not limited to a detailed textual address or abbreviated map coordinates; clicking the entry opens a map page within the product that shows the positions of the view-finding places related to the video in detail. For example, fig. 4 is a schematic diagram of one such push manner: when the video plays to the push time point, a "holy land discovered" prompt is displayed at the push time point on the time axis, on the player interface, or at any position of the current page, and the map coordinates of the view-finding place corresponding to the push time point are displayed in the video detail section of the current page.
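The time-point push logic described above can be sketched as a small poller driven by the playhead. The class name and the (push_time, info) pair representation of the configuration are illustrative assumptions, not part of the application:

```python
class ViewfinderPusher:
    """Minimal sketch: given (push_time_seconds, info) pairs from the
    view-finding place configuration, report which entries became due
    since the last poll of the playhead."""
    def __init__(self, config):
        self.times = sorted(t for t, _ in config)
        self.info = dict(config)
        self.cursor = 0  # index of the next not-yet-pushed time point
    def poll(self, playhead):
        due = []
        while self.cursor < len(self.times) and self.times[self.cursor] <= playhead:
            due.append(self.info[self.times[self.cursor]])
            self.cursor += 1
        return due
```

A player would call `poll` on each playback tick; pre-fetching the information a few seconds early, as the text allows, amounts to polling with `playhead + lead`.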

In this embodiment, in addition to displaying the map coordinates of the view-finding places (there may be one or more) on the playing page of the video according to their address information, a video icon corresponding to each view-finding place coordinate may be displayed on the map page, and the user can watch the video or a video segment (the segment containing the view-finding place) after clicking the video icon.

In other embodiments, the view-finding place information associated with the video may also be pushed to the user in other feasible manners, for example a prompt barrage, a voice prompt, or a small pop-up display box at a specific position, which is not limited herein.

In addition, when the view-finding place related to the video is pushed to the user, which view-finding place information is pushed can be determined according to user information. Specifically: no view-finding place information is pushed to some users (e.g., users who have opted out of receiving pushes); different types of view-finding place information are pushed according to personal attributes of the user such as geographical position and interests; and so on. In this way, different pushes can be performed for different users, better meeting user needs and improving the user experience.
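The per-user filtering described above might look like the following sketch, where the user-attribute keys (`accepts_push`, `interests`, `city`) and the location fields are hypothetical:

```python
def personalized_pushes(locations, user):
    """Sketch of per-user push filtering: opted-out users receive nothing;
    otherwise locations matching the user's interests or city rank first."""
    if not user.get("accepts_push", True):
        return []
    return sorted(
        locations,
        key=lambda loc: (loc.get("category") in user.get("interests", ()),
                         loc.get("city") == user.get("city")),
        reverse=True)
```

Because `sorted` is stable, locations with equal relevance keep their original order.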

A view-finding place appearing in film and television works can bring a real boost to offline visitor traffic, and this embodiment can improve the online-to-offline conversion rate of such discoveries, namely by providing a clear and convenient path for a video viewer to locate the view-finding places contained in the video.

The video view-finding place push method provided by this embodiment can associate video works with their actual view-finding places and clearly indicate the view-finding place information at the corresponding time points on the video playing page, making it convenient for the user to look up related information, thereby improving the online discovery rate of these offline view-finding places and improving the user experience.

Example two

Fig. 5 is a flowchart of a video viewfinding push method according to a second embodiment of the present application. In the second embodiment, the video viewfinding push method further includes steps S500-S502 on the basis of the first embodiment. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed.

The method comprises the following steps:

S500, receiving the video data uploaded by the user and the view-finding place configuration information of the video.

The user may upload a video in the client 2, which sends it to the server 4, and the user may provide the title, brief introduction, and so on when uploading a video feature. In addition, the user may upload the view-finding place configuration information of the video at the same time; that is, the push time points and the view-finding place information (such as the view-finding place name and its detailed coordinates on a map) can be configured manually while uploading the video and its basic information. Such a video then no longer needs the view-finding place association operation, and the view-finding place can be pushed to the user directly during playback according to the uploaded configuration information.
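One possible shape for the uploaded view-finding place configuration information is sketched below; the class and field names are illustrative assumptions rather than a schema defined by this application:

```python
from dataclasses import dataclass, field

@dataclass
class LocationInfo:
    name: str                 # view-finding place name
    address: str = ""         # brief textual address description
    coords: tuple = ()        # optional map coordinates, e.g. (lat, lng)

@dataclass
class ViewfinderConfig:
    # One entry per push time point, keyed by seconds from video start.
    entries: dict = field(default_factory=dict)  # {push_time_s: LocationInfo}
```

A video uploaded with a populated `ViewfinderConfig` would skip the database-matching steps entirely, as step S502 describes.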

S502, judging whether view-finding place configuration information has been uploaded for the video. If it has been uploaded, go to step S508; if it has not, step S504 is executed.

Before performing the view-finding place association operation on the video, it is judged whether view-finding place configuration information has been uploaded for it. If it has, no subsequent operation is needed and the information is pushed directly; if it has not, the video needs to be associated with its corresponding view-finding place according to the database.

S504, the video data is obtained, and the basic video information is extracted from the video data.

When uploading a video, the user provides its title, brief introduction, and so on. In addition, during playback, viewers may leave messages as barrage (messages popped up when the video plays to a specific time) or comments. The video data includes, but is not limited to, the video feature itself together with its title, brief introduction, barrage, and comments. The server 4 obtains the video data to be associated and extracts various items of basic video information from it. In this embodiment, the basic video information may include the title, brief introduction, episode list, and the like. The server 4 may also extract basic video information by methods such as crawling the public information of video websites.

S506, matching the extracted basic video information against the database, determining the associated view-finding place of the video according to the matching result, and configuring the associated view-finding place for the video.

The database includes, but is not limited to, articles that may mention the view-finding places of video works, such as travel columns, entertainment columns, and travel notes, and other related information that may mention them. After the basic video information of the video is extracted, the basic video information (mainly the title here) together with view-finding-place-related words (such as "filming location", "shooting site", "holy land", etc.) are matched against the database as keywords. When the keywords are hit (that is, a text in the database matches both the title of the video and a keyword such as "filming location", "shooting site", or "holy land"), the view-finding place information corresponding to the video is obtained (that is, the database shows that the video has corresponding view-finding place information), and the associated view-finding place of the video is configured according to the matching result. The view-finding place configuration information comprises push time points (the video playback time points at which the view-finding place is pushed to the user) and the view-finding place information corresponding to each time point (including the view-finding place name, a brief description of its address, and so on).
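The keyword-hit condition described above (the video title plus a view-finding-place-related word appearing in the same database text) can be sketched as follows; the keyword list is a hypothetical English stand-in:

```python
def match_location(video_title, article_text,
                   keywords=("filming location", "shooting site", "holy land")):
    """A keyword hit requires both the video title and at least one
    view-finding-place-related key word in the same article text."""
    text = article_text.lower()
    return video_title.lower() in text and any(k in text for k in keywords)
```

A production matcher would use tokenization and fuzzy title matching rather than raw substring tests; this sketch only illustrates the conjunctive hit condition.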

In this embodiment, the matching result and corresponding processing may include the following cases:

(1) a key frame of the video can be matched from the picture of the view-finding place mentioned in an article text in the database, and the address information of the view-finding place can also be matched from the article text: the view-finding place configuration information is configured intelligently and directly, and is then passed to manual review;

(2) a key frame can be matched but the address information of the view-finding place cannot (for example, only the name of the view-finding place is mentioned): the push time point is marked, and the view-finding place information is then completed manually;

(3) a view-finding place is matched but no key frame can be matched: the video is handed over to manual processing as a video to be configured;

(4) no view-finding place can be matched (the keywords are not hit): the probability is calculated in another preset manner, and a video whose probability reaches the preset threshold is handed over to manual processing as a video to be configured.
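The four cases above amount to a small dispatch table. The sketch below uses hypothetical return labels standing in for the processing queues the real system would use:

```python
def route_match(found_keyframe, found_address, found_name, probability,
                threshold=0.5):
    """Dispatches a database-matching result to one of the four cases:
    full match -> auto-configure then review; name+keyframe -> mark time
    point, complete info manually; name only, or probability over the
    threshold -> manual configuration; otherwise discard."""
    if found_name and found_keyframe and found_address:
        return "auto_configure_then_review"    # case (1)
    if found_name and found_keyframe:
        return "mark_time_point_manual_info"   # case (2)
    if found_name:
        return "manual_configuration"          # case (3)
    if probability >= threshold:
        return "manual_configuration"          # case (4)
    return "discard"
```

The branch order matters: each later case assumes the stronger conditions above it have already failed.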

The specific process of this step is shown in fig. 3 and related description, and will not be described herein again.

S508, pushing the view-finding place related to the video to the user when the video is played according to the view-finding place configuration information of the video.

When the video plays to a push time point, the view-finding place information corresponding to that push time point is obtained and pushed to the user (or the information may be obtained a certain time in advance and pushed at the push time point). In this embodiment, the push may take the form of a textual description or map coordinates displayed on the playing page of the video. An entry may be added within the video player page to show the view-finding places related to the video, in forms including but not limited to a detailed textual address or abbreviated map coordinates; clicking the entry opens a map page within the product that shows the positions of the view-finding places related to the video in detail. For example, fig. 4 is a schematic diagram of one such push manner: when the video plays to the push time point, a "holy land discovered" prompt is displayed at the push time point on the time axis, on the player interface, or at any position of the current page, and the map coordinates of the view-finding place corresponding to the push time point are displayed in the video detail section of the current page.

In this embodiment, in addition to displaying the map coordinates of the view-finding places (there may be one or more) on the playing page of the video according to their address information, a video icon corresponding to each view-finding place coordinate may be displayed on the map page, and the user can watch the video or a video segment (the segment containing the view-finding place) after clicking the video icon.

In other embodiments, the view-finding place information associated with the video may also be pushed to the user in other feasible manners, for example a prompt barrage, a voice prompt, or a small pop-up display box at a specific position, which is not limited herein.

In addition, when the view-finding place related to the video is pushed to the user, which view-finding place information is pushed can be determined according to user information. Specifically: no view-finding place information is pushed to some users (e.g., users who have opted out of receiving pushes); different types of view-finding place information are pushed according to personal attributes of the user such as geographical position and interests; and so on. In this way, different pushes can be performed for different users, better meeting user needs and improving the user experience.

The video view-finding place push method provided by this embodiment allows the view-finding place configuration information of a video to be uploaded together with the video feature, so that the view-finding place is pushed to the user directly when the video is played, making it convenient for the user to view related information.

EXAMPLE III

As shown in fig. 6, a hardware architecture of an electronic device 20 is provided as a third embodiment of the present application. In this embodiment, the electronic device 20 may include, but is not limited to, a memory 21, a processor 22, and a network interface 23, which are communicatively connected to each other through a system bus. It is noted that fig. 6 only shows the electronic device 20 with components 21-23, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead. In this embodiment, the electronic device 20 may be the server 4.

The memory 21 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 21 may be an internal storage unit of the electronic device 20, such as a hard disk or a memory of the electronic device 20. In other embodiments, the memory 21 may also be an external storage device of the electronic device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the electronic device 20. Of course, the memory 21 may also include both an internal storage unit and an external storage device of the electronic device 20. In this embodiment, the memory 21 is generally used for storing the operating system installed in the electronic device 20 and various application software, such as the program code of the video view-finding place push system 60. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.

The processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is generally used to control the overall operation of the electronic device 20. In this embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, such as executing the video viewfinding push system 60.

The network interface 23 may include a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing a communication connection between the electronic apparatus 20 and other electronic devices.

Example four

Fig. 7 is a block diagram of a video viewfinder push system 60 according to a fourth embodiment of the present application. The video viewfinding push system 60 may be partitioned into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement embodiments of the present application. The program modules referred to in the embodiments of the present application refer to a series of computer program instruction segments capable of performing specific functions, and the following description will specifically describe the functions of each program module in the embodiments.

In this embodiment, the video viewfinding push system 60 includes:

the extracting module 600 is configured to obtain video data and extract basic video information from the video data.

The user may upload a video in the client 2, which sends it to the server 4, and the user may provide the title, brief introduction, and so on when uploading a video feature. In addition, during playback, viewers may leave messages as barrage (messages popped up when the video plays to a specific time) or comments. The video data includes, but is not limited to, the video feature itself together with its title, brief introduction, barrage, and comments. The extraction module 600 obtains the video data and extracts various items of basic video information from it. In this embodiment, the basic video information may include the title, brief introduction, episode list, and the like. The extraction module 600 may also extract basic video information by methods such as crawling the public information of video websites.

A configuration module 602, configured to match the extracted basic video information against the database, determine the associated view-finding place of the video according to the matching result, and configure the associated view-finding place for the video.

The database includes, but is not limited to, articles that may mention the view-finding places of video works, such as travel columns, entertainment columns, and travel notes, and other related information that may mention them. After the basic video information of the video is extracted, the basic video information (mainly the title here) together with view-finding-place-related words (such as "filming location", "shooting site", "holy land", etc.) are matched against the database as keywords. When the keywords are hit (that is, a text in the database matches both the title of the video and a keyword such as "filming location", "shooting site", or "holy land"), the view-finding place information corresponding to the video is obtained (that is, the database shows that the video has corresponding view-finding place information), and the associated view-finding place of the video is configured according to the matching result. The view-finding place configuration information comprises push time points (the video playback time points at which the view-finding place is pushed to the user) and the view-finding place information corresponding to each time point (including the view-finding place name, a brief description of its address, and so on).

In this embodiment, the matching result and corresponding processing may include the following cases:

(1) a key frame of the video can be matched from the picture of the view-finding place mentioned in an article text in the database, and the address information of the view-finding place can also be matched from the article text: the view-finding place configuration information is configured intelligently and directly, and is then passed to manual review;

(2) a key frame can be matched but the address information of the view-finding place cannot (for example, only the name of the view-finding place is mentioned): the push time point is marked, and the view-finding place information is then completed manually;

(3) a view-finding place is matched but no key frame can be matched: the video is handed over to manual processing as a video to be configured;

(4) no view-finding place can be matched (the keywords are not hit): the probability is calculated in another preset manner, and a video whose probability reaches the preset threshold is handed over to manual processing as a video to be configured.

The specific processing procedure of the configuration module 602 is shown in fig. 3 and related description, and is not described herein again.

The push module 604 is configured to push the view-finding place associated with the video to the user when the video is played, according to the view-finding place configuration information of the video.

When the video plays to a push time point, the view-finding place information corresponding to that push time point is obtained and pushed to the user (or the information may be obtained a certain time in advance and pushed at the push time point). In this embodiment, the push may take the form of a textual description or map coordinates displayed on the playing page of the video. An entry may be added within the video player page to show the view-finding places related to the video, in forms including but not limited to a detailed textual address or abbreviated map coordinates; clicking the entry opens a map page within the product that shows the positions of the view-finding places related to the video in detail. For example, fig. 4 is a schematic diagram of one such push manner: when the video plays to the push time point, a "holy land discovered" prompt is displayed at the push time point on the time axis, on the player interface, or at any position of the current page, and the map coordinates of the view-finding place corresponding to the push time point are displayed in the video detail section of the current page.

In this embodiment, in addition to displaying the map coordinates of the view-finding places (there may be one or more) on the playing page of the video according to their address information, a video icon corresponding to each view-finding place coordinate may be displayed on the map page, and the user can watch the video or a video segment (the segment containing the view-finding place) after clicking the video icon.

In other embodiments, the view-finding place information associated with the video may also be pushed to the user in other feasible manners, for example a prompt barrage, a voice prompt, or a small pop-up display box at a specific position, which is not limited herein.

In addition, when the view-finding place related to the video is pushed to the user, which view-finding place information is pushed can be determined according to user information. Specifically: no view-finding place information is pushed to some users (e.g., users who have opted out of receiving pushes); different types of view-finding place information are pushed according to personal attributes of the user such as geographical position and interests; and so on. In this way, different pushes can be performed for different users, better meeting user needs and improving the user experience.

The video view-finding place push system provided by this embodiment can associate video works with their actual view-finding places and clearly show the view-finding place information at the corresponding time points on the video playing page, making it convenient for the user to look up related information, thereby improving the online discovery rate of these offline view-finding places and improving the user experience.

EXAMPLE five

Fig. 8 is a block diagram of a video viewfinder push system 60 according to a fifth embodiment of the present application. In this embodiment, the video viewfinder push system 60 includes a receiving module 606 and a determining module 608 in addition to the extracting module 600, the configuring module 602 and the push module 604 in the fourth embodiment.

The receiving module 606 is configured to receive the video data uploaded by a user and the view-finding place configuration information of the video.

The user may upload a video in the client 2, which sends it to the server 4, and the user may provide the title, brief introduction, and so on when uploading a video feature. In addition, the user may upload the view-finding place configuration information of the video at the same time; that is, the push time points and the view-finding place information can be configured manually while uploading the video feature and its basic information. Such a video then no longer needs the view-finding place association operation, and the view-finding place can be pushed to the user directly during playback according to the uploaded configuration information.

The determining module 608 is configured to determine whether view-finding place configuration information has been uploaded for the video. If it has, the push module 604 is triggered to push the view-finding place configuration information to the user directly; if it has not, the extraction module 600 is triggered for further processing.

The video view-finding place push system provided by this embodiment allows the view-finding place configuration information of a video to be uploaded together with the video feature, so that the view-finding place is pushed to the user directly when the video is played, making it convenient for the user to check related information.

EXAMPLE six

The present application further provides another embodiment, which is a computer-readable storage medium storing a video-viewfinding push program, which is executable by at least one processor to cause the at least one processor to perform the steps of the video-viewfinding push method as described above.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.

The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications that can be made by the use of the equivalent structures or equivalent processes in the specification and drawings of the present application or that can be directly or indirectly applied to other related technologies are also included in the scope of the present application.
