Video playing method and device, electronic equipment and storage medium

Document No.: 245289 | Publication date: 2021-11-12

Reading note: This technology, "Video playing method and device, electronic equipment and storage medium," was designed and created by Qin Wenyu (秦文煜) on 2020-05-11. The present application relates to a video playing method and apparatus, an electronic device, and a storage medium in the field of information processing, and enables intelligent privacy protection during video playback. The specific scheme comprises: playing a video file, where the video file comprises K video clips and a content tag for each video clip, each video clip comprises N frames of images, and K and N are positive integers; detecting whether a restricted object is watching the video file, the restricted object being an object that does not have permission to view the privacy segments in the video file; if a restricted object is detected watching the video file, judging, according to the content tag of the video clip being played, whether that clip belongs to a privacy clip; and if the video clip being played belongs to a privacy clip, stopping playing it.

1. A video playback method, the method comprising:

playing the video file; the video file comprises K video clips and a content label of each video clip, each video clip comprises N frames of images, and K and N are positive integers;

detecting whether a restricted object views the video file; the restricted object is an object that does not have permission to view a private segment in the video file;

if a restricted object is detected watching the video file, judging, according to the content tag of the video clip being played, whether the video clip being played belongs to a privacy clip;

and if the video clip being played belongs to the privacy clip, stopping playing the video clip being played.

2. The method according to claim 1, wherein after the stopping playing the video clip being played if the video clip being played belongs to the privacy clip, the method further comprises:

playing a preset media file; the preset media file comprises: a preset video, a picture, or a non-privacy clip in the video file.

3. The method of claim 1 or 2, wherein prior to said playing the video file, the method further comprises:

acquiring a video file to be processed;

dividing the video file to be processed into the K video segments, and acquiring the image characteristic and the audio characteristic of each video segment in the K video segments; the frame number of the video frames included in each video clip is less than or equal to a preset frame number;

determining a content label of each of the K video clips according to the image characteristics and the audio characteristics of each of the K video clips, and generating the video file comprising the content label;

the content tag of each video clip is one of a plurality of preset content tags, the preset content tags are used for indicating the category of the video clip, and the preset content tags include any one of the following: normal, laughter, violence, emotional and vulgar.

4. The method of claim 3, wherein determining the content label of each of the K video segments according to the image feature and the audio feature of each of the K video segments comprises:

determining image label information of the ith video clip according to the image characteristics of the ith video clip in the K video clips; the image tag information is used for indicating the probability that the image of the ith video clip belongs to the category indicated by each preset content tag in the plurality of preset content tags; i takes values from 1 to K in turn;

determining audio label information of the ith video clip according to the audio characteristics of the ith video clip; the audio tag information is used for indicating the probability that the audio of the ith video clip belongs to the category indicated by each preset content tag in the plurality of preset content tags;

calculating the probability that the ith video clip belongs to the category indicated by each preset content tag in the plurality of preset content tags according to the image tag information and the preset image weight of the ith video clip and the audio tag information and the preset audio weight of the ith video clip;

and determining the preset content label corresponding to the category with the highest probability as the content label of the ith video clip.

5. The method of claim 1 or 2, wherein prior to said playing the video file, the method further comprises:

in response to a video playing instruction or a tag setting instruction, displaying the plurality of preset content tags;

receiving a tag type corresponding to each preset content tag in the plurality of preset content tags; the tag type comprises a privacy tag or a conventional tag;

wherein, the determining whether the video clip being played belongs to the privacy clip according to the content tag of the video clip being played includes:

judging, according to the tag type corresponding to each preset content tag in the plurality of preset content tags, whether the content tag of the video clip being played is the privacy tag or the conventional tag;

if the content tag of the video clip being played is the privacy tag, determining that the video clip being played belongs to the privacy clip;

and if the content tag of the video clip being played is the conventional tag, determining that the video clip being played does not belong to the privacy clip.

6. The method of claim 1 or 2, wherein the detecting whether the video file is viewed by a restricted object comprises:

acquiring a scene image in the current environment for playing the video file;

if the face image included in the scene image is a first face image, determining that a restricted object is detected watching the video file; the first face image is a pre-configured face image of the restricted object;

or, if the face image included in the scene image is not a second face image, determining that a restricted object is detected watching the video file; the second face image is a face image of a user with a preset watching permission.

7. The method according to claim 6, wherein after the stopping playing the video clip being played if the video clip being played belongs to the privacy clip, the method further comprises:

acquiring position information of the video frame at which playing stopped; the video frame at which playing stopped belongs to the video clip being played;

acquiring a scene image in the current environment;

detecting whether the scene image comprises the first face image; if the scene image does not comprise the first face image, jumping, according to the position information of the video frame at which playing stopped, to that video frame in the video file and resuming playing the video file;

or, detecting whether the scene image comprises only the second face image; if so, jumping, according to the position information of the video frame at which playing stopped, to that video frame in the video file and resuming playing the video file.

8. A video playback apparatus, comprising:

the video control module is used for playing a video file; the video file comprises K video clips and a content label of each video clip, each video clip comprises N frames of images, and K and N are positive integers;

the judging module is used for detecting whether a restricted object watches the video file; the restricted object is an object that does not have permission to view a privacy clip in the video file; and, if a restricted object is detected watching the video file, for judging, according to the content tag of the video clip being played, whether the video clip being played belongs to a privacy clip;

the video control module is further configured to stop playing the video clip being played if the video clip being played belongs to the privacy clip.

9. An electronic device, comprising: a processor and a memory for storing processor-executable instructions;

wherein the processor is configured to execute the instructions to cause the electronic device to perform the video playback method of any of claims 1-7.

10. A computer-readable storage medium having computer instructions stored thereon, which, when run on an electronic device, cause the electronic device to perform the video playback method of any of claims 1-7.

Technical Field

The present application relates to the field of information processing, and in particular, to a video playing method and apparatus, an electronic device, and a storage medium.

Background

Currently, when a user watches a video on an intelligent terminal, the video may include privacy segments that are inconvenient for others to see (for example, segments unsuitable for minors, or personally private segments). The user therefore has to operate the video manually (for example, pause or skip playing) according to environmental conditions such as whether bystanders are nearby and who those bystanders are, so that privacy segments are not played while others are watching. In other words, the user must manually control skipping or pausing of the video; the degree of intelligence of video playing is low, the user cannot watch the video attentively, and the convenience of watching the video is reduced.

Disclosure of Invention

The embodiment of the application provides a video playing method and device, electronic equipment and a storage medium, which can intelligently perform privacy protection in the video playing process.

In order to achieve the technical purpose, the embodiment of the application adopts the following technical scheme:

In a first aspect, an embodiment of the present application provides a video playing method, where the method includes: playing a video file; the video file comprises K video clips and a content tag of each video clip, each video clip comprises N frames of images, and K and N are positive integers. Detecting whether a restricted object watches the video file; the restricted object is an object that does not have permission to view the privacy segments in the video file. If a restricted object is detected watching the video file, judging, according to the content tag of the video clip being played, whether the video clip being played belongs to a privacy clip; and if the video clip being played belongs to the privacy clip, stopping playing the video clip being played.
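The decision logic of the first aspect can be illustrated with a minimal sketch; the function and parameter names below are illustrative only and are not part of the claimed method:

```python
def should_stop_playback(current_tag, privacy_tags, restricted_viewer_present):
    """Return True when playback of the current video clip should stop:
    a restricted object is watching AND the clip's content tag is one of
    the tags configured as privacy tags."""
    if not restricted_viewer_present:
        return False  # no restricted object watching: keep playing
    return current_tag in privacy_tags
```

For instance, with privacy tags {'violence', 'vulgar'}, a clip tagged 'normal' keeps playing even when a restricted object is present.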

In a possible embodiment, after the stopping playing the video segment being played if the video segment being played belongs to the privacy segment, the method further includes: playing a preset media file; the preset media file includes: a preset video, a picture, or a non-privacy segment in the video file.

In another possible embodiment, before playing the video file, the method further includes: acquiring a video file to be processed; dividing the video file to be processed into K video segments, and acquiring the image features and the audio features of each of the K video segments, where the number of video frames included in each video segment is less than or equal to a preset frame number. Then, a content tag of each of the K video segments is determined according to the image features and audio features of each segment, and the video file including the content tags is generated. The content tag of each video segment is one of a plurality of preset content tags, the preset content tags are used for indicating the category of the video segment, and the preset content tags include any one of the following: normal, laughter, violence, emotional, and vulgar.
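The division of the video file to be processed into K segments, each no longer than the preset frame number, can be sketched as follows (illustrative only; frame decoding and feature extraction are omitted):

```python
def split_into_segments(frames, preset_frame_number):
    """Divide a sequence of decoded frames into K video segments, each
    containing at most preset_frame_number frames."""
    if preset_frame_number <= 0:
        raise ValueError("preset frame number must be positive")
    return [frames[i:i + preset_frame_number]
            for i in range(0, len(frames), preset_frame_number)]
```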

In another possible implementation, determining a content tag of each of the K video segments according to the image feature and the audio feature of each of the K video segments includes: determining image tag information of the ith video clip according to the image features of the ith video clip in the K video clips; the image tag information is used for indicating the probability that the images of the ith video clip belong to the category indicated by each preset content tag in the plurality of preset content tags; and i takes each value from 1 to K in sequence. Determining audio tag information of the ith video clip according to the audio features of the ith video clip; the audio tag information is used for indicating the probability that the audio of the ith video clip belongs to the category indicated by each preset content tag in the plurality of preset content tags. Then, according to the image tag information and the preset image weight of the ith video clip and the audio tag information and the preset audio weight of the ith video clip, the probability that the ith video clip belongs to the category indicated by each preset content tag in the plurality of preset content tags is calculated. Finally, the preset content tag corresponding to the category with the highest probability is determined as the content tag of the ith video clip.
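The weighted combination of image tag information and audio tag information described above can be sketched as follows; the default weights 0.6/0.4 are hypothetical, not values given by the application:

```python
def fuse_content_tag(image_probs, audio_probs, image_weight=0.6, audio_weight=0.4):
    """Combine the per-category probabilities from the image branch and the
    audio branch with preset weights, then return the preset content tag
    whose category has the highest combined probability."""
    combined = {
        tag: image_weight * image_probs[tag] + audio_weight * audio_probs[tag]
        for tag in image_probs
    }
    return max(combined, key=combined.get)
```

With image-dominant weights a segment whose images look "normal" keeps that tag even if its audio suggests "violence"; shifting weight toward audio flips the result.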

In another possible embodiment, before playing the video file, the method further includes: in response to a video playing instruction or a tag setting instruction, displaying a plurality of preset content tags; and receiving a tag type corresponding to each preset content tag in the plurality of preset content tags, the tag type including a privacy tag or a regular tag. Judging whether the video clip being played belongs to the privacy clip according to the content tag of the video clip being played includes: judging, according to the tag type corresponding to each preset content tag, whether the content tag of the video clip being played is a privacy tag or a regular tag. If the content tag of the video clip being played is a privacy tag, determining that the video clip being played belongs to the privacy clip; and if the content tag of the video clip being played is a regular tag, determining that the video clip being played does not belong to the privacy clip.
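The user-selected mapping from preset content tags to tag types can be sketched as follows (the names are illustrative only):

```python
def is_privacy_clip(content_tag, tag_types):
    """tag_types maps each preset content tag to 'privacy' or 'regular',
    as chosen by the user on the tag-setting interface; a clip belongs to
    the privacy clips exactly when its content tag is a privacy tag."""
    return tag_types.get(content_tag, "regular") == "privacy"
```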

In another possible implementation, the detecting whether a restricted object watches the video file includes: first, collecting a scene image of the current environment in which the video file is played. Then, if the face image included in the scene image is a first face image, determining that a restricted object is detected watching the video file; the first face image is a pre-configured face image of the restricted object. Or, if the face image included in the scene image is not a second face image, determining that a restricted object is detected watching the video file; the second face image is a face image of a user with a preset watching permission.
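The two alternative detection checks (matching against a pre-configured restricted face, or finding any face outside the authorized set) can be sketched as follows; face matching itself is abstracted here to identifier comparison, not a real recognition pipeline:

```python
def restricted_viewer_detected(scene_faces, restricted_faces=None, authorized_faces=None):
    """First check: any face in the scene matches a pre-configured
    restricted (first) face image. Second check: any face in the scene is
    not among the authorized (second) face images."""
    if restricted_faces is not None:
        return any(face in restricted_faces for face in scene_faces)
    if authorized_faces is not None:
        return any(face not in authorized_faces for face in scene_faces)
    return False
```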

In another possible embodiment, before detecting whether a restricted object views the video file, the method further comprises: receiving and storing the first face image; or receiving and storing the second face image.

In another possible embodiment, after stopping playing the video segment being played because it belongs to the privacy segment, the method further includes: first, acquiring position information of the video frame at which playing stopped; the video frame at which playing stopped belongs to the video clip that was being played. Then, collecting a scene image of the current environment. Finally, detecting whether the scene image includes the first face image; if it does not, jumping, according to the position information, to the video frame at which playing stopped and resuming playing the video file. Or, detecting whether the scene image includes only the second face image; if so, jumping, according to the position information, to the video frame at which playing stopped and resuming playing the video file.
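The resume logic can be sketched as follows (illustrative only; the stopped position is abstracted as a frame index):

```python
def resume_frame(stopped_frame_index, scene_faces, restricted_faces):
    """Return the frame index at which to resume playing, or None to keep
    playback stopped while a restricted face is still in the scene."""
    if any(face in restricted_faces for face in scene_faces):
        return None  # restricted object still watching: stay stopped
    return stopped_frame_index  # jump back to where playing stopped
```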

In a second aspect, an embodiment of the present application further provides a video playing apparatus, where the apparatus includes a video control module and a judging module. The video control module is used for playing a video file; the video file comprises K video clips and a content tag of each video clip, each video clip comprises N frames of images, and K and N are positive integers. The judging module is used for detecting whether a restricted object watches the video file; the restricted object is an object that does not have permission to view the privacy segments in the video file; and, if a restricted object is detected watching the video file, for judging, according to the content tag of the video clip being played, whether the video clip being played belongs to a privacy clip. The video control module is further used for stopping playing the video clip being played if the video clip being played belongs to the privacy clip.

In a possible implementation manner, the video control module is further configured to play a preset media file after playing of the video clip being played is stopped because it belongs to the privacy clip; the preset media file includes: a preset video, a picture, or a non-privacy segment in the video file.

In another possible embodiment, the apparatus further comprises a video tag module. And the video label module is used for acquiring the video file to be processed before the video file is played. And dividing the video file to be processed into K video segments, and acquiring the image characteristics and the audio characteristics of each video segment in the K video segments. Then, a content tag of each of the K video clips is determined according to the image feature and the audio feature of each of the K video clips, and a video file including the content tag is generated. The frame number of the video frames in each video clip is less than or equal to a preset frame number; the content label of each video clip is one of a plurality of preset content labels, the preset content labels are used for indicating the category of the video clip, and the preset content labels include any one of the following: normal, laughter, violence, emotional and vulgar.

In another possible embodiment, the video tag module being configured to determine a content tag of each of the K video segments according to the image feature and the audio feature of each segment includes: the video tag module is specifically configured to determine image tag information of the ith video clip according to the image features of the ith video clip in the K video clips, the image tag information indicating the probability that the images of the ith video clip belong to the category indicated by each preset content tag in the plurality of preset content tags, where i takes each value from 1 to K in sequence; determine audio tag information of the ith video clip according to the audio features of the ith video clip, the audio tag information indicating the probability that the audio of the ith video clip belongs to the category indicated by each preset content tag; then calculate, according to the image tag information and the preset image weight of the ith video clip and the audio tag information and the preset audio weight of the ith video clip, the probability that the ith video clip belongs to the category indicated by each preset content tag; and finally determine the preset content tag corresponding to the category with the highest probability as the content tag of the ith video clip.

In another possible implementation manner, the judging module is further configured to, before the video file is played, display a plurality of preset content tags in response to a video playing instruction or a tag setting instruction, and receive a tag type corresponding to each preset content tag in the plurality of preset content tags; the tag type includes a privacy tag or a regular tag. The judging module being configured to judge, according to the content tag of the video clip being played, whether the video clip being played belongs to the privacy clip includes: the judging module is specifically configured to judge, according to the tag type corresponding to each preset content tag, whether the content tag of the video clip being played is a privacy tag or a regular tag; if the content tag of the video clip being played is the privacy tag, determine that the video clip being played belongs to the privacy clip; and if the content tag of the video clip being played is the regular tag, determine that the video clip being played does not belong to the privacy clip.

In another possible implementation manner, the judging module being configured to detect whether a restricted object views the video file includes: the judging module is specifically configured to collect a scene image of the current environment in which the video file is played; if the face image included in the scene image is the first face image, determine that a restricted object is detected watching the video file; or, if the face image included in the scene image is not the second face image, determine that a restricted object is detected watching the video file. The first face image is a pre-configured face image of the restricted object; the second face image is a face image of a user with a preset watching permission.

In another possible implementation, the judging module is further configured to, before detecting whether a restricted object views the video file, receive and store the first face image; or receive and store the second face image.

In another possible implementation manner, the video control module is further configured to, after stopping playing the video clip being played because it belongs to the privacy clip, acquire position information of the video frame at which playing stopped; the video frame at which playing stopped belongs to the video clip that was being played. The judging module is further configured to collect a scene image of the current environment and detect whether the scene image includes the first face image; the video control module is further configured to, if the scene image does not include the first face image, jump, according to the position information of the video frame at which playing stopped, to that video frame in the video file and resume playing the video file. Alternatively, the judging module is further configured to collect a scene image of the current environment and detect whether the scene image includes only the second face image; the video control module is further configured to, if the scene image includes only the second face image, jump, according to the position information of the video frame at which playing stopped, to that video frame in the video file and resume playing the video file.

In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor and a memory for storing processor-executable instructions;

wherein the processor is configured to execute the instructions such that the electronic device performs the video playing method as described in the first aspect and any one of its possible embodiments.

In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium, where computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed on an electronic device, the electronic device is caused to perform a video playing method according to the first aspect and any possible implementation manner thereof.

In a fifth aspect, embodiments of the present application further provide a computer program product, which includes one or more instructions that can be executed on an electronic device, so that the electronic device executes a video playing method according to the first aspect and any possible implementation manner thereof.

The method provided by the embodiments of the present application can play a video file. The video file includes K video clips and a content tag for each video clip; the content tag of a video clip is used to judge whether the clip is a privacy clip. During playing of the video file, whether a restricted object is watching the video file can be detected. If a restricted object is detected watching the video file, whether the video clip being played belongs to a privacy clip can be judged according to its content tag. The restricted object is a user who does not have permission to view the privacy segments in the video file. If a restricted object is watching the video file while a privacy clip is being played, the privacy clip may be seen by the restricted object, which may disclose the user's privacy. Therefore, in the embodiments of the present application, if a restricted object is watching the video file and the clip being played belongs to a privacy clip, playing of the privacy clip is automatically stopped. In this way, the method effectively protects the privacy clip being played from being watched by the restricted object, achieving intelligent privacy protection during video playing.

Drawings

Fig. 1 is a schematic diagram of an implementation environment related to a video playing method provided in an embodiment of the present application;

fig. 2 is a first flowchart of a video playing method provided in an embodiment of the present application;

fig. 3A is a first schematic diagram of an opening interface of a video playing client according to an embodiment of the present application;

fig. 3B is a schematic diagram of a video playing interface of a video playing client according to an embodiment of the present application;

fig. 3C is a second schematic diagram of an opening interface of a video playing client according to an embodiment of the present application;

fig. 4 is a flowchart for acquiring a content tag of a video segment according to an embodiment of the present application;

fig. 5A is a schematic diagram of a tag setting interface of a video playing client according to an embodiment of the present application;

fig. 5B is a third schematic diagram of an opening interface of a video playing client according to an embodiment of the present application;

fig. 5C is a schematic diagram of a face registration interface of a video playing client according to an embodiment of the present application;

FIG. 5D is a schematic diagram of an interface for video jumping provided by an embodiment of the present application;

fig. 5E is a schematic diagram of an alternative media setting interface of a video playing client according to an embodiment of the present application;

fig. 6 is a second flowchart of a video playing method according to an embodiment of the present application;

fig. 7 is a first flowchart of another video playing method provided in an embodiment of the present application;

fig. 8 is a second flowchart of another video playing method provided in an embodiment of the present application;

fig. 9 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present application;

fig. 10 is a schematic structural diagram of another video playing apparatus provided in the embodiment of the present application;

fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.

The embodiment of the application provides a video playing method, by which the possibility of disclosure of privacy segments can be reduced, and privacy protection can be intelligently performed in the video playing process.

Embodiments of the present application will be described in detail below with reference to the accompanying drawings.

Please refer to fig. 1, which illustrates an implementation environment diagram of a video playing method according to an embodiment of the present application. As shown in fig. 1, the implementation environment may include a server 101 and a plurality of terminal devices, such as a terminal device 102 (e.g., a mobile phone) and a terminal device 103 (e.g., a notebook computer).

The server 101 may be a server that provides video files for a plurality of terminal devices. Specifically, the terminal device (e.g., terminal device 102, terminal device 103) may download the video file directly from the server 101; the video file includes a video file of a network video.

Or, the terminal device uploads a locally stored video file to the server 101, where the locally stored video file is a video file to be processed; the server 101 processes (for example, tags) the video file to be processed, obtains a video file (i.e., a video file of a video to be played), and pushes the video file to the terminal device. And the terminal equipment receives and plays the video file.

Or, the terminal device processes (for example, applies a content tag) a locally stored video file or a video file of a network video to obtain a video file, and then plays the video file.

For example, the terminal device in the embodiment of the present application may be a mobile phone, a video player, a smart television, a tablet computer, a desktop, a laptop, a handheld computer, a notebook, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) \ Virtual Reality (VR) device, and the like, and the embodiment of the present application does not particularly limit the specific form of the terminal device.

It should be noted that the video playing method provided in the embodiments of the present application may be applied to any terminal device that includes a camera (such as the terminal device 102 or the terminal device 103). The terminal device may also be referred to as an electronic device. The video playing method may be executed by a video playing apparatus, and the video playing apparatus may be the electronic device itself; an application (APP) installed on the electronic device that provides a video playing function; a central processing unit (CPU) of the electronic device; or a control module in the electronic device for executing video playing.

Terminal devices are now so convenient that a user can watch videos anytime and anywhere, and a video may include privacy segments that are inconvenient for others to see. To prevent surrounding bystanders from seeing the privacy segments, the user has to judge, while a privacy segment is being played, whether any bystanders (such as acquaintances or strangers) have intentionally or unintentionally come close. Upon spotting a bystander, the user must immediately pause the video manually, exit the playing interface, or switch to some other innocuous clip. Clearly, this manner of playing video is not intelligent. Moreover, because the user must both watch for bystanders and control playback manually, the user may fail to notice a bystander, operate too late, or operate incorrectly, so that the privacy segment is seen by the bystander and leaked. In addition, after the bystander leaves, the user has to find the paused position in the video again to continue playing from there. The degree of intelligence of video playing is thus low. All of these problems prevent the user from watching the video attentively and degrade the viewing experience.

In summary, the above video playing process may leak privacy segments and requires frequent user operations; its degree of intelligence is low and it degrades the user's viewing experience. The video playing method provided by the embodiments of the present application can solve these problems in the related art: it can intelligently protect privacy during video playing and intelligently resume playing, thereby improving the user's viewing experience.

Please refer to fig. 2, which is a flowchart illustrating a video playing method according to an embodiment of the present disclosure. As shown in fig. 2, the method may include steps 201-204.

Step 201: playing the video file; the video file comprises K video clips and a content label for each video clip, each video clip comprises N frames of images, and K and N are both positive integers.

The video playing apparatus (such as a terminal device) may obtain the video file of the video to be played from a local storage unit, or obtain it from a server, and then load the video file to play it (i.e., play the video to be played).

It should be noted that the local storage unit of the video playing apparatus may include a plurality of video files.

For example, the video playing apparatus may receive an opening instruction for the video playing client from a user. In response to the opening instruction, the video playing apparatus may run the video playing client and display its opening interface 31, as shown in fig. 3A. The opening interface 31 includes icons of a plurality of video files, for example, Video1, Video2, and Video3. On the opening interface 31, the user can input an operation for controlling the video playing apparatus to play a video; this operation triggers a video playing instruction. In response to the video playing instruction, the video playing apparatus acquires and loads the video file indicated by the instruction, jumps from the opening interface 31 to the video playing interface 32, and plays the video file on the video playing interface 32, as shown in fig. 3B.

Illustratively, the video playing apparatus may also pop up a switch option 311 for the smart video playing function on the opening interface 31, as shown in fig. 3C. The switch option 311 includes two states, on and off, through which the user can turn the smart video playing function on or off. When the video playing apparatus receives the video playing instruction and the smart video playing function is on, it obtains and loads the video file indicated by the instruction and jumps from the opening interface 31 to the video playing interface 32. When the smart video playing function is off, it obtains and loads the normal video file indicated by the instruction, where the normal video file does not include content tags.

In some embodiments, the video playing apparatus obtains a video file to be processed, divides it into K video segments, and obtains the image features and audio features of each of the K video segments, where the number of video frames in each video segment is less than or equal to a preset frame number. Then, a content tag of each of the K video segments is determined according to its image features and audio features, and a video file including the content tags is generated. The content tag of each video segment is one of a plurality of preset content tags, each of which indicates a category of the video segment: normal, funny, violence, sentiment, or vulgar.

Specifically, the video playing apparatus obtains the video file to be processed of the video to be played. First, the video file to be processed is divided according to the preset frame number to obtain K video segments. The ith video segment is then taken from the K video segments in turn, and its image features and audio features are extracted, where i ranges over {1, 2, ..., K}. The content tag of the ith video segment is determined according to its image features and audio features. Finally, a video file is generated from the K video segments and their respective content tags.

The number of video frames included in each video segment is less than or equal to a preset frame number, which may be one frame or multiple frames.
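The segmentation rule above (every segment holds at most the preset frame number of frames, and only the last segment may fall short) can be sketched as follows; the function name and the concrete frame counts are illustrative, not part of the embodiment:

```python
import math

def split_into_segments(num_frames: int, preset_frames: int) -> list[tuple[int, int]]:
    """Split a video of num_frames frames into K segments of at most
    preset_frames frames each; only the last segment may be shorter.
    Returns (start, end) frame ranges with the end index exclusive."""
    k = math.ceil(num_frames / preset_frames)
    return [(i * preset_frames, min((i + 1) * preset_frames, num_frames))
            for i in range(k)]

# A 250-frame video with a preset frame number of 100 yields K = 3 segments:
# frames [0, 100), [100, 200), and a shorter final segment [200, 250)
segments = split_into_segments(num_frames=250, preset_frames=100)
```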

Illustratively, the K video segments include the 1st through Kth video segments ordered chronologically. Each of the 1st through (K-1)th video segments contains exactly the preset frame number of video frames, and the Kth video segment contains fewer than the preset frame number of frames.

Illustratively, the plurality of preset content tags may include any of: a normal tag indicating that the video segment is normal content, a funny tag indicating funny content, a violence tag indicating violent content, a sentiment tag indicating sentimental content, a vulgar tag indicating vulgar content, and the like.

Note that a plurality of preset content tags may be represented by different identifiers, for example, a normal tag is represented by L1, and a funny tag is represented by L2. The embodiment of the application does not limit the form of the plurality of preset content tags.

In some embodiments, the video playing apparatus determines, according to the image features of the ith video segment of the K video segments, image tag information of the ith video segment, where the image tag information indicates the probability that the image of the ith video segment belongs to the category indicated by each of the plurality of preset content tags, and i takes each value from 1 to K in turn. Then, according to the audio features of the ith video segment, audio tag information of the ith video segment is determined, indicating the probability that the audio of the ith video segment belongs to the category indicated by each of the preset content tags. Next, the probability that the ith video segment belongs to the category indicated by each preset content tag is calculated from the image tag information and a preset image weight together with the audio tag information and a preset audio weight. Finally, the preset content tag corresponding to the category with the highest probability is determined to be the content tag of the ith video segment.

Specifically, the video playing apparatus may include an image classification model and an audio classification model; the image classification model has the capability of extracting image tag information from the image features of a video segment, and the audio classification model has the capability of extracting audio tag information from the audio features of a video segment. With i taking each value from 1 to K in turn, the apparatus inputs the image features of the ith video segment into the image classification model, runs it, and outputs the image tag information of the ith segment; likewise, it inputs the audio features of the ith video segment into the audio classification model, runs it, and outputs the audio tag information. The image tag information, the preset image weight, the audio tag information, and the preset audio weight of the ith segment are then combined to obtain the probabilities, and the preset content tag with the maximum probability is taken as the content tag of the ith video segment.

Illustratively, as shown in the flowchart of fig. 4 for obtaining the content tag of the video segment, the image classification model may be an image classifier, and the audio classification model may be an audio classifier. Firstly, extracting image characteristics and audio characteristics from a video clip; inputting the image characteristics into an image classifier, and outputting image label information by the image classifier; and inputting the audio features to an audio classifier, the audio classifier outputting audio tag information. And finally, according to the preset image weight and the preset audio weight, carrying out weighted summation on the image label information and the audio label information to obtain a plurality of probabilities of the video clip corresponding to a plurality of preset content labels, and selecting the preset content label with the maximum probability as the content label of the video clip.

Further, before the image classification model and the audio classification model are used (i.e., before the image features of the ith video segment are input into the image classification model to output its image tag information, and before the audio features of the ith video segment are input into the audio classification model to output its audio tag information), the video playing apparatus may obtain a plurality of video segment samples and a content label for each sample, and then obtain the image features and audio features of each sample. A deep learning model is trained with the image features and content labels of the samples to obtain the image classification model, and a deep learning model is trained with the audio features and content labels of the samples to obtain the audio classification model. The content label of each video segment sample may be specified by the user.

In some embodiments, the image tag information includes the plurality of preset content tags and a plurality of image probabilities in one-to-one correspondence: each preset content tag and its corresponding image probability represent the probability that the image of the ith video segment belongs to the category indicated by that tag. Similarly, the audio tag information includes the plurality of preset content tags and a plurality of audio probabilities in one-to-one correspondence: each preset content tag and its corresponding audio probability represent the probability that the audio of the ith video segment belongs to the category indicated by that tag. Both the image probabilities and the audio probabilities are values in the range [0, 1].

Illustratively, the image tag information may include: the image probability corresponding to the normal label is 0, the image probability corresponding to the violent label is 0.8, the image probability corresponding to the emotional label is 0.1, and the image probability corresponding to the vulgar label is 0.7. Taking the probability of the image corresponding to the normal label as 0 as an example, it represents that the probability of the image of the ith video segment belonging to the normal content indicated by the normal label is 0.

In some embodiments, the video playing apparatus may input the image tag information of the ith video segment, the preset image weight, the audio tag information of the ith video segment, and the preset audio weight into the weighted summation model, so as to obtain a probability that the ith video segment belongs to the category indicated by each of the plurality of preset content tags; wherein, the weighted summation model is shown as the following formula (1):

Z_j = P_j × W_P + V_j × W_V,  j = 1, 2, ..., J    (1)

where J is the total number of the plurality of preset content tags; j denotes the jth of the J preset content tags; Z_j is the probability that the ith video segment belongs to the category indicated by the jth content tag; P_j is the image probability corresponding to the jth content tag; V_j is the audio probability corresponding to the jth content tag; W_P is the preset image weight; and W_V is the preset audio weight.
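A minimal sketch of the weighted fusion in formula (1), assuming the tag information is held as tag-to-probability mappings; the concrete tags and probability values below are illustrative, loosely following the earlier example:

```python
def fuse_labels(image_probs: dict, audio_probs: dict,
                w_image: float, w_audio: float) -> str:
    """Compute Z_j = P_j * W_P + V_j * W_V for every preset content tag j
    and return the tag whose fused probability is highest."""
    fused = {tag: image_probs[tag] * w_image + audio_probs[tag] * w_audio
             for tag in image_probs}
    return max(fused, key=fused.get)

# Illustrative image/audio tag information (hypothetical probability values)
image_probs = {"normal": 0.0, "violence": 0.8, "sentiment": 0.1, "vulgar": 0.7}
audio_probs = {"normal": 0.1, "violence": 0.6, "sentiment": 0.2, "vulgar": 0.3}
content_tag = fuse_labels(image_probs, audio_probs, w_image=0.5, w_audio=0.5)
# "violence": fused probability 0.8*0.5 + 0.6*0.5 = 0.7, the maximum
```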

In some embodiments, the video playing apparatus may set the preset image weight and the preset audio weight to default values, for example, 0.5 each. Alternatively, it may detect the current audio playing mode and set the preset image weight and the preset audio weight according to that mode.

The current audio playing mode may include a play-out (speaker) mode, a receiver mode, and an earphone mode. In the play-out mode, the preset image weight may be set smaller than the preset audio weight; for example, the preset image weight is 0.3 and the preset audio weight is 0.7. In the receiver mode and the earphone mode, the preset image weight may be set larger than the preset audio weight; for example, the preset image weight is 0.6 and the preset audio weight is 0.4.
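The mode-dependent weight selection described above can be sketched as follows; the mode names and concrete weight values are illustrative defaults consistent with the reasoning in the surrounding paragraphs (the audio weight dominates in play-out mode, the image weight dominates in receiver or earphone mode):

```python
def weights_for_mode(mode: str) -> tuple[float, float]:
    """Return (preset image weight, preset audio weight) for the current
    audio playing mode. In play-out mode onlookers hear the audio first,
    so the audio weight dominates; in receiver/earphone mode the images
    are what an onlooker perceives, so the image weight dominates."""
    if mode == "play-out":
        return 0.3, 0.7
    if mode in ("receiver", "earphone"):
        return 0.6, 0.4
    return 0.5, 0.5  # default: equal weights

w_image, w_audio = weights_for_mode("play-out")
```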

It will be appreciated that when a video segment is played in the play-out mode, an onlooker can directly hear the sound of the video sooner than they can see its images. Therefore, the preset audio weight can be set greater than the preset image weight, so that the content tag of the video segment is determined mainly by the audio tag information; that is, the content tag more accurately indicates the category to which the audio belongs. Whether the video segment is a privacy segment is then judged based on such a content tag, i.e., mainly by whether the audio of the video segment is private. When the audio of the video segment is private, playing of the segment can be controlled, which reduces the possibility that an onlooker hears the sound of the privacy segment first and thus better reduces the possibility that the privacy segment is leaked.

In addition, when a video segment is played in the receiver mode or the earphone mode, an onlooker can see the video images sooner than they can hear its sound. Therefore, the preset image weight can be set greater than the preset audio weight, so that the content tag of the video segment is determined mainly by the image tag information; that is, the content tag more accurately indicates the category to which the images belong. Whether the video segment is a privacy segment is then judged based on such a content tag, i.e., mainly by whether the images of the video segment are private. When the images of the video segment are private, playing of the segment can be controlled, which reduces the possibility that an onlooker sees the images of the privacy segment first and thus better reduces the possibility that the privacy segment is leaked.

In some embodiments, the video playback device includes a tag classification model having the ability to determine content tags for video segments. The video playing device inputs K video clips, operates the label classification model and outputs the content label of each video clip in the K video clips.

Further, the video playing apparatus may obtain the plurality of video segment samples and the content tag of each of the plurality of video segment samples before inputting the K video segments and running the tag classification model. And training the deep learning model by using the plurality of video segment samples and the content label of each video segment sample in the plurality of video segment samples to obtain a label classification model.

In some embodiments, the video playback device may display a plurality of preset content tags in response to a video playback instruction or a tag setting instruction before playing back the video file. Then, receiving a label type corresponding to each preset content label in a plurality of preset content labels; the tag type includes a privacy tag or a regular tag.

The video playing apparatus may display a tag setting interface in response to the video playing instruction or the tag setting instruction before step 201, and display a plurality of preset content tags on the tag setting interface. The user can set a plurality of preset content labels in a grouping manner on the label setting interface, and the video playing device receives a label setting instruction of the user. The tag setting instruction is used for instructing to set each preset content tag in a plurality of preset content tags as a privacy tag or a regular tag. The video playing device may set each preset content tag of the plurality of preset content tags as a privacy tag or a regular tag according to the tag setting instruction.

Illustratively, the video playing apparatus displays the opening interface 31 of the video playing client, and upon receiving a video playing instruction or a tag setting instruction, jumps from the opening interface 31 to the tag setting interface 33. As shown in fig. 5A, a plurality of preset content tags 331, grouping categories 332, and a grouping completion option 333 are displayed on the tag setting interface 33. The plurality of preset content tags 331 may include a normal tag, a funny tag, a violence tag, a sentiment tag, and a vulgar tag. The grouping categories 332 include a regular group and a privacy group. The user may drag some of the preset content tags 331 into the regular group, which sets those tags as regular tags, and drag others into the privacy group, which sets those tags as privacy tags. After setting all the preset content tags, the user may click the grouping completion option 333, and the video playing apparatus jumps from the tag setting interface 33 to the video playing interface 32.

It should be noted that the grouping category may be set according to the disclosure degree of the content, and the grouping category may include more groups, which is not limited in the embodiment of the present application. In addition, in addition to the above grouping manner, other grouping manners may be adopted, for example, one of the grouping categories is selected for each preset content tag, and the embodiment of the present application is not limited.

In some embodiments, the video playing apparatus may further set each of the plurality of preset content tags as a privacy tag or a regular tag for each of the plurality of preset scenes. Wherein the privacy tags and the conventional tags of the plurality of preset scenes are different. The plurality of preset scenes may include a plurality of play environments or a plurality of play objects; the plurality of playback environments may include indoor, outdoor, noisy environments, quiet environments, and the like; the plurality of play objects may include strangers, minors, specific objects, and the like.

Illustratively, the plurality of preset content tags may include a normal tag, a funny tag, a violence tag, a sentiment tag, and a vulgar tag. The indoor privacy tags include the sentiment tag and the vulgar tag, and the indoor regular tags include the normal tag, the funny tag, and the violence tag. The outdoor privacy tags include the funny tag, the violence tag, the sentiment tag, and the vulgar tag, and the outdoor regular tags include only the normal tag.
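The scene-specific grouping in this example can be represented as a simple mapping from each preset scene to its set of privacy tags (everything else in the tag set is a regular tag); the names below are illustrative:

```python
# All preset content tags in this illustrative configuration
ALL_TAGS = {"normal", "funny", "violence", "sentiment", "vulgar"}

# For each preset scene, the tags that count as privacy tags;
# the remaining tags are regular tags for that scene.
PRIVACY_TAGS_BY_SCENE = {
    "indoor":  {"sentiment", "vulgar"},
    "outdoor": {"funny", "violence", "sentiment", "vulgar"},
}

def is_privacy_tag(tag: str, scene: str) -> bool:
    """True if the given content tag is a privacy tag in the given scene."""
    return tag in PRIVACY_TAGS_BY_SCENE.get(scene, set())

# A violence-tagged clip is a regular clip indoors but a privacy clip outdoors
indoor_ok = not is_privacy_tag("violence", "indoor")
```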

In some embodiments, the video playing apparatus may jump from the opening interface to the tag setting interface when receiving the video playing instruction or the tag setting instruction for the first time, and complete setting of each of the plurality of preset content tags as the privacy tag or the regular tag. When the video playing instruction is received again, a modification prompt message can be popped up on the opening interface; the modification prompt message is used for prompting the user whether modification needs to be carried out on the grouping of the plurality of preset content tags. When an unmodified instruction is received, jumping from the starting interface to a video playing interface; and when a modification instruction is received, jumping from the opening interface to the label setting interface.

Illustratively, as shown in fig. 5B, a modification prompt box 312 pops up on the opening interface 31; it includes the modification prompt message, a "do not remind again" option, and a "modify" option. The content of the modification prompt message is "whether to modify the grouping of the plurality of preset content tags". After the user selects the "do not remind again" option, the video playing apparatus no longer pops up the modification prompt message when receiving a video playing instruction. After the user selects the "modify" option, the video playing apparatus jumps from the opening interface 31 to the tag setting interface 33.

Step 202: detecting whether a limited object watches the video file; the restricted object is an object that does not have permission to view the private segment in the video file.

In the process of playing the video file, the video playing device can detect whether a limited object watches the video file (namely watches the video to be played) in the current environment according to a preset detection period. If the video file watched by the limited object is detected, whether the video clip being played is a privacy clip is further judged. And if the video file is not watched by the limited object, continuing to play the video file.

The preset detection period may be a frame duration or a multi-frame duration of the video file. In the process of playing the video file, the video playing device detects whether a limited object watches the video file in the current environment or not every time one or more frames are played.

In some embodiments, the video playing apparatus may capture a scene image of the current environment in which the video file is played. If a face image included in the scene image is a first face image, it is determined that a restricted object is detected watching the video file, where the first face image is a face image of a pre-configured restricted object (i.e., a user without viewing permission). Alternatively, if a face image included in the scene image is not a second face image, it is determined that a restricted object is detected watching the video file, where the second face image is a face image of a pre-configured user with viewing permission.

Specifically, the video playing apparatus may capture scene images of the current environment according to a preset detection period and perform face detection on each scene image. When no face image is detected in the scene image, it is determined that no restricted object is watching the video file, and playing continues; otherwise, it is further judged whether each face image in the scene image is the first face image or the second face image. If a face image in the scene image is the first face image, or a face image in the scene image is not the second face image, it is determined that a restricted object is detected watching the video file. If no face image in the scene image is the first face image, or every face image in the scene image is the second face image, it is determined that no restricted object is watching the video file, and playing continues.

Wherein, the viewing authority may refer to an authority to view the privacy segment. The first face image may include face images of minors having no viewing authority, face images of some specific groups of people, and the like. The second face image may include a face image of a non-stranger having a viewing right, an own face image, or the like.
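The blocklist/allowlist decision described above can be sketched as follows, using hypothetical face identifiers in place of real face-recognition matches:

```python
def restricted_viewer_present(scene_faces, blocked_faces=None, allowed_faces=None):
    """Decide whether a restricted object is watching, given the face
    identifiers recognized in the scene image. Either a blocklist (the
    first face images) or an allowlist (the second face images) may be
    configured; the string identifiers are hypothetical stand-ins."""
    if not scene_faces:          # no face detected: keep playing
        return False
    if blocked_faces is not None and any(f in blocked_faces for f in scene_faces):
        return True              # a pre-configured restricted face is present
    if allowed_faces is not None and any(f not in allowed_faces for f in scene_faces):
        return True              # an unrecognized (stranger) face is present
    return False

# Indoors, a child on the blocklist is a restricted viewer;
# outdoors, any face outside the allowlist is treated as a stranger.
found = restricted_viewer_present(["child"], blocked_faces={"child"})
```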

In some embodiments, before step 202, the video playing apparatus may receive and save the first face image, or receive and save the second face image.

Specifically, before step 202, the video playing apparatus may display a face registration interface in response to the video playing instruction or a face entry instruction, and display on it a first entry for the first face image and a second entry for the second face image. When the user selects the first entry, the first face image can be uploaded or captured through the camera; when the user selects the second entry, the second face image can be uploaded or captured through the camera.

Illustratively, the video playing apparatus displays the opening interface 31 of the video playing client and, upon receiving a video playing instruction or a face entry instruction, may jump from the opening interface 31 to the face registration interface 34. As shown in fig. 5C, a first entry 341 for the first face image, a second entry 342 for the second face image, and a face registration completion option 343 are displayed on the face registration interface 34. Since the first face image is the face image of a restricted object, the prompt of the first entry 341 reads "blocked person"; since the second face image is the face image of a user with viewing permission, the prompt of the second entry 342 reads "allowed person". After the user clicks the first entry 341 or the second entry 342, an entry mode list pops up on the face registration interface 34; it includes "photo" and "shoot". "Photo" means directly uploading a locally stored picture as the face image; "shoot" means invoking the camera to capture the face image. After selecting the first face image and the second face image, the user can click the face registration completion option 343, and the video playing apparatus jumps from the face registration interface 34 to the video playing interface 32 or the tag setting interface 33.

It should be noted that the setting of each of the plurality of preset content tags as a privacy tag or a regular tag may be performed either before or after the first face image and/or the second face image are acquired. Moreover, just as each preset content tag may be set as a privacy tag or a regular tag only when a video playing instruction is received for the first time, the first face image and/or the second face image may likewise be acquired only when a video playing instruction is received for the first time.

Illustratively, when the user is indoors, the face image of the child is set as the first face image on the video playing device, so that the child is prevented from seeing the privacy segment. When the user is outdoors, the face image of the user is set as the second face image on the video playing device, so that strangers can be prevented from seeing the privacy segment.

In some embodiments, the video playing apparatus includes a front camera and a face recognition module, where the face recognition module has the capability of recognizing whether the first face image or the second face image is included in a scene image. The video playing apparatus may control the front camera to capture a scene image of the current environment according to the preset detection period, then input the scene image into the face recognition module, run the module, and output a result indicating whether a restricted object is watching the video file.

Step 203: if the limited object is detected to watch the video file, judging whether the video clip being played belongs to the privacy clip or not according to the content label of the video clip being played.

If the video playing device detects that the limited object watches the video file, the content tag of the video clip being played is obtained from the video file, and whether the video clip being played belongs to the privacy clip is judged based on the content tag of the video clip being played. Wherein, the video clip being played belongs to K video clips.

In some embodiments, the video playing apparatus may determine, according to a tag type corresponding to each of the plurality of preset content tags, that a content tag of a video clip being played is a privacy tag or a conventional tag. And if the content tag of the video clip being played is the privacy tag, determining that the video clip being played belongs to the privacy clip. And if the content label of the video clip being played is a conventional label, determining that the video clip being played does not belong to the privacy clip.

The video playing device may determine whether the video segment being played belongs to the privacy segment according to whether each preset content tag in the plurality of preset content tags is a privacy tag or a regular tag and a content tag of the video segment being played.
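Putting steps 202 through 204 together, the playback decision can be sketched as follows; the tag names and the privacy grouping are illustrative:

```python
def handle_playing_clip(content_tag: str, privacy_tags: set,
                        restricted_viewer: bool) -> str:
    """Return the playback action for the clip being played: stop only
    when a restricted object is watching AND the clip's content tag
    belongs to the privacy group (steps 202-204)."""
    if restricted_viewer and content_tag in privacy_tags:
        return "stop"      # step 204: stop (optionally play a preset media file)
    return "continue"

# Illustrative privacy group set on the tag setting interface
privacy_tags = {"violence", "sentiment", "vulgar"}
action = handle_playing_clip("violence", privacy_tags, restricted_viewer=True)
```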

Step 204: and if the video clip being played belongs to the privacy clip, stopping playing the video clip being played.

When the video playing device determines that the video clip which is being played belongs to the privacy clip, the video playing device stops playing the video clip which is being played; otherwise, the playing video clip is continuously played.

In some embodiments, after step 204, the video playing apparatus may play a preset media file, where the preset media file includes: presetting a video, a picture or a non-privacy segment in a video file.

When the video playing apparatus determines that the video segment being played belongs to the privacy segment, it may perform at least one of the following: stopping playing the video segment being played; exiting the video playing interface; and playing the preset media file. The non-privacy segments in the video file may include the video segments whose content tags are regular tags. The preset video may include a user-specified video, a network video, and so on. The preset picture may include a user-specified picture, a network picture, a screensaver picture, and the like.

Further, when determining that the video segment being played belongs to the privacy segment, the video playing apparatus may, in addition to performing at least one of the above operations, unobtrusively prompt the user (e.g., via text, a change in screen brightness, vibration, or sound) that the privacy segment has been automatically protected.

Illustratively, as shown in the video-jump interface diagram of fig. 5D, the video playing apparatus plays the video file on the video playing interface. During playback, when a restricted object is detected viewing the video file, the content tag of the video clip being played (D1 in fig. 5D) is obtained. When the content tag of the video clip being played is the violence tag and the violence tag belongs to the privacy group, it is determined that the video clip being played belongs to a privacy clip, and the video playing interface jumps to play a preset media file, for example, an advertisement video for a brand-A sofa (D2 in fig. 5D).

In some embodiments, the video playing apparatus may obtain the preset media file before playing the video file. Specifically, the video playing apparatus acquires a preset video or a preset picture; or, when neither a preset video nor a preset picture has been acquired, it automatically acquires a non-privacy clip in the video file and uses that clip as the preset media file by default.

Specifically, before playing the video file, the video playing apparatus may display an alternative media setting interface in response to a video playing instruction or a preset media file entry instruction, on which at least one of the following may be displayed: a list of candidate media files, an auto-select video option, and a network link input box. A media file selected by the user from the list of candidate media files is used as the preset media file. The auto-select video option means that a video clip whose content tag is a regular tag is automatically searched for in the video file and used as the preset media file. Through the network link input box, the link address of a network video or network picture may be entered, and that network video or network picture is used as the preset media file.
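The three options on the alternative media setting interface can be sketched as a simple resolution order; the function name, the parameter names, and the fallback order are illustrative assumptions, not part of this application:

```python
def resolve_preset_media(selected_file=None, network_link=None,
                         clip_tags=None, tag_types=None):
    """Resolve the preset media file: an explicitly selected file wins,
    then a network link, then auto-selection of the first video clip
    whose content tag is a regular tag."""
    if selected_file:
        return selected_file
    if network_link:
        return network_link
    for clip_id, tag in (clip_tags or []):
        if (tag_types or {}).get(tag) == "regular":
            return clip_id  # auto-select video option
    return None
```

For example, with no file selected and no link entered, a video file whose clips are tagged ("clip0", "violence") and ("clip1", "normal") resolves to "clip1" under a tag-type mapping where "normal" is a regular tag.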

Illustratively, the video playing apparatus displays the open interface 31 of the video playing client. Upon receiving a video playing instruction, the video playing apparatus may jump from the open interface 31 to the alternative media setting interface 35. As shown in fig. 5E, the alternative media setting interface 35 displays a list 351 of candidate media files, an auto-select video option 352, a network link input box 353, and a media setting completion option 354. After selecting the preset media file, the user may click the media setting completion option 354, and the video playing apparatus jumps from the alternative media setting interface 35 to the video playing interface 32, the tag setting interface 33, or the face registration interface 34.

It should be noted that the order in which the video playing apparatus sets each of the plurality of preset content tags as a privacy tag or a regular tag, acquires the first face image and/or the second face image, and acquires the preset media file is not limited. Moreover, just as the tag types may be set only when a video playing instruction is received for the first time, the preset media file may be acquired only when a video playing instruction is received for the first time.

In some embodiments, after step 204, the video playing apparatus acquires the position information of the video frame at which playing stopped (e.g., the time point of that frame); the stopped video frame belongs to the video clip that was being played. The apparatus then continues to detect whether a restricted object is viewing the video file. If no restricted object is detected viewing the video file, the apparatus jumps, according to the position information, to the stopped video frame in the video file and resumes playing the video file; otherwise, it does not jump back and keeps the current playback.

Specifically, after step 204, the video playing apparatus acquires the position information of the video frame at which playing stopped, captures a scene image in the current environment in which the video file is played, and detects whether the scene image includes the first face image. If the scene image does not include the first face image, the apparatus jumps, according to the position information, to the stopped video frame in the video file and resumes playing the video file; otherwise, it does not jump back.

Alternatively, after step 204, the video playing apparatus acquires the position information of the video frame at which playing stopped, captures a scene image in the current environment in which the video file is played, and detects whether the scene image includes only the second face image. If the scene image includes only the second face image, the apparatus jumps, according to the position information, to the stopped video frame in the video file and resumes playing the video file; otherwise, it does not jump back.
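The stop-and-resume decision described above can be sketched as follows; face identities are represented as plain strings purely for illustration, and the function and parameter names are hypothetical:

```python
def resume_position(stop_point_s, scene_faces, restricted_faces):
    """Return the playback position (in seconds) to resume from,
    or None to keep waiting.

    Playback jumps back to the frame at which it stopped only when
    no restricted face appears in the captured scene image.
    """
    if scene_faces & restricted_faces:
        return None  # a restricted viewer is still present
    return stop_point_s
```

An empty scene (no faces recognized) also resumes playback, since no restricted face is present.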

Illustratively, a detailed flowchart of a video playing method is shown in fig. 6. As shown in fig. 6, the method may include steps 401-408.

Step 401: if the intelligent video playing function is enabled, acquire the tag type corresponding to each of a plurality of preset content tags, and acquire the preset media file and the first face image or the second face image.

The tag type is either a privacy tag or a regular tag. The first face image is a face image of a user without viewing permission, and the second face image is a face image of a user with viewing permission. The preset media file includes a preset video, a preset picture, or a non-privacy clip in the video file; it may be configured in the video playing apparatus in advance, or entered or downloaded into the video playing apparatus by the user.

In response to an instruction to enable the intelligent video playing function, the video playing apparatus may display a tag setting interface showing the plurality of preset content tags, and then receive the tag type corresponding to each of them. It then jumps from the tag setting interface to a face registration interface, displays a first entry for the first face image or a second entry for the second face image, and receives the first face image through the first entry or the second face image through the second entry. Finally, it jumps from the face registration interface to the alternative media setting interface, displays an entry for the preset media file, and acquires a preset video, a preset picture, or a non-privacy clip in the video file through that entry.

It should be noted that the order in which the video playing apparatus acquires the tag type corresponding to each of the plurality of preset content tags, acquires the preset media file, and acquires the first face image or the second face image is not limited. Correspondingly, the order in which the tag setting interface, the face registration interface, and the alternative media setting interface are displayed is not limited. For example, the face registration interface may be displayed first to obtain the first face image or the second face image; then jump from the face registration interface to the tag setting interface to acquire the tag type corresponding to each preset content tag; and finally jump from the tag setting interface to the alternative media setting interface to obtain the preset media file.
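Since the three pieces of configuration in step 401 may be acquired in any order, they can be sketched as one structure that is filled in incrementally; all field names are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SmartPlayConfig:
    # preset content tag -> "privacy" or "regular", set on the tag setting interface
    tag_types: dict = field(default_factory=dict)
    # first face image(s): users WITHOUT viewing permission (blacklist)
    first_faces: set = field(default_factory=set)
    # second face image(s): users WITH viewing permission (whitelist)
    second_faces: set = field(default_factory=set)
    # preset media file played in place of a privacy clip
    preset_media: Optional[str] = None
```

Each interface (tag setting, face registration, alternative media setting) writes its own field, so no acquisition order is imposed.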

Step 402: receiving a video playing instruction aiming at a video file, and playing the video file in response to the video playing instruction; the video file comprises K video clips and content labels of each video clip, each video clip comprises N frames of images, and K and N are positive integers.

Step 403: determine, according to the first face image or the second face image, whether a restricted object is viewing the video file.

The video playing apparatus may capture a scene image in the current environment in which the video file is played. If the scene image includes the first face image, it is determined that a restricted object is viewing the video file, and step 404 is executed; if it does not, it is determined that no restricted object is viewing the video file, and step 403 continues to be executed.

Alternatively, the video playing apparatus may capture a scene image in the current environment in which the video file is played. If the scene image includes a face image that is not the second face image, it is determined that a restricted object is viewing the video file, and step 404 is executed; if every face image in the scene image is the second face image, it is determined that no restricted object is viewing the video file, and step 403 continues to be executed.
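The two alternative checks in step 403, a blacklist based on the first face image or a whitelist based on the second face image, can be sketched as one function; face identities are shown as strings purely for illustration, and exactly one of the two face sets is assumed to be configured:

```python
def restricted_viewer_present(scene_faces, first_faces=None, second_faces=None):
    """Blacklist policy: some face in the scene is a first (restricted) face.
    Whitelist policy: some face in the scene is not a second (authorized) face.
    """
    if first_faces is not None:
        return bool(scene_faces & first_faces)  # blacklist check
    return bool(scene_faces - second_faces)     # whitelist check
```

Under the whitelist policy, an unknown guest sitting next to an authorized user counts as a restricted viewer; under the blacklist policy, only the registered first face does.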

The video playing apparatus may repeat step 403 at a preset detection period. The preset detection period may be the duration of one frame or of multiple frames of the video file; accordingly, step 403 may be performed each time one or more frames are played.

Step 404: determine whether the video clip being played belongs to a privacy clip according to the tag type corresponding to each of the plurality of preset content tags and the content tag of the video clip being played.

The video playing apparatus may determine, according to the tag type corresponding to each of the preset content tags, whether the content tag of the video clip being played is a privacy tag or a regular tag. If it is a privacy tag, it is determined that the video clip being played belongs to a privacy clip, and step 405 is executed; if it is a regular tag, it is determined that the video clip being played does not belong to a privacy clip, and step 403 continues to be executed.

Step 405: stop playing the video clip being played, and acquire the position information of the video frame at which playing stopped.

Step 406: play the preset media file.

Step 407: determine, according to the first face image or the second face image, whether a restricted object is viewing the preset media file.

The video playing apparatus may determine, at the preset detection period and according to the first face image or the second face image, whether a restricted object is viewing the preset media file. If a restricted object is determined to be viewing the preset media file, step 407 continues to be executed; if not, step 408 is executed.

It should be noted that the implementation process of step 407 is the same as the implementation process of step 403, and is not described herein again.

Step 408: jump to the video frame in the video file at which playing stopped, and resume playing the video file from that frame.

It can be understood that, with the method provided in this embodiment of the present application, if the intelligent video playing function is enabled, the tag type corresponding to each of the plurality of preset content tags, the preset media file, and the first face image or the second face image are obtained. The first face image is a face image of a restricted object (i.e., a user without viewing permission), and the second face image is a face image of a user with viewing permission; the preset media file includes a preset video, a preset picture, or a non-privacy clip in the video file. That is, before a video file is played, the user may group the plurality of preset content tags according to personal needs, and designate unauthorized or authorized users as well as an alternative preset media file. Then, while the video file is playing, whether a restricted object is viewing the video file and whether the video clip being played belongs to a privacy clip can be determined according to this user-specific information (the grouped preset content tags, the authorized or unauthorized users, and the alternative preset media file), and the playing of a privacy clip is stopped while a restricted user is watching. In this way, privacy can be protected during video playback in a personalized and intelligent manner.

In addition, when playing of a privacy clip in the video file is stopped, the position information of the video frame at which playing stopped is also recorded. With this position information, once no restricted object is detected viewing the video file, playback jumps back to the stopped video frame and resumes. This spares the user from manually locating the stopped frame in the video file and further improves the intelligence of video playing.

Please refer to fig. 7, which is a flowchart illustrating another video playing method according to an embodiment of the present application. As shown in fig. 7, the method may include steps 501-504.

Step 501: playing the video file; the video file comprises K video clips and content labels of each video clip, each video clip comprises N frames of images, and K and N are positive integers.

It should be noted that the implementation process of step 501 is the same as the implementation process of step 201, and is not described here again.

Step 502: determine, according to the content tags included in the video file, whether M video clips to be played include a privacy clip, M being a positive integer.

During playback of the video file, the video playing apparatus acquires the M video clips after the video frame being played, that is, the M video clips to be played; the M video clips to be played belong to the K video clips. The apparatus then determines whether the M video clips to be played include a privacy clip.

In some embodiments, the video playing apparatus may obtain the M video clips to be played starting one or more frames after the video frame being played.

In some embodiments, the video playing apparatus may determine, according to whether each of the plurality of preset content tags is a privacy tag or a regular tag and the content tags included in the video file, whether each of the M video clips to be played belongs to a privacy clip. When at least one of the M video clips to be played belongs to a privacy clip, it is determined that the M video clips to be played include a privacy clip; otherwise, it is determined that they do not.

Specifically, the video playing apparatus acquires the content tag of each of the M video clips to be played from the video file, and then determines, according to those content tags, whether each of the M video clips to be played belongs to a privacy clip.

It should be noted that the process by which the video playing apparatus determines whether each of the M video clips to be played belongs to a privacy clip according to its content tag is the same as in step 203 and is not described again here.
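The look-ahead check of step 502 can be sketched as follows, assuming the clip content tags are held in a list indexed by clip position; all names are illustrative:

```python
def upcoming_has_privacy(clip_tags, playing_index, m, tag_types):
    """Check whether any of the M clips after the clip being played
    carries a content tag configured as a privacy tag."""
    upcoming = clip_tags[playing_index + 1 : playing_index + 1 + m]
    return any(tag_types.get(tag) == "privacy" for tag in upcoming)
```

When the check returns False, the apparatus keeps playing and slides the window forward to the next M clips, as described above.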

Step 503: if the M video clips to be played include a privacy clip, detect whether a restricted object is viewing the video file; a restricted object is an object that does not have permission to view the privacy clips in the video file.

If the video playing apparatus determines that the M video clips to be played include a privacy clip, it detects whether a restricted object is viewing the video file; otherwise, it continues playing the video file, acquires new M video clips to be played, and determines, according to the content tags included in the video file, whether the new M video clips to be played include a privacy clip.

In some embodiments, the video playing apparatus includes a front camera and a face recognition module. When the apparatus detects that the M video clips to be played include a privacy clip, it controls the front camera to capture a scene image in the current environment in which the video file is played. The scene image is then input to the face recognition module, which is run and outputs whether a restricted object is viewing the video file.

It should be noted that the process of step 503 is the same as that of step 202 and is not described again here.

Step 504: if a restricted object is detected viewing the video file, stop playing the video clip being played.

If the video playing apparatus detects a restricted object viewing the video file, it stops playing the video clip being played; otherwise, the video file continues to play.

It should be noted that the process of step 504 is the same as the process of step 204, and is not described here again.

It can be understood that the method provided in this embodiment of the application acquires the video file to be played and plays it; the video file includes a content tag for each of the K video clips, and the content tag of a video clip is used to determine whether that clip is a privacy clip. Specifically, during playback, whether the M video clips to be played include a privacy clip may be determined according to the content tags of the K video clips. When they do, it is detected whether a restricted object is viewing the video file; a restricted object has no permission to view the privacy clips in the video file. Because the M video clips to be played include a privacy clip, a privacy clip is about to be played; to keep it from being seen by a restricted object, that is, to keep it from being leaked, the apparatus checks for restricted objects just before the privacy clip plays. Detecting restricted objects only shortly before a privacy clip both reduces the possibility of the privacy clip leaking and avoids the power consumption of detecting restricted objects continuously.

In addition, if a restricted object is detected viewing the video file, playing of the video clip being played is stopped. By using the content tags of the K video clips in the video file, a privacy clip that is about to be played to a restricted object is automatically stopped, which improves the intelligence of video playing.

Illustratively, a detailed flow chart of another video playing method is shown in fig. 8. As shown in fig. 8, the method may include steps 601-609.

Step 601: if the intelligent video playing function is enabled, acquire the tag type corresponding to each of a plurality of preset content tags, and acquire the preset media file and the first face image or the second face image.

Step 602: receiving a video playing instruction aiming at a video file, and playing the video file in response to the video playing instruction; the video file includes K video clips and a content tag for each video clip, each video clip including N frames of images.

Wherein K and N are both positive integers.

It should be noted that the implementation process of steps 601-602 is the same as the implementation process of steps 401-402, and is not described herein again.

Step 603: acquire M video clips to be played from the video file.

The video playing apparatus acquires, from the video file, the M video clips after the video frame being played; the M video clips to be played belong to the K video clips.

Step 604: determine whether the M video clips to be played include a privacy clip according to the tag type corresponding to each of the plurality of preset content tags and the content tags of the M video clips to be played.

The video playing apparatus may determine, according to the tag type corresponding to each of the preset content tags, whether the content tag of each of the M video clips is a privacy tag or a regular tag. If the content tag of at least one of the M video clips is a privacy tag, it is determined that the M video clips to be played include a privacy clip, and step 605 is executed. If the content tags of all M video clips are regular tags, it is determined that the M video clips to be played include no privacy clip; step 603 continues to be executed to acquire new M video clips to be played, and step 604 is then performed on them.

Step 605: if the M video clips to be played include a privacy clip, determine, according to the first face image or the second face image, whether a restricted object is viewing the video file.

If the video playing apparatus determines that a restricted object is viewing the video file, steps 606-608 are executed. If it determines that no restricted object is viewing the video file, step 603 continues to be executed to acquire new M video clips to be played, and step 604 is then performed on them.

It should be noted that the implementation process of step 605 is the same as the implementation process of step 403, and is not described here again.

Step 606: stop playing the video clip being played, and acquire the position information of the video frame at which playing stopped.

Step 607: play the preset media file.

Step 608: determine, according to the first face image or the second face image, whether a restricted object is viewing the preset media file.

The video playing apparatus may determine, at the preset detection period and according to the first face image or the second face image, whether a restricted object is viewing the preset media file. If a restricted object is determined to be viewing the preset media file, step 608 continues to be executed; if not, step 609 is executed.

It should be noted that the implementation process of step 608 is the same as the implementation process of step 605, and is not described here again.

Step 609: jump to the video frame in the video file at which playing stopped, and resume playing the video file from that frame.

It can be understood that the above method may be implemented by a video playing apparatus. To implement the above functions, the video playing apparatus includes hardware structures and/or software modules corresponding to the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present application.

In the embodiments of the present application, the video playing apparatus and the like may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation.

In the case of dividing the functional modules according to the respective functions, fig. 9 shows a possible schematic structural diagram of the video playing apparatus in the foregoing embodiments. The video playing apparatus 7 includes a video control module 71 and a judging module 72. The video control module 71 is configured to play a video file; the video file includes K video clips and a content tag of each video clip, each video clip includes N frames of images, and K and N are both positive integers. The judging module 72 is configured to detect whether a restricted object is viewing the video file, the restricted object being an object without permission to view the privacy clips in the video file, and, if a restricted object is detected viewing the video file, to determine whether the video clip being played belongs to a privacy clip according to the content tag of the video clip being played. The video control module 71 is further configured to stop playing the video clip being played if it belongs to a privacy clip.

In a possible embodiment, the video control module 71 is further configured to play a preset media file after playing of the video clip being played is stopped because the clip belongs to a privacy clip; the preset media file includes a preset video, a preset picture, or a non-privacy clip in the video file.

In another possible embodiment, the video playing apparatus 7 further includes a video tag module 73, configured to acquire a to-be-processed video file before the video file is played, divide the to-be-processed video file into K video clips, and acquire the image features and audio features of each of the K video clips. Then, the content tag of each of the K video clips is determined according to its image features and audio features, and a video file including the content tags is generated. The number of video frames in each video clip is less than or equal to a preset frame number; the content tag of each video clip is one of a plurality of preset content tags, which indicate the category of a video clip and include any of the following: normal, laughter, violence, emotional, and vulgar.

In another possible embodiment, the video tag module 73 determines the content tag of each of the K video clips according to its image features and audio features as follows. The video tag module 73 is specifically configured to determine the image tag information of the i-th video clip among the K video clips according to the image features of that clip, where the image tag information indicates the probability that the images of the i-th video clip belong to the category indicated by each of the plurality of preset content tags, and i takes the values 1 to K in turn. The audio tag information of the i-th video clip is determined according to its audio features, and indicates the probability that the audio of the i-th video clip belongs to the category indicated by each preset content tag. Then, the probability that the i-th video clip belongs to the category indicated by each preset content tag is calculated from the image tag information and a preset image weight together with the audio tag information and a preset audio weight. Finally, the preset content tag corresponding to the category with the highest probability is determined as the content tag of the i-th video clip.
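The weighted combination of image tag information and audio tag information can be sketched as follows; the weights 0.6/0.4 and the probability values are purely illustrative assumptions, not values specified by this application:

```python
def fuse_tag_probabilities(image_probs, audio_probs, w_image=0.6, w_audio=0.4):
    """Combine per-category probabilities from the image branch and the
    audio branch with preset weights, and pick the most likely category."""
    fused = {tag: w_image * image_probs[tag] + w_audio * audio_probs[tag]
             for tag in image_probs}
    best = max(fused, key=fused.get)  # content tag assigned to the i-th clip
    return best, fused
```

For instance, if the image branch gives violence a probability of 0.8 and the audio branch 0.5, the fused probability under these weights is 0.6 × 0.8 + 0.4 × 0.5 = 0.68, and "violence" becomes the clip's content tag when that is the highest fused value.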

In another possible embodiment, the judging module 72 is further configured to display the plurality of preset content tags in response to a video playing instruction or a tag setting instruction before the video file is played, and to receive the tag type corresponding to each preset content tag, the tag type being a privacy tag or a regular tag. The judging module 72 determines whether the video clip being played belongs to a privacy clip according to its content tag as follows: according to the tag type corresponding to each of the preset content tags, determine whether the content tag of the video clip being played is a privacy tag or a regular tag; if it is a privacy tag, it is determined that the video clip being played belongs to a privacy clip; if it is a regular tag, it is determined that the video clip being played does not belong to a privacy clip.

In another possible embodiment, the judging module 72 detects whether a restricted object is viewing the video file as follows: capture a scene image in the current environment in which the video file is played; if the scene image includes the first face image, or if the scene image includes a face image that is not the second face image, determine that a restricted object is detected viewing the video file. The first face image is a preconfigured face image of a restricted object; the second face image is a preconfigured face image of a user with viewing permission.

In another possible embodiment, the judging module 72 is further configured to receive and save the first face image, or receive and save the second face image, before detecting whether a restricted object is viewing the video file.

In another possible embodiment, the video control module 71 is further configured to acquire the position information of the video frame at which playing stopped after playing of the video clip being played is stopped because the clip belongs to a privacy clip; the stopped video frame belongs to the video clip that was being played. The judging module 72 is further configured to capture a scene image in the current environment and detect whether the scene image includes the first face image. The video control module 71 is further configured to, if the scene image does not include the first face image, jump, according to the position information, to the stopped video frame in the video file and resume playing the video file.

In another possible embodiment, the video control module 71 is further configured to, after playback of the video segment being played is stopped because it belongs to a privacy segment, obtain position information of the video frame at which playback stopped; the stopped video frame belongs to the video segment that was being played. The determining module 72 is further configured to acquire a scene image of the current environment and detect whether the scene image includes only the second face image. The video control module 71 is further configured to, if the scene image includes only the second face image, jump to the stopped video frame of the video file according to its position information and resume playing the video file.
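The stop-and-resume flow of the two embodiments above can be sketched as follows. The `Player` class, the frame indices, and the face identifiers are hypothetical; the point is that the position of the stopped frame is saved when playback halts, and playback jumps back to that frame once a resume condition holds. The sketch accepts either condition (no restricted face present, or only authorized faces present); a given embodiment would apply only its own rule.

```python
class Player:
    """Minimal stand-in for the video control module's stop/resume behavior."""

    def __init__(self):
        self.position = 0       # index of the video frame currently playing
        self.playing = True
        self.stopped_at = None  # position information of the stopped frame

    def stop(self):
        """Stop playback and record the frame at which playback stopped."""
        self.playing = False
        self.stopped_at = self.position

    def try_resume(self, scene_faces, restricted_faces=(), authorized_faces=()):
        """Resume from the saved frame if the scene satisfies either rule."""
        no_restricted = not any(f in restricted_faces for f in scene_faces)
        only_authorized = all(f in authorized_faces for f in scene_faces)
        if no_restricted or only_authorized:
            self.position = self.stopped_at  # jump back to the stopped frame
            self.playing = True
        return self.playing
```

In use, the device would call `try_resume` with each newly acquired scene image until playback restarts from the saved frame.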

Of course, the video playing device 7 includes, but is not limited to, the unit modules listed above. For example, the video playing device 7 may further include a storage module, which may be used to store the video file, a preset media file, and the like. Moreover, the functions that each functional unit can realize include, but are not limited to, the functions corresponding to the method steps described above; for detailed descriptions of the other modules of the video playing device 7, refer to the detailed descriptions of the corresponding method steps, which are not repeated here in this embodiment of the present application.

In the case where functional modules are divided according to function, fig. 10 shows another possible structural schematic diagram of the video playing device in the foregoing embodiment. The video playing device 8 includes:

a video control module 81 for playing a video file; the video file comprises K video clips and a content label of each video clip, each video clip comprises N frames of images, and K and N are positive integers;

a judging module 82, configured to judge, according to the content tags included in the video file, whether M video segments to be played include a privacy segment, where M is a positive integer; and, if the M video segments to be played include a privacy segment, to detect whether a restricted object is viewing the video file, the restricted object being an object without permission to view the privacy segments in the video file;
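The look-ahead check performed by the judging module 82 can be sketched as follows; the privacy tag set and the function name are hypothetical. The idea is that the comparatively expensive restricted-viewer detection only runs when one of the next M segments carries a privacy tag.

```python
PRIVACY_TAGS = {"family", "finance"}  # assumed set of privacy-type tags

def should_check_viewers(upcoming_tags, m):
    """Return True if any of the next m segments to be played has a privacy tag."""
    return any(tag in PRIVACY_TAGS for tag in upcoming_tags[:m])
```

For example, `should_check_viewers(["travel", "finance", "food"], 2)` is `True` because a privacy segment is among the next two segments, so the device would begin detecting whether a restricted object is viewing the video file.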

the video control module 81 is further configured to stop playing the video segment being played if a restricted object is detected viewing the video file.

Of course, the video playing device 8 includes, but is not limited to, the unit modules listed above. For example, the video playing device 8 may further include a video tag module and a storage module. The video tag module has the same function as the video tag module 73; the storage module may be used to store the video file, a preset media file, and the like. Moreover, the functions that each functional unit can realize include, but are not limited to, the functions corresponding to the method steps described above; for detailed descriptions of the other modules of the video playing device 8, refer to the detailed descriptions of the corresponding method steps, which are not repeated here in this embodiment of the present application.

In the case of an integrated unit, fig. 11 shows a schematic diagram of a possible structure of the electronic device involved in the above-described embodiment. As shown in fig. 11, the electronic device 900 includes a processor 901 and a memory 902.

It is understood that the electronic device 900 shown in fig. 11 can implement all the functions of the video playing device 7 or the video playing device 8 described above. The functions of the modules in the video playing device 7 or the video playing device 8 can be implemented in the processor 901 of the electronic device 900. For example, the functions of the video control module 71, the determining module 72, and the video tag module 73 can be integrated into the processor 901; the functions of the video control module 81 and the judging module 82 can likewise be integrated into the processor 901. The storage modules of the video playing device 7 and the video playing device 8 both correspond to the memory 902 of the electronic device 900.

Among other things, the processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 901 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.

Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 902 is used for storing at least one instruction, which is used for being executed by the processor 901 to implement the video playing method provided by the embodiment of the present application.

In some embodiments, the electronic device 900 may further optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.

The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board. In some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral interface 903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.

The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 904 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.

The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, it can also capture touch signals on or over its surface. A touch signal may be input to the processor 901 as a control signal for processing. In this case, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, provided on the front panel of the electronic device 900. The display screen 905 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.

The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the electronic device and the rear camera on its back. The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 901 for processing, or to the radio frequency circuit 904 for voice communication. For stereo capture or noise reduction, there may be multiple microphones located at different positions of the electronic device 900; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 907 may also include a headphone jack.

The positioning component 908 is used to locate the current geographic location of the electronic device 900 to implement navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.

The power supply 909 is used to supply power to the components of the electronic device 900. The power supply 909 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.

In some embodiments, the electronic device 900 also includes one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensors, gyroscope sensors, pressure sensors, fingerprint sensors, optical sensors, and proximity sensors.

The acceleration sensor may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the electronic device 900. The gyroscope sensor can detect the body orientation and rotation angle of the electronic device 900, and can cooperate with the acceleration sensor to capture the user's 3D actions on the electronic device 900. The pressure sensor may be disposed on a side bezel of the electronic device 900 and/or underneath the display screen 905. When the pressure sensor is disposed on the side bezel, it can detect the user's grip on the electronic device 900. The fingerprint sensor is used to collect the user's fingerprint. The optical sensor is used to collect the intensity of ambient light. The proximity sensor, also known as a distance sensor, is typically provided on the front panel of the electronic device 900 and is used to capture the distance between the user and the front of the electronic device 900.

Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of the electronic device 900, which may include more or fewer components than those shown, combine certain components, or employ a different arrangement of components.

Embodiments of the present application further provide a computer storage medium, where the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is caused to perform various functions or steps in the foregoing method embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.

Embodiments of the present application further provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the functions or steps of the above method embodiments.

Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is merely a division by logical function, and there may be other ways of division in actual implementation. For example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.

The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
