Segmentation method, segmentation system and non-transitory computer readable medium

Document No.: 1430982    Publication date: 2020-03-17

Note: this technology, 分段方法、分段系统及非暂态电脑可读取媒体 (Segmentation method, segmentation system and non-transitory computer readable medium), was created by 詹诗涵 and 柯兆轩 on 2019-02-01. Abstract: The present disclosure relates to a segmentation method, a segmentation system, and a non-transitory computer readable medium. The segmentation method comprises the following steps: receiving a film content, wherein the film content comprises an image data and a sound data; performing segmentation processing on the image data to generate at least one image paragraph mark; performing segmentation processing on the sound data to generate at least one sound paragraph mark; and comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate at least one film content mark.

1. A segmentation method, comprising:

receiving a film content, wherein the film content comprises an image data and a sound data;

performing segmentation processing on the image data to generate at least one image paragraph mark;

performing segmentation processing on the sound data to generate at least one sound paragraph mark; and

comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate at least one film content mark.

2. The segmentation method of claim 1, wherein the performing segmentation processing on the image data to generate the at least one image paragraph mark further comprises:

selecting M units of the image data, and dividing the selected image data into a first image paragraph;

judging the content of the first image paragraph to generate an image content result; wherein the image content result comprises a dynamic content and a static content; and

detecting a changed content for the image data based on the image content result, and generating the at least one image paragraph mark according to the time position of the changed content.

3. The segmentation method of claim 2, wherein the judging the content of the first image paragraph to generate the image content result further comprises:

selecting T units from the first image paragraph, calculating image similarity in the T units, and generating an image difference result;

if the image difference result is greater than a first image threshold value, determining the content of the first image paragraph as the dynamic content; and

if the image difference result is not greater than the first image threshold value, determining the content of the first image paragraph as the static content.

4. The segmentation method of claim 2, wherein the detecting the changed content for the image data based on the image content result and generating the at least one image paragraph mark according to the time position of the changed content further comprises:

if the content of the first image paragraph is the dynamic content, calculating the similarity between the image of the M-th unit and the image of the (M+1)-th unit to generate an image difference value;

if the image difference value is greater than a second image threshold value, merging the image of the (M+1)-th unit into the first image paragraph; and

if the image difference value is not greater than the second image threshold value, generating the at least one image paragraph mark at the time position of the image of the (M+1)-th unit, selecting M units of the image data, and dividing the selected image data into a second image paragraph.

5. The segmentation method of claim 2, wherein the detecting the changed content for the image data based on the image content result and generating the at least one image paragraph mark at the time position of the changed content further comprises:

if the content of the first image paragraph is the static content, calculating the similarity between the image of the M-th unit and the image of the (M+1)-th unit to generate an image difference value;

if the image difference value is not greater than a second image threshold value, merging the image of the (M+1)-th unit into the first image paragraph; and

if the image difference value is greater than the second image threshold value, generating the at least one image paragraph mark at the time position of the image of the (M+1)-th unit, selecting M units of the image data, and dividing the selected image data into a second image paragraph.

6. The segmentation method according to claim 1, wherein the performing segmentation processing on the sound data to generate the at least one sound paragraph mark further comprises:

converting the sound data into a sound time domain signal and a sound frequency domain signal respectively;

selecting a time domain section from the sound time domain signal, and judging whether the amplitude of the time domain section is smaller than a first threshold value; if the amplitude of the time domain section is smaller than the first threshold value, generating the at least one sound paragraph mark; and

selecting a first frequency domain section and a second frequency domain section from the sound frequency domain signal, and judging whether the difference value of the spectral intensities of the first frequency domain section and the second frequency domain section is larger than a second threshold value; if the difference value of the spectral intensities of the first frequency domain section and the second frequency domain section is larger than the second threshold value, generating the at least one sound paragraph mark.

7. A segmentation system, comprising:

a storage unit for storing a film content and at least one film content mark; and

a processor electrically connected to the storage unit for receiving the film content; wherein the film content comprises an image data and a sound data, and the processor comprises:

an image segmentation unit for performing segmentation processing on the image data to generate at least one image paragraph mark;

a sound segmentation unit electrically connected to the image segmentation unit for performing segmentation processing on the sound data to generate at least one sound paragraph mark; and

a paragraph mark generating unit electrically connected to the image segmentation unit and the sound segmentation unit for comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate the at least one film content mark.

8. The segmentation system of claim 7, wherein the image segmentation unit is further configured to select M units of the image data, divide the selected image data into a first image paragraph, and judge the content of the first image paragraph to generate an image content result, wherein the image content result comprises a dynamic content and a static content; and to detect a changed content for the image data based on the image content result and generate the at least one image paragraph mark according to the time position of the changed content.

9. The segmentation system of claim 8, wherein the image segmentation unit is further configured to select T units from the first image paragraph, calculate similarity of images in the T units, and generate an image difference result; if the image difference result is greater than a first image threshold value, determining the content of the first image paragraph as the dynamic content; and if the image difference result is not greater than the first image threshold value, determining the content of the first image paragraph as the static content.

10. The segmentation system of claim 8, wherein the image segmentation unit is further configured to calculate a similarity between the image of the M-th unit and the image of the (M+1)-th unit to generate an image difference value when the content of the first image paragraph is the dynamic content; merge the image of the (M+1)-th unit into the first image paragraph if the image difference value is greater than a second image threshold value; and, if the image difference value is not greater than the second image threshold value, generate the at least one image paragraph mark at the time position of the image of the (M+1)-th unit, select M units of the image data, and divide the selected image data into a second image paragraph.

11. The segmentation system of claim 8, wherein the image segmentation unit is further configured to calculate a similarity between the image of the M-th unit and the image of the (M+1)-th unit to generate an image difference value when the content of the first image paragraph is the static content; merge the image of the (M+1)-th unit into the first image paragraph if the image difference value is not greater than the second image threshold value; and, if the image difference value is greater than the second image threshold value, generate the at least one image paragraph mark at the time position of the image of the (M+1)-th unit, select M units of the image data, and divide the selected image data into a second image paragraph.

12. The segmentation system of claim 7, wherein the sound segmentation unit is further configured to convert the sound data into a sound time domain signal and a sound frequency domain signal, respectively; select a time domain section from the sound time domain signal, judge whether the amplitude of the time domain section is smaller than a first threshold value, and generate the at least one sound paragraph mark if the amplitude of the time domain section is smaller than the first threshold value; and select a first frequency domain section and a second frequency domain section from the sound frequency domain signal, judge whether the difference value of the spectral intensities of the first frequency domain section and the second frequency domain section is larger than a second threshold value, and generate the at least one sound paragraph mark if the difference value of the spectral intensities of the first frequency domain section and the second frequency domain section is larger than the second threshold value.

13. A non-transitory computer readable medium containing at least one program of instructions which is executed by a processor to perform a segmentation method, the segmentation method comprising:

receiving a film content, wherein the film content comprises an image data and a sound data;

performing segmentation processing on the image data to generate at least one image paragraph mark;

performing segmentation processing on the sound data to generate at least one sound paragraph mark; and

comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate at least one film content mark.

Technical Field

The present disclosure relates to a segmentation method, a segmentation system and a non-transitory computer readable medium, and more particularly, to a segmentation method, a segmentation system and a non-transitory computer readable medium for a video source.

Background

An online learning platform is a network service that stores a large amount of learning material on a server, so that users can connect to the server through the Internet and browse the learning material at any time. Existing online learning platforms provide learning materials of various types, including films, audio recordings, presentations, documents, and forums.

Because the amount of learning material stored on an online learning platform is huge, the audio-visual content of the learning material needs to be segmented automatically for the convenience of users. How to process a learning film according to the correlation between its sound content and its image content, so as to segment the film automatically, is therefore a problem to be solved in the art.

Disclosure of Invention

A first aspect of the present disclosure is to provide a segmentation method. The segmentation method comprises the following steps: receiving a film content, wherein the film content comprises an image data and a sound data; performing segmentation processing on the image data to generate at least one image paragraph mark; performing segmentation processing on the sound data to generate at least one sound paragraph mark; and comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate at least one film content mark.

A second aspect of the present disclosure is to provide a segmentation system, which includes a storage unit and a processor. The storage unit is used for storing a film content and at least one film content mark. The processor is electrically connected to the storage unit and is used for receiving the film content, wherein the film content includes an image data and a sound data. The processor includes an image segmentation unit, a sound segmentation unit, and a paragraph mark generating unit. The image segmentation unit is used for performing segmentation processing on the image data to generate at least one image paragraph mark. The sound segmentation unit is electrically connected to the image segmentation unit and is used for performing segmentation processing on the sound data to generate at least one sound paragraph mark. The paragraph mark generating unit is electrically connected to the image segmentation unit and the sound segmentation unit and is used for comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate the at least one film content mark.

A third aspect of the present disclosure provides a non-transitory computer readable medium containing at least one program of instructions executed by a processor to perform a segmentation method, the segmentation method comprising: receiving a film content, wherein the film content comprises an image data and a sound data; performing segmentation processing on the image data to generate at least one image paragraph mark; performing segmentation processing on the sound data to generate at least one sound paragraph mark; and comparing a difference between an image mark time of the at least one image paragraph mark and a sound mark time of the at least one sound paragraph mark to generate at least one film content mark.

The present disclosure relates to a segmentation method, a segmentation system, and a non-transitory computer readable medium, mainly to solve the problem that marking film paragraphs manually consumes a great deal of labor and time. Paragraph marks are generated separately for the image data and the sound data, and film content marks are then generated from the image paragraph marks and the sound paragraph marks, so that a learning film is segmented automatically.

Drawings

In order to make the aforementioned and other objects, features, advantages and embodiments of the present disclosure more comprehensible, the following description is made with reference to the accompanying drawings:

FIG. 1 is a schematic diagram of a segmentation system depicted in accordance with some embodiments of the present application;

FIG. 2 is a flow diagram of a segmentation method according to some embodiments of the present application;

FIG. 3 is a flowchart of step S220 according to some embodiments of the present application;

FIG. 4 is a flowchart of step S222 according to some embodiments of the present application;

FIG. 5A is a flowchart of step S223 according to some embodiments of the present application;

FIG. 5B is a flowchart of step S223 according to some embodiments of the present application; and

FIG. 6 is a flowchart of step S230 according to some embodiments of the present application.

[Description of reference numerals]

100: segmentation system

110: storage unit

130: processor with a memory having a plurality of memory cells

DB: course database

131: image segmentation unit

132: sound segmentation unit

133: paragraph mark generating unit

200: segmentation method

S210 to S240, S221 to S223, S2221 to S2223, S2231a to S2233a, S2231b to S2233b, and S231 to S233: steps

Detailed Description

Reference will now be made in detail to the present embodiments of the present application, examples of which are illustrated in the accompanying drawings. It should be understood, however, that these implementation details should not be used to limit the application. That is, in some embodiments of the disclosure, such practical details are not necessary. In addition, for simplicity, some conventional structures and elements are shown in the drawings in a simple schematic manner.

When an element is referred to as being "connected" or "coupled," it can also be referred to as being "electrically connected" or "electrically coupled." "Connected" or "coupled" may also be used to indicate that two or more elements are in mutual engagement or interaction. Moreover, although terms such as "first," "second," etc., may be used herein to describe various elements, these terms are used merely to distinguish one element or operation from another described in similar terms. Unless the context clearly dictates otherwise, these terms do not specifically refer to or imply an order or sequence, nor are they intended to limit the invention.

Please refer to FIG. 1. FIG. 1 is a schematic diagram of a segmentation system 100 depicted in accordance with some embodiments of the present application. As shown in FIG. 1, the segmentation system 100 includes a storage unit 110 and a processor 130. The storage unit 110 is electrically connected to the processor 130, and the storage unit 110 is used for storing the film content, at least one film content mark, and the course database DB.

As mentioned above, the processor 130 includes an image segmentation unit 131, a sound segmentation unit 132, and a paragraph mark generating unit 133. The sound segmentation unit 132 is electrically connected to the image segmentation unit 131 and the paragraph mark generating unit 133. In various embodiments of the present disclosure, the storage unit 110 can be implemented as a memory, a hard disk, a portable disk, a memory card, and so on. The processor 130 can be implemented as an integrated circuit such as a micro control unit (microcontroller), a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), a logic circuit, or other similar components or a combination thereof.

Please refer to FIG. 2. FIG. 2 is a flow diagram of a segmentation method 200 depicted in accordance with some embodiments of the present application. In one embodiment, the segmentation method 200 shown in FIG. 2 can be applied to the segmentation system 100 shown in FIG. 1, and the processor 130 is configured to perform paragraph marking on the image data and the sound data according to the following steps of the segmentation method 200 to generate the film content marks. As shown in FIG. 2, the segmentation method 200 first performs step S210 to receive a film content. In one embodiment, the film content includes an image data and a sound data. In the following steps, the processor 130 processes the image data first and then the sound data; however, the present disclosure is not limited thereto, and the sound data may be processed before the image data.

Next, the segmentation method 200 performs step S220 to perform segmentation processing on the image data to generate the at least one image paragraph mark. In an embodiment, step S220 further includes steps S221 to S223; please refer to FIG. 3, which is a flowchart of step S220 according to some embodiments of the present application. As shown in FIG. 3, the segmentation method 200 first performs step S221 to select M units of the image data and divide the selected image data into a current image paragraph. In an embodiment, the M units are described as M seconds; the M units may also be implemented as M frames, and the present disclosure is not limited in this regard. M can be adjusted according to the length of the film content. Taking M = 30 seconds as an example, the image data of the 0th to 30th seconds is used as the current image paragraph in this step.

Next, the segmentation method 200 executes step S222 to judge the content of the current image paragraph to generate an image content result. The image content result includes a dynamic content and a static content. In an embodiment, step S222 further includes steps S2221 to S2223; please refer to FIG. 4, which is a flowchart of step S222 according to some embodiments of the present application. As shown in FIG. 4, the segmentation method 200 first performs step S2221 to select T units from the current image paragraph, calculate the similarity of the images within the T units, and generate an image difference result. In an embodiment, the T units are described as T seconds; the T units may also be implemented as T frames, and the present disclosure is not limited in this regard. For example, take T = 3 seconds and assume 60 frames per second. The difference calculation may subtract the gray-level values of the 30th frame of the 0th second from those of the 30th frame of the 1st second to generate the image difference value of the 1st second, and this single image difference value may be used as the image difference result for judging the content of the image. In another embodiment, the image difference values of the 2nd and 3rd seconds may be used together with that of the 1st second as the image difference result.

Next, the segmentation method 200 executes step S2222: if the image difference result is greater than a first image threshold value, the content of the current image paragraph is determined to be the dynamic content. Continuing the above embodiment, an image difference value greater than the first image threshold value indicates that the difference between the preceding and following frames is large, so the content of the current image paragraph is likely dynamic. The segmentation method 200 then executes step S2223: if the image difference result is not greater than the first image threshold value, the content of the current image paragraph is determined to be the static content. In an embodiment, an image difference value less than or equal to the first image threshold value indicates that the compared frames are similar and the picture changes little, so the content of the current image paragraph is likely static. A sketch of this classification is shown below.
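
The following is a minimal Python sketch of steps S2221 to S2223, assuming one representative grayscale frame per second (e.g., the 30th frame of each second) is available as a NumPy array. The mean absolute gray-level difference and the use of the largest per-second difference as the image difference result are illustrative assumptions; the threshold value is hypothetical.

```python
import numpy as np

def classify_segment(rep_frames, t_units, first_threshold):
    """Classify an image paragraph as dynamic or static (steps S2221-S2223).

    rep_frames: one representative grayscale frame per second of the
    paragraph (e.g. the 30th frame of each second), as 2-D NumPy arrays.
    Needs at least t_units + 1 frames.
    """
    diffs = []
    for t in range(1, t_units + 1):
        prev = rep_frames[t - 1].astype(np.float64)
        curr = rep_frames[t].astype(np.float64)
        diffs.append(np.mean(np.abs(curr - prev)))  # gray-level difference
    # Use the largest per-second difference as the image difference result
    # (the disclosure also allows judging by the 1st second's value alone).
    image_difference_result = max(diffs)
    return "dynamic" if image_difference_result > first_threshold else "static"
```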

Next, the segmentation method 200 performs step S223 to detect a changed content in the image data based on the image content result and generate the at least one image paragraph mark at the time position of the changed content. In an embodiment, step S223 further includes steps S2231a to S2233a; please refer to FIG. 5A, which is a flowchart of step S223 according to some embodiments of the present application. As shown in FIG. 5A, the segmentation method 200 first performs step S2231a: if the content of the current image paragraph is dynamic, the similarity between the image of the M-th unit and the image of the (M+1)-th unit is calculated to generate an image difference value. Continuing the example in which M is 30 seconds, the current image paragraph is the image data of the 0th to 30th seconds, the image data of the M-th second is that of the 30th second, and the image data of the (M+1)-th second is that of the 31st second. In this case, the gray-level values of the 30th frame of the 30th second are subtracted from those of the 30th frame of the 31st second to generate the image difference value; images of other frames may also be selected for the calculation.

Following the above, the segmentation method 200 further performs step S2232a: if the image difference value is greater than the second image threshold value, the image of the (M+1)-th unit is merged into the current image paragraph. As in the above embodiment, an image difference value greater than the second image threshold value indicates that the image in the second following the current image paragraph still belongs to the dynamic picture, so the image data of the 31st second can be merged into the current image paragraph. Next, the segmentation method 200 further performs step S2233a: if the image difference value is not greater than the second image threshold value, the at least one image paragraph mark is generated at the time position of the image of the (M+1)-th unit, and M units of the image data are selected and divided into the next image paragraph. In view of the above, an image difference value less than or equal to the second image threshold value indicates that the image in the second following the current image paragraph may belong to a static picture; an image paragraph mark is therefore generated at the time position of the 31st second of the image data, and the next paragraph to be processed becomes the image data of the 31st to 60th seconds.

Similarly, step S223 further includes steps S2231b to S2233b; please refer to FIG. 5B, which is a flowchart of step S223 according to some embodiments of the present application. As shown in FIG. 5B, the segmentation method 200 performs step S2231b: if the content of the current image paragraph is static, the similarity between the image of the M-th unit and the image of the (M+1)-th unit is calculated to generate an image difference value. The operation of step S2231b is the same as that of step S2231a and is not repeated here.

Following the above, the segmentation method 200 further performs step S2232b: if the image difference value is not greater than the second image threshold value, the image of the (M+1)-th unit is merged into the current image paragraph. As in the above embodiment, an image difference value less than or equal to the second image threshold value indicates that the image in the second following the current image paragraph still belongs to the static picture, so the image data of the 31st second can be merged into the current image paragraph. Next, the segmentation method 200 further performs step S2233b: if the image difference value is greater than the second image threshold value, the at least one image paragraph mark is generated at the time position of the image of the (M+1)-th unit, and M units of the image data are selected and divided into the next image paragraph. In view of the above, an image difference value greater than the second image threshold value indicates that the image in the second following the current image paragraph may belong to a dynamic picture; an image paragraph mark is therefore generated at the time position of the 31st second of the image data, and the next paragraph to be processed becomes the image data of the 31st to 60th seconds. Both branches are sketched together below.
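
The following sketch covers steps S2231a to S2233b under the same assumptions as above: the current paragraph grows one unit at a time while its content type persists, and a paragraph mark is emitted where the type flips. The helper `frame_diff`, the thresholds, and the `classify` callable are illustrative, not prescribed by the disclosure.

```python
import numpy as np

def frame_diff(a, b):
    # Mean absolute gray-level difference between two frames (assumption).
    return np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))

def mark_image_paragraphs(rep_frames, m_units, second_threshold, classify):
    """Emit image paragraph marks (unit indices) over a whole film.

    rep_frames: one representative grayscale frame per unit (e.g. per second).
    classify: callable returning "dynamic" or "static" for a list of frames,
    e.g. lambda seg: classify_segment(seg, 3, 10.0) from the sketch above.
    """
    marks = []
    start = 0
    while start + m_units <= len(rep_frames):
        end = start + m_units                 # current paragraph: [start, end)
        content = classify(rep_frames[start:end])
        while end < len(rep_frames):
            diff = frame_diff(rep_frames[end - 1], rep_frames[end])
            # Dynamic paragraphs keep growing while frames keep differing;
            # static paragraphs keep growing while frames stay similar.
            still_same = (diff > second_threshold) if content == "dynamic" \
                         else (diff <= second_threshold)
            if still_same:
                end += 1                      # merge the (M+1)-th unit
            else:
                marks.append(end)             # paragraph mark at the flip
                break
        start = end                           # the next paragraph starts here
    return marks
```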

In another embodiment, the similarity between images may be compared using the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the texture or color of the images, or a specific shape (pattern), and the present disclosure is not limited thereto.
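
As one illustration, a PSNR comparison could stand in for `frame_diff` in the sketches above; this is the standard formula, not a method prescribed by the disclosure.

```python
import numpy as np

def psnr(frame_a, frame_b, max_val=255.0):
    """Peak signal-to-noise ratio between two frames in decibels;
    a higher value means the frames are more similar."""
    mse = np.mean((frame_a.astype(np.float64) - frame_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```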

Then, the segmentation method 200 further performs step S230 to perform segmentation processing on the sound data to generate the at least one sound paragraph mark. Step S230 further includes steps S231 to S233; please refer to FIG. 6, which is a flowchart of step S230 according to some embodiments of the present application. As shown in FIG. 6, the segmentation method 200 first performs step S231 to convert the sound data into a sound time domain signal and a sound frequency domain signal, respectively. In one embodiment, the sound data may be converted into the frequency domain signal using a Fourier transform, but the present disclosure is not limited thereto. Differences in timbre and pitch can then be detected in the transformed signal and used as a basis for determining the sound paragraph marks.
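
A minimal sketch of step S231, assuming the raw waveform itself serves as the time domain signal and a per-window magnitude spectrum serves as the frequency domain signal; the window length is an assumption, since the disclosure only requires a Fourier transform.

```python
import numpy as np

def to_time_and_frequency(samples, sample_rate, window_seconds=5.0):
    """Return the time domain signal and a list of per-window spectra."""
    win = int(sample_rate * window_seconds)
    n_windows = len(samples) // win
    spectra = [np.abs(np.fft.rfft(samples[i * win:(i + 1) * win]))
               for i in range(n_windows)]       # magnitude spectrum per window
    return samples, spectra
```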

Following the above, the segmentation method 200 further performs step S232 to select a time domain section from the sound time domain signal and determine whether the amplitude of the time domain section is smaller than the first threshold value; if so, the at least one sound paragraph mark is generated. In this embodiment, a window is used to select the time domain section from the sound time domain signal; for example, the window size can be set to 5 seconds, so that the time domain section is a 5-second sound time domain signal. It is then determined whether the amplitude of this 5-second sound time domain signal (the time domain section) is smaller than the first threshold value; if so, the 5 seconds of signal selected by the window is a possibly silent section, indicating that there may be a pause in the sound time domain signal. Thus, the sound paragraph mark is generated when the amplitude of the time domain section is smaller than the first threshold value.
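
A sketch of step S232, assuming the peak absolute sample value within each window stands in for "the amplitude of the time domain section"; window length and threshold are hypothetical.

```python
import numpy as np

def silent_window_marks(samples, sample_rate, window_seconds, first_threshold):
    """Emit sound paragraph marks (seconds) at near-silent windows."""
    win = int(sample_rate * window_seconds)
    marks = []
    for i in range(0, len(samples) - win + 1, win):
        if np.max(np.abs(samples[i:i + win])) < first_threshold:
            marks.append(i / sample_rate)   # likely pause: mark its start time
    return marks
```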

Following the above, the segmentation method 200 further performs step S233 to select a first frequency domain section and a second frequency domain section from the sound frequency domain signal, determine whether the difference value of the spectral intensities of the two sections is greater than the second threshold value, and, if so, generate the at least one sound paragraph mark. In this embodiment, a window is used to select the frequency domain sections from the sound frequency domain signal; for example, the window size may be set to m seconds, so that the first frequency domain section and the second frequency domain section are each m seconds of the sound frequency domain signal (the two selected sections are different). The window sizes (lengths) used for the sound time domain signal and the sound frequency domain signal may be the same or different, and the present disclosure is not limited thereto. It is then determined whether the difference value of the spectral intensities of the first frequency domain section and the second frequency domain section exceeds the second threshold value. If it does, the m-second sound frequency domain signals selected by the windows may contain different timbres or pitches, indicating that there may be different human voices. Thus, the sound paragraph mark is generated when the difference in spectral intensity between the frequency domain sections is greater than the second threshold value.
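
A sketch of step S233, consuming the `spectra` list from the earlier sketch; summing the absolute per-bin magnitude differences as "the difference value of the spectral intensities" is an illustrative assumption.

```python
import numpy as np

def spectral_change_marks(spectra, window_seconds, second_threshold):
    """Emit sound paragraph marks (seconds) where consecutive
    frequency domain windows differ sharply in spectral intensity,
    suggesting a change of timbre or pitch (e.g. a different speaker)."""
    marks = []
    for i in range(1, len(spectra)):
        diff = np.sum(np.abs(spectra[i] - spectra[i - 1]))
        if diff > second_threshold:
            marks.append(i * window_seconds)   # mark time in seconds
    return marks
```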

Next, the segmentation method 200 further performs step S240 to compare the difference between the image mark time of the at least one image paragraph mark and the sound mark time of the at least one sound paragraph mark to generate the at least one film content mark. In one embodiment, the image paragraph marks generated in step S220 and the sound paragraph marks generated in step S230 are integrated to generate the film content marks. For example, suppose the image data is divided into five paragraphs, with image paragraph marks at paragraph one (00:45), paragraph two (01:56), paragraph three (03:25), paragraph four (05:10), and paragraph five (05:55), while the sound data is divided into four paragraphs, with sound paragraph marks at paragraph one (02:02), paragraph two (03:12), paragraph three (04:30), and paragraph four (05:00). Assuming the threshold value is 15 seconds, the difference between the image paragraph mark 01:56 of paragraph two and the sound paragraph mark 02:02 of paragraph one is within the threshold value, so the average of the two mark times can be used as the film content mark; the video source thus has a film content mark for paragraph one at 01:59. Continuing, the difference between the image paragraph mark 03:25 of paragraph three and the sound paragraph mark 03:12 of paragraph two is within the threshold value, as is the difference between the image paragraph mark 05:10 of paragraph four and the sound paragraph mark 05:00 of paragraph four, so the film content mark 03:18 of paragraph two and the film content mark 05:05 of paragraph three can be generated, respectively. As can be seen from the above, the time differences between the image paragraph mark 00:45 of paragraph one, the image paragraph mark 05:55 of paragraph five, and the sound paragraph mark 04:30 of paragraph three and every other mark are greater than the threshold value, so these marks are ignored. Finally, the generated film content marks are stored in the course database DB of the storage unit 110.
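
A sketch of step S240 using the example numbers above (times in seconds); pairing each image paragraph mark with its nearest sound paragraph mark is an illustrative simplification of the comparison described.

```python
def merge_marks(image_marks, sound_marks, threshold=15.0):
    """Average each image paragraph mark with its nearest sound paragraph
    mark when they fall within the threshold; unmatched marks are ignored."""
    content_marks = []
    for im in image_marks:
        nearest = min(sound_marks, key=lambda sm: abs(sm - im), default=None)
        if nearest is not None and abs(nearest - im) <= threshold:
            content_marks.append((im + nearest) / 2.0)
    return content_marks

image_marks = [45, 116, 205, 310, 355]   # 00:45, 01:56, 03:25, 05:10, 05:55
sound_marks = [122, 192, 270, 300]       # 02:02, 03:12, 04:30, 05:00
print(merge_marks(image_marks, sound_marks))
# -> [119.0, 198.5, 305.0], i.e. 01:59, ~03:18, 05:05, matching the example;
#    45, 355, and 270 find no partner within 15 seconds and are ignored.
```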

According to the embodiments of the present application, the prior-art problem that marking film paragraphs manually consumes a great deal of labor and time is solved. Paragraph marks are generated separately for the image data and the sound data, and the film content marks are then generated from the image paragraph marks and the sound paragraph marks, thereby segmenting a learning film automatically.

Additionally, the above illustration includes exemplary steps in sequential order, but the steps need not be performed in the order shown. It is within the contemplation of the disclosure that these steps may be performed in a different order. Steps may be added, substituted, changed in order, and/or omitted as appropriate within the spirit and scope of embodiments of the disclosure.

Although the present disclosure has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made by one skilled in the art without departing from the spirit and scope of the disclosure, and therefore, the scope of the disclosure should be determined by that of the appended claims.
