Processing method and device for dynamically generating audio and video clips and electronic equipment

Document No.: 1315135  Publication date: 2020-07-10

Note: This technology, "Processing method and device for dynamically generating audio and video clips and electronic equipment", was designed and created by 王家万 on 2020-02-04. Its main content is as follows: The embodiment of the invention provides a processing method and device for dynamically generating audio/video clips, and an electronic device. The method includes: acquiring an audio/video clip and dividing it into a plurality of audio/video units; detecting the plurality of audio/video units to obtain a first audio/video unit, the first audio/video unit being a unit selected according to a predetermined criterion; acquiring, based on the first audio/video unit, at least one second audio/video unit associated with it as a candidate audio/video unit; and generating a first audio/video clip based on the acquired first audio/video unit and the candidate audio/video unit. By acquiring the audio/video units associated with a specified audio/video unit and generating an audio/video clip from the specified unit together with its associated units, the embodiment expands the specified unit, making the scene in which it occurs convenient to view and analyze.

1. A processing method for dynamically generating audio/video clips, comprising:

acquiring an audio/video clip, and dividing the audio/video clip into a plurality of audio/video units;

detecting the plurality of audio/video units to obtain a first audio/video unit, wherein the first audio/video unit is an audio/video unit selected according to a predetermined criterion;

acquiring, based on the first audio/video unit, at least one second audio/video unit associated with the first audio/video unit as a candidate audio/video unit;

and generating a first audio/video clip based on the acquired first audio/video unit and the candidate audio/video unit.

2. The method of claim 1, wherein the second audio/video unit comprises:

at least one audio/video unit within a preset time range, or a preset number of audio/video units, immediately preceding the first audio/video unit.

3. The method according to claim 1 or 2, wherein the second audio/video unit comprises:

at least one audio/video unit within a preset time range, or a preset number of audio/video units, immediately following the first audio/video unit.

4. The method of claim 3, further comprising:

detecting whether a third audio/video unit exists among the at least one second audio/video unit following the first audio/video unit, wherein the third audio/video unit is an audio/video unit selected according to the predetermined criterion;

if the third audio/video unit exists, acquiring at least one fourth audio/video unit that follows the second audio/video unit and is associated with the third audio/video unit;

and taking the at least one fourth audio/video unit, together with the at least one second audio/video unit, as the candidate audio/video units.

5. The method of claim 3, further comprising:

detecting whether a fifth audio/video unit exists among the audio/video units following the second audio/video unit, wherein the fifth audio/video unit is an audio/video unit selected according to the predetermined criterion from among the audio/video units within a preset time or a preset number following and adjacent to the second audio/video unit;

if the fifth audio/video unit exists, acquiring at least one sixth audio/video unit between the second audio/video unit and the fifth audio/video unit;

and taking the at least one sixth audio/video unit, the fifth audio/video unit and the at least one second audio/video unit together as the candidate audio/video units.

6. The method of claim 1, further comprising:

if the number of audio/video units contained in the first audio/video clip is greater than a preset threshold, dividing the first audio/video clip into a plurality of second audio/video clips, wherein the number of audio/video units contained in each second audio/video clip is less than the preset threshold.

7. A processing device for dynamically generating audio-video clips, comprising:

an audio/video unit dividing module, configured to acquire an audio/video clip and divide the audio/video clip into a plurality of audio/video units;

a first detection module, configured to detect the plurality of audio/video units to obtain a first audio/video unit, wherein the first audio/video unit is an audio/video unit selected according to a predetermined criterion;

an associated audio/video unit acquisition module, configured to acquire, based on the first audio/video unit, at least one second audio/video unit associated with the first audio/video unit as a candidate audio/video unit;

and an audio/video clip generation module, configured to generate a first audio/video clip based on the acquired first audio/video unit and the candidate audio/video unit.

8. The apparatus of claim 7, wherein the second audio/video unit comprises:

at least one audio/video unit within a preset time range, or a preset number of audio/video units, immediately preceding the first audio/video unit.

9. The apparatus according to claim 7 or 8, wherein the second audio/video unit comprises:

at least one audio/video unit within a preset time range, or a preset number of audio/video units, immediately following the first audio/video unit.

10. The apparatus of claim 9, further comprising:

a second detection module, configured to detect whether a third audio/video unit exists among the at least one second audio/video unit following the first audio/video unit, wherein the third audio/video unit is an audio/video unit selected according to the predetermined criterion;

if the third audio/video unit exists, acquire at least one fourth audio/video unit that follows the second audio/video unit and is associated with the third audio/video unit;

and take the at least one fourth audio/video unit, together with the at least one second audio/video unit, as the candidate audio/video units.

11. The apparatus of claim 7, further comprising:

a third detection module, configured to detect whether a fifth audio/video unit exists among the audio/video units following the second audio/video unit, wherein the fifth audio/video unit is an audio/video unit selected according to the predetermined criterion from among the audio/video units within a preset time or a preset number following and adjacent to the second audio/video unit;

if the fifth audio/video unit exists, acquire at least one sixth audio/video unit between the second audio/video unit and the fifth audio/video unit;

and take the at least one sixth audio/video unit, the fifth audio/video unit and the at least one second audio/video unit together as the candidate audio/video units.

12. The apparatus of claim 7, further comprising:

an audio/video clip dividing module, configured to divide the first audio/video clip into a plurality of second audio/video clips when the number of audio/video units contained in the first audio/video clip is greater than a preset threshold, wherein the number of audio/video units contained in each second audio/video clip is less than the preset threshold.

13. An electronic device, comprising:

a memory for storing a program;

a processor for running the program stored in the memory to execute the processing method for dynamically generating audio/video clips according to any one of claims 1 to 6.

Technical Field

The present application relates to a processing method and device for dynamically generating audio/video clips, and to an electronic device, and belongs to the technical field of computers.

Background

In the field of audio/video processing, specified audio/video units that are needed can be screened out by a predetermined detection criterion. However, these specified audio/video units are discontinuous, which is inconvenient for a video viewer to watch and analyze.

Disclosure of Invention

The embodiments of the invention provide a processing method and device for dynamically generating audio/video clips, and an electronic device, which expand a specified audio/video unit into an audio/video clip to facilitate viewing, analysis, and processing.

To achieve the above object, an embodiment of the present invention provides a processing method for dynamically generating audio/video clips, including:

acquiring an audio/video clip, and dividing the audio/video clip into a plurality of audio/video units;

detecting the plurality of audio/video units to obtain a first audio/video unit, wherein the first audio/video unit is an audio/video unit selected according to a predetermined criterion;

acquiring, based on the first audio/video unit, at least one second audio/video unit associated with the first audio/video unit as a candidate audio/video unit;

and generating a first audio/video clip based on the acquired first audio/video unit and the candidate audio/video unit.

An embodiment of the present invention further provides a processing device for dynamically generating audio/video clips, including:

an audio/video unit dividing module, configured to acquire an audio/video clip and divide the audio/video clip into a plurality of audio/video units;

a first detection module, configured to detect the plurality of audio/video units to obtain a first audio/video unit, wherein the first audio/video unit is an audio/video unit selected according to a predetermined criterion;

an associated audio/video unit acquisition module, configured to acquire, based on the first audio/video unit, at least one second audio/video unit associated with the first audio/video unit as a candidate audio/video unit;

and an audio/video clip generation module, configured to generate a first audio/video clip based on the acquired first audio/video unit and the candidate audio/video unit.

An embodiment of the present invention further provides an electronic device, including:

a memory for storing a program;

and a processor for running the program stored in the memory to execute the above method for dynamically generating audio/video clips.

In the embodiments of the invention, the audio/video units associated with a specified audio/video unit are acquired, and an audio/video clip is generated based on the specified audio/video unit and its associated units. The specified audio/video unit is thereby expanded, so that the scene in which it occurs can be conveniently viewed and analyzed.

The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, and to make the above and other objects, features, and advantages more readily understandable, embodiments of the invention are described below.

Drawings

Fig. 1 is a schematic view of an application scenario of a processing method for dynamically generating audio/video clips according to an embodiment of the present invention;

fig. 2 is a schematic flow chart of a processing method for dynamically generating audio/video clips according to an embodiment of the present invention;

fig. 3 is a schematic structural diagram of a processing device for dynamically generating audio/video clips according to an embodiment of the present invention;

fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

In the present application, an audio/video clip and an audio/video unit may contain audio, video, or a combination of audio and video.

In the field of audio/video processing, specified audio/video units that are needed can be screened out by a predetermined detection criterion, but these specified units are discontinuous, which is inconvenient for a video viewer to watch and analyze. Take surveillance video as an example: in the scene of a monitored room, a specified audio/video unit in which a person appears in the room is detected in the video. In this case, the isolated audio/video unit cannot reflect how the person entered the room or what the person did afterwards, which hinders viewing and analysis of the video scene.

In the embodiments of the invention, the audio/video units associated with a specified audio/video unit are acquired as candidate audio/video units, and an audio/video clip convenient to watch is generated based on the specified unit and its candidates.

Specifically, an audio/video clip captured by a video monitoring device can be divided into a plurality of audio/video units, and the units are then detected to screen out the specified units that meet a predetermined criterion. For example, the predetermined detection criterion may be to compare each audio/video unit with a reference unit and to screen out the units whose degree of difference exceeds a preset threshold. In the embodiments of the present application, the reference unit may be a preset picture, or a picture generated from the audio/video units in the clip. By comparing each unit's image with the reference unit, the screened-out specified units are those with larger changes in the actual scene. Alternatively, the predetermined criterion may be to screen out one specified unit every predetermined number of units, for example, one every five audio/video units.
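As an illustration only, not part of the claimed method, the reference-comparison criterion above can be sketched in Python, assuming each audio/video unit is represented as a flat list of pixel intensities and using mean absolute difference as the degree of difference (the function names are hypothetical):

```python
def mean_abs_diff(unit, reference):
    """Degree of difference: average per-pixel absolute difference."""
    return sum(abs(a - b) for a, b in zip(unit, reference)) / len(unit)

def detect_specified(units, reference, threshold):
    """Indices of units whose difference from the reference exceeds the threshold."""
    return [i for i, u in enumerate(units)
            if mean_abs_diff(u, reference) > threshold]
```

A production system would compare decoded frames or audio features rather than raw lists, but the screening logic is the same.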

When a specified audio/video unit meeting the predetermined criterion is detected, the audio/video units associated with it can be acquired as candidate audio/video units, and an audio/video clip convenient to watch can then be generated from the specified unit and its candidates. In this embodiment of the present application, the candidate units associated with a specified unit may include units whose images reflect scenes that have a certain association with the scene reflected by the specified unit's image: for example, units adjacent to the specified unit, within a preset time range or a preset number of units before and/or after it. In the room-monitoring scenario above, after a specified unit in which a person appears in the room is detected, the units in the few seconds before the specified unit can reflect how the person entered the room, and the units after it will reflect what the person did after entering. Therefore, the units within a preset time range, or a preset number of units, before and after the specified unit can serve as its associated candidate audio/video units.

The following description takes, as an example, acquiring a preset number of audio/video units adjacent to the specified unit as candidates. Fig. 1 shows an application scenario of the processing method for dynamically generating audio/video clips in the embodiment of the present invention. The captured audio/video clip is divided into a plurality of audio/video units f1 to fn. In the process of detecting the units f1 to fn, when the unit f6 is detected to be a specified unit (referred to for convenience as the first audio/video unit), the number of associated units to be acquired can be preset according to the requirements of scene viewing. For example, in the scenario above, the 3 units before and the 3 units after the unit in which the person appears can reflect how the person entered the room and what he or she did afterwards. The preset number of associated units can then be set to 3: taking the first unit f6 as the reference, the 3 units before and adjacent to f6, namely f3, f4 and f5, are acquired as second audio/video units, and the 3 units after and adjacent to f6, namely f7, f8 and f9, are likewise acquired as second units. The second units are taken as candidate units, and a first audio/video clip convenient to watch is then generated from the first unit f6 and its candidates f3 to f5 and f7 to f9.
For example, the units f3 to f5 form the first half of the first clip and the units f7 to f9 form the second half, with the first unit f6 joining the two halves, so that together they form a clip composed of the units f3 to f9 (to distinguish it from the clips in the following embodiments, the clip f3 to f9 is referred to as clip A).
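The expansion of f6 into clip A can be sketched as follows (a hypothetical helper, with the window clamped at the clip boundaries; indices are 0-based):

```python
def expand_unit(units, index, before=3, after=3):
    """Take the `before` preceding and `after` following units adjacent to
    the specified unit (clamped to the clip boundaries) and return the
    contiguous first clip containing the specified unit."""
    start = max(0, index - before)
    end = min(len(units), index + after + 1)
    return units[start:end]
```

With units f1 to f12 and the first unit f6 at index 5, `expand_unit` returns f3 through f9, i.e. clip A.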

It should be noted that, during detection of specified units, the units previously judged to be unspecified may be temporarily stored in a buffer space, so that a specified unit detected later can still be expanded by acquiring the units associated with it. The units may be kept for a preset time, or the retention time may be adjusted dynamically according to the actual situation.
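The temporary buffering described above can be sketched with a bounded deque (the buffer size and the helper names are illustrative assumptions):

```python
from collections import deque

def make_buffer(before=3):
    """Bounded buffer holding the most recent units judged unspecified."""
    return deque(maxlen=before)

def on_unit(unit, is_specified, buffer):
    """If the unit is specified, return it together with the buffered
    preceding units; otherwise just buffer it and return None."""
    if is_specified:
        window = list(buffer) + [unit]
        buffer.clear()
        return window
    buffer.append(unit)
    return None
```

Feeding f1 to f5 as unspecified and then f6 as specified yields f3 to f6: the specified unit plus its three buffered predecessors.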

In addition, the preset numbers of associated units before and after the specified unit in the above example may be the same or different, according to actual needs. For example, in the scenario above, after the specified unit in which a person appears in the room is detected, if the video after the person enters the room needs to be viewed in detail, more units following the specified unit can be acquired, for example 5 units after it, while fewer are acquired before it, for example just 1. The preset numbers of associated units can also be adjusted dynamically for a specific scene.

In addition, in the above expansion of a specified unit, that is, the process of acquiring its associated units as candidates and generating a clip from the specified unit and the candidates, the candidates may be taken from the adjacent units before the specified unit, from the adjacent units after it, or from both before and after it as in the example above, according to the requirements of the actual scene. For example, in the scenario above, when it is only necessary to check what a person did after entering the room, only the adjacent units after the specified unit may be acquired as candidates, and the clip generated from the specified unit and those following candidates.

The above example describes the process of detecting the specified unit f6 and generating a first clip, namely clip A composed of the units f3 to f9, from the specified unit and its associated candidates; it reflects the most basic principle of the embodiments of the invention. In addition, among the second units f7 to f9 that follow and are associated with the first unit f6, there may also be a specified unit (referred to for convenience as the third audio/video unit, a unit selected by the predetermined criterion). Since the third unit is itself associated with the first unit f6, the units associated with the third unit should also have a certain correlation with f6. Therefore, the fourth audio/video units associated with the third unit may also be taken as candidate units.

Specifically, as shown in fig. 1, after the specified unit f6 is detected, detection may continue over the candidate units f7 to f9 that follow f6. When a specified unit f8 is detected (referred to for distinction as the third audio/video unit, a unit selected by the predetermined criterion), the units associated with the third unit can be acquired as fourth audio/video units in order to view the video scene related to f8, and the fourth and second units taken together as candidates. For example, consider the 3 units before and/or after the third unit: the 3 units before and adjacent to f8 are f5, f6 and f7, and f5 and f7 have already been determined as candidates of the first unit f6. The 3 units after and adjacent to f8 can therefore be acquired; among them, f9 has already been determined as a candidate, so the units that follow the second units and are associated with the third unit, namely f10 and f11, are acquired as the fourth units and taken, together with the second units, as the candidate units.
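The extension for a third unit found among the trailing candidates can be sketched as follows (0-based indices; `specified` is the set of indices selected by the predetermined criterion; a single pass over the original trailing candidates is assumed, as in the f8 example):

```python
def extend_for_third(units, specified, first_idx, after=3):
    """Expand the first unit into its basic window; if a trailing second
    unit is itself specified (a third unit), push the window end out so
    that its own `after` following units (the fourth units) are included."""
    start = max(0, first_idx - after)
    end = min(len(units), first_idx + after + 1)
    for i in range(first_idx + 1, end):       # trailing second units
        if i in specified:                    # a third unit was found
            end = max(end, min(len(units), i + after + 1))
    return units[start:end]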

In addition, after the specified unit f6 is detected and its second units determined, there may also be a specified unit within the range of units following and close to the second units, separated from them by only a small number of units (referred to for distinction as the fifth audio/video unit, a unit selected by the predetermined criterion). Because only a few units lie between it and the second units serving as candidates, the fifth unit may be correlated with the scene of the first unit f6 and the second units. For example, in the scenario above, besides the units f3 to f9 in which one person enters the room, a specified unit f13 is detected in which another person appears in the room (the fifth unit), and the number of units between f13 and f9 is less than a preset threshold. From the perspective of scene development, scenes in which two people appear in the room one after the other are strongly correlated. Therefore, to let the viewer watch a longer associated video scene, the units f10, f11 and f12 between f9 and f13 can also be acquired as sixth audio/video units, and the fifth, sixth and second units taken together as the candidate units.
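Bridging to a nearby fifth unit can be sketched as follows (0-based indices; the hypothetical `max_gap` parameter plays the role of the preset threshold on the number of intervening units):

```python
def bridge_to_fifth(units, specified, start, end, max_gap=4):
    """If another specified unit (the fifth unit) lies within `max_gap`
    units after the current window [start, end), include it and the
    intervening sixth units in the clip."""
    for i in range(end, min(len(units), end + max_gap)):
        if i in specified:
            return units[start:i + 1]   # sixth units + fifth unit included
    return units[start:end]
```

With the window f3–f9 (indices 2 to 9) and a fifth unit f13 at index 12, the clip grows to f3–f13.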

A first clip with a longer viewing time, namely the clip composed of the units f3 to f13 (referred to for convenience as clip B), is then generated from the first unit and the candidate units.

It should be noted that if no fifth unit f13 meeting the predetermined criterion is detected after the second units, a first clip is generated from the first unit and the candidates composed of the second and fourth units, namely the clip composed of the units f3 to f11 (referred to here as clip C).

In addition, during detection, the units following and adjacent to the second units can be examined first. If a fifth unit f13 meeting the predetermined criterion is detected, the units f10, f11 and f12 between the second units and the fifth unit are acquired as sixth units, the second, sixth and fifth units are taken together as candidates, and a first clip, namely clip B composed of the units f3 to f13, is generated from the first unit and the candidates. Detection of a specified unit among the second units after the first unit f6 is thereby omitted, saving processing resources and improving processing efficiency. If no fifth unit is detected, the second units following and associated with the first unit f6 are examined: if a third unit f8 is found, the fourth units f10 and f11 that follow the second units and are associated with it are acquired, the fourth and second units are taken together as candidates, and a first clip, namely clip C composed of the units f3 to f11, is generated. If no third unit is detected either, a first clip, namely clip A composed of the units f3 to f9, is generated from the first unit and the candidates (the second units).
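The order of checks just described (fifth unit first, then third unit, then the basic window) can be sketched as a single decision flow (0-based indices; parameter names are illustrative assumptions):

```python
def generate_first_clip(units, specified, first_idx, k=3, max_gap=4):
    """Generate the first clip around the first unit at `first_idx`:
    prefer clip B (a fifth unit shortly after the window), then clip C
    (a third unit among the trailing second units), then clip A."""
    start = max(0, first_idx - k)
    end = min(len(units), first_idx + k + 1)
    # Fifth unit: a specified unit within `max_gap` units after the window.
    for i in range(end, min(len(units), end + max_gap)):
        if i in specified:
            return units[start:i + 1]                       # clip B
    # Third unit: a specified unit among the trailing second units.
    for i in range(first_idx + 1, end):
        if i in specified:
            return units[start:min(len(units), i + k + 1)]  # clip C
    return units[start:end]                                 # clip A
```

With f6 at index 5: detections at {f6, f8, f13} yield clip B (f3–f13), detections at {f6, f8} yield clip C (f3–f11), and f6 alone yields clip A (f3–f9).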

In addition, in the above scheme, the detected first unit and its associated second units may first be stored in the buffer space. If a third and/or fifth unit is detected within a preset time, the clip is generated from the first unit and the candidates composed of the second units, the fourth units, and/or the sixth and fifth units. Otherwise, the first clip is generated from the first unit and the candidates composed of the second units alone.

In addition, the generated first clip may contain too many units, that is, it may be too long, which is again unfavorable for viewing. For example, in the room-monitoring scene, if a specified unit in which a person appears is detected and a first clip is generated from that unit and its associated candidates, a monitoring person who wants to check how the person entered the room may have difficulty finding the key moment because the generated first clip is too long.

In this case, if the length of the generated first clip exceeds a preset maximum clip length, that is, the number of units contained is greater than a preset threshold, the first clip may be divided into a plurality of second clips, for example into two clips.

Further, the clip may be divided in two ways. One way is to divide the first clip into a plurality of second clips of fixed length. For example, suppose the number of units in clip B (f3 to f13) generated in the scenario above exceeds a preset threshold of 8, and the preset length of a second clip is 5 units. Clip B can then be divided into the clip f3 to f7, the clip f8 to f12, and the leftover unit f13. A clip consisting of the single unit f13 is not good for viewing, so in this case f13 can be merged with the clip f8 to f12 to produce the clip f8 to f13 (whose number of units is still less than the threshold).

Alternatively, the first clip may be divided into second clips of variable length, for example according to scene relevance, where scene relevance is detected by a preset detection rule. For example, it may be stipulated that units whose images contain a character A are scene-related, and the consecutive scene-related units are grouped into one clip. In clip B (f3 to f13), if detection of image features shows that character A appears in the units f3 to f9, then f3 to f9 can be divided into one second clip; and since the number of remaining units f10 to f13 is less than the preset clip length threshold (assumed to be 8), they can be divided into another second clip.
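The scene-relevance division can be sketched with a per-unit scene label (here, whether character A is detected; the labelling step itself, e.g. face or object detection, is outside the scope of this sketch):

```python
def split_by_scene(clip, labels):
    """Split the clip at each boundary where the detected scene label
    changes, keeping runs of equally-labelled units together."""
    pieces, current = [], [clip[0]]
    for unit, prev_label, label in zip(clip[1:], labels, labels[1:]):
        if label == prev_label:
            current.append(unit)
        else:
            pieces.append(current)
            current = [unit]
    pieces.append(current)
    return pieces
```

With character A detected in f3 to f9 of clip B, this yields the two second clips f3–f9 and f10–f13.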

In the embodiments of the invention, the audio/video units associated with a specified audio/video unit are acquired, and an audio/video clip is generated based on the specified audio/video unit and its associated units, thereby expanding the specified unit so that the scene in which it occurs can be conveniently viewed and analyzed.

The technical solution of the present invention is further illustrated by some specific examples.
