Video processing method, device, system, storage medium and electronic equipment

Document No.: 1231609    Publication date: 2020-09-08

Note: This technology, "Video processing method, device, system, storage medium and electronic equipment," was created by 张弓 on 2020-05-16. Abstract: The disclosure provides a video processing method, a video processing apparatus, a video processing system, a computer-readable storage medium, and an electronic device, relating to the technical field of video processing. The video processing method includes: acquiring a video to be processed, and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed; determining the degree of change of the foreground objects of two preset original video frames in the original video frame sequence; determining the selectable time phases for frame interpolation between the two original video frames according to that degree of change; and performing frame interpolation between the two original video frames based on the selectable time phases, to generate a processed video. The present disclosure can modify the inherent content of a video.

1. A video processing method, comprising:

acquiring a video to be processed, and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed;

determining the change degree of foreground objects of two preset original video frames in the original video frame sequence;

determining the selectable time phase of the frame to be interpolated between the two original video frames according to the change degree of the foreground objects of the two original video frames;

and performing frame interpolation between the two original video frames based on the selectable time phase of the frame to be interpolated between the two original video frames to generate a processed video.

2. The video processing method of claim 1, wherein determining a sequence of original video frames to be interpolated in the video to be processed comprises:

extracting a video clip containing the foreground object from the video to be processed;

and determining an original video frame sequence to be subjected to frame interpolation from the video segment containing the foreground object.

3. The video processing method of claim 2, wherein determining an original sequence of video frames to be interpolated from a video segment containing the foreground object comprises:

extracting a video frame of which the foreground object meets the motion attribute requirement from a video segment containing the foreground object;

and generating the original video frame sequence by utilizing the video frames of which the foreground objects meet the motion attribute requirements.

4. The video processing method according to claim 2, wherein the video segments containing the foreground object in the video to be processed comprise a first video segment and a second video segment; wherein determining an original video frame sequence to be subjected to frame interpolation from the video segments containing the foreground object comprises:

extracting at least one video frame from the first video segment;

extracting at least one video frame from the second video segment;

and generating the original video frame sequence using the at least one video frame extracted from the first video segment and the at least one video frame extracted from the second video segment.

5. The video processing method according to claim 4, wherein at least one video frame extracted from the first video segment contains the foreground object having a first motion attribute, and at least one video frame extracted from the second video segment contains the foreground object having a second motion attribute;

wherein the first motion attribute is different from the second motion attribute.

6. The video processing method according to claim 1, wherein determining the selectable temporal phase to be interpolated between the two original video frames according to the degree of change of the foreground object of the two original video frames comprises:

determining the maximum number of insertable frames between the two original video frames according to the degree of change of the foreground objects of the two original video frames;

acquiring a time interval between the two original video frames;

and dividing the time interval between the two original video frames into equal phases according to the maximum number of insertable frames, to determine the selectable time phases for frame interpolation between the two original video frames.

7. The video processing method of claim 6, wherein interpolating between the two original video frames based on the selectable temporal phase of the frame to be interpolated between the two original video frames comprises:

acquiring the number of frames to be interpolated between the two original video frames;

if the acquired frame number is the same as the maximum number of insertable frames between the two original video frames, performing a frame interpolation operation at each of the selectable time phases;

if the acquired frame number is less than the maximum number of insertable frames between the two original video frames, interpolating frames between the two original video frames according to a frame interpolation rule;

wherein the frame interpolation rule comprises selecting, from the selectable time phases, target time phases equal in number to the acquired frame number, and performing the frame interpolation operation based on the target time phases.

8. The video processing method of claim 7, wherein performing frame interpolation based on the target temporal phase comprises:

performing the frame interpolation operation at the target time phases;

and for each non-target time phase among the selectable time phases, copying the video frame nearest to that non-target time phase, and configuring the copied video frame at that non-target time phase.

9. The video processing method according to claim 1, 7 or 8, wherein the two original video frames comprise a first original video frame and a second original video frame; wherein performing frame interpolation between the two original video frames comprises:

performing an image blocking operation on the first original video frame and on the second original video frame respectively, and determining a motion vector of an image block in the first original video frame relative to the second original video frame;

determining, according to the motion vector of the image block in the first original video frame relative to the second original video frame, and based on a time phase at which frame interpolation is to be performed between the first original video frame and the second original video frame, mapped motion vectors, relative to the first original video frame and the second original video frame, of an interpolation block in the interpolation frame image to be generated that corresponds to the image block;

searching for corresponding image blocks in the first original video frame and in the second original video frame according to the mapped motion vectors;

and performing an interpolation operation on the found image blocks, generating the interpolation frame image from the result of the interpolation operation, and configuring the interpolation frame image at the corresponding time phase.

10. The video processing method of claim 2, wherein generating the processed video comprises:

acquiring the positions of the two original video frames in the video to be processed;

and fusing the frame interpolation result to the position.

11. A video processing apparatus, comprising:

the video acquisition module is used for acquiring a video to be processed and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed;

the object change degree determining module is used for determining the change degree of foreground objects of two preset original video frames in the original video frame sequence;

the phase determining module is used for determining the selectable time phase of the frame to be interpolated between the two original video frames according to the change degree of the foreground objects of the two original video frames;

and the frame interpolation processing module is used for performing frame interpolation between the two original video frames based on the selectable time phase of the frame to be interpolated between the two original video frames to generate a processed video.

12. A video processing system, comprising:

the video extractor is used for acquiring a video to be processed and extracting an original video frame sequence to be subjected to frame interpolation from the video to be processed;

the video generator is used for determining the degree of change of the foreground objects of two preset original video frames in the original video frame sequence, determining the selectable time phases for frame interpolation between the two original video frames according to the degree of change of the foreground objects of the two original video frames, and performing frame interpolation between the two original video frames based on the selectable time phases to obtain a frame interpolation result;

and the video fusion device is used for acquiring the positions of the two original video frames in the video to be processed, fusing the frame interpolation result to the positions and generating the processed video.

13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the video processing method according to any one of claims 1 to 10.

14. An electronic device, comprising:

a processor;

memory for storing one or more programs which, when executed by the processor, cause the processor to implement the video processing method of any of claims 1 to 10.

Technical Field

The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, a video processing system, a computer-readable storage medium, and an electronic device.

Background

With the development of video technology and Internet technology, watching videos on personal computers, mobile phones, tablets and other electronic devices has become commonplace, and the number of videos of all kinds is increasing rapidly.

At present, modifications to a video include adding special effects, adding text, and the like; such modifications cannot change the inherent content of the video itself. Approaches such as video cropping, meanwhile, may lose video information.

Disclosure of Invention

The present disclosure provides a video processing method, a video processing apparatus, a video processing system, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the problems of limited video modification approaches and poor modification effects.

According to a first aspect of the present disclosure, there is provided a video processing method, including: acquiring a video to be processed, and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed; determining the degree of change of the foreground objects of two preset original video frames in the original video frame sequence; determining selectable time phases for frame interpolation between the two original video frames according to the degree of change of the foreground objects of the two original video frames; and performing frame interpolation between the two original video frames based on the selectable time phases of the frames to be interpolated, to generate a processed video.

According to a second aspect of the present disclosure, there is provided a video processing apparatus comprising: the video acquisition module is used for acquiring a video to be processed and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed; the object change degree determining module is used for determining the change degree of foreground objects of two preset original video frames in the original video frame sequence; the phase determining module is used for determining the selectable time phase of the frame to be interpolated between the two original video frames according to the change degree of the foreground objects of the two original video frames; and the frame interpolation processing module is used for performing frame interpolation between the two original video frames based on the selectable time phase of the frame to be interpolated between the two original video frames to generate a processed video.

According to a third aspect of the present disclosure, there is provided a video processing system comprising: a video extractor used for acquiring a video to be processed and extracting an original video frame sequence to be subjected to frame interpolation from the video to be processed; a video generator used for determining the degree of change of the foreground objects of two preset original video frames in the original video frame sequence, determining the selectable time phases for frame interpolation between the two original video frames according to that degree of change, and performing frame interpolation between the two original video frames based on the selectable time phases to obtain a frame interpolation result; and a video fusion device used for acquiring the positions of the two original video frames in the video to be processed and fusing the frame interpolation result at those positions to generate the processed video.

According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video processing method described above.

According to a fifth aspect of the present disclosure, there is provided an electronic device comprising a processor; a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the video processing method described above.

In the technical solutions provided by some embodiments of the present disclosure, an original video frame sequence to be subjected to frame interpolation is determined from a video to be processed, the degree of change of the foreground objects of two original video frames in the original video frame sequence is determined, the selectable time phases for frame interpolation are determined according to that degree of change, frame interpolation is performed between the two original video frames in combination with the number of frames to be interpolated, and the processed video is generated from the result of the frame interpolation. On the one hand, the method can change the inherent content of a video through frame interpolation, thereby modifying the video content. On the other hand, the original video frames used for frame interpolation contain a foreground object, so the modified video can exhibit a visual effect of the foreground object changing gradually. Furthermore, because the selectable time phases of the frames to be interpolated are determined based on the degree of change of the foreground object, interpolated images can be configured flexibly even when the foreground object changes sharply, allowing the user to fully perceive the video content.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:

FIG. 1 shows a schematic diagram of an exemplary system architecture to which a video processing scheme of an embodiment of the present disclosure may be applied;

FIG. 2 illustrates a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure;

FIG. 3 schematically shows a flow chart of a video processing method according to an exemplary embodiment of the present disclosure;

FIG. 4 is a schematic diagram illustrating an original video frame sequence resulting from video frame decimation in accordance with an embodiment of the present disclosure;

FIG. 5 shows a schematic diagram of foreground object changes of adjacent original video frames according to an example embodiment of the present disclosure;

FIG. 6 shows a schematic diagram of the change of foreground objects after frame interpolation corresponding to FIG. 5;

FIG. 7 shows a schematic diagram of foreground object changes of another adjacent original video frame, according to an example embodiment of the present disclosure;

FIG. 8 shows a schematic diagram of the change of the foreground object after frame interpolation corresponding to FIG. 7;

FIG. 9 is a diagram illustrating motion vector determination based on motion estimation in an exemplary embodiment of the present disclosure;

FIG. 10 is a diagram illustrating a modified motion vector in an exemplary embodiment of the present disclosure;

FIG. 11 is a diagram illustrating frame interpolation based on motion compensation in an exemplary embodiment of the present disclosure;

FIG. 12 is a schematic diagram of a video segment after frame insertion according to one embodiment of the present disclosure;

FIG. 13 schematically illustrates a video processing architecture diagram with video decimation and video fusion functionality according to an exemplary embodiment of the present disclosure;

FIG. 14 schematically shows a block diagram of a video processing apparatus according to an exemplary embodiment of the present disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.

Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, all of the following terms "first" and "second" are used for distinguishing purposes only and should not be construed as limiting the present disclosure.

Fig. 1 shows a schematic diagram of an exemplary system architecture to which a video processing scheme of an embodiment of the present disclosure may be applied.

As shown in fig. 1, the system architecture 1000 may include one or more of terminal devices 1001, 1002, 1003, a network 1004, and a server 1005. The network 1004 is used to provide a medium for communication links between the terminal devices 1001, 1002, 1003 and the server 1005. Network 1004 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 1005 may be a server cluster composed of a plurality of servers.

A user may use the terminal devices 1001, 1002, 1003 to interact with a server 1005 via a network 1004 to receive or transmit messages or the like. The terminal devices 1001, 1002, 1003 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like. In addition, the server 1005 may be a server that provides various services.

In an embodiment that implements the video processing scheme of the present disclosure based on the server 1005, first, the server 1005 may obtain a video sent by the terminal devices 1001, 1002, and 1003 as the video to be processed, and determine an original video frame sequence to be subjected to frame interpolation in the video to be processed; next, the server 1005 may determine the degree of change of the foreground objects of two adjacent original video frames in the original video frame sequence, and determine the selectable time phases for frame interpolation between the two adjacent original video frames according to that degree of change; subsequently, the server 1005 may obtain the number of frames to be interpolated between the two adjacent original video frames, perform frame interpolation between them based on the obtained frame number and the selectable time phases, and generate a processed video corresponding to the video to be processed from the result of the frame interpolation. The server 1005 may also feed the processed video back to the terminal devices 1001, 1002, and 1003, which may play or save it.

In an embodiment of implementing the video processing scheme of the present disclosure based on the terminal devices 1001, 1002, and 1003, the terminal devices 1001, 1002, and 1003 may determine an original video frame sequence to be subjected to frame interpolation in a video to be processed, determine an optional time phase of the frame to be subjected to frame interpolation according to a change degree of foreground objects of two adjacent original video frames in the original video frame sequence, perform frame interpolation between the two adjacent original video frames by using the obtained frame number and optional time phase of the frame to be subjected to frame interpolation, and generate a processed video corresponding to the video to be processed by using a result of the frame interpolation.

In addition, the implementation process of the video processing scheme of the present disclosure can also be implemented by the terminal devices 1001, 1002, 1003 and the server 1005 together. For example, the terminal devices 1001, 1002, 1003 may acquire a video to be processed, determine an original video frame sequence to be frame-inserted in the video to be processed, and transmit the original video frame sequence to the server 1005. The server 1005 may determine a change degree of foreground objects of two adjacent original video frames in the original video frame sequence, and determine an optional time phase for frame interpolation according to the change degree, and after obtaining the number of frames to be frame interpolated, the server 1005 may perform frame interpolation between the two adjacent original video frames, and generate a video after frame interpolation.

Although the following description takes as an example that the terminal apparatuses 1001, 1002, 1003 perform the video processing procedure of the present disclosure, as explained above, the present disclosure does not limit the types of apparatuses that implement the steps of the video processing.

In addition, the video processing scheme of the present disclosure has wide application; for example, it can be applied to the post-production of television and film, as well as to video editing and video production.

FIG. 2 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. The terminal device according to the present disclosure may be configured in the form as shown in fig. 2. It should be noted that the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, cause the processor to implement the video processing method of the exemplary embodiments of the present disclosure.

Specifically, as shown in fig. 2, the electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, a key 294, and a Subscriber Identity Module (SIM) card interface 295, and the like. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.

It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors. Additionally, a memory may be provided in processor 210 for storing instructions and data.

The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini-USB interface, a Micro-USB interface, a USB Type-C interface, or the like. The USB interface 230 may be used to connect a charger to charge the electronic device 200, and may also be used to transmit data between the electronic device 200 and peripheral devices. It may further be used to connect earphones and play audio through them, or to connect other electronic devices, such as AR devices.

The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 241 is used for connecting the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives the input of the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 290, the camera module 291, the wireless communication module 260, and the like.

The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.

The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 200.

The Wireless Communication module 260 may provide a solution for Wireless Communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.

The electronic device 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.

The electronic device 200 may implement a shooting function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. In some embodiments, the electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the electronic device 200 includes N cameras, one of the N cameras is a main camera.

Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include a program storage area and a data storage area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200.

The electronic device 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the headphone interface 274, the application processor, and the like. Such as music playing, recording, etc.

Audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. Audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.

The speaker 271, also called a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 200 can play music or take hands-free calls through the speaker 271. The receiver 272, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 200 receives a call or voice information, the voice can be heard by placing the receiver 272 close to the ear. The microphone 273, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 273 by speaking close to it. The electronic device 200 may be provided with at least one microphone 273. The earphone interface 274 is used to connect wired earphones.

For sensors that the sensor module 280 may include in the electronic device 200, a depth sensor is used to obtain depth information of a scene. The pressure sensor is used for sensing a pressure signal and converting the pressure signal into an electric signal. The gyro sensor may be used to determine the motion pose of the electronic device 200. The air pressure sensor is used for measuring air pressure. The magnetic sensor includes a hall sensor. The electronic device 200 may detect the opening and closing of the flip holster using a magnetic sensor. The acceleration sensor may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The distance sensor is used for measuring distance. The proximity light sensor may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The fingerprint sensor is used for collecting fingerprints. The temperature sensor is used for detecting temperature. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 290. The ambient light sensor is used for sensing the ambient light brightness. The bone conduction sensor may acquire a vibration signal.

The keys 294 include a power key, a volume key, and the like; they may be mechanical keys or touch keys. The motor 293 may generate a vibration indication, and may be used both for vibration prompts (e.g., for incoming calls) and for touch vibration feedback. The indicator 292 may be an indicator light that can be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like. The SIM card interface 295 is used to connect a SIM card. The electronic device 200 interacts with the network through the SIM card to implement functions such as calls and data communication.

The present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device.

A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.

Fig. 3 schematically shows a flow chart of a video processing method of an exemplary embodiment of the present disclosure. Referring to fig. 3, the video processing method may include the steps of:

S32, acquiring the video to be processed, and determining an original video frame sequence to be subjected to frame interpolation in the video to be processed.

In the exemplary embodiment of the present disclosure, the video to be processed may be a video obtained by the terminal device through shooting by the camera module, or may also be a video downloaded from a network or sent by another device. In addition, the video to be processed according to the present disclosure can be determined from a video set including a plurality of videos in response to a video selection operation of a user.

According to some embodiments of the present disclosure, after obtaining a to-be-processed video, a terminal device may use each frame in the to-be-processed video as an original video frame for executing the scheme of the present disclosure, that is, the determined original video frame sequence is the obtained to-be-processed video itself.

According to other embodiments of the present disclosure, the terminal device may further perform video frame extraction on the video to be processed to obtain an original video frame according to the scheme of the present disclosure, and configure the extracted original video frame as an original video frame sequence. It should be understood that the sequence of original video frames includes at least two original video frames. In addition, the original video frame sequence may be configured according to the position (i.e., chronological order) of the extracted original video frame in the video to be processed.

For the manner of extracting video frames, first, the terminal device may extract a video clip containing a foreground object from the video to be processed. A foreground object is an object that changes from frame to frame relative to the background, e.g., a small ball rolling through the scene, whereas the background may be understood as an area of the video in which little change occurs. One possible implementation of this segment extraction is sketched below.
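As an illustration only (the patent does not prescribe how foreground detection is implemented), the following minimal sketch uses OpenCV background subtraction to locate segments containing a moving foreground object; the function name, thresholds, and the choice of the MOG2 subtractor are all assumptions made for the example.

```python
import cv2

def find_foreground_segments(video_path, min_pixels=500):
    """Return (start, end) frame-index ranges in which a moving
    foreground object is detected (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200)
    segments, start, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep only confident foreground pixels (MOG2 marks shadows as 127).
        _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(fg) > min_pixels:
            if start is None:
                start = idx                    # foreground object appears
        elif start is not None:
            segments.append((start, idx))      # foreground object disappears
            start = None
        idx += 1
    if start is not None:
        segments.append((start, idx))
    cap.release()
    return segments
```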

In addition, the foreground object targeted when the video frame is extracted may be selected by the user, for example, each object in the video is identified by an image identification technology, and the identified result is fed back to the user, and the user may select one or more of the objects as the foreground object according to which the video frame is extracted.

Next, the terminal device may determine an original video frame to be subjected to frame interpolation from the video segment containing the foreground object.

In one embodiment of the present disclosure, the terminal device may extract, from a video segment containing the foreground object, a video frame of which the foreground object meets the motion attribute requirement, and generate an original video frame sequence by using the video frame of which the foreground object meets the motion attribute requirement. Wherein the motion attribute requirement refers to meeting a preset requirement on the motion attribute, which may be configured by a user.

Still taking the ball as an example, the motion attribute requirement may be that the ball is located on the ground; in this case, video frames in which the ball is in the air are not extracted. Alternatively, the motion attribute requirement may be that the ball is in the air. These are merely examples, and the present disclosure does not limit the motion attribute requirements.
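To make the motion attribute requirement concrete, here is a hedged sketch: frames are filtered by a user-configured predicate. The `ball_on_ground` predicate, its `ball_bbox` field, and the `GROUND_Y` constant are purely hypothetical names introduced for illustration.

```python
def frames_meeting_attribute(frames, predicate):
    """Keep only frames whose foreground object satisfies the
    user-configured motion-attribute requirement (sketch)."""
    return [f for f in frames if predicate(f)]

GROUND_Y = 700  # assumed pixel row of the ground in this footage

def ball_on_ground(frame_info):
    # Hypothetical check: the bottom of the detected ball's bounding
    # box touches or passes the assumed ground line.
    x, y, w, h = frame_info["ball_bbox"]
    return y + h >= GROUND_Y
```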

In another embodiment of the present disclosure, the video segment containing the foreground object may include a first video segment and a second video segment, in which case the terminal device may extract at least one video frame from the first video segment and at least one video frame from the second video segment, and generate the original sequence of video frames using the at least one video frame extracted from the first video segment and the at least one video frame extracted from the second video segment. In such an embodiment, there is no limitation on the number and the order of arrangement of the video frames extracted from the first video segment and the video frames extracted from the second video segment in the original video frame sequence.

For example, extracting video frame a and video frame B from a first video segment and video frame C from a second video segment, the original sequence of video frames may be configured as: video frame A, video frame C, video frame B. The original video frame sequence may also be configured to: video frame C, video frame A, video frame B.

In addition, at least one video frame extracted from the first video segment may contain a foreground object having a first motion attribute, and at least one video frame extracted from the second video segment may contain a foreground object having a second motion attribute, wherein the first motion attribute is different from the second motion attribute. Still taking the example of a ball contained in a video, the first motion attribute may be that the ball is on the ground and the second motion attribute may be that the ball is in the air.

It should be noted that, in the embodiment where video frames in a video segment are combined to obtain an original video frame sequence, a video segment containing a foreground object in a video to be processed may further include a third video segment, a fourth video segment, a fifth video segment, and the like, and those skilled in the art may conceive other configurations through the above description, which, however, all fall within the scope of the present disclosure.

In the above embodiment, the original video frame sequence is generated by extracting the video segment and then extracting the video frames in the video segment, so that the step-by-step processing can optimize the system computing resource configuration. However, in other embodiments of the present disclosure, it is also possible to directly extract video frames from the video to be processed based on the motion attributes of the foreground object to generate the original video frame sequence described in the present disclosure.

Fig. 4 shows a schematic diagram of an original video frame sequence resulting from video frame decimation according to an embodiment of the present disclosure. Referring to fig. 4, the original video frame sequence may include an original video frame 1, an original video frame 2, an original video frame 3, and an original video frame 4. It can be seen that the time interval between adjacent pairs of original video frame 1, original video frame 2, original video frame 3 and original video frame 4 may be different, e.g., the time interval between original video frame 1 and original video frame 2 is smaller than the time interval between original video frame 2 and original video frame 3. In addition, it should be understood that fig. 4 is only an exemplary illustration, and that other extracted original video frames may be included in the original video frame sequence.

S34, determining the change degree of foreground objects of two preset original video frames in the original video frame sequence.

In the following description, two original video frames are assumed to represent two adjacent original video frames in the original video frame sequence. However, it should be understood that the preset two original video frames may also be any two original video frames in the sequence of original video frames or two original video frames specified by the user. In addition, in the following description, unless otherwise specified, the term "two video frames" refers to the preset two original video frames.

In an exemplary embodiment of the present disclosure, the change of the foreground object may include, but is not limited to, movement, enlargement, reduction, and the like. The degree of change of the foreground object may refer to the moving distance, the moving direction, the magnification or reduction factor, and so on. Specifically, the foreground objects of the two adjacent original video frames may each be identified to determine the degree of change, for example as sketched below.
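The patent leaves the change-degree metric open. One plausible sketch, assuming the foreground object has already been localized as a bounding box in each frame, quantifies movement and scaling as follows:

```python
import numpy as np

def change_degree(bbox_a, bbox_b):
    """Movement distance, movement direction, and scale factor of the
    foreground object between two original frames (illustrative)."""
    (xa, ya, wa, ha), (xb, yb, wb, hb) = bbox_a, bbox_b
    centre_a = np.array([xa + wa / 2.0, ya + ha / 2.0])
    centre_b = np.array([xb + wb / 2.0, yb + hb / 2.0])
    distance = float(np.linalg.norm(centre_b - centre_a))  # how far it moved
    direction = centre_b - centre_a                        # which way it moved
    scale = (wb * hb) / max(wa * ha, 1)                    # >1 enlarged, <1 reduced
    return distance, direction, scale
```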

In addition, it should be understood that the original video frame sequence may include one or more pairs of adjacent original video frames. If the original video frame sequence includes m video frames, there are m-1 time intervals between original video frames; that is, the frame interpolation process may be performed on each of the m-1 time intervals.

S36, determining the selectable time phases of the frames to be interpolated between the two original video frames according to the degree of change of the foreground objects of the two original video frames.

In an exemplary embodiment of the present disclosure, the selectable temporal phase refers to a temporal phase between two original video frames that can allow insertion of a video frame.

Typically, the selectable temporal phase is positively correlated with the degree of change of the foreground object.

First, the terminal device may determine the maximum number of insertable frames between two original video frames according to the degree of change of the foreground object of the two original video frames. A mapping relationship between the degree of change of the foreground object and the maximum number of insertable frames can be pre-constructed and stored in the terminal device, so that once the degree of change of the foreground object is determined, the maximum number of insertable frames between the two original video frames can be obtained using the mapping relationship.

The time interval between the two original video frames can then be obtained. For example, the difference in the temporal phase of two original video frames in the video to be processed may be calculated to obtain the time interval.

Next, the time interval may be divided into equal phases according to the maximum number of insertable frames, to determine the selectable time phases for frame interpolation between the two original video frames. For example, 298 time phases may be obtained, with equal time intervals between adjacent phases, as in the sketch below.
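A minimal sketch of this step, assuming a linear mapping from the change degree to the maximum insertable frame count (the patent only requires that the mapping be pre-constructed and that larger changes allow more phases; `frames_per_unit_change` is an assumed tuning constant):

```python
def selectable_phases(change, t0, t1, frames_per_unit_change=30):
    """Divide the interval [t0, t1] between two original frames into
    equally spaced candidate time phases (illustrative sketch)."""
    # Assumed mapping: more change -> more insertable frames.
    max_insertable = max(1, int(round(change * frames_per_unit_change)))
    step = (t1 - t0) / (max_insertable + 1)
    # Interior phases only; the original frames stay at t0 and t1.
    return [t0 + step * (i + 1) for i in range(max_insertable)]
```

With `max_insertable = 298`, this yields the 298 equally spaced phases of the example above.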

In addition, it is also within the contemplation of the present disclosure to configure the time phase with variable spacing according to the degree of change of the foreground object.

S38, performing frame interpolation between the two original video frames based on the selectable time phases of the frames to be interpolated between the two original video frames, to generate a processed video.

First, the number of frames to be interpolated between two original video frames can be obtained.

In the exemplary embodiment of the present disclosure, the number of frames to be interpolated between two original video frames refers to the number of frames actually interpolated, that is, the number of new video frames that will actually be inserted.

In some embodiments, the number of frames to be interpolated between two original video frames may be set by a user according to the requirement. In other embodiments, the terminal device may determine the number of frames as the number of selectable time phases obtained in step S36.

If the acquired frame number is the same as the maximum number of insertable frames between the two original video frames, a frame interpolation operation is performed at each of the selectable time phases. For example, 298 selectable time phases are obtained in step S36 and one frame is interpolated at each phase; with the two original video frames included, 300 frames of images are obtained. Within the range acceptable to the human eye, the 300 frames can be played over 10 s, with a playing interval of 33 ms per frame.

If the acquired frame number is less than the maximum number of insertable frames between the two original video frames, frames can be interpolated between the two original video frames according to a frame interpolation rule. The frame interpolation rule may include selecting, from the selectable time phases, target time phases equal in number to the acquired frame number, and performing frame interpolation based on the target time phases.

Specifically, the target time phases may be selected manually by the user. For example, if 298 selectable time phases are obtained in step S36 and the acquired frame number is 118, the first 118 of the 298 selectable time phases may be used as the time phases of the frames to be interpolated; with the two original video frames included, 120 video frames are output. The 120 frames can be played over 4 s, with a playing interval of 33 ms per frame.

It should be understood that the above configuration is merely an example, and the user may determine the target time phases themselves. In embodiments where the interpolated frames are configured at unequal intervals, an effect of unevenly paced foreground-object change can be constructed in the output video.

In addition, for the case in which the acquired frame number is less than the maximum number of insertable frames between the two original video frames, in other embodiments the frame interpolation operation may be performed at the target time phases, while for each non-target time phase among the selectable time phases, the video frame closest to that non-target time phase may be copied and configured at it. The copied video frame may be one of the two original video frames, or may be a newly inserted video frame.

Continuing the above example, 298 selectable time phases are obtained in step S36; if the acquired frame number is 118, the first 118 of the selectable time phases may be used as the time phases of the frames to be interpolated, that is, the target time phases. For each of the last 180 non-target time phases, the nearest frame (the 118th newly inserted video frame or the closest original video frame) may be copied. Owing to these repeated frames, an effect of unevenly paced foreground-object change is achieved, as in the sketch below.
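A sketch of this repeated-frame variant, where `interpolate` and `nearest_frame` are assumed callbacks supplied by the frame-interpolation engine (e.g., the MEMC procedure described later):

```python
def fill_phases(phases, n_frames, interpolate, nearest_frame):
    """Interpolate at the first n_frames target phases; duplicate the
    closest existing frame at the remaining non-target phases (sketch)."""
    output = {}
    for t in phases[:n_frames]:          # e.g. first 118 of 298 phases
        output[t] = interpolate(t)       # genuinely new in-between frame
    for t in phases[n_frames:]:          # e.g. the remaining 180 phases
        output[t] = nearest_frame(t)     # copy of the nearest frame
    return output
```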

In addition, the target time phases may be configured such that the intervals between any two adjacent phases are not completely uniform; such configurations also fall within the protection scope of the present disclosure.

Fig. 5 shows a schematic diagram of the foreground object change between two original video frames. Referring to fig. 5, the two original video frames can be denoted as a first original video frame 51 and a second original video frame 52. Compared with the first original video frame 51, the foreground object in the second original video frame 52 has moved a distance to the right. In this case, the above scheme of the present disclosure may be used to perform frame interpolation, resulting in a foreground object change process such as that shown in fig. 6. In video 60, for example, 3 video frames in which the foreground object lies between its position in the first original video frame 51 and its position in the second original video frame 52 are inserted to represent the movement of the foreground object. It should be understood that fig. 6 is merely an exemplary depiction of a video and should not be taken as a limitation of the present disclosure.

Fig. 7 shows a schematic diagram of another foreground object change between two original video frames. Referring to fig. 7, the two original video frames can be denoted as a first original video frame 71 and a second original video frame 72. The foreground object in the second original video frame 72 is larger than in the first original video frame 71, which corresponds to the foreground object being enlarged. In this case, frame interpolation may be performed using the above scheme of the present disclosure, resulting in a change process of the foreground object's size such as that shown in fig. 8, where 2 video frames whose foreground object size lies between its size in the first original video frame 71 and its size in the second original video frame 72 may be inserted to characterize the enlargement of the foreground object. It should be understood that fig. 8 is merely an exemplary depiction of a video and should not be taken as a limitation of the present disclosure.

For the frame interpolation process, methods that can be adopted by the present disclosure include, but are not limited to, frame rate conversion (FRC) methods such as MEMC (Motion Estimation and Motion Compensation), optical flow methods, and neural networks. The following description takes MEMC as an example.

Denoting the two original video frames as a first original video frame and a second original video frame, first, image blocking operations may be performed on the first original video frame and the second original video frame respectively, and the motion vector of an image block in the first original video frame relative to the second original video frame may be determined. Referring to fig. 9, this motion vector may be denoted as a forward motion vector. Likewise, the motion vector of an image block in the second original video frame relative to the first original video frame can be determined and denoted as a backward motion vector.

Additionally, the forward motion vectors may be modified, where the modification comprises at least one of filtering, weighting, and the like, to determine the final forward motion vector of each image block, as shown in fig. 10. The process is similar for embodiments utilizing backward motion vectors.

Next, according to the motion vector of an image block in the first original video frame relative to the second original video frame (i.e., the forward motion vector), and based on the time phase at which a frame is to be interpolated between the first original video frame and the second original video frame, the mapped motion vectors, relative to the first original video frame and the second original video frame, of the interpolation block corresponding to that image block in the interpolation frame image to be generated may be determined, as shown in fig. 11. The process for backward motion vectors is similar.

Then, the mapped motion vectors are used to search for the corresponding image blocks in the first original video frame and the second original video frame, an interpolation operation is performed on the found image blocks to generate the interpolated frame image, and the interpolated frame image is configured at the corresponding time phase.
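The following sketch combines the mapping and interpolation steps, assuming a time phase in (0, 1) measured from the first frame, forward motion vectors only, and a simple distance-weighted blend of the two fetched blocks; it illustrates the general MEMC idea, not the claimed procedure itself.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, mvs_fwd, phase, block=16):
    """Synthesize the interpolated frame at the given phase (0 < phase < 1).
    Each interpolation block maps back by phase * mv into frame_a and
    forward by (1 - phase) * mv into frame_b, then blends the two blocks."""
    h, w = frame_a.shape
    out = np.zeros((h, w), dtype=np.float64)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            dy, dx = mvs_fwd[by, bx]
            # Mapped motion vectors of the interpolation block toward each frame.
            ya = min(max(int(round(y - phase * dy)), 0), h - block)
            xa = min(max(int(round(x - phase * dx)), 0), w - block)
            yb = min(max(int(round(y + (1 - phase) * dy)), 0), h - block)
            xb = min(max(int(round(x + (1 - phase) * dx)), 0), w - block)
            blk_a = frame_a[ya:ya + block, xa:xa + block]
            blk_b = frame_b[yb:yb + block, xb:xb + block]
            # Weight each block by its temporal closeness to the phase.
            out[y:y + block, x:x + block] = (1 - phase) * blk_a + phase * blk_b
    return out.astype(frame_a.dtype)
```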

A video segment after the frame interpolation operation, corresponding to fig. 4, is shown in fig. 12. It should be understood that fig. 12 is merely an example of uniform frame interpolation; the present disclosure may also produce frame interpolation results in which the speed of the foreground object change appears visually non-uniform, and the present disclosure is not limited in this respect.

In addition, the method may further comprise a scheme of fusing the frame interpolation result with the original video to obtain a new video, and outputting the new video.

Specifically, first, the terminal device may obtain the positions of the two original video frames in the video to be processed, where the positions refer to the time phases of the two original video frames in the video to be processed. Next, the frame interpolation result may be fused at those positions; it should be understood that the fusion process is generally a replacement process.
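Treating the video as an in-memory list of frames (an assumption made only for illustration), the fusion/replacement step might look like this:

```python
def fuse(video_frames, start_idx, end_idx, interpolated_segment):
    """Replace the frames strictly between the two original video frames
    (kept at start_idx and end_idx) with the frame interpolation result.
    A sketch: indices stand in for the time phases in the full video."""
    return (video_frames[:start_idx + 1]
            + list(interpolated_segment)
            + video_frames[end_idx:])
```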

Fig. 13 schematically illustrates a video processing system architecture diagram with video decimation and video fusion functionality according to an exemplary embodiment of the present disclosure.

Referring to fig. 13, an original video may be input to the video extractor 131. The video extractor 131 may acquire the video to be processed, extract from it the original video frame sequence to be interpolated, and input the original video frame sequence to the video generator 132. The video generator 132 performs frame interpolation on the original video frame sequence using the above-described video processing method and inputs the frame interpolation result to the video fusion device 133. The video fusion device 133 obtains the positions of the two original video frames in the video to be processed and fuses the frame interpolation result at those positions to generate the processed video.

For example, a video clip in which the foreground does not change in the earlier portion and changes in the later portion may be input into the video generator 132. The video generator 132 may extract the first frame and the last frame of the video clip and perform the frame interpolation operation to form a new video clip in which the foreground object transitions rapidly. The frame rate of the new video clip is consistent with that of the original video clip, while its duration is shorter. The new video clip may then replace the original video clip.
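As an illustrative calculation (the figures are assumptions, not taken from the disclosure): at 30 fps, a 3-second clip spans 90 frames; keeping only its first and last frames and interpolating 13 frames between them yields a 15-frame clip that plays for 15 / 30 = 0.5 s at the unchanged frame rate, so a foreground transition that originally occupied 3 s now occupies 0.5 s.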

The video processing method is exemplified below by taking a small ball in a video as the foreground object.

First, the terminal device acquires the video to be processed, extracts from it the video segments containing the ball, extracts from those segments the video frames in which the ball is on the ground, and generates the original video frame sequence from these video frames.

Next, the degree of change (e.g., the direction and distance of movement) of the ball between two adjacent original video frames in the original video frame sequence is determined. The selectable time phases of the frame to be interpolated between the two adjacent original video frames are determined according to the degree of change of the ball, the number of frames to be interpolated as specified by the user is acquired, and the frame interpolation operation is performed, based on the selectable time phases and the frame number, by a method such as MEMC.

Then, the processed video corresponding to the video to be processed is generated using the frame interpolation result. In this way, the effect of the ball moving in the video can be obtained.

Furthermore, for the original video frame sequence, suppose the extracted video segments include a first video segment and a second video segment, where the first video segment contains only video frames in which the ball is on the ground, and the second video segment contains video frames in which the ball is in the air. In this case, using the video processing method described above, video frames from the second video segment in which the ball is in the air can be inserted into the portion corresponding to the first video segment when the original video frame sequence is generated. Thus, after frame interpolation, the video effect of the ball bouncing into the air can be achieved within the time corresponding to the first video segment.

The present disclosure is described above merely by way of example with reference to a ball; it should be understood that any scheme for generating a video based on the video processing method according to the exemplary embodiments of the present disclosure falls within the scope of the present disclosure.

It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.

Further, the present exemplary embodiment also provides a video processing apparatus.

Fig. 14 schematically shows a block diagram of a video processing apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 14, the video processing apparatus 14 according to an exemplary embodiment of the present disclosure may include a video acquisition module 141, an object change degree determination module 143, a phase determination module 145, and a frame interpolation processing module 147.

Specifically, the video acquisition module 141 may be configured to acquire a video to be processed and determine an original video frame sequence to be subjected to frame interpolation in the video to be processed; the object change degree determination module 143 may be configured to determine the degree of change of the foreground object of two preset original video frames in the original video frame sequence; the phase determination module 145 may be configured to determine, according to the degree of change of the foreground object of the two original video frames, the selectable time phases to be subjected to frame interpolation between the two original video frames; and the frame interpolation processing module 147 may be configured to perform frame interpolation between the two original video frames based on the selectable time phases of the frame to be interpolated between the two original video frames, so as to generate a processed video.

According to an exemplary embodiment of the present disclosure, the process of the video acquisition module 141 determining an original video frame sequence to be interpolated in a video to be processed may be configured to perform: extracting a video clip containing a foreground object from a video to be processed; an original video frame sequence to be interpolated is determined from a video segment containing a foreground object.

According to an exemplary embodiment of the present disclosure, the process of the video acquisition module 141 determining an original video frame sequence to be interpolated from a video segment containing a foreground object may be configured to perform: extracting a video frame of which the foreground object meets the motion attribute requirement from a video segment containing the foreground object; and generating an original video frame sequence by utilizing the video frames of which the foreground objects meet the motion attribute requirements.

According to an exemplary embodiment of the present disclosure, the video segments containing the foreground object in the video to be processed include a first video segment and a second video segment. In this case, the process of the video acquisition module 141 determining the original video frame sequence to be interpolated from the video segments containing the foreground object may be configured to perform: extracting at least one video frame from the first video segment; extracting at least one video frame from the second video segment; and generating the original video frame sequence using the at least one video frame extracted from the first video segment and the at least one video frame extracted from the second video segment.

According to an exemplary embodiment of the present disclosure, the at least one video frame extracted from the first video segment contains a foreground object having a first motion attribute, and the at least one video frame extracted from the second video segment contains a foreground object having a second motion attribute, wherein the first motion attribute is different from the second motion attribute.

According to an exemplary embodiment of the present disclosure, the phase determination module 145 may be configured to perform: determining the maximum number of insertable frames between the two original video frames according to the degree of change of the foreground objects of the two original video frames; acquiring the time interval between the two original video frames; and performing equal-phase division of the time interval between the two original video frames according to the maximum number of insertable frames, so as to determine the selectable time phases of the frame to be interpolated between the two original video frames.
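A minimal sketch of the equal-phase division, under the assumption that phases are expressed as fractions of the interval between the two original frames:

```python
def selectable_phases(max_insertable):
    """Equal-phase division: with N insertable frames, the N selectable
    time phases are i / (N + 1) for i = 1 .. N (a sketch)."""
    n = max_insertable
    return [i / (n + 1) for i in range(1, n + 1)]

print(selectable_phases(3))  # [0.25, 0.5, 0.75]
```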

According to an exemplary embodiment of the present disclosure, the frame interpolation processing module 147 may be configured to perform: acquiring the number of frames to be interpolated between the two original video frames; if the acquired frame number equals the maximum number of insertable frames between the two original video frames, performing the frame interpolation operation at each of the selectable time phases; and if the acquired frame number is less than the maximum number of insertable frames between the two original video frames, performing frame interpolation between the two original video frames according to a frame interpolation rule, wherein the frame interpolation rule comprises selecting, from the selectable time phases, target time phases equal in number to the acquired frame number and performing the frame interpolation operation based on the target time phases.

According to an exemplary embodiment of the present disclosure, the process of the frame interpolation processing module 147 performing the frame interpolation operation based on the target time phases may be configured to perform: performing the frame interpolation operation for the target time phases; and for the non-target time phases among the selectable time phases, copying the video frame closest to each non-target time phase and configuring the copied video frame at that non-target time phase.
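The following sketch illustrates this rule; evenly spacing the target phases among the selectable ones is an assumption of the example, since the text does not fix a selection strategy:

```python
def plan_interpolation(phases, requested):
    """Split the selectable phases into target phases (true interpolation)
    and non-target phases (to be filled by copying the nearest frame)."""
    if requested >= len(phases):
        return list(phases), []
    stride = len(phases) / requested   # even spacing: an assumption
    targets = [phases[int(i * stride)] for i in range(requested)]
    non_targets = [p for p in phases if p not in targets]
    return targets, non_targets

targets, rest = plan_interpolation([0.2, 0.4, 0.6, 0.8], 2)
print(targets, rest)  # [0.2, 0.6] [0.4, 0.8]
```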

According to an exemplary embodiment of the present disclosure, the two original video frames include a first original video frame and a second original video frame. In this case, the process of the frame interpolation processing module 147 performing frame interpolation between the two original video frames may be configured to perform: performing image blocking operations on the first original video frame and the second original video frame respectively, and determining the motion vector of each image block in the first original video frame relative to the second original video frame; determining, according to the motion vector of an image block in the first original video frame relative to the second original video frame and based on the time phase at which a frame is to be interpolated between the first original video frame and the second original video frame, the mapped motion vectors of the interpolation block corresponding to that image block in the interpolated frame image to be generated, relative to the first original video frame and the second original video frame; searching for the corresponding image blocks in the first original video frame and the second original video frame respectively according to the mapped motion vectors; and performing an interpolation operation on the found image blocks, generating the interpolated frame image from the result of the interpolation operation, and configuring the interpolated frame image at the corresponding time phase.

According to an exemplary embodiment of the present disclosure, the process of the frame interpolation processing module 147 generating the processed video corresponding to the video to be processed using the frame interpolation result may be configured to perform: acquiring the positions of the two original video frames in the video to be processed; and fusing the frame interpolation result at those positions.

Since each functional module of the video processing apparatus according to the embodiment of the present disclosure is the same as that in the embodiment of the method described above, it is not described herein again.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.

Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.

It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.
