Video generation method, device, terminal and storage medium

Document No.: 1357169  Publication date: 2020-07-24  Views: 12  Language: Chinese

Reading note: This technology, "Video generation method, device, terminal and storage medium", was designed and created by Liu Chunyu on 2020-04-22. Abstract: The embodiments of this application provide a video generation method, device, terminal, and storage medium. The method comprises the following steps: capturing a background image and acquiring a foreground image; displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object; and generating a target video according to the composite image. In this technical solution, a foreground image acquired from the network or locally from the terminal replaces the environment image or interface image captured during video recording, which can enrich the background of video recording and improve the diversity and flexibility of video recording.

1. A method of video generation, the method comprising:

acquiring a background image and acquiring a foreground image;

displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object;

and generating a target video according to the composite image.

2. The method of claim 1, wherein displaying the foreground image in a target window in the background image to obtain a composite image comprises:

replacing display content in a region within the target window other than the target object with the foreground image.

3. The method of claim 1, wherein displaying the foreground image in a target window in the background image to obtain a composite image comprises:

superimposing the foreground image over the display content in the target window, wherein the transparency of an overlapping area of the foreground image and the target object is a preset value.

4. The method of claim 1, wherein after acquiring the background image, the method further comprises:

marking the target object in the background image.

5. The method of claim 4, wherein said marking the target object in the background image comprises:

carrying out graying processing on the background image to obtain a grayed background image;

and carrying out image segmentation on the background image subjected to the graying processing to obtain the background image marked with the target object.

6. The method according to any one of claims 1 to 5, wherein before displaying the foreground image in a target window in the background image to obtain a composite image, the method further comprises:

setting display parameters of the target window, wherein the display parameters of the target window comprise at least one of the following items: length, width, vertex coordinates;

and determining the target window according to the display parameters of the target window.

7. A video generation apparatus, characterized in that the apparatus comprises:

an image acquisition module, configured to acquire a background image;

an image obtaining module, configured to obtain a foreground image;

an image synthesis module, configured to display the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object;

and a video generation module, configured to generate a target video according to the composite image.

8. The apparatus of claim 7, wherein the image synthesis module is configured to replace display content in a region other than the target object within the target window with the foreground image.

9. The apparatus of claim 7, wherein the image synthesis module is configured to superimpose the foreground image over the display content in the target window, wherein the transparency of an overlapping area of the foreground image and the target object is a preset value.

10. A terminal, characterized in that it comprises a processor and a memory, said memory storing a computer program that is loaded and executed by said processor to implement the video generation method according to any one of claims 1 to 6.

11. A computer-readable storage medium, in which a computer program is stored, which is loaded and executed by a processor to implement the video generation method according to any one of claims 1 to 6.

Technical Field

The embodiments of this application relate to the field of Internet technologies, and in particular, to a video generation method, device, terminal, and storage medium.

Background

With the continuous development of video technology, various video sharing platforms have emerged. A user can record a video through a terminal and upload the recorded video to a video sharing platform for other users to watch.

Disclosure of Invention

The embodiment of the application provides a video generation method, a video generation device, a terminal and a storage medium, which can enrich the background of video recording. The technical scheme is as follows:

in one aspect, an embodiment of the present application provides a video generation method, where the method includes:

acquiring a background image and acquiring a foreground image;

displaying the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object;

and generating a target video according to the composite image.

In another aspect, an embodiment of the present application provides a video generating apparatus, where the apparatus includes:

an image acquisition module, configured to acquire a background image;

an image obtaining module, configured to obtain a foreground image;

an image synthesis module, configured to display the foreground image in a target window in the background image to obtain a composite image, wherein the target window comprises a target object;

and a video generation module, configured to generate a target video according to the composite image.

In yet another aspect, a terminal is provided. The terminal comprises a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the video generation method of the above aspect.

In yet another aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the video generation method of the above aspect.

The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:

By setting a display window in the background image and displaying, in the display window, a foreground image acquired from the network or locally from the terminal together with the recording object, a composite image is obtained, and multiple frames of composite images constitute the recorded video. Because the foreground image is acquired from the network or locally from the terminal, its types are more varied and its content is richer. Compared with the related art, in which only an environment image or an interface image is captured during video recording, the technical solution provided by the embodiments of this application can enrich the background of video recording and improve the diversity and flexibility of video recording.

Drawings

FIG. 1 is a flow diagram illustrating a video generation method according to an exemplary embodiment of the present application;

FIG. 2 is a flow chart of a video generation method shown in another exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of generating a composite image, shown in one exemplary embodiment of the present application;

FIG. 4 is a block diagram of a video generation apparatus shown in another exemplary embodiment of the present application;

fig. 5 is a block diagram of a terminal according to an exemplary embodiment of the present application.

Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.

The embodiments of this application provide a video generation method: a display window is set in a background image; a foreground image acquired from the network or locally from the terminal is displayed in the display window together with the recording object to obtain a composite image; and multiple frames of composite images form the recorded video.

In the technical solution provided by the embodiments of this application, each step may be executed by a terminal. The terminal may be a smartphone, a tablet computer, a laptop computer, a desktop computer, or the like. In some embodiments, a recording application is installed in the terminal, and each step may also be executed by the recording application.

Referring to fig. 1, a flow chart of a video generation method according to an embodiment of the present application is shown. The method may comprise the steps of:

Step 101, collecting a background image.

The background image includes a target object, which represents a recording object. The recording object may be a person, an animal, an article, a virtual character, or the like, which is not limited in the embodiments of this application.

The number of background images may be one or more, which is not limited in the embodiments of this application. The terminal is provided with hardware or software having an image capture function, through which the image is captured.

In some embodiments, the terminal is configured with hardware having image capture functionality (such as a camera) through which the terminal captures background images. It should be noted that the hardware with the image capturing function may be hardware of the terminal itself, or may be hardware that is independent from the terminal and has a wired connection or a wireless connection with the terminal.

In some embodiments, the terminal is provided with software having an image capture function (such as screen recording software), through which the terminal captures background images. The background image captured by the screen recording software is the content displayed on the terminal screen.

In other embodiments, the terminal acquires images through the camera and the screen recording software at the same time, and synthesizes the images acquired by the camera and the screen recording software to obtain a background image.

Step 102, obtaining a foreground image.

The number of foreground images may be one or more, which is not limited in the embodiments of this application. In one possible implementation, the number of foreground images is one; that is, every composite image is synthesized using the same foreground image. In another possible implementation, the number of foreground images is the same as the number of background images; that is, each composite image is synthesized using a different foreground image.
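The two implementation manners above amount to a simple frame-to-foreground mapping, sketched below in Python. This is an illustrative sketch, not the embodiment's actual implementation; the name `pick_foreground` is assumed for exposition.

```python
def pick_foreground(foregrounds, frame_index):
    """Return the foreground image used to synthesize the given composite frame.

    If a single foreground image is supplied, every composite frame reuses it;
    if one foreground per background frame is supplied, frames pair up one-to-one.
    """
    if len(foregrounds) == 1:
        return foregrounds[0]        # one shared foreground for every frame
    return foregrounds[frame_index]  # one distinct foreground per frame
```

With a single foreground image, frames 0, 1, 2, ... all receive the same image; with one foreground per background frame, frame 1 receives the second foreground, and so on.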

In some embodiments, the foreground image is a frame of image in the foreground video, and the terminal acquires the foreground video first and then extracts the foreground image from the foreground video. The terminal can obtain the foreground video from a local preset storage path. The terminal can also acquire the foreground video from the network. For example, the terminal may obtain the foreground video from a server corresponding to the recording application.

The execution sequence of acquiring the background image and the foreground image is not limited in the embodiment of the application. The terminal can acquire the background image first and then acquire the foreground image, can acquire the foreground image first and then acquire the background image, and can acquire the background image and acquire the foreground image simultaneously.

When the technical scheme provided by the embodiment of the application is applied to a live broadcast scene, the terminal can acquire the foreground video and extract the foreground image from the foreground video, and then acquire the background image. When the technical scheme provided by the embodiment of the application is applied to a short video recording scene, the terminal can acquire the background image, acquire the foreground video and extract the foreground image from the foreground video.

Step 103, displaying the foreground image in a target window in the background image to obtain a composite image.

The target window includes the target object. The target window is a preset region used to display the foreground image and the target object in the background image. The number of composite images may be determined according to the number of background images; for example, the number of composite images is the same as the number of background images.

The composition of the composite image may be determined according to the size of the target window. When the size of the target window is the same as the size of the display window for displaying the background image, the composite image is composed of the foreground image and the target object. When the size of the target window is smaller than the size of the display window for displaying the background image, the composite image is composed of the foreground image, the target object, and the partial region of the background image other than the target object.

In some embodiments, step 103 may be implemented as the following sub-step: replacing the display content in the region other than the target object within the target window with the foreground image.

The terminal replaces the display content in the region other than the target object within the target window with the foreground image to obtain a composite image.

In other embodiments, step 103 may be implemented as the following sub-step: superimposing the foreground image over the display content in the target window.

The transparency of the overlapping area of the foreground image and the target object is a preset value, which can be set according to actual requirements. For example, when the preset value is 1, the overlapping area of the foreground image and the target object is completely transparent, and the target object can be displayed through the overlapping area.
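As a minimal sketch (not the embodiment's actual implementation), both sub-steps can be modeled as per-pixel blending with the single-channel mask produced in the marking step: where the mask is 1 (the target object), the background pixel is kept; where it is 0, the foreground pixel shows through. Images here are plain nested lists of gray values, and all names are illustrative assumptions.

```python
def composite(background, foreground, mask):
    """Blend the foreground into the target window while keeping the target object.

    background, foreground: 2D lists of gray values with the same dimensions.
    mask: 2D list of floats in [0, 1]; 1 marks the target object (kept opaque),
          0 marks the replaceable region, intermediate values mark object edges.
    """
    h, w = len(background), len(background[0])
    return [
        [round(mask[y][x] * background[y][x] + (1 - mask[y][x]) * foreground[y][x])
         for x in range(w)]
        for y in range(h)
    ]

bg = [[10, 10], [10, 10]]      # background frame containing the target object
fg = [[200, 200], [200, 200]]  # foreground image shown in the target window
m  = [[1.0, 0.0], [0.0, 1.0]]  # 1 keeps the recorded object, 0 shows the foreground
out = composite(bg, fg, m)
# out == [[10, 200], [200, 10]]
```

Edge pixels with fractional mask values are blended smoothly, which corresponds to the edge transparency between 0 and 1 described for the mask image.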

Step 104, generating a target video according to the composite image.

The terminal arranges the plurality of composite images frame by frame to obtain the target video.
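The frame-by-frame arrangement can be sketched as pairing each composite image with a presentation timestamp at a fixed frame rate. The frame rate and the name `to_video` are illustrative assumptions, not specified by the embodiment.

```python
def to_video(frames, fps=30):
    """Pair each composite image with its presentation timestamp in seconds.

    frames: the composite images in display order.
    fps: assumed constant frame rate of the target video.
    """
    return [(i / fps, frame) for i, frame in enumerate(frames)]

video = to_video(["frame0", "frame1", "frame2"], fps=25)
# video[2][1] == "frame2"; its timestamp is 2/25 s (0.08 s)
```

A real implementation would hand these timestamped frames to a video encoder rather than keep them in a list; this sketch only illustrates the ordering.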

To sum up, in the technical solution provided by the embodiments of this application, a display window is set in the background image, and a foreground image acquired from the network or locally from the terminal is displayed in the display window together with the recording object to obtain a composite image; multiple frames of composite images constitute the recorded video.

Please refer to fig. 2, which shows a flowchart of a video generation method provided by an embodiment of the present application. The method comprises the following steps:

Step 201, collecting a background image.

The background image includes a target object.

Referring to FIG. 3, a schematic diagram of generating a composite image according to an embodiment of the present application is shown. The background image 31 includes a target object 311.

Step 202, marking the target object in the background image.

In order to prevent the foreground image from blocking the target object when the composite image is generated, the target object needs to be marked before the composite image is generated.

In some embodiments, the terminal marks the target object in the background image by the following sub-steps:

(1) carrying out graying processing on the background image to obtain a grayed background image;

the graying processing is to convert a background image into a grayscale image, and the grayscale image is to represent an image of each pixel point by 8-bit (0-255) grayscale values. The gray values are used to represent the shades of the colors.

In the embodiment of the application, the terminal may perform graying processing on the background image by using a method such as an averaging method, a maximum-minimum averaging method, a weighted averaging method, and the like, so as to obtain a grayed background image.

(2) And carrying out image segmentation on the background image subjected to the gray processing to obtain a background image marked with the target object.

The background image with the target object marked may be a single-channel mask image, in which the transparency of the target object is 1 (completely opaque), the transparency of regions other than the target object is 0 (completely transparent), and the transparency of the edges of the target object is between 0 and 1.

When the target object is a portrait or a virtual character, the grayed background image may be subjected to image segmentation by using a human body segmentation technique. In the embodiment of the present application, the algorithm used for image segmentation on the grayed background image may be an image edge segmentation algorithm, an image threshold segmentation algorithm, a region-based segmentation algorithm, a morphological watershed algorithm, and the like, which is not limited in the embodiment of the present application.
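Of the algorithms listed, the image threshold segmentation algorithm is the simplest to sketch: pixels whose gray value exceeds a threshold are marked as the target object (mask value 1.0), and all other pixels as background (mask value 0.0). The threshold and the assumption that the object is brighter than the background are illustrative; a real segmentation would pick the threshold from the image (e.g., Otsu's method) or use one of the other listed algorithms.

```python
def threshold_segment(gray_image, threshold=128):
    """Produce a single-channel mask from a grayed background image.

    Returns 1.0 where the pixel is brighter than the threshold (treated here
    as the target object) and 0.0 elsewhere (the replaceable background region).
    """
    return [
        [1.0 if pixel > threshold else 0.0 for pixel in row]
        for row in gray_image
    ]

gray = [[30, 200], [180, 90]]
mask = threshold_segment(gray)
# mask == [[0.0, 1.0], [1.0, 0.0]]
```

The resulting mask has exactly the single-channel form described above, except that a binary threshold yields hard edges; edge transparency between 0 and 1 would come from feathering or a soft segmentation.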

Referring to fig. 3 in combination, the terminal marks a target object 311 in the background image 31.

Step 203, obtaining a foreground image.

And step 204, setting display parameters of the target window.

The display parameters of the target window include at least one of: length, width, vertex coordinates. The display parameters of the target window may be determined according to the position of the target object in the background image and the size of the target object. The display parameters of the target window may be set by default by the terminal or may be set by the user in a user-defined manner, which is not limited in the embodiment of the present application.

Specifically, the length and width of the target window may be determined according to the size of the target object. In some embodiments, the length of the target window is greater than the length of the target object and less than or equal to the length of the background image, and the width of the target window is greater than the width of the target object and less than or equal to the width of the background image. The vertex coordinates of the target window may be determined according to the position of the target object in the background image.
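The sizing constraints above can be sketched as follows: take the target object's bounding box, expand it by a margin so the window is larger than the object, and clamp it to the background image so it never exceeds the image. The margin value and all names are illustrative assumptions.

```python
def target_window(obj_box, image_size, margin=20):
    """Compute target-window display parameters from the target object.

    obj_box: (x, y, width, height) bounding box of the target object.
    image_size: (width, height) of the background image.
    Returns (x, y, width, height) of the target window: its vertex coordinates
    and its length and width, clamped to the background image.
    """
    ox, oy, ow, oh = obj_box
    img_w, img_h = image_size
    x = max(0, ox - margin)              # window vertex stays inside the image
    y = max(0, oy - margin)
    w = min(img_w - x, ow + 2 * margin)  # wider than the object, <= image width
    h = min(img_h - y, oh + 2 * margin)  # taller than the object, <= image height
    return (x, y, w, h)

win = target_window((100, 50, 200, 300), (1920, 1080))
# win == (80, 30, 240, 340)
```

A user-defined window would simply supply these four parameters directly instead of deriving them from the object's bounding box.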

Step 205, determining the target window according to the display parameters of the target window.

The terminal can uniquely determine the target window according to the display parameters of the target window.

And step 206, displaying the foreground image in the target window in the background image to obtain a composite image.

The target window includes a target object. Referring to fig. 3 in combination, the terminal displays the foreground image 32 and the target object 311 in the target window 312 in the background image 31, resulting in the composite image 33.

Step 207, generating a target video according to the composite image.

To sum up, in the technical solution provided by the embodiments of this application, a display window is set in the background image, and a foreground image acquired from the network or locally from the terminal is displayed in the display window together with the recording object to obtain a composite image; multiple frames of composite images constitute the recorded video.

In the following, embodiments of the apparatus of the present application are described, and for portions of the embodiments of the apparatus not described in detail, reference may be made to technical details disclosed in the above-mentioned method embodiments.

Referring to fig. 4, a block diagram of a video generation apparatus according to an exemplary embodiment of the present application is shown. The video generation apparatus may be implemented as all or part of the terminal by software, hardware, or a combination of both. The apparatus includes: an image acquisition module 401, an image obtaining module 402, an image synthesis module 403, and a video generation module 404.

And an image acquisition module 401, configured to acquire a background image.

An image obtaining module 402, configured to obtain a foreground image.

An image synthesis module 403, configured to display the foreground image in a target window in the background image to obtain a composite image, where the target window includes the target object.

A video generating module 404, configured to generate a target video according to the composite image.

To sum up, in the technical solution provided by the embodiments of this application, a display window is set in the background image, and a foreground image acquired from the network or locally from the terminal is displayed in the display window together with the recording object to obtain a composite image; multiple frames of composite images constitute the recorded video.

In an alternative embodiment provided based on the embodiment shown in fig. 4, the image synthesis module 403 is configured to replace the display content in the region other than the target object in the target window with the foreground image.

In an optional embodiment provided based on the embodiment shown in fig. 4, the image synthesis module 403 is configured to superimpose and display the foreground image on an upper layer of the display content in the target window; and the transparency of the overlapping area of the foreground image and the target object is a preset value.

In an optional embodiment provided based on the embodiment shown in fig. 4, the apparatus further comprises: an object tagging module (not shown in fig. 4).

And the object marking module is used for marking the target object in the background image.

Optionally, the object tagging module is configured to:

carrying out graying processing on the background image to obtain a grayed background image;

and carrying out image segmentation on the background image subjected to the graying processing to obtain the background image marked with the target object.

In an optional embodiment provided based on the embodiment shown in fig. 4, the apparatus further comprises: a window determination module (not shown in fig. 4).

A window determination module to:

setting display parameters of the target window, wherein the display parameters of the target window comprise at least one of the following items: length, width, vertex coordinates;

and determining the target window according to the display parameters of the target window.

It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.

Fig. 5 is a block diagram illustrating a terminal 500 according to an exemplary embodiment of the present disclosure. The terminal 500 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.

In general, the terminal 500 includes: a processor 501 and a memory 502.

The processor 501 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 501 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). The processor 501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in a wake-up state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen.

The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 502 is used to store a computer program, which is executed by the processor 501 to implement the video generation method provided by the method embodiments of this application.

In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, the memory 502, and the peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, a signal line, or a circuit board. Specifically, the peripheral includes: at least one of a radio frequency circuit 504, a touch display screen 505, a camera assembly 506, an audio circuit 507, a positioning component 508, and a power supply 509.

The peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The Radio Frequency circuit 504 is used for receiving and transmitting Radio Frequency (RF) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, rf circuitry 504 may also include Near Field Communication (NFC) related circuitry, which is not limited in this application.

The display screen 505 is configured to display a User Interface (UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the capability to capture touch signals on or above its surface. The touch signals may be input to the processor 501 as control signals for processing. In this case, the display screen 505 may also be configured to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, disposed on the front panel of the terminal 500. In other embodiments, there may be at least two display screens 505, each disposed on a different surface of the terminal 500 or in a folded design. In still other embodiments, the display screen 505 may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal 500. The display screen 505 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The display screen 505 may be made of materials such as a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED).

The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, a Virtual Reality (VR) shooting function, or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.

Audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 507 may also include a headphone jack.

The positioning component 508 is used to locate the current geographic location of the terminal 500 to implement navigation or a Location Based Service (LBS). The positioning component 508 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.

Power supply 509 is used to power the various components in terminal 500. The power source 509 may be alternating current, direct current, disposable or rechargeable. When power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.

In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.

The acceleration sensor 511 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 may control the touch display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used to collect motion data for games or for the user.
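The landscape/portrait decision described above can be sketched as plain logic. This is an illustrative sketch only, not part of the embodiment: the axis convention (y along the long edge of the screen) and the dominant-axis comparison rule are assumptions.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a display orientation from the gravity components (m/s^2)
    along the device's x axis (short edge) and y axis (long edge).

    Whichever axis carries more of the gravitational acceleration is
    the one pointing "down": a dominant y component means the device
    is held upright (portrait), a dominant x component means it is
    held sideways (landscape).
    """
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

In practice the processor would apply hysteresis around the 45-degree tie point so the UI does not flip rapidly when the device is held diagonally; that refinement is omitted here for brevity.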

The gyro sensor 512 may detect the body orientation and rotation angle of the terminal 500, and may cooperate with the acceleration sensor 511 to capture the user's 3D motion on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to the user's tilting operation), image stabilization during photographing, game control, and inertial navigation.

The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or in a lower layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side bezel of the terminal 500, it can detect the user's grip signal on the terminal 500, and the processor 501 performs left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed in the lower layer of the touch display screen 505, the processor 501 controls operability controls on the UI according to the user's pressure operation on the touch display screen 505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.

The fingerprint sensor 514 is used to collect the user's fingerprint. The processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as trusted, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 514 may be disposed on the front, back, or side of the terminal 500. When a physical key or a manufacturer Logo is disposed on the terminal 500, the fingerprint sensor 514 may be integrated with the physical key or the manufacturer Logo.
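The authorization rule described above, where only a trusted identity may perform sensitive operations, can be sketched as follows. The function name, the operation identifiers, and the boolean trust flag are hypothetical; the patent does not specify an API.

```python
# Sensitive operations named in the embodiment (identifiers are illustrative).
SENSITIVE_OPERATIONS = {
    "unlock_screen",
    "view_encrypted_info",
    "download_software",
    "make_payment",
    "change_settings",
}


def authorize(operation: str, identity_trusted: bool) -> bool:
    """Gate a sensitive operation on a trusted fingerprint match.

    Operations outside the sensitive set are always allowed; sensitive
    operations are allowed only when the collected fingerprint has been
    matched to a trusted identity.
    """
    if operation not in SENSITIVE_OPERATIONS:
        return True
    return identity_trusted
```

A real implementation would of course consult a secure element rather than a boolean flag; the sketch only captures the gating logic.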

The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 according to the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity collected by the optical sensor 515.
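One simple way to realize the brightness adjustment above is a clamped linear mapping from ambient light to display brightness. The mapping shape and all numeric values (lux scale, brightness floor and ceiling) are illustrative assumptions, not specified by the embodiment.

```python
def adjust_brightness(ambient_lux: float,
                      min_brightness: float = 0.1,
                      max_brightness: float = 1.0,
                      full_bright_lux: float = 400.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [min_brightness, max_brightness].

    Brightness rises linearly with ambient light and is clamped: dim
    surroundings give the floor value, anything at or above
    full_bright_lux gives the ceiling value.
    """
    fraction = max(0.0, min(ambient_lux / full_bright_lux, 1.0))
    return min_brightness + fraction * (max_brightness - min_brightness)
```

Production implementations usually add smoothing over time so that a passing shadow does not make the screen flicker; the sketch omits that.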

A proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the screen-on state to the screen-off state; when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually increases, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the screen-on state.
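The screen-state transition driven by the proximity trend can be sketched as a small state function over two consecutive distance readings. The function name and state labels are illustrative assumptions; a real driver would also debounce the readings.

```python
def next_screen_state(current: str,
                      prev_distance: float,
                      new_distance: float) -> str:
    """Decide the display state from two consecutive proximity readings.

    A shrinking distance (user approaching the front panel, e.g. the
    phone raised to the ear) turns the screen off; a growing distance
    turns it back on; an unchanged distance keeps the current state.
    """
    if new_distance < prev_distance:
        return "off"
    if new_distance > prev_distance:
        return "on"
    return current
```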

Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.

In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program, which is loaded and executed by a processor of a terminal to implement the video generation method in the above-described method embodiments.

Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, there is also provided a computer program product which, when executed, implements the video generation method provided in the above method embodiments.

It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. As used herein, the terms "first," "second," and the like, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another.

The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.
