Video acquisition method and electronic equipment

Document No.: 245208 | Publication date: 2021-11-12

Reading note: this technology, "Video acquisition method and electronic equipment" (一种视频获取方法及电子设备), was designed and created by 韩俊宁 and 肖荣彬 on 2021-08-23. Its main content is as follows: the application discloses a video acquisition method and an electronic device, comprising: invoking a group of first image acquisition devices and a group of sound acquisition devices based on a camera mode; acquiring first image data of a first space in real time through the first image acquisition device, and acquiring audio data covering a spatial environment including the first space in real time through the sound acquisition device; processing the audio data in real time based on an audio processing engine; determining, by the audio processing engine, that the audio data indicates the presence of a target sound source in a second space, and invoking a second image acquisition device; and acquiring second image data of a second space, which is different from the first space, in real time through the second image acquisition device. Based on the audio data collected by the sound acquisition device, the video acquisition method of the application can automatically invoke the second image acquisition device so that it acquires the second image data of the second space; the user does not need to open the second image acquisition device manually, which saves time and labor and is convenient and fast.

1. A method of video acquisition, the method comprising:

invoking a group of first image acquisition devices and a group of sound acquisition devices based on a camera mode;

acquiring first image data of a first space in real time through the first image acquisition device, and acquiring audio data covering a spatial environment including the first space in real time through the sound acquisition device;

processing the audio data in real time based on an audio processing engine;

determining, by the audio processing engine, that the audio data indicates the presence of a target sound source in a second space, and invoking a second image acquisition device;

acquiring second image data of a second space in real time through the second image acquisition device, wherein the second space is different from the first space.

2. The video acquisition method of claim 1, further comprising:

displaying the first image data in real time;

displaying, in an overlaid manner, the second image data acquired in real time by the second image acquisition device.

3. The video acquisition method of claim 2, further comprising:

determining, by the audio processing engine, that the audio data indicates that the target sound source persists in the second space, and maintaining the real-time acquisition of the second image data of the second space by the second image acquisition device.

4. The video acquisition method of claim 3, further comprising:

determining, by the audio processing engine, that the audio data indicates that the target sound source has been absent from the second space for a preset time period, and stopping the real-time acquisition of the second image data of the second space by the second image acquisition device.

5. The video acquisition method of claim 1, wherein the processing the audio data in real time based on an audio processing engine comprises:

processing the audio data of the spatial environment through a positioning module to obtain the sound sources in the spatial environment;

determining a target sound source based on position information of the sound sources in the spatial environment, wherein the target sound source belongs to the second space and does not belong to the first space.

6. The video acquisition method of claim 5, wherein the determining a target sound source based on the position information of the sound sources in the spatial environment comprises:

determining a target sound source based on the position information and the sound parameter information of the sound sources in the spatial environment.

7. An electronic device, comprising:

a first invoking module configured to invoke a group of first image acquisition devices and a group of sound acquisition devices based on a camera mode;

a first acquisition module configured to acquire first image data of a first space in real time by the first image acquisition device and acquire audio data covering a spatial environment including the first space in real time by the sound acquisition device;

a processing module configured to process the audio data in real time based on an audio processing engine;

a second invoking module configured to invoke a second image acquisition device when the audio processing engine determines that the audio data indicates the presence of a target sound source in a second space;

a second acquisition module configured to acquire second image data of a second space in real time by the second image acquisition device, the second space being different from the first space.

8. The electronic device of claim 7, further comprising:

a first display module configured to display the first image data in real time;

a second display module configured to display, in an overlaid manner, the second image data acquired in real time by the second image acquisition device.

9. The electronic device of claim 8, further comprising:

a maintaining module configured to maintain the real-time acquisition of the second image data of the second space by the second image acquisition device when the audio processing engine determines that the audio data indicates that the target sound source persists in the second space.

10. The electronic device of claim 9, further comprising:

a closing module configured to stop the real-time acquisition of the second image data of the second space by the second image acquisition device when the audio processing engine determines that the audio data indicates that the target sound source has been absent from the second space for a preset time period.

Technical Field

The present disclosure relates to the field of video acquisition technologies, and in particular, to a video acquisition method and an electronic device.

Background

With the rise of video capture and sharing, more and more people share self-recorded videos on application software. The current capture modes include the following: 1. recording with a rear camera to obtain a video; 2. recording with a front camera to obtain a video; 3. recording with the front and rear cameras simultaneously to obtain a video.

When video is obtained in these recording modes, if, while recording with the rear camera, images in the space corresponding to the front camera need to be recorded, the rear camera has to be turned off manually and the front camera then turned on manually to record. As a result the video is not continuous and the user has to perform subsequent processing such as editing; moreover, manually switching between the front and rear cameras makes the operation cumbersome.

Disclosure of Invention

An object of the embodiments of the application is to provide a video acquisition method and an electronic device that can automatically control a second image acquisition device to acquire second image data of a second space while a first image acquisition device acquires first image data of a first space, without manual operation, thereby saving time and labor.

In a first aspect, an embodiment of the present application provides a video acquisition method, including:

invoking a group of first image acquisition devices and a group of sound acquisition devices based on a camera mode;

acquiring first image data of a first space in real time through the first image acquisition device, and acquiring audio data covering a spatial environment including the first space in real time through the sound acquisition device;

processing the audio data in real time based on an audio processing engine;

determining, by the audio processing engine, that the audio data indicates the presence of a target sound source in a second space, and invoking a second image acquisition device;

acquiring second image data of a second space in real time through the second image acquisition device, wherein the second space is different from the first space.

In one possible implementation, the video acquisition method further includes:

displaying the first image data in real time;

displaying, in an overlaid manner, the second image data acquired in real time by the second image acquisition device.

In one possible implementation, the video acquisition method further includes:

determining, by the audio processing engine, that the audio data indicates that the target sound source persists in the second space, and maintaining the real-time acquisition of the second image data of the second space by the second image acquisition device.

In one possible implementation, the video acquisition method further includes:

determining, by the audio processing engine, that the audio data indicates that the target sound source has been absent from the second space for a preset time period, and stopping the real-time acquisition of the second image data of the second space by the second image acquisition device.

In one possible implementation, the processing the audio data in real time based on an audio processing engine includes:

processing the audio data of the spatial environment through a positioning module to obtain the sound sources in the spatial environment;

determining a target sound source based on position information of the sound sources in the spatial environment, wherein the target sound source belongs to the second space and does not belong to the first space.

In one possible implementation, the determining a target sound source based on the position information of the sound sources in the spatial environment includes:

determining a target sound source based on the position information and the sound parameter information of the sound sources in the spatial environment.

In a second aspect, an embodiment of the present application further provides an electronic device, including:

a first invoking module configured to invoke a group of first image acquisition devices and a group of sound acquisition devices based on a camera mode;

a first acquisition module configured to acquire first image data of a first space in real time by the first image acquisition device and acquire audio data covering a spatial environment including the first space in real time by the sound acquisition device;

a processing module configured to process the audio data in real time based on an audio processing engine;

a second invoking module configured to invoke a second image acquisition device when the audio processing engine determines that the audio data indicates the presence of a target sound source in a second space;

a second acquisition module configured to acquire second image data of a second space in real time by the second image acquisition device, the second space being different from the first space.

In one possible implementation, the electronic device further includes:

a first display module configured to display the first image data in real time;

a second display module configured to display, in an overlaid manner, the second image data acquired in real time by the second image acquisition device.

In one possible implementation, the electronic device further includes:

a closing module configured to stop the real-time acquisition of the second image data of the second space by the second image acquisition device when the audio processing engine determines that the audio data indicates that the target sound source has been absent from the second space for a preset time period.

The video acquisition method of the embodiments of the application acquires audio data in real time through the sound acquisition device and processes it. When the audio data indicates that a target sound source is present in the second space, the second image acquisition device is invoked automatically so that it acquires second image data of the second space. In other words, during recording with the rear camera, the front camera is opened automatically once the audio data indicates that the user's voice is in the space covered by the front camera; the user does not need to open the second image acquisition device (i.e., the front camera) manually, which saves time and labor and is convenient and fast.

Drawings

In order to illustrate the technical solutions of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without inventive effort.

Fig. 1 shows a flow chart of a video acquisition method provided by the present application;

fig. 2 is a flowchart illustrating real-time processing of audio data based on an audio processing engine in a video acquisition method provided by the present application;

FIG. 3 is a flow chart illustrating displaying first image data and second image data in a video capture method provided by the present application;

fig. 4 is a flowchart illustrating a method for determining whether to turn off a second image capturing device in a video capturing method provided by the present application;

fig. 5 shows a schematic structural diagram of an electronic device provided in the present application.

Detailed Description

Various aspects and features of the present application are described herein with reference to the drawings.

It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.

These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.

It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.

The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.

Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.

The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.

For the understanding of the present application, the video acquisition method provided in the present application is first described in detail. In practical applications, the execution subject of the video acquisition method in the embodiments of the present application may be a server, a processor, or the like; for convenience of illustration, the processor is used in the description below. Fig. 1 shows a flowchart of a video acquisition method provided in the embodiments of the present application, whose specific steps include:

and S101, calling a group of first image acquisition devices and a group of sound acquisition devices based on the shooting mode.

In a specific implementation, the camera mode of the electronic device includes at least a first mode and a second mode, which are applied in different scenes; in both the first mode and the second mode, the electronic device can invoke a group of first image acquisition devices as well as other image acquisition devices to acquire image data. Of course, the electronic device may also provide other camera modes according to actual requirements, which is not specifically limited in the embodiments of the application.

The group of first image acquisition devices may comprise one first image acquisition device or a plurality of first image acquisition devices. When there are multiple first image acquisition devices, their corresponding acquisition spaces may be the same, or may be different and non-overlapping.

Furthermore, a group of sound acquisition devices is provided on the electronic device and can be invoked according to the camera mode. There are at least three sound acquisition devices, which are used to acquire audio data of the spatial environment in which the electronic device is currently located.

S102, acquiring first image data of a first space in real time through the first image acquisition device, and acquiring audio data covering a spatial environment including the first space in real time through the sound acquisition device.

After the current camera mode of the electronic device is determined, the first image acquisition device and the sound acquisition device available in that mode are identified, and the first image acquisition device is invoked to acquire first image data of a first space in real time. The first space is determined by the acquisition angle and acquisition distance of the first image acquisition device; the maximum acquisition angle and maximum acquisition distance are determined by the attribute parameters of the first image acquisition device, but in a specific implementation the actual acquisition angle and distance can be adjusted according to actual requirements. The first image data includes the picture corresponding to the first space, picture position information, and the like, where the picture position information may include the distance and/or angle of some picture elements relative to the first image acquisition device.

Meanwhile, the sound acquisition device is invoked to acquire, in real time, audio data covering a spatial environment that includes the first space; here, the acquisition distance of the sound acquisition device may be set to be the same as that of the first image acquisition device. In practical applications, the audio data acquired by the sound acquisition device is the audio data of the spatial environment of the current space in which the electronic device is located; that is, the audio data includes the sound waves and sound-wave position information both inside and outside the first space, and the first space is contained in the current space.

S103, processing the audio data in real time based on the audio processing engine.

In a specific implementation, an audio processing engine is provided on the electronic device and is configured to process the audio data acquired by the sound acquisition device: for example, separating out each sound wave individually; determining the attributes of a sound wave in order to determine, from those attributes, which user the sound wave belongs to; and determining the position of a sound source in order to determine whether the sound source satisfies a preset condition, for example that its distance from the sound acquisition device is smaller than a preset threshold.

S104, determining, by the audio processing engine, that the audio data indicates the presence of the target sound source in the second space, and invoking the second image acquisition device.

In a specific implementation, a plurality of sound sources may exist in the second space, and whether each sound source is the target sound source is determined based on the position information and sound parameter information of the sound sources in the current space.

Specifically, the audio processing engine computes, from the parameters of all the sound acquisition devices, the distance and angle between each sound source and the sound acquisition devices; these parameters include the relative positional relationship between the sound acquisition devices, and the intensity and time point at which each sound acquisition device picked up the same sound source. The sound sources whose position information falls into the second space are screened out based on their distances and angles relative to the sound acquisition devices, and a voiceprint is extracted from each such sound source. Each voiceprint is then compared for similarity against a pre-stored preset voiceprint to determine whether the target sound source exists in the second space: specifically, a sound source whose similarity to the preset voiceprint is greater than or equal to a preset threshold is determined to be the target sound source.
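The screening-and-matching step above can be sketched as follows. This is an illustrative outline only, not the engine's actual implementation: the record fields, the voiceprint embeddings, and the 0.8 threshold are all assumptions made for the sketch.

```python
import numpy as np

def find_target_source(sources, preset_voiceprint, sim_threshold=0.8):
    """Pick the target sound source: among sources already localized in the
    second space, return the one whose voiceprint is most similar to the
    pre-stored preset voiceprint (similarity >= threshold), else None.

    `sources` is a list of hypothetical records:
      'in_second_space' (bool)       - position already resolved from mic geometry
      'voiceprint'      (np.ndarray) - embedding extracted from the source
    """
    best = None
    best_sim = sim_threshold  # only accept similarity >= the preset threshold
    for src in sources:
        if not src['in_second_space']:
            continue  # screen out sources that do not fall into the second space
        vp = src['voiceprint']
        # cosine similarity between candidate and preset voiceprints
        sim = float(np.dot(vp, preset_voiceprint) /
                    (np.linalg.norm(vp) * np.linalg.norm(preset_voiceprint)))
        if sim >= best_sim:
            best, best_sim = src, sim
    return best
```

Cosine similarity is used here as one common choice for comparing voiceprint embeddings; the application leaves the similarity measure unspecified.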

When the target sound source exists in the second space, the second image acquisition device is invoked.

S105, acquiring second image data of a second space in real time through the second image acquisition device, wherein the second space is different from the first space.

The second space in the embodiments of the application is different from the first space and has no overlapping portion; that is, there is no duplication between the first image data acquired by the first image acquisition device and the second image data acquired by the second image acquisition device, and the environment data of the first space and the second space together constitute the spatial environment of the current space in which the electronic device is located.

After the second image acquisition device is invoked, second image data of the second space is acquired in real time through it. Preferably, the first image acquisition device and the second image acquisition device are located on two opposite sides of the electronic device and face in opposite directions: the acquisition angle of the first image acquisition device covers the 180° corresponding to the side on which it is located, and the acquisition angle of the second image acquisition device covers the 180° corresponding to its side. For example, if the electronic device is a mobile phone, the first image acquisition device is the rear camera and the second image acquisition device is the front camera; the first image acquisition device then acquires image data of the space behind the phone and the second image acquisition device acquires image data of the space in front of it.

The presence of the target sound source in the second space indicates that the target user corresponding to the target sound source is present in the second space, and therefore the second image data of the second space acquired by the second image acquisition device includes the image data of the target user.

Further, in order to ensure that the second image acquisition device can fully capture the image data of the target user, the acquisition direction may be determined in advance from the target sound source and the relative position and relative angle between the target user and the second image acquisition device, so that the second image acquisition device is controlled to acquire the second image data along that direction; the space corresponding to the acquisition direction falls within the second space and is smaller than the second space. Alternatively, after the second image acquisition device is started and has begun acquiring second image data of the second space, pre-stored image data of the target user can be used to search the second image data for the area in which the target user is located, and the second image acquisition device can then be adjusted to further acquire data for that area.
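As a toy illustration of deriving an acquisition direction from the target sound source's position, one might compute a pan angle for the second image acquisition device. The coordinate convention (device at the origin, optical axis along +y) and the mechanical pan limit are assumptions made for this sketch, not details from the application.

```python
import math

def acquisition_direction(target_xy, max_pan_deg=60.0):
    """Compute the pan angle (degrees) at which the second image acquisition
    device should acquire so the target user stays in frame. The device sits
    at the origin, its optical axis points along +y, and the pan is clamped
    to a hypothetical mechanical limit (all illustrative assumptions)."""
    x, y = target_xy
    angle = math.degrees(math.atan2(x, y))  # 0 deg = straight ahead
    # clamp to the assumed pan range of the device
    return max(-max_pan_deg, min(max_pan_deg, angle))
```

The clamp reflects that the space covered along the acquisition direction must stay inside the second space, as the paragraph above requires.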

According to the embodiments of the application, audio data is acquired in real time through the sound acquisition device and processed. When the audio data indicates that the target sound source is present in the second space, the second image acquisition device is invoked automatically so that it acquires second image data of the second space. In other words, during recording with the rear camera, the front camera is opened automatically once the audio data indicates that the user's voice is in the space covered by the front camera; the user does not need to open the second image acquisition device (i.e., the front camera) manually, which saves time and labor and is convenient and fast.

Preferably, fig. 2 shows the steps of processing the audio data in real time based on the audio processing engine, which specifically include S201 and S202.

S201, processing the audio data of the spatial environment through a positioning module to obtain the sound sources in the spatial environment.

S202, determining a target sound source based on the position information of the sound sources in the spatial environment, the target sound source belonging to the second space and not to the first space.

The electronic device is provided with a positioning module; after the audio data is acquired, it is transmitted to the positioning module, which processes the audio data of the spatial environment to obtain the sound sources in it. The acquisition distance of the sound acquisition device can be set to be the same as the acquisition distances of the first and second image acquisition devices, or at least the same as that of the second image acquisition device, so that the presence of a target sound source in the second space can be monitored accurately. Of course, the acquisition distance of the sound acquisition device can also be set larger than that of the second image acquisition device, as long as the space the sound acquisition device covers includes the second space.

Since at least three sound acquisition devices have been specified above for the embodiments of the application, the description below uses three sound acquisition devices. Specifically, with three sound acquisition devices acquiring audio data, the position of the same sound source in the current space is determined by triangulation from the intensity with which each sound acquisition device picks it up, the position information here being the distance from the electronic device. Of course, when four or five sound acquisition devices acquire the audio data, corresponding localization algorithms exist to determine the position information of each sound source in the spatial environment from the audio data acquired by each device.
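A minimal sketch of intensity-based localization, assuming free-field inverse-square decay and a known microphone layout (the application does not specify the algorithm): intensity ratios between microphones are independent of the source's power, so a candidate position can be scored by how well its predicted ratios match the measured ones.

```python
import numpy as np

def locate_source(mic_positions, intensities, search_range=3.0, step=0.05):
    """Brute-force 2-D grid search for a sound-source position from the
    per-microphone intensities, assuming I ~ 1/r^2 (free field).
    Illustrative only; real engines use TDOA/beamforming methods."""
    mics = np.asarray(mic_positions, dtype=float)
    measured = np.asarray(intensities, dtype=float)
    measured_ratios = measured / measured[0]  # independent of source power
    best_pos, best_err = None, float('inf')
    for x in np.arange(-search_range, search_range, step):
        for y in np.arange(-search_range, search_range, step):
            d2 = np.sum((mics - [x, y]) ** 2, axis=1)  # squared distances to mics
            if np.any(d2 < 1e-6):
                continue  # candidate sits on a microphone
            predicted_ratios = d2[0] / d2  # I_i / I_0 = d_0^2 / d_i^2
            err = float(np.sum((predicted_ratios - measured_ratios) ** 2))
            if err < best_err:
                best_pos, best_err = (float(x), float(y)), err
    return best_pos
```

Note that with only three microphones, pure intensity ratios can admit a second "mirror" solution (the intersection of two Apollonius circles); a fourth measurement, or a known half-space constraint, breaks the ambiguity, which is why the sketch accepts any number of microphones.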

After the position information of each sound source in the spatial environment has been determined, the sound sources that belong to the second space and not to the first space are screened out as target sound sources, based on the angle information and position information computed by the audio processing engine, where the angle information is the angle relative to the electronic device. This avoids problems such as a discontinuity in the first image data caused by starting the first image acquisition device again while it is already running.
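The screening by angle can be illustrated with a toy classifier. The convention that the second space spans the half-plane [0°, 180°) in front of the device, mirroring the two opposite 180° acquisition angles described earlier, is an assumption made only for this sketch.

```python
def classify_space(angle_deg):
    """Assign a sound source to the first or second space by its angle
    relative to the electronic device. Hypothetical convention: the second
    image acquisition device (front camera) covers [0, 180) degrees and the
    first (rear camera) covers the opposite half-plane."""
    a = angle_deg % 360.0  # normalize into [0, 360)
    return 'second' if a < 180.0 else 'first'
```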

The first mode and the second mode of the electronic device are explained below, respectively.

When the camera mode of the electronic device is the first mode, a user uses the electronic device in a specific space to record videos such as an online lesson or a lecture. The user places the electronic device on a stand; the rear camera of the electronic device, i.e., the first image acquisition device, is oriented opposite to the blackboard, while the front camera, i.e., the second image acquisition device, is oriented the same way as the blackboard. The acquisition space corresponding to the rear camera is then the space between the plane of the side of the electronic device on which the rear camera is located and the plane of the blackboard, while the front camera corresponds to the rest of the specific space outside that acquisition space. Once recording starts, the rear camera begins to operate. When the user starts to explain the content on the blackboard, the user must also watch the blackboard, i.e., the user faces the blackboard and stands in the other space at some distance from the electronic device, and begins to speak. When the audio processing engine of the electronic device determines that the sound acquired by the sound acquisition device is the user's voice, the front camera is started automatically so that the rear and front cameras record simultaneously; the user does not need to walk over to the electronic device to start the front camera manually, which improves the efficiency and quality of video recording.

It should be noted that if the user moves during recording, for example into the acquisition space, and makes a sound there that is not part of the explanation, the audio processing engine further needs to calculate whether the user is currently in the other space, that is, whether the user's voice originates from the other space. If it determines that the voice is in the other space, the front camera is turned on; if it determines that the voice is in the acquisition space, the front camera is not turned on. Of course, in the first mode, if there are listeners in the other space besides the user, the audio processing engine also needs to identify whether the sound collected by the sound acquisition device belongs to the user or to a listener, so as to avoid recording failures caused by falsely starting the front camera.
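The patent does not specify how the user's voice is distinguished from a listener's. A minimal sketch, assuming speaker embeddings have already been extracted by some upstream model (the functions, the enrolled voiceprint, and the 0.8 threshold are all hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_user_voice(embedding, enrolled, threshold=0.8):
    """Treat the captured sound as the user's only if its embedding is
    close enough to the user's enrolled voiceprint."""
    return cosine_similarity(embedding, enrolled) >= threshold

enrolled = [0.6, 0.8, 0.0]                     # user's enrolled voiceprint (toy vector)
print(is_user_voice([0.6, 0.79, 0.01], enrolled))  # → True  (same speaker)
print(is_user_voice([0.0, 0.1, 0.99], enrolled))   # → False (a listener)
```

Only sounds that pass this check would be allowed to trigger the front camera, which is what prevents a listener's voice from falsely starting it.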

When the camera shooting mode of the electronic device is the second mode, the user holds the electronic device to give an online introduction of a scenic spot. During this period, the rear camera, namely the first image acquisition device, keeps acquiring environmental data of the scenic spot in real time so that viewers can watch online, and the user gives a voice introduction when visiting a building or site with a historical record. Because the user moves in real time, the audio processing engine of the electronic device calculates the sound acquired by the sound acquisition device in real time to ensure that the position information of the sound source is accurate and reliable. Likewise, when the audio processing engine determines that the user's voice falls into the image acquisition space corresponding to the front camera, the front camera is opened automatically so that the rear camera and the front camera record simultaneously. The user does not need to open the front camera manually, which avoids shaking the electronic device when manually opening the front camera and thus ensures the shooting quality.

Of course, a corresponding third mode, fourth mode, and the like may also be set for other application scenarios, which is not specifically limited in the embodiments of the present application.

Further, when two or more image acquisition devices acquire image data at the same time, the image data acquired by all the image acquisition devices needs to be displayed simultaneously, and processing such as clipping and synthesis needs to be performed at a later stage. Therefore, fig. 3 shows a method for displaying an acquired video, which specifically includes the following steps:

S301, displaying the first image data in real time.

S302, displaying, in an overlaid manner, second image data acquired in real time by a second image acquisition device.

When first image data acquired by the first image acquisition device is received, the first image data is displayed in real time. The first image data in the embodiments of the present application is a picture corresponding to the first space.

When second image data acquired by the second image acquisition device is received, the second image data acquired in real time by the second image acquisition device is displayed in an overlaid manner. Specifically, the second image data is preprocessed, for example scaled and cropped, so that its size is smaller than that of the first image data and equals a preset proportion of the size of the first image data, and the second image data is then superimposed on the first image data for display. Of course, the preset proportion can be adjusted according to actual requirements.

For example, the first image data is displayed on the whole display screen of the electronic device. After the second image data is received, it is scaled, cropped, and otherwise processed so that its size is smaller than the display screen and equals a preset proportion of the display screen, and it is then overlaid on the first image data for display. The user can also adjust the preset proportion through a preset operation according to actual needs, so that the second image data meets the user's actual requirements.
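As an illustrative sketch of the overlay geometry described above (the function name, the 1/4 preset proportion, and the 16-pixel margin are assumptions, not values disclosed in the application):

```python
def overlay_geometry(screen_w, screen_h, preset_ratio=0.25, margin=16):
    """Compute where to draw the second image data over the first.

    The second image is scaled to preset_ratio of the screen in each
    dimension and anchored at the lower-right corner, matching the
    lower-right-corner example in the description.
    """
    pip_w = int(screen_w * preset_ratio)
    pip_h = int(screen_h * preset_ratio)   # keeps the screen's aspect ratio
    x = screen_w - pip_w - margin          # left edge of the overlay
    y = screen_h - pip_h - margin          # top edge of the overlay
    return x, y, pip_w, pip_h

print(overlay_geometry(1920, 1080))  # → (1424, 794, 480, 270)
```

Adjusting `preset_ratio` corresponds to the user's preset operation for changing the preset proportion.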

For example, a user introduces a scenic spot online with a mobile phone. When showing the scenic spot to viewers, the user holds the phone facing its front face; first image data of the first space is acquired through a group of first image acquisition devices, namely the rear cameras, and displayed in real time, while a group of sound acquisition devices on the phone is invoked to acquire, in real time, audio data covering the spatial environment including the first space. When the user needs to introduce the scenic spot and begins speaking, the sound acquisition devices collect audio data containing the user's voice. The audio data is processed in real time by the audio processing engine, and whether a target sound source exists in the audio data, that is, whether the user's voiceprint is present, is determined based on the relative position and relative angle between the user and the phone and on the user's voiceprint information. At this time, the first image data occupies the whole phone screen. The second image data is then identified; if it contains content other than the user, that content can be cropped away, and the data corresponding to the user is scaled to the preset proportion and displayed superimposed on the first image data. To ensure that viewers can view the sights of the scenic spot completely, the second image data is displayed in an edge area of the first image data, for example in the lower right corner of the area where the first image data is located.

The display method provided by the embodiments of the present application displays the first image data and the second image data simultaneously in real time, and the preset proportion of the second image data can be flexibly controlled, giving high flexibility and a good user experience. Moreover, displaying the first image data and the second image data in a superimposed manner improves the viewer's comfort.

Of course, after the second image acquisition device is called, the sound acquisition device can still be used to acquire audio data in real time, and whether the second image acquisition device needs to be closed can be determined in real time. Specifically, whether the second image acquisition device needs to be turned off is determined according to the method steps shown in fig. 4, which include S401 and S402.

S401, determining, through the audio processing engine, that the audio data represents that the target sound source continuously exists in the second space, and continuing to acquire the second image data of the second space in real time through the second image acquisition device.

S402, determining, through the audio processing engine, that the audio data represents that the target sound source in the second space has disappeared for a preset time period, and closing the real-time acquisition of the second image data of the second space through the second image acquisition device.

After the second image acquisition device is called, audio data covering the spatial environment including the first space and the second space continues to be acquired through the sound acquisition device, and the audio data is processed by the audio processing engine in real time. When the audio data represents that the target sound source continuously exists in the second space, the second image data of the second space continues to be acquired through the second image acquisition device; when the audio data represents that the target sound source in the second space has disappeared for the preset duration, the real-time acquisition of the second image data of the second space through the second image acquisition device is closed. This avoids the waste of resources caused by continuing to display second image data of the second space when it no longer needs to be displayed, and the degree of automation is high.
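The keep-alive and shutdown decisions of S401 and S402 can be sketched as a small state machine; the class, its method names, and the 3-second preset duration below are assumptions introduced for illustration, not part of the disclosed method:

```python
class SecondCameraController:
    """Tracks whether the second image acquisition device should run,
    based on per-frame reports of whether the target sound source is
    present in the second space."""

    def __init__(self, preset_duration=3.0):
        self.preset_duration = preset_duration  # seconds of silence tolerated (S402)
        self.running = False
        self.last_heard = None

    def on_audio_frame(self, now, target_present):
        if target_present:
            self.last_heard = now
            self.running = True                  # start or keep capturing (S401)
        elif self.running and self.last_heard is not None:
            if now - self.last_heard >= self.preset_duration:
                self.running = False             # source gone long enough: close (S402)

ctrl = SecondCameraController(preset_duration=3.0)
ctrl.on_audio_frame(0.0, True)    # user speaks: second camera runs
ctrl.on_audio_frame(1.0, False)   # brief pause: camera stays on
print(ctrl.running)               # → True
ctrl.on_audio_frame(4.0, False)   # silent for >= 3 s: camera is closed
print(ctrl.running)               # → False
```

The tolerance window is what prevents a short pause in the user's speech from flickering the second camera off and on.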

For example, taking the second mode, when the user gives an online scenic-spot introduction through the electronic device, the viewer sees both the scene of the scenic spot and the guide on the screen during the user's voice introduction; after the user finishes the introduction, the whole screen should display the scene so that the viewer can view the complete scenic spot. Therefore, the sound acquisition device of the electronic device continuously acquires, in real time, audio data covering the spatial environment including the first space and the second space, and once it is determined that no target sound source exists in the audio data, the second image acquisition device is closed and the display of the second image data on the display screen is closed at the same time, so that the viewer can view the first image data completely. No manual operation by the user is needed, which is convenient and fast and greatly improves the user experience.

A second aspect of the present application further provides an electronic device corresponding to the video acquisition method. Since the principle by which the device solves the problem is similar to that of the video acquisition method, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.

Fig. 5 shows a schematic diagram of an electronic device provided in an embodiment of the present application, which specifically includes:

a first calling module 501 configured to call a group of first image acquisition devices and a group of sound acquisition devices based on a camera shooting mode;

a first collecting module 502 configured to collect, in real time, first image data of a first space by the first image collecting device and collect, in real time, audio data covering a spatial environment including the first space by the sound collecting device;

a processing module 503 configured to process the audio data in real-time based on an audio processing engine;

a second calling module 504 configured to call a second image acquisition device when the audio processing engine determines that the audio data represents that a target sound source exists in a second space;

a second acquisition module 505 configured to acquire, in real time, second image data of a second space by the second image acquisition device, the second space being different from the first space.

In another embodiment, the electronic device further includes:

a first display module 506 configured to display the first image data in real-time;

a second display module 507 configured to display second image data acquired by the second image acquisition device in real time in an overlaid manner.

In another embodiment, the electronic device further includes:

a maintaining module 508 configured to maintain the real-time acquisition of the second image data of the second space through the second image acquisition device when the audio processing engine determines that the audio data represents that the target sound source continuously exists in the second space.

In another embodiment, the electronic device further includes:

a closing module 509 configured to close the real-time acquisition of the second image data of the second space through the second image acquisition device when the audio processing engine determines that the audio data represents that the target sound source in the second space has disappeared for a preset time period.

In another embodiment, the processing module 503 is specifically configured to:

processing the audio data of the spatial environment through a positioning module to obtain the sound sources in the spatial environment;

determining the target sound source based on the position information of the sound sources in the spatial environment, wherein the target sound source belongs to the second space and does not belong to the first space.

In another embodiment, when determining the target sound source based on the position information of the sound source in the spatial environment, the processing module 503 specifically includes:

determining the target sound source based on the position information and the sound parameter information of the sound sources in the spatial environment.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the application. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.
