Image processing method, image processing device, electronic equipment and storage medium

Document No.: 1890990    Publication date: 2021-11-26

Reading note: this technology, 图像处理方法、装置、电子设备和存储介质 (Image processing method, image processing device, electronic equipment and storage medium), was designed and created by 刘永劼 on 2021-07-30. Its main content is as follows: the application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method comprises: acquiring a plurality of images to be fused; performing motion amplitude detection on the images to be fused to generate a motion amplitude feature image; preprocessing the images to be fused to generate a first motion feature image and a second motion feature image; generating a target motion feature image from the motion amplitude feature image, the first motion feature image, and the second motion feature image; and fusing the images to be fused according to the target motion feature image to generate a target image. In this way, the target motion feature image can be generated dynamically from the motion amplitude feature image, improving the fusion quality of the target image.

1. An image processing method, comprising:

acquiring a plurality of images to be fused;

performing motion amplitude detection on the plurality of images to be fused to generate a motion amplitude feature image;

preprocessing the plurality of images to be fused to generate a first motion feature image and a second motion feature image;

generating a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image; and

fusing the plurality of images to be fused according to the target motion feature image to generate a target image.

2. The image processing method according to claim 1, wherein preprocessing the plurality of images to be fused to generate a first motion feature image and a second motion feature image comprises:

down-sampling each of the plurality of images to be fused to obtain sampled images of the plurality of images to be fused;

performing motion detection on the sampled images to generate the first motion feature image; and

performing motion detection on the plurality of images to be fused to generate the second motion feature image.

3. The image processing method according to claim 1, wherein there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image comprises:

fusing the plurality of motion amplitude feature images according to a first fusion strategy to generate a target motion amplitude feature image;

fusing the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fusing the plurality of second motion feature images to generate a target second motion feature image; and

processing the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image.

4. The image processing method according to claim 3, wherein processing the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image comprises:

acquiring a motion amplitude value corresponding to each pixel in the target motion amplitude feature image;

comparing the motion amplitude value corresponding to each pixel with a motion amplitude threshold to determine, among the pixels, a first pixel whose motion amplitude value is greater than the motion amplitude threshold and a second pixel whose motion amplitude value is less than or equal to the motion amplitude threshold; and

processing the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image.

5. The image processing method according to claim 4, wherein processing the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image comprises:

acquiring first position information of the first pixel and second position information of the second pixel;

extracting a first target pixel from the target first motion feature image according to the first position information, and extracting a second target pixel from the target second motion feature image according to the second position information; and

generating the target motion feature image from the first target pixel and the second target pixel.

6. The image processing method according to claim 1, wherein there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image comprises:

fusing the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fusing the plurality of second motion feature images to generate a target second motion feature image;

processing the target first motion feature image and the target second motion feature image based on each of the plurality of motion amplitude feature images to generate a plurality of third motion feature images; and

fusing the plurality of third motion feature images according to a third fusion strategy to generate the target motion feature image.

7. An image processing apparatus, characterized by comprising:

an acquisition module configured to acquire a plurality of images to be fused;

a first generation module configured to perform motion amplitude detection on the plurality of images to be fused to generate a motion amplitude feature image;

a second generation module configured to preprocess the plurality of images to be fused to generate a first motion feature image and a second motion feature image;

a third generation module configured to generate a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image; and

a fusion module configured to fuse the plurality of images to be fused according to the target motion feature image to generate a target image.

8. The image processing apparatus according to claim 7, wherein the second generation module is specifically configured to:

down-sample each of the plurality of images to be fused to obtain sampled images of the plurality of images to be fused;

perform motion detection on the sampled images to generate the first motion feature image; and

perform motion detection on the plurality of images to be fused to generate the second motion feature image.

9. The image processing apparatus according to claim 7, wherein there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and the third generation module comprises:

a first generation unit configured to fuse the plurality of motion amplitude feature images according to a first fusion strategy to generate a target motion amplitude feature image;

a second generation unit configured to fuse the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and to fuse the plurality of second motion feature images to generate a target second motion feature image; and

a processing unit configured to process the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image.

10. The image processing apparatus according to claim 9, wherein the processing unit comprises:

an acquiring subunit configured to acquire a motion amplitude value corresponding to each pixel in the target motion amplitude feature image;

a determining subunit configured to compare the motion amplitude value corresponding to each pixel with a motion amplitude threshold to determine, among the pixels, a first pixel whose motion amplitude value is greater than the motion amplitude threshold and a second pixel whose motion amplitude value is less than or equal to the motion amplitude threshold; and

a processing subunit configured to process the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image.

11. The image processing apparatus according to claim 10, wherein the processing subunit is specifically configured to:

acquire first position information of the first pixel and second position information of the second pixel;

extract a first target pixel from the target first motion feature image according to the first position information, and extract a second target pixel from the target second motion feature image according to the second position information; and

generate the target motion feature image from the first target pixel and the second target pixel.

12. The image processing apparatus according to claim 7, wherein there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and the third generation module is specifically configured to:

fuse the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fuse the plurality of second motion feature images to generate a target second motion feature image;

process the target first motion feature image and the target second motion feature image based on each of the plurality of motion amplitude feature images to generate a plurality of third motion feature images; and

fuse the plurality of third motion feature images according to a third fusion strategy to generate the target motion feature image.

13. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method according to any one of claims 1 to 6.

14. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 6.

Technical Field

The present application relates to the field of data processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.

Background

With the rapid development of video and image technology, users' expectations for the viewing experience keep rising, and High Dynamic Range (HDR) images have become mainstream. Compared with traditional Standard Dynamic Range (SDR) images, HDR images can present a wider brightness range and more colors, reproducing the real content of a video more faithfully.

HDR imaging is a common image processing technique. It typically captures multiple SDR images of the same scene at different exposures and fuses them into a single HDR image through digital image processing.

Disclosure of Invention

An embodiment of the first aspect of the present application provides an image processing method that can dynamically generate a target motion feature image from a motion amplitude feature image, thereby improving the fusion quality of the target image.

An embodiment of the second aspect of the present application provides an image processing apparatus.

An embodiment of the third aspect of the present application provides an electronic device.

An embodiment of a fourth aspect of the present application provides a computer-readable storage medium.

An embodiment of the first aspect of the present application provides an image processing method, including: acquiring a plurality of images to be fused; performing motion amplitude detection on the plurality of images to be fused to generate a motion amplitude feature image; preprocessing the plurality of images to be fused to generate a first motion feature image and a second motion feature image; generating a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image; and fusing the plurality of images to be fused according to the target motion feature image to generate a target image.

According to the image processing method of the embodiments of the present application, a plurality of images to be fused are first acquired, and motion amplitude detection is performed on them to generate a motion amplitude feature image. The images to be fused are then preprocessed to generate a first motion feature image and a second motion feature image, and a target motion feature image is generated according to the motion amplitude feature image, the first motion feature image, and the second motion feature image. Finally, the images to be fused are fused according to the target motion feature image to generate the target image. In this way, the target motion feature image can be generated dynamically from the motion amplitude feature image, improving the fusion quality of the target image.
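The flow above can be sketched end to end as a minimal pipeline. Every helper below (frame-difference amplitude detection, two-scale preprocessing, thresholded selection, weighted fusion) is a hypothetical stand-in chosen only for illustration; the embodiments do not fix these particular algorithms:

```python
import numpy as np

def detect_motion_amplitude(frames):
    # Stand-in: per-pixel motion amplitude as the absolute difference
    # between consecutive frames, averaged over all frame pairs.
    diffs = [np.abs(frames[i + 1] - frames[i]) for i in range(len(frames) - 1)]
    return np.mean(diffs, axis=0)

def preprocess(frames):
    # Stand-in: a coarse (down-sampled) and a fine motion feature image.
    coarse = detect_motion_amplitude([f[::2, ::2] for f in frames])
    fine = detect_motion_amplitude(frames)
    return coarse, fine

def build_target_motion_image(amplitude, coarse, fine, threshold=0.1):
    # Upsample the coarse map back to full resolution, then select per pixel:
    # the coarse feature where motion is large, the fine feature elsewhere.
    h, w = amplitude.shape
    coarse_full = np.kron(coarse, np.ones((2, 2)))[:h, :w]
    return np.where(amplitude > threshold, coarse_full, fine)

def fuse(frames, motion):
    # Stand-in fusion: lean on a single base frame where motion is high,
    # on the multi-frame average where the scene is static.
    weights = np.clip(motion, 0.0, 1.0)
    return weights * frames[0] + (1.0 - weights) * np.mean(frames, axis=0)

frames = [np.zeros((4, 4)), np.ones((4, 4)) * 0.2]
amp = detect_motion_amplitude(frames)
coarse, fine = preprocess(frames)
target_motion = build_target_motion_image(amp, coarse, fine)
result = fuse(frames, target_motion)
print(result.shape)
```

The pipeline mirrors the five claimed steps one-to-one; only the internals of each step are assumed.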

In addition, the image processing method according to the above-mentioned embodiment of the present application may further have the following additional technical features:

In an embodiment of the present application, preprocessing the plurality of images to be fused to generate the first motion feature image and the second motion feature image includes: down-sampling each of the plurality of images to be fused to obtain sampled images of the plurality of images to be fused; performing motion detection on the sampled images to generate the first motion feature image; and performing motion detection on the plurality of images to be fused to generate the second motion feature image.
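As a concrete illustration of this preprocessing step, the sketch below down-samples each frame by 2x2 block averaging and runs a simple frame-difference motion detector at both scales. Both the down-sampling factor and the difference-based detector are assumptions for illustration; the embodiment does not prescribe either:

```python
import numpy as np

def downsample(img, factor=2):
    # 2x2 block averaging; assumes dimensions divisible by the factor.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def motion_detect(frames):
    # Simple motion feature: max absolute difference against the first frame.
    base = frames[0]
    return np.max([np.abs(f - base) for f in frames[1:]], axis=0)

frames = [np.zeros((8, 8)), np.eye(8)]
sampled = [downsample(f) for f in frames]   # 4x4 sampled images
first_motion = motion_detect(sampled)       # coarse motion feature image
second_motion = motion_detect(frames)       # full-resolution motion feature image
print(first_motion.shape, second_motion.shape)
```

Running motion detection at the down-sampled scale is cheaper and less noise-sensitive, while the full-resolution pass preserves fine detail; the later steps choose between the two per pixel.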

In an embodiment of the present application, generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image includes: fusing the plurality of motion amplitude feature images according to a first fusion strategy to generate a target motion amplitude feature image; fusing the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fusing the plurality of second motion feature images to generate a target second motion feature image; and processing the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image.

In an embodiment of the present application, processing the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image includes: acquiring a motion amplitude value corresponding to each pixel in the target motion amplitude feature image; comparing the motion amplitude value corresponding to each pixel with a motion amplitude threshold to determine, among the pixels, a first pixel whose motion amplitude value is greater than the motion amplitude threshold and a second pixel whose motion amplitude value is less than or equal to the motion amplitude threshold; and processing the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image.

In an embodiment of the present application, processing the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image includes: acquiring first position information of the first pixel and second position information of the second pixel; extracting a first target pixel from the target first motion feature image according to the first position information, and extracting a second target pixel from the target second motion feature image according to the second position information; and generating the target motion feature image from the first target pixel and the second target pixel.
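Taken together, the comparison and extraction steps above amount to a per-pixel selection: where the motion amplitude exceeds the threshold, the pixel is taken from the target first motion feature image; elsewhere it is taken from the target second motion feature image. A sketch with illustrative values (the threshold and the arrays are assumptions):

```python
import numpy as np

amplitude = np.array([[0.9, 0.1],
                      [0.3, 0.8]])        # target motion amplitude feature image
first_feat = np.full((2, 2), 1.0)         # target first motion feature image
second_feat = np.full((2, 2), 2.0)        # target second motion feature image
threshold = 0.5

# First pixels: amplitude > threshold; second pixels: amplitude <= threshold.
first_mask = amplitude > threshold

# Extract by position and compose the target motion feature image.
target = np.where(first_mask, first_feat, second_feat)
print(target)
```

The boolean mask plays the role of the "position information": it records where first pixels and second pixels sit, and the `np.where` call performs both extractions and the final composition in one step.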

In an embodiment of the present application, generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image includes: fusing the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fusing the plurality of second motion feature images to generate a target second motion feature image; processing the target first motion feature image and the target second motion feature image based on each of the plurality of motion amplitude feature images to generate a plurality of third motion feature images; and fusing the plurality of third motion feature images according to a third fusion strategy to generate the target motion feature image.
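In this alternative, the same target first and second motion feature images are processed once per motion amplitude feature image, and the resulting third motion feature images are then fused. The sketch below assumes thresholded selection for the per-amplitude processing and element-wise averaging as the third fusion strategy; both are illustrative assumptions:

```python
import numpy as np

first_feat = np.full((2, 2), 1.0)   # target first motion feature image
second_feat = np.full((2, 2), 2.0)  # target second motion feature image
amplitude_images = [np.array([[0.9, 0.1], [0.3, 0.8]]),
                    np.array([[0.2, 0.7], [0.6, 0.1]])]
threshold = 0.5

# One third motion feature image per motion amplitude feature image.
third_images = [np.where(a > threshold, first_feat, second_feat)
                for a in amplitude_images]

# Third fusion strategy (assumed here): element-wise average.
target = np.mean(third_images, axis=0)
print(target)
```

Compared with fusing the amplitude images first, this variant defers the amplitude fusion to the very end, which lets each amplitude map contribute its own selection before averaging.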

An embodiment of the second aspect of the present application provides an image processing apparatus, including: an acquisition module configured to acquire a plurality of images to be fused; a first generation module configured to perform motion amplitude detection on the plurality of images to be fused to generate a motion amplitude feature image; a second generation module configured to preprocess the plurality of images to be fused to generate a first motion feature image and a second motion feature image; a third generation module configured to generate a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image; and a fusion module configured to fuse the plurality of images to be fused according to the target motion feature image to generate a target image.

In the image processing apparatus of the embodiments of the present application, the acquisition module first acquires a plurality of images to be fused, and the first generation module performs motion amplitude detection on them to generate a motion amplitude feature image. The second generation module then preprocesses the images to be fused to generate a first motion feature image and a second motion feature image, and the third generation module generates a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image. Finally, the fusion module fuses the images to be fused according to the target motion feature image to generate the target image. In this way, the target motion feature image can be generated dynamically from the motion amplitude feature image, improving the fusion quality of the target image.

In addition, the image processing apparatus according to the above-described embodiment of the present application may further have the following additional technical features:

In an embodiment of the present application, the second generation module is specifically configured to: down-sample each of the plurality of images to be fused to obtain sampled images of the plurality of images to be fused; perform motion detection on the sampled images to generate the first motion feature image; and perform motion detection on the plurality of images to be fused to generate the second motion feature image.

In an embodiment of the present application, there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and the third generation module includes: a first generation unit configured to fuse the plurality of motion amplitude feature images according to a first fusion strategy to generate a target motion amplitude feature image; a second generation unit configured to fuse the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and to fuse the plurality of second motion feature images to generate a target second motion feature image; and a processing unit configured to process the target first motion feature image and the target second motion feature image based on the target motion amplitude feature image to generate the target motion feature image.

In an embodiment of the present application, the processing unit includes: an acquiring subunit configured to acquire a motion amplitude value corresponding to each pixel in the target motion amplitude feature image; a determining subunit configured to compare the motion amplitude value corresponding to each pixel with a motion amplitude threshold to determine, among the pixels, a first pixel whose motion amplitude value is greater than the motion amplitude threshold and a second pixel whose motion amplitude value is less than or equal to the motion amplitude threshold; and a processing subunit configured to process the target first motion feature image and the target second motion feature image according to the first pixel and the second pixel to generate the target motion feature image.

In an embodiment of the present application, the processing subunit is specifically configured to: acquire first position information of the first pixel and second position information of the second pixel; extract a first target pixel from the target first motion feature image according to the first position information, and extract a second target pixel from the target second motion feature image according to the second position information; and generate the target motion feature image from the first target pixel and the second target pixel.

In an embodiment of the present application, there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and the third generation module is specifically configured to: fuse the plurality of first motion feature images according to a second fusion strategy to generate a target first motion feature image, and fuse the plurality of second motion feature images to generate a target second motion feature image; process the target first motion feature image and the target second motion feature image based on each of the plurality of motion amplitude feature images to generate a plurality of third motion feature images; and fuse the plurality of third motion feature images according to a third fusion strategy to generate the target motion feature image.

An embodiment of the third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method described in the foregoing embodiments of the first aspect.

In the electronic device of the embodiments of the present application, the processor executes the computer program stored in the memory, so that the target motion feature image can be generated dynamically from the motion amplitude feature image, improving the fusion quality of the target image.

An embodiment of the fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method according to the embodiments of the first aspect.

With the computer-readable storage medium of the embodiments of the present application, when the stored computer program is executed by a processor, the target motion feature image can be generated dynamically from the motion amplitude feature image, improving the fusion quality of the target image.

Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.

Drawings

The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;

FIG. 2 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;

FIG. 3 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;

FIG. 4 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;

FIG. 5 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;

FIG. 6 is a timing diagram of an image processing method according to an embodiment of the present application;

FIG. 7 is a schematic flow chart diagram of an image processing method according to another embodiment of the present application;

FIG. 8 is a block diagram of an image processing apparatus according to one embodiment of the present application; and

fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.

An image processing method, an apparatus, an electronic device, and a storage medium of the embodiments of the present application are described below with reference to the accompanying drawings.

The image processing method provided in the embodiments of the present application may be executed by an electronic device. The electronic device may be a personal computer (PC), a tablet computer, a mobile phone, a server, or the like, which is not limited herein.

In the embodiments of the present application, the electronic device may be provided with a processing component, a storage component, and a driving component. Optionally, the driving component and the processing component may be integrated. The storage component may store an operating system, application programs, or other program modules, and the processing component implements the image processing method provided in the embodiments of the present application by executing the application programs stored in the storage component.

Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present application.

The image processing method of the embodiments of the present application may also be executed by the image processing apparatus provided in the embodiments of the present application, which may be configured in an electronic device. The apparatus acquires a plurality of images to be fused, performs motion amplitude detection on them to generate a motion amplitude feature image, preprocesses them to generate a first motion feature image and a second motion feature image, generates a target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image, and fuses the plurality of images to be fused according to the target motion feature image to generate a target image, thereby improving the fusion quality of the target image.

Alternatively, the image processing method of the embodiments of the present application may be executed on a server side; the server may be a cloud server, so that the method is executed in the cloud.

As shown in fig. 1, the image processing method may include:

step 101, acquiring a plurality of images to be fused.

It should be noted that the image to be fused described in this embodiment may be an SDR (Standard Dynamic Range) image.

In the embodiments of the present application, the images to be fused may be obtained in several ways. A plurality of images to be fused may be captured by an acquisition terminal (e.g., a mobile terminal with a camera), for example by shooting continuously with the camera of the acquisition terminal. They may also be retrieved from a pre-built database of images to be fused based on a user operation, or obtained from an image providing device such as a digital versatile disc player, a video optical disc player, a server, a USB flash drive, or a smart hard disk. No limitation is imposed herein.

It should be noted that, when an acquisition terminal is used to capture the plurality of images to be fused, the captured images can be transmitted to the storage space of an electronic device such as a computer or a server for subsequent use. The storage space is not limited to a physical storage space such as a hard disk; it may also be the storage space of a network drive connected to the electronic device (cloud storage space).

Specifically, the electronic device (e.g., a computer) may acquire a plurality of images to be fused input by the acquisition terminal, or acquire a plurality of images to be fused from a constructed image database to be fused.

Step 102, carrying out motion amplitude detection on the plurality of images to be fused to generate a motion amplitude characteristic image. There may be a plurality of motion amplitude characteristic images.

It should be noted that the motion amplitude feature image described in this embodiment may be a Mask image, or may be an image corresponding to another format capable of characterizing the local motion amplitude of the image, and is not limited herein.

In the embodiment of the application, the motion amplitude detection can be performed on the plurality of images to be fused according to a preset image motion amplitude detection algorithm to generate a motion amplitude characteristic image. The preset image motion amplitude detection algorithm may be calibrated according to the actual situation; for example, it may be an optical flow algorithm, a feature point matching algorithm, or the like.

Specifically, after acquiring the plurality of images to be fused, the electronic device may perform motion amplitude detection (i.e., detection of motion amplitude magnitude) on the plurality of images to be fused according to a preset image motion amplitude detection algorithm (e.g., an optical flow algorithm) to generate a motion amplitude feature image.
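As a purely illustrative sketch (the function name is an assumption, and a simple frame difference stands in for the optical-flow or feature-point-matching algorithms the text names as candidates), pairwise motion amplitude detection might look like:

```python
import numpy as np

def motion_amplitude_mask(frame_a, frame_b):
    """Per-pixel motion amplitude between two frames, normalized to [0, 1].

    An absolute frame difference stands in here for the optical-flow
    magnitude described in the text; this is a sketch, not the patented method.
    """
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))
    peak = diff.max()
    return diff / peak if peak > 0 else diff

frame_a = np.zeros((4, 4), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[1, 1] = 200  # one "moving" pixel
mask = motion_amplitude_mask(frame_a, frame_b)
```

The resulting mask has value 1.0 at the moving pixel and 0.0 in the static regions, which is the kind of per-pixel amplitude map the later fusion steps consume.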

As a possible case, the electronic device may further perform motion amplitude detection on the multiple images to be fused according to the image motion amplitude detection model to generate a motion amplitude feature image. It should be noted that the image motion amplitude detection model described in this embodiment may be trained in advance and pre-stored in the memory space of the electronic device to facilitate the retrieval application.

The training and the generation of the image motion amplitude detection model can be executed by a related training server, the training server can be a cloud server or a host of a computer, communication connection is established between the training server and the electronic equipment capable of executing the image processing method provided by the application embodiment, and the communication connection can be at least one of wireless network connection and wired network connection. The training server can send the trained image motion amplitude detection model to the electronic equipment so that the electronic equipment can call the trained image motion amplitude detection model when needed, and therefore the computing pressure of the electronic equipment is greatly reduced.

Specifically, after acquiring a plurality of images to be fused, the electronic device may call (acquire) an image motion amplitude detection model from its own storage space, and input the plurality of images to be fused to the image motion amplitude detection model, so as to perform motion amplitude detection on the plurality of images to be fused by using the image motion amplitude detection model, thereby obtaining a motion amplitude characteristic image output by the image motion amplitude detection model.

As another possible scenario, the electronic device may also perform motion amplitude detection on the multiple images to be fused by using an image motion amplitude detection tool (e.g., a plug-in) to generate a motion amplitude feature image.

It should be noted that, in the preset image motion amplitude detection algorithm, the image motion amplitude detection model, and the motion amplitude detection tool described in this embodiment, when the motion amplitude of the plurality of images to be fused is detected, the motion amplitude can be detected based on a detection principle of pairwise comparison of the images (that is, the plurality of images to be fused are pairwise compared to detect the motion amplitude).

For example, if the number of the images to be fused is 3, after the motion amplitude detection based on the detection principle of pairwise comparison of the images, 3 motion amplitude characteristic images can be generated; assuming that 4 images to be fused are obtained, 6 motion amplitude characteristic images can be generated; assuming that 2 images to be fused are obtained, 1 motion amplitude characteristic image can be generated. That is, if the number of the images to be fused is N, the number of the obtained motion amplitude characteristic images may be N × (N − 1) / 2, where N may be a positive integer greater than or equal to 2.
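The pairwise-comparison count above can be verified in a few lines (the helper name is illustrative):

```python
from itertools import combinations

def num_pairwise_masks(n):
    # N frames compared pairwise yield N * (N - 1) / 2 amplitude masks.
    return n * (n - 1) // 2

frames = ["A", "B", "C"]
pairs = list(combinations(frames, 2))  # [("A", "B"), ("A", "C"), ("B", "C")]
```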

Step 103, preprocessing the plurality of images to be fused to generate a first motion characteristic image and a second motion characteristic image. There may be a plurality of first motion characteristic images and a plurality of second motion characteristic images.

It should be noted that, in this embodiment, the first motion characteristic image and the second motion characteristic image may also be Mask images or images corresponding to other formats capable of characterizing motion characteristics of local images, which is not limited herein.

To clearly illustrate the above embodiment, in an embodiment of the present application, as shown in fig. 2, preprocessing a plurality of images to be fused to generate a first motion characteristic image and a second motion characteristic image may include:

step 201, down-sampling a plurality of images to be fused respectively to obtain sampled images of the plurality of images to be fused.

In the embodiment of the application, the plurality of images to be fused can be respectively subjected to down-sampling according to a preset image down-sampling algorithm so as to obtain the sampled images of the plurality of images to be fused. The preset image down-sampling algorithm can be calibrated according to actual conditions.

Specifically, after generating the motion amplitude feature image, the electronic device may respectively perform downsampling on the multiple images to be fused according to a preset image downsampling algorithm to obtain sampled images of the multiple images to be fused (i.e., reduced images to be fused). The preset image down-sampling algorithm can be calibrated according to actual conditions.
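A minimal down-sampling sketch is shown below (2×2 block averaging; the actual preset algorithm is calibrated according to actual conditions and may instead use, e.g., filtered pyramid down-sampling):

```python
import numpy as np

def downsample_2x(img):
    """Halve each dimension by averaging 2x2 blocks (an assumed, minimal scheme)."""
    h = img.shape[0] // 2 * 2  # crop to even dimensions
    w = img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=np.float32).reshape(4, 4)
small = downsample_2x(img)  # shape (2, 2)
```

Detecting motion on this reduced image is what the text later calls motion detection "on a small scale".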

Step 202, performing motion detection on the sampling images of the multiple images to be fused to generate a first motion characteristic image.

In the embodiment of the application, motion detection can be performed on the sampling images of the multiple images to be fused according to a preset image motion detection algorithm to generate the first motion characteristic image. For example, the preset image motion detection algorithm may be calculated by using a previously calibrated noise model of the image and combining internal parameters such as sensor sensitivity and digital gain during image shooting. Other more complex algorithms may also be used for the calculations.

Specifically, after obtaining the above-mentioned multiple sampled images of the image to be fused, the electronic device may perform motion detection (i.e., motion detection on a small scale) on the multiple sampled images of the image to be fused according to a preset image motion detection algorithm to generate the first motion feature image.

As a possible scenario, the electronic device may further perform motion detection on the sampled images of the multiple images to be fused according to the image motion detection model to generate a first motion feature image. It should be noted that the image motion detection model described in this embodiment may be trained in advance and pre-stored in the memory space of the electronic device to facilitate the retrieval of the application.

Specifically, after obtaining the above-mentioned multiple sampled images of the image to be fused, the electronic device may call (acquire) the image motion detection model from its own storage space, and input the multiple sampled images of the image to be fused to the image motion detection model, so as to perform motion detection on the multiple sampled images of the image to be fused by using the image motion detection model, so as to obtain the first motion characteristic image output by the image motion detection model.

As another possible scenario, the electronic device may further perform motion detection on the sampled images of the plurality of images to be fused using an image motion detection tool (e.g., a plug-in) to generate a first motion feature image.

It should be noted that, when the preset image motion detection algorithm, the image motion detection model and the image motion detection tool described in this embodiment are used to perform motion detection on the sampled images of the plurality of images to be fused, motion detection may also be performed based on a detection principle that two images are compared with each other (that is, two images are compared with each other to perform motion detection on the sampled images of the plurality of images to be fused). For example, assuming that the number of the sampling images is 3, after the motion detection based on the detection principle of pairwise comparison of the images, 3 first motion feature images can be generated.

Step 203, performing motion detection on the multiple images to be fused to generate a second motion characteristic image.

In this embodiment of the application, motion detection may be performed on a plurality of images to be fused according to the preset image motion detection algorithm, the image motion detection model, or the image motion detection tool, so as to generate a second motion characteristic image.

Specifically, after obtaining the first motion characteristic image, the electronic device may perform motion detection (i.e., motion detection on a large scale) on the multiple images to be fused according to the preset image motion detection algorithm, the image motion detection model, or the image motion detection tool, so as to generate a second motion characteristic image.

And 104, generating a target motion characteristic image according to the motion amplitude characteristic image, the first motion characteristic image and the second motion characteristic image.

It should be noted that the target motion feature image described in this embodiment may also be a Mask image, or an image corresponding to another format capable of characterizing the local motion feature of the image, and is not limited herein.

In the embodiment of the present application, the first motion feature image and the second motion feature image may be processed (for example, fusion processing) based on the motion amplitude feature image to generate a target motion feature image.

To clearly illustrate the above embodiment, in an embodiment of the present application, as shown in fig. 3, the generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image includes:

step 301, fusing the plurality of motion amplitude characteristic images according to a first fusion strategy to generate a target motion amplitude characteristic image. The first fusion strategy can be calibrated according to actual conditions.

It should be noted that the first fusion strategy described in this embodiment may include performing image fusion based on the magnitude of the motion amplitude value corresponding to each pixel in the multiple motion amplitude feature images, for example, performing image fusion by using the pixel corresponding to the maximum value of the motion amplitude in the multiple motion amplitude feature images; or averaging the pixels at the same position in the multiple motion amplitude characteristic images respectively to perform image fusion, and meanwhile, respectively averaging the motion amplitude values corresponding to each pixel in the multiple motion amplitude characteristic images. Wherein the motion amplitude value may be generated at the time of the motion amplitude detection described above.

Specifically, after obtaining the plurality of motion amplitude feature images, the first motion feature image and the second motion feature image, the electronic device may first obtain a motion amplitude value corresponding to each pixel in the plurality of motion amplitude feature images, compare the motion amplitude values corresponding to pixels at the same position in the plurality of motion amplitude feature images respectively to obtain a pixel corresponding to the maximum value of the motion amplitude values, and fuse the plurality of motion amplitude feature images based on the manner to generate the target motion amplitude feature image. Or the electronic device may first obtain each pixel in the multiple motion amplitude feature images, average the pixels at the same position in the multiple motion amplitude feature images, and fuse the multiple motion amplitude feature images based on the mode to generate the target motion amplitude feature image.
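The two variants of the first fusion strategy described above (per-pixel maximum of the amplitude values, or per-pixel average) can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def fuse_masks_max(masks):
    # Variant 1: at each position keep the pixel with the largest amplitude value.
    return np.maximum.reduce(masks)

def fuse_masks_mean(masks):
    # Variant 2: average the pixels at the same position across all masks.
    return np.mean(masks, axis=0)

m1 = np.array([[0.1, 0.9], [0.4, 0.0]])
m2 = np.array([[0.5, 0.3], [0.4, 0.6]])
target_max = fuse_masks_max([m1, m2])    # [[0.5, 0.9], [0.4, 0.6]]
target_mean = fuse_masks_mean([m1, m2])  # [[0.3, 0.6], [0.4, 0.3]]
```

The same two variants serve for the second and third fusion strategies, which the text says follow the same concept.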

Step 302, fusing the plurality of first motion characteristic images according to a second fusion strategy to generate a target first motion characteristic image, and fusing the plurality of second motion characteristic images to generate a target second motion characteristic image. And the second fusion strategy can be calibrated according to actual conditions.

It should be noted that the second fusion strategy described in this embodiment may have the same concept as the first fusion strategy described above, and is not described herein again.

Specifically, after obtaining the target motion amplitude feature image, the electronic device may first obtain a motion feature value corresponding to each pixel in the multiple first motion feature images, compare the motion feature values corresponding to the pixels at the same position in the multiple first motion feature images respectively to obtain a pixel corresponding to the maximum value, and fuse the multiple first motion feature images based on the mode to generate the target first motion feature image. Or the electronic device may first obtain each pixel in the plurality of first motion characteristic images, average the pixels at the same position in the plurality of first motion characteristic images, and fuse the plurality of first motion characteristic images based on the method to generate the target first motion characteristic image.

Further, the electronic device obtains a motion feature value corresponding to each pixel in a plurality of second motion feature images, compares the motion feature values corresponding to the pixels at the same position in the plurality of second motion feature images respectively to obtain a pixel corresponding to the maximum value, and fuses the plurality of second motion feature images based on the mode to generate a target second motion feature image. Or the electronic device may first obtain each pixel in the plurality of second motion characteristic images, average the pixels at the same position in the plurality of second motion characteristic images, and fuse the plurality of second motion characteristic images based on the method to generate the target second motion characteristic image. The motion feature value may be generated when the motion feature detection is performed.

And 303, processing the target first motion characteristic image and the target second motion characteristic image based on the target motion amplitude characteristic image to generate a target motion characteristic image.

To clearly illustrate the above embodiment, in an embodiment of the present application, as shown in fig. 4, processing the target first motion characteristic image and the target second motion characteristic image based on the target motion amplitude characteristic image to generate the target motion characteristic image may include:

step 401, obtaining a motion amplitude value corresponding to each pixel in the target motion amplitude feature image.

Step 402, comparing the motion amplitude value corresponding to each pixel with a motion amplitude threshold value respectively, to determine a first pixel in each pixel which is greater than the motion amplitude threshold value and a second pixel in each pixel which is less than or equal to the motion amplitude threshold value. The motion amplitude threshold value can be calibrated according to actual conditions, and there may be a plurality of first pixels and a plurality of second pixels.

Specifically, after obtaining the target motion amplitude feature image, the target first motion feature image, and the target second motion feature image, the electronic device may first obtain a motion amplitude value corresponding to each pixel in the target motion amplitude feature image, and may compare the motion amplitude value corresponding to each pixel with a motion amplitude threshold value, respectively, to determine the pixels greater than the motion amplitude threshold value (i.e., first pixels) and the pixels less than or equal to the motion amplitude threshold value (i.e., second pixels).

And step 403, processing the target first motion characteristic image and the target second motion characteristic image according to the first pixel and the second pixel to generate a target motion characteristic image.

To clearly illustrate the above embodiment, in an embodiment of the present application, as shown in fig. 5, processing the target first motion characteristic image and the target second motion characteristic image according to the first pixel and the second pixel to generate the target motion characteristic image may include:

step 501, first position information of a first pixel is obtained, and second position information of a second pixel is obtained.

It should be noted that the position information described in this embodiment may be coordinate information of the pixel.

Step 502, extracting a first target pixel from the target first motion characteristic image according to the first position information, and extracting a second target pixel from the target second motion characteristic image according to the second position information.

Step 503, generating a target motion characteristic image according to the first target pixel and the second target pixel.

Specifically, after determining the first pixel and the second pixel described above, the electronic device may acquire coordinate information of the first pixel and the second pixel, respectively, and extract (acquire) a pixel at a position corresponding to the coordinate information of the first pixel (i.e., a first target pixel) from the target first motion characteristic image, and extract (acquire) a pixel at a position corresponding to the coordinate information of the second pixel (i.e., a second target pixel) from the target second motion characteristic image. And then the electronic equipment places (splices) the pixels extracted from the target first motion characteristic image and the target second motion characteristic image according to the corresponding coordinate positions of the pixels to generate a target motion characteristic image.
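Steps 401 to 503 amount to a per-pixel selection, which can be sketched as follows (names and the threshold value are assumptions):

```python
import numpy as np

def combine_motion_masks(target_amp, target_first, target_second, threshold=0.5):
    """Where the fused amplitude exceeds the threshold (first pixels), take the
    pixel from the target first motion mask; elsewhere (second pixels), take it
    from the target second motion mask."""
    return np.where(target_amp > threshold, target_first, target_second)

amp = np.array([[0.9, 0.1], [0.6, 0.4]])
first = np.full((2, 2), 1.0)   # small-scale motion mask
second = np.full((2, 2), 0.0)  # large-scale motion mask
target = combine_motion_masks(amp, first, second)  # [[1., 0.], [1., 0.]]
```

Because `np.where` selects by coordinate, it performs the position-information bookkeeping of steps 501-503 implicitly.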

As a possible case, if the motion amplitude feature image, the first motion feature image, and the second motion feature image obtained by the electronic device are all 1, the first motion feature image and the second motion feature image may be processed directly based on the motion amplitude feature image to generate the target motion feature image.

And 105, fusing a plurality of images to be fused according to the target motion characteristic image to generate a target image.

In the embodiment of the application, a plurality of images to be fused can be fused based on an HDR image generation algorithm according to the target motion characteristic image to generate the target image. The HDR image generation algorithm can be calibrated according to actual conditions.

Specifically, after generating the target motion characteristic image, the electronic device may fuse a plurality of images to be fused based on the HDR image generation algorithm and according to the target motion characteristic image to generate the target image. Thus, the fusion effect of the target image can be improved, and good image quality can be obtained.
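As a purely illustrative stand-in for the HDR generation step (the actual algorithm is calibrated according to actual conditions and is not specified by the text), a mask-weighted blend of two exposures shows how the target motion characteristic image can steer the fusion:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp, motion_mask):
    """In moving regions (mask near 1) favor the short exposure to suppress
    ghosting; in static regions blend toward the long exposure."""
    w = motion_mask.astype(np.float32)
    return w * short_exp + (1.0 - w) * long_exp

short_exp = np.full((2, 2), 10.0)
long_exp = np.full((2, 2), 50.0)
mask = np.array([[1.0, 0.0], [0.5, 0.0]])
hdr = fuse_exposures(short_exp, long_exp, mask)  # [[10., 50.], [30., 50.]]
```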

To make the present application more clear to those skilled in the art, fig. 6 is a timing diagram of an image processing method according to an embodiment of the present application. Referring to fig. 6, the image processing method may include:

The electronic device may perform motion amplitude detection on the SDR image A, SDR image B and SDR image C to generate 3 motion amplitude Mask images, perform downsampling on the SDR image A, SDR image B and SDR image C, and perform motion detection on the 3 sampled images obtained after the downsampling to generate 3 first motion characteristic Mask images. Then, the electronic device can also perform motion detection on the SDR image A, SDR image B and SDR image C to generate 3 second motion characteristic Mask images, and respectively fuse the 3 motion amplitude Mask images, the 3 first motion characteristic Mask images and the 3 second motion characteristic Mask images to generate a target motion amplitude Mask image, a target first motion characteristic Mask image and a target second motion characteristic Mask image. Then the electronic device can process the target first motion characteristic Mask image and the target second motion characteristic Mask image based on the target motion amplitude Mask image to generate a target motion characteristic Mask image. Finally, the electronic device can fuse the SDR image A, SDR image B and SDR image C according to the target motion characteristic Mask image to generate an HDR image.

As a possible situation, if the input SDR images in fig. 6 only include an SDR image A and an SDR image B, only 1 motion amplitude Mask image, 1 first motion characteristic Mask image, and 1 second motion characteristic Mask image may be obtained. The first motion characteristic Mask image and the second motion characteristic Mask image may be directly processed based on the motion amplitude Mask image to generate a target motion characteristic Mask image, and the SDR image A and the SDR image B may be fused according to the target motion characteristic Mask image to generate an HDR image.

Further, in an embodiment of the present application, as shown in fig. 7, the generating the target motion feature image according to the motion amplitude feature image, the first motion feature image, and the second motion feature image includes:

and 701, fusing the plurality of first motion characteristic images according to a second fusion strategy to generate a target first motion characteristic image, and fusing the plurality of second motion characteristic images to generate a target second motion characteristic image.

Specifically, after obtaining a plurality of motion amplitude characteristic images, a first motion characteristic image and a second motion characteristic image, the electronic device may first obtain a motion characteristic value corresponding to each pixel in the plurality of first motion characteristic images, compare the motion characteristic values corresponding to the pixels at the same position in the plurality of first motion characteristic images respectively to obtain a pixel corresponding to the maximum value, and fuse the plurality of first motion characteristic images based on the manner to generate the target first motion characteristic image. Or the electronic device may first obtain each pixel in the plurality of first motion characteristic images, average the pixels at the same position in the plurality of first motion characteristic images, and fuse the plurality of first motion characteristic images based on the method to generate the target first motion characteristic image.

Further, the electronic device obtains a motion feature value corresponding to each pixel in a plurality of second motion feature images, compares the motion feature values corresponding to the pixels at the same position in the plurality of second motion feature images respectively to obtain a pixel corresponding to the maximum value, and fuses the plurality of second motion feature images based on the mode to generate a target second motion feature image. Or the electronic device may first obtain each pixel in the plurality of second motion characteristic images, average the pixels at the same position in the plurality of second motion characteristic images, and fuse the plurality of second motion characteristic images based on the method to generate the target second motion characteristic image.

Step 702, based on the multiple motion amplitude characteristic images, respectively processing the target first motion characteristic image and the target second motion characteristic image to generate multiple third motion characteristic images.

And 703, fusing the plurality of third motion characteristic images according to a third fusion strategy to generate a target motion characteristic image.

It should be noted that the third fusion policy described in this embodiment may have the same concept as the first fusion policy and the second fusion policy described above, and is not described herein again.

Specifically, after obtaining the target first motion feature image and the target second motion feature image, the electronic device may respectively process the target first motion feature image and the target second motion feature image based on the multiple motion amplitude feature images to generate multiple third motion feature images, and fuse the multiple third motion feature images according to a third fusion policy to generate the target motion feature image. The specific processing method and the fusion method are described in detail above, and are not described herein again.

To sum up, according to the image processing method of the embodiment of the application, a plurality of images to be fused are obtained, the plurality of images to be fused are subjected to motion amplitude detection to generate a motion amplitude characteristic image, then the plurality of images to be fused are subjected to preprocessing to generate a first motion characteristic image and a second motion characteristic image, a target motion characteristic image is generated according to the motion amplitude characteristic image, the first motion characteristic image and the second motion characteristic image, and finally the plurality of images to be fused are fused according to the target motion characteristic image to generate the target image. Therefore, the target motion characteristic image can be dynamically generated through the motion amplitude characteristic image, and the fusion effect of the target image is improved.

FIG. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application.

The image processing apparatus can be configured in electronic equipment to acquire a plurality of images to be fused, detect the motion amplitude of the plurality of images to be fused to generate a motion amplitude characteristic image, preprocess the plurality of images to be fused to generate a first motion characteristic image and a second motion characteristic image, generate a target motion characteristic image according to the motion amplitude characteristic image, the first motion characteristic image and the second motion characteristic image, and fuse the plurality of images to be fused according to the target motion characteristic image to generate a target image, thereby improving the fusion effect of the target image.

As shown in fig. 8, the image processing apparatus 800 may include: an obtaining module 810, a first generating module 820, a second generating module 830, a third generating module 840 and a fusing module 850.

The obtaining module 810 is configured to obtain a plurality of images to be fused.

The first generating module 820 is configured to perform motion amplitude detection on a plurality of images to be fused to generate a motion amplitude feature image.

The second generating module 830 is configured to pre-process a plurality of images to be fused to generate a first motion characteristic image and a second motion characteristic image.

The third generating module 840 is configured to generate a target motion feature image according to the motion amplitude feature image, the first motion feature image and the second motion feature image.

The fusion module 850 is configured to fuse the multiple images to be fused according to the target motion feature image to generate a target image.

In an embodiment of the present application, the second generating module 830 is specifically configured to: respectively carrying out down-sampling on a plurality of images to be fused to obtain sampled images of the plurality of images to be fused; carrying out motion detection on the sampling images of the plurality of images to be fused to generate a first motion characteristic image; and carrying out motion detection on the plurality of images to be fused to generate a second motion characteristic image.

In an embodiment of the application, there may be a plurality of motion amplitude characteristic images, first motion characteristic images, and second motion characteristic images. As shown in fig. 8, the third generating module 840 may include: a first generation unit 841, a second generation unit 842, and a processing unit 843.

The first generating unit 841 is configured to fuse the multiple motion amplitude feature images according to a first fusion policy to generate a target motion amplitude feature image.

The second generating unit 842 is configured to fuse the plurality of first motion characteristic images according to the second fusion policy to generate a target first motion characteristic image, and fuse the plurality of second motion characteristic images to generate a target second motion characteristic image.

The processing unit 843 is configured to process the target first motion characteristic image and the target second motion characteristic image based on the target motion amplitude characteristic image to generate a target motion characteristic image.

In one embodiment of the present application, as shown in fig. 8, the processing unit 843 may include: an acquisition sub-unit 801, a determination sub-unit 802, and a processing sub-unit 803.

The obtaining subunit 801 is configured to obtain a motion amplitude value corresponding to each pixel in the target motion amplitude feature image.

The determining subunit 802 is configured to compare the motion amplitude value corresponding to each pixel with the motion amplitude threshold value, respectively, to determine a first pixel in each pixel that is greater than the motion amplitude threshold value, and a second pixel in each pixel that is less than or equal to the motion amplitude threshold value.

The processing sub-unit 803 is configured to process the target first motion characteristic image and the target second motion characteristic image according to the first pixel and the second pixel to generate a target motion characteristic image.

In an embodiment of the present application, the processing subunit 803 is specifically configured to: acquiring first position information of a first pixel and acquiring second position information of a second pixel; extracting a first target pixel from a target first motion characteristic image according to the first position information, and extracting a second target pixel from a target second motion characteristic image according to the second position information; and generating a target motion characteristic image according to the first target pixel and the second target pixel.

In an embodiment of the present application, there are a plurality of motion amplitude feature images, a plurality of first motion feature images, and a plurality of second motion feature images, and the third generating module 840 is specifically configured to: fuse the plurality of first motion characteristic images according to a second fusion strategy to generate a target first motion characteristic image, and fuse the plurality of second motion characteristic images to generate a target second motion characteristic image; process the target first motion characteristic image and the target second motion characteristic image based on each of the plurality of motion amplitude characteristic images to generate a plurality of third motion characteristic images; and fuse the plurality of third motion characteristic images according to a third fusion strategy to generate the target motion characteristic image.
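The three-step flow of this multi-image embodiment can be sketched as below. The second and third fusion strategies are not specified in this section, so plain averaging stands in for both here; the function name and the use of a single shared threshold are likewise illustrative assumptions.

```python
import numpy as np

def fuse_multi(amplitude_maps, first_feats, second_feats, threshold):
    """Sketch of the multi-image embodiment:
    1) fuse the first motion feature images into one target first image,
       and the second motion feature images into one target second image
       (averaging stands in for the second fusion strategy);
    2) for each motion amplitude map, select per pixel between the two
       target images to form a third motion feature image;
    3) fuse the third motion feature images into the target motion
       feature image (averaging stands in for the third fusion strategy)."""
    target_first = np.mean(first_feats, axis=0)
    target_second = np.mean(second_feats, axis=0)
    third_feats = [np.where(amp > threshold, target_first, target_second)
                   for amp in amplitude_maps]
    return np.mean(third_feats, axis=0)
```

Step 2 reuses the same threshold-based pixel selection described for the processing unit 843, applied once per motion amplitude map.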

It should be noted that, for details not disclosed in the image processing apparatus of the embodiments of the present application, reference is made to the details disclosed in the image processing method of the embodiments of the present application, which are not repeated here.

To sum up, in the image processing apparatus according to the embodiments of the present application, the obtaining module first obtains a plurality of images to be fused; the first generating module performs motion amplitude detection on the plurality of images to be fused to generate a motion amplitude characteristic image; the second generating module preprocesses the plurality of images to be fused to generate a first motion characteristic image and a second motion characteristic image; the third generating module generates a target motion characteristic image according to the motion amplitude characteristic image, the first motion characteristic image, and the second motion characteristic image; and finally the fusing module fuses the plurality of images to be fused according to the target motion characteristic image to generate the target image. Therefore, the target motion characteristic image can be dynamically generated through the motion amplitude characteristic image, thereby improving the fusion effect of the target image.

In order to implement the foregoing embodiments, as shown in fig. 9, the present application further proposes an electronic device 900, which includes a memory 910, a processor 920 and a computer program stored in the memory 910 and executable on the processor 920, wherein the processor 920 executes the program to implement the image processing method proposed in the foregoing embodiments of the present application.

In the electronic device of the embodiments of the present application, the processor executes the computer program stored in the memory, so that the target motion characteristic image can be dynamically generated through the motion amplitude characteristic image, thereby improving the fusion effect of the target image.

In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method proposed by the foregoing embodiments of the present application.

The computer-readable storage medium of the embodiments of the present application stores a computer program which, when executed by a processor, can dynamically generate the target motion characteristic image through the motion amplitude characteristic image, thereby improving the fusion effect of the target image.

In the description of the present specification, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.

In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the above terms are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this specification, as well as the features of different embodiments or examples, can be combined by one skilled in the art without contradiction.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
