Image processing method, image processing device, terminal equipment and computer medium

Document No.: 1875552    Publication date: 2021-11-23

Description: This technology, "Image processing method, image processing device, terminal equipment and computer medium", was designed and created by 贺超, 徐克勤 and 程希来 on 2021-08-24. Abstract: The embodiment of the disclosure discloses an image processing method, an image processing device, a terminal device and a computer medium. One embodiment of the method comprises: acquiring a to-be-processed image set and a similarity threshold; generating a set of pre-processed images, wherein the set of pre-processed images comprises a first pre-processed image and a second pre-processed image; generating a Hamming distance based on the first pre-processed image and the second pre-processed image; generating a Euclidean distance based on the first pre-processed image and the second pre-processed image; generating a similarity index based on the Hamming distance and the Euclidean distance; receiving deletion processing information input by a user from a target terminal device; and deleting the second to-be-processed image according to the deletion processing information in response to the similarity index being greater than the similarity threshold. According to this embodiment, the similarity of the first to-be-processed image and the second to-be-processed image is measured according to the Hamming distance and the Euclidean distance, duplicate images with high similarity can be deleted according to color information and texture information, and the level of image processing is improved.

1. An image processing method comprising:

acquiring a to-be-processed image set and a similarity threshold, wherein the to-be-processed image set comprises a first to-be-processed image and a second to-be-processed image;

generating a set of preprocessed images based on the set of images to be processed, wherein the set of preprocessed images comprises a first preprocessed image and a second preprocessed image;

generating a Hamming distance based on the first pre-processed image and the second pre-processed image;

generating a Euclidean distance based on the first pre-processed image and the second pre-processed image;

generating a similarity index based on the Hamming distance and the Euclidean distance;

receiving deletion processing information input by a user from a target terminal device;

and deleting the second image to be processed according to the deletion processing information in response to the similarity index being larger than the similarity threshold.

2. The method of claim 1, wherein the method further comprises:

and displaying the first image to be processed and the second image to be processed in response to the similarity index not being greater than the similarity threshold.

3. The method of claim 2, wherein generating a set of pre-processed images based on the set of images to be processed comprises:

normalizing the first image to be processed and the second image to be processed to obtain a first-stage image to be processed and a second-stage image to be processed, wherein the first-stage image to be processed and the second-stage image to be processed each have a size of a first number multiplied by the first number;

filtering the first-stage image to be processed and the second-stage image to be processed to obtain a first preprocessed image and a second preprocessed image;

determining a set of the first pre-processed image and the second pre-processed image as the set of pre-processed images.

4. The method of claim 3, wherein said generating a Hamming distance based on the first pre-processed image and the second pre-processed image comprises:

performing graying processing on the first preprocessed image and the second preprocessed image to obtain a first preprocessed grayscale image and a second preprocessed grayscale image;

performing discrete cosine transform on the first preprocessed grayscale image and the second preprocessed grayscale image to obtain a first preprocessed cosine image and a second preprocessed cosine image;

determining the region of the second number multiplied by the second number at the upper left corner of the first preprocessed cosine image as a first feature region;

determining the region of the second number multiplied by the second number at the upper left corner of the second preprocessed cosine image as a second feature region;

determining a first feature matrix according to the first feature region;

determining a second feature matrix according to the second feature region;

determining a distance between the first feature matrix and the second feature matrix as the Hamming distance.

5. The method of claim 4, wherein said generating a Euclidean distance based on the first pre-processed image and the second pre-processed image comprises:

for each pixel in the first pre-processed image, generating a linear spatial three-channel value for the pixel using the following equation to obtain a first linear image:

cX1 = (0.4124*cR1 + 0.3576*cG1 + 0.1805*cB1)*100
cY1 = (0.2126*cR1 + 0.7152*cG1 + 0.0722*cB1)*100
cZ1 = (0.0193*cR1 + 0.1192*cG1 + 0.9505*cB1)*100
f(t) = t^(1/3) if t > (6/29)^3, and f(t) = t/(3*(6/29)^2) + 4/29 otherwise
cL1 = 116*f(cY1/Yn) - 16
ca1 = 500*(f(cX1/Xn) - f(cY1/Yn))
cb1 = 200*(f(cY1/Yn) - f(cZ1/Zn))

wherein [cR1, cG1, cB1] are the three channel values of the pixel normalized to [0, 1], [cX1, cY1, cZ1] are the three channel values of the pixel in the second color space, t represents a variable, n is a subscript, Xn = 95.047, Yn = 100.000, Zn = 108.883, and [cL1, ca1, cb1] are the linear-space three-channel values of the pixel;

for each pixel in the second pre-processed image, generating a linear spatial three-channel value for the pixel using the following equation to obtain a second linear image:

cX2 = (0.4124*cR2 + 0.3576*cG2 + 0.1805*cB2)*100
cY2 = (0.2126*cR2 + 0.7152*cG2 + 0.0722*cB2)*100
cZ2 = (0.0193*cR2 + 0.1192*cG2 + 0.9505*cB2)*100
f(t) = t^(1/3) if t > (6/29)^3, and f(t) = t/(3*(6/29)^2) + 4/29 otherwise
cL2 = 116*f(cY2/Yn) - 16
ca2 = 500*(f(cX2/Xn) - f(cY2/Yn))
cb2 = 200*(f(cY2/Yn) - f(cZ2/Zn))

wherein [cR2, cG2, cB2] are the three channel values of the pixel normalized to [0, 1], [cX2, cY2, cZ2] are the three channel values of the pixel in the second color space, t represents a variable, n is a subscript, Xn = 95.047, Yn = 100.000, Zn = 108.883, and [cL2, ca2, cb2] are the linear-space three-channel values of the pixel;

determining a distance of the first and second linear images as the Euclidean distance.

6. The method of claim 5, wherein generating a similarity index based on the Hamming distance and the Euclidean distance comprises:

generating a similarity index based on the Hamming distance and the Euclidean distance using the following formula: s = 0.5*dist_AB′ + 0.5*dist_AB″

wherein dist_AB′ is the Hamming distance, dist_AB″ is the Euclidean distance, and s is the similarity index.

7. The method of claim 6, wherein said determining the distance of the first and second linear images as the Euclidean distance comprises:

generating a first linear vector based on the first linear image, wherein the dimension of the first linear vector is the first number multiplied by the first number multiplied by 3;

generating a second linear vector based on the second linear image, wherein the dimension of the second linear vector is the first number multiplied by the first number multiplied by 3;

generating the Euclidean distance from the first linear vector and the second linear vector using the following equation:

dist_AB″=||AL′-BL′||

wherein AL′ is the first linear vector, BL′ is the second linear vector, ||·|| denotes the vector 2-norm operation, and dist_AB″ is the Euclidean distance.

8. An image processing apparatus comprising:

an acquisition unit configured to acquire a set of images to be processed and a similarity threshold, wherein the set of images to be processed comprises a first to-be-processed image and a second to-be-processed image;

a first generating unit configured to generate a set of pre-processed images based on the set of images to be processed, wherein the set of pre-processed images includes a first pre-processed image and a second pre-processed image;

a second generation unit configured to generate a Hamming distance based on the first preprocessed image and the second preprocessed image;

a third generating unit configured to generate a Euclidean distance based on the first preprocessed image and the second preprocessed image;

a fourth generating unit configured to generate a similarity index based on the Hamming distance and the Euclidean distance;

a receiving unit configured to receive deletion processing information input by a user from a target terminal device;

a control unit configured to delete the second image to be processed according to the deletion processing information in response to the similarity index being greater than the similarity threshold.

9. A terminal device, comprising:

one or more processors;

a storage device having one or more programs stored thereon,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.

10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.

Technical Field

Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer medium.

Background

With the further development of electronic commerce and multimedia technology, the production, generation and transmission of videos and images have grown exponentially. The ever-growing volume of multimedia data places enormous pressure on the storage and retrieval of information, and eliminating redundant pictures and accurately retrieving image information from massive image collections has become one of the important problems to be solved urgently. Image similarity calculation is the core component of image retrieval and deduplication, and plays an important role in fields such as similar-picture retrieval and deduplication in electronic commerce, and traffic picture search and analysis.

However, in the process of image processing, the following technical problems often arise:

First, most current methods evaluate image similarity based only on image texture and ignore the influence of an image's color style on similarity; in the field of electronic commerce in particular, color-style similarity carries the same weight as texture similarity. Because the prior art ignores the influence of the color style of the image on similarity, the accuracy of processing similar images is poor.

Second, the multi-channel pixel values of color images in the e-commerce field are pixel values in a nonlinear space; performing similarity calculation directly on pixel values in the nonlinear space introduces distortion, which affects the accuracy of image processing.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Some embodiments of the present disclosure propose image processing methods, apparatuses, terminal devices, and computer media to solve one or more of the technical problems mentioned in the background section above.

In a first aspect, some embodiments of the present disclosure provide an image processing method, including: acquiring a to-be-processed image set and a similarity threshold; generating a set of pre-processed images, wherein the set of pre-processed images comprises a first pre-processed image and a second pre-processed image; generating a hamming distance based on the first pre-processed image and the second pre-processed image; generating a Euclidean distance based on the first preprocessed image and the second preprocessed image; generating a similarity index based on the Hamming distance and the Euclidean distance; receiving deletion processing information input by a user from a target terminal device; and deleting the second image to be processed according to the deletion processing information in response to the similarity index being larger than the similarity threshold.

In a second aspect, some embodiments of the present disclosure provide an image processing apparatus, including: an acquisition unit configured to acquire a to-be-processed image set and a similarity threshold, wherein the to-be-processed image set comprises a first to-be-processed image and a second to-be-processed image; a first generating unit configured to generate a set of pre-processed images based on the to-be-processed image set, wherein the set of pre-processed images includes a first pre-processed image and a second pre-processed image; a second generating unit configured to generate a Hamming distance based on the first pre-processed image and the second pre-processed image; a third generating unit configured to generate a Euclidean distance based on the first pre-processed image and the second pre-processed image; a fourth generating unit configured to generate a similarity index based on the Hamming distance and the Euclidean distance; a receiving unit configured to receive deletion processing information input by a user from a target terminal device; and a control unit configured to delete the second to-be-processed image according to the deletion processing information in response to the similarity index being greater than the similarity threshold.

In a third aspect, some embodiments of the present disclosure provide a terminal device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.

In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as in any one of the first aspect.

The above embodiments of the present disclosure have the following beneficial effects: according to the image processing method of some embodiments of the disclosure, the similarity between the first to-be-processed image and the second to-be-processed image can be measured using the Hamming distance and the Euclidean distance, so that duplicate images with high similarity can be deleted according to both color information and texture information, improving the level of image processing. Specifically, the inventors found that the reason for the currently poor image processing effect is the following: most current methods evaluate image similarity based only on image texture and neglect the influence of the color style of an image on similarity; in the field of electronic commerce in particular, color-style similarity carries the same weight as texture similarity. Because the prior art ignores the influence of color style on similarity, the accuracy of processing similar images is poor. Based on this, some embodiments of the present disclosure first obtain a to-be-processed image set and a similarity threshold. The to-be-processed image set comprises a first to-be-processed image and a second to-be-processed image, which are the images whose similarity is to be measured, and the similarity threshold is used to judge image similarity. Second, the first and second to-be-processed images are preprocessed to obtain a first preprocessed image and a second preprocessed image. Then, the Hamming distance between the first and second preprocessed images is generated; the Hamming distance measures image similarity according to texture. Next, the Euclidean distance between the first and second preprocessed images is generated; the Euclidean distance measures image similarity according to color. Finally, a similarity index is generated from the Hamming distance and the Euclidean distance, and image similarity is judged from the relation between the similarity index and the similarity threshold, so that similar images can be deleted. The method evaluates image similarity using both texture and color information, meets the image processing requirements of the color-rich e-commerce field, improves the ability to eliminate highly similar duplicate images, and raises the level of image processing.

Drawings

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.

FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;

FIG. 2 is a flow diagram of some embodiments of an image processing method according to the present disclosure;

FIG. 3 is an exemplary authorization prompt box;

FIG. 4 is a schematic structural diagram of some embodiments of an image processing apparatus according to the present disclosure;

fig. 5 is a schematic block diagram of a terminal device suitable for use in implementing some embodiments of the present disclosure.

Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.

It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.

It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.

It should be noted that references to "a", "an" and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.

The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the image processing method of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as an information processing application, an image processing application, a data analysis application, and the like.

The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various terminal devices having a display screen, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the above-listed terminal apparatuses, and may be implemented as a plurality of software or software modules (e.g., to provide input of a to-be-processed image set and a similarity threshold) or as a single software or software module; this is not specifically limited herein.

The server 105 may be a server that provides various services, such as a server that stores a set of images to be processed and a similarity threshold value input by the terminal apparatuses 101, 102, 103, and the like. The server may process the received to-be-processed image set and the similarity threshold, and feed back a processing result (e.g., a similarity index) to the terminal device.

It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by the server 105 or by the terminal device.

It should be noted that the server 105 may also store the to-be-processed image set and the similarity threshold directly in its local storage, extract them locally, and obtain the similarity index after processing; in this case, the exemplary system architecture 100 may not include the terminal devices 101, 102, and 103 and the network 104.

It should be noted that the terminal apparatuses 101, 102, and 103 may also have image processing applications installed therein, and in this case, the processing method may also be executed by the terminal apparatuses 101, 102, and 103. At this point, the exemplary system architecture 100 may also not include the server 105 and the network 104.

The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, for providing image processing services) or as a single software or software module; this is not specifically limited herein.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to fig. 2, a flow 200 of some embodiments of an image processing method according to the present disclosure is shown. The image processing method comprises the following steps:

step 201, acquiring a to-be-processed image set and a similarity threshold.

In some embodiments, an execution subject of the image processing method (e.g., the server shown in FIG. 1) acquires the set of images to be processed and the similarity threshold in response to receiving a target authorization signal. Specifically, the target authorization signal may be a signal generated by the user performing a target operation on a target control that authorizes acquisition of the to-be-processed image set and the similarity threshold. The target control may be contained in an authorization prompt box. The authorization prompt box may be displayed on a target terminal device. The target terminal device may be a terminal device logged in with an account corresponding to the user. The terminal device may be a mobile phone or a computer. The target operation may be a "click operation" or a "slide operation". The target control may be a "confirm button".

As an example, the authorization prompt box described above may be as shown in fig. 3. The authorization prompt box may include: a prompt information display section 301 and a control 302. The prompt information display section 301 may be configured to display prompt information. The prompt information may be "whether to allow acquisition of the to-be-processed image set and the similarity threshold". The control 302 may be a "confirm button" or a "cancel button".

The image set to be processed comprises a first image to be processed and a second image to be processed, which are the images whose similarity is to be measured. The similarity threshold is used for judging image similarity. Specifically, the similarity threshold may be a positive number, for example 0.5.

Step 202, generating a pre-processing image set based on the image set to be processed.

In some embodiments, the execution subject generates the set of pre-processed images based on the set of images to be processed. Optionally, the first to-be-processed image and the second to-be-processed image are normalized to obtain a first-stage to-be-processed image and a second-stage to-be-processed image. The size of each of the first-stage and second-stage to-be-processed images is the first number multiplied by the first number. Specifically, the first and second to-be-processed images may be normalized from an arbitrary size to the first number multiplied by the first number by nearest-neighbor interpolation; in particular, the first number may be 32. Channel conversion may be performed on the first-stage and second-stage to-be-processed images to remove any mask layer, so as to ensure that both images have the same dimension and the same size. Specifically, for each pixel in the first-stage to-be-processed image, the following formula is used to determine the corresponding pixel in the first to-be-processed image, so as to obtain the first-stage to-be-processed image:

srcX1 = dstX1*(srcW1/dstW1), srcY1 = dstY1*(srcH1/dstH1)

wherein srcW1 and srcH1 are respectively the length and width of the first to-be-processed image, dstW1 and dstH1 are respectively the length and width of the first-stage to-be-processed image, srcX1 and srcY1 are respectively the row and column indices of the pixel in the first to-be-processed image, and dstX1 and dstY1 are respectively the row and column indices of the pixel in the first-stage to-be-processed image. Specifically, for each pixel in the second-stage to-be-processed image, the following formula is used to determine the corresponding pixel in the second to-be-processed image, so as to obtain the second-stage to-be-processed image:

srcX2 = dstX2*(srcW2/dstW2), srcY2 = dstY2*(srcH2/dstH2)

wherein srcW2 and srcH2 are respectively the length and width of the second to-be-processed image, dstW2 and dstH2 are respectively the length and width of the second-stage to-be-processed image, srcX2 and srcY2 are respectively the row and column indices of the pixel in the second to-be-processed image, and dstX2 and dstY2 are respectively the row and column indices of the pixel in the second-stage to-be-processed image.
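For illustration only (not part of the patent text), the nearest-neighbor index mapping above can be sketched in Python as follows; the function name, the truncation toward zero, and the boundary clamping are assumptions:

```python
import numpy as np

def nearest_neighbor_resize(src: np.ndarray, dst_w: int, dst_h: int) -> np.ndarray:
    src_h, src_w = src.shape[:2]
    dst = np.empty((dst_h, dst_w) + src.shape[2:], dtype=src.dtype)
    for dst_y in range(dst_h):
        for dst_x in range(dst_w):
            # srcX = dstX * (srcW / dstW), srcY = dstY * (srcH / dstH)
            src_x = min(int(dst_x * src_w / dst_w), src_w - 1)
            src_y = min(int(dst_y * src_h / dst_h), src_h - 1)
            dst[dst_y, dst_x] = src[src_y, src_x]
    return dst
```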

Optionally, the first-stage and second-stage to-be-processed images are filtered to obtain the first preprocessed image and the second preprocessed image. Specifically, a 3 × 3 Gaussian filter kernel may be used to perform Gaussian filtering on the first-stage and second-stage to-be-processed images, so as to filter out noise, reduce the influence of image noise, and obtain the first preprocessed image and the second preprocessed image. A set consisting of the first preprocessed image and the second preprocessed image is determined as the set of preprocessed images.
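A minimal sketch of the whole preprocessing step, assuming OpenCV (which the patent does not name) and the 32 × 32 target size mentioned above:

```python
import cv2
import numpy as np

FIRST_NUMBER = 32  # normalized side length; "the first number" in the text

def preprocess(image: np.ndarray) -> np.ndarray:
    # Nearest-neighbor normalization to FIRST_NUMBER x FIRST_NUMBER.
    resized = cv2.resize(image, (FIRST_NUMBER, FIRST_NUMBER),
                         interpolation=cv2.INTER_NEAREST)
    # Drop an alpha/mask channel, if present, so both images share
    # the same dimension and size.
    if resized.ndim == 3 and resized.shape[2] == 4:
        resized = resized[:, :, :3]
    # 3 x 3 Gaussian filtering to suppress noise.
    return cv2.GaussianBlur(resized, (3, 3), 0)
```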

Step 203, generating a Hamming distance based on the first preprocessed image and the second preprocessed image.

In some embodiments, the executing subject performs graying processing on the first preprocessed image and the second preprocessed image to obtain a first preprocessed grayscale image and a second preprocessed grayscale image. Specifically, for each pixel in the first preprocessed image, a pixel value in the first preprocessed gray-scale image corresponding to the pixel is generated by using the following formula to obtain the first preprocessed gray-scale image:

Gray1=(cr1*19595+cg1*38469+cb1*7472)>>16

where Gray1 denotes the corresponding pixel value in the first preprocessed grayscale image, cr1, cg1 and cb1 respectively denote the three channel values of the pixel in the first preprocessed image, and ">>" denotes a right bit shift (here by 16 bits). Specifically, for each pixel in the second preprocessed image, a pixel value in the second preprocessed grayscale image corresponding to the pixel is generated using the following formula, so as to obtain the second preprocessed grayscale image:

Gray2=(cr2*19595+cg2*38469+cb2*7472)>>16

where Gray2 denotes the corresponding pixel value in the second preprocessed grayscale image, cr2, cg2 and cb2 respectively denote the three channel values of the pixel in the second preprocessed image, and ">>" denotes a right bit shift (here by 16 bits).
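For illustration, the fixed-point grayscale conversion can be written as below; the weights 19595, 38469 and 7472 are 0.299, 0.587 and 0.114 scaled by 2^16, and the R, G, B channel order is an assumption:

```python
import numpy as np

def to_gray(img: np.ndarray) -> np.ndarray:
    # Gray = (r*19595 + g*38469 + b*7472) >> 16, per the formula above.
    r = img[:, :, 0].astype(np.uint32)
    g = img[:, :, 1].astype(np.uint32)
    b = img[:, :, 2].astype(np.uint32)
    return ((r * 19595 + g * 38469 + b * 7472) >> 16).astype(np.uint8)
```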

A discrete cosine transform is then performed on the first and second preprocessed grayscale images to obtain a first preprocessed cosine image and a second preprocessed cosine image. Specifically, for each pixel in the first preprocessed grayscale image, a pixel value in the first preprocessed cosine image corresponding to the pixel is generated using the following formula, so as to obtain the first preprocessed cosine image:

F1(u, v) = c(u)*c(v)*Σ(x=0..N-1) Σ(y=0..N-1) f1(x, y)*cos[(2x + 1)uπ/(2N)]*cos[(2y + 1)vπ/(2N)], where c(0) = √(1/N) and c(u) = √(2/N) for u > 0

wherein F1(u, v) represents the pixel value at coordinates (u, v) in the first preprocessed cosine image, f1(x, y) represents the gray value of the pixel at coordinates (x, y) in the first preprocessed grayscale image, and N is the first number; in particular, N may be 32. Specifically, for each pixel in the second preprocessed grayscale image, a pixel value in the second preprocessed cosine image corresponding to the pixel is generated using the following formula, so as to obtain the second preprocessed cosine image:

F2(u, v) = c(u)*c(v)*Σ(x=0..N-1) Σ(y=0..N-1) f2(x, y)*cos[(2x + 1)uπ/(2N)]*cos[(2y + 1)vπ/(2N)], where c(0) = √(1/N) and c(u) = √(2/N) for u > 0

wherein F2(u, v) represents the pixel value at coordinates (u, v) in the second preprocessed cosine image, f2(x, y) represents the gray value of the pixel at coordinates (x, y) in the second preprocessed grayscale image, and N is the first number; in particular, N may be 32.
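A sketch of the 2-D DCT using SciPy's separable DCT-II with orthonormal scaling, which matches the c(u), c(v) factors above (the use of SciPy is an assumption, not named in the patent):

```python
import numpy as np
from scipy.fftpack import dct

def dct2(gray: np.ndarray) -> np.ndarray:
    # Separable 2-D DCT-II: transform columns, then rows.
    # norm='ortho' applies the sqrt(1/N) / sqrt(2/N) factors.
    return dct(dct(gray.astype(np.float64), axis=0, norm='ortho'),
               axis=1, norm='ortho')
```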

Optionally, the region of the second number multiplied by the second number at the upper left corner of the first preprocessed cosine image is determined as the first feature region, and the region of the second number multiplied by the second number at the upper left corner of the second preprocessed cosine image is determined as the second feature region. The second number may be 8.

Optionally, the first feature matrix is determined from the first feature region. Specifically, the average of the pixels in the first feature region is calculated to obtain a first mean value. The first feature region is stretched row by row, end to end, into a 1-dimensional vector to obtain a first vector feature matrix; specifically, the first vector feature matrix has size 1 by the square of the second number. Each element of the first vector feature matrix is compared with the first mean value: the element is set to 0 in response to its value being less than the first mean value, and to 1 otherwise.

The second feature matrix is determined from the second feature region. Specifically, the average of the pixels in the second feature region is calculated to obtain a second mean value. The second feature region is stretched row by row, end to end, into a 1-dimensional vector to obtain a second vector feature matrix; specifically, the second vector feature matrix has size 1 by the square of the second number. Each element of the second vector feature matrix is compared with the second mean value: the element is set to 0 in response to its value being less than the second mean value, and to 1 otherwise.

The distance between the first feature matrix and the second feature matrix is then determined as the Hamming distance. Specifically, the distance between the first feature matrix and the second feature matrix may be normalized using the following formula to obtain the Hamming distance:

dist_AB′=dist_AB/len(Binary_A)

where len(Binary_A) is the square of the second number, dist_AB is the number of differing elements between the first feature matrix and the second feature matrix, and dist_AB′ is the normalized Hamming distance.
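Putting the feature region, the binarization against the mean, and the normalization together, a minimal sketch (building on the dct2 helper above; names and the 8 × 8 block size are assumptions from the text):

```python
import numpy as np

SECOND_NUMBER = 8  # side of the low-frequency block; "the second number"

def feature_bits(cosine_img: np.ndarray) -> np.ndarray:
    # Upper-left SECOND_NUMBER x SECOND_NUMBER feature region.
    block = cosine_img[:SECOND_NUMBER, :SECOND_NUMBER]
    vec = block.flatten()                   # stretch row by row into 1-D
    # Binarize against the mean: < mean -> 0, otherwise 1.
    return (vec >= vec.mean()).astype(np.uint8)

def hamming_distance(cos_a: np.ndarray, cos_b: np.ndarray) -> float:
    bits_a, bits_b = feature_bits(cos_a), feature_bits(cos_b)
    dist_ab = int(np.count_nonzero(bits_a != bits_b))
    return dist_ab / bits_a.size            # dist_AB' = dist_AB / len(Binary_A)
```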

Step 204, generating the Euclidean distance based on the first preprocessed image and the second preprocessed image.

In some embodiments, the execution subject generates the Euclidean distance based on the first preprocessed image and the second preprocessed image. Optionally, for each pixel in the first preprocessed image, a linear-space three-channel value of the pixel is generated using the following formula to obtain a first linear image:

cX1 = (0.4124*cR1 + 0.3576*cG1 + 0.1805*cB1)*100
cY1 = (0.2126*cR1 + 0.7152*cG1 + 0.0722*cB1)*100
cZ1 = (0.0193*cR1 + 0.1192*cG1 + 0.9505*cB1)*100
f(t) = t^(1/3) if t > (6/29)^3, and f(t) = t/(3*(6/29)^2) + 4/29 otherwise
cL1 = 116*f(cY1/Yn) - 16
ca1 = 500*(f(cX1/Xn) - f(cY1/Yn))
cb1 = 200*(f(cY1/Yn) - f(cZ1/Zn))

wherein cR1, cG1 and cB1 are the three channel values of the pixel (normalized to [0, 1]), cX1, cY1 and cZ1 are the three channel values of the pixel in the second color space, t represents a variable, n is a subscript, Xn = 95.047, Yn = 100.000, Zn = 108.883, and cL1, ca1 and cb1 are the linear-space three-channel values of the pixel.

For each pixel in the second pre-processed image, generating a linear spatial three-channel value for the pixel using the following equation to obtain a second linear image:

cX2 = (0.4124*cR2 + 0.3576*cG2 + 0.1805*cB2)*100
cY2 = (0.2126*cR2 + 0.7152*cG2 + 0.0722*cB2)*100
cZ2 = (0.0193*cR2 + 0.1192*cG2 + 0.9505*cB2)*100
f(t) = t^(1/3) if t > (6/29)^3, and f(t) = t/(3*(6/29)^2) + 4/29 otherwise
cL2 = 116*f(cY2/Yn) - 16
ca2 = 500*(f(cX2/Xn) - f(cY2/Yn))
cb2 = 200*(f(cY2/Yn) - f(cZ2/Zn))

wherein cR2, cG2 and cB2 are the three channel values of the pixel (normalized to [0, 1]), cX2, cY2 and cZ2 are the three channel values of the pixel in the second color space, t represents a variable, n is a subscript, Xn = 95.047, Yn = 100.000, Zn = 108.883, and cL2, ca2 and cb2 are the linear-space three-channel values of the pixel.

Optionally, the distance between the first linear image and the second linear image is determined as the Euclidean distance. Based on the first linear image, a first linear vector is generated, whose dimension is the first number multiplied by the first number multiplied by 3; specifically, the first linear image is stretched row by row, end to end, into a 1-dimensional vector to obtain the first linear vector. Based on the second linear image, a second linear vector is generated, whose dimension is likewise the first number multiplied by the first number multiplied by 3; specifically, the second linear image is stretched row by row, end to end, into a 1-dimensional vector to obtain the second linear vector. The Euclidean distance is generated from the first linear vector and the second linear vector using:

dist_AB″=||AL′-BL′||

wherein AL′ is the first linear vector, BL′ is the second linear vector, ||·|| denotes the vector 2-norm operation, and dist_AB″ is the Euclidean distance.
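A sketch of the color-space conversion and the Euclidean distance; the sRGB-to-XYZ matrix is the standard D65 one (an assumption consistent with the Xn, Yn, Zn values above), and the R, G, B channel order is assumed:

```python
import numpy as np

D65 = np.array([95.047, 100.000, 108.883])  # Xn, Yn, Zn from the text

def rgb_to_lab(img: np.ndarray) -> np.ndarray:
    rgb = img.astype(np.float64) / 255.0
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (rgb @ m.T) * 100.0 / D65          # normalize by the white point

    def f(t):
        delta = 6.0 / 29.0
        return np.where(t > delta ** 3,
                        np.cbrt(t),
                        t / (3 * delta ** 2) + 4.0 / 29.0)

    fx, fy, fz = f(xyz[..., 0]), f(xyz[..., 1]), f(xyz[..., 2])
    lab_l = 116.0 * fy - 16.0
    lab_a = 500.0 * (fx - fy)
    lab_b = 200.0 * (fy - fz)
    return np.stack([lab_l, lab_a, lab_b], axis=-1)

def euclidean_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    va = rgb_to_lab(img_a).flatten()         # stretched row by row into 1-D
    vb = rgb_to_lab(img_b).flatten()
    return float(np.linalg.norm(va - vb))    # dist_AB'' = ||AL' - BL'||
```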

The optional content in step 204 above, namely the technical approach of generating the Euclidean distance through a linear transformation, is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background: the multi-channel pixel values of color images in the e-commerce field are pixel values in a nonlinear space, and similarity calculation performed directly on pixel values in the nonlinear space suffers from distortion, which affects the accuracy of image processing. A factor leading to poor image processing accuracy is often the following: the Euclidean distance is generated directly from color information in the nonlinear space, which gives poor accuracy. If this factor is addressed, the accuracy of image processing can be improved. To achieve this effect, the present disclosure proposes generating the Euclidean distance by means of a linear transformation. First, the first preprocessed image and the second preprocessed image are linearly transformed to obtain a first linear image and a second linear image. Then, the first linear image and the second linear image are vectorized to generate a first linear vector and a second linear vector. Generating the Euclidean distance from the first linear vector and the second linear vector in the linear space improves the accuracy of the Euclidean distance, overcomes the distortion of calculation in the nonlinear space, and thus solves the second technical problem.

Step 205, generating a similarity index based on the Hamming distance and the Euclidean distance.

In some embodiments, the execution subject generates the similarity index based on the Hamming distance and the Euclidean distance. Optionally, the similarity index is generated based on the Hamming distance and the Euclidean distance using the following formula:

s=0.5*dist_AB′+0.5*dist_AB″

wherein dist_AB′ is the Hamming distance, dist_AB″ is the Euclidean distance, and s is the similarity index.
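Combining the two terms exactly as in the formula above, using the preprocess, to_gray, dct2, hamming_distance and euclidean_distance sketches from earlier (the equal 0.5 weights come from the text; everything else is illustrative):

```python
def similarity_index(img_a, img_b) -> float:
    a, b = preprocess(img_a), preprocess(img_b)
    d_ham = hamming_distance(dct2(to_gray(a)), dct2(to_gray(b)))  # texture term
    d_euc = euclidean_distance(a, b)                              # color term
    return 0.5 * d_ham + 0.5 * d_euc      # s = 0.5*dist_AB' + 0.5*dist_AB''
```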

Step 206, receiving deletion processing information input by the user from the target terminal device.

In some embodiments, the execution body receives the deletion processing information input by the user from the target terminal device. The target terminal device may be a device communicatively connected to the execution body, such as a mobile phone or a computer. Specifically, the deletion processing information may be a voice instruction or a text instruction.

Step 207, in response to the similarity index being greater than the similarity threshold, deleting the second image to be processed according to the deletion processing information.

In some embodiments, the execution subject deletes the second to-be-processed image according to the deletion processing information in response to the similarity index being greater than the similarity threshold. Specifically, when the similarity index is greater than the similarity threshold, the first and second to-be-processed images are similar images, and the second to-be-processed image may be deleted. In response to the similarity index not being greater than the similarity threshold, the first and second to-be-processed images are not similar images, and both are retained and displayed.
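Finally, a sketch of the decision rule of steps 206-207, with file deletion standing in for the user's deletion processing information; the paths, the default threshold, and the display step are placeholders:

```python
import os
import cv2

def deduplicate(path_a: str, path_b: str, threshold: float = 0.5) -> None:
    # Load as RGB so the channel order matches the sketches above.
    img_a = cv2.cvtColor(cv2.imread(path_a), cv2.COLOR_BGR2RGB)
    img_b = cv2.cvtColor(cv2.imread(path_b), cv2.COLOR_BGR2RGB)
    if similarity_index(img_a, img_b) > threshold:
        os.remove(path_b)       # delete the second to-be-processed image
    else:
        # Not similar: retain and display both images (display elided).
        print(path_a, path_b)
```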

One embodiment presented in fig. 2 has the following beneficial effects: acquiring a to-be-processed image set and a similarity threshold; generating a set of pre-processed images, wherein the set of pre-processed images comprises a first pre-processed image and a second pre-processed image; generating a hamming distance based on the first pre-processed image and the second pre-processed image; generating a Euclidean distance based on the first preprocessed image and the second preprocessed image; generating a similarity index based on the Hamming distance and the Euclidean distance; receiving deletion processing information input by a user from a target terminal device; and deleting the second image to be processed according to the deletion processing information in response to the similarity index being larger than the similarity threshold. According to the embodiment, the similarity of the first image to be processed and the second image to be processed is measured according to the Hamming distance and the Euclidean distance, repeated images with high similarity can be deleted according to the color information and the texture information, and the image processing level is improved.

With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an image processing apparatus, which correspond to the method embodiments described above with reference to fig. 2 and which may be applied in particular to various terminal devices.

As shown in fig. 4, an image processing apparatus 400 of some embodiments includes: an acquisition unit 401, a first generation unit 402, a second generation unit 403, a third generation unit 404, a fourth generation unit 405, a receiving unit 406, and a control unit 407. The acquisition unit 401 is configured to acquire a to-be-processed image set and a similarity threshold, wherein the to-be-processed image set comprises a first to-be-processed image and a second to-be-processed image. The first generation unit 402 is configured to generate a set of preprocessed images based on the to-be-processed image set, wherein the set of preprocessed images includes a first preprocessed image and a second preprocessed image. The second generation unit 403 is configured to generate a Hamming distance based on the first preprocessed image and the second preprocessed image. The third generation unit 404 is configured to generate a Euclidean distance based on the first preprocessed image and the second preprocessed image. The fourth generation unit 405 is configured to generate a similarity index based on the Hamming distance and the Euclidean distance. The receiving unit 406 is configured to receive deletion processing information input by a user from a target terminal device. The control unit 407 is configured to delete the second to-be-processed image according to the deletion processing information in response to the similarity index being greater than the similarity threshold.

It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.

Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use in implementing a terminal device of an embodiment of the present disclosure. The terminal device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 506 into a Random Access Memory (RAM) 503. In the RAM503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM503 are connected to each other via a bus 504. An Input/Output (I/O) interface 505 is also connected to bus 504.

The following components are connected to the I/O interface 505: a storage section 506 including a hard disk and the like; and a communication section 507 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 507 performs communication processing via a network such as the internet. The drive 508 is also connected to the I/O interface 505 as necessary. A removable medium 509 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 508 as necessary, so that a computer program read out therefrom is installed into the storage section 506 as necessary.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 507 and/or installed from the removable medium 509. The above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 501. It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, and also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.
