Self-learning-based commodity detail page generation method

Document No.: 1737490    Publication date: 2019-12-20

Note: this technique, "Self-learning-based commodity detail page generation method", was designed and created by Peng Shi (彭石) on 2019-08-29. Abstract: The invention discloses a self-learning-based commodity detail page generation method, which comprises at least the following steps: S1, providing an image processing model; S2, constructing a detail page template library; S3, constructing a picture library; S4, providing an image or an image set and extracting the dominant color of the image or image set, matching the image against pictures in the picture library using a pose similarity algorithm, and ranking the image set by combining the dominant color and the matching degree; S5, matching the image or the image set against the detail page template library according to the category of the image or image set; S6, matching images to the picture placeholder boxes on the module according to the detail page template matched in S5 and the configuration parameters; S7, cropping the corresponding image according to the cropping rule of the picture placeholder box on the module; and S8, generating a commodity detail page.

1. A self-learning-based commodity detail page generation method, characterized in that it comprises at least the following steps:

S1, providing an image processing model;

S2, constructing a detail page template library;

S3, constructing a picture library;

S4, providing an image or an image set and extracting the dominant color of the image or image set; matching the image against pictures in the picture library using a pose similarity algorithm, and ranking the image set by combining the dominant color and the matching degree;

S5, matching the image or the image set against the detail page template library according to the category of the image or image set;

S6, matching images to the picture placeholder boxes on the module according to the detail page template matched in S5 and the configuration parameters;

S7, cropping the corresponding image according to the cropping rule of the picture placeholder box on the module;

and S8, generating a commodity detail page.

2. The self-learning-based commodity detail page generation method according to claim 1, wherein the image processing model is constructed by the following method:

S11, collecting commodity images;

S12, classifying the commodity images based on a trained deep neural network model;

S13, selecting key points of the commodity images based on the Convolutional Pose Machines model;

S14, locating the commodity image target detection box based on the Faster R-CNN model;

and S15, extracting the dominant color of each commodity image and classifying the images by dominant color.

3. The self-learning-based commodity detail page generation method according to claim 1, wherein the detail page template library is constructed by the following method:

S21, recording new detail pages into the template library;

S22, training a CRF model based on the template element sequences in the template library, and predicting the template element sequence of the image or image set to obtain a new detail page template.

4. The self-learning-based commodity detail page generation method according to claim 1, wherein the pose similarity algorithm is as follows:

S41, detecting human body or clothing target boxes in the image;

S42, selecting the target box with the largest area in the image, and scaling the target region to a width of 200 pixels;

S43, if the difference between the aspect ratios of the target boxes of the two images is larger than k, returning a 'dissimilar' judgment result; wherein k is 0.2;

S44, detecting the key points of the human body or clothing, and defining the number of key points as m;

S45, normalizing the X and Y coordinates of the key points detected in the selected target box to the range [0, 1];

S46, fixing the order of the key points to form the vector coordinate axes, and taking the X and Y coordinate values of the key points as vector components to obtain a vector v; the X and Y of an undetected key point are marked as -1; v_{i,x} denotes the X coordinate of the ith key point and v_{i,y} denotes the Y coordinate of the ith key point;

S47, if the proportion of homonymous (same-name) key points present in both comparison regions is at least n, continuing the calculation; otherwise, returning a 'dissimilar' judgment result; wherein n is 0.9;

S48, constructing a rectangle for each region from min(X), min(Y), max(X) and max(Y) of the coordinates of the homonymous key points; if the ratio of the area of the smaller rectangle to the area of the larger rectangle is less than t, returning a 'dissimilar' judgment result; wherein t is 0.7;

S49, for the key point vectors v1 and v2 of the two images, defining d as the distance between the key points in Euclidean space, d = sqrt( Σ_i [ (v1_{i,x} - v2_{i,x})^2 + (v1_{i,y} - v2_{i,y})^2 ] ).

Two images with d < 0.1 are considered to have similar poses.

Technical Field

The invention belongs to the field of visual layout design automation, and particularly relates to a self-learning-based commodity detail page generation method.

Background

Conventionally, an e-commerce commodity detail page is produced by an art designer through photo retouching, typesetting, adjustment and cropping, and the finished detail page is then uploaded and published to each e-commerce platform. This conventional procedure takes tens of minutes to several hours.

At present, the prior art generates the commodity detail page of an e-commerce platform by name matching: the picture placeholder boxes on the template are first named, a name is then assigned to each material picture file, and a computer program places each material picture file into the placeholder box on the template that has the same name. This approach has the following defects: the designer must plan the layout of the detail page in advance, decide the position of each material picture on the detail page, and then name the picture files accordingly. The method merely fills named pictures into the detail page template, has no intelligent typesetting or layout capability, and yields only a limited efficiency improvement.
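For illustration only, a minimal sketch of this name-matching scheme might look as follows; the directory, file extension and placeholder names are hypothetical and not taken from any cited implementation:

```python
# Minimal sketch of the prior-art name-matching scheme: each material picture file
# is placed into the template placeholder box that carries the same name.
# The directory, file extension and placeholder names below are hypothetical.
from pathlib import Path

def fill_template_by_name(placeholder_names, material_dir):
    # Map each placeholder box name to the material file whose stem matches it.
    materials = {p.stem: p for p in Path(material_dir).glob("*.jpg")}
    return {name: materials.get(name) for name in placeholder_names}

# The designer must have named the files exactly after the placeholder boxes;
# boxes without a matching file simply stay empty (None).
print(fill_template_by_name(["banner", "detail_1", "size_chart"], "./materials"))
```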

Chinese patent publication No. CN105068985A discloses an automated design and typesetting method based on an artificial intelligence machine: S1, establishing an information mapping docking framework according to the fields required by a third-party e-commerce platform; S2, performing intelligent automatic typesetting according to the product pictures; and S3, forming the final commodity detail page after fine adjustment, checking and confirmation, and fully automatically uploading it to each large third-party e-commerce platform once the customer gives final confirmation.

The specific process of S2 is as follows: the original detail page template submitted by the customer is uploaded to the typesetting system; the typesetting system has a decoding function capable of parsing PSD source files, and combines the parsed PSD source file with the preset template information provided by the merchant to generate an intelligent template that supports artificial-intelligence recognition and automatic editing; the product pictures provided by the customer are then uploaded onto the intelligent template, and the typesetting system automatically and visually identifies the content of the product pictures from the standpoint of aesthetic standards, combining the identified pictures into a commodity display detail page by automatic comparison, placement, cropping and beautification. To visually identify the product picture content from the aesthetic standpoint, the typesetting system uses a DenseNet model and follows these steps: a. score the pictures aesthetically according to the aesthetic rules of the third-party e-commerce platform and real-time image detection, automatically rank the scores from high to low, and proceed to the next step; b. identify the objects in the pictures that passed step a by detection on a GPU cluster, score each product picture according to its degree of match with the preset information, and rank the picture scores from high to low; c. automatically crop and discard the pictures selected in step b according to the picture-size requirements of the preset aesthetic template on the intelligent template; d. analyze the semantics of the e-commerce product fields to obtain the specific meaning of each product field and the correspondences between fields; e. the typesetting system automatically selects the fields and pictures that meet the requirements of the preset pictures and field information, and finally splices them onto the corresponding areas of the intelligent template, through automatic comparison, placement, cropping and beautification of the pictures, to form the commodity display detail page.

The disadvantages are as follows: when this method processes a given material package, there is only one pre-configured detail page template; when the clothing type in the material pictures does not match the type preset on the template, or the composition of the material package pictures is complex, the resulting detail page looks poor and still requires a large amount of manual adjustment in a detail page editor, so the efficiency remains low.

Disclosure of Invention

In order to solve the above technical problem, the invention provides a self-learning-based commodity detail page generation method, which comprises at least the following steps (a minimal end-to-end sketch is given after the list below):

S1, providing an image processing model;

S2, constructing a detail page template library;

S3, constructing a picture library;

S4, providing an image or an image set and extracting the dominant color of the image or image set; matching the image against pictures in the picture library using a pose similarity algorithm, and ranking the image set by combining the dominant color and the matching degree;

S5, matching the image or the image set against the detail page template library according to the category of the image or image set;

S6, matching images to the picture placeholder boxes on the module according to the detail page template matched in S5 and the configuration parameters;

S7, cropping the corresponding image according to the cropping rule of the picture placeholder box on the module;

and S8, generating a commodity detail page.
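Purely to make the flow of these steps concrete, here is a minimal Python sketch of S4 and S6-S8 under simplifying assumptions: the mean RGB stands in for the dominant-color extraction of S4, a plain center-crop stands in for the cropping rule of S7, and the placeholder box sizes are hypothetical. It is an illustration, not the disclosed implementation.

```python
import numpy as np

def dominant_color(img):
    # Very rough stand-in for the S4 dominant-color step: mean RGB of the image.
    return tuple(img.reshape(-1, 3).mean(axis=0).astype(int))

def crop_to_box(img, box_w, box_h):
    # Stand-in for the S7 cropping rule: center-crop to the placeholder aspect ratio.
    h, w, _ = img.shape
    target = box_w / box_h
    if w / h > target:                       # image too wide: trim left and right
        new_w = int(h * target)
        x0 = (w - new_w) // 2
        return img[:, x0:x0 + new_w]
    new_h = int(w / target)                  # image too tall: trim top and bottom
    y0 = (h - new_h) // 2
    return img[y0:y0 + new_h, :]

def generate_detail_page(images, placeholder_boxes):
    # Stand-in for S6-S8: pair images with placeholder boxes in ranked order
    # and crop each image to its box; a real system would then render the template.
    return [crop_to_box(img, w, h) for img, (w, h) in zip(images, placeholder_boxes)]

# Toy usage with random images and two hypothetical placeholder boxes (w, h).
imgs = [np.random.randint(0, 255, (600, 400, 3), dtype=np.uint8) for _ in range(2)]
imgs.sort(key=lambda im: sum(dominant_color(im)))      # trivial stand-in ranking
page = generate_detail_page(imgs, [(750, 500), (750, 1000)])
print([p.shape for p in page])
```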

Preferably, the image processing model is constructed as follows:

S11, collecting commodity images;

S12, classifying the commodity images based on a trained deep neural network model;

S13, selecting key points of the commodity images based on the Convolutional Pose Machines model;

S14, locating the commodity image target detection box based on the Faster R-CNN model;

and S15, extracting the dominant color of each commodity image and classifying the images by dominant color (one possible sketch of this color-extraction step follows).
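As one possible illustration of the dominant-color part of S15, k-means clustering over pixel colors is a common approach; it is not the patented implementation, and the cluster count and synthetic test image below are assumptions:

```python
# Hedged sketch of S15: estimate the dominant (main body) color of a commodity
# image by clustering its pixels and taking the center of the largest cluster.
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(image, n_clusters=4):
    # image: HxWx3 uint8 array; returns the dominant color as an (R, G, B) tuple.
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_)
    center = km.cluster_centers_[counts.argmax()]
    return tuple(int(round(c)) for c in center)

# Toy usage with a synthetic two-color image; a real S15 would first mask out
# background pixels using the target detection box from S14.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :60] = (200, 30, 30)                  # majority region
print(dominant_color(img, n_clusters=2))     # approximately (200, 30, 30)
```

The returned color could then be mapped to a discrete color class (for example red, black, white) for the classification part of S15.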

Preferably, the construction method of the detail page template library is as follows:

S21, recording new detail pages into the template library;

S22, training a CRF model based on the template element sequences in the template library, and predicting the template element sequence of the image or image set to obtain a new detail page template (a hedged sketch follows).
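To illustrate how S22 could be realised, here is a small sketch using the sklearn-crfsuite library; the library choice, features, element labels and toy data are assumptions made for illustration and are not part of the disclosure:

```python
# Hedged sketch of S22: treat each existing detail page as a sequence of slots,
# train a linear-chain CRF to predict the template element type of each slot from
# features of the image placed there, and use the predicted element sequence of a
# new image set as a new detail page template. All labels and features are hypothetical.
import sklearn_crfsuite

def slot_features(images, i):
    img = images[i]
    return {
        "position": str(i),
        "category": img["category"],         # e.g. from the S12 classification model
        "color_class": img["color_class"],   # e.g. from the S15 dominant-color step
        "is_first": "1" if i == 0 else "0",
        "is_last": "1" if i == len(images) - 1 else "0",
    }

# Two toy training pages: per-slot image descriptors plus the element used there.
pages = [
    [{"category": "dress", "color_class": "red"},
     {"category": "dress", "color_class": "red"},
     {"category": "size_chart", "color_class": "white"}],
    [{"category": "coat", "color_class": "black"},
     {"category": "coat", "color_class": "black"},
     {"category": "size_chart", "color_class": "white"}],
]
elements = [["banner", "model_shot", "size_table"],
            ["banner", "model_shot", "size_table"]]

X_train = [[slot_features(p, i) for i in range(len(p))] for p in pages]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, elements)

# Predict the template element sequence for a new image set.
new_images = [{"category": "dress", "color_class": "blue"},
              {"category": "size_chart", "color_class": "white"}]
print(crf.predict([[slot_features(new_images, i) for i in range(len(new_images))]]))
```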

Preferably, the pose similarity algorithm is as follows (a hedged Python sketch of the comparison steps is given after the list):

S41, detecting human body or clothing target boxes in the image;

S42, selecting the target box with the largest area in the image, and scaling the target region to a width of 200 pixels;

S43, if the difference between the aspect ratios of the target boxes of the two images is larger than k, returning a 'dissimilar' judgment result; wherein k is 0.2;

S44, detecting the key points of the human body or clothing, and defining the number of key points as m;

S45, normalizing the X and Y coordinates of the key points detected in the selected target box to the range [0, 1];

S46, fixing the order of the key points to form the vector coordinate axes, and taking the X and Y coordinate values of the key points as vector components to obtain a vector v; the X and Y of an undetected key point are marked as -1; v_{i,x} denotes the X coordinate of the ith key point and v_{i,y} denotes the Y coordinate of the ith key point;

S47, if the proportion of homonymous (same-name) key points present in both comparison regions is at least n, continuing the calculation; otherwise, returning a 'dissimilar' judgment result; wherein n is 0.9;

S48, constructing a rectangle for each region from min(X), min(Y), max(X) and max(Y) of the coordinates of the homonymous key points; if the ratio of the area of the smaller rectangle to the area of the larger rectangle is less than t, returning a 'dissimilar' judgment result; wherein t is 0.7;

S49, for the key point vectors v1 and v2 of the two images, defining d as the distance between the key points in Euclidean space, d = sqrt( Σ_i [ (v1_{i,x} - v2_{i,x})^2 + (v1_{i,y} - v2_{i,y})^2 ] ).

Two images with d < 0.1 are considered to have similar poses.
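The following numpy sketch implements the comparison steps S43 and S45-S49 under stated assumptions: the detectors of S41, S42 and S44 are taken as given and supply, for each image, a target box and named key points; missing key points are handled by name rather than by -1 markers; and the per-keypoint averaging of the Euclidean distance is an assumption of this sketch, since the disclosure only names the Euclidean distance and the 0.1 cutoff.

```python
import numpy as np

K, N, T, D_MAX = 0.2, 0.9, 0.7, 0.1   # thresholds k, n, t and the distance cutoff

def pose_similar(box1, kp1, box2, kp2, m):
    # S43: the aspect ratios (width / height) of the two target boxes must not
    # differ by more than k.
    if abs(box1[0] / box1[1] - box2[0] / box2[1]) > K:
        return False
    # S45: normalise the keypoint coordinates to [0, 1] within each target box.
    norm1 = {name: (x / box1[0], y / box1[1]) for name, (x, y) in kp1.items()}
    norm2 = {name: (x / box2[0], y / box2[1]) for name, (x, y) in kp2.items()}
    # S47: enough homonymous (same-name) keypoints must be present in both images.
    common = sorted(set(norm1) & set(norm2))
    if len(common) < N * m:
        return False
    # S48: the bounding rectangles of the common keypoints must have comparable areas.
    def area(points):
        xs, ys = zip(*points)
        return (max(xs) - min(xs)) * (max(ys) - min(ys))
    a1 = area([norm1[c] for c in common])
    a2 = area([norm2[c] for c in common])
    if min(a1, a2) / max(a1, a2) < T:
        return False
    # S46/S49: build the keypoint vectors in a fixed (sorted-name) order and compare
    # them by Euclidean distance, averaged per keypoint (an assumption of this sketch).
    v1 = np.array([norm1[c] for c in common]).ravel()
    v2 = np.array([norm2[c] for c in common]).ravel()
    d = np.linalg.norm(v1 - v2) / len(common)
    return d < D_MAX

# Toy usage with hypothetical clothing keypoints given as (x, y) pixels inside
# each target box, whose size is given as (width, height).
kp_a = {"collar": (100, 20), "left_cuff": (20, 150), "right_cuff": (180, 150)}
kp_b = {"collar": (102, 22), "left_cuff": (25, 148), "right_cuff": (178, 152)}
print(pose_similar((200, 260), kp_a, (200, 258), kp_b, m=3))   # True
```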

Compared with the prior art, the technical scheme of the present application has the following beneficial effects:

according to the method for generating the commodity detail page based on self-learning, the efficiency of editing the detail page can be greatly improved, a large amount of manual adjustment in a detail page editor is not needed, and the labor cost is saved.

Manually producing a single detail page with the prior art takes on average about 20 minutes or more, whereas producing a single detail page with the present method takes on average about 2 minutes or less, an efficiency improvement of more than 10 times. After a merchant adopts the system, the labor that art designers spend producing commodity detail pages can be greatly reduced, yielding considerable economic benefit.

Drawings

FIG. 1 is a flow chart of the method of the present application.

Detailed Description

The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
