Skin monitoring method and device, electronic equipment and computer readable storage medium

Document No.: 99041; Publication date: 2021-10-15

Note: This technology, "皮肤监测方法、装置、电子设备及计算机可读存储介质" (Skin monitoring method and device, electronic equipment and computer readable storage medium), was designed and created by 陈福兴, 刘兴云, 李志阳, 罗家祯 and 齐子铭 on 2021-07-23. Abstract: The embodiments of the present application provide a skin monitoring method, apparatus, electronic device, and computer-readable storage medium, relating to the field of data processing. The method includes: collecting facial skin data of a target user at multiple time points; aligning the first facial skin data and the second facial skin data according to the correspondence between feature points in the two; comparing the aligned first and second facial skin data and generating a skin detection result for the second time point from the comparison; and updating the target user's preset skin profile according to that result. The skin monitoring method provided by the present application can monitor the state of the target user's facial skin over the long term, so that the facial skin state can be analyzed comprehensively and in depth, and the problems present in the facial skin can be reflected more fully.

1. A method of skin monitoring, the method comprising:

acquiring facial skin data of a target user at a plurality of time points, wherein the facial skin data of the plurality of time points at least comprise first facial skin data acquired at a first time point and second facial skin data acquired at a second time point, and the second time point is later than the first time point;

aligning the first face skin data and the second face skin data according to the corresponding relation between the feature points in the first face skin data and the feature points in the second face skin data, wherein the feature points are used for representing position points in the face of the target user;

comparing the aligned first face skin data and the second face skin data, and generating a skin detection result of the second time point according to a comparison result;

and updating the preset skin file of the target user according to the skin detection result of the second time point, wherein the preset skin file of the target user comprises the skin detection result of at least one time point.

2. The method of claim 1, wherein the facial skin data comprises a face model and a texture map, the face model comprising a plurality of the feature points, and the texture map indicating pixel points of the target user's face and the coordinates of the pixel points in the face model.

3. The method according to claim 2, wherein the aligning the first face skin data and the second face skin data according to the correspondence between the feature points in the first face skin data and the feature points in the second face skin data comprises:

aligning the first face model and the second face model according to the corresponding relation between the feature points of the first face model in the first face skin data and the feature points of the second face model in the second face skin data;

and according to the aligned first face model and the aligned second face model, performing alignment processing on a first texture map in the first face skin data and a second texture map in the second face skin data.

4. The method of claim 3, wherein comparing the aligned first face skin data and the second face skin data and generating the skin detection result at the second time point according to the comparison result comprises:

and comparing the first texture map and the second texture map after the alignment processing, and generating a skin detection result of the second time point according to a comparison result.

5. The method of claim 1, wherein the facial skin data further comprises a face model and a binary map, the binary map indicating color patches of the target user's face and the coordinates of the color patches in the face model.

6. The method according to claim 5, wherein the aligning the first face skin data and the second face skin data according to the correspondence between the feature points in the first face skin data and the feature points in the second face skin data comprises:

aligning the first face model and the second face model according to the corresponding relation between the feature points of the first face model in the first face skin data and the feature points of the second face model in the second face skin data;

and according to the aligned first face model and the aligned second face model, performing alignment processing on a first binary image in the first face skin data and a second binary image in the second face skin data.

7. The method of claim 6, wherein comparing the aligned first face skin data and the second face skin data and generating the skin detection result at the second time point according to the comparison result comprises:

and comparing the first binary image and the second binary image after the alignment processing, and generating a skin detection result of the second time point according to the comparison result.

8. A skin monitoring device, characterized in that the device comprises:

an acquisition module, configured to acquire facial skin data of a target user at a plurality of time points, wherein the facial skin data of the plurality of time points at least comprise first facial skin data acquired at a first time point and second facial skin data acquired at a second time point, and the second time point is later than the first time point;

an alignment module, configured to perform alignment processing on the first face skin data and the second face skin data according to a correspondence between feature points in the first face skin data and feature points in the second face skin data, where the feature points are used to represent position points in a face of the target user;

the comparison module is used for comparing the aligned first face skin data and the second face skin data and generating a skin detection result of the second time point according to a comparison result;

and the updating module is used for updating the preset skin file of the target user according to the skin detection result of the second time point, wherein the preset skin file of the target user comprises the skin detection result of at least one time point.

9. An electronic device, characterized in that the electronic device comprises: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the method of any of claims 1-7.

10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1-7.

Technical Field

The present application relates to the field of data processing, and in particular, to a skin monitoring method, apparatus, electronic device, and computer-readable storage medium.

Background

With continuously improving living standards and advances in science and technology, people pay increasing attention to the health of their skin, particularly facial skin. Skin detection is a technology that allows people to understand their skin condition more accurately; through skin detection, people can obtain data about their skin and thus select suitable cosmetics and skin-care regimens in a more targeted way to care for and protect the skin.

The traditional skin detection method collects image information of the skin under white light, polarized light, and other illumination, and then analyzes data such as oil content, moisture, and lesions in the skin through deep learning or traditional image algorithms, thereby obtaining multi-dimensional data on pores, wrinkles, color spots, and the like of the facial skin.

However, this skin detection method can only capture the state of the user's facial skin at a single moment; the analysis result is therefore one-sided and cannot deeply and comprehensively reflect facial skin problems.

Disclosure of Invention

The present application provides a skin monitoring method, apparatus, electronic device and computer readable storage medium, which can monitor the state of the skin of the face of a user for a long time, and can analyze the state of the skin of the face more comprehensively and deeply, so as to more comprehensively reflect the problems existing in the skin of the face.

The embodiment of the application can be realized as follows:

in a first aspect, the present application provides a method of skin monitoring, the method comprising:

the method comprises the steps of collecting facial skin data of a target user at a plurality of time points, wherein the facial skin data of the plurality of time points at least comprise first facial skin data collected at a first time point and second facial skin data collected at a second time point, and the second time point is later than the first time point;

aligning the first face skin data and the second face skin data according to the corresponding relation between the feature points in the first face skin data and the feature points in the second face skin data, wherein the feature points are used for representing position points in the face of the target user;

comparing the aligned first face skin data and the second face skin data, and generating a skin detection result of a second time point according to the comparison result;

and updating the preset skin file of the target user according to the skin detection result of the second time point, wherein the preset skin file of the target user comprises the skin detection result of at least one time point.

In an optional embodiment, the face skin data includes a face model and a texture map, the face model includes a plurality of feature points, and the texture map is used to indicate pixel points of the face of the target user and coordinates of the pixel points in the face model.

In an alternative embodiment, performing an alignment process on the first face skin data and the second face skin data according to a correspondence relationship between feature points in the first face skin data and feature points in the second face skin data includes:

aligning the first face model and the second face model according to the corresponding relation between the characteristic points of the first face model in the first face skin data and the characteristic points of the second face model in the second face skin data;

and according to the first face model and the second face model after the alignment processing, performing alignment processing on a first texture map in the first face skin data and a second texture map in the second face skin data.

In an alternative embodiment, comparing the aligned first face skin data and the second face skin data, and generating a skin detection result at a second time point according to the comparison result, includes:

and comparing the first texture map and the second texture map after the alignment processing, and generating a skin detection result of the second time point according to the comparison result.

In an alternative embodiment, the facial skin data further comprises a face model and a binary map, the binary map being used to indicate color patches of the target user's face and the coordinates of the color patches in the face model.

In an alternative embodiment, performing an alignment process on the first face skin data and the second face skin data according to a correspondence relationship between feature points in the first face skin data and feature points in the second face skin data includes:

aligning the first face model and the second face model according to the corresponding relation between the characteristic points of the first face model in the first face skin data and the characteristic points of the second face model in the second face skin data;

and according to the first face model and the second face model after the alignment processing, performing alignment processing on a first binary image in the first face skin data and a second binary image in the second face skin data.

In an alternative embodiment, comparing the aligned first face skin data and the second face skin data, and generating a skin detection result at a second time point according to the comparison result, includes:

and comparing the first binary image and the second binary image after the alignment processing, and generating a skin detection result of the second time point according to the comparison result.

In an alternative embodiment, comparing the first binary image and the second binary image after the alignment process, and generating a skin detection result at a second time point according to the comparison result includes:

respectively counting the number of color patches and the area of each color patch in the first binary image and the second binary image after alignment processing to generate a statistical result;

and generating a skin detection result of the second time point according to the statistical result.
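The patch statistics above, counting the number of color patches and the area of each patch, can be sketched as a connected-component pass over the binary map. The function name and the 4/8-connectivity option below are illustrative assumptions, not details taken from the application:

```python
from collections import deque

def patch_stats(binary, connectivity=4):
    """Count color patches (connected foreground regions) in a binary map
    and measure each patch's area in pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    areas = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                # Breadth-first flood fill over one patch.
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return len(areas), areas
```

The returned pair (patch count, list of patch areas) corresponds to the "statistical result" referred to above.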

In an alternative embodiment, comparing the first binary image and the second binary image after the alignment process, and generating a skin detection result at a second time point according to the comparison result includes:

determining, for each color patch in the aligned first binary image, the corresponding color patch in the aligned second binary image based on a neighbor search algorithm, so as to generate a search result;

and generating a skin detection result of the second time point according to the search result.
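The neighbor-search step can be illustrated by a brute-force nearest-neighbour match over patch centroids. Representing each patch by its centroid, and rejecting matches beyond a distance threshold, are assumptions made for this sketch:

```python
import math

def match_patches(centroids_a, centroids_b, max_dist=None):
    """For each patch centroid in the first (aligned) binary map, find the
    nearest patch centroid in the second map by brute-force nearest-neighbour
    search. Returns a list of (index_in_b, distance), or None where no match
    lies within max_dist (the patch disappeared or is new)."""
    matches = []
    for ax, ay in centroids_a:
        best, best_d = None, float("inf")
        for j, (bx, by) in enumerate(centroids_b):
            d = math.hypot(ax - bx, ay - by)
            if d < best_d:
                best, best_d = j, d
        if max_dist is not None and best_d > max_dist:
            matches.append(None)
        else:
            matches.append((best, best_d))
    return matches
```

An unmatched entry (None) would indicate a patch that vanished or newly appeared between the two time points, which is exactly the kind of change the skin detection result records.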

In an alternative embodiment, comparing the first binary image and the second binary image after the alignment process, and generating a skin detection result at a second time point according to the comparison result includes:

respectively counting the number of color patches and the area of each color patch in the first binary image and the second binary image after alignment processing to generate a statistical result;

determining, for each color patch in the aligned first binary image, the corresponding color patch in the aligned second binary image based on a neighbor search algorithm, so as to generate a search result;

and generating a skin detection result of the second time point according to the statistical result and the search result.

In a second aspect, the present application provides a skin monitoring device, the device comprising:

an acquisition module, used for acquiring facial skin data of a target user at a plurality of time points, wherein the facial skin data of the plurality of time points at least comprise first facial skin data acquired at a first time point and second facial skin data acquired at a second time point, and the second time point is later than the first time point;

the alignment module is used for aligning the first face skin data and the second face skin data according to the corresponding relation between the feature points in the first face skin data and the feature points in the second face skin data, and the feature points are used for representing position points in the face of the target user;

the comparison module is used for comparing the aligned first face skin data and the second face skin data and generating a skin detection result of a second time point according to the comparison result;

and the updating module is used for updating the preset skin file of the target user according to the skin detection result of the second time point, and the preset skin file of the target user comprises the skin detection result of at least one time point.

In a third aspect, the present application provides an electronic device comprising a processor, a storage medium, and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the method according to any one of the preceding embodiments.

In a fourth aspect, the present application provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, performing the steps of the method according to any of the preceding embodiments.

According to the skin monitoring method, apparatus, electronic device, and computer-readable storage medium provided by the present application, the aligned first facial skin data from the first time point and second facial skin data from the second time point of the target user are compared, a skin detection result for the second time point is generated from the comparison, and the target user's preset skin profile is then updated according to that result. The facial skin of the target user is thereby monitored over the long term, so that its state can be analyzed comprehensively and in depth, and the problems present in the facial skin can be reflected more fully.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.

Fig. 1 is a flowchart of a skin monitoring method according to an embodiment of the present application;

FIG. 2 is a schematic diagram of a binary map provided in accordance with an embodiment of the present application;

fig. 3 is a schematic view of a skin monitoring device according to an embodiment of the present application;

fig. 4 is a schematic view of an electronic device according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.

Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

In the description of the present application, it should be noted that terms such as "upper", "lower", "inside", and "outside", if used, indicate orientations or positional relationships based on those shown in the drawings or those in which the product of the present application is usually placed in use. They are used only for convenience and simplicity of description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application.

Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.

It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.

Before the embodiments of the present application are specifically described, an application scenario of the present application is described.

As described in the Background above, people pay increasing attention to the health of their facial skin, and the traditional skin detection method, which collects skin images under white light, polarized light, and other illumination and analyzes them with deep learning or traditional image algorithms, can only capture the state of the user's facial skin at a single moment; its analysis result is one-sided and cannot deeply and comprehensively reflect facial skin problems.

In order to solve the problem, the application provides a skin monitoring method, a skin monitoring device, an electronic device and a computer-readable storage medium, which can monitor the state of the facial skin of a target user for a long time, so as to analyze the facial skin state of the target user more comprehensively and deeply and reflect the problems existing in the facial skin of the target user more comprehensively.

Referring to fig. 1, the skin monitoring method provided in the present application includes:

s101: the method comprises the steps of collecting facial skin data of a target user at a plurality of time points, wherein the facial skin data of the plurality of time points at least comprise first facial skin data collected at a first time point and second facial skin data collected at a second time point, and the second time point is later than the first time point.

Optionally, a structured light version of a panoramic AI (Artificial Intelligence) skin tester may be used to collect facial skin data of a target user, which is beneficial to improving the accuracy and integrity of the collected facial skin data, thereby improving the accuracy of the detection and analysis results of the facial skin data.

In the embodiment of the application, the facial skin data of the target user is firstly collected at a first time point, and then the facial skin data of the target user is collected again at a second time point after the first time point, so that the state of the facial skin of the target user is monitored for a long time according to the first facial skin data collected at the first time point and the second facial skin data collected at the second time point.

In a specific embodiment, the facial skin data of the target user can be periodically acquired, so that the state of the facial skin of the target user can be tracked for a long time, and the long-term monitoring of the facial skin state of the target user is realized. The time interval of the acquisition is not limited in the application.

It should be noted that the face skin data may include data representing colors of points of the face, data representing positions of the points, and the like.

S102: and aligning the first face skin data and the second face skin data according to the corresponding relation between the characteristic points in the first face skin data and the characteristic points in the second face skin data, wherein the characteristic points are used for representing position points in the face of the target user.

Specifically, the feature points may be vertices in a face model for representing position points in the face of the target user, for example, a specific position of a nose in the face may be represented by a plurality of feature points, a specific position of an eye in the face may be represented by a plurality of feature points, and the like.

Feature points in the facial skin data of the same user generally do not change across time points. Therefore, according to the correspondence between feature points in the first facial skin data and those in the second facial skin data, the two sets of data can be aligned so that their feature points coincide. Specifically, the second facial skin data may be aligned to the first with the first as the reference, or the first may be aligned to the second with the second as the reference.
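As one possible realization of this feature-point alignment (a sketch, not the specific method prescribed by the application), the rigid transform between corresponding feature points can be estimated with the Kabsch/Procrustes algorithm:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rigid transform (R, t) that best maps the source feature
    points onto the destination feature points (Kabsch algorithm), assuming
    the i-th row of `src` corresponds to the i-th row of `dst`."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With the first data as the reference, `dst` would hold its feature points and `src` those of the second data; swapping the arguments gives the opposite reference, matching the two choices described above.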

Through the alignment processing, the change of the second face skin data relative to the first face skin data can be better compared, so that the face skin state of the target user can be more comprehensively and deeply analyzed, and the problems of the face skin of the target user can be more comprehensively reflected.

S103: and comparing the aligned first face skin data and the second face skin data, and generating a skin detection result of a second time point according to the comparison result.

The skin detection result at the second time point represents the change of the second facial skin data at the second time point relative to the first facial skin data at the first time point. By comparing, after alignment, how the target user's facial skin data changed between the two time points, the facial skin state of the target user can be analyzed more comprehensively and deeply, and the problems of the facial skin can be reflected more fully.

S104: and updating the preset skin file of the target user according to the skin detection result of the second time point, wherein the preset skin file of the target user comprises the skin detection result of at least one time point.

Optionally, a preset skin profile of the target user may be pre-established according to the first collected facial skin data of the target user and the corresponding skin detection result.

It should be noted that the preset skin profile of the target user may include skin detection results of the target user in multiple dimensions, such as crow's feet, periorbital lines, forehead lines, nasolabial folds, marionette lines, glabellar lines, tear troughs, eye bags, pores, keratin, acne, pigmentation spots, sensitivity, dark circles, and the like; this is not limited in the present application.

The skin monitoring method provided by the present application compares the aligned first facial skin data from the first time point with the second facial skin data from the second time point, generates a skin detection result for the second time point from the comparison, and updates the target user's preset skin profile according to that result, so as to monitor the target user's facial skin over the long term. The facial skin state of the target user can thus be analyzed comprehensively and in depth, and the problems of the facial skin can be reflected more fully.

In an optional embodiment, the face skin data includes a face model and a texture map, the face model includes a plurality of feature points, and the texture map is used to indicate pixel points of the face of the target user and coordinates of the pixel points in the face model.

In particular, the face model may be defined in a Cartesian coordinate system, i.e., in three-dimensional space, and composed of N feature points, each of which has a determined three-dimensional coordinate.

In addition, the texture map is used to indicate pixel points of the target user's face and their coordinates in the face model. Optionally, the texture map may be composed of M pixel points, each storing color information. Position points in the texture map have a mapping relationship with position points in the face model: a pixel at a given position in the texture map carries the facial color information of the corresponding position on the face model. In other words, the texture map indicates the colors of different position points of the target user's face and of the corresponding position points in the face model.

Optionally, the texture map may be in a planar coordinate system, that is, in a two-dimensional space, each pixel point in the texture map has a determined two-dimensional coordinate, and the three-dimensional coordinates of the feature point in the face model may be transformed to the two-dimensional coordinates of the pixel point in the texture map by mapping.

Specifically, assume the three-dimensional coordinate of a feature point in the face model is (x, y, z) and the two-dimensional coordinate of its mapped pixel point in the texture map is (u, v), where the width of the texture map is W and the height is H; the mapping transformation converts (x, y, z) into (u, v) in terms of W and H.
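One common convention for mapping normalized texture coordinates (u, v) to pixel coordinates of a W×H texture is sketched below; this is an illustrative assumption, not necessarily the exact formula used by the application:

```python
def uv_to_pixel(u, v, width, height):
    """Map normalized texture coordinates (u, v) in [0, 1] to integer pixel
    coordinates (row, col), using the common convention that v grows upward
    in texture space while rows grow downward in image space."""
    col = round(u * (width - 1))
    row = round((1.0 - v) * (height - 1))
    return row, col
```

The inverse direction (pixel coordinates back to (u, v)) follows by solving the same two expressions for u and v.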

in an alternative embodiment, performing an alignment process on the first face skin data and the second face skin data according to a correspondence relationship between feature points in the first face skin data and feature points in the second face skin data includes: aligning the first face model and the second face model according to the corresponding relation between the characteristic points of the first face model in the first face skin data and the characteristic points of the second face model in the second face skin data; and according to the first face model and the second face model after the alignment processing, performing alignment processing on a first texture map in the first face skin data and a second texture map in the second face skin data.

Specifically, the alignment of the first face model and the second face model can be realized with a clustered ICP (Iterative Closest Point) algorithm. ICP is a data-registration algorithm for free-form surfaces based on closest-point search; the clustered ICP algorithm additionally incorporates color information into the registration on the basis of ICP. The second face model may be aligned to the first with the first as the reference, or the first to the second with the second as the reference; this is not limited in the present application.

Taking the first face model as the reference and aligning the second face model to it, let the first face model be M1 and the second face model be M2. The rigid transformation T from the second face model to the first face model can be computed with the clustered ICP algorithm, and applying it aligns the second face model with the first, giving the aligned second face model M2' = T·M2.
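Given matched feature points, the rigid transformation T can be estimated in closed form (the Kabsch/Procrustes solution, which ICP applies iteratively between correspondence updates). A minimal numpy sketch, assuming the point correspondences are already known; the clustered ICP of the patent would additionally use color information when matching points:

```python
import numpy as np

def rigid_transform(P, Q):
    """Find rotation R and translation t with R @ P[i] + t ≈ Q[i].

    P, Q: (N, 3) arrays of corresponding feature points (here, M2 and M1).
    Closed-form Kabsch solution; an ICP variant would re-estimate
    correspondences and repeat.
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Qc - R @ Pc
    return R, t

# Recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
M2 = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
M1 = M2 @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(M2, M1)
aligned_M2 = M2 @ R.T + t                    # M2' = T(M2)
print(np.abs(aligned_M2 - M1).max() < 1e-9)  # -> True
```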

Based on the rigid transformation between the aligned second face model and the original second face model, together with the mapping transformation between the feature points of the original second face model and the pixel points of the second texture map, the coordinates of the pixel points in the second texture map can be transformed into coordinates corresponding to the feature points of the aligned second face model, thereby aligning the first texture map in the first face skin data with the second texture map in the second face skin data.

Aligning the first face model to the second face model, with the second face model as reference, follows the same principle as above and is not repeated here.

In an alternative embodiment, comparing the aligned first face skin data and second face skin data and generating a skin detection result for the second time point according to the comparison result includes: comparing the aligned first texture map and second texture map, and generating the skin detection result for the second time point according to the comparison result.

Specifically, the change in the target user's skin color can be obtained by comparing the color differences between the aligned first texture map and second texture map, i.e., the target user's skin color is tracked; the color change around the eyes in the two aligned texture maps can likewise be compared to obtain the change in the target user's dark circles, i.e., the dark circles are tracked. This application is not limited in this respect.

In an alternative embodiment, comparing the aligned first texture map and second texture map and generating a skin detection result for the second time point according to the comparison result includes: separately computing the mean color of all color patches in the aligned first texture map and in the aligned second texture map to generate a statistical result; and generating the skin detection result for the second time point according to the statistical result.

Assume the mean color of all color patches in the aligned first texture map is c_ave1 and the mean color of all color patches in the aligned second texture map is c_ave2. The skin detection result for the second time point then includes the change in the patch color mean, measured by Δc_ave = c_ave2 − c_ave1; if Δc_ave is less than zero, the color mean of the patches of the target user's facial skin at the second time point is lower than at the first time point.

In an alternative embodiment, comparing the aligned first texture map and second texture map and generating a skin detection result for the second time point according to the comparison result includes: separately computing the mean color of all color patches in the aligned first texture map and in the aligned second texture map to generate a statistical result; determining, with a neighbor search algorithm, the color patch in the aligned second texture map that corresponds to each color patch in the aligned first texture map to generate a search result; and generating the skin detection result for the second time point according to the statistical result and the search result.

Suppose a color patch S1 in the aligned first texture map has mean color c1 and its corresponding color patch in the aligned second texture map has mean color c2. The skin detection result for the second time point then includes the change in the patch's mean color, measured by Δc = c2 − c1; if Δc is less than zero the patch has become lighter, otherwise it has become darker.
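As a concrete sketch of this per-patch comparison (array shapes and values are assumptions for illustration): given two aligned texture maps and a boolean mask for a patch, the mean colors c1 and c2 and their difference Δc can be computed as:

```python
import numpy as np

def patch_color_mean(texture, mask):
    """Mean RGB color over the pixels of one patch.

    texture: (H, W, 3) float array; mask: (H, W) bool array.
    """
    return texture[mask].mean(axis=0)

# Toy example with a 2x2 patch whose color value changes by 40.
H, W = 4, 4
mask = np.zeros((H, W), dtype=bool)
mask[1:3, 1:3] = True                       # the patch S1
tex1 = np.full((H, W, 3), 80.0)             # texture map at time point 1
tex2 = np.full((H, W, 3), 120.0)            # texture map at time point 2

c1 = patch_color_mean(tex1, mask)
c2 = patch_color_mean(tex2, mask)
delta_c = c2 - c1                           # Δc = c2 - c1
print(delta_c)                              # -> [40. 40. 40.]
```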

Referring to fig. 2, in an alternative embodiment, the facial skin data further includes a face model and a binary map, where the binary map indicates the color patches of the target user's face and the coordinates of those patches in the face model.

The binary image indicates the color patches of the target user's face and their coordinates in the face model. Optionally, the binary image is composed of pixel points: the pixels inside each color patch may be set to white and the pixels outside the patches to black. Of course, other colors may be used, as long as the pixels inside a patch are distinguishable in color from those outside; this application does not limit the choice.

Optionally, the binary image lies in a planar coordinate system, i.e., a two-dimensional space: each color patch in the binary image, and every pixel point within it, has a definite two-dimensional coordinate, and the three-dimensional coordinates of feature points in the face model can be transformed to these two-dimensional coordinates by mapping. The mapping transformation is the same as the one between the feature points of the face model and the pixel points of the texture map described above, and is not repeated here.

In an alternative embodiment, performing alignment processing on the first face skin data and the second face skin data according to the correspondence between feature points in the first face skin data and feature points in the second face skin data includes: aligning the first face model and the second face model according to the correspondence between the feature points of the first face model in the first face skin data and the feature points of the second face model in the second face skin data; and, according to the aligned first face model and second face model, aligning a first binary image in the first face skin data with a second binary image in the second face skin data.

Specifically, the alignment of the first face model and the second face model can be realized with the clustered ICP (Iterative Closest Point) algorithm described above, and is not repeated here.

Based on the rigid transformation between the aligned second face model and the original second face model, together with the mapping transformation between the feature points of the original second face model and the color patches of the second binary image, the coordinates of the pixel points in the second binary image can be transformed into coordinates corresponding to the feature points of the aligned second face model, thereby aligning the first binary image in the first face skin data with the second binary image in the second face skin data.

Aligning the first face model to the second face model, with the second face model as reference, follows the same principle as above and is not repeated here.

In an alternative embodiment, comparing the aligned first face skin data and second face skin data and generating a skin detection result for the second time point according to the comparison result includes: comparing the aligned first binary image and second binary image, and generating the skin detection result for the second time point according to the comparison result.

Specifically, the change in the target user's color patches can be obtained by comparing the aligned first binary image and second binary image, i.e., the target user's color patches are tracked.

It should be noted that, in the embodiments of the present application, tracking and continuous monitoring may also be performed on the target user's crow's feet, periorbital lines, forehead lines, nasolabial folds, marionette lines, glabellar lines, tear troughs, eye bags, pores, keratin, acne, pigmented spots, skin sensitivity, dark circles, and the like. The specific monitoring method is similar to the monitoring method for color patches and is not repeated here.

In addition, in the embodiments of the present application, the skin detection result for the second time point may also be generated jointly from the comparison of the aligned first and second texture maps and the comparison of the aligned first and second binary images. For example, the position information of a color patch is obtained by comparing the aligned first binary image and second binary image, and the color change of the patch at that position is then obtained from the position information together with the aligned first texture map and second texture map. Of course, this is merely an example and does not limit the present application.

In an alternative embodiment, comparing the aligned first binary image and second binary image and generating a skin detection result for the second time point according to the comparison result includes: separately counting the number of color patches and the area of each color patch in the aligned first binary image and in the aligned second binary image to generate a statistical result; and generating the skin detection result for the second time point according to the statistical result.

Specifically, assume the number of color patches in the aligned first binary image is n1 and the number in the aligned second binary image is n2. The skin detection result for the second time point then includes the change in the number of patches, measured by Δn = n2 − n1; if Δn is less than zero, the target user's face has fewer skin patches at the second time point than at the first time point.

Assume the total area of all color patches in the aligned first binary image is a_sum1 and the total area of all color patches in the aligned second binary image is a_sum2. The skin detection result for the second time point then includes the change in total patch area, measured by Δa_sum = a_sum2 − a_sum1; if Δa_sum is less than zero, the total area of color patches on the target user's facial skin at the second time point is smaller than at the first time point.
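The patch count and per-patch areas can be extracted from a binary image with a connected-component pass. A minimal flood-fill sketch, assuming 4-connectivity and a 0/1 pixel array (function name and toy data are illustrative, not from the patent):

```python
import numpy as np

def patch_areas(binary):
    """Return the pixel area of each white (1) connected component.

    binary: (H, W) array of 0/1 values; 4-connected flood fill.
    """
    visited = np.zeros_like(binary, dtype=bool)
    areas = []
    H, W = binary.shape
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] != 1 or visited[sy, sx]:
                continue
            stack, area = [(sy, sx)], 0     # start a new component
            visited[sy, sx] = True
            while stack:
                y, x = stack.pop()
                area += 1
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < H and 0 <= nx < W and \
                       binary[ny, nx] == 1 and not visited[ny, nx]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
            areas.append(area)
    return areas

# Two patches at time point 1, one at time point 2.
b1 = np.array([[1, 1, 0, 0],
               [1, 1, 0, 1],
               [0, 0, 0, 1]])
b2 = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0]])
a1, a2 = patch_areas(b1), patch_areas(b2)
print(len(a1), len(a2), sum(a2) - sum(a1))  # -> 2 1 -3
```

Here Δn = len(a2) − len(a1) and Δa_sum = sum(a2) − sum(a1), matching the quantities above.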

In an alternative embodiment, comparing the aligned first binary image and second binary image and generating a skin detection result for the second time point according to the comparison result includes: determining, with a neighbor search algorithm, the color patch in the aligned second binary image that corresponds to each color patch in the aligned first binary image to generate a search result; and generating the skin detection result for the second time point according to the search result.

Specifically, the neighbor search algorithm may proceed as follows. For each pixel of a color patch S1 in the aligned first binary image, look up the color at the same pixel position in the aligned second binary image: if it is white, the pixel belongs to the patch S2 in the second binary image that corresponds to S1 and is added to the pixel set of S2; if every such pixel is black, the patch is considered to have disappeared and no further processing is performed. Next, compute the center of gravity of the pixels of S2 and the greatest distance l_max from that center. Then, taking each pixel p belonging to S2 as the center of a circle, search the pixels within radius r = l_max: white pixels found are added to S2, black pixels are not. Repeat this step until all pixels of S2 have been found. With this neighbor search algorithm the search range grows with each round, which improves search efficiency and gathers all the pixels belonging to S2 more quickly.
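The neighbor search described above can be sketched as follows. This is an illustrative interpretation, not the patent's exact procedure; in particular, the radius is clamped to at least one pixel so a single-pixel seed can still grow, which is an added assumption:

```python
import numpy as np

def match_patch(s1_pixels, binary2):
    """Find the pixels of the patch S2 in binary2 corresponding to S1.

    s1_pixels: set of (y, x) pixels of patch S1 in the aligned first map.
    binary2:   (H, W) 0/1 array, the aligned second binary image.
    Seeds S2 from same-position white pixels, then grows it by searching
    radius l_max around each accepted pixel until nothing new is found.
    """
    H, W = binary2.shape
    s2 = {(y, x) for (y, x) in s1_pixels if binary2[y, x] == 1}
    if not s2:
        return s2                       # the patch has disappeared
    pts = np.array(sorted(s2), dtype=float)
    centroid = pts.mean(axis=0)
    l_max = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).max()
    r = max(l_max, 1.0)                 # assumption: at least 1 pixel
    frontier = set(s2)
    while frontier:                     # repeat until S2 stops growing
        new = set()
        for (y, x) in frontier:
            y0, y1 = int(max(0, y - r)), int(min(H - 1, y + r))
            x0, x1 = int(max(0, x - r)), int(min(W - 1, x + r))
            for ny in range(y0, y1 + 1):
                for nx in range(x0, x1 + 1):
                    if (ny - y) ** 2 + (nx - x) ** 2 <= r * r and \
                       binary2[ny, nx] == 1 and (ny, nx) not in s2:
                        new.add((ny, nx))
        s2 |= new
        frontier = new
    return s2

# The patch has shifted one pixel to the right between the time points.
b2 = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
s1 = {(0, 0), (0, 1), (1, 0), (1, 1)}   # patch position at time point 1
s2 = match_patch(s1, b2)
print(len(s1), len(s2))                 # -> 4 4
```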

Optionally, the skin detection result further includes the change of each color patch in the target user's facial skin: according to the search result, the change of each patch can be obtained by comparing each color patch in the aligned second binary image with its corresponding patch in the aligned first binary image.

In an alternative embodiment, comparing the aligned first binary image and second binary image and generating a skin detection result for the second time point according to the comparison result includes: separately counting the number of color patches and the area of each color patch in the aligned first binary image and in the aligned second binary image to generate a statistical result; determining, with a neighbor search algorithm, the color patch in the aligned second binary image that corresponds to each color patch in the aligned first binary image to generate a search result; and generating the skin detection result for the second time point according to the statistical result and the search result.

Specifically, suppose a color patch S1 in the aligned first binary image has area a1 and its corresponding patch in the aligned second binary image has area a2. The skin detection result for the second time point then includes the change in the patch's area, measured by Δa = a2 − a1; if Δa is less than zero the patch has shrunk, otherwise it has grown.

In the embodiments of the present application, the alignment and comparison of the first and second binary images are similar to the alignment and comparison of the first and second texture maps; details omitted in the description of one can be found in the description of the other, and are not repeated here.

Referring to fig. 3, the present application provides a skin monitoring device 30, comprising:

the collecting module 301 is configured to collect facial skin data of a target user at multiple time points, where the facial skin data at the multiple time points at least include first facial skin data collected at a first time point and second facial skin data collected at a second time point, where the second time point is later than the first time point.

An alignment module 302, configured to perform alignment processing on the first face skin data and the second face skin data according to a correspondence between feature points in the first face skin data and feature points in the second face skin data, where the feature points are used to represent position points in a face of a target user.

The comparing module 303 is configured to compare the aligned first face skin data and the second face skin data, and generate a skin detection result at a second time point according to the comparison result.

An updating module 304, configured to update a preset skin profile of the target user according to the skin detection result at the second time point, where the preset skin profile of the target user includes the skin detection result at least one time point.

In an optional embodiment, the face skin data includes a face model and a texture map, the face model includes a plurality of feature points, and the texture map is used to indicate pixel points of the face of the target user and coordinates of the pixel points in the face model.

In an optional embodiment, the alignment module 302 is specifically configured to perform alignment processing on the first face model and the second face model according to a correspondence between feature points of the first face model in the first face skin data and feature points of the second face model in the second face skin data; and according to the first face model and the second face model after the alignment processing, performing alignment processing on a first texture map in the first face skin data and a second texture map in the second face skin data.

In an optional embodiment, the alignment module 302 is further specifically configured to compare the first texture map and the second texture map after the alignment processing, and generate a skin detection result at the second time point according to the comparison result.

In an alternative embodiment, the facial skin data further comprises a face model and a binary map indicating the color patches of the target user's face and their coordinates in the face model.

In an optional embodiment, the alignment module 302 is specifically configured to perform alignment processing on the first face model and the second face model according to a correspondence between feature points of the first face model in the first face skin data and feature points of the second face model in the second face skin data; and according to the first face model and the second face model after the alignment processing, performing alignment processing on a first binary image in the first face skin data and a second binary image in the second face skin data.

In an optional embodiment, the alignment module 302 is further specifically configured to compare the first binary map and the second binary map after the alignment processing, and generate a skin detection result at the second time point according to the comparison result.

In an optional embodiment, the alignment module 302 is further specifically configured to count the number of color patches and the areas of the color patches in the first binary image and the second binary image after the alignment processing, respectively, and generate a statistical result; and generating a skin detection result of the second time point according to the statistical result.

In an optional embodiment, the alignment module 302 is further specifically configured to determine, based on a neighbor search algorithm, a color spot block corresponding to each color spot block in the aligned first binary image in the aligned second binary image, and generate a search result; and generating a skin detection result of the second time point according to the search result.

In an optional embodiment, the alignment module 302 is further specifically configured to count the number of color patches and the areas of the color patches in the first binary image and the second binary image after the alignment processing, respectively, and generate a statistical result; determining a color spot block corresponding to each color spot block in the aligned first binary image in the aligned second binary image based on a neighbor search algorithm to generate a search result; and generating a skin detection result of the second time point according to the statistical result and the search result.

Referring to fig. 4, the present application provides an electronic device 40, including: a processor 401, a storage medium 402 and a bus 403, wherein the storage medium 402 stores machine-readable instructions executable by the processor 401, when the electronic device 40 is operated, the processor 401 communicates with the storage medium 402 via the bus 403, and the processor 401 executes the machine-readable instructions to perform the steps of any one of the methods according to the foregoing embodiments.

The present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of the preceding embodiments.

According to the skin monitoring method, apparatus, electronic device, and computer-readable storage medium provided by this application, the aligned first face skin data from the first time point and second face skin data from the second time point of the target user are compared, a skin detection result for the second time point is generated from the comparison, and the target user's preset skin profile is then updated with that result. The state of the target user's facial skin is thus monitored over the long term, so that it can be analyzed comprehensively and in depth and the problems present in the target user's facial skin can be reflected comprehensively. In addition, the skin monitoring method provided by this application can be applied in the field of skin health: by tracking long-term changes in a user's state across dimensions such as pores, wrinkles, pigmented spots, and acne, a skin health profile of the user can be established, and skin treatment results and progress can be tracked continuously and effectively, guiding the development of more accurate treatment plans and providing a powerful visual presentation that makes communication easier.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.

The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to perform some steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
