Method for displaying skin details in augmented reality mode and electronic equipment

Document No.: 154357 | Publication date: 2021-10-26

Reading note: This technology, "Method for displaying skin details in augmented reality mode and electronic equipment", was designed and created by 周一丹, 卢曰万, 董辰 and 郜文美 on 2020-04-26. Abstract: The application provides a method for displaying skin details in an augmented reality manner and an electronic device, wherein the method is applied to the electronic device and comprises: acquiring a face image; determining an image feature at a specific location in the face image; acquiring a skin state corresponding to the image feature at the specific location; activating enhancement content corresponding to the skin state according to a preset operation on the specific location; and displaying the enhancement content superimposed on the face image. With the method of the embodiments of the application, the enhancement content can be combined with the face image, skin details can be magnified, and the internal structures of various skin problems can be displayed, so that the user understands the root cause of a skin problem, learns skin-related knowledge more deeply, and finds the application more engaging to use.

1. A method of displaying skin details, comprising:

acquiring, by an electronic device, a face image;

determining, by the electronic device, an image feature at a specific location in the face image;

acquiring, by the electronic device, a skin state corresponding to the image feature at the specific location; and

activating, by the electronic device according to a preset operation on the specific location, enhancement content corresponding to the skin state, and displaying the enhancement content.

2. The method of claim 1, wherein determining, by the electronic device, the image feature at the specific location in the face image comprises:

determining, by the electronic device, a region of interest in the face image, and determining the image feature at the specific location within the region of interest.

3. The method of claim 2, wherein determining the region of interest in the face image comprises:

detecting key points of a face in the face image; and

dividing the region of interest according to the key points.

4. The method of claim 3, wherein the key points comprise positions of a facial contour and of facial feature contours of the face in the face image, and

dividing the region of interest according to the key points comprises obtaining the region of interest by removing, from the face image, the regions where the facial feature contours are located.

5. The method of claim 1 or 2, wherein determining, by the electronic device, the image feature at the specific location in the face image comprises:

determining a feature point in the face image and taking the feature point as the specific location,

wherein an image feature of the feature point differs from the image features of the region surrounding the feature point.

6. The method of claim 1, wherein the electronic device activating the enhancement content corresponding to the skin state according to the preset operation on the specific location comprises:

determining that an operation on the display screen is a zoom-in operation on the specific location;

acquiring a magnification parameter of the specific location in the face image; and

activating, when the magnification parameter reaches a preset threshold, the enhancement content corresponding to the operation and the skin state.

7. The method of claim 1, wherein the electronic device activating the enhancement content corresponding to the skin state according to the preset operation on the specific location comprises:

determining that the operation being performed is a preset tap operation on the specific location; and

activating the enhancement content corresponding to the operation and the skin state.

8. The method of claim 1, wherein the electronic device activating the enhancement content corresponding to the skin state according to the preset operation on the specific location comprises:

determining that the operation is a first operation on the specific location; and

activating the enhancement content corresponding to the operation and the skin state.

9. The method of claim 7 or 8, further comprising:

before activating the enhancement content corresponding to the operation and the skin state, determining whether a magnification factor of the specific location in the face image is greater than or equal to a first threshold, and if so, activating the enhancement content.

10. The method of claim 1, wherein the enhancement content is in the form of an image, text, video, or audio, or a combination of at least two of these forms.

11. The method of claim 10, wherein the enhancement content comprises: an internal structure image of the subcutaneous tissue beneath the dermis of the face corresponding to the skin state, a formation principle corresponding to the skin state, and a care suggestion for the skin state.

12. An electronic device, comprising:

a collector configured to acquire a face image; and

a processor configured to determine an image feature at a specific location in the face image,

acquire a skin state corresponding to the image feature at the specific location, and,

in response to a preset operation on the specific location, activate enhancement content corresponding to the skin state and retrieve the enhancement content for display on a display screen.

13. The device of claim 12, wherein the processor is specifically configured to: determine a region of interest in the face image, and determine the image feature at the specific location within the region of interest.

14. The device of claim 13, wherein the processor is further specifically configured to:

detect key points of a face in the face image; and

divide the region of interest according to the key points.

15. The device of claim 12 or 13, wherein the processor is further configured to:

determine a feature point in the face image and take the feature point as the specific location,

wherein an image feature of the feature point differs from the image features of the region surrounding the feature point.

16. The device of claim 12, wherein the processor is specifically configured to:

determine that an operation on the display screen is a zoom-in operation on the specific location;

acquire a magnification parameter of the specific location in the face image; and

activate, when the magnification parameter reaches a preset threshold, the enhancement content corresponding to the operation and the skin state.

17. The device of claim 12, wherein the processor is specifically configured to:

determine that the operation being performed is a preset tap operation on the specific location; and

activate the enhancement content corresponding to the operation and the skin state.

18. The device of claim 12, wherein the processor is specifically configured to:

determine that the operation is a first operation on the specific location; and

activate the enhancement content corresponding to the operation and the skin state.

19. The device of claim 17 or 18, wherein the processor is further configured to:

determine, before activating the enhancement content corresponding to the operation and the skin state, whether a magnification factor of the specific location in the face image is greater than or equal to a first threshold, and if so, activate the enhancement content.

20. An electronic device, comprising a processor and a memory connected to the processor,

wherein the memory stores instructions, and

the processor is configured to read the instructions stored in the memory to perform the method of any one of claims 1-11.

21. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the method of any one of claims 1-11.

Technical Field

The present application relates to the field of augmented reality display technologies, and in particular, to a method and an apparatus for augmented reality display of skin details, an electronic device, and a computer-readable storage medium.

Background

Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds corresponding imagery; its goal is to overlay a virtual world on the real world on a screen and allow interaction between the two. As the computing power of portable electronic products improves, augmented reality is applied more and more widely.

There are currently many skin analysis applications, and even specialized skin analysis instruments, and such skin analysis work also uses augmented reality techniques to present skin surface details to the user. Beyond a picture of the skin surface details, however, users need additional information, such as the cause of a skin problem and its solution.

Disclosure of Invention

In view of this, the present application provides a method, an apparatus, an electronic device and a computer-readable storage medium for the augmented reality display of skin details, which can display in augmented reality a subcutaneous structure model corresponding to a skin problem, so that the user can intuitively see the subcutaneous tissue structure behind the problem. This makes the experience more engaging and satisfies the user's desire to explore the subcutaneous tissue structure corresponding to the skin problem.

Some embodiments of the present application provide a method of magnifying an augmented reality display of skin details. The present application is described below in terms of several aspects, embodiments and advantages of which are mutually referenced.

In a first aspect, the present application provides a method for the magnified augmented reality display of skin details, applied to an electronic device that includes a display screen. The method comprises: the electronic device acquires a face image to be analyzed and displays it on the display screen; for example, the face image may be captured with the camera of a mobile terminal and shown on the display screen of a smartphone. The electronic device determines an image feature at a specific location in the face image, where the image feature may be a color feature, a pattern contour, a color intensity, or a combination of two or more of these. The electronic device acquires a skin state corresponding to the image feature at the specific location, where the skin state may be a blackhead, acne, a nevus, and the like. The electronic device activates enhancement content corresponding to the skin state according to a preset operation on the specific location, so as to display the enhancement content on the display screen. The enhancement content is pre-stored and comprises virtual content, or a combination of virtual and real content. Displaying the skin state at the preset location overlaid with the enhancement content on the display screen makes the application more engaging and improves the user experience.
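The flow described above can be sketched as a minimal pipeline. The feature names, the feature-to-state mapping, and the return values below are illustrative stand-ins and not part of the application:

```python
def skin_state_for(feature):
    """Toy mapping from an image feature to a skin state (illustrative values)."""
    return {"dark_spot": "blackhead", "red_bump": "acne"}.get(feature, "normal")

def display_skin_details(feature, operation):
    """Sketch of the claimed flow: the enhancement content is overlaid only
    when a preset operation targets the specific location."""
    state = skin_state_for(feature)
    if operation == "preset":
        return f"overlay:{state}"   # face image + enhancement content
    return "face_image_only"        # otherwise show the plain face image
```

A real implementation would replace the toy mapping with the detection module's classifier and the string results with rendering calls.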

In a possible implementation of the first aspect, determining, by the electronic device, the image feature at the specific location in the face image includes: determining a region of interest in the face image and determining the image feature at the specific location within the region of interest. Restricting the analysis to the region of interest improves the accuracy of the image feature analysis.

In a possible implementation of the first aspect, determining the region of interest in the face image includes: detecting key points of the face in the face image, the key points including the positions of the facial contour and of the contours of the eyebrows, nose, eyes, and mouth. The basic contour of the face can be determined from the detected key points, and the region of interest can be divided according to the key points.
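As a sketch of this division, one can mark the face region and carve out the facial-feature regions. The bounding boxes below are illustrative stand-ins for the polygon contours a real landmark detector would provide:

```python
import numpy as np

def region_of_interest(h, w, face_box, feature_boxes):
    """Binary ROI mask: the face bounding box minus the facial-feature boxes.

    Boxes are (top, left, bottom, right) in pixel coordinates. A real system
    would use polygon contours from a landmark detector; axis-aligned boxes
    keep the sketch simple.
    """
    mask = np.zeros((h, w), dtype=bool)
    t, l, b, r = face_box
    mask[t:b, l:r] = True                     # skin candidates: whole face
    for t, l, b, r in feature_boxes:          # carve out eyes, brows, nose, mouth
        mask[t:b, l:r] = False
    return mask

# hypothetical 100x100 image with a face box and two eye boxes
roi = region_of_interest(100, 100, (10, 10, 90, 90),
                         [(30, 20, 40, 45),   # e.g. left eye region
                          (30, 55, 40, 80)])  # e.g. right eye region
```

Feature analysis then runs only on pixels where the mask is true.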

In a possible implementation of the first aspect, the key points include the positions of the facial contour and of the facial feature contours in the face image, and dividing the region of interest according to the key points includes removing, from the face image, the regions where the facial feature contours are located.

In a possible implementation of the first aspect, determining the image feature at the specific location in the face image includes: determining a feature point in the face image and taking the feature point as the specific location, where the image feature of the feature point differs from the image features of the region surrounding it.
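One simple way to realize "a feature point whose image features differ from its surroundings" is to compare the mean intensity of a small patch against the ring around it. The window sizes and threshold below are illustrative assumptions, and the sketch ignores image borders:

```python
import numpy as np

def is_feature_point(img, y, x, inner=2, outer=6, thresh=30):
    """Flag (y, x) as a feature point when its small neighbourhood differs
    markedly in mean intensity from the surrounding ring."""
    patch = img[y - inner:y + inner + 1, x - inner:x + inner + 1]
    ring = img[y - outer:y + outer + 1, x - outer:x + outer + 1].astype(float)
    inner_mean = patch.mean()
    # mean of the ring excluding the inner patch
    ring_mean = (ring.sum() - patch.sum()) / (ring.size - patch.size)
    return abs(inner_mean - ring_mean) >= thresh

img = np.full((32, 32), 180, dtype=np.uint8)  # light, uniform skin tone
img[14:18, 14:18] = 60                        # dark spot, e.g. a blackhead
```

A production detector would of course use color and texture features rather than gray-level means, as the text notes.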

In a possible implementation of the first aspect, the skin state includes normal skin and problem skin, and the problem skin includes at least one type of problem skin.

In a possible implementation of the first aspect, the types of problem skin include at least one of acne, a nevus, or a blackhead.

In a possible implementation of the first aspect, each type of problem skin corresponds to at least one grade.

In a possible implementation of the first aspect, activating the enhancement content corresponding to the skin state according to the preset operation on the specific location includes: determining that the operation on the display screen is a zoom-in operation on the specific location; acquiring a magnification parameter of the specific location in the face image; and activating the enhancement content corresponding to the operation and the skin state when the magnification parameter reaches a preset threshold. This keeps the interaction fluent and improves the user experience.
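The gating described in this implementation can be sketched as a simple check; the 4x threshold is an illustrative value, not one stated in the application:

```python
def should_activate(operation, magnification, threshold=4.0):
    """Gate for the enhancement content: a zoom-in on the specific location
    whose accumulated magnification has reached the preset threshold."""
    return operation == "zoom_in" and magnification >= threshold

# the content stays hidden while the user is still below the threshold
states = [should_activate("zoom_in", m) for m in (1.0, 2.5, 4.0, 6.0)]
```

Any other gesture, however large the magnification, leaves the content inactive under this gate.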

In a possible implementation of the first aspect, activating the enhancement content corresponding to the skin state according to the preset operation on the specific location includes: determining that the operation being performed is a preset tap operation on the specific location, and activating the enhancement content corresponding to the operation and the skin state. The user can tap the skin area they want to examine, which makes the application more comfortable to use.

In a possible implementation of the first aspect, activating the enhancement content corresponding to the skin state according to the preset operation on the specific location includes: determining that the operation is a first operation on the specific location, and activating the enhancement content corresponding to the operation and the skin state. This keeps the interaction fluent and improves the user experience.

In a possible implementation of the first aspect, before the enhancement content corresponding to the operation and the skin state is activated, it is determined whether the magnification parameter of the specific location in the face image is greater than or equal to a first threshold; if so, the enhancement content is activated.

In a possible implementation of the first aspect, the enhancement content comprises virtual content, or a combination of virtual and real content.

In a possible implementation of the first aspect, the enhancement content is in the form of an image, text, video, or audio, or a combination of at least two of these forms.

In a possible implementation of the first aspect, the enhancement content includes at least one of: an internal structure image of the subcutaneous tissue beneath the dermis of the face corresponding to the skin state, a formation principle corresponding to the skin state, and a care suggestion for the skin state. The user can care for their skin according to the suggestion, further improving the user experience.

In a second aspect, the present application provides an apparatus for displaying skin details in an augmented reality manner, comprising:

an acquisition module configured to acquire a face image to be analyzed;

a detection module configured to determine an image feature at a specific location in the face image and to acquire a skin state corresponding to the image feature at the specific location; and

a processing module configured to, in response to a preset operation on the specific location, activate enhancement content corresponding to the skin state and retrieve the enhancement content for display.

With the apparatus for displaying skin details in an augmented reality manner according to the embodiments of the present application, enhancement content can be combined with real content (the face image), skin details can be magnified, and the internal structures of various skin problems can be displayed, so that the user understands the root cause of a skin problem, learns skin-related knowledge more deeply, and finds the experience more engaging.

In a possible implementation of the second aspect, the detection module is specifically configured to determine a region of interest in the face image and to determine the image feature at the specific location within the region of interest, which improves the accuracy of the image feature analysis.

In a possible implementation of the second aspect, the detection module is further specifically configured to detect key points of the face in the face image, the key points including the positions of the facial contour and of the contours of the eyebrows, nose, eyes, and mouth. The basic contour of the face can be determined from the detected key points, and the region of interest can be divided according to the key points.

In a possible implementation of the second aspect, the key points include the positions of the facial contour and of the facial feature contours in the face image, and dividing the region of interest according to the key points includes removing, from the face image, the regions where the facial feature contours are located.

In a possible implementation of the second aspect, the detection module is further configured to determine a feature point in the face image and take the feature point as the specific location, where the image feature of the feature point differs from the image features of the region surrounding it.

In a possible implementation of the second aspect, the skin state includes normal skin and problem skin, and the problem skin includes at least one type of problem skin.

In a possible implementation of the second aspect, the types of problem skin include at least one of acne, a nevus, or a blackhead.

In a possible implementation of the second aspect, each type of problem skin corresponds to at least one grade.

In a possible implementation of the second aspect, the processing module is specifically configured to: determine that the operation on the display screen is a zoom-in operation on the specific location; acquire a magnification parameter of the specific location in the face image; and activate the enhancement content corresponding to the operation and the skin state when the magnification parameter reaches a preset threshold. This keeps the interaction fluent and improves the user experience.

In a possible implementation of the second aspect, the processing module is specifically configured to determine that the operation being performed is a preset tap operation on the specific location and to activate the enhancement content corresponding to the operation and the skin state. The user can tap the skin area they want to examine, which makes the application more comfortable to use.

In a possible implementation of the second aspect, the processing module is specifically configured to determine that the operation is a sliding operation along a preset track on the specific location and to activate the enhancement content corresponding to the operation and the skin state. This keeps the interaction fluent and improves the user experience.

In a possible implementation of the second aspect, before the enhancement content corresponding to the operation and the skin state is activated, it is determined whether the magnification parameter of the specific location in the face image is greater than or equal to a first threshold; if so, the enhancement content is activated.

In a possible implementation of the second aspect, the enhancement content comprises virtual content, or a combination of virtual and real content.

In a possible implementation of the second aspect, the enhancement content is in the form of an image, text, video, or audio, or a combination of at least two of these forms.

In a possible implementation of the second aspect, the enhancement content includes at least one of: an internal structure image of the subcutaneous tissue beneath the dermis of the face corresponding to the skin state, a formation principle corresponding to the skin state, and a care suggestion for the skin state.

In a third aspect, the present application provides an electronic device comprising a collector and a processor connected to the collector. The collector is configured to acquire a face image. The processor is configured to determine an image feature at a specific location in the face image, acquire a skin state corresponding to the image feature at the specific location, and, in response to a preset operation on the specific location, activate enhancement content corresponding to the skin state and retrieve it for display on a display screen. Displaying the skin state at the preset location overlaid with the enhancement content on the display screen makes the application more engaging and improves the user experience.

In a possible implementation of the third aspect, the processor is specifically configured to determine a region of interest in the face image and determine the image feature at the specific location within the region of interest, which improves the accuracy of the image feature analysis.

In a possible implementation of the third aspect, the processor is further specifically configured to detect key points of the face in the face image, the key points including the positions of the facial contour and of the contours of the eyebrows, nose, eyes, and mouth. The processor can determine the basic contour of the face from the detected key points and divide the region of interest according to the key points.

In a possible implementation of the third aspect, the processor is further configured to determine a feature point in the face image and take the feature point as the specific location, where the image feature of the feature point differs from the image features of the region surrounding it.

In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation on the display screen is a zoom-in operation on the specific location and to acquire a magnification parameter of the specific location in the face image. When the magnification parameter reaches a preset threshold, the processor activates the enhancement content corresponding to the operation and the skin state. This keeps the interaction fluent and improves the user experience.

In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation being performed is a preset tap operation on the specific location and to activate the enhancement content corresponding to the operation and the skin state. The user can tap the skin area they want to examine, which makes the application more comfortable to use.

In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation is a first operation on the specific location and to activate the enhancement content corresponding to the operation and the skin state. This keeps the interaction fluent and improves the user experience.

In a possible implementation of the third aspect, the processor is further configured to determine, before activating the enhancement content corresponding to the operation and the skin state, whether the magnification factor of the specific location in the face image is greater than or equal to a first threshold, and if so, to activate the enhancement content.

In a fourth aspect, the present application provides an electronic device, including a processor and a memory, where the memory stores instructions, and the processor is configured to read the instructions stored in the memory to execute the method in the foregoing first aspect.

In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, causes the processor to execute the method of the first aspect.

Drawings

FIG. 1 is a schematic diagram of an application scenario of the augmented reality display of a face image according to an embodiment of the present application;

FIG. 2 is a schematic diagram of internal structure models corresponding to skin states according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a user interface for operating a mobile phone according to an embodiment of the present application;

FIG. 4 is a flowchart of a method for the magnified augmented reality display of skin details according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of face key points according to an embodiment of the present application;

FIG. 6 is a flowchart of a method for activating enhancement content corresponding to a skin state according to an embodiment of the present application;

FIG. 7 is a flowchart of a method for activating enhancement content corresponding to a skin state according to another embodiment of the present application;

FIG. 8 is a flowchart of a method for activating enhancement content corresponding to a skin state according to yet another embodiment of the present application;

FIG. 9 is a schematic diagram of a scenario in which a user operates a mobile phone interface according to an embodiment of the present application;

FIG. 10 is a flowchart of a user operating a mobile phone interface according to an embodiment of the present application;

FIG. 11 is a schematic diagram of an apparatus for the augmented reality display of skin details according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;

FIG. 13 is a block diagram of an apparatus according to some embodiments of the present application;

FIG. 14 is a block diagram of a system on a chip (SoC) according to some embodiments of the present application.

Detailed Description

Embodiments of the present application will be further described with reference to the accompanying drawings.

According to some embodiments of the present application, a method and related apparatus for displaying skin details in an augmented reality manner are disclosed. The method and apparatus according to the present application can be applied to the skin of various parts of the human body; in the detailed description, the skin of a human face is taken as an example for simplicity of description.

FIG. 1 shows a schematic diagram of an application scenario for the augmented reality display of a face image. As shown in FIG. 1, a user 101 captures a face image of themselves or of a nearby friend via the camera of a mobile terminal 102, and the face image is displayed on the display screen of the mobile terminal 102. Parameters of the face image, such as its resolution, depend mainly on the parameters of the camera component of the mobile terminal 102, such as the camera's resolution.

Further, the face image is analyzed by the detection module of the mobile terminal to obtain the skin state of the user's face. The skin state may be either problem skin or normal (healthy) skin. For facial skin, problem skin includes, but is not limited to, one or more of blackheads, acne, and nevi; the type (e.g., blackhead, acne, or nevus), the grade (e.g., papule or pustule, according to the severity of the acne), and the location information (e.g., the position of the acne in the face image) are stored.
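The stored record described above (type, grade, location) can be sketched as a small data structure; the field names and example values are illustrative, not prescribed by the application:

```python
from dataclasses import dataclass

@dataclass
class SkinState:
    """One detected skin state, following the text: type, grade, location."""
    kind: str        # "blackhead", "acne", "nevus", or "normal"
    grade: str       # e.g. "papule" or "pustule" for acne
    position: tuple  # (y, x) pixel coordinates in the face image

# hypothetical output of the detection module for one face image
detections = [
    SkinState("acne", "papule", (120, 84)),
    SkinState("blackhead", "mild", (200, 150)),
]
```

Each record is later used both to place the overlay and to select the matching enhancement content.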

Corresponding to each category of skin state, and to each grade within a category, enhancement content is provided to display additional information for that skin state. The enhancement content may be pre-stored in a memory local to the terminal device, or in a remote server connected to the terminal device, to facilitate quick recall by the terminal device. The enhancement content may also be content obtained by the terminal device through machine learning based on existing enhancement content, which is not limited here.
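The recall of pre-stored enhancement content keyed by category and grade can be sketched as a lookup table; the asset names below are hypothetical placeholders, not real files from the application:

```python
# Hypothetical table: each (type, grade) pair maps to a stored asset name.
ENHANCEMENT_CONTENT = {
    ("acne", "papule"): "papule_internal_model.glb",
    ("acne", "pustule"): "pustule_internal_model.glb",
    ("blackhead", "mild"): "blackhead_internal_model.glb",
}

def recall_content(kind, grade):
    """Return the enhancement content for a skin state, or None for
    normal skin and unknown states."""
    return ENHANCEMENT_CONTENT.get((kind, grade))
```

In practice the table could live on a remote server, with the terminal caching entries for quick recall, as the text describes.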

It can be understood that the enhancement content in the present application refers to virtual content that can be combined with the face image on the display screen of the terminal device by means of Augmented Reality (AR) technology. By combining the virtual enhancement content with the real face image, the present application makes the experience more engaging for the user.

According to one embodiment of the present application, the enhancement content may include virtual content in the form of an image, text, video, or audio, or a combination of at least two of these forms. For example, the enhancement content may be presented as a three-dimensional internal structure model of the skin, showing an image of the subcutaneous tissue structure beneath the dermis that corresponds to the skin state.

In the above example the internal structure model is displayed on its own as virtual content, but it may also be displayed in combination with other real or virtual content; for example, care suggestions presented as video, or an explanation of the formation principle presented as audio, may be displayed alongside the internal structure model.

Taking an internal structure model as an example of enhancement content, FIG. 2 schematically shows internal structure models corresponding to skin states such as acne, nevus, blackhead, and normal skin. Acne can further be classified into papules, pustules, and the like, so the acne internal structure models are correspondingly divided into a papule internal structure model and a pustule internal structure model.

According to an embodiment of the present application, fig. 3 is a schematic view of a mobile phone interface operated by a user. As shown in fig. 3, on the display screen 310 the user can perform a zoom-in operation on the face image 320 to observe the skin state in the image more clearly. The display of the enhanced content may be activated when the magnification of the image reaches a certain degree. Further, when the user wants to examine the internal structure of the skin state at a specific position in more detail, the user may perform a first operation at that position, for example enlarging or clicking the specific position of the face image, such as pressing two fingers on the display screen and gradually spreading them apart, to activate or trigger the internal structure model of the skin state. For example, if the skin state at the specific position is a blackhead and the user continues to enlarge the blackhead to a specified multiple, the internal structure model corresponding to the blackhead is displayed on the display screen 310 of the mobile terminal in an augmented reality manner. This increases the interest for the user, improves the user experience, and satisfies the user's desire to explore the details of the skin state.

The internal structure models of the skin states can be obtained one by one, each corresponding to one skin state, through existing model construction methods, and are stored in the mobile terminal device or in a remote server.

It should be noted that implementing the method of the present application on a mobile terminal with a corresponding application is only an exemplary description. The implementation of the present application is not limited to mobile terminals such as smartphones; it may also be carried out on other dedicated electronic devices having a shooting function and a display function, such as a dedicated skin treatment apparatus, on an electronic device that has no shooting function but can receive and display an image, or on an electronic device that has no display function but can be connected to a device that has one, which is not limited herein. The enhanced content of the present application is therefore universal, simple and feasible.

Based on the above description, the method for displaying skin details in augmented reality according to embodiments of the present application is described below in specific embodiments, taking as an example its execution on a smartphone as the mobile terminal shown in fig. 1. Fig. 4 shows a flow chart of the method for the augmented reality display of skin details according to the present application. As shown in fig. 4, the method specifically comprises:

Step S310: a face image to be analyzed is acquired and displayed on the display screen. The face image can be obtained through the image acquisition and shooting functions of the mobile terminal, and the acquired face image is displayed on the display screen of the smartphone. In other embodiments of the present application, the mobile terminal may also acquire images of other parts of the user's body besides the face, such as the hands or the back, which is not limited herein.

In step S320, the image features of a specific position in the face image are determined. The specific position may be a designated position on the face image manually selected by the user, such as the nose, a cheek or the forehead; that is, the user selects it freely on the screen according to his or her interest. For example, when the user wants to know the image features of a specific position in the face image, the user may touch the point on the display screen of the terminal device that corresponds to that position of the face image; the specific position in the face image is thereby obtained and then detected to extract its image features. The image features may be a color feature, a pattern contour, a color intensity, or a combination of two or more of these. In other words, the coordinates of the touched position on the display screen are mapped in equal proportion to the specific position in the face image, and the image features of that position are then determined, so that the user can determine the image features of a chosen position of the face image simply by touching the screen.
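The equal-proportion mapping from a touch on the display screen to a pixel position in the face image can be sketched as follows. This is a minimal illustration; the function and parameter names are illustrative and not part of the present application.

```python
def screen_to_image(tap_x, tap_y, view_w, view_h, img_w, img_h):
    """Map a tap in view (screen) coordinates to pixel coordinates
    in the displayed face image by equal-proportion scaling."""
    px = int(tap_x * img_w / view_w)
    py = int(tap_y * img_h / view_h)
    # Clamp to the image bounds in case the tap lands on the edge.
    return (min(max(px, 0), img_w - 1), min(max(py, 0), img_h - 1))
```

For example, a tap at (100, 50) in a 200x100 view showing a 1000x500 image maps to pixel (500, 250).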

According to another embodiment of the present application, the image features of the specific position in the face image can be identified and determined automatically by the terminal device based on a neural-network face recognition technology. That is, a face detection step using face recognition technology first determines feature points related to skin states in the face image; for example, when the image features of a point at a certain position are recognized to differ from those of the surrounding area, that position can be determined to be a specific position. This implementation is particularly useful for identifying problem skin.

As another implementation, the user may recognize with the naked eye that the color of a feature point differs from the color of the surrounding image, determine that position to be a feature point, and regard it as the specific position. For example, when the face has acne, the red of the acne differs from the yellow of the surrounding skin, and the user can determine the position of the acne on the image as the feature point to be processed through this color difference. To prevent the color from being influenced by factors such as lighting and shooting angle, the user can choose a position and angle with suitable light to photograph the image, so that the color difference can be recognized by the naked eye and a more faithful image is obtained.

According to an embodiment of the present application, before the image features of a specific position in the face image are determined, a Region of Interest (ROI) in the face image may be determined, so as to locate the position of interest more quickly and reduce the amount of computation. Different ROI regions may be set for different skin states; for example, blackheads generally appear on the nose, so the ROI region corresponding to blackheads may be set at the nose. After the ROI is determined, the specific position is determined further, and judging which ROI the specific position lies in helps to determine its image features within the range of that ROI region. For example, if the specific position is on the nose, then because the nose is the ROI region corresponding to blackheads, the skin state of the image at the specific position is preliminarily judged to possibly be a blackhead, and the image features are then extracted on the basis of this judgment, which can improve the accuracy of the image feature analysis.

The division of the ROI regions is described in detail below with reference to the drawings. Fig. 5 shows a schematic structural diagram of face key points. As shown in fig. 5, the key points of the face in the face image are detected, and the ROI regions of interest are divided according to the key points. Specifically, the key points include the positions of the contours of the face, eyebrows, nose, eyes and mouth. The number of key points set by different key-point detection algorithms differs: 68 key points, numbered 0 to 67, may be set as shown in fig. 5, or denser sets of 98, 1000, 4000 or more key points may be used, which is not limited herein. The basic outline of the face can be determined through key-point detection, and regions of interest can be divided according to the key points; for example, the area above the eyebrows can be divided off as the forehead 501, the parts below the eyes and on both sides of the nose as the cheeks 502, and the part below the mouth as the chin 503, among other ROI regions. Considering that some parts of the human face, such as the lips, eyes and nostrils, are not skin, the division of the ROI regions may bypass these parts; that is, the ROI regions may cover the facial skin with the lips, eyes and nostrils removed.
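The forehead/cheek/chin division described above can be sketched from 68 detected key points as follows. The index ranges follow the common 68-point convention (jaw 0-16, brows 17-26, nose 27-35, eyes 36-47, mouth 48-67); the function name, the box representation, and the forehead-height estimate are illustrative assumptions, not part of the original text.

```python
def divide_face_rois(landmarks):
    """Coarse ROI division from 68 face key points.
    `landmarks` is a list of 68 (x, y) tuples; returns axis-aligned
    boxes (x0, y0, x1, y1) for forehead, cheeks and chin."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    face_x0, face_x1 = min(xs), max(xs)
    brow_top = min(p[1] for p in landmarks[17:27])
    eye_bottom = max(p[1] for p in landmarks[36:48])
    nose_x0 = min(p[0] for p in landmarks[27:36])
    nose_x1 = max(p[0] for p in landmarks[27:36])
    mouth_bottom = max(p[1] for p in landmarks[48:68])
    jaw_bottom = max(ys)
    brow_h = eye_bottom - brow_top
    return {
        # Forehead: a band above the brows (68-point models have no
        # forehead landmarks, so its height here is estimated).
        "forehead": (face_x0, brow_top - brow_h, face_x1, brow_top),
        # Cheeks: below the eyes, on either side of the nose.
        "left_cheek": (face_x0, eye_bottom, nose_x0, mouth_bottom),
        "right_cheek": (nose_x1, eye_bottom, face_x1, mouth_bottom),
        # Chin: below the mouth, down to the jaw line.
        "chin": (face_x0, mouth_bottom, face_x1, jaw_bottom),
    }
```

Lip, eye and nostril areas can then be masked out of these boxes before any skin analysis, as the text notes.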

According to one embodiment of the present application, determining the image features of a specific position in the face image comprises detecting the skin of the ROI region of interest or of the specific position, and determining a feature point in the face image, the feature point serving as the specific position. The image features of the feature point differ from those of its peripheral region; for example, the color feature, pattern contour, color intensity, or a combination of these may differ at the specific position. The detection of feature points thus allows the specific position to be identified automatically.

Taking a color feature as an example, the image features of a specific position are described below. The skin details of a face image are detected by a skin detection module, and the skin detection method may be a conventional one. For the detection of acne, for example, an input color RGB (Red, Green, Blue) image may be converted into a grayscale image; the maximum gray value in each region is found, and each region of the grayscale image is normalized by that maximum; the color RGB image is also converted into the HSV (Hue, Saturation, Value) color space, and the V channel is extracted and normalized; the normalized grayscale image is then subtracted from the normalized V channel to obtain the feature points of the acne.
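The acne feature-point detection just described can be sketched as follows. Grayscale conversion and the V channel are computed directly (V in HSV is the per-pixel maximum of R, G and B), so reddish acne pixels keep a high V while their grayscale value drops, and the subtraction highlights them. The `block` and `thresh` parameters are illustrative values, not values from the original text.

```python
import numpy as np

def acne_candidate_mask(rgb, block=64, thresh=0.15):
    """Sketch of the acne detection above: block-wise max-normalized
    grayscale subtracted from the globally normalized HSV V channel;
    pixels with a large difference are candidate acne points."""
    rgb = rgb.astype(np.float32)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    norm_gray = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block]
            m = patch.max()
            norm_gray[y:y + block, x:x + block] = patch / m if m > 0 else 0.0
    v = rgb.max(axis=-1)                  # HSV V channel = max(R, G, B)
    norm_v = v / v.max() if v.max() > 0 else v
    return (norm_v - norm_gray) > thresh  # boolean candidate mask
```

On uniformly skin-colored pixels the two normalized maps agree and the difference is near zero; a red pimple pixel scores high because its grayscale value is far below the local maximum.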

It will also be understood by those skilled in the art that, once a feature point has been identified by the above analysis, an ROI region may in turn be determined from the feature point. For example, a circle of predetermined radius may be drawn around the identified feature point, and the circular region is then the ROI region determined from that feature point. Other possible feature points may be further identified within this ROI region.

In step S330, the skin state corresponding to the image features of the specific position is acquired. The skin state may be one of the states shown in fig. 2, such as skin with blackheads, acne or nevi; in other embodiments of the present application it may be another skin state, such as skin with pigmented spots or fine lines, which is not limited herein. For example, when the user touches the position of an acne spot on the nose on the display screen of the terminal device, the terminal device can determine through the detection module that the skin state at that position is acne.

Step S340: the enhanced content corresponding to the skin state is activated according to a specified operation for the specific position, and the enhanced content is called up and displayed superposed on the face image. The enhanced content is pre-stored and comprises virtual content, or a combination of virtual and real content. Displaying the skin state at the chosen position with the enhanced content superposed on the display screen increases the interest for the user and improves the user experience.
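The superposed display of step S340 amounts to compositing a pre-stored enhanced-content image onto the face image at the specific position. A minimal per-pixel alpha-blending sketch is given below; the function name, the `alpha` opacity map and the `top_left` anchor are illustrative assumptions.

```python
import numpy as np

def overlay_enhanced_content(face, content, alpha, top_left):
    """Alpha-blend an enhanced-content image (e.g. a rendering of an
    internal structure model) onto the face image.
    `alpha` is a per-pixel opacity map in [0, 1] with the content's
    height and width; `top_left` is the (row, col) anchor."""
    out = face.astype(np.float32).copy()
    y, x = top_left
    h, w = content.shape[:2]
    region = out[y:y + h, x:x + w]
    a = alpha[..., None]  # broadcast the opacity over color channels
    out[y:y + h, x:x + w] = a * content + (1 - a) * region
    return out.astype(np.uint8)
```

With `alpha` equal to 1 the virtual content fully replaces the underlying pixels; fractional values let the real skin show through, which matches the combination of virtual and real content described above.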

According to an embodiment of the present application, the enhanced content may include content in one of the forms image, text, video and audio, or in a combination of at least two of them; for example, at least one of an image of the internal structure of the subcutaneous tissue below the dermis corresponding to the skin state, the formation principle of the skin state, and care advice for the skin state. For example, where the enhanced content is an internal structure image corresponding to the skin state, when it is activated by the operation it is displayed superposed on the skin state at the specific position, so that the user can see the internal structure of the skin state more clearly; this satisfies the user's desire for further exploration of the skin state and improves the user experience.

According to one embodiment of the present application, activating the enhanced content corresponding to the skin state upon a specified operation at a specific position may be implemented in the following exemplary ways.

The first implementation, as shown in fig. 6, includes the following steps:

Step S510: it is determined that the operation on the display screen is an enlarging operation at the specific position (e.g. acne, a blackhead or a mole). The enlarging operation is a gesture in which the fingers touch the display screen, for example two fingers pressing the screen and gradually spreading apart, so that the specific position is magnified on the display screen. When the pixel size of the specific position after the operation is larger than before the operation, it is determined that an enlarging operation on the specific position is being performed. Alternatively, the user may select the magnification of the specific position through a magnification option.

Step S520: the magnification of the specific position in the acquired face image is determined; that is, it is detected that the specific position (e.g. acne, a blackhead or a mole) has been magnified N times by the user's operation, where N is a natural number greater than or equal to 1. For example, if the image is magnified by 3 times, the magnification of the specific position is 3.

Step S530: when the magnification reaches a predetermined first threshold, for example a magnification of the specific position of 5 times, the enhanced content corresponding to the operation and the skin state, such as the acne internal structure model, is activated. That is, once the image containing the feature point is magnified more than 5 times, the enhanced content is displayed.
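Steps S510 to S530 can be sketched as a small state holder that accumulates the zoom factor across pinch gestures and reports when the first threshold is reached. The class and method names are illustrative; the 5x threshold is the example value used above.

```python
class ZoomActivator:
    """Track the cumulative magnification of a specific position and
    decide when the enhanced content should be activated (S510-S530)."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.scale = 1.0  # current magnification relative to original

    def on_pinch(self, pixel_size_before, pixel_size_after):
        # S510/S520: a pinch that grows the region is an enlarging
        # operation; its factor is the ratio of pixel sizes.
        if pixel_size_after > pixel_size_before:
            self.scale *= pixel_size_after / pixel_size_before
        # S530: activate once the cumulative factor reaches threshold.
        return self.scale >= self.threshold
```

A first pinch from 100 to 300 pixels (3x) does not activate the content; a further pinch from 100 to 200 pixels brings the cumulative factor to 6x and triggers activation.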

According to another embodiment of the present application, as shown in fig. 7, the second implementation of activating the enhanced content corresponding to the skin state upon an operation at the specific position may include the following steps:

Step S510: it is determined that the operation being performed is a preset click operation on the specific position. The click operation may be a preset regular operation in which the user directly clicks the specific position on the display screen with a finger or a mouse, double-clicks or multi-clicks it, or long-presses it.

Step S520: after the user clicks the specific position according to the predetermined rule (single click, double click, multiple clicks, etc.), the enhanced content corresponding to the user's operation and the skin state, such as the acne internal structure model, is activated.

When the user performs a single-click, double-click or similar operation on a specific position to activate the display of the enhanced content, the click is performed on an enlarged image for the sake of operating accuracy. As in the first implementation, the magnification may be combined as one of the conditions for triggering the enhanced content. For example, when the image containing the feature point has been enlarged by at least 5 times, clicking the acne activates the display of enhanced content such as the acne internal structure model; conversely, when the image is enlarged only 3 times, clicking the feature point does not activate the enhanced content.

According to another embodiment of the present application, as shown in fig. 8, a third way of activating the enhanced content corresponding to the skin state upon a first operation at the specific position may include the following steps:

Step S710: it is determined that the operation on the feature point is a sliding operation along a preset track at the specific position. The sliding operation may be one that draws a preset track around the specific position, such as an arc, a circle or a straight line.

Step S720: when it is detected that the operation being performed slides along the preset track, for example a circle drawn around the specific point, or a special operation defined by the user on the screen in advance, such as drawing the track of a specific letter, the enhanced content corresponding to the skin state of the feature point is activated.

When the user performs a sliding operation on a specific position, or slides on the screen along a specially defined track, to activate the display of the enhanced content, the slide is performed on an enlarged image for the sake of operating accuracy. As in the first implementation, the magnification may be combined as one of the conditions for triggering the enhanced content. For example, when the image containing the feature point has been enlarged by at least 5 times, circling around the acne activates the display of enhanced content such as the acne internal structure model; conversely, when the image is enlarged only 3 times, circling the feature point does not activate the enhanced content.
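Recognizing the "circle around the specific point" slide of steps S710 and S720 can be sketched with a simple heuristic: the touch points should be roughly equidistant from the specific position, and the total swept angle should approach a full turn. Both the function name and the tolerance values are illustrative assumptions, not the application's prescribed recognizer.

```python
import math

def is_circle_around(track, center, tol=0.25):
    """Decide whether a slide trajectory (list of (x, y) points)
    draws a rough circle around `center`."""
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in track]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0 or any(abs(r - mean_r) / mean_r > tol for r in radii):
        return False  # not roughly equidistant from the center
    angles = [math.atan2(y - cy, x - cx) for x, y in track]
    swept = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        # Unwrap jumps across the -pi/pi boundary of atan2.
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        swept += d
    return abs(swept) >= 1.8 * math.pi  # close to a full turn
```

The same skeleton extends to other preset tracks (arcs, letters) by swapping the geometric test; a production gesture recognizer would also smooth the raw touch samples.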

In other embodiments of the present application, the operation may be a combination of click and slide operations that activates the enhanced content corresponding to the operation and the skin state, which is not limited herein.

The method for displaying skin details of the present application is described below in a specific usage scenario, taking a mobile phone as an example. Fig. 9 shows a scenario diagram of a user operating the interface of a mobile phone. As shown in fig. 9, the user first opens the corresponding App and shoots a face image. The face detection module of the mobile terminal then detects the basic condition of the skin in the face image and displays it comprehensively, for example the number and positions of acne spots, blackheads and moles, together with a comprehensive score for the user's skin. When the user wants to learn the more detailed structure of a certain skin state, the user can select the skin state of interest.

The method for displaying skin details is described below taking acne as an example. Fig. 10 shows a flowchart of a user operating the interface of a mobile terminal. As shown at a in fig. 10, the user first clicks the face image so that it is displayed on the display screen of the mobile terminal. As shown at b and c in fig. 10, the user zooms in on the face image by spreading two fingers on the display screen, and the magnification is detected; this makes it easier for the user to observe the details of the skin. As shown at c in fig. 10, the user then touches a specific position, here a pustule, with a finger; after touching it, sliding the finger upwards activates the image of the internal structure model corresponding to the pustule, so that the pustule internal structure model and the corresponding pustule are displayed in an augmented reality manner, as shown at d in fig. 10. Further, as shown at e in fig. 10, by clicking a specific position or indication, such as a specific symbol, the user can obtain text or audio information such as the formation principle of the skin state or care advice for it. When the user wants to return to the previous level, this can be done by clicking a back button or by a gesture, such as pressing two fingers on the display screen and pinching them together; it can be understood that the user can also zoom the picture out by the same two-finger pinch.

According to the method for displaying skin details in an augmented reality manner of the embodiments of the present application, a virtual image and the real skin image can be combined and displayed to the user in augmented reality, satisfying the user's wish to explore the root causes of skin problems and to learn about the skin. The user can not only view and manage his or her facial skin condition in real time, but also obtain a more intuitive and interesting experience through the augmented reality display; the user comes to know his or her skin problems more deeply, learns skin-related knowledge and obtains skin-care advice, so that learning becomes enjoyable and the user experience is improved.

According to some embodiments of the present application, a device for the augmented reality display of skin details is disclosed; fig. 11 shows a schematic structural diagram of the device. As shown in fig. 11, the display device includes:

a display screen 1001;

an acquisition module 1002, configured to acquire the face image of the user to be analyzed and display it on the display screen; in other embodiments of the present application the image may also be of another part of the user's body, which is not limited herein;

a detection module 1003, configured to determine the image features at a specific position in the face image, such as a color feature, a pattern contour, a color intensity, or a combination of two or more of these;

the detection module 1003 is further configured to acquire the skin state corresponding to the image features of the specific position, where the skin state includes normal skin and problem skin; the problem skin includes at least one type, the type including at least one of acne, mole or blackhead, and each type of problem skin includes at least one rank, for example acne is classified by severity into the two ranks papule and pustule. Other embodiments of the present application may cover other skin states, such as pigmented spots or fine wrinkles, which are not limited herein;

and a processing module 1004, configured to activate the enhanced content corresponding to the skin state according to a preset operation at the specific position and to display the enhanced content superposed on the face image on the display screen. The enhanced content comprises virtual content, or a combination of virtual and real content, and may be pre-stored locally in the terminal or retrieved on demand from a remote server.

According to one embodiment of the present application, the content exists in one of the forms image, text, video and audio, or in a combination of at least two of them, for example at least one of an image of the internal structure of the subcutaneous tissue below the dermis corresponding to the skin state, the formation principle of the skin state, and care advice for the skin state.

According to an embodiment of the present application, the detection module 1003 is further configured to determine an ROI region of interest in the face image and to determine the image features of the specific position within it. Different ROI regions can be set for different skin states; for example, blackheads usually appear on the nose, so the ROI region corresponding to blackheads can be set at the nose, and determining the image features of the specific position within the ROI region improves the accuracy of the image feature analysis.

Further, the detection module 1003 is specifically configured to detect the key points of the face in the face image and to divide the ROI regions of interest according to them. The key points comprise the positions of the facial contour and of the contours of the facial features in the face image, and the ROI regions of interest divided according to the key points comprise the regions of the face image that remain after the positions of the facial-feature contours are removed.

According to an embodiment of the present application, the detection module 1003 is further configured to determine a feature point in the face image and to take the feature point as the specific position, where the image features of the feature point differ from those of its peripheral region.

According to an embodiment of the present application, the processing module 1004 is specifically configured to determine that the operation is an enlarging operation at the specific position, to determine the magnification of the specific position in the acquired face image, and, when the magnification reaches a first threshold, to activate the enhanced content corresponding to the operation and the skin state.

According to an embodiment of the present application, the processing module 1004 is specifically configured to determine that the operation is a preset click operation at the specific position and to activate the enhanced content corresponding to the operation and the skin state, where the click operation includes a preset regular operation in which the user directly clicks the specific position on the display screen with a finger or a mouse, double-clicks or multi-clicks it, or long-presses it.

According to an embodiment of the present application, the processing module 1004 is specifically configured to determine that the operation is a sliding operation along a preset track at the specific position and to activate the enhanced content corresponding to the operation and the skin state, where the sliding operation includes one that draws a preset track around the specific position, such as an arc, a circle or a straight line.

The functions and workflow of each component of the display device have been described in detail in the foregoing embodiments; specific reference may be made to the method for displaying skin details in an augmented reality manner in the embodiments above, which is not repeated here.

According to the device for the augmented reality display of skin details of the embodiments of the present application, a virtual image and the real skin image can be combined and displayed to the user in augmented reality, making learning enjoyable and improving the user experience.

According to some embodiments of the present application, an electronic device is disclosed; fig. 12 shows a schematic structural diagram of the electronic device. As shown in fig. 12, the electronic device may specifically include:

a collector 1103 for collecting a facial image of the user;

A memory 1102 for storing instructions;

a processor 1104 for reading instructions stored in the memory to execute the method for displaying skin details in an augmented reality manner of the above embodiments; and

a display device, implemented as a display screen, for the augmented reality display of the enhanced content and the real skin state image.

Although in the above embodiment the electronic device itself comprises a display screen for displaying the enhanced content and the real skin state image, it will be understood by those skilled in the art that some electronic devices may not have a display screen of their own. According to the electronic device of the embodiments of the present application, the data may then be synchronized to a terminal device with a display screen that is communicably connected to the electronic device, such as a smartphone, a tablet computer, a desktop PC or a notebook PC, so as to present the corresponding augmented reality display image to the user.

According to the electronic device of the embodiments of the present application, a virtual image and the real skin image can be combined and displayed to the user in augmented reality, making learning enjoyable and improving the user experience.

Referring now to FIG. 13, shown is a block diagram of a device 1200 in accordance with one embodiment of the present application. The device 1200 may include one or more processors 1201 coupled to a controller hub 1203. In at least one embodiment, the controller hub 1203 communicates with the processor 1201 via a multi-drop bus such as a Front Side Bus (FSB), a point-to-point interface such as a QuickPath Interconnect (QPI), or a similar connection 1206. The processor 1201 executes instructions that control data processing operations of a general type. In one embodiment, the controller hub 1203 includes, but is not limited to, a Graphics Memory Controller Hub (GMCH) (not shown) and an Input/Output Hub (IOH) (which may be on separate chips) (not shown), where the GMCH includes the memory and graphics controllers and is coupled to the IOH.

The device 1200 may also include a coprocessor 1202 and a memory 1204 coupled to the controller hub 1203. Alternatively, one or both of the memory and the GMCH may be integrated within the processor (as described herein), with the memory 1204 and the coprocessor 1202 directly coupled to the processor 1201, and the controller hub 1203 and the IOH in a single chip. The memory 1204 may be, for example, a Dynamic Random Access Memory (DRAM), a Phase Change Memory (PCM), or a combination of the two. In one embodiment, the coprocessor 1202 is a special-purpose processor, such as a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a General Purpose Graphics Processing Unit (GPGPU), an embedded processor, or the like. The optional nature of the coprocessor 1202 is represented in FIG. 13 by dashed lines.

The memory 1204, as a computer-readable storage medium, may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. For example, the memory 1204 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives.

In one embodiment, the device 1200 may further include a Network Interface Controller (NIC) 1206. The network interface 1206 may include a transceiver to provide a radio interface for the device 1200 to communicate with any other suitable device (e.g., a front-end module, an antenna, etc.). In various embodiments, the network interface 1206 may be integrated with other components of the device 1200. The network interface 1206 may implement the functions of the communication unit in the above embodiments.

The device 1200 may further include Input/Output (I/O) devices 1205. The I/O devices 1205 may include: a user interface designed to enable a user to interact with the device 1200; a peripheral component interface designed to enable peripheral components to interact with the device 1200; and/or sensors configured to determine environmental conditions and/or location information associated with the device 1200.

It is noted that FIG. 13 is merely exemplary. That is, although FIG. 13 shows that the apparatus 1200 includes a plurality of devices, such as the processor 1201, the controller hub 1203, and the memory 1204, in practical applications an apparatus using the methods of the present application may include only some of these devices; for example, it may include only the processor 1201 and the NIC 1206. The optional nature of devices in FIG. 13 is shown by dashed lines.

According to some embodiments of the present application, the memory 1204, serving as a computer-readable storage medium, stores instructions that, when executed on a computer, cause the device 1200 to perform the methods according to the above embodiments; for details, reference may be made to the methods of the above embodiments, which are not repeated here.
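The methods of the above embodiments that such stored instructions would carry out follow the four claimed steps: acquire a face image, determine an image feature at a specific position, obtain the corresponding skin state, and display enhanced content in response to a preset operation at that position. The following is a minimal, hypothetical sketch of that flow; all names (`FaceImage`, `on_user_operation`, the `SKIN_STATE_OVERLAYS` mapping, and the example states) are illustrative assumptions and not part of the patent disclosure.

```python
from dataclasses import dataclass

# Hypothetical mapping from a detected skin state to AR overlay content
# (e.g., a magnified internal-structure model); not from the disclosure.
SKIN_STATE_OVERLAYS = {
    "blackhead": "3D model of a clogged pore",
    "wrinkle": "cross-section of dermal collagen",
    "normal": None,
}

@dataclass
class FaceImage:
    """Stand-in for an acquired face image with per-region image features."""
    region_features: dict  # position label -> image feature label

def get_skin_state(feature: str) -> str:
    """Step 3: map the image feature at a position to a skin state."""
    return feature if feature in SKIN_STATE_OVERLAYS else "normal"

def on_user_operation(image: FaceImage, position: str):
    """Steps 2-4: in response to a preset operation (e.g., a tap) at a
    position, determine the feature there, obtain the skin state, and
    return the enhanced content to overlay on the face image."""
    feature = image.region_features.get(position, "normal")
    state = get_skin_state(feature)
    return SKIN_STATE_OVERLAYS.get(state)

# Usage: a tap on the nose region activates the blackhead overlay content.
face = FaceImage(region_features={"nose": "blackhead", "cheek": "normal"})
print(on_user_operation(face, "nose"))  # prints "3D model of a clogged pore"
```

In an actual implementation the overlay content would be rendered superimposed on the live face image rather than returned as a string; the sketch only traces the claimed control flow.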

Referring now to FIG. 14, shown is a block diagram of a System on Chip (SoC) 1300 in accordance with an embodiment of the present application. In FIG. 14, like parts bear the same reference numerals. In addition, dashed boxes denote optional features of more advanced SoCs. In FIG. 14, the SoC 1300 includes: an interconnect unit 1350 coupled to the application processor 1310; a system agent unit 1380; a bus controller unit 1390; an integrated memory controller unit 1340; a set of one or more coprocessors 1320, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a Static Random Access Memory (SRAM) unit 1330; and a Direct Memory Access (DMA) unit 1360. In one embodiment, the coprocessor 1320 includes a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

The Static Random Access Memory (SRAM) unit 1330 may include one or more computer-readable media for storing data and/or instructions. A computer-readable storage medium may store instructions, in particular temporary and permanent copies of these instructions. The instructions, when executed by at least one unit in the processor, cause the SoC 1300 to perform the methods according to the above embodiments; for details, reference may be made to the methods of the above embodiments, which are not repeated here.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), Erasable Programmable Read-Only Memories (EPROMs), Electrically Erasable Programmable Read-Only Memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used in transmitting information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such a specific arrangement and/or order may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such a feature is required in all embodiments; in some embodiments, the feature may not be included or may be combined with other features.

It should be noted that, in the device embodiments of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of a logical unit/module is not itself critical, and the combination of the functions implemented by these logical units/modules is the key to solving the technical problem addressed by this application. Furthermore, in order to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are less closely related to solving the technical problem addressed herein; this does not mean that no other units/modules exist in the above device embodiments.

It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between these entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.

While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.
