Portrait marking method and device and computer readable storage medium

Document No.: 1337228 · Publication date: 2020-07-17

Note: This invention, a portrait labeling method, device and computer-readable storage medium, was designed and created by 吴峰, 吴奎, 邱小锋 and 张晓峰 on 2020-03-05. Its main content is as follows: the invention provides a portrait labeling method, a portrait labeling device and a computer-readable storage medium, wherein the portrait labeling method comprises the following steps: S1, acquiring the height and width of the display screen and of the interface, and calculating the height and width ratios between the display screen and the interface; S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction; S3, establishing coordinate axes and acquiring coordinate data; S4, respectively acquiring coordinate data of the interface; S5, calculating the height and width ratios between the unit display screen and the face in the interface; S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates; and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding box on the display screen. The portrait labeling method, device and computer-readable storage medium can reduce visual ghosting and keep the user from feeling dizzy.

1. A portrait labeling method is characterized by comprising the following steps:

S1, acquiring the height lHeight and width lWidth of a display screen and the height pHeight and width pWidth of an interface, calculating the ratio of the display screen height to the interface height, hRatio = lHeight/pHeight, and calculating the ratio of the display screen width to the interface width, wRatio = lWidth/pWidth;

S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction, so as to obtain N × M unit display screens;

S3, establishing coordinate axes and acquiring coordinate data of the N × M unit display screens;

S4, aligning at least two of the N × M unit display screens with a live-action face, and respectively acquiring the corresponding coordinate data of the interface;

S5, calculating the average height and average width of the face in the interface according to the coordinate data of the interface, and calculating the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates according to the coordinate data of the interface, the coordinate data of the unit display screens, the height ratio hRatio and width ratio wRatio between the display screen and the interface, and the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding box on the display screen.

2. The portrait labeling method of claim 1, wherein the N × M unit display screens are marked as unit rectangles A_mn, where m = 1, …, M indexes the width direction and n = 1, …, N indexes the height direction, and wherein the S3 specifically includes:

constructing a coordinate system with the upper-left vertex of the first unit rectangle at the upper-left corner as the origin (0, 0), the height direction of the display screen as the positive x-axis and the width direction of the display screen as the positive y-axis, and obtaining the height lHeight/N and width lWidth/M of each unit rectangle, wherein the coordinates of the upper-left point of A_mn are ((n − 1)·lHeight/N, (m − 1)·lWidth/M) and the coordinates of the lower-right point are (n·lHeight/N, m·lWidth/M).

3. The portrait labeling method according to claim 2, wherein the S4 specifically includes:

aligning each rectangular frame A_mn in turn with the live-action face, and respectively recording the face coordinate data of the interface, so as to obtain for each sampled frame a preview face box P_mn, the coordinates of whose upper-left point are denoted (px_mn, py_mn) and the coordinates of whose lower-right point are denoted (px'_mn, py'_mn).

4. The portrait labeling method of claim 3, wherein the S5 specifically includes:

calculating the average height of the face in the interface, h_avg, as the mean of (px'_mn − px_mn) over the sampled frames, and the average width of the face in the interface, w_avg, as the mean of (py'_mn − py_mn), where (px_mn, py_mn) and (px'_mn, py'_mn) are the upper-left and lower-right points of the preview face box P_mn; the height ratio between the unit display screen and the face in the interface is then xRatio = (lHeight/N)/h_avg, and the width ratio is yRatio = (lWidth/M)/w_avg.

5. The portrait labeling method of claim 4, wherein the S6 specifically includes:

taking the x coordinates px_mn of the upper-left points of the preview face boxes, scaled by hRatio, together with the corresponding x coordinates (n − 1)·lHeight/N of the upper-left points of the unit display screens as sample values, and solving a linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate, recorded as y = a + bx;

taking the y coordinates py_mn of the upper-left points of the preview face boxes, scaled by wRatio, together with the corresponding y coordinates (m − 1)·lWidth/M of the upper-left points of the unit display screens as sample values, and solving a linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate, recorded as y = c + dx.

6. The portrait labeling method of claim 5, wherein the S7 specifically includes:

letting the coordinates of the upper-left point of the face acquired in the interface be (x0, y0) and the coordinates of the lower-right point be (x1, y1), and then drawing a face bounding box on the display screen for the real scene with (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the upper-left starting point, (x1 − x0)·xRatio as the height and (y1 − y0)·yRatio as the width.

7. A portrait labeling device, comprising:

the acquisition module is used for acquiring the height lHeight and width lWidth of the display screen and the height pHeight and width pWidth of the interface, calculating the ratio of the display screen height to the interface height, hRatio = lHeight/pHeight, and calculating the ratio of the display screen width to the interface width, wRatio = lWidth/pWidth;

the dividing module is used for dividing the display screen into N equal parts along the height direction and M equal parts along the width direction to obtain N × M unit display screens;

the axis building module is used for building coordinate axes and obtaining coordinate data of the N × M unit display screens;

the coordinate acquisition module is used for aligning at least two of the N × M unit display screens with the live-action face and respectively acquiring the corresponding coordinate data of the interface;

the first calculation module is used for calculating the average height value and the average width value of the face in the interface according to the coordinate data of the interface; calculating the face height ratio xRatio and the face width ratio yRatio in the unit display screen and the interface;

the second calculation module is used for calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates according to the coordinate data of the interface, the coordinate data of the unit display screens, the height ratio hRatio and width ratio wRatio between the display screen and the interface, and the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

and the bounding box module is used for calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates and drawing a face bounding box on the display screen.

8. A computer-readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the portrait labeling method according to any one of claims 1 to 6.

Technical Field

The present invention relates to the field of augmented reality display technology, and in particular to a portrait labeling method and apparatus, and a computer-readable storage medium.

Background

Augmented Reality (AR) and Virtual Reality (VR) have attracted wide attention in recent years. Their near-to-eye display systems project a distant virtual image into the human eye by forming pixels on a display through a series of optical imaging elements. The difference is that AR glasses require see-through capability, so that both the real outside world and the virtual information are visible, which means the imaging system cannot sit directly in front of the line of sight. This requires adding one optical combiner, or a group of them, to integrate, complement and "enhance" the virtual information and the real scene in a "stacked" fashion.

The optical display system of an AR device usually consists of a miniature display screen and optical elements. In general, the display systems adopted by AR glasses currently on the market combine various miniature display screens with optical elements such as prisms, free-form surfaces, BirdBath optics and optical waveguides, and the choice of optical combiner is the key part distinguishing AR display systems.

At present, AR glasses label the portrait directly in the preview; the effect achieved by this scheme causes visual ghosting and makes the user feel dizzy.

Disclosure of Invention

In view of the above, the technical problem to be solved by the present invention is to provide a portrait labeling method, apparatus and computer-readable storage medium that can reduce visual ghosting and keep the user from feeling dizzy.

The technical scheme of the invention is realized as follows:

a portrait labeling method comprises the following steps:

S1, acquiring the height lHeight and width lWidth of a display screen and the height pHeight and width pWidth of an interface, calculating the ratio of the display screen height to the interface height, hRatio = lHeight/pHeight, and calculating the ratio of the display screen width to the interface width, wRatio = lWidth/pWidth;

S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction, so as to obtain N × M unit display screens;

S3, establishing coordinate axes and acquiring coordinate data of the N × M unit display screens;

S4, aligning at least two of the N × M unit display screens with a live-action face, and respectively acquiring the corresponding coordinate data of the interface;

S5, calculating the average height and average width of the face in the interface according to the coordinate data of the interface, and calculating the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates according to the coordinate data of the interface, the coordinate data of the unit display screens, the height ratio hRatio and width ratio wRatio between the display screen and the interface, and the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding box on the display screen.

Preferably, the N × M unit display screens are marked as unit rectangles A_mn, where m = 1, …, M indexes the width direction and n = 1, …, N indexes the height direction, and the S3 specifically includes:

constructing a coordinate system with the upper-left vertex of the first unit rectangle at the upper-left corner as the origin (0, 0), the height direction of the display screen as the positive x-axis and the width direction of the display screen as the positive y-axis, and obtaining the height lHeight/N and width lWidth/M of each unit rectangle, wherein the coordinates of the upper-left point of A_mn are ((n − 1)·lHeight/N, (m − 1)·lWidth/M) and the coordinates of the lower-right point are (n·lHeight/N, m·lWidth/M).

Preferably, the S4 specifically includes:

aligning each rectangular frame A_mn in turn with the live-action face, and respectively recording the face coordinate data of the interface, so as to obtain for each sampled frame a preview face box P_mn, the coordinates of whose upper-left point are denoted (px_mn, py_mn) and the coordinates of whose lower-right point are denoted (px'_mn, py'_mn).

Preferably, the S5 specifically includes:

calculating the average height of the face in the interface, h_avg, as the mean of (px'_mn − px_mn) over the sampled frames, and the average width of the face in the interface, w_avg, as the mean of (py'_mn − py_mn), where (px_mn, py_mn) and (px'_mn, py'_mn) are the upper-left and lower-right points of the preview face box P_mn; the height ratio between the unit display screen and the face in the interface is then xRatio = (lHeight/N)/h_avg, and the width ratio is yRatio = (lWidth/M)/w_avg.

Preferably, the S6 specifically includes:

taking the x coordinates px_mn of the upper-left points of the preview face boxes, scaled by hRatio, together with the corresponding x coordinates (n − 1)·lHeight/N of the upper-left points of the unit display screens as sample values, and solving a linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate, recorded as y = a + bx;

taking the y coordinates py_mn of the upper-left points of the preview face boxes, scaled by wRatio, together with the corresponding y coordinates (m − 1)·lWidth/M of the upper-left points of the unit display screens as sample values, and solving a linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate, recorded as y = c + dx.

Preferably, the S7 specifically includes:

letting the coordinates of the upper-left point of the face acquired in the interface be (x0, y0) and the coordinates of the lower-right point be (x1, y1), and then drawing a face bounding box on the display screen for the real scene with (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the upper-left starting point, (x1 − x0)·xRatio as the height and (y1 − y0)·yRatio as the width.
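The steps S1–S7 summarized above can be sketched end to end in Python. All dimensions, the grid size, the calibration boxes and the detected face box below are illustrative assumptions, not values from the invention:

```python
# Assumed display/preview dimensions and grid (illustrative only).
lHeight, lWidth = 1080.0, 1920.0        # display screen height/width
pHeight, pWidth = 540.0, 960.0          # preview interface height/width
N, M = 3, 3                             # S2: grid of unit display screens

hRatio, wRatio = lHeight / pHeight, lWidth / pWidth   # S1
unit_h, unit_w = lHeight / N, lWidth / M              # S3: unit rectangle size

# S4: hypothetical calibration data -- preview face boxes (x0, y0, x1, y1)
# recorded while unit rectangle A_mn framed the live-action face,
# keyed by (m, n) = (width index, height index).
samples = {(1, 1): (0.0, 0.0, 180.0, 160.0),
           (2, 2): (180.0, 320.0, 360.0, 480.0)}

# S5: average preview face size and unit-screen/face ratios.
avg_h = sum(x1 - x0 for x0, y0, x1, y1 in samples.values()) / len(samples)
avg_w = sum(y1 - y0 for x0, y0, x1, y1 in samples.values()) / len(samples)
xRatio, yRatio = unit_h / avg_h, unit_w / avg_w

# S6: ordinary least-squares fit y = a + b*x.
def fit(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Regress unit-screen upper-left coordinates on scaled preview coordinates.
a, b = fit([hRatio * box[0] for box in samples.values()],
           [(n - 1) * unit_h for m, n in samples])
c, d = fit([wRatio * box[1] for box in samples.values()],
           [(m - 1) * unit_w for m, n in samples])

# S7: map a detected preview face box onto the display screen.
x0, y0, x1, y1 = 100.0, 200.0, 280.0, 360.0
sx = a + b * (hRatio * x0)              # display x of the upper-left point
sy = c + d * (wRatio * y0)              # display y of the upper-left point
box_h = (x1 - x0) * xRatio              # height of the drawn face box
box_w = (y1 - y0) * yRatio              # width of the drawn face box
```

With these made-up calibration samples the fit happens to be the identity mapping on scaled coordinates (a = c = 0, b = d = 1), so the example box (100, 200)–(280, 360) in the preview maps to a display box starting at (200, 400) with height 360 and width 640.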

The invention also provides a portrait labeling device, which comprises:

the acquisition module is used for acquiring the height lHeight and width lWidth of the display screen and the height pHeight and width pWidth of the interface, calculating the ratio of the display screen height to the interface height, hRatio = lHeight/pHeight, and calculating the ratio of the display screen width to the interface width, wRatio = lWidth/pWidth;

the dividing module is used for dividing the display screen into N equal parts along the height direction and M equal parts along the width direction to obtain N × M unit display screens;

the axis building module is used for building coordinate axes and obtaining coordinate data of the N × M unit display screens;

the coordinate acquisition module is used for aligning at least two of the N × M unit display screens with the live-action face and respectively acquiring the corresponding coordinate data of the interface;

the first calculation module is used for calculating the average height value and the average width value of the face in the interface according to the coordinate data of the interface; calculating the face height ratio xRatio and the face width ratio yRatio in the unit display screen and the interface;

the second calculation module is used for calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates according to the coordinate data of the interface, the coordinate data of the unit display screens, the height ratio hRatio and width ratio wRatio between the display screen and the interface, and the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

and the bounding box module is used for calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates and drawing a face bounding box on the display screen.

The invention also proposes a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor for performing the steps of the portrait annotation method according to any one of claims 1 to 6.

The invention provides a portrait labeling method, a portrait labeling device and a computer-readable storage medium, wherein at least two of a plurality of unit display screens are aligned with a live-action face and the coordinate data of the interface are respectively acquired, and a linear regression equation of the unit display screen coordinates with respect to the interface coordinates is calculated. The coordinates of the face on the display screen can therefore be calculated from the linear regression equation and the interface coordinates, and a face bounding box is drawn on the display screen, which reduces visual ghosting and keeps the user from feeling dizzy.

Drawings

Fig. 1 is a display screen image in a portrait labeling method according to an embodiment of the present invention;

FIG. 2 is a diagram of the segmented display screen in the portrait labeling method according to an embodiment of the present invention;

FIG. 3 is a flowchart of a portrait annotation method according to an embodiment of the present invention;

fig. 4 is a block diagram of a portrait labeling apparatus according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

As shown in fig. 3, an embodiment of the present invention provides a portrait labeling method, including the following steps:

s101, enabling the interface (preview) to be transparent through the app, and displaying the default horizontal screen as shown in the figure 1.

S102, the program acquires the display screen height lHeight and width lWidth and the preview height pHeight and width pWidth, calculates the ratio of the display screen height to the preview height, hRatio = lHeight/pHeight, and calculates the ratio of the display screen width to the preview width, wRatio = lWidth/pWidth;

S103, as shown in fig. 2, the height of the display screen is divided into N equal parts by drawing N − 1 equally spaced lines on the display screen, and the width of the display screen is divided into M equal parts by drawing M − 1 equally spaced lines, so as to obtain N × M unit rectangles, which are marked as A_mn, where m = 1, …, M indexes the width direction and n = 1, …, N indexes the height direction;

S104, according to the display screen height and width obtained in S102, a coordinate system is constructed with the upper-left vertex of the first unit rectangle at the upper left as the origin (0, 0), the display screen height as the x-axis and the display screen width as the y-axis, and the height lHeight/N and width lWidth/M of each unit rectangle are obtained, wherein the coordinates of the upper-left point of A_mn are ((n − 1)·lHeight/N, (m − 1)·lWidth/M) and the coordinates of the lower-right point are (n·lHeight/N, m·lWidth/M);
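Assuming, as above, that m indexes the width direction and n the height direction, the corner computation of S104 can be sketched as follows; the screen dimensions and grid size in the example call are illustrative assumptions:

```python
def unit_rect(l_height, l_width, n_parts, m_parts, n, m):
    """Corners of unit rectangle A_mn: the x-axis runs along the screen
    height, the y-axis along the width, origin at the top-left vertex."""
    h = l_height / n_parts              # unit rectangle height (lHeight/N)
    w = l_width / m_parts               # unit rectangle width (lWidth/M)
    top_left = ((n - 1) * h, (m - 1) * w)
    bottom_right = (n * h, m * w)
    return top_left, bottom_right

# e.g. a 1080x1920 screen split into a 3x3 grid; rectangle in row 2, column 3
tl, br = unit_rect(1080.0, 1920.0, 3, 3, n=2, m=3)
```

For this rectangle the upper-left point is (360.0, 1280.0) and the lower-right point is (720.0, 1920.0).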

S105, each rectangular frame A_mn is aligned in turn with the live-action face so that the live-action face is framed by the rectangular frame, and the preview face coordinate data (the upper-left and lower-right points) at that moment are respectively recorded, so as to obtain a preview face box P_mn, the coordinates of whose upper-left point are denoted (px_mn, py_mn) and the coordinates of whose lower-right point are denoted (px'_mn, py'_mn);

S106, according to the coordinate data obtained in S105, the average preview face height h_avg is obtained as the mean of (px'_mn − px_mn) and the average preview face width w_avg as the mean of (py'_mn − py_mn); the ratio of the unit display screen height to the preview face height is xRatio = (lHeight/N)/h_avg, and the ratio of the unit display screen width to the preview face width is yRatio = (lWidth/M)/w_avg;
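S106 reduces to simple averaging; a minimal sketch, with two hypothetical calibration boxes and an assumed 360 × 640 unit rectangle:

```python
def face_ratios(boxes, unit_h, unit_w):
    """boxes: preview face boxes (x0, y0, x1, y1) recorded while each
    sampled unit rectangle framed the live-action face (S105).
    Returns (xRatio, yRatio) as defined in S106."""
    avg_h = sum(x1 - x0 for x0, y0, x1, y1 in boxes) / len(boxes)
    avg_w = sum(y1 - y0 for x0, y0, x1, y1 in boxes) / len(boxes)
    return unit_h / avg_h, unit_w / avg_w

# two made-up calibration boxes, each 180 high and 160 wide in the preview
x_ratio, y_ratio = face_ratios([(100.0, 200.0, 280.0, 360.0),
                                (120.0, 230.0, 300.0, 390.0)], 360.0, 640.0)
```

Here the unit rectangle is twice as tall and four times as wide as the average preview face, giving xRatio = 2.0 and yRatio = 4.0.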

S107, according to the coordinate data obtained in S105, the x coordinates px_mn of the upper-left points of the preview face boxes, scaled by hRatio, together with the corresponding x coordinates (n − 1)·lHeight/N of the upper-left points of the unit display screens, are taken as sample values, and a linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate is solved and recorded as y = a + bx;

S108, according to the coordinate data obtained in S105, the y coordinates py_mn of the upper-left points of the preview face boxes, scaled by wRatio, together with the corresponding y coordinates (m − 1)·lWidth/M of the upper-left points of the unit display screens, are taken as sample values, and a linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate is solved and recorded as y = c + dx;
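Steps S107–S108 are ordinary least-squares fits on the sampled coordinate pairs. A generic fit, exercised here with made-up sample values that lie exactly on the line y = 1 + 2x:

```python
def linreg(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# illustrative sample values: preview coordinates -> unit-screen coordinates
a, b = linreg([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

Running S107 and S108 amounts to calling such a fit twice, once on the x samples (yielding a, b) and once on the y samples (yielding c, d).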

S109, according to the corresponding coordinate relationship obtained above, a real-scene bounding box can be drawn on the display screen from the acquired preview coordinate values. Assuming that the coordinates of the upper-left point of the face acquired in the preview are (x0, y0) and the coordinates of the lower-right point are (x1, y1), a face bounding box is drawn on the display screen for the real scene with (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the upper-left starting point, (x1 − x0)·xRatio as the height and (y1 − y0)·yRatio as the width.
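The mapping of S109 is plain arithmetic once a, b, c, d and the ratios are known. A sketch with illustrative coefficient values (an identity-like calibration is assumed; none of the numbers come from the invention):

```python
def map_face_box(x0, y0, x1, y1, a, b, c, d,
                 h_ratio, w_ratio, x_ratio, y_ratio):
    """Map a preview face box to (start_x, start_y, height, width)
    of the bounding box drawn on the display screen (S109)."""
    start_x = a + b * (h_ratio * x0)    # display x of the upper-left point
    start_y = c + d * (w_ratio * y0)    # display y of the upper-left point
    return start_x, start_y, (x1 - x0) * x_ratio, (y1 - y0) * y_ratio

# assumed regression coefficients and ratios
box = map_face_box(100.0, 200.0, 280.0, 360.0,
                   a=0.0, b=1.0, c=0.0, d=1.0,
                   h_ratio=2.0, w_ratio=2.0, x_ratio=2.0, y_ratio=4.0)
```

With these assumed values the preview box (100, 200)–(280, 360) becomes a display box starting at (200, 400) with height 360 and width 640.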

As shown in fig. 4, the present invention further provides a portrait labeling apparatus, including:

an obtaining module 10, configured to acquire the display screen height lHeight and width lWidth and the interface height pHeight and width pWidth, calculate the ratio of the display screen height to the interface height, hRatio = lHeight/pHeight, and calculate the ratio of the display screen width to the interface width, wRatio = lWidth/pWidth;

a dividing module 20, configured to divide the display screen into N equal parts along a height direction and into M equal parts along a width direction, so as to obtain N × M unit display screens;

the axis building module 30 is used for building coordinate axes and obtaining coordinate data of the N × M unit display screens;

the coordinate acquisition module 40 is configured to align at least two unit display screens of the N × M unit display screens with a live-action face, and respectively acquire coordinate data of an interface;

the first calculation module 50 is used for calculating the average height value and the average width value of the face in the interface according to the coordinate data of the interface; calculating the face height ratio xRatio and the face width ratio yRatio in the unit display screen and the interface;

a second calculating module 60, configured to calculate a linear regression equation of the unit display screen coordinates with respect to the interface coordinates according to the coordinate data of the interface, the coordinate data of the unit display screens, the height ratio hRatio and width ratio wRatio between the display screen and the interface, and the height ratio xRatio and width ratio yRatio between the unit display screen and the face in the interface;

and a bounding box module 70, configured to calculate the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and to draw a face bounding box on the display screen.

The invention also proposes a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor for performing the steps of the portrait annotation method according to any one of claims 1 to 6.

Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by special-purpose hardware including special-purpose integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and specific hardware structures for implementing the same functions may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, the implementation of a software program is more preferable. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the method of the embodiments of the present application.

The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk).

Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
