Conference system-based participant monitoring and processing method and device and intelligent terminal

Document No. 1956629, published 2021-12-10 (original in Chinese)

Note: This technology, "Conference system-based participant monitoring and processing method and device and intelligent terminal", was designed and created by Tang Xiaoxian on 2021-08-31. Abstract: The invention discloses a conference system-based participant monitoring and processing method, a conference system-based participant monitoring and processing device, and an intelligent terminal, wherein the method comprises the following steps: acquiring and collecting the login information of each application program, and storing the collected login information corresponding to each application program in a preset password manager; based on the password manager, copying and extracting the login information corresponding to a specified application program; and controlling the specified application program to be logged in to acquire the copied login information for automatic recognition and filling, thereby completing the login. Compared with the prior art, the scheme of the invention identifies the behavior and appearance of the participants through a camera, comprehensively analyzes the participants' concentration, appearance characteristics, and gender characteristics to obtain the target user group of the conference, and outputs the participants' behavior information to assist the speaker in delivering the speech and controlling the room, improving the speaker's presentation skills and the atmosphere of the speech.

1. A conference system-based participant monitoring and processing method is characterized by comprising the following steps:

acquiring image data of conference participants;

determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained, according to a preset algorithm, from face orientation data, eyeball focus data, and a mobile phone screen-lit index in the image data;

and outputting the participation condition statistical data of the participants based on the concentration data of the participants.

2. The conference system-based participant monitoring and processing method according to claim 1, wherein the step of determining the concentration data of the participants in the image data based on the image data of the conference participants comprises:

determining a target user according to the departure rate and concentration data of the participants, and confirming the proportion data of the target user in the participants;

counting the ratio of each characteristic of the target user to obtain a target user portrait report;

outputting the target user portrait report.

3. The conference system-based participant monitoring and processing method according to claim 1, wherein the step of determining the concentration data of the participants in the image data based on the image data of the conference participants further comprises:

determining the departure rate and the conference speech length deviation data of the conference participants based on the image data of the conference participants;

performing speech scoring on the current conference based on the determined departure rate, the determined concentration data and the conference speech length deviation of the participants in the conference;

and synthesizing and outputting a conference report based on the speech scoring, the real-time scoring and the optimization suggestion.

4. The conference system-based participant monitoring and processing method according to claim 1, wherein the step of acquiring image data of conference participants comprises:

detecting that a conference is started, and shooting at preset time intervals to obtain a conference panoramic image;

and acquiring image data of conference participants based on the conference panoramic image.

5. The conference system based participant monitoring processing method according to claim 1, wherein the step of determining concentration data of participants in the image data based on the image data of the conference participants comprises:

identifying and processing the image data of the conference participants;

recognizing the face and the position of each image according to the time sequence;

determining, through image recognition, the participant information, mid-conference departure information, personal accessory information, and clothing information in the current image, and collating and ordering, in time sequence, each person's face orientation, mobile phone screen brightness, and posture;

identifying, through image recognition, the face orientation data, eyeball focus data, and mobile phone screen-lit index of the participants from the image data; wherein for the face orientation data, a face is judged forward-facing when more than half of the face is within the camera's shooting range, and otherwise judged not forward-facing; for the screen-lit index, an object in front of the detected face is recognized from the image as a mobile phone and its screen is classified as lit or unlit, and if no mobile phone can be recognized, the screen is judged unlit; and the eyeball focus data identifies the direction in which the eyes are focused, judged focused when the eyes are directed within 50% of the screen's center point and unfocused otherwise;

and obtaining the concentration data according to a preset algorithm based on the face orientation data, the eyeball focus data, and the mobile phone screen-lit index.

6. The conference system-based participant monitoring and processing method according to claim 5, wherein obtaining the concentration data according to a preset algorithm based on the face orientation data, the eyeball focus data, and the mobile phone screen-lit index comprises:

obtaining the concentration data by the formula: concentration data = 50% × face-forward probability + 30% × eye-focus probability + 20% × screen-unlit probability.

7. The method as claimed in claim 2, wherein the step of counting the ratio of each feature of the target user to obtain the target user portrait report comprises:

identifying and calculating the departure rate of the participants based on the image data of the conference participants;

confirming a conference target user based on the participant departure rate;

identifying the appearance characteristics of the target user to construct a picture based on the confirmed conference target user;

and counting the proportion of each appearance feature in the target user to generate a target user portrait report.

8. A conference system-based participant monitoring and processing device, characterized by comprising:

the image acquisition module is used for acquiring image data of conference participants;

the concentration identification module is used for determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained, according to a preset algorithm, from face orientation data, eyeball focus data, and a mobile phone screen-lit index in the image data;

the output control module is used for outputting participant participation condition statistical data based on the concentration degree data of the participants;

the user portrait module is used for determining target users according to the departure rate and concentration data of the participants and confirming the proportion of the target users among the participants; counting the ratio of each characteristic of the target users to obtain a target user portrait report; and outputting the target user portrait report;

the conference report generating module is used for determining the departure rate and conference speech length deviation data of the conference participants based on the image data of the conference participants; performing speech scoring on the current conference based on the determined departure rate, concentration data, and conference speech length deviation of the participants; and synthesizing and outputting a conference report based on the speech scoring, the real-time scoring, and the optimization suggestions.

9. An intelligent terminal, characterized in that the intelligent terminal comprises a memory, a processor and a conference system-based participant monitoring processing program stored in the memory and operable on the processor, wherein the conference system-based participant monitoring processing program realizes the steps of the conference system-based participant monitoring processing method according to any one of claims 1 to 7 when executed by the processor.

10. A computer-readable storage medium, wherein the computer-readable storage medium stores a conference system-based participant monitoring processing program, and the conference system-based participant monitoring processing program, when executed by a processor, implements the steps of the conference system-based participant monitoring processing method according to any one of claims 1 to 7.

Technical Field

The invention relates to the technical field of conference systems, in particular to a conference system-based participant monitoring and processing method, a conference system-based participant monitoring and processing device and an intelligent terminal.

Background

With the development of electronic technology, and especially the rapid development of camera and image processing technology, conference systems have become increasingly popular. However, conference systems in the prior art cannot monitor the concentration of the participants in a conference, and cannot determine each participant's level of engagement.

Thus, there is still a need for improvement and development of the prior art.

Disclosure of Invention

The invention mainly aims to provide a conference system-based participant monitoring and processing method, device, intelligent terminal, and computer-readable storage medium, and aims to solve the problem that conference systems in the prior art cannot monitor the concentration of conference participants or determine each participant's level of engagement.

In order to achieve the above object, a first aspect of the present invention provides a conference system-based participant monitoring processing method, where the method includes:

acquiring image data of conference participants;

determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained, according to a preset algorithm, from face orientation data, eyeball focus data, and a mobile phone screen-lit index in the image data;

and outputting the participation condition statistical data of the participants based on the concentration data of the participants.

Optionally, the step of determining the concentration degree data of the participants in the image data based on the image data of the conference participants comprises:

determining a target user according to the departure rate and concentration data of the participants, and confirming the proportion data of the target user in the participants;

counting the ratio of each characteristic of the target user to obtain a target user portrait report;

outputting the target user portrait report.

Optionally, the step of determining the concentration degree data of the participants in the image data based on the image data of the conference participants further includes:

determining the departure rate and the conference speech length deviation data of the conference participants based on the image data of the conference participants;

performing speech scoring on the current conference based on the determined departure rate, the determined concentration data and the conference speech length deviation of the participants in the conference;

and synthesizing and outputting a conference report based on the speech scoring, the real-time scoring and the optimization suggestion.

Optionally, the step of acquiring image data of conference participants includes:

detecting that a conference is started, and shooting at preset time intervals to obtain a conference panoramic image;

and acquiring image data of conference participants based on the conference panoramic image.

Optionally, the step of determining the concentration degree data of the participants in the image data based on the image data of the conference participants includes:

identifying and processing the image data of the conference participants;

recognizing the face and the position of each image according to the time sequence;

determining, through image recognition, the participant information, mid-conference departure information, personal accessory information, and clothing information in the current image, and collating and ordering, in time sequence, each person's face orientation, mobile phone screen brightness, and posture;

identifying, through image recognition, the face orientation data, eyeball focus data, and mobile phone screen-lit index of the participants from the image data; wherein for the face orientation data, a face is judged forward-facing when more than half of the face is within the camera's shooting range, and otherwise judged not forward-facing; for the screen-lit index, an object in front of the detected face is recognized from the image as a mobile phone and its screen is classified as lit or unlit, and if no mobile phone can be recognized, the screen is judged unlit; and the eyeball focus data identifies the direction in which the eyes are focused, judged focused when the eyes are directed within 50% of the screen's center point and unfocused otherwise;

and obtaining the concentration data according to a preset algorithm based on the face orientation data, the eyeball focus data, and the mobile phone screen-lit index.

Optionally, obtaining the concentration data according to a preset algorithm based on the face orientation data, the eyeball focus data, and the mobile phone screen-lit index includes:

obtaining the concentration data by the formula: concentration data = 50% × face-forward probability + 30% × eye-focus probability + 20% × screen-unlit probability.
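The weighted formula above can be sketched directly in code. This is a minimal illustration, assuming the three probabilities are per-participant frequencies in [0, 1] measured over the captured images; the function and parameter names are hypothetical, not from the original method.

```python
# Hypothetical sketch of the weighted concentration formula described above.
# Inputs are assumed per-participant frequencies in [0, 1].

def concentration_score(face_forward_prob: float,
                        eye_focus_prob: float,
                        screen_unlit_prob: float) -> float:
    """Concentration = 50% x face-forward + 30% x eye-focus + 20% x screen-unlit."""
    return (0.50 * face_forward_prob
            + 0.30 * eye_focus_prob
            + 0.20 * screen_unlit_prob)

# Example: a participant facing forward in 80% of frames, eyes focused in 60%,
# and phone screen unlit in 90% of frames.
score = concentration_score(0.8, 0.6, 0.9)
```

Under these assumed inputs the score works out to approximately 0.76, with the face-forward term dominating as the formula intends.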

Optionally, the step of counting the ratio of each feature of the target user to obtain the target user portrait report includes:

identifying and calculating the departure rate of the participants based on the image data of the conference participants, and identifying the clothing, accessories, hairstyle, age, and gender of the participants, wherein the departure rate is the number of departures divided by the number of image captures;

confirming a conference target user based on the participant departure rate;

identifying the appearance characteristics of the target users to construct a portrait based on the confirmed conference target users;

and counting the proportion of each appearance feature in the target user to generate a target user portrait report.
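The portrait-report flow above (departure rate, target-user selection, feature tallying) can be illustrated with a hedged sketch. The thresholds, record fields, and feature names below are assumptions for illustration only, not part of the claimed method.

```python
from collections import Counter

# Illustrative sketch of the target-user portrait flow described above.
# Thresholds and field names are assumptions, not part of the original method.

def departure_rate(times_left: int, times_captured: int) -> float:
    """Departure rate = number of departures / number of image captures."""
    return times_left / times_captured

def target_user_report(participants, max_departure=0.2, min_concentration=0.7):
    """Select target users, then tally the proportion of each appearance feature."""
    targets = [p for p in participants
               if departure_rate(p["left"], p["shots"]) <= max_departure
               and p["concentration"] >= min_concentration]
    report = {"target_share": len(targets) / len(participants)}
    for feature in ("gender", "age_group", "style"):
        counts = Counter(p[feature] for p in targets)
        total = len(targets) or 1  # avoid division by zero when no targets
        report[feature] = {k: v / total for k, v in counts.items()}
    return report
```

A participant with a low departure rate and high concentration is counted as a target user, and the report then gives the share of each gender, age group, and dressing style within that group, matching the proportions described in the claim.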

The second aspect of the present invention provides a conference system-based participant monitoring and processing apparatus, wherein the apparatus comprises:

the image acquisition module is used for acquiring image data of conference participants;

the concentration identification module is used for determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained, according to a preset algorithm, from face orientation data, eyeball focus data, and a mobile phone screen-lit index in the image data;

the output control module is used for outputting participant participation condition statistical data based on the concentration degree data of the participants;

the user portrait module is used for determining target users according to the departure rate and concentration data of the participants and confirming the proportion of the target users among the participants; counting the ratio of each characteristic of the target users to obtain a target user portrait report; and outputting the target user portrait report;

the conference report generating module is used for determining the departure rate and conference speech length deviation data of the conference participants based on the image data of the conference participants; performing speech scoring on the current conference based on the determined departure rate, concentration data, and conference speech length deviation of the participants; and synthesizing and outputting a conference report based on the speech scoring, the real-time scoring, and the optimization suggestions.

A third aspect of the present invention provides an intelligent terminal, where the intelligent terminal includes a memory, a processor, and a conference system-based participant monitoring processing program that is stored in the memory and is executable on the processor, and the conference system-based participant monitoring processing program implements any one of the steps of the conference system-based participant monitoring processing method when executed by the processor.

A fourth aspect of the present invention provides a storage medium, where a conference system-based participant monitoring processing program is stored in the storage medium, and when being executed by a processor, the conference system-based participant monitoring processing program implements any one of the steps of the conference system-based participant monitoring processing method.

From the above, in the scheme of the invention, the invention provides a conference member concentration monitoring method based on image shooting and image processing of a conference television camera, and the invention adds new functions to a conference system: the conference system has the function of monitoring the concentration degree of the participants in the conference, can timely know the participation condition of the participants, and can provide the user portrait of the target user interested in the conference content according to the concentration degree of the participants so as to help the conference speaker to adjust the speech mode.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.

Fig. 1 is a schematic flow chart of a conference system-based participant monitoring processing method according to an embodiment of the present invention;

fig. 2 is a schematic flowchart of the implementation of step S100 in fig. 1;

fig. 3 is a schematic flowchart of the implementation of step S200 in fig. 1;

fig. 4 is a schematic specific flowchart of a conference system-based participant monitoring process according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a conference system-based participant monitoring and processing apparatus according to an embodiment of the present invention;

fig. 6 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.

Detailed Description

In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.

It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".

The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.

With the rapid development of Internet technology, demand for Internet-based online meetings and online courses has gradually increased; when company employees are on business trips, or schools cannot hold classes normally for some reason, meetings and classes can still proceed online. Over a network, however, it is not as easy as in an in-person meeting or class to notice which participants or students are distracted, or which are particularly interested in the lecture content. Likewise, in online product launches, lectures, and classes it is difficult to gauge the audience's attention and interest, and a presenter cannot simultaneously lecture and observe and analyze the attention of everyone present.

In order to solve the problems in the prior art, the invention provides a conference member concentration monitoring method based on image shooting and image processing of a conference television camera, and the invention adds new functions to a conference system: the conference system has the function of monitoring the concentration degree of the participants in the conference, can timely know the participation condition of the participants, and can provide the user portrait of the target user interested in the conference content according to the concentration degree of the participants so as to help the conference speaker to adjust the speech mode.

Exemplary method

As shown in fig. 1, an embodiment of the present invention provides a conference system-based participant monitoring processing method, specifically, the method includes the following steps:

Step S100, acquiring image data of conference participants;

in this embodiment, the participant monitoring system or the application software collects image data of the participant through the camera, including wearing and dressing appearances of the participant and actions of the participant. The appearance of looking up of meeting personnel's dress is used for judging attribute such as meeting personnel's sex, age provides the reference value of target user group for the speaker, meeting personnel's action includes facial orientation, and behaviors such as hand body posture are judged through above-mentioned facial orientation whether meeting personnel are gazing speaker or screen, through hand body posture infers meeting personnel's mood is relax, anxious or impatient, for the speaker provides each meeting personnel's mental state.

When the conference is an online conference, the online conference room controls each participant's camera to open its widest shooting angle and acquire image data of the participant's face or upper body; when the conference is an offline conference, a wide-angle camera or a camera with a rotatable viewing angle collects image data of the participants at the conference site periodically or in real time. The method thus monitors online or offline participants in real time or at intervals, helping the speaker observe how the participants are listening.

Step S200, determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained, according to a preset algorithm, from face orientation data, eyeball focus data, and a mobile phone screen-lit index in the image data;

in this embodiment, the determining, by the monitoring system, concentration degree data of the participant in the image data according to the image data specifically includes: identifying face orientation data of each participant in the image data through an image identification technology, and judging that the participant is relatively attentive when the face orientation of the participant faces a screen or a speaker; identifying eye focus power data for each participant in the image data, similar to the face orientation data, determining that the participant is relatively more attentive when the eye focus orientation is a screen or a presenter or a focus trajectory thereof changes between the presenter and the screen; the mobile phone screen non-brightness index is used for judging whether the face and the brightness or the hand action near the face of the participant judge whether the user uses a mobile phone or other similar electronic products, and when the brightness near the face of the participant is judged to be high or the hand action behavior of the participant is analyzed to judge that the participant uses the mobile phone, the participant is considered to be low in concentration degree. Besides, the concentration data of the participants can be collected in other modes of analyzing the behavioral and action attention of the participants. In the steps of the method, the concentration degree data of the participants is obtained through analysis according to the image data returned in real time or at regular time, so that the lecturer is assisted to control the rhythm of the lecture or the conference, or a teacher giving lessons is helped to find out students with different lectures, and the efficiency of class giving is improved.

Step S300, outputting participation statistics of the participants based on the concentration data of the participants.

In this embodiment, the monitoring system outputs participation statistics for the participants based on the analyzed concentration data, and sends the statistics to the presenter of the conference, lecture, or online class. The statistics include, for example, the proportion of participants currently listening attentively, and the proportion of attentive participants within each gender, age group, and clothing style, from which the target users are obtained by data screening. The feedback frequency is set according to the presenter's needs: if the presenter only wants to know, after a speech ends, the target user group for the speech content and how the audience's concentration changed during the speech, the feedback is set to be retrieved manually from the monitoring system after the speech; if the presenter wants to continuously receive feedback on the participants' attention during the speech and adjust the speech rhythm accordingly, the monitoring system is set to return the participation statistics in real time or at short fixed intervals.

When the speech is delivered online, the system displays the statistics on the speaker's computer through a software application, for example showing the proportion of currently high-concentration participants and their characteristics as a pie chart or bar chart, so that the speaker can promptly identify the target user group and adapt the speaking style to capture its attention; when the speech is delivered offline, the system can send the participation statistics by wireless transmission to earphones, smart glasses, or other portable smart devices worn by the speaker. Through these steps, the monitoring system helps the speaker control the speaking rhythm and improve presentation skills by transmitting the participation statistics to the speaker, providing a better speaking and listening experience for the speaker and the participants.
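The participation statistics described in step S300 could be computed along these lines. This is a hedged sketch in which the attentiveness threshold (0.7) and the record fields are illustrative assumptions, not values given by the original method.

```python
from collections import defaultdict

# Hedged sketch of step S300's participation statistics. The attentiveness
# threshold (0.7) and the record fields are illustrative assumptions.

def participation_stats(participants, threshold=0.7):
    """Return the overall attentive share and the attentive share per
    (gender, age_group) demographic group."""
    attentive = [p for p in participants if p["concentration"] >= threshold]
    stats = {"attentive_share": len(attentive) / len(participants)}
    by_group = defaultdict(lambda: [0, 0])  # group -> [attentive, total]
    for p in participants:
        key = (p["gender"], p["age_group"])
        by_group[key][1] += 1
        if p["concentration"] >= threshold:
            by_group[key][0] += 1
    stats["by_group"] = {k: a / t for k, (a, t) in by_group.items()}
    return stats
```

The per-group shares are the kind of breakdown a pie chart or bar chart on the speaker's screen would display, and the group with the highest attentive share corresponds to the target user group described above.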

Besides analyzing the attendees' concentration, the image analysis can also provide effective help in locating and acquiring target users: from the appearance features of the more attentive attendees, a user portrait of the target users interested in the lecture and conference content is built, including data such as the gender, age group and dressing style of the most engaged target users.

As can be seen from the above, the conference-system-based participant monitoring and processing method provided in this embodiment of the present invention monitors participant concentration through image capture and image processing by a conference television or a conference-site camera, adding a new function to the conference system: it monitors the concentration of the participants in the conference and makes their participation status known in time, helping the conference speaker adjust the lecture style.

Specifically, in this embodiment, when the conference is held offline, the monitoring system acquires image data of the conference participants through a wide-angle camera; when other equipment is used to acquire the image data, the specific scheme in this embodiment may still be referred to.

In an application scenario, after a lecture or conference begins, the participant monitoring system controls the camera to start and acquire image data containing the conference participants.

Specifically, in this embodiment, as shown in fig. 2, the step S100 includes:

Step S101, detecting that a conference has started, and shooting at preset time intervals to obtain a conference panoramic image;

Step S102, acquiring image data of the conference participants based on the conference panoramic image.

For example, during a certain speaker's lecture, when the participant monitoring system detects an operation instruction to start the conference, it starts the wide-angle camera that shoots the attendees in the hall and captures panoramic images of all attendees in real time or at preset intervals, obtaining image data of everyone in the lecture hall. When the hall is large, a motion path and angles are set for the wide-angle camera, and panoramic images containing every attendee's image data are obtained once per preset interval, such as every ten seconds. The image data is used to extract each attendee's appearance, clothing, movements and expressions, from which each attendee's gender and age attributes and degree of attention to the conference are obtained. In this way, image information of all participants is acquired at preset intervals so that every participant is analyzed, the participants' attention to the conference and the overall lecture atmosphere are obtained, and help is provided to the speaker.
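The timed panoramic capture cycle described above can be sketched as follows; the `capture_plan` helper, the ten-second interval and the sweep angles are illustrative assumptions, not part of the disclosed system:

```python
def capture_plan(meeting_seconds, interval_seconds=10, sweep_angles=(0, 45, 90)):
    """Return (timestamp, camera_angle) pairs for panoramic capture.

    One capture cycle runs every `interval_seconds`; in a large hall the
    wide-angle camera additionally sweeps through several preset angles
    per cycle so that every attendee appears in some frame.
    """
    plan = []
    for t in range(0, meeting_seconds, interval_seconds):
        for angle in sweep_angles:
            plan.append((t, angle))
    return plan
```

For a 30-second window with two angles, `capture_plan(30, 10, (0, 90))` yields six capture points, three cycles of two angles each.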

In an application scenario, the monitoring system analyzes concentration data representing the attendees' listening concentration based on the captured image data of the attendees; the concentration data is obtained by analyzing indexes such as the attendees' face orientation data, eyeball focus data and whether a phone screen is lit in the image data.

Specifically, as shown in fig. 3, the step S200 includes:

Step S201, identifying the image data of the conference participants;

Step S202, recognizing the face and position of each attendee in each image in time sequence;

Step S203, determining, through image recognition, the participant information, mid-conference departure information, accessory information and clothing information in the current image, and collating the face orientation, screen lighting and gestures of the same person in time sequence;

Step S204, recognizing, through image recognition, the attendees' face orientation data, eyeball focus data and screen-not-lit index from the image data. The face orientation data is judged as follows: when more than half of the face is visible within the camera's shooting range, the face is judged to be facing forward, and otherwise not. The screen-not-lit index is obtained by detecting the object in front of the face: if the object is recognized as a mobile phone, the screen is classified as lit or not lit; if no mobile phone can be recognized, the screen is judged not lit. The eyeball focus data identifies the gaze direction: the attendee is counted as focused when the gaze falls within 50% of the screen's center point, and as unfocused otherwise;

Step S205, obtaining the concentration data according to a predetermined algorithm based on the face orientation data, the eyeball focus data and the screen-not-lit index.

Specifically, obtaining the concentration data according to the predetermined algorithm based on the face orientation data, the eyeball focus data and the screen-not-lit index includes:

computing the formula: concentration data = 50% × face-forward probability + 30% × eyeball-focus probability + 20% × screen-not-lit probability.

For example, the monitoring system recognizes and processes the acquired image data, locates the attendees within it, tracks and analyzes the face and position of each attendee across the images in time sequence, and from this analysis determines each attendee's action and path records, including the number of departures, clothing information, average face orientation, phone screen lighting and gestures. The face orientation data is used to judge whether the attendee is in a listening state: specifically, when the attendee's face occupies more than half of a full face area in the captured image and faces the camera, the attendee is judged to be facing the screen or the speaker. The eyeball focus direction judges whether the attendee is looking at the screen or the speaker by analyzing the gaze position in the image data: specifically, when the gaze is judged to fall within 50% of the screen's center point, the attendee is judged to be listening, and otherwise unfocused and not listening. The screen-not-lit index analyzes whether the attendee is using a phone by detecting whether the object in front of the attendee's face is a mobile phone and whether its screen is on or off.
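The per-frame indicator rules above can be sketched as a small classifier; the function name and its numeric inputs (face-area ratio, gaze offset) are hypothetical stand-ins for the outputs of an actual image-recognition pipeline:

```python
def classify_frame(face_area_ratio, gaze_offset_ratio, phone_detected, phone_screen_lit):
    """Classify one frame's three indicators as described in step S204.

    face_area_ratio   -- visible face area / full-face area (> 0.5 => facing forward)
    gaze_offset_ratio -- gaze distance from the screen center as a fraction of
                         screen size (<= 0.5 => counted as focused)
    phone_detected    -- whether an object in front of the face was recognized
                         as a mobile phone; if not, the screen counts as not lit
    phone_screen_lit  -- whether that phone's screen is lit
    """
    face_forward = face_area_ratio > 0.5
    eyes_focused = gaze_offset_ratio <= 0.5
    screen_not_lit = (not phone_detected) or (not phone_screen_lit)
    return face_forward, eyes_focused, screen_not_lit
```

Averaging these booleans over all frames of one attendee yields the per-indicator probabilities used by the weighting formula.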

The concentration data is obtained from the face orientation data, the eyeball focus data and the screen-not-lit index according to the predetermined algorithm, specifically: concentration data = 50% × face-forward probability + 30% × eyeball-focus probability + 20% × screen-not-lit probability, where each probability is derived from the proportion of time the face is forward, the eyes are focused, or the screen is not lit, weighted by the corresponding ratio. For example, when attendee A faces the screen, focuses on the screen or speaker, and keeps the phone screen unlit 50% of the time each, A's concentration data is 50%. The concentration data ranges from 0% to 100%, and considering that an attendee cannot realistically focus on a lecture 100% of the time, an attendee whose concentration data exceeds 70% is judged to be listening seriously. Quantifying the attendees' listening concentration in this way gives a data-based picture of the listening atmosphere, letting the speaker grasp the audience's current enthusiasm more intuitively.
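The weighting formula and the 70% attentiveness threshold from this embodiment can be written directly as a short sketch; the function names are illustrative:

```python
def concentration(face_forward_prob, eye_focus_prob, screen_not_lit_prob):
    # Weighted sum from the embodiment:
    # 50% face forward + 30% eyeball focus + 20% screen not lit.
    return 0.5 * face_forward_prob + 0.3 * eye_focus_prob + 0.2 * screen_not_lit_prob

def is_attentive(score, threshold=0.7):
    # The embodiment treats concentration above 70% as "listening seriously".
    return score > threshold
```

With all three probabilities at 50%, `concentration(0.5, 0.5, 0.5)` reproduces attendee A's 50% score from the example above.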

Further, after the step of determining the concentration data of the participants based on the image data of the conference participants, the method includes:

determining target users according to the participants' departure rates and concentration data, and confirming the proportion of target users among the participants;

counting the ratio of each feature of the target users to obtain a target user portrait report;

outputting the target user portrait report.

Specifically, these steps include:

identifying and calculating the departure rate of the participants based on the image data of the conference participants;

confirming the conference target users based on the participants' departure rates;

identifying the appearance features of the confirmed conference target users to construct a portrait;

and counting the proportion of each appearance feature among the target users to generate a target user portrait report.

In an application scenario, parameters such as each participant's standing frequency and departure rate are identified from the participants' image data, and a target user portrait report is automatically generated by further combining the participants' appearance features.

For example, the gender, age, hairstyle and clothing accessories of each attendee are identified and registered as the appearance features of that attendee, and the attendee's departure rate is calculated as the number of captured images in which the attendee is absent divided by the total number of images captured since the start of the lecture, i.e. departure rate = absent image count / total image count. The target users of the lecture are then confirmed based on the obtained departure rates: for example, the monitoring system collects everyone's departure rate and takes the 20% of participants with the lowest rates as target users; target users may also be selected as the 20% of participants with the highest concentration data; and to further identify the target users who are truly listening attentively, participants who score well on both indexes, combining departure rate and concentration data, are taken as target users. Data statistics are then performed on the target users' appearance-feature portraits, i.e. data on the participants' looks, clothing and personal features such as accessories, hairstyle, age and gender. For example, when the lecture content concerns a console game, males may account for 72% of the target users, with the largest share in the 18-25 age group; after statistics over all the appearance-feature portrait data are completed, the conclusion can be drawn that the target users for this console-game lecture and its related products are males aged 18-25 whose clothing leans toward sportswear and who mostly do not wear glasses.
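The departure-rate formula and the double-index target-user selection described above can be sketched as follows; the 20% fraction matches the embodiment, while the tuple layout and function names are illustrative assumptions:

```python
def departure_rate(frames_absent, frames_total):
    # departure rate = images in which the attendee is absent / total images taken
    return frames_absent / frames_total

def select_target_users(attendees, fraction=0.2):
    """attendees: list of (name, departure_rate, concentration) tuples.

    Returns the attendees who are in the lowest `fraction` by departure
    rate AND in the highest `fraction` by concentration, i.e. the
    "double index" target users of the embodiment.
    """
    n = max(1, int(len(attendees) * fraction))
    by_departure = sorted(attendees, key=lambda a: a[1])[:n]
    by_focus = sorted(attendees, key=lambda a: a[2], reverse=True)[:n]
    return [a for a in by_departure if a in by_focus]
```

Either single index (lowest departure rate or highest concentration) could be used alone, as the text notes; intersecting both simply tightens the target-user set.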
These data are also output and transmitted to the speaker's wearable device; during the lecture the speaker chats about related content with the participants according to the target user portrait report, and a better lecture atmosphere is created by increasing interaction with the target users.

In an application scenario, the monitoring system comprehensively scores the whole lecture or conference according to the participants' concentration data, departure rates, lecture-length deviation data and so on from the above steps, analyzes them in real time to produce optimization suggestions, and synthesizes and outputs a conference report.

For example, the monitoring system determines the participants' departure rates, concentration data and lecture-length deviation based on the image data captured by the camera and the speaker's lecture content. The lecture-length deviation compares a preset lecture schedule with the current progress: when the speaker spends less time than scheduled on the same content, the lecture is judged to be running fast, and the larger the gap between the time spent at the same content position and the preset time, the larger the lecture-length deviation. The methods for obtaining the departure rate and the concentration data are described in the above steps and are not repeated. The monitoring system then scores the lecture from the obtained departure rates, concentration data and lecture-length deviation, yielding a real-time score and a lecture score. The real-time score is obtained only from the participants' listening state, calculated as real-time score = (1 − departure rate) × 3 + concentration × 5, with a full score of eight; the lecture score combines the participants' listening state with the speaker's delivery, and its full score is ten.
Meanwhile, the monitoring system analyzes the data intelligently to obtain conference improvement points. For example, the improvement points obtained from real-time analysis of the lecture include reminding the speaker that the departure rate is high or that many attendees are playing with their phones. When the monitoring system evaluates the whole lecture after it ends, it divides the data into the beginning, middle and end stages of the lecture according to how each datum changed during the conference and proposes staged optimization suggestions; for example, when the real-time score while the speaker presents a certain PPT page falls below 4 points, the page number is recorded and the optimization suggestion indicates that this page of the PPT needs further refinement. The lecture score, real-time score and optimization suggestions are then combined into the conference report and output in real time or intermittently. In this way, the speaker can obtain detailed audience data and his or her own pacing data in real time, learn the lecture's shortcomings from the intelligently analyzed optimization suggestions and respond at once, and after the lecture analyze the weak points of the presentation from the in-lecture data, thereby improving lecture skills.
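The real-time scoring formula and the weak-slide flagging rule above can be sketched as follows; `flag_weak_slides` and its 4-point threshold mirror the PPT example, while the function names themselves are illustrative:

```python
def realtime_score(departure_rate, concentration):
    # Real-time score = (1 - departure rate) x 3 + concentration x 5,
    # out of a maximum of 8 (both inputs lie in [0, 1]).
    return (1 - departure_rate) * 3 + concentration * 5

def flag_weak_slides(page_scores, threshold=4):
    """page_scores: list of (page_number, realtime_score) pairs.

    Records the slide pages whose real-time score dropped below the
    threshold, so the optimization suggestion can mark them for rework.
    """
    return [page for page, score in page_scores if score < threshold]
```

With zero departures and full concentration, `realtime_score(0.0, 1.0)` reaches the maximum of 8.0.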

In this embodiment, the conference-system-based participant monitoring and processing method is further described with a concrete application scenario. Fig. 4 is a specific flowchart of the participant monitoring and processing process provided in this embodiment of the present invention, with the following steps:

Step S10, start, and proceed to step S11;

Step S11, acquiring image data of the participants with the camera, and proceeding to step S12;

Step S12, controlling the processing of the image data, and proceeding to step S13;

Step S13, analyzing the data to judge the participants' concentration, obtaining data on which participants are listening attentively in the conference, and proceeding to step S14;

Step S14, collecting user portraits, namely the appearance features of the highly attentive participants, and proceeding to step S15;

Step S15, analyzing the image data and the lecture data collected from the speaker, including the pacing of the lecture, to obtain a lecture score, and proceeding to step S16;

Step S16, outputting the data and reports obtained by the analysis, and proceeding to step S20;

Step S20, end.

As can be seen from the above, in this embodiment of the present invention, the participant monitoring system controls the camera to collect image data of the participants, processes the acquired images containing the participants' appearance information, judges each participant's listening concentration from the images, and obtains overall concentration data for everyone attending the conference. Further, user portraits of the highly attentive participants, including gender, appearance and clothing preference, are extracted, and the appearance features they have in common are obtained by statistical analysis. Furthermore, the participants' image data is analyzed to obtain the speaker's lecture score, and finally all this information is synthesized and output.

Exemplary device

As shown in fig. 5, corresponding to the conference-system-based participant monitoring and processing method, an embodiment of the present invention further provides a conference-system-based participant monitoring and processing apparatus, which includes:

an image acquisition module 510, configured to acquire image data of conference participants;

in this embodiment, the participant monitoring system or application software collects image data of the participants through the camera, including the participants' clothing and appearance and their actions. The clothing and appearance are used to judge attributes such as the participants' gender and age, providing the speaker with a reference for the target user group; the participants' actions include face orientation and behaviors such as hand and body posture. The face orientation is used to judge whether a participant is gazing at the speaker or the screen, and the hand and body posture is used to infer whether the participant's mood is relaxed, anxious or impatient, giving the speaker each participant's mental state.

When the conference is held online, the online conference room controls the participants' cameras to open at the maximum viewing angle to acquire image data of the participants' faces or upper bodies; when the conference is held offline, a wide-angle camera or a camera with a rotatable viewing angle collects image data of the participants at the conference site at fixed intervals or in real time. This achieves real-time or timed monitoring of online or offline participants and assists the speaker in observing their listening state.

a concentration identification module 520, configured to determine concentration data of the participants in the image data based on the image data of the conference participants, where the concentration data is obtained according to a predetermined algorithm from the face orientation data, eyeball focus data and screen-not-lit index in the image data;

in this embodiment, the monitoring system determines the participants' concentration data from the image data, specifically: identifying each participant's face orientation data in the image data through image recognition, and judging the participant relatively attentive when the face is oriented toward the screen or the speaker; identifying each participant's eyeball focus data and, similarly, judging the participant relatively more attentive when the gaze is directed at the screen or the presenter or its trajectory moves between them; and using the screen-not-lit index, which judges from the face, the brightness near it and nearby hand movements whether the participant is using a mobile phone or a similar electronic product, deeming the participant's concentration low when high brightness near the face is detected or the hand movements indicate phone use. Concentration data can also be collected by other means of analyzing the attention expressed in the participants' behavior. Through these steps, the concentration data is obtained from image data returned in real time or at fixed intervals, assisting the lecturer in controlling the rhythm of the lecture or conference, or helping a teacher find inattentive students and improve class efficiency.

an output control module 530, configured to output participant participation statistics based on the participants' concentration data;

in this embodiment, the monitoring system outputs statistical data on the participation of the attendees based on the concentration data obtained by analysis, and sends the statistics to the presenter of a meeting, lecture or online class. The statistical data includes, for example, the proportion of attendees currently listening attentively, and that proportion broken down by gender, age group and clothing style; target users are obtained by screening this data. The feedback frequency is set according to the lecturer's needs: when the lecturer only wants to know, after the lecture is finished, the target user group for the lecture content and how the audience's concentration changed during the lecture, the feedback is set to manual retrieval from the monitoring system after the lecture; when the lecturer wants to continuously receive feedback on the attendees' concentration during the lecture and adjust the lecture rhythm accordingly, the monitoring system is set to return the concentration statistics in real time or at fixed short intervals.

When the lecture is given online, the system displays the statistics on the lecturer's computer as a software application, for example showing the proportion of currently highly attentive attendees and their characteristics as a pie chart or bar chart, so that the lecturer can promptly identify the target user group and change the lecture style to capture its attention; when the lecture is given offline, the system can send the participation statistics to earphones, smart glasses or other portable smart devices worn by the lecturer via wireless transmission. Through these steps, the monitoring system helps the lecturer control the lecture rhythm, improves the lecturer's speaking skills, and provides a better lecture and listening experience for both the lecturer and the attendees.

The user image module 540 is used for determining a target user according to the departure rate and concentration data of the participants and confirming the proportion data of the target user in the participants; counting the ratio of each characteristic of the target user to obtain a target user portrait report; outputting a target user image report;

in this embodiment, the highly attentive target users and their proportion are confirmed based on the participants' concentration data, and the ratio of each feature of the target users is further counted; for example, in a lecture on a beauty product, the highly attentive users might be mostly female, aged 25 to 30 and mostly with long hair, and the target user portrait report is obtained and output by this statistical method. Through these steps, the target users among the conference participants and their characteristics are obtained automatically, facilitating the smooth running of the conference and the corresponding promotion.
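The feature-ratio statistics behind the portrait report can be sketched with a frequency count; the feature keys (`gender`, `age_group`, `style`) are illustrative placeholders for whatever appearance attributes the recognition pipeline actually extracts:

```python
from collections import Counter

def portrait_report(target_users, features=("gender", "age_group", "style")):
    """target_users: list of dicts of appearance features, e.g.
    {"gender": "female", "age_group": "25-30", "style": "casual"}.

    Returns, for each feature, the share of every observed value among
    the target users, which is the ratio counted for the portrait report.
    """
    report = {}
    for feature in features:
        counts = Counter(user[feature] for user in target_users)
        total = sum(counts.values())
        report[feature] = {value: count / total for value, count in counts.items()}
    return report
```

The dominant value of each feature (e.g. 100% female, 50% casual dress) is then what the report presents to the speaker.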

a conference report generating module 550, configured to determine, based on the image data of the conference participants, the participants' departure rates and the conference lecture-length deviation data; score the current conference's lecture based on the determined departure rates, concentration data and lecture-length deviation; and synthesize and output a conference report based on the lecture score, real-time score and optimization suggestions.

In this embodiment, the participants' departure rates, lecture-length deviation, concentration and other data are determined from the captured image data of the participants; based on these data, the lecture or conference is scored and the data feedback is merged and output, yielding normalized, quantified feedback on the lecture's effect. This helps the user study and improve the lecture repeatedly, make positive improvements for subsequent lectures and promotion, and improves the speaker's lecture skills, the lecture's effect and its promotion.

Therefore, the conference-system-based participant monitoring and processing device provided by the present invention adds new functions to the conference system: it monitors the participants' concentration during the conference, makes their participation status known in time, and can provide a user portrait of the target users interested in the conference content according to the participants' concentration, helping the conference speaker adjust the lecture style.

Specifically, in this embodiment, the specific functions of each module of the conference system-based participant monitoring and processing apparatus may refer to the corresponding descriptions in the conference system-based participant monitoring and processing method, which are not described herein again.

Based on the above embodiment, the present invention further provides an intelligent terminal, and a schematic block diagram thereof may be as shown in fig. 6. The intelligent terminal comprises a processor, a memory and a network interface which are connected through a system bus. Wherein, the processor of the intelligent terminal is used for providing calculation and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a conference system-based participant monitoring processing program. The internal memory provides an environment for the operation of an operating system in the nonvolatile storage medium and a conference system-based participant monitoring processing program. The network interface of the intelligent terminal is used for being connected and communicated with an external terminal through a network. When being executed by a processor, the conference system-based participant monitoring processing program realizes the steps of any conference system-based participant monitoring processing method.

It will be understood by those skilled in the art that the block diagram shown in fig. 6 is only a block diagram of part of the structure related to the solution of the present invention and does not limit the intelligent terminal to which the solution is applied; a specific intelligent terminal may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.

In one embodiment, an intelligent terminal is provided, where the intelligent terminal includes a memory, a processor, and a conference system-based participant monitoring processing program stored in the memory and executable on the processor, and the conference system-based participant monitoring processing program, when executed by the processor, performs the following operations:

acquiring image data of conference participants;

determining concentration data of the participants in the image data based on the image data of the conference participants, wherein the concentration data is obtained according to a predetermined algorithm from the face orientation data, eyeball focus data and screen-not-lit index in the image data;

and outputting the participation condition statistical data of the participants based on the concentration data of the participants.

The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with a conference system-based participant monitoring processing program, and the conference system-based participant monitoring processing program is executed by a processor to realize the steps of any conference system-based participant monitoring processing method provided by the embodiment of the invention.

It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical division, and the actual implementation may be implemented by another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.

The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents contained in the computer-readable storage medium may be increased or decreased as required by legislation and patent practice in the jurisdiction.

The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.
