Information communication method and system

Document No.: 1736857  Publication date: 2019-12-20

Note: This technology, "信息沟通方法及其系统" (Information communication method and system), was designed and created by 黄柏钧 on 2019-08-23. Abstract: The invention provides an information communication method and system, applied to a mobile device equipped with a camera. The method comprises the steps of: inputting communication information; capturing a current facial image of the user of the device; selecting a corresponding preset action according to the current facial image; and executing the preset action and applying it to the communication information so as to send the communication information. After the user inputs communication information, the user's current facial image is captured immediately, the user's current expression and/or emotion is derived by analyzing that image, a corresponding preset action is selected according to the expression and/or emotion, and the preset action is executed and applied to the communication message. With this method and system, the user's real expression and/or emotion at the time of inputting or sending a communication message can be determined accurately and effectively, improving the communication efficiency of sender and receiver and making the communication process more vivid.

1. An information communication method applied to a mobile device with a camera, the method comprising the steps of:

inputting communication information;

capturing a current facial image of the user of the device;

selecting a corresponding preset action according to the current facial image; and

executing the preset action and applying it to the communication information so as to send the communication information.

2. The information communication method according to claim 1, wherein the communication information comprises one of, or a combination of, text information, numeric information, image information, symbol information, and voice information.

3. The information communication method according to claim 1, wherein said "selecting a corresponding preset action according to the current facial image" comprises:

analyzing expression parameters associated with the current facial image, and selecting the corresponding preset action according to the expression parameters.

4. The information communication method according to claim 3, wherein the preset action comprises:

controlling the communication information to be displayed dynamically according to the expression parameters; and/or

adding a corresponding expression label according to the expression parameters, and displaying the expression label in combination with the communication information.

5. The information communication method according to claim 1, wherein before said "inputting communication information", further comprising:

collecting a plurality of expression parameters corresponding to a plurality of expressions, and compiling statistics on the plurality of expression parameters to preset an expression parameter database.

6. The information communication method according to claim 3 or 5, wherein the expression parameters include:

one of, or a combination of, the relative coordinates, distances, sizes, and shapes of the eyebrows, eyes, and mouth in the facial image.

7. The information communication method according to claim 1, wherein said executing the preset action and applying it to the communication information so as to send the communication information comprises:

the user selecting whether to apply the preset action to the communication information and whether to send the communication information.

8. The information communication method according to claim 7, wherein if the user chooses not to apply the preset action to the communication information, the current facial image of the user is captured again to reselect a corresponding preset action.

9. An information communication system applied to a mobile device with a camera, the information communication system comprising:

the input module is used for inputting communication information;

the image capturing module is coupled to the input module and is used for capturing a current facial image of the user of the device;

the processing module is coupled to the image capturing module and is used for selecting a corresponding preset action according to the current facial image;

and the execution module is coupled to the processing module and is used for executing the preset action and applying it to the communication information so as to send the communication information.

10. The information communication system according to claim 9, wherein the mobile device comprises any one of a smart phone, a notebook computer, and a wearable device.

Technical Field

The present invention relates to the field of electronic communications, and in particular, to an information communication method and system.

Background

At present, remote information communication is deeply embedded in most people's lives. Because of its convenience and immediacy, almost everyone uses an information communication system every day to communicate conveniently with other users over a data transmission system.

However, this mode of communication remains flat and monotonous: it cannot convey the user's real-time emotional expression vividly, and may even cause misunderstandings between the two communicating parties.

Therefore, it is necessary to design a new information communication method and system thereof to overcome the above-mentioned drawbacks.

Disclosure of Invention

The invention aims to provide an information communication method and system that can determine the sender's real-time emotion while the communication information is being generated and execute a corresponding preset action according to that emotion.

In order to achieve the above object, the present invention provides an information communication method applied to a mobile device having a camera, the method comprising: inputting communication information; capturing a current facial image of the user of the device; selecting a corresponding preset action according to the current facial image; and executing the preset action and applying it to the communication information so as to send the communication information.

Preferably, the communication information includes one of, or a combination of, text information, numeric information, image information, symbol information, and voice information.

Preferably, the "selecting a corresponding preset action according to the current facial image" includes: analyzing expression parameters associated with the current facial image, and selecting the corresponding preset action according to the expression parameters.

Preferably, the preset action includes: controlling the communication information to be displayed dynamically according to the expression parameters; and/or adding a corresponding expression label according to the expression parameters and displaying the expression label in combination with the communication information.

Preferably, before the step of "inputting communication information", the method further comprises: collecting a plurality of expression parameters corresponding to a plurality of expressions, and compiling statistics on the plurality of expression parameters to preset an expression parameter database.

Preferably, the expression parameters include: one of, or a combination of, the relative coordinates, distances, sizes, and shapes of the eyebrows, eyes, and mouth in the facial image.

Preferably, the step of executing the preset action and applying it to the communication information so as to send the communication information includes: the user selecting whether to apply the preset action to the communication information and whether to send the communication information.

Preferably, if the user chooses not to apply the preset action to the communication information, the current facial image of the user is captured again to reselect a corresponding preset action.

In addition, the present invention also provides an information communication system applied to a mobile device having a camera, the information communication system comprising: the input module is used for inputting communication information; the image capturing module is coupled to the input module and is used for capturing a current facial image of the user of the information communication system; the processing module is coupled to the image capturing module and is used for selecting a corresponding preset action according to the current facial image; and the execution module is coupled to the processing module and is used for executing the preset action and applying it to the communication information so as to send the communication information.

Preferably, the mobile device includes any one of a smart phone, a notebook computer, and a wearable device.

Compared with the prior art, the information communication method and system provided by the invention capture the current facial image of the user immediately after the communication information is input, execute a corresponding preset action according to that image, and apply the preset action to the communication information. This makes the communication information more vivid and improves the efficiency and accuracy with which emotion is conveyed between sender and/or receiver when the communication information is transmitted.

Drawings

FIG. 1 is a functional block diagram of an information communication system according to an embodiment of the present invention;

FIG. 2 is a functional block diagram of an image capturing module according to an embodiment of the present invention;

FIG. 3 is a schematic diagram illustrating an application of a preset action according to an embodiment of the present invention;

FIG. 4 is a schematic diagram illustrating an application of a preset action according to another embodiment of the present invention;

FIG. 5 is a schematic flowchart of an information communication method according to an embodiment of the present invention;

FIG. 6 is a schematic flowchart of an information communication method according to another embodiment of the present invention; and

FIG. 7 is a schematic flowchart of an information communication method according to yet another embodiment of the present invention.

Detailed Description

In order to further understand the objects, structures, features and functions of the present invention, the following embodiments are described in detail.

Certain terms are used throughout the description and the following claims to refer to particular components. As one of ordinary skill in the art will appreciate, manufacturers may refer to a component by different names. This specification and the claims do not distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and thus should be interpreted to mean "including, but not limited to".

Referring to fig. 1, fig. 2, fig. 3, fig. 4, fig. 5, fig. 6, and fig. 7, fig. 1 is a functional block diagram of an information communication system according to an embodiment of the present invention, fig. 2 is a functional block diagram of an image capture module according to an embodiment of the present invention, fig. 3 is a schematic diagram of an application of a preset action according to an embodiment of the present invention, fig. 4 is a schematic diagram of an application of a preset action according to another embodiment of the present invention, fig. 5 is a schematic flowchart of an information communication method according to an embodiment of the present invention, fig. 6 is a schematic flowchart of an information communication method according to another embodiment of the present invention, and fig. 7 is a schematic flowchart of an information communication method according to another embodiment of the present invention.

As shown in fig. 1 to 7, the present invention provides an information communication method and system thereof, which are applied to a mobile device with a camera, wherein the device may be any one of a smart phone, a notebook computer and a wearable device, but is not limited to the above devices. The present invention is implemented based on hardware and software modules in the information communication system 10, as shown in fig. 1, the modules may include an input module 100, an image capturing module 200 coupled to the input module 100, a processing module 300 coupled to the image capturing module 200, and an execution module 400 coupled to the processing module 300.

The input module 100 includes a data input device and can input information such as text information, numeric information, image information, symbol information, and audio information. Specifically, the input module 100 may include one or a combination of a physical or virtual keyboard unit, a camera unit, and a microphone unit.

As shown in fig. 2, the image capturing module 200 may include a photographing lens 201, a camera 202, and a face detection unit 203. The photographing lens 201 images the subject; the camera 202 receives the subject image formed by the photographing lens 201, performs photoelectric conversion, and outputs an image signal; and the face detection unit 203 detects the subject's face from the image signal output by the camera 202. Preferably, the face detection unit 203 can capture the facial expressions and movements of the person being photographed. The photographing lens 201 preferably has a large angle of view so that a complete facial image of the user can be obtained, and its frame rate should be high enough to capture the user's complete facial movements. It should be noted that the specific embodiment may be determined according to actual requirements, and the invention is not limited in this respect. Because the image capturing module 200 is coupled to the input module 100, it is activated after it receives an input signal from the input module 100 or after the input module 100 finishes inputting the communication information. It should be understood that the camera used to capture the user's current facial image in this embodiment is preferably a camera facing the user in the information communication system 10, for example the front-facing camera of a smartphone.

The processing module 300 may include a processor with computation/analysis capability, such as a computing and control circuit module based on a Micro Controller Unit (MCU), together with corresponding peripheral circuit modules. The processor can analyze the current facial image of the user to obtain expression parameters associated with that image, and select a corresponding preset action according to the obtained parameters. Preferably, the processing module 300 also has a storage function and is configured to collect a plurality of expression parameters corresponding to a plurality of expressions, so as to compile statistics on them and preset an expression parameter database.

Further, the processing module 300 establishes a plurality of corresponding expression parameters in advance according to the structures of different types of facial expressions, and each facial expression is described by a plurality of similar or analogous expression parameters. In other words, one facial expression may correspond to a plurality of similar or analogous expression parameters. It will be appreciated that each facial expression corresponds to one or more preset actions.

Generally, a person's facial expression can be divided into five states: neutral, disgust, pleasure, surprise, and anger, and the expression can change arbitrarily from one state to any of the other four. Therefore, in this embodiment, the expression parameter database may be set up according to these five states, and the expression parameters may include, for example, parameters corresponding to the neutral, disgust, pleasure, surprise, and anger expressions.
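
As a concrete illustration, the statistics step described above can be sketched as averaging the collected parameter vectors per expression, so that each state is described by one representative parameter set. This is a minimal sketch only; the function names and the sample values are illustrative assumptions, not values from the specification.

```python
# Hypothetical sketch of presetting the expression parameter database.
# The five states and the numeric samples below are illustrative.

EXPRESSION_STATES = ("neutral", "disgust", "pleasure", "surprise", "anger")

def build_expression_database(samples):
    """Average the collected parameter vectors per expression so each
    state is described by one representative parameter vector."""
    database = {}
    for state, vectors in samples.items():
        n = len(vectors)
        database[state] = [sum(v[i] for v in vectors) / n
                           for i in range(len(vectors[0]))]
    return database

# e.g. two collected samples per expression: [eyebrow-eye, eye-mouth] distances
samples = {
    "pleasure": [[10.0, 12.0], [11.0, 13.0]],
    "anger":    [[20.0, 22.0], [21.0, 23.0]],
}
db = build_expression_database(samples)
# db["pleasure"] == [10.5, 12.5]
```
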

Specifically, the processing module 300 can apply image-processing analysis and facial-feature-extraction algorithms to the facial image to identify the facial expression. In other words, the processing module 300 can process and analyze the facial image to obtain a plurality of corresponding expression parameters, such as the relative coordinates, relative distances, sizes, and shapes of the eyebrows, eyes, and mouth in the facial image. Referring to Tables 1-1 and 1-2, the relative coordinates are the coordinate points of the facial features, and the relative distance is calculated from the relative coordinates. After obtaining the current expression parameters from the current facial image, the processing module 300 can look up the facial expression and/or emotion corresponding to the obtained parameters, and finally select the corresponding preset action according to the current facial expression and/or emotion.

TABLE 1-1

  Feature    Coordinate points
  Eyebrow    (cx1, cy1), (cx2, cy2), ...
  Eye        (cx7, cy7), (cx8, cy8), ...
  Mouth      (cx15, cy15), ...

TABLE 1-2

  Expression    Coordinate points                                               Relative distance
  Pleasure      Eyebrow (x1, y1), (x2, y2), ...; Eye (x7, y7), (x8, y8), ...;   10
                Mouth (x15, y15), ...
  Anger         Eyebrow (x1, y1), (x2, y2), ...; Eye (x7, y7), (x8, y8), ...;   20
                Mouth (x15, y15), ...
  ...

In one embodiment, different expressions and/or emotions correspond to different expression parameters, and the expression parameters may include the relative coordinates (e.g., respective x and y values) of the eyebrows, eyes, and mouth. After the image capturing module 200 captures the current facial image of the user of the mobile device, the processing module 300 analyzes the relative coordinates of the eyebrows, eyes, and mouth in the image in real time and derives the relative distances between them from those coordinates. For example, when the relative distance is short, the user's current expression and/or emotion can be presumed to be pleasure; when it is long, anger.
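
The relative-distance analysis above can be sketched as follows. The landmark names, the single averaged distance, and the threshold separating pleasure from anger are illustrative assumptions for demonstration only.

```python
import math

def relative_distance(p1, p2):
    """Euclidean distance between two landmark coordinate points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_expression(landmarks, threshold=15.0):
    """Average the eyebrow-eye and eye-mouth distances and compare
    against a preset threshold, in the spirit of Table 1-2:
    short distance -> pleasure, long distance -> anger."""
    d1 = relative_distance(landmarks["eyebrow"], landmarks["eye"])
    d2 = relative_distance(landmarks["eye"], landmarks["mouth"])
    mean_d = (d1 + d2) / 2
    return "pleasure" if mean_d < threshold else "anger"

# distances 8 and 10 average to 9, below the threshold
mood = classify_expression({"eyebrow": (0, 0), "eye": (0, 8), "mouth": (0, 18)})
# mood == "pleasure"
```
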

Specifically, the image processing and analysis consists of an image processing method and a facial feature extraction and calculation method for identifying the user's facial expression. The image processing method may include techniques such as grayscale conversion, filtering, image binarization, edge detection, feature extraction, image compression, and image segmentation. In practice, a suitable image processing technique may be selected for the processing module 300 according to the image recognition method used.
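
Two of the preprocessing steps named above, grayscale conversion and image binarization, can be sketched on a plain nested-list image as follows; a practical implementation would use an image library such as OpenCV instead.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale
    using the ITU-R BT.601 luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Image binarization: map each pixel to 1 (foreground) or 0
    (background) by comparing against a fixed threshold."""
    return [[1 if px >= threshold else 0 for px in row]
            for row in gray_image]

img = [[(255, 255, 255), (0, 0, 0)]]   # one white pixel, one black pixel
binary = binarize(to_grayscale(img))
# binary == [[1, 0]]
```
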

The facial feature extraction and calculation methods include neural networks, support vector machines (SVM), template matching, active appearance models (AAM), conditional random fields (CRF), hidden Markov models (HMM), geometric modeling, and so on. The application and implementation of these facial feature extraction algorithms can be inferred by those skilled in the art and are therefore not described here.
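
As a minimal stand-in for the template-matching approach named above, a measured expression-parameter vector can be matched against the preset database by nearest neighbor. The database values here are illustrative assumptions, not values from the specification.

```python
import math

# Hypothetical preset database: one representative parameter vector
# (e.g. [eyebrow-eye, eye-mouth] distances) per expression.
DATABASE = {
    "pleasure": [10.0, 12.0],
    "anger":    [20.0, 22.0],
}

def match_expression(params, database=DATABASE):
    """Return the expression whose stored parameter vector is closest
    (Euclidean distance) to the measured one."""
    return min(database,
               key=lambda expr: math.dist(params, database[expr]))

result = match_expression([11.0, 12.5])
# result == "pleasure"
```
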

In one embodiment, the preset action includes controlling the communication information to be displayed dynamically according to the expression parameters. It is understood that the expression parameters derived from the current facial image correspond to different expressions and/or emotions, so parameters corresponding to different expressions and/or emotions correspond to different preset actions. For example, when the communication information includes one of, or a combination of, text, numeric, or symbol information, the communication information may be displayed dynamically according to the expression parameters. As shown in fig. 3, when analysis of the user's current facial image shows that the expression parameters represent a pleasant mood, the text, numeric, or symbol information in the communication information may be displayed in a wavy, jumping, or floating manner. The information may also be displayed dynamically in any other manner, as long as it expresses the user's corresponding expression and/or emotion; the invention is not limited in this respect.

In another embodiment, the preset action further includes adding a corresponding expression label according to the expression parameters, so that the expression label is displayed in combination with the communication information. For example, when the communication information includes one of, or a combination of, text, numeric, image, symbol, and voice information, a corresponding expression label may be added according to the expression parameters and displayed together with the communication information. As shown in fig. 4, when analysis of the user's current facial image shows that the expression parameters represent an angry mood, a "flame"-shaped expression label may be added to the background of the communication information; an expression label with the same or a similar meaning may also be displayed in combination with the communication information. The expression label may be an animation, a video, an emoticon abstracted from the user's current facial image, and so on, as long as it expresses the user's corresponding expression and/or emotion; the invention is not limited in this respect.
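
The two preset actions described above (dynamic display and an added expression label) can be sketched as a simple lookup; the action names and the mood-to-action mapping are illustrative assumptions only.

```python
# Hypothetical mapping from a detected mood to a preset action: either
# a dynamic display style, an added expression label, or both.
PRESET_ACTIONS = {
    "pleasure": {"display": "wave", "label": None},
    "anger":    {"display": None, "label": "flame"},
}

def apply_preset_action(message, mood):
    """Decorate a communication message with the preset action that
    corresponds to the detected mood."""
    action = PRESET_ACTIONS.get(mood, {})
    result = {"text": message}
    if action.get("display"):
        result["display_style"] = action["display"]   # e.g. wavy/jumping text
    if action.get("label"):
        result["background_label"] = action["label"]  # e.g. "flame" background
    return result

decorated = apply_preset_action("Why are you late?", "anger")
# decorated == {"text": "Why are you late?", "background_label": "flame"}
```
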

The execution module 400 is used for executing the preset action and applying it to the communication information for the user to operate on. Specifically, the user may decide whether to apply the preset action to the communication information according to whether the currently displayed communication information accurately expresses his or her current expression and/or emotion, and may decide whether to send the communication information according to actual needs.

Further, if the user considers that the currently displayed communication information accurately expresses his or her current expression and/or emotion, the communication information can be sent directly. If the user considers that it does not, the current preset action may be cancelled and not applied to the communication information; preferably, after receiving this instruction, the image capturing module 200 captures the user's current facial image again to reselect a corresponding preset action.

It should be noted that the types, physical structures, implementations, and/or connection manners of the input module 100, the image capturing module 200, the processing module 300, and the execution module 400 may be determined according to the actual implementation of the information communication system 10, and the embodiment is not limited in this respect. In addition, the various expressions and corresponding preset actions mentioned above are for illustration only and are not intended to limit the embodiment.

From the above embodiments, the present invention can be summarized as an information communication method, which is suitable for the information communication system 10 described in the above embodiments. Please refer to fig. 5 and fig. 6.

First, before step S100, the method preferably includes step S101: collecting a plurality of expression parameters corresponding to a plurality of expressions, compiling statistics on them, and presetting an expression parameter database.

In step S100, the user of the mobile device inputs a communication message through the input module 100, wherein the communication message may be one or a combination of a text message, a digital message, an image message, a symbol message, and a voice message.

Next, in step S200, because the image capturing module 200 is coupled to the input module 100, it is activated after receiving an input signal from the input module 100 or after the input module 100 completes input of the communication information. The image capturing module 200 then automatically captures the current facial image of the user of the device, with the user's consent (for example, the user/sender turns on the camera's auto-capture mode, or explicitly agrees that their image may be captured and potentially transmitted to a server for sharing in social software, text, or email messages).

Next, in step S300, since the processing module 300 has preset the correspondence between different expression parameters and preset actions, after the image capturing module 200 captures the current facial image of the user, the processing module 300 analyzes the image to obtain the expression parameters associated with it and selects the corresponding preset action according to the obtained parameters.

In one embodiment, the preset action includes controlling the communication information to be displayed dynamically according to the expression parameters. It is understood that the expression parameters derived from the current facial image correspond to different expressions and/or emotions, so parameters corresponding to different expressions and/or emotions correspond to different preset actions. For example, when the communication information includes one of, or a combination of, text, numeric, or symbol information, the communication information may be displayed dynamically according to the expression parameters; for instance, when analysis of the user's current facial image shows that the expression parameters represent a pleasant mood, the text, numeric, or symbol information in the communication information may be displayed in a wavy, jumping, or floating manner. The information may also be displayed dynamically in any other manner, as long as it expresses the user's corresponding expression and/or emotion; the disclosure is not limited in this respect.

In another embodiment, the preset action further includes adding a corresponding expression label according to the expression parameters, so that the expression label is displayed in combination with the communication information. For example, when the communication information includes one of, or a combination of, text, numeric, image, symbol, and voice information, a corresponding expression label may be added according to the expression parameters and displayed together with the communication information. For instance, when analysis of the user's current facial image shows that the expression parameters represent an angry mood, a "flame"-shaped expression label may be added to the background of the communication information, and an expression label with the same or a similar meaning may be displayed in combination with the communication information, as long as the expression label expresses the user's corresponding expression and/or emotion; the invention is not limited in this respect.

Finally, in step S400, the preset action is executed and applied to the communication information for the user to operate on. Specifically, the user may decide whether to apply the preset action to the communication information according to whether the currently displayed communication information accurately expresses his or her current expression and/or emotion, and may decide whether to send the communication information according to actual needs.

In addition, as shown in fig. 7, the method further includes step S401: determining whether the user wants to apply the current preset action to the current communication information.

Further, in step S402, if the user considers that the currently displayed communication information accurately expresses his or her current expression and/or emotion, the communication information can be sent directly; and

in step S403, if the user considers that the currently displayed communication information does not accurately express his or her current expression and/or emotion, the user may cancel the current preset action so that it is not applied to the communication information; preferably, after receiving this instruction, the image capturing module 200 captures the user's current facial image again to reselect a corresponding preset action.
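
Steps S100 through S403 can be sketched end to end as follows, with the capture, classification, selection, and confirmation steps passed in as stand-ins for the camera, processing, and execution modules described above.

```python
def communicate(message, capture_image, classify, select_action, user_accepts,
                max_retries=3):
    """Sketch of the flow in figs. 5-7: capture, classify, decorate,
    then let the user confirm (S402) or trigger a recapture (S403)."""
    for _ in range(max_retries):
        image = capture_image()                   # S200: capture facial image
        mood = classify(image)                    # S300: analyze parameters
        decorated = select_action(message, mood)  # apply preset action
        if user_accepts(decorated):               # S401/S402: confirm and send
            return decorated
        # S403: user rejected the preset action; recapture and reselect
    return message                                # fall back to the plain message

sent = communicate(
    "hello",
    capture_image=lambda: "img",
    classify=lambda img: "pleasure",
    select_action=lambda m, mood: f"{m} [{mood}]",
    user_accepts=lambda m: True,
)
# sent == "hello [pleasure]"
```
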

In summary, embodiments of the present invention provide an information communication method and system, which capture a current facial image of a user immediately after the user inputs communication information, so as to obtain a current expression and/or emotion of the user according to the facial image analysis, select a corresponding preset action according to the current expression and/or emotion of the user, and execute the preset action to apply to a communication message. By the method and the system, the real expression and/or emotion of the user when inputting or sending the communication message can be accurately and effectively known, the communication efficiency of the sender and the receiver is improved, and the vividness of the communication process is enhanced.

The present invention has been described in relation to the above embodiments, which are only exemplary of the implementation of the present invention. It should be noted that the disclosed embodiments do not limit the scope of the invention. Rather, it is intended that all such modifications and variations be included within the spirit and scope of this invention.
