Method, device and system for acquiring expression information

Document No.: 987769 | Publication date: 2020-11-06

Note: This technology, "Method, device and system for acquiring expression information", was created by 靳玉康 and 杨松鹤 on 2019-05-06. The application discloses a method, a device and a system for acquiring expression information. The method comprises: acquiring target information; judging whether the target information contains any one or more corpora in a preset database, wherein the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation; and, in a case where the target information contains any one or more corpora in the preset database, obtaining at least one piece of corresponding expression information based on the target information. The invention solves the technical problem in the related art that chat-expression generation methods need to collect user information, so that user privacy cannot be guaranteed and the image-processing performance loss is large.

1. A method for acquiring expression information comprises the following steps:

acquiring target information;

judging whether the target information contains any one or more corpora in a preset database, wherein the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation;

and, in a case where the target information contains any one or more corpora in the preset database, obtaining at least one piece of corresponding expression information based on the target information.
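By way of non-limiting illustration, the flow of claim 1 can be sketched as follows. This is a minimal sketch, not the patented implementation: the contents of `PRESET_DB`, the substring matching, and the corpus-to-expression mapping are all invented for the example.

```python
# Minimal sketch of claim 1 (hypothetical data and names throughout).
# The preset database maps corpora (modal particles, interjections,
# congratulatory words, symbols, ...) to candidate expressions.
PRESET_DB = {
    "haha": ["😄", "🤣"],        # interjection
    "congratulations": ["🎉"],   # congratulatory word
    "!!!": ["😮"],               # symbol
}

def get_expressions(target_info: str) -> list:
    """Return candidate expressions if the target information
    contains any one or more corpora from the preset database."""
    candidates = []
    for corpus, expressions in PRESET_DB.items():
        if corpus in target_info.lower():
            candidates.extend(expressions)
    return candidates

print(get_expressions("Congratulations on the launch!!!"))  # ['🎉', '😮']
```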

2. The method according to claim 1, wherein before judging whether the target information contains any one or more corpora in the preset database, the method further comprises:

acquiring historical information of a user;

performing word segmentation on the historical information to obtain a first word segmentation result;

and extracting, from the first word segmentation result, corpora whose frequency of occurrence exceeds a preset value, and adding the extracted corpora to the preset database.

3. The method of claim 2, wherein performing word segmentation on the historical information to obtain the first word segmentation result comprises:

classifying the historical information according to attribute categories;

and performing word segmentation on the historical information with the same attribute category to obtain the first word segmentation result.

4. The method according to claim 2, wherein extracting corpora whose frequency of occurrence exceeds the preset value from the first word segmentation result and adding the extracted corpora to the preset database comprises:

deleting common words from the first word segmentation result;

extracting, from the first word segmentation result with the common words deleted, corpora whose frequency of occurrence exceeds the preset value;

and adding the extracted corpora into the preset database.
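Claims 2 to 4 describe how the preset database is seeded from a user's own history. A toy sketch of that pipeline follows; the stop-word list, the threshold and the whitespace tokenizer are placeholders (a real segmenter such as jieba would be used for Chinese text).

```python
from collections import Counter

COMMON_WORDS = {"the", "a", "is", "to", "and", "ok"}  # hypothetical stop list
PRESET_VALUE = 2  # hypothetical frequency threshold

def build_preset_db(history: list) -> set:
    # Claim 2: perform word segmentation on the historical information
    # (a whitespace split stands in for a real word segmenter).
    words = [w for message in history for w in message.lower().split()]
    # Claim 4: delete common words, then extract corpora whose
    # frequency of occurrence exceeds the preset value.
    counts = Counter(w for w in words if w not in COMMON_WORDS)
    return {w for w, n in counts.items() if n > PRESET_VALUE}

history = ["haha that is great", "haha ok", "haha wow", "great news"]
print(build_preset_db(history))  # {'haha'}
```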

5. The method of claim 1, wherein before obtaining the at least one piece of corresponding expression information based on the target information, the method further comprises:

acquiring historical information of a user;

and training based on the historical information to obtain a trained machine learning model, wherein the machine learning model is used for processing the target information to obtain the at least one piece of expression information.

6. The method of claim 5, wherein training based on the historical information to obtain a trained machine learning model comprises:

performing word segmentation on the historical information to obtain a second word segmentation result, wherein the second word segmentation result comprises features of the historical information and expression information;

replacing the expression information in the second word segmentation result with an attribute category associated with the expression information;

and training on the replaced second word segmentation result to obtain the machine learning model.

7. The method of claim 6, wherein when there are a plurality of attribute categories associated with the expression information, one attribute category is determined according to a preset policy.
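Claims 5 to 7 can be read as ordinary supervised text classification: each historical message is a training sample, and the expression it carried, mapped to its attribute category, is the label. The sketch below uses scikit-learn purely for illustration; the mapping table and the data are invented.

```python
# Toy sketch of claims 5-7 (invented data; scikit-learn for brevity).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical expression-to-category mapping. Per claim 7, an
# expression linked to several categories would be resolved to one
# category by a preset policy (e.g. the most frequently used one).
EXPRESSION_TO_CATEGORY = {"😄": "joy", "🎉": "joy", "😢": "sadness"}

history = [("so happy today 😄", "😄"),
           ("we won the final 🎉", "🎉"),
           ("i failed the exam 😢", "😢")]

# Claim 6: treat each message as features and replace the expression
# it contains with the associated attribute category (the label).
texts = [message.replace(expr, "").strip() for message, expr in history]
labels = [EXPRESSION_TO_CATEGORY[expr] for _, expr in history]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)
print(model.predict(vectorizer.transform(["we are so happy"])))  # ['joy']
```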

8. The method of claim 5, wherein obtaining the at least one piece of corresponding expression information based on the target information comprises:

extracting at least one feature from the target information;

processing the at least one feature through the machine learning model to obtain an attribute category associated with the target information;

and obtaining at least one piece of expression information associated with the attribute category based on the attribute category.

9. The method of claim 8, wherein the at least one piece of expression information is derived from expression information in the historical information, and the number and positions of the at least one piece of expression information are random.
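For the online side described in claims 8 and 9, a standalone sketch is shown below; `classify` is a stand-in for the trained model of claim 5, and the candidate table mimics expressions recovered from the user's history.

```python
import random

# Hypothetical table built from the user's history (claim 9: the
# candidate expressions come from expressions the user actually used).
CATEGORY_TO_EXPRESSIONS = {"joy": ["😄", "🎉"], "sadness": ["😢"]}

def classify(target_info: str) -> str:
    """Stand-in for the machine learning model of claim 5, which
    would process features extracted from the target information."""
    return "sadness" if "sad" in target_info.lower() else "joy"

def pick_expressions(target_info: str) -> list:
    category = classify(target_info)                    # claim 8
    pool = CATEGORY_TO_EXPRESSIONS[category]
    # Claim 9: the number of returned expressions is random; their
    # insertion positions would likewise be randomized downstream.
    return random.sample(pool, k=random.randint(1, len(pool)))

print(pick_expressions("party at my place tonight"))  # e.g. ['🎉', '😄']
```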

10. The method of claim 1, wherein judging whether the target information contains any one or more corpora in the preset database comprises:

performing word segmentation on the target information to obtain a third word segmentation result;

and traversing each word in the third word segmentation result to judge whether the target information contains any one or more corpora in the preset database.
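Claim 10 spells out the containment test of claim 1 as tokenize-then-traverse. A compact sketch follows, again with a whitespace split standing in for a real word segmenter and a hypothetical database.

```python
# Sketch of claim 10 (hypothetical database contents).
PRESET_DB = {"haha", "wow", "congrats"}

def contains_corpus(target_info: str) -> bool:
    third_result = target_info.lower().split()  # third word segmentation result
    # Traverse each token and test membership in the preset database.
    return any(token in PRESET_DB for token in third_result)

print(contains_corpus("wow that was close"))  # True
```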

11. The method of claim 3, 6 or 8, wherein the attribute categories are obtained by:

extracting expression information from the historical information;

and classifying the expression information according to emotional characteristics to obtain the attribute categories.
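Claim 11 derives the attribute categories themselves by grouping historical expressions by an emotional characteristic. One way to sketch this, with an invented emotion lookup table:

```python
from collections import defaultdict

# Invented emotion lookup; a real system might use emoji metadata.
EMOTION_OF = {"😄": "joy", "🤣": "joy", "😢": "sadness", "😡": "anger"}

def attribute_categories(history_expressions: list) -> dict:
    """Group expressions extracted from the history (claim 11) by
    their emotional characteristic to form attribute categories."""
    categories = defaultdict(list)
    for expr in history_expressions:
        categories[EMOTION_OF.get(expr, "neutral")].append(expr)
    return dict(categories)

print(attribute_categories(["😄", "😢", "🤣"]))
# {'joy': ['😄', '🤣'], 'sadness': ['😢']}
```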

12. The method of claim 1, wherein after obtaining the at least one piece of corresponding expression information based on the target information, the method further comprises: adding the at least one piece of expression information to the target information.

13. The method of claim 12, wherein adding the at least one piece of expression information to the target information comprises:

displaying the at least one piece of expression information;

and if a preset operation is detected, adding the expression information on which the preset operation was performed to the target information.
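Claims 12 and 13 attach a chosen candidate to the message once a preset operation (for example, a tap on a displayed candidate) is detected. A minimal console-level sketch, with the selection index standing in for the detected operation:

```python
def add_expression(target_info: str, candidates: list, selected: int) -> str:
    print("candidates:", candidates)           # display step of claim 13
    return target_info + candidates[selected]  # add on the preset operation

print(add_expression("great news", ["🎉", "😄"], selected=0))  # great news🎉
```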

14. The method according to claim 1, wherein the target information is output directly in a case where the target information does not contain any corpus in the preset database.

15. A method for acquiring expression information comprises the following steps:

displaying the target information;

and displaying at least one piece of expression information associated with the target information, wherein, in a case where the target information contains any one or more corpora in a preset database, the at least one piece of corresponding expression information is obtained based on the target information, and the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation.

16. The method of claim 15, wherein before the at least one piece of corresponding expression information is obtained based on the target information, the method further comprises: judging whether the target information contains any one or more corpora in the preset database.

17. The method according to claim 15, wherein the target information is displayed in a case where the target information does not contain any corpus in the preset database.

18. The method of claim 15, wherein after displaying the at least one piece of expression information associated with the target information, the method further comprises: displaying the target information to which the at least one piece of expression information has been added.

19. A method for acquiring expression information comprises the following steps:

inputting target information on an interactive interface;

and, in a case where the target information contains any one or more corpora in a preset database, outputting at least one piece of expression information based on the target information, wherein the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation.

20. The method of claim 19, wherein after outputting the at least one piece of expression information based on the target information, the method further comprises:

if a preset operation on the interactive interface is detected, generating a trigger instruction, wherein the trigger instruction is used for adding the expression information on which the preset operation was performed to the target information;

and outputting, based on the trigger instruction, the target information to which the at least one piece of expression information has been added.

21. An apparatus for acquiring expression information, comprising:

an acquisition module, configured to acquire target information;

a judging module, configured to judge whether the target information contains any one or more corpora in a preset database, wherein the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation;

and a processing module, configured to obtain, in a case where the target information contains any one or more corpora in the preset database, at least one piece of corresponding expression information based on the target information.

22. A storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the method for acquiring expression information according to any one of claims 1 to 14.

23. A processor, characterized in that the processor is configured to run a program, wherein, when running, the program executes the method for acquiring expression information according to any one of claims 1 to 14.

24. A system for acquiring expression information, comprising:

a processor; and

a memory coupled to the processor and configured to provide the processor with instructions for the following processing steps:

acquiring target information;

judging whether the target information contains any one or more corpora in a preset database, wherein the corpus types in the preset database include at least one of the following: modal particles, interjections, congratulatory words, new internet words, symbols and intonation;

and, in a case where the target information contains any one or more corpora in the preset database, obtaining at least one piece of corresponding expression information based on the target information.

Technical Field

The application relates to the field of internet information processing, and in particular to a method, a device and a system for acquiring expression information.

Background

With the development of internet technology, network chat has become one of the main ways in which users communicate daily. Chat expressions are used to express a user's joy, anger, sadness and happiness; they convey messages vividly and intuitively and are well loved by users. However, the types and number of chat expressions provided by a system are limited, which often fails to satisfy users' differing preferences for chat expressions at different times and greatly affects the speed and experience of selecting a desired chat expression.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

fig. 1 is a block diagram of a hardware structure of a computer terminal (or mobile device) for implementing a method for acquiring expression information according to a first embodiment of the present application;

fig. 2 is a schematic diagram of a computer terminal (or mobile device) as a client terminal according to the first embodiment of the present application;

fig. 3 is a flowchart of an alternative method for acquiring expression information according to an embodiment of the present application;

FIG. 4 is a flowchart of an alternative process for offline training on historical information according to an embodiment of the present application;

FIG. 5 is a flowchart of an alternative process for online processing of target information according to the first embodiment of the present application;

FIG. 6 is a flowchart of an overall method for automatically adding an expression according to an embodiment of the present application;

fig. 7 is a flowchart of an alternative method for acquiring expression information according to a second embodiment of the present application;

fig. 8 is a flowchart of an alternative method for acquiring expression information according to a third embodiment of the present application;

fig. 9 is a schematic diagram of an alternative apparatus for acquiring expression information according to a fourth embodiment of the present application;

fig. 10 is a schematic diagram of an alternative apparatus for acquiring expression information according to an embodiment of the present application;

fig. 11 is a schematic diagram of an alternative apparatus for acquiring expression information according to a sixth embodiment of the present application; and

fig. 12 is a block diagram of an alternative computer terminal according to a seventh embodiment of the present application.

Detailed Description

In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
