Electronic device and method for controlling electronic device

Document No.: 1146537    Publication date: 2020-09-11

Abstract: This technology, "Electronic device and method for controlling electronic device," was designed and created by 金叡薰, 尹昭正, and 徐钻源 on 2019-02-01. An electronic device and a method for controlling the electronic device are provided. The method for controlling an electronic device includes: obtaining data for determining a context corresponding to the electronic device based on determining an occurrence of an event for outputting information; inputting the obtained data to a first model trained by an artificial intelligence algorithm and obtaining information about a person located in the vicinity of the electronic device; inputting the obtained information about the person and the information about the event to a second model trained by an artificial intelligence algorithm, and obtaining output information corresponding to the event; and providing the obtained output information.

1. A method for controlling an electronic device, the method comprising:

obtaining data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information;

inputting the obtained data to a first model trained by an artificial intelligence algorithm, and obtaining information about a person located in the vicinity of the electronic device based on the obtained data input to the first model;

inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;

obtaining output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and

providing the obtained output information.

2. The method of claim 1, wherein the data for identifying a context corresponding to the electronic device comprises at least one of:

image data obtained by a camera included in the electronic device or an external device connected to the electronic device; and

audio data obtained by a microphone included in the electronic device or an external device connected to the electronic device.

3. The method of claim 1, wherein the second model is trained to:

determine detailed information of the event as the output information based on the obtained information on the person including information on a main user of the electronic device without including information on any other person; and

determine brief information of the event as the output information based on the obtained information on the person including both information on the main user of the electronic device and information on another person, the brief information being less detailed than the detailed information of the event.

4. The method of claim 1, further comprising:

obtaining feedback information for the provided output information based on the user input,

wherein the second model is retrained or further trained based on the obtained feedback information for the provided output information.

5. The method of claim 4, further comprising:

inputting the obtained information about the person and information about another event for outputting information to the retrained or further trained second model; and

obtaining output information corresponding to the other event based on the obtained information about the person and the information about the other event being input to the retrained or further trained second model.

6. The method of claim 4, wherein the obtained feedback information for the provided output information includes at least one of user reaction information to the provided output information, control command information for an event input by a user after the provision of the output information, and information found or searched by the user after the provision of the output information.

7. The method of claim 1, wherein at least one of the first and second models is stored in an external server.

8. The method of claim 1, wherein:

the second model is trained to obtain an output method of the event based on the obtained information on the person and the information on the event being input to the second model; and

the providing comprises providing the obtained output information according to the obtained output method of the event.

9. The method of claim 1, wherein the event comprises at least one of: a text message reception event, an email reception event, a call reception event, an information reception event, a Social Network Service (SNS) reception event, and a push notification reception event.

10. An electronic device, comprising:

a communication interface;

a display;

a speaker;

at least one memory for storing instructions; and

at least one processor configured to execute the stored instructions to:

obtain data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information;

input the obtained data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the electronic device based on the obtained data input to the first model;

input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;

obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and

control at least one of the display and the speaker to provide the obtained output information.

11. The electronic device of claim 10, wherein the data for identifying a context corresponding to the electronic device comprises at least one of:

image data obtained by a camera included in the electronic device or an external device connected to the electronic device; and

audio data obtained by a microphone included in the electronic device or an external device connected to the electronic device.

12. The electronic device of claim 10, wherein the second model is trained to:

determine detailed information of the event as the output information based on the obtained information on the person including information on a main user of the electronic device without including information on any other person; and

determine brief information of the event as the output information based on the obtained information on the person including both information on the main user of the electronic device and information on another person, the brief information being less detailed than the detailed information of the event.

13. The electronic device of claim 10, wherein:

the at least one processor is further configured to execute the stored instructions to obtain feedback information for the provided output information based on a user input; and

wherein the second model is retrained or further trained based on the obtained feedback information for the provided output information.

14. The electronic device of claim 13, wherein the at least one processor is configured to further execute the stored instructions to:

input the obtained information about the person and information about another event for outputting information to the retrained or further trained second model; and

obtain output information corresponding to the other event based on the obtained information about the person and the information about the other event being input to the retrained or further trained second model.

15. An apparatus, comprising:

at least one memory for storing instructions; and

at least one processor configured to execute the stored instructions to:

receive, from another device, data for identifying a context corresponding to the other device based on an event for outputting information occurring at the other device;

input the received data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the other device based on the received data input to the first model;

input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;

obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and

control transmission of the obtained output information to the other device.

Technical Field

The present disclosure relates to an electronic device and a method for controlling the electronic device. More particularly, the present disclosure relates to an electronic device capable of providing output information of an event according to a context and a method for controlling the electronic device.

Furthermore, the present disclosure relates to an Artificial Intelligence (AI) system for simulating functions of the human brain, such as cognition and decision making, using machine learning algorithms, and applications thereof.

Background

Recently, Artificial Intelligence (AI) systems for implementing intelligence corresponding to a human level have been used in various fields. Unlike previous rule-based intelligent (smart) systems, AI systems are systems in which machines learn, make decisions, and act upon those decisions on their own. As AI systems become more widely used, their recognition rates improve, and they understand user preferences or characteristics more accurately. Thus, previous rule-based intelligent systems are gradually being replaced by deep-learning AI systems.

AI techniques include machine learning (e.g., deep learning) and element techniques that use machine learning.

Machine learning is an algorithmic technique that can classify and learn features of input data autonomously. Element techniques are techniques that use machine learning algorithms (e.g., deep learning) to mimic functions of the human brain (e.g., cognition, decision making), and include technical fields such as language understanding, visual understanding, inference/prediction, knowledge representation, and motion control.

Various fields to which AI techniques are applied are as follows. Language understanding is a technology for recognizing, applying, and processing human language and characters, and includes natural language processing, machine translation, dialog systems, question answering, voice recognition and synthesis, and the like. Visual understanding is a technique of recognizing and processing objects, just like human vision. The field of visual understanding includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image improvement, and the like. Inference/prediction is a technique for determining information and making logical inferences and predictions. The inference/prediction domain includes knowledge/probability-based inference, optimization prediction, preference-based planning, recommendation, and the like. Knowledge representation is a technique for performing automated processing of human experience information using knowledge data. Knowledge representation domains include knowledge construction (data generation/classification), knowledge management (data usage), and the like. Motion control is a technique of controlling the autonomous driving ability of a vehicle and/or the motion of a robot. The field of motion control includes movement control (navigation, collision avoidance, driving), steering control (behavior control), and the like.

In recent years, electronic devices have become capable of detecting various events for providing information to users. As one example, when an alarm event is received, the electronic device outputs the alarm event regardless of the context of the electronic device. For example, when an alarm event is received in the electronic device, the electronic device outputs information related to the alarm event regardless of whether another user is present in the vicinity of the electronic device, the current location, and the like. That is, even if the user does not want to share these contents, the contents of the notification event are shared with others, and therefore, the privacy of the user is not protected. Furthermore, when these contents are shared in a situation where the user does not wish to share them, resources of the electronic device (e.g., processing speed, processing power, battery life, display resources, etc.) are unnecessarily consumed, thereby impairing the functionality of the device.

Disclosure of Invention

Technical problem

An electronic device capable of providing output information of an event according to a context of the electronic device and a method for controlling the same are provided.

Additional aspects will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the presented embodiments.

Technical scheme

According to one aspect of the present disclosure, a method for controlling an electronic device is provided. The method comprises the following steps: obtaining data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information; inputting the obtained data to a first model trained by an artificial intelligence algorithm, and obtaining information about a person located in the vicinity of the electronic device based on the obtained data input to the first model; inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtaining output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and providing the obtained output information.

According to another aspect of the present disclosure, an electronic device is provided. The electronic device includes: a communication interface; a display; a speaker; at least one memory for storing instructions; and at least one processor configured to execute the stored instructions to: obtaining data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information; inputting the obtained data to a first model trained by an artificial intelligence algorithm, and obtaining information about a person located in the vicinity of the electronic device based on the obtained data input to the first model; inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtaining output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and controlling at least one of the display and the speaker to provide the obtained output information.

According to another aspect of the present disclosure, an apparatus is provided. The apparatus comprises: at least one memory for storing instructions; and at least one processor configured to execute the stored instructions to: receive, from another device, data for identifying a context corresponding to the other device based on an event for outputting information occurring at the other device; input the received data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the other device based on the received data input to the first model; input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and control transmission of the obtained output information to the other device.

According to another aspect of the present disclosure, there is provided a non-transitory computer-readable recording medium having a program recorded thereon, the program being executable by a computer to perform the method.

Drawings

The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:

fig. 1 is a diagram illustrating the use of an electronic device that provides output information of an event according to a context, according to an embodiment;

fig. 2 is a diagram showing a system including an electronic device and a server according to an embodiment;

fig. 3A is a block diagram of an electronic device according to an embodiment;

fig. 3B is a block diagram of a detailed configuration of an electronic device according to an embodiment;

fig. 4, 5A, and 5B are diagrams provided to explain an example of obtaining control commands related to an alarm event according to a context, in accordance with various embodiments;

fig. 6, 7A, and 7B are diagrams provided to explain an example of providing output information of an alarm event according to a context, according to another embodiment;

fig. 8, 9A and 9B are diagrams provided to explain an example of providing user request information according to a context, according to another embodiment;

fig. 10 is a flowchart illustrating a method for controlling an electronic device according to an embodiment;

fig. 11 is a flowchart illustrating a method of providing output information of an alarm event according to a context by an electronic device through an artificial intelligence model, according to another embodiment;

fig. 12 is a block diagram of a configuration of an apparatus for learning and using an Artificial Intelligence (AI) model according to an embodiment;

fig. 13A and 13B are block diagrams of specific configurations of a learning section and a determination section according to various embodiments;

figs. 14 and 15 are flowcharts of network systems employing artificial intelligence models, according to various embodiments; and

fig. 16 is a flowchart provided to explain a method of providing output information of an event according to a context by an electronic device, according to an embodiment.

Detailed Description

The foregoing and/or other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which like reference characters designate like elements throughout. However, it is to be understood that the present disclosure is not limited to certain embodiments described herein, but includes various modifications, equivalents, and/or alternatives to the embodiments of the disclosure.

In the description, the terms "have", "may have", "include" or "may include" indicate that there are corresponding features (for example, numerical values, functions, operations, or constituent elements such as components), but do not preclude the presence of additional features.

In the description, the expressions "A and/or B", "A or B", "at least one of A and B", "at least one of A or B", "one or more of A and B" and "one or more of A or B" may include all possible combinations of the items listed together. For example, the term "A and/or B" or "at least one of A and B" may denote: (1) at least one A; (2) at least one B; or (3) both at least one A and at least one B.

As used herein, the expressions "first", "second", "1st" or "2nd" may modify various elements regardless of their order and/or importance, and are used herein to distinguish one element from another (unless explicitly stated otherwise) without otherwise limiting the corresponding elements.

If it is described that a particular element (e.g., a first element) is "operably or communicatively coupled," "operably or communicatively coupled" or "connected" to another element (e.g., a second element), it may be understood that the particular element may be connected to the other element directly or through another element (e.g., a third element). Meanwhile, when it is described that one element (e.g., a first element) is "directly coupled" or "directly connected" to another element (e.g., a second element), it is understood that there is no element (e.g., a third element) between the element and the other element.

In the description, the term "configured to" may be interchangeably used with, for example, "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of", depending on the circumstances and/or context. The term "configured to" does not necessarily mean "specifically designed to" at the hardware level. In some cases, the term "device configured to" may mean that the device is "capable of" doing something together with another device or component. For example, the phrase "processor configured to perform A, B, and C" may refer to a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a general-purpose processor (e.g., a Central Processing Unit (CPU) or an application processor) that may perform the corresponding operations by executing one or more software programs stored in a memory device.

An electronic device according to various embodiments may include, for example, at least one of a smartphone, a tablet Personal Computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), an MP3 player, a multimedia player, a medical device, a camera, a wearable device, and so forth. The wearable device may include at least one of an accessory type (e.g., watch, ring, bracelet, anklet, necklace, glasses, contact lens, or Head Mounted Device (HMD)), a fabric- or clothing-embedded type (e.g., electronic clothing), a body-attachment type (e.g., skin pad or tattoo), or a bio-implantable circuit. In some embodiments, the electronic device may include, for example, at least one of a television, a Digital Video Disc (DVD) player, an optical recording medium player (e.g., a Blu-ray disc player), an audio processing device, a smart appliance, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., SAMSUNG HOMESYNC, APPLE TV, or GOOGLE TV), a gaming machine (e.g., MICROSOFT XBOX, SONY PLAYSTATION), an electronic dictionary, an electronic key, a camcorder, an electronic photo frame, and the like.

However, it is to be understood that various other embodiments may not be so limited. For example, according to one or more other embodiments, the electronic device may include various medical devices (e.g., portable medical measurement devices (blood glucose monitor, heart rate monitor, blood pressure measurement device, body temperature measurement device, etc.), a Magnetic Resonance Angiography (MRA) device, a Magnetic Resonance Imaging (MRI) device, a Computed Tomography (CT) device, a camera device, an ultrasound device, etc.), a navigation device, a Global Navigation Satellite System (GNSS) device, an Event Data Recorder (EDR), a Flight Data Recorder (FDR), a vehicle infotainment device, a marine electronic device (e.g., marine navigation device, gyrocompass, etc.), an avionic device, a security device, a vehicle head unit, an industrial or home robot, an unmanned aerial vehicle, a cash machine or Automated Teller Machine (ATM) of a financial institution, a point of sale (POS) device of a store, an Internet of Things device (e.g., bulbs, sensors, sprinklers, valves, locks, fire alarms, temperature controllers, street lights, toasters, sporting goods, hot water tanks, heaters, boilers, etc.), and the like.

Further, the term "user" may refer to a person using the electronic device or a device using the electronic device (e.g., an Artificial Intelligence (AI) electronic device).

Fig. 1 is a diagram illustrating the use of an electronic device that provides output information of an event 10 according to a context, according to an embodiment.

First, the electronic device may receive, from an external source, a signal for sensing an event (e.g., an alarm event) for providing information. For example, as shown in part (a) of fig. 1, the electronic device may receive a signal from an external source for sensing an event 10 in which an email for booking concert tickets is received. In addition to the email reception event, the event may be implemented as various events, such as a text message reception event, a call reception event, an information request reception event, a Social Network Service (SNS) reception or notification event, a chat service reception or notification event, a push notification event, and the like.

When receiving a signal for sensing an event, the electronic device 100 may obtain ambient context information of the electronic device 100. For example, the electronic device 100 may obtain data for sensing the surrounding context of the electronic device 100 by using sensors (e.g., cameras, GPS sensors, etc.) provided in the electronic device 100, or data (e.g., schedule data) stored in the electronic device 100. However, it is to be understood that this is merely an example, and that one or more other embodiments may not be so limited. For example, the electronic device 100 may obtain data for sensing the ambient context of the electronic device 100 from an external device (e.g., an IoT device) that is interlocked with or communicatively connected to the electronic device 100. The context information may be information related to the space in which the electronic device 100 is located or information related to a user using the electronic device 100, and may include information related to at least one user appearing in the space in which the electronic device 100 is located. However, this is merely an example, and the context information may include information related to the user's calendar, information related to the location of the electronic device 100, and the like.
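The context-gathering step described above might be sketched, purely for illustration and not as part of the claimed method, as a function that bundles whatever context sources are available into one structure. Every name below (`gather_context_data` and its parameters) is hypothetical:

```python
# Hypothetical sketch: collect data usable for identifying the device's
# context from on-device sensors, stored data, and connected IoT devices.

def gather_context_data(camera=None, gps=None, schedule=None, iot_readings=None):
    """Bundle the available context sources; missing sources are
    simply omitted rather than blocking the pipeline."""
    data = {}
    if camera is not None:
        data["image"] = camera        # e.g. a captured frame
    if gps is not None:
        data["location"] = gps        # e.g. (latitude, longitude)
    if schedule is not None:
        data["schedule"] = schedule   # e.g. calendar entries
    if iot_readings is not None:
        data["iot"] = iot_readings    # e.g. readings from linked devices
    return data

sample = gather_context_data(camera=b"<frame>", gps=(37.5, 127.0))
print(sorted(sample.keys()))  # → ['image', 'location']
```

Keeping absent sources out of the bundle mirrors the passage above: the device may rely on its own sensors, stored data, or external IoT devices, in any combination.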

The electronic device 100 may input data for determining (or recognizing) context information to a first model trained by an artificial intelligence algorithm, and obtain the context information of the electronic device 100 as output data of the model. Specifically, the electronic device 100 may input data for sensing the surrounding context of the electronic device to the first model, and obtain information on a person located in the space where the electronic device is present. For example, the electronic device 100 may input an image captured by the electronic device 100 or by an external device to the first model and, in response, obtain information about a user present in the space in which the electronic device 100 is located. The first model may exist within the electronic device 100, but this is merely an example. For example, according to another embodiment, the first model may reside in an external server. In this case, the electronic device 100 may transmit data for determining the context information to the external server, the external server may obtain the context information by means of the first model, and the electronic device 100 may obtain the context information from the external server 200.
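As a rough sketch of the first-model step above (sensor data in, information about nearby persons out), the stub below stands in for the trained model; the `PersonInfo` type, the identity labels, and the fixed return value are all illustrative assumptions, not the actual model:

```python
# Hypothetical stand-in for the "first model": maps captured sensor
# data to information about persons near the device.

from dataclasses import dataclass
from typing import List

@dataclass
class PersonInfo:
    identity: str       # e.g. "main_user", "parent", "unknown"
    confidence: float   # recognition confidence in [0, 1]

def first_model(image_data: bytes, audio_data: bytes) -> List[PersonInfo]:
    """A real system would run face/voice recognition here;
    this stub just returns a fixed answer for illustration."""
    # ... feature extraction and model inference would happen here ...
    return [PersonInfo("main_user", 0.97)]

context = first_model(b"<camera frame>", b"<mic samples>")
print([p.identity for p in context])  # → ['main_user']
```

The point is only the interface: raw context data goes in, and a structured list of recognized persons comes out for the second model to consume.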

The electronic device 100 may transmit the information related to the event and the obtained context information to the external server 200. The information related to the event may include at least one of information related to the type of the event, information related to the caller of the event, the call time of the event, and the content of the event. For example, the electronic device 100 may transmit the information related to the event and information related to a person located in the space where the electronic device 100 is present to the external server. Although in the present embodiment the electronic device 100 transmits the context information to the external server 200, it should be understood that one or more other embodiments are not limited thereto. For example, according to another embodiment, the electronic device 100 may transmit data for obtaining the context information to the external server 200.

The external server 200 may obtain output information of the event based on the information related to the event and the obtained context information. In detail, the external server 200 may input the information about the received event and the obtained context information to a second model trained by an artificial intelligence algorithm, and obtain output information of the event. The second model is a model trained to obtain output information of an event by using information related to the event and context information (or data for obtaining context information) as input data, and may be retrained based on feedback information input by a user. In addition, the output information of the event may include at least a part of the content included in the event and information related to the event.

Furthermore, the external server 200 can determine not only output information of the event but also an output method of the event by means of the second model. For example, the external server 200 may determine at least one of an output method using a speaker, an output method using a display, an output method using vibration, an output method using a Light Emitting Diode (LED) (e.g., a dedicated notification LED), and a combination of two or more of the above methods as an output method of an event using the second model.

For example, in the case where the information related to the event includes information related to a concert ticket reservation and the context information includes information related to users present in the vicinity of the electronic device 100, the external server 200 may obtain output information of the event by using the second model. In the case where the user is alone in the living room in which the electronic device 100 is present, as shown in part (b) of fig. 1, the external server 200 may use the second model to obtain the output information of the event: "You have an email from 'Inxxxpark' notifying you that ticket sales for the Exo concert held at the Gocheok Dome will start at 7:30 p.m. on October 15th." In other words, in the case where the user is alone, the external server 200 may obtain output information including the specific details of the content included in the event. In the case where the user's parent is present together with the user in the living room where the electronic device 100 is located, the external server 200 may use the second model to obtain the output information "You have an email from 'Inxxxpark'" related to the event, as shown in part (c) of fig. 1. In other words, in the case where the user is present with another person, the external server 200 may obtain brief output information about the event reception itself.

As another example, the external server 200 may determine an output method of the event by means of the second model. In the case where the user is present alone in the living room where the electronic device 100 is located, the external server 200 may determine an output method of the event as an output method using a display and a speaker by means of the second model. In the case where the parent is present in the living room where the electronic device 100 is located together with the user, the external server 200 may determine the output method of the event as the output method using the display by means of the second model. According to another embodiment, the external server 200 may determine the output information of the event by using the second model as described above, and may determine the output method of the event by using the second model as described above.
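The second-model behavior described in the two paragraphs above (detailed output when the user is alone, brief output and a quieter channel when others are present) can be sketched as a simple rule-based stand-in; the trained model would learn this mapping rather than hard-code it, and all names below are hypothetical:

```python
# Hypothetical stand-in for the "second model": given who is nearby
# and the event, choose how much detail to reveal and how to output it.

def second_model(people, event):
    if people == ["main_user"]:
        info = event["detail"]                   # full content: user is alone
        method = ["display", "speaker"]
    else:
        # others are present: reveal only that the event occurred
        info = f"You have {event['type']} from {event['sender']}"
        method = ["display"]                     # quieter output channel
    return info, method

event = {"type": "an email", "sender": "Inxxxpark",
         "detail": "Ticket sales start at 7:30 p.m. on October 15th."}

print(second_model(["main_user"], event))
print(second_model(["main_user", "parent"], event))
```

Running this prints the detailed text with both output methods for the first call, and the brief "You have an email from Inxxxpark" with display-only output for the second, matching the two cases illustrated in parts (b) and (c) of fig. 1.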

The external server 200 may transmit output information of the event and/or information related to the output method to the electronic apparatus 100.

The electronic apparatus 100 may provide output information of the event based on the obtained output information of the event and the obtained information related to the output method.

In the above-described embodiment, the electronic device 100 may obtain the output information and the output method of the event by interlocking or communicating with the external server 200 including the second model. However, this is merely an example, and it is to be understood that one or more other embodiments are not limited thereto. For example, according to another embodiment, an artificial intelligence model may be stored in the electronic device 100, and the electronic device 100 may directly obtain the output information and the output method of the event by means of the second model.

In addition, the electronic apparatus 100 may obtain feedback information input by the user while or after providing the output information of the event. The feedback information may include at least one of reaction information of the user with respect to the output information (e.g., facial expression, behavior, etc. of the user after the output information is output), control command information for an event input by the user after the output information is output, and information discovered by the user after the output information is output.

The electronic device 100 may transmit feedback information input by the user to the external server 200. The external server 200 may retrain or further train the second model by using the received feedback information. According to another embodiment, in which the artificial intelligence model is stored in the electronic device, the electronic device may directly retrain or further train the second model by using feedback information input by the user. The relearning process of the artificial intelligence model will be described in detail below with reference to the accompanying drawings.
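One minimal way to picture the relearning described above is an online update in which user feedback shifts the model's preference for detailed output. This is a toy sketch under assumed feedback labels; the actual retraining procedure of the second model is not specified here:

```python
class FeedbackTrainableModel:
    """Toy stand-in for the second model that adjusts its preference
    for detailed output from user feedback (e.g., reading the entire
    message after a brief notification suggests more detail was wanted)."""

    def __init__(self):
        self.detail_score = 0.0  # > 0 favors detailed output

    def predict(self, context):
        return "detailed" if self.detail_score > 0 else "brief"

    def retrain(self, feedback):
        # Feedback indicating the user wanted more content raises the
        # score; feedback indicating disinterest lowers it.
        if feedback == "entire message read":
            self.detail_score += 1.0
        elif feedback == "notification dismissed":
            self.detail_score -= 1.0

model = FeedbackTrainableModel()
before = model.predict({})
model.retrain("entire message read")
after = model.predict({})
```

After one piece of feedback the model switches from brief to detailed output for the same context, mirroring how the retrained second model would produce different output information for a subsequent event.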

The first model and/or the second model referred to in the above embodiments may be a judgment model trained based on an artificial intelligence algorithm, for example, a model based on a neural network. The trained first model and the trained second model may be designed to mimic the structure of the human brain on a computer and may include a plurality of network nodes that have weight values and simulate neurons of a human neural network. Each of the plurality of network nodes may form a connection relationship such that the neurons exchange signals through synapses, simulating synaptic activity. Additionally, the trained first model and/or the trained second model may, for example, include a neural network model or a deep learning model developed from a neural network model. The plurality of network nodes in the deep learning model may be located at different depths (or layers) from each other and may exchange data according to a convolution connection relationship. For example, the trained first model and the trained second model may include a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), a Bidirectional Recurrent Deep Neural Network (BRDNN), and the like, but the present disclosure is not limited thereto.

Further, the electronic apparatus 100 may obtain output information of an event by using a personal secretary or assistant program, which is a program exclusive to AI (or an artificial intelligence agent). The personal assistant program may be a program dedicated to providing Artificial Intelligence (AI)-based services, and may be run by an existing general-purpose processor (e.g., a CPU) or by an additional, dedicated AI-exclusive processor (e.g., a GPU).

For example, in a case where a predetermined user input (e.g., an icon touch corresponding to the personal assistant chat robot, a user voice including a predetermined word such as "BIXBY") is input, a button provided in the electronic device 100 (e.g., a button for running the artificial intelligence agent) is pressed, or an event is sensed, the artificial intelligence agent may be operated (or run). In addition, the artificial intelligence agent may transmit information related to the event and context information to an external server and provide output information of the event received from the external server.

The artificial intelligence agent may also be operated when (or based on) a predetermined user input is sensed, a button provided in the electronic device 100 (e.g., a button for running the artificial intelligence agent) is pressed, or an event is sensed. Alternatively, the artificial intelligence agent may be in a pre-operational state before the predetermined user input is sensed, before the button provided in the electronic device 100 is selected, or before the event is sensed. After the predetermined user input is sensed, after the button provided in the electronic device 100 is selected, or after the event is sensed, the artificial intelligence agent of the electronic device 100 may obtain the output information of the event acquired based on the information related to the event and the context information. In addition, the artificial intelligence agent may be in a standby state before the predetermined user input is sensed, before the button provided in the electronic apparatus 100 is selected, or before the event is sensed. In this regard, the standby state may be a state in which the electronic apparatus 100 waits to sense a predefined user input for initiating the operation of the artificial intelligence agent. When (or based on) the predetermined user input is sensed, the button provided in the electronic apparatus 100 is selected, or the event is sensed while the artificial intelligence agent is in the standby state, the electronic apparatus 100 may operate the artificial intelligence agent and obtain the output information of the event based on the information related to the event and the context information.

In an example embodiment, in the case where the electronic device 100 directly obtains the output information of the event according to the context by means of the artificial intelligence model, the artificial intelligence agent may control the second model and obtain the output information of the event. The artificial intelligence agent may likewise control the operation of the external server as described above.

Fig. 2 is a diagram illustrating a system including the electronic device 100 and the server 200 according to an embodiment. As shown in fig. 2, the system includes an electronic device 100 and a server 200. In fig. 2, the system includes only one server 200. However, this is merely an example, and the server 200 may be implemented as various servers or distributed servers, including a server for obtaining context information, a server for obtaining output information of an event, a server for obtaining information requested by a user, and the like.

The electronic device 100 may receive a signal for sensing an event. The event may include at least one of a text message reception event, an e-mail reception event, a call reception event, an information request reception event, an SNS reception event, a push notification event, an application notification event, and the like.

When an event is sensed, the electronic device 100 may obtain context information. The electronic device 100 may obtain the context information by using at least one of data sensed from the sensor, pre-stored data, and data obtained from the external device. The context information may include information about a user who is present in a space in which the electronic device 100 is located, information about a user schedule, information about a location in which the electronic device 100 is located, and the like.
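The context-gathering step can be sketched as merging the three data sources the passage names: sensor data, pre-stored data, and data obtained from an external device. The function and field names below are illustrative assumptions, not part of the disclosure:

```python
def obtain_context(sensor_data, stored_data, external_data):
    """Merge the three sources named in the text into one context dict.
    Sensor data is applied last, so live readings override stale
    stored or external values on key collisions."""
    context = {}
    for source in (stored_data, external_data, sensor_data):
        context.update(source)
    return context

context = obtain_context(
    sensor_data={"people": ["primary_user"], "location": "living room"},
    stored_data={"schedule": "meeting at 3 p.m."},
    external_data={"location": "living room"},
)
```

The resulting dictionary carries the kinds of context information listed above: who is present, the user's schedule, and the device's location.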

The electronic device 100 may transmit information about the sensed event and context information to the external server 200. Alternatively, the electronic device 100 may transmit data for obtaining contextual information (e.g., capturing an image of a location, surrounding environment, or area in which the electronic device 100 is located) instead of the contextual information.

The electronic device 100 may provide output information of the event received from the external server 200. The electronic apparatus 100 may provide output information of the event according to the output method determined by the external server 200.

In addition, the electronic apparatus 100 may obtain feedback information of output information of the event and transmit the obtained feedback information to the external server 200.

The external server 200 may store a first model trained to obtain context information (e.g., information about a person located in a space where the electronic device is present) by using data for sensing a context (e.g., at least one of image data and voice data obtained by a camera and a microphone included in the electronic device 100 or an external device connected to the electronic device 100) as input data, and a second model trained to obtain output information of an event by using information about the event and the context information as input data. The external server 200 may obtain the output information of the event by means of the trained second model from context information received from the electronic device 100 or from the first model (e.g. within the server or from another server). The external server 200 may transmit the obtained output information of the event to the electronic apparatus 100.
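The server-side flow, in which the first model maps raw sensed data to context information and the second model maps the event plus that context to output information, can be sketched as a two-stage pipeline. Both "models" below are placeholder functions standing in for the trained models:

```python
def first_model(sensed_data):
    """Placeholder for the trained first model: raw image/voice-derived
    data in, context information (people near the device) out."""
    return {"people": sensed_data.get("detected_faces", [])}

def second_model(event, context):
    """Placeholder for the trained second model: event + context in,
    output information out."""
    if context["people"] == ["primary_user"]:
        return event["full_text"]
    return "You have an email"

def server_handle(sensed_data, event):
    # Chain the two models exactly as the passage describes: context
    # from the first model feeds the second model together with the event.
    context = first_model(sensed_data)
    return second_model(event, context)

out = server_handle(
    sensed_data={"detected_faces": ["primary_user", "sister"]},
    event={"full_text": "Hi, here is the full email body..."},
)
```

Because another person is detected in the sensed data, the pipeline returns only the brief notification rather than the full email body.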

In addition, when feedback information is received from the electronic device 100, the external server 200 may retrain the second model based on the received feedback information. Thereafter, the external server 200 may obtain output information of the event according to the context information received from the electronic device 100 by means of the retrained second model. It should be understood that in one or more other embodiments, at least one of the first model and the second model may be stored in the electronic device 100.

Fig. 3A is a block diagram of an electronic device 100 according to an embodiment. As shown in fig. 3A, electronic device 100 includes communication interface 110, display 120, speaker 130, memory 140, and processor 150. The elements shown in fig. 3A are examples of implementing example embodiments of the present disclosure, and suitable hardware/software elements of a level that will be apparent to those skilled in the art may further be included in electronic device 100, or the elements shown in fig. 3A may be omitted.

The communication interface 110 may communicate with an external device via various communication methods. In particular, the communication interface 110 may receive an alarm event from an external source. In addition, the communication interface 110 may transmit information related to an event and context information to the external server 200 and receive output information of the event from the external server 200.

The communication interface 110 may communicate with various types of external devices according to various communication methods. For example, the communication interface 110 (or communicator) may include at least one of a Wi-Fi chip, a bluetooth chip, and a wireless communication chip. The processor 150 may perform communication with an external server or various types of external devices by using the communication interface 110. In addition, the communication interface 110 may communicate with an external device through various communication chips such as a Near Field Communication (NFC) module and the like.

The display 120 may provide various screens. Specifically, the display 120 may display output information of an event. The display 120 may display the output information of an event in the form of a pop-up window. However, this is merely an example, and the output information of the alarm event may be displayed in a full screen mode or in a notification area or column of the screen.

The speaker 130 may include various audio output circuitry and be configured to output various types of alarm sounds or voice messages in addition to various audio data on which an audio processor has performed various processing operations such as decoding, amplification, and noise filtering. In particular, the speaker 130 may output the output information of the event in audio form. A plurality of speakers 130 may be provided in a plurality of regions of the electronic apparatus (e.g., an upper end region of a front surface of the electronic apparatus, a lower end region of the electronic apparatus, etc.).

The memory 140 may store instructions or data regarding at least one of the other elements of the electronic device 100. The memory 140 may be implemented as a non-volatile memory, a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like. The memory 140 is accessed by the processor 150, and reading/recording/modifying/deleting/updating of data in the memory 140 may be performed according to instructions from the processor 150. According to an embodiment of the present disclosure, the memory 140 may include one or more of an internal memory within the processor 150, a Read Only Memory (ROM) and a Random Access Memory (RAM), and a memory card (e.g., a micro Secure Digital (SD) card or a memory stick) attached to the electronic device 100. Further, the memory 140 may store programs, data, and the like for constituting various types of screens to be displayed in the display area of the display 120.

For example, the memory 140 may store a program exclusive to Artificial Intelligence (AI). In this regard, the program exclusive to the AI may be a personalized program for providing various services to the electronic device 100. Specifically, the program exclusive to AI may obtain output information of the event according to a context of the electronic device 100 or a user using the electronic device 100. Further, in one embodiment, the memory 140 may store at least one of a first model trained to obtain context information by using data sensed by the electronic device 100 and/or data obtained from an external source, and/or a second model trained to obtain output information of an event according to context.

Processor 150 may be electrically connected to communication interface 110, display 120, speaker 130, and memory 140, and control the overall operation and functionality of electronic device 100. In particular, the processor 150 may provide output information of events according to a context of the electronic device 100 or a user using the electronic device 100 through various programs (or instructions) stored in the memory 140.

For example, the electronic device 100 may execute instructions stored in the memory 140 and, when a signal for sensing an event is input, obtain context information of the electronic device 100, receive output information of the alarm event obtained by inputting information related to the alarm event and the context information to an artificial intelligence model trained by an artificial intelligence algorithm from the external server 200 via the communication interface 110, and control at least one of the display 120 and the speaker 130 to output the received output information of the event.

Upon sensing an alarm event, electronic device 100 may obtain information related to at least one user or person present in proximity to electronic device 100 (or in the vicinity of electronic device 100). For example, the electronic device 100 may capture a peripheral area of the electronic device 100 by means of a sensor (e.g., a camera) present in the electronic device 100 and analyze the captured image and obtain information about at least one user present around the electronic device 100. Alternatively, the electronic apparatus 100 may analyze an image captured by a camera interlocked with the electronic apparatus 100 or connected to the electronic apparatus 100 or a user voice obtained by a microphone interlocked with the electronic apparatus 100 or connected to the electronic apparatus 100 and obtain information on at least one user present around the electronic apparatus 100. Alternatively, the electronic apparatus 100 may obtain information on at least one user present around the electronic apparatus 100 by using schedule information stored in the electronic apparatus 100. However, these are merely examples, and it should be understood that electronic device 100 may obtain information related to at least one user present around electronic device 100 by other methods (e.g., by receiving information corresponding to or identifying another electronic device of another person within a predetermined area or proximity of electronic device 100, by peer-to-peer communication, communication with a base station, communication with a proximate device sensor, a communication discovery response, communication with an external server, etc.).

As described above, the electronic apparatus 100 may capture a peripheral area of the electronic apparatus 100 by means of a sensor (e.g., a camera) in the electronic apparatus 100, and input the captured image into the trained first model, and obtain context information (e.g., information about a person located around the electronic apparatus 100 or in the vicinity of the electronic apparatus 100).

The processor 150 may obtain output information of the event obtained by the trained second model from the external server 200. For example, in a case where the context information includes information related to a main user using the electronic device 100 and does not include information related to any other person (e.g., in the vicinity of the electronic device 100), the second model may obtain or provide, as an output, output information including detailed information about the event (or output information including instructions for outputting the detailed information about the event). Where the contextual information includes information about a primary user using the electronic device 100 along with information about another person, the electronic device 100 may obtain or provide output information that includes brief or less information about the event (or instructions for outputting less detailed information about the event, such as instructions for outputting only the sender of the incoming notification event).

Further, the processor 150 may obtain feedback information for the output information according to the user input and control the communication interface 110 to transmit the feedback information for the output information to the external server 200 (or directly to the second model, wherein the second model is stored in the electronic device 100). The second model may be retrained based on the feedback information for the output information, thereby providing improved functions of the external server 200 (or the electronic device 100) by improving the accuracy of the AI process or the model. In the case where information about another event and context information are input, the second model may obtain output information of the another event based on a relearning or retraining result. That is, the second model may be updated based on feedback information input by the user.

The feedback information for the output information may include at least one of: the information processing apparatus includes information of a reaction of a user to output information, control command information for an event input by the user after the output information is output, and information discovered by the user after the output information is output.

In the above-described embodiment, the context information is information related to users who are present around the electronic apparatus 100. However, this is merely an example, and it is to be understood that one or more other embodiments are not limited thereto. For example, the contextual information may include various information including information related to a user's calendar, information related to a place where the electronic device 100 is located, and the like.

Further, the processor 150 may receive, from the external server 200 via the communication interface 110, information related to the output method of the event obtained by inputting the information related to the alarm event and the context information to the artificial intelligence model. Further, the processor 150 may control at least one of the speaker 130 and the display 120 to provide output information of the event based on the output method of the event. The output method may include an output method through the display 120, an output method through the speaker 130, an output method through vibration, an output method through an auxiliary notification device (e.g., an LED), an output method through a combination thereof, and the like. However, it is to be understood that one or more other embodiments are not so limited and other approaches may also be used.
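The dispatch of output information to the channels listed here (display, speaker, vibration, auxiliary LED) can be sketched as a routing function over whichever channels the device actually exposes. The handler interface below is a hypothetical illustration:

```python
def dispatch_output(text, methods, device):
    """Route the output text to each requested channel; `device` maps
    channel names to output callables (hypothetical interface).
    Channels the device lacks are silently skipped."""
    delivered = []
    for method in methods:
        handler = device.get(method)
        if handler is not None:
            handler(text)
            delivered.append(method)
    return delivered

shown = []
device = {
    "display": lambda t: shown.append(("display", t)),
    "speaker": lambda t: shown.append(("speaker", t)),
}
# The model requested display + vibration, but this device has no
# vibration motor, so only the display output is delivered.
delivered = dispatch_output("You have an email", ["display", "vibration"], device)
```

This mirrors how the processor 150 would drive only the subset of output elements (display 120, speaker 130, etc.) selected by the determined output method.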

Fig. 3B is a block diagram of a detailed configuration of the electronic apparatus 100 according to the embodiment. As shown in fig. 3B, electronic device 100 may include communication interface 110, display 120, speaker 130, memory 140, sensors 160, input interface 170, and processor 150. Since the communication interface 110, the display 120, the speaker 130, and the memory 140 are the same as or similar to those described with reference to fig. 3A, redundant description thereof will be omitted below.

The sensor 160 may obtain sensing data for obtaining status information of the electronic device 100. Sensor 160 may include a Global Positioning System (GPS) sensor to obtain location information of electronic device 100 and/or may include at least one of various types of motion sensors, such as an accelerometer sensor, a gyroscope sensor, a magnetic sensor, etc., to obtain motion information of electronic device 100. Additionally or alternatively, sensor 160 may include an image sensor (e.g., a camera) to capture images of a peripheral area of electronic device 100. Additionally or alternatively, sensors 160 may include sensors capable of obtaining environmental information such as temperature, humidity, etc. of areas surrounding electronic device 100 and/or a microphone used to collect voice data.

The input interface 170 may receive various user inputs and transmit the received user inputs to the processor 150. In particular, the input interface 170 may include at least one of a touch sensor, a (digital) pen sensor, a pressure sensor, a key, or a microphone. The touch sensor may use, for example, at least one of a capacitive method, a resistive method, an infrared method, and an ultrasonic method. The (digital) pen sensor may, for example, be part of a touch panel or include an additional sheet or layer for recognition. The key may, for example, include at least one of a physical button, an optical key, or a keypad. The microphone may be configured to receive user speech and may be provided inside the electronic device 100. However, this is merely an example, and it should be understood that the microphone may be provided external to the electronic device 100 and electrically or communicatively connected to the electronic device 100.

For example, the input interface 170 may obtain the input signal according to a predetermined user touch for selecting an icon corresponding to a program exclusive to artificial intelligence or a user input for selecting a button provided outside the electronic device 100. In addition, the input interface 170 may transmit an input signal to the processor 150. In addition, the input interface 170 may receive user input to obtain feedback information for output information of the event.

The processor 150 (or controller) may control the overall operation of the electronic device 100 by using or executing various types of programs stored in the memory 140.

The processor 150 may include a RAM 151, a ROM 152, a graphic processor 153, a main Central Processing Unit (CPU)154, first to nth interfaces 155-1 to 155-n, and a bus 156. The RAM 151, the ROM 152, the graphic processor 153, the main CPU154, and the first to nth interfaces 155-1 to 155-n may be interconnected by a bus 156.

Fig. 4, 5A, and 5B are diagrams provided to explain an example of obtaining control commands related to an alarm event according to a context, in accordance with various embodiments.

As shown in fig. 4, the electronic device 100 may sense an alarm event in operation S410. The alarm event may be a call reception event in which a call is received from the outside as shown in part (a) of fig. 5A. However, this is merely an example, and the alert event may include various other events, such as a text message reception event, an email reception event, a push notification event, and so forth.

In operation S420, the electronic device 100 may obtain context information. The electronic device 100 may obtain the context information by using at least one of data obtained from the sensor 160 included in the electronic device 100, data stored in the electronic device 100, data obtained from an external device interlocked with the electronic device 100 or communicably connected to the electronic device 100, and the like. For example, the electronic device 100 may obtain, as context information, information about a space (e.g., a predetermined room, a predetermined area, an image capturing area of the sensor 160, a sensor area, etc.) in which the electronic device is located (i.e., in the vicinity of the electronic device) and a user present in the space in which the electronic device 100 is located.

The electronic device 100 may transmit the context information and the information related to the alarm event to the server 200.

The server 200 may generate or determine a control command for outputting an alarm event by using the trained artificial intelligence model in operation S440. In detail, the server 200 may generate a control command for performing an operation on the alarm event according to the current context. The artificial intelligence model may be a model trained to generate or determine control commands for outputting alarm events by using the contextual information and information about the alarm events as input data. For example, in the case where the context information includes information about "only the primary user is present in the vehicle", the server 200 may obtain or determine "automatically connect to a speaker phone in the vehicle" as a control command of the primary user mode as shown in part (b) of fig. 5A. In addition, in the case where the context information includes information about "appearing in the vehicle together with the boss B", the server 200 may obtain or determine "automatically connect to the bluetooth headset" as a control command of the default sub-user mode as shown in part (c) of fig. 5A.

The server 200 may transmit a control command (or information related to the control command or information indicating the control command) to the electronic device 100 in operation S450.

In operation S460, the electronic device 100 may perform an operation related to the alarm event according to the control command. That is, the electronic apparatus 100 may perform an operation related to an alarm event according to the control command by using an output method determined according to a context (e.g., context information). For example, in a case where only a primary user is present in the vehicle, the electronic apparatus 100 may perform an automatic connection operation using a speaker phone in the vehicle as a primary user mode according to a control command. In the case where the main user is present in the vehicle together with his/her boss B, the electronic apparatus 100 may perform an automatic connection operation using a bluetooth headset or an earphone as a default sub-user mode according to a control command.
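The mode selection in this example, speakerphone when the primary user is alone in the vehicle and Bluetooth headset when the boss is also present, can be sketched as a context-to-command mapping. The hand-written rules below illustrate what the trained artificial intelligence model would learn from data; the names are assumptions:

```python
def call_control_command(context):
    """Choose how to handle an incoming call in the vehicle based on
    who is present (illustrative stand-in for the trained model)."""
    occupants = context["occupants"]
    if occupants == ["primary_user"]:
        return "connect_speakerphone"   # primary-user mode
    return "connect_bluetooth_headset"  # default sub-user mode

alone_cmd = call_control_command({"occupants": ["primary_user"]})
boss_cmd = call_control_command({"occupants": ["primary_user", "boss_B"]})
```

After retraining on the feedback described below, the learned mapping for the second context could instead produce a command such as sending an automatic reply text message, which is the update shown in fig. 5B.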

In operation S470, the electronic device 100 may receive an input of feedback information according to a user input. The feedback information may be information related to a user command input by the user to the electronic device after the operation related to the alarm event is performed. For example, in a case where the primary user is present in the vehicle together with his/her boss B, the electronic apparatus 100 performs automatic connection using the Bluetooth headset according to the control command. Then, when the primary user cancels the connection with the Bluetooth headset and transmits the text message "I will connect again later", the electronic apparatus 100 may obtain corresponding feedback information, for example, "canceled the connection with the Bluetooth headset and transmitted a text message".

In operation S480, the electronic device 100 may transmit the received feedback information to the external server 200.

In operation S490, the external server 200 may retrain the artificial intelligence model based on the received feedback information. In detail, the external server 200 may perform operations related to alarm events and then retrain the artificial intelligence model based on information related to user commands input by the user to the electronic device. Thus, the external server 200 may reflect the user feedback information and update the artificial intelligence model according to the context.

For example, in the case where an alarm event is received as shown in part (a) of fig. 5B, after the artificial intelligence model is updated by the feedback information, when information "only the primary user is present in the vehicle" is included in the context information, the server 200 may obtain "automatically connect to a speaker phone in the vehicle" as a control command of the primary user mode as shown in part (B) of fig. 5B. In the case where the context information includes information about "the main user appears in the vehicle together with the boss B", the server 200 may obtain "automatically send reply text message 'i will call again later'" as a control command of the target perception mode as shown in part (c) of fig. 5B. According to another embodiment, the electronic device 100 may directly perform retraining or further training of the artificial intelligence model stored therein by using the received feedback information.

Fig. 6, 7A and 7B are diagrams provided to explain an example of providing output information of an alarm event according to a context, according to another embodiment.

As shown in fig. 6, the electronic device 100 may sense an alarm event in operation S610. The alarm event may be an event of receiving an e-mail from the outside as shown in part (a) of fig. 7A. However, this is merely an example, and it should be understood that the alarm event may include various events, such as a text message reception event, a call reception event, a push notification event, and the like.

In operation S620, the electronic device 100 may obtain context information. For example, the electronic apparatus 100 may obtain information about a space in which the electronic apparatus 100 is located and a user who appears in the space in which the electronic apparatus 100 is located, as the context information.

In operation S630, the electronic device 100 may transmit context information and information related to the sensed event to the server 200.

In operation S640, the server 200 may obtain or determine output information of the alarm event by using the trained artificial intelligence model. In detail, the server 200 may obtain or determine output information of the alarm event according to the current context. The artificial intelligence model may be a model trained to obtain output information for an alarm event by using information about the alarm event and contextual information as input data. For example, in the case where the context information includes information about "only the primary user is present in the room", the server 200 may obtain the entire contents of the e-mail as the output information of the alarm event as shown in part (b) of fig. 7A. In addition, in the case where the context information includes information about "appearing in the room together with sister B", the server 200 may obtain a message "you have an email" as output information of the alarm event as shown in part (c) of fig. 7A.

The server 200 may transmit output information of the alarm event to the electronic device 100 in operation S650. According to another embodiment, the server 200 may send instructions or indication information indicating output information of the alarm event.

In operation S660, the electronic device 100 may provide output information of the alarm event. In other words, the electronic apparatus 100 may provide output information of an event obtained from the external server 200 (or obtained based on an instruction from the server). For example, where only the primary user is present in the room, the electronic device 100 may provide the entire contents of the email as output information for the alarm event. In case the primary user appears in the room together with his/her sister B, the electronic device 100 may provide a message "you have an email" as output information of the alarm event.

In operation S670, the electronic device 100 may receive an input of feedback information according to a user input. The feedback information may be information related to a user command input to the electronic device 100 by the primary user after providing the output information of the alarm event. For example, in a case where the primary user appears in the room together with his/her sister B, the electronic device 100 may output a message "you have an email" as output information of the alarm event. Then, when the primary user commands to read the entire contents of the email or the primary user forwards the email to his/her sister B, the electronic device 100 may obtain corresponding feedback information, e.g., "entire message read" or "message forwarded".

In operation S680, the electronic device 100 may transmit the received feedback information to the external server 200.

In operation S690, the external server 200 may retrain the artificial intelligence model based on the received feedback information. In detail, the external server 200 may provide output information of the alarm event and then retrain the artificial intelligence model based on information about a user command input to the electronic device 100 by a primary user in response to the output information of the alarm event. Thus, the external server 200 may associate (relate) or determine user feedback information according to context and update the artificial intelligence model.
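The feedback-driven update of operation S690 may be sketched as follows. A real implementation would retrain a neural model on the feedback; here a simple context-to-preference table, with hypothetical names (`FeedbackStore`, the context keys, and the feedback strings), stands in for that retraining.

```python
# Illustrative sketch of associating user feedback with a context and updating
# the model's behavior (operation S690). A preference table stands in for
# retraining an artificial intelligence model.

from collections import defaultdict

class FeedbackStore:
    def __init__(self):
        # Maps a context key to the user's preferred output style.
        self.preference = defaultdict(lambda: "short_notice")

    def record(self, context_key: str, feedback: str) -> None:
        # Feedback such as "entire message read" or "message forwarded" implies
        # that the user wanted the full contents even in this context.
        if feedback in ("entire message read", "message forwarded"):
            self.preference[context_key] = "full_contents"

    def output_style(self, context_key: str) -> str:
        return self.preference[context_key]

store = FeedbackStore()
store.record("with_sister_B", "entire message read")
print(store.output_style("with_sister_B"))  # -> full_contents
```

After such an update, the same context that previously produced only "you have an email" yields the full contents, matching the behavior shown in fig. 7B.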

For example, in the case where a mail reception event is received as shown in part (a) of fig. 7B after the artificial intelligence model is updated by the feedback information, when the context information includes the information "only the primary user is present in the room", the server 200 may obtain the entire contents of the e-mail as the output information of the alarm event, as shown in part (b) of fig. 7B. In addition, even in the case where the context information includes information indicating that "the primary user is present in the room together with his/her sister B", the server 200 may obtain the entire contents of the e-mail as the output information of the alarm event based on the updated or retrained artificial intelligence model, as shown in part (c) of fig. 7B.

In the above embodiments, the artificial intelligence model obtains the output information of the alarm event. However, this is merely an example, and it should be understood that the artificial intelligence model may also (or alternatively) determine a method for outputting the information.

Fig. 8, 9A and 9B are diagrams provided to explain an example of providing user request information according to a context, according to another embodiment.

As shown in fig. 8, the electronic device 100 may receive an input of a command for requesting information (i.e., a request command) in operation S810. For example, the command for requesting information may be a command for requesting current weather information. However, this is merely an example, and it should be understood that the command may be a user command requesting other information.

In operation S820, the electronic device 100 may obtain context information. For example, the electronic device 100 may obtain, as the context information, information about a space in which the electronic device 100 is located and schedule information of a primary user using the electronic device 100.

In operation S830, the electronic device 100 may transmit a request command and context information to the server 200.

In operation S840, the server 200 may obtain the user request information by using the trained artificial intelligence model. In detail, the server 200 may obtain the information requested by the user according to the current context. The artificial intelligence model may be a model trained to obtain the information requested by the user, by using the request command and the context information as input data. For example, in the case where the context information includes information such as "scheduled to go out soon", the server 200 may obtain, as the user request information, a message such as "today's weather guide → a guide focusing on what to wear according to the weather (such as the concentration of fine dust, the temperature, and the intensity of the wind)", as shown in part (b) of fig. 9A. In addition, in the case where the context information includes information such as "staying at home for the rest of the day", the server 200 may obtain the same message as the user request information, as shown in part (c) of fig. 9A.

The server 200 may transmit user request information to the electronic device 100 in operation S850.

In operation S860, the electronic device 100 may provide the user request information. In this example, regardless of whether the user is scheduled to go out, the electronic device 100 may provide, as the user request information, information such as "today's weather guide → a guide focusing on what to wear according to the weather (such as the concentration of fine dust, the temperature, and the intensity of the wind)".

In operation S870, the electronic device 100 may receive an input of feedback information according to a user input. The feedback information may be information that the user searches for after the user request information is provided. For example, in a case where the user stays at home for the rest of the day, when the primary user searches for whether the weather is suitable for housework (e.g., ventilation, yard work, lighting, laundry, etc.), the electronic device 100 may obtain feedback information such as "searched weather information related to housework".

In operation S880, the electronic device 100 may transmit the received feedback information to the external server 200.

In operation S890, the server 200 may retrain the artificial intelligence model based on the received feedback information. In detail, the server 200 may provide the user request information and then retrain the artificial intelligence model based on information found or searched for by the primary user using the electronic device 100. Thus, the external server 200 may reflect the user feedback information according to the context and update the artificial intelligence model.

For example, in the case where a request command for searching for information is received after the artificial intelligence model is updated by the feedback information, when the context information includes the information "scheduled to go out soon" as shown in part (a) of fig. 9B, the server 200 may obtain, as the user request information, a message such as "today's weather guide → a guide focusing on what to wear according to the weather (such as the concentration of fine dust, the temperature, and the intensity of the wind)", as shown in part (b) of fig. 9B. In addition, in the case where the context information includes the information "staying at home for the rest of the day", the server 200 may obtain, as the user request information, a message such as "a guide on whether the weather is good for ventilation, lighting, and laundry (such as the concentration of fine dust)", as shown in part (c) of fig. 9B.
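The request-handling behavior after retraining may be sketched as follows; the function name `weather_guidance`, the schedule keys, and the rule-based branches are hypothetical stand-ins for the retrained artificial intelligence model.

```python
# Sketch of context-dependent user-request handling after retraining: the
# (hypothetical) model now returns housework-oriented guidance when the user
# will stay home. Rule-based logic stands in for the trained model.

def weather_guidance(context: dict) -> str:
    if context.get("schedule") == "going_out_soon":
        return "Weather guide focused on what to wear (fine dust, temperature, wind)"
    if context.get("schedule") == "staying_home":
        return "Guide on whether the weather suits ventilation, lighting, and laundry"
    return "General weather guide"

print(weather_guidance({"schedule": "staying_home"}))
# -> Guide on whether the weather suits ventilation, lighting, and laundry
```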

In the above-described embodiments, the artificial intelligence model obtains at least one of the output information of the alarm event, a control command, and/or the user request information based on the information related to the alarm event (or the information related to the request command) and the context information. However, this is merely an example, and it should be understood that the artificial intelligence model may obtain at least one of the output information of an alarm event, a control command, and user request information by using other information. In detail, the artificial intelligence model may obtain at least one of the output information of an alarm event, a control command, and user request information based on user history information, user preference information, and the like. For example, in a case where there is history information indicating that the primary user has booked concert tickets together with his/her sister, or preference information indicating that the primary user prefers his/her sister to be present, the artificial intelligence model may output the entire contents of the received email.

As described above, the electronic apparatus 100 may provide various services according to the context, thereby protecting the privacy of the user and providing the user with optimal content. In addition, by providing services according to the context, the functionality of the device is improved at least in the following way: improved privacy and convenience for the user; improved accuracy of autonomous operation of the device (or server); and improved conservation of resources (e.g., battery life, display elements, processing resources, etc.) that would otherwise be expended in outputting content or alerts that are not intended by the user.

Fig. 10 is a flowchart illustrating a method for controlling an electronic device according to an embodiment.

In operation S1010, the electronic device 100 may recognize whether a signal for sensing an alarm event is input. The alarm event may be implemented as various events such as an email reception event, a text message reception event, an information request reception event, an SNS reception event, a push notification event, and the like. In addition, the electronic apparatus 100 may receive an input of a command requesting information from a user in addition to the alarm event.

When a signal for sensing an alarm event is received in operation S1010-Y, the electronic device 100 may obtain context information around the electronic device 100. The context information may include information about a space in which the electronic device 100 is located, information about at least one user present in the space in which the electronic device 100 is located, schedule information of a primary user using the electronic device 100, and the like.

The electronic device 100 may receive output information for an alarm event by inputting the context information and information related to the alarm event into an artificial intelligence model trained by an artificial intelligence algorithm. In detail, the electronic device 100 may transmit the context information and the information related to the alarm event to the external server 200. In case that the external server obtains the output information of the alarm event by means of the artificial intelligence model, the electronic device 100 may receive the output information of the alarm event from the external server 200.

In operation S1040, the electronic device 100 may provide output information of the received alarm event.

FIG. 11 is a flow diagram illustrating a method of providing output information of an alarm event according to context by an electronic device through an artificial intelligence model, according to another embodiment.

In operation S1110, the electronic device 100 may obtain a signal for sensing an alarm event.

In operation S1120, the electronic device 100 may obtain context information around the electronic device 100.

In operation S1130, the electronic device 100 may obtain output information of the alarm event by using an artificial intelligence model. The artificial intelligence model may be stored in the electronic device 100 and/or may be controlled by a dedicated artificial intelligence program (e.g., a personal assistant program) of the electronic device 100. Further, the artificial intelligence model may be a model trained to obtain output information of an alarm event by using information about the alarm event and context information as input data. The artificial intelligence model may determine at least one of the output information of the alarm event and a method for outputting the alarm event.
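The two-stage pipeline of operations S1110 to S1130 may be sketched as follows. The function names `first_model` and `second_model`, the dictionary keys, and the rule-based bodies are hypothetical stand-ins for models trained by an artificial intelligence algorithm.

```python
# Hypothetical sketch of the on-device pipeline: a first model maps sensed
# data to context information, and a second model maps (event, context) to
# output information and an output method.

def first_model(sensor_data: dict) -> dict:
    """Stand-in for the trained first model: sensed data -> context information."""
    return {"people_present": sensor_data.get("detected_faces", [])}

def second_model(event: dict, context: dict) -> tuple:
    """Stand-in for the trained second model: (event, context) -> (output, method)."""
    if context["people_present"] == ["primary_user"]:
        return event["full_contents"], "speaker"       # full contents, read aloud
    return "You have a notification", "screen_banner"  # privacy-preserving banner

sensed = {"detected_faces": ["primary_user"]}
event = {"type": "push", "full_contents": "Package delivered."}
output, method = second_model(event, first_model(sensed))
print(output, method)  # -> Package delivered. speaker
```

Note that the second model here returns both the output information and the output method, matching the last sentence above.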

In operation S1140, the electronic device 100 may provide output information. In detail, the electronic device 100 may provide output information of the alarm event obtained through the artificial intelligence model. The output information for the alarm event may be provided according to an output method determined by an artificial intelligence model.

In operation S1150, the electronic device 100 may obtain feedback information according to the user input. The feedback information may include at least one of reaction information of the user to the output information, control command information input by the user for or in response to an alarm event after the output information is output, and information found or searched by the user after the output information is output. However, it is to be understood that one or more other embodiments are not so limited.

In operation S1160, the electronic device 100 may retrain or further train the artificial intelligence model based on the feedback information. That is, the electronic apparatus 100 may retrain the artificial intelligence model based on feedback information obtained according to the user input, thereby adaptively providing output information of the alarm event according to circumstances. As a result, the functionality of the electronic device 100 is improved at least in the following way: improved privacy and convenience for the user; improved accuracy of autonomous operation and user personalization of the device 100; and improved conservation of resources (e.g., battery life, display elements, processing resources, etc.) that would otherwise be expended in outputting content or alerts that are not intended by the user.

Fig. 12 is a block diagram of a configuration of an apparatus 1200 for learning and using an Artificial Intelligence (AI) model according to an embodiment.

Referring to fig. 12, a device 1200 (e.g., an electronic device or an external server) may include at least one of a learning part 1210 and a determining part 1220. The device 1200 of fig. 12 may correspond to the electronic device 100 or the external server 200 of fig. 2.

The learning part 1210 may use learning data to generate or train a first model having criteria for obtaining context information and a second model having criteria for obtaining output information of an event. That is, the learning part 1210 may generate artificial intelligence models having determination criteria by using the collected learning data.

For example, the learning part 1210 may generate, train, or update a first model to obtain context information around the electronic device 100 using data sensed by the electronic device 100 or data sensed by an external device as learning data.

As another example, the learning part 1210 may generate, train, or update the second model to obtain output information (or an output method) of an event, by using information about the event and context information as learning data.

The determining part 1220 may use predetermined data as input data of the trained first model and obtain context information around or corresponding to the electronic device 100. In addition, the determining part 1220 may use predetermined data as input data of the trained second model and obtain output information of an event.

For example, the determining part 1220 may obtain context information around or corresponding to the electronic device 100 by using data sensed by the electronic device 100 or data sensed by an external device as input data.

As another example, the determination section 1220 may use information about the event and contextual information as input data for a trained artificial intelligence model and obtain (or estimate or infer) output information for the event.

In one embodiment, the learning part 1210 and the determining part 1220 may be included in the external server 1200. However, this is merely an example, and it should be understood that at least one of the learning part 1210 and the determining part 1220 may be included in a different external device or in the electronic device 100 in various other embodiments. In detail, at least a portion of the learning part 1210 and at least a portion of the determining part 1220 may be implemented as software modules or manufactured in the form of at least one hardware chip and mounted in the electronic device 100. For example, at least one of the learning part 1210 and the determining part 1220 may be manufactured in the form of a hardware chip dedicated to artificial intelligence (AI), or may be manufactured as a part of a general-purpose processor (e.g., a CPU or an application processor) or a graphics-dedicated processor (e.g., a GPU) and mounted in the various electronic devices described above. In this regard, a hardware chip dedicated to artificial intelligence may be a dedicated processor specialized for probabilistic operations, and may exhibit higher performance than a general-purpose processor so as to facilitate processing of computational operations in the field of artificial intelligence, such as machine learning. Further, by including a separate, dedicated processor for artificial intelligence, the functionality of the device may be improved, at least by reducing the load on the general or main processor (e.g., the CPU).

When the learning part 1210 and the determining part 1220 are implemented as software modules (or program modules including instructions), the software modules may be stored in a non-transitory computer-readable medium. In this case, the software modules may be provided by an Operating System (OS) or by a predetermined application. Alternatively, a part of the software modules may be provided by the Operating System (OS), and the remaining part may be provided by a predetermined application.

The learning part 1210 and the determining part 1220 may be mounted on one electronic device or on separate electronic devices, respectively. For example, one of the learning part 1210 and the determining part 1220 may be included in the electronic device 100, and the other may be included in the external server 200. In addition, the learning part 1210 may provide model information constructed by the learning part 1210 to the determining part 1220 in a wired or wireless manner, and data input to the determining part 1220 may be provided to the learning part 1210 as additional learning data.

Fig. 13A is a block diagram of a learning portion 1210 and a determination portion 1220 according to one or more embodiments.

Referring to part (a) of fig. 13A, the learning part 1210 according to one or more embodiments may include a learning data obtaining part 1210-1 and a model learning part 1210-4. In addition, the learning part 1210 may further optionally include at least one of a learning data pre-processor 1210-2, a learning data selecting part 1210-3, and a model evaluating part 1210-5.

The learning data obtaining part 1210-1 may obtain learning data for the first model for obtaining context information. In one embodiment, the learning data obtaining part 1210-1 may obtain data obtained by a sensor provided in the electronic device 100, data received from an external device, or the like, as the learning data.

In addition, the learning data obtaining section 1210-1 may obtain learning data of a second model for obtaining output information of the event. In one embodiment, the learning data obtaining part 1210-1 may obtain information about an event, context information, and the like as the learning data. In addition, the learning data obtaining part 1210-1 may obtain user history information, user preference information, and the like as learning data to obtain output information of an event. The learning data may be data collected or tested by the learning portion 1210 or a manufacturer of the learning portion 1210.

The model learning part 1210-4 may train the first model using the learning data to establish criteria for obtaining context information. In addition, the model learning part 1210-4 may train the second model using the learning data to establish criteria for obtaining output information of an event. For example, the model learning part 1210-4 may train at least one of the first model and the second model through supervised learning that uses at least a part of the learning data as the criteria for obtaining output information of an event. Alternatively, the model learning part 1210-4 may train at least one of the first model and the second model through unsupervised learning in which the criteria for obtaining output information of an event are found by self-learning using the learning data, without specific supervision. Further, the model learning part 1210-4 may train at least one of the first model and the second model through reinforcement learning that uses, for example, feedback on whether a result of a determination based on the learning is correct. Further, the model learning part 1210-4 may train at least one of the first model and the second model by using a learning algorithm including, for example, error back-propagation or gradient descent.
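The supervised, gradient-descent training mentioned above may be illustrated numerically with a single logistic unit. The data, learning rate, and epoch count below are made up for illustration and are not part of the disclosed models.

```python
# Toy illustration of supervised learning with gradient descent (the gradient
# of the cross-entropy loss is the error back-propagated through one unit).

import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# (context features, label): label 1 = "output full contents", 0 = "short notice"
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 0.0), ([1.0, 0.2], 1.0)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(200):                       # epochs of stochastic gradient descent
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y                        # derivative of cross-entropy w.r.t. the logit
        w = [w[i] - lr * err * x[i] for i in range(2)]
        b -= lr * err

print(round(sigmoid(w[0] * 1.0 + b)))      # prediction for the "alone" context
```

Since the toy data are linearly separable, the weights converge to classify the "alone" context as requiring the full contents.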

In addition, the model learning part 1210-4 may use the input data to learn selection criteria regarding which learning data to use to obtain context information and/or which learning data to use to obtain output information of an event.

If there are a plurality of pre-constructed artificial intelligence models, the model learning section 1210-4 may identify an artificial intelligence model having a high correlation between the input learning data and the basic learning data as an artificial intelligence model to be learned. In this case, the basic learning data may be pre-classified according to the type of the data, and the artificial intelligence model may be pre-established according to the type of the data. For example, the basic learning data may be pre-classified by various criteria, such as an area in which the learning data is generated, a time at which the learning data is generated, a size of the learning data, a category of the learning data, a creator of the learning data, a category of one or more objects in the learning data, and the like.

When the artificial intelligence model is trained, the model learning part 1210-4 may store the trained artificial intelligence model. In this regard, the model learning part 1210-4 may store the trained artificial intelligence model in a memory of the external server 200. Alternatively, the model learning part 1210-4 may store the trained artificial intelligence model in a memory of a server connected to the external server 200 through a wired or wireless network, or in a memory of the electronic device 100.

The learning part 1210 may further include the learning data pre-processor 1210-2 and the learning data selecting part 1210-3 to improve the determination result of the artificial intelligence model or to save resources or time for generating the artificial intelligence model.

The learning data pre-processor 1210-2 may pre-process the obtained data such that the obtained data may be used in learning to obtain context information and/or in learning to obtain output information of an event. The learning data pre-processor 1210-2 may process the obtained data into a predetermined format so that the model learning part 1210-4 may use the obtained data to obtain output information of an event (e.g., so that the data is compatible with, adapted to, or improves the processing of the model learning part 1210-4). For example, the learning data pre-processor 1210-2 may remove unnecessary text (e.g., interjections, exclamations, etc.) when the second model provides a response from the input information.
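A pre-processing step in the spirit of the learning data pre-processor 1210-2 may be sketched as follows; the filler-word list and the function name are made-up examples, not part of the disclosure.

```python
# Illustrative pre-processing: strip filler tokens (e.g., interjections) from
# text before it is used as model input.

import re

FILLERS = {"wow", "oh", "umm"}

def preprocess(text: str) -> str:
    tokens = re.findall(r"\w+", text.lower())
    return " ".join(t for t in tokens if t not in FILLERS)

print(preprocess("Wow, you have an email!"))  # -> you have an email
```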

The learning data selection part 1210-3 may select data required or used for learning from the data obtained from the learning data obtaining part 1210-1 and/or the data preprocessed in the learning data preprocessor 1210-2. The selected learning data may be provided to the model learning part 1210-4. The learning data selection part 1210-3 may select learning data required or used for learning from the obtained or preprocessed data according to a preset selection criterion. The learning data selection part 1210-3 may also select learning data according to a preset selection criterion by learning through the model learning part 1210-4.

The learning part 1210 may further include a model evaluation part 1210-5 (e.g., a model evaluator) to improve the determination result of the artificial intelligence model.

The model evaluation section 1210-5 may input evaluation data to the artificial intelligence model, and control the model learning section 1210-4 to learn again when a determination result output from the evaluation data does not satisfy a predetermined criterion. In this case, the evaluation data may be predefined data for evaluating the artificial intelligence model.

For example, if the number or ratio of pieces of evaluation data for which the recognition result of the trained artificial intelligence model is inaccurate exceeds a predetermined threshold, the model evaluation part 1210-5 may evaluate that the predetermined criterion is not satisfied.
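The evaluation criterion described above may be sketched as follows; the function name `needs_retraining` and the threshold value are illustrative assumptions.

```python
# Sketch of the evaluation criterion: if the ratio of inaccurate results on
# the evaluation data exceeds a predetermined threshold, the model is sent
# back to the model learning part for retraining.

def needs_retraining(predictions, labels, threshold=0.2):
    """Return True when the inaccuracy ratio exceeds the predetermined threshold."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels) > threshold

print(needs_retraining([1, 0, 1, 1], [1, 0, 0, 0]))  # 2/4 = 0.5 > 0.2 -> True
print(needs_retraining([1, 0, 1, 0], [1, 0, 1, 0]))  # 0/4 = 0.0 -> False
```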

On the other hand, in the case where there are a plurality of trained artificial intelligence models, the model evaluation part 1210-5 may evaluate whether each of the trained artificial intelligence models satisfies the predetermined criterion, and determine a model satisfying the predetermined criterion as the final artificial intelligence model. In this case, where there are a plurality of models satisfying the predetermined criterion, the model evaluation part 1210-5 may determine, as the final artificial intelligence model, any one model or a preset number of models selected in descending order of evaluation score.

Referring to part (b) of fig. 13A, the determining part 1220 according to one or more embodiments may include an input data obtaining part 1220-1 and a determination result providing part 1220-4.

In addition, the determination part 1220 may further selectively include at least one of an input data pre-processor 1220-2, an input data selection part 1220-3, and a model update part 1220-5.

The input data obtaining part 1220-1 may obtain data for obtaining context information or data required or used to obtain output information of an event. The determination result providing part 1220-4 may obtain context information by applying the input data obtained by the input data obtaining part 1220-1 as an input value to the trained first model, and may obtain output information of an event by applying the input data obtained by the input data obtaining part 1220-1 as an input value to the trained second model. In addition, the determination result providing part 1220-4 may apply data pre-processed by the input data pre-processor 1220-2 and selected by the input data selecting part 1220-3, which will be described below, as input values to the artificial intelligence model and obtain a determination result.

In one embodiment, the determination result providing part 1220-4 may apply the data obtained by the input data obtaining part 1220-1 to the trained first model and obtain context information around or corresponding to the electronic device 100.

In another embodiment, the determination result providing part 1220-4 may apply information about the event obtained in the input data obtaining part 1220-1 and context information to the trained second model and obtain output information of the event.

The determination part 1220 may further include an input data pre-processor 1220-2 and an input data selection part 1220-3 to improve the determination result of the artificial intelligence model or save resources or time for providing the determination result.

The input data pre-processor 1220-2 may pre-process the obtained data such that the obtained data may be used to obtain context information or output information of an event. The input data pre-processor 1220-2 may process the obtained data into a predefined format so that the determination result providing part 1220-4 may use the obtained data to obtain context information or to obtain output information of an event.

The input data selecting part 1220-3 may select data required or used for determination from the data acquired in the input data obtaining part 1220-1 and/or the data preprocessed in the input data preprocessor 1220-2. The selected data may be provided to the determination result providing part 1220-4. The input data selecting part 1220-3 may select some or all of the obtained or preprocessed data according to a preset selection criterion for determination. The input data selecting part 1220-3 may also select data according to a preset selection criterion through the learning of the model learning part 1210-4.

The model updating section 1220-5 may control the artificial intelligence model to be updated based on the evaluation of the determination result provided by the determination result providing section 1220-4. For example, the model updating section 1220-5 may provide the determination result provided by the determination result providing section 1220-4 to the model learning section 1210-4, thereby requesting the model learning section 1210-4 to further train or update the artificial intelligence model. In particular, the model update portion 1220-5 may retrain the artificial intelligence model based on the feedback information according to the user input. It should be appreciated that one or more of the components described above with reference to fig. 13A may be implemented as hardware (e.g., circuitry, processing cores, etc.) and/or software.

Fig. 13B is a diagram showing an example in which an electronic device A and an external server S are interlocked or communicably connected with each other and learn and determine data, according to an embodiment.

Referring to fig. 13B, the external server S may learn criteria for obtaining context information or output information of an event, and the electronic device A may obtain the context information or provide the output information of the event based on the learning result of the server S.

The model learning part 1210-4 of the server S may perform the function of the learning part 1210 shown in fig. 12. That is, the model learning part 1210-4 of the server S may learn criteria regarding which event information or context information to use to obtain output information of an event, and regarding how to obtain the output information of the event by using that information.

The determination result providing part 1220-4 of the electronic device A obtains the output information of the event by applying the data selected by the input data selecting part 1220-3 to the artificial intelligence model generated by the server S. Alternatively, the determination result providing part 1220-4 of the electronic device A may receive the artificial intelligence model generated by the server S from the server S, and obtain the output information of the event by using the received artificial intelligence model. It should be appreciated that one or more of the components described above with reference to fig. 13B may be implemented as hardware (e.g., circuitry, processing cores, etc.) and/or software.

Fig. 14 and 15 are flow diagrams of network systems using artificial intelligence models, in accordance with various embodiments.

In fig. 14 and 15, a network system using an artificial intelligence model may include first elements 1401 and 1501 and second elements 1402 and 1502.

The first elements 1401 and 1501 may be the electronic device 100. The second elements 1402 and 1502 may be the server 200 in which the determined models are stored. Alternatively, the first elements 1401 and 1501 may be general-purpose processors and the second elements 1402 and 1502 may be artificial intelligence-dedicated processors. Alternatively, the first elements 1401 and 1501 may be at least one application, and the second elements 1402 and 1502 may be an Operating System (OS). That is, the second elements 1402 and 1502 may be more integrated, more dedicated, have less latency, have better performance, and/or have more resources than the first elements 1401 and 1501, and may thus handle the large number of operations required to generate, update, or apply an artificial intelligence model more quickly and efficiently than the first elements 1401 and 1501.

An interface for transmitting and receiving data between the first elements 1401 and 1501 and the second elements 1402 and 1502 may be defined. For example, the interface may include an Application Programming Interface (API) having the learning data to be applied to the artificial intelligence model as an argument value (or intermediate or transfer value). An API may be defined as a set of subroutines or functions that any one protocol (e.g., a protocol defined in the electronic device 100) may invoke for some processing of another protocol (e.g., a protocol defined in the server 200). That is, the API may provide an environment in which an operation of one protocol can be performed from another protocol.
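As an illustration only, the API described above might be sketched as follows; the function name `infer_output_info`, its parameters, and the placeholder decision rule are hypothetical and do not appear in the disclosure, where the mapping is performed by a trained artificial intelligence model.

```python
# Hypothetical sketch of the API between the first element (e.g., an
# application) and the second element (e.g., an OS or server-side AI
# service): the first element passes its data as argument values and
# receives the model's result.

def infer_output_info(context_data: dict, event_info: dict) -> dict:
    """Hypothetical API exposed by the second element.

    The first element calls this subroutine with context data and event
    information as argument values; the second element applies the
    artificial intelligence model and returns the output information and
    output method for the event.
    """
    # Placeholder rule standing in for the trained model.
    alone = context_data.get("persons", 1) == 1
    return {
        "output_info": "detailed" if alone else "brief",
        "output_method": "speaker" if alone else "display",
    }

# The first element invokes the API with its sensed context and the event.
result = infer_output_info({"persons": 1}, {"type": "email"})
```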

In addition, the second elements 1402 and 1502 may be implemented as a plurality of servers. For example, the second elements 1402 and 1502 may be implemented as a server for obtaining context information and a server for obtaining output information of an event.

In fig. 14, a first element 1401 may sense an event or determine the occurrence of an event in operation S1410. The event may include various events such as an email reception event, a text message reception event, a call reception event, and the like.

In operation S1420, the first element 1401 may obtain context information. For example, the first element 1401 may obtain context information, which is information on a space in which the first element 1401 is located and information on a user appearing in the space in which the first element 1401 is located. However, it is to be understood that one or more other embodiments are not so limited. For example, the first element 1401 may obtain various context information, such as schedule information of the user, health information of the user, emotion information of the user, and the like.

In operation S1430, the first element 1401 may transmit context information and information about the event to the second element 1402.

In operation S1440, the second element 1402 may determine output information and an output method of the event by using the trained artificial intelligence model. In detail, the second element 1402 may obtain output information of an event according to a current context and determine an output method for outputting the obtained output information. The artificial intelligence model may be a model trained to determine output information and an output method of an event by using context information and information about the event as input data.

In operation S1450, the second element 1402 may transmit output information and an output method of the event to the first element 1401.

The first element 1401 may provide the output information of the event by using the determined output method in operation S1460. That is, the first element 1401 may provide the output information of the event by using the output method determined according to the context. For example, the first element 1401 may provide the output information of the event through the speaker 130 when the user is alone in a room, and the first element 1401 may provide the output information of the event only through the display 120 when the user is in the room together with another person.
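The output-method selection in operation S1460 can be sketched as below; this is a minimal hand-written rule for illustration, whereas in the disclosure the decision is produced by the trained model, and the identifier `select_output_method` is an assumption.

```python
# Minimal sketch of the context-dependent output-method choice of
# operation S1460: audible output when only the primary user is present,
# visual-only output when another person is in the room.

def select_output_method(persons_in_room: list) -> str:
    """Return 'speaker' when only the primary user is present, else 'display'."""
    if persons_in_room == ["primary_user"]:
        return "speaker"   # audible output is acceptable in private
    return "display"       # visual-only output protects privacy

method_alone = select_output_method(["primary_user"])
method_shared = select_output_method(["primary_user", "guest"])
```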

In operation S1470, the first element 1401 may receive or determine feedback information according to a user input. The feedback information may be user reaction information regarding output information of the event, information regarding a user command input by the primary user after the output information of the event is provided, information found by the primary user after the output information of the event is output, and the like.

In operation S1480, the first element 1401 may transmit the input feedback information to the second element 1402.

In operation S1490, the second element 1402 may retrain or further train the artificial intelligence model based on the input feedback information. Thus, the second element 1402 may reflect or take into account the user feedback information according to the context and update the artificial intelligence model.
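The feedback loop of operations S1470 through S1490 might be illustrated with the toy model below; the class `FeedbackModel` and its frequency-counting update rule are hypothetical stand-ins for the retraining of the artificial intelligence model described in the disclosure.

```python
# Illustrative sketch (not the disclosed training procedure) of how the
# second element could fold user feedback into its state and update its
# context-dependent prediction, as in operation S1490.
from collections import Counter

class FeedbackModel:
    """Toy model: predicts the output method most often accepted per context."""

    def __init__(self):
        self.history = {}  # context -> Counter of accepted output methods

    def update(self, context: str, accepted_method: str):
        # Reflect the feedback information according to the context.
        self.history.setdefault(context, Counter())[accepted_method] += 1

    def predict(self, context: str) -> str:
        counts = self.history.get(context)
        return counts.most_common(1)[0][0] if counts else "display"

model = FeedbackModel()
model.update("alone", "speaker")
model.update("with_others", "display")
```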

In fig. 15, in operation S1505, a first element 1501 may sense an event (e.g., determine occurrence of an event). The event may be an event for providing information, and the event may include various events such as a text message reception event, an e-mail reception event, a call reception event, an information request reception event, a push notification event, and the like.

In operation S1510, the first element 1501 may obtain data for sensing a context around the electronic device 100 or corresponding to the electronic device 100. The first element 1501 may obtain data through sensors (e.g., camera, microphone, etc.) present in the electronic device 100 and/or receive data from an external device connected to the electronic device 100.

In operation S1515, the first element 1501 may transmit the data for sensing or determining a context to the second element 1502.

In operation S1520, the second element 1502 may obtain information about a person located in a space where the electronic apparatus 100 exists by using the first model. The first model may be an artificial intelligence model trained to obtain context information (e.g., information about a person located in the space where the electronic device 100 is present) by using, as input data, the data for sensing or determining a context around or corresponding to the electronic device 100.
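A stand-in for the first model of operation S1520 is sketched below; the function `first_model`, the `faces` field, and the identifier `primary_user` are all hypothetical, since in the disclosure this mapping from sensing data to person information is learned by a trained artificial intelligence model rather than written as rules.

```python
# Hypothetical stand-in for the first model: it consumes image/voice
# sensing data and returns information about persons in the space where
# the electronic device 100 exists.

def first_model(sensing_data: dict) -> list:
    """Return a list of person descriptors detected in the space."""
    persons = []
    for face in sensing_data.get("faces", []):
        persons.append({
            "id": face.get("id", "unknown"),
            "is_primary_user": face.get("id") == "primary_user",
        })
    return persons

people = first_model({"faces": [{"id": "primary_user"}, {"id": "guest"}]})
```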

In operation S1525, the second element 1502 may send the obtained context information (e.g., the obtained information about the person) to the first element 1501. In operation S1530, the first element 1501 may transmit information about the event and the context information (e.g., the information about the person) to the second element 1502. When the first element 1501 transmits the information about the event together with the data for sensing a context in operation S1515, operations S1525 and S1530 may be omitted.

In operation S1535, the second element 1502 may obtain output information corresponding to the event by using the second model. In detail, the second element 1502 may input information about an event and context information (e.g., information about a person located in a space where the electronic apparatus 100 exists) to the second model as input data and obtain output information corresponding to the event.

In operation S1540, the second element 1502 may transmit the output information of the event to the first element 1501.

In operation S1545, the first element 1501 may provide output information of the event. For example, first element 1501 can output information via at least one of a display, an audio output interface, a speaker, an LED, and the like.

In operation S1550, the first element 1501 may receive or determine feedback information according to a user input. The feedback information may be user reaction information regarding output information of the event, information regarding a user command input by the primary user after the output information of the event is provided, information found by the primary user after the output information of the event is output, and the like.

In operation S1555, the first element 1501 may transmit the input feedback information to the second element 1502.

In operation S1560, the second element 1502 may retrain the second model based on the input feedback information. Thus, the second element 1502 may reflect or take into account user feedback information and update the second model according to context.

Fig. 16 is a flow diagram provided to explain a method of providing output information of an event according to a context by an electronic device according to an embodiment.

Referring to fig. 16, the electronic device 100 may sense an event for providing information (e.g., determine occurrence of the event) in operation S1610. The event for providing information may include at least one of a text message reception event, an e-mail reception event, a call reception event, an information reception event, an SNS reception event, a push notification event, and the like.

In operation S1620, the electronic device 100 may obtain data for sensing or determining a context around the electronic device 100 or corresponding to the electronic device 100. The data for sensing or determining a context around the electronic device may include at least one of image data and voice data obtained by a camera and a microphone included in the electronic device 100 and/or an external device connected to the electronic device 100.

In operation S1630, the electronic device 100 may input the obtained data to the trained first model and obtain information about a person located in a space where the electronic device 100 exists.

In operation S1640, the electronic device 100 may input the obtained information on the person and the information on the event to the trained second model, and obtain output information corresponding to the event. For example, in the case where the obtained information about the person includes only information about the primary user using the electronic device 100, the second model may be trained to obtain detailed information about the event as the output information. In the case where the obtained information about the person includes information about another person together with information about the primary user using the electronic device 100, the second model may be trained to obtain, as the output information, brief information about the event, i.e., information including fewer details than the detailed information about the event.
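The detailed-versus-brief behavior that the second model is trained to exhibit in operation S1640 can be sketched as a simple rule, under the assumption of hypothetical event fields (`sender`, `body`); the real mapping is learned from data rather than hard-coded.

```python
# Sketch of the second model's trained behavior: detailed output when only
# the primary user is present, brief output when another person is present.

def second_model(person_info: list, event_info: dict) -> str:
    only_primary = all(p == "primary_user" for p in person_info)
    if only_primary:
        # Detailed information: sender plus message body.
        return f"Message from {event_info['sender']}: {event_info['body']}"
    # Brief information: fewer details when another person is present.
    return "You have a new message."

detailed = second_model(["primary_user"],
                        {"sender": "Alice", "body": "Lunch at noon?"})
brief = second_model(["primary_user", "guest"],
                     {"sender": "Alice", "body": "Lunch at noon?"})
```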

In operation S1650, the electronic device 100 may provide the obtained output information.

The above-described embodiments may be implemented as a software program comprising instructions stored on a machine (e.g., computer) readable storage medium. A machine is a device capable of calling stored instructions from a storage medium and operating according to the called instructions, and may include an electronic device (e.g., electronic device 100) according to the above-described embodiments. When a command is executed by a processor, the processor may perform a function corresponding to the command directly and/or by using other components under the control of the processor. The command may include code generated or executed by a compiler or interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" merely means that the storage medium does not include a signal, but is tangible, and does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored in the storage medium.

According to an embodiment, the method according to the various embodiments described above may be provided as comprised in a computer program product. The computer program product may be used as a product for conducting transactions between a seller and a consumer. The computer program product may be distributed in the form of a machine-readable storage medium, such as a compact disc read only memory (CD-ROM), or online through an application STORE, such as a PLAY STORE. In the case of online distribution, at least a portion of the computer program product may be at least temporarily stored or temporarily generated in a server of a manufacturer, a server of an application store, and/or a storage medium such as a memory.

Each component (e.g., module or program) according to various embodiments may comprise a single entity or multiple entities, and some of the corresponding sub-components described above may be omitted, or another sub-component may be further added in various embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be combined to form a single entity that performs the same or similar functions as the corresponding elements performed before being combined. Operations performed by a module, program, or other component may, according to various embodiments, be executed sequentially, in parallel, iteratively, or heuristically; at least some operations may be performed in a different order or omitted, or other operations may be added.
