Electronic device and method for controlling electronic device
Reading note: This technology, Electronic device and method for controlling electronic device, was designed and created by 金叡薰, 尹昭正, and 徐钻源 on 2019-02-01. Abstract: An electronic device and a method for controlling the electronic device are provided. The method for controlling an electronic device includes: obtaining data for determining a context corresponding to the electronic device based on determining an occurrence of an event for outputting information; inputting the obtained data to a first model trained by an artificial intelligence algorithm and obtaining information about a person located in the vicinity of the electronic device; inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm, and obtaining output information corresponding to the event; and providing the obtained output information.
1. A method for controlling an electronic device, the method comprising:
obtaining data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information;
inputting the obtained data to a first model trained by an artificial intelligence algorithm, and obtaining information about a person located in the vicinity of the electronic device based on the obtained data input to the first model;
inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;
obtaining output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and
providing the obtained output information.
2. The method of claim 1, wherein the data for identifying a context corresponding to the electronic device comprises at least one of:
image data obtained by a camera included in the electronic device or an external device connected to the electronic device; and
audio data obtained by a microphone included in the electronic device or an external device connected to the electronic device.
3. The method of claim 1, wherein the second model is trained to:
determine detailed information of the event as the output information based on the obtained information on the person including information on a main user using the electronic device without including information on any other person; and
determine brief information of the event as the output information based on the obtained information on the person including both information on the main user using the electronic device and information on another person, the brief information being less detailed than the detailed information of the event.
4. The method of claim 1, further comprising:
obtaining feedback information for the provided output information based on a user input,
wherein the second model is retrained or further trained based on the obtained feedback information for the provided output information.
5. The method of claim 4, further comprising:
inputting the obtained information about the person and information about another event for outputting information to the retrained or further trained second model; and
obtaining output information corresponding to the other event based on the obtained information about the person and the information about the other event being input to the retrained or further trained second model.
6. The method of claim 4, wherein the obtained feedback information for the provided output information includes at least one of user reaction information to the provided output information, control command information for an event input by a user after the provision of the output information, and information found or searched by the user after the provision of the output information.
7. The method of claim 1, wherein at least one of the first and second models is stored in an external server.
8. The method of claim 1, wherein:
the second model is trained to obtain an output method of the event based on the obtained information on the person and the information on the event being input to the second model; and
the providing includes providing the obtained output information based on an output method of the obtained event.
9. The method of claim 1, wherein the event comprises at least one of: a text message reception event, an email reception event, a call reception event, an information reception event, a Social Network Service (SNS) reception event, and a push notification reception event.
10. An electronic device, comprising:
a communication interface;
a display;
a speaker;
at least one memory for storing instructions; and
at least one processor configured to execute the stored instructions to:
obtain data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information;
input the obtained data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the electronic device based on the obtained data input to the first model;
input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;
obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and
control at least one of the display and the speaker to provide the obtained output information.
11. The electronic device of claim 10, wherein the data for identifying a context corresponding to the electronic device comprises at least one of:
image data obtained by a camera included in the electronic device or an external device connected to the electronic device; and
audio data obtained by a microphone included in the electronic device or an external device connected to the electronic device.
12. The electronic device of claim 10, wherein the second model is trained to:
determine detailed information of the event as the output information based on the obtained information on the person including information on a main user using the electronic device without including information on any other person; and
determine brief information of the event as the output information based on the obtained information on the person including both information on the main user using the electronic device and information on another person, the brief information being less detailed than the detailed information of the event.
13. The electronic device of claim 10, wherein:
the at least one processor is further configured to execute the stored instructions to obtain feedback information for the provided output information in accordance with a user input; and
the second model is retrained or further trained based on the obtained feedback information for the provided output information.
14. The electronic device of claim 13, wherein the at least one processor is further configured to execute the stored instructions to:
input the obtained information about the person and information about another event for outputting information to the retrained or further trained second model; and
obtain output information corresponding to the other event based on the obtained information about the person and the information about the other event being input to the retrained or further trained second model.
15. An apparatus, comprising:
at least one memory for storing instructions; and
at least one processor configured to execute the stored instructions to:
receive, from another device, data for identifying a context corresponding to the other device based on an event for outputting information occurring at the other device;
input the received data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the other device based on the received data input to the first model;
input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm;
obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and
control transmission of the obtained output information to the other device.
Technical Field
The present disclosure relates to an electronic device and a method for controlling the electronic device. More particularly, the present disclosure relates to an electronic device capable of providing output information of an event according to a context and a method for controlling the electronic device.
Furthermore, the present disclosure relates to an Artificial Intelligence (AI) system for simulating functions of the human brain, such as cognition and decision making, using machine learning algorithms, and applications thereof.
Background
Recently, Artificial Intelligence (AI) systems implementing human-level intelligence have been used in various fields. Unlike previous rule-based smart systems, AI systems are systems in which machines learn, make decisions, and act upon those decisions on their own. As AI systems become more widely used, their recognition rates increase and they understand user preferences or characteristics more accurately. Thus, previous rule-based smart systems are gradually being replaced by deep-learning-based AI systems.
AI techniques include machine learning (e.g., deep learning) and element techniques that use machine learning.
Machine learning is an algorithmic technique that classifies and learns features of input data autonomously. Element techniques are techniques that use machine learning algorithms (e.g., deep learning) to mimic functions of the human brain, such as cognition and decision making, and include technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, and motion control.
Various fields to which AI techniques are applied are as follows. Linguistic understanding is a technology for recognizing, applying, and processing human language and words, and includes natural language processing, machine translation, dialog systems, question answering, voice recognition and synthesis, and the like. Visual understanding is a technique of recognizing and processing objects in the manner of human vision, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image improvement, and the like. Inference/prediction is a technique for judging information and making logical inferences and predictions, and includes knowledge/probability-based inference, optimization prediction, preference-based planning, recommendation, and the like. Knowledge representation is a technique for automated processing of human experience information using knowledge data, and includes knowledge construction (data generation/classification), knowledge management (data usage), and the like. Motion control is a technique of controlling the autonomous driving of a vehicle and/or the motion of a robot, and includes movement control (navigation, collision avoidance, driving), manipulation control (behavior control), and the like.
In recent years, electronic devices have become capable of detecting various events for providing information to users. As one example, when a notification event is received, the electronic device outputs the notification event regardless of the context of the electronic device. For example, when a notification event is received, the electronic device outputs information related to the notification event regardless of whether another user is present in the vicinity of the electronic device, the current location, and the like. That is, even if the user does not want to share the contents of the notification event, they are shared with others, and therefore the privacy of the user is not protected. Furthermore, when these contents are shared in a situation where the user does not wish to share them, resources of the electronic device (e.g., processing speed, processing power, battery life, display resources, etc.) are unnecessarily consumed, thereby impairing the functionality of the device.
Disclosure of Invention
Technical problem
An electronic device capable of providing output information of an event according to a context of the electronic device and a method for controlling the same are provided.
Additional aspects will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the presented embodiments.
Technical Solution
According to one aspect of the present disclosure, a method for controlling an electronic device is provided. The method comprises the following steps: obtaining data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information; inputting the obtained data to a first model trained by an artificial intelligence algorithm, and obtaining information about a person located in the vicinity of the electronic device based on the obtained data input to the first model; inputting the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtaining output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and providing the obtained output information.
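The control flow of this aspect can be sketched as follows. This is a minimal illustration only; the function names, data structures, and model interfaces below are assumptions made for the example and are not part of the disclosure:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Event:
    kind: str     # e.g., "email", "text_message"
    detail: str   # full content of the notification
    summary: str  # brief version of the same content

def handle_event(
    event: Event,
    sense_context: Callable[[], Any],                 # camera/microphone data
    first_model: Callable[[Any], List[str]],          # context data -> people nearby
    second_model: Callable[[List[str], Event], str],  # (people, event) -> output text
    present: Callable[[str], None],                   # display/speaker output
) -> None:
    # 1. On an event, obtain data identifying the device's context.
    context_data = sense_context()
    # 2. First model: infer who is in the vicinity from that data.
    people = first_model(context_data)
    # 3. Second model: obtain output information for this event and audience.
    output_info = second_model(people, event)
    # 4. Provide the obtained output information.
    present(output_info)
```

In a real device the two models would be trained networks; here they are passed in as plain callables so the four claimed steps stand out.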
According to another aspect of the present disclosure, an electronic device is provided. The electronic device includes: a communication interface; a display; a speaker; at least one memory for storing instructions; and at least one processor configured to execute the stored instructions to: obtain data for identifying a context corresponding to the electronic device based on identifying an occurrence of an event for outputting information; input the obtained data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the electronic device based on the obtained data input to the first model; input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and control at least one of the display and the speaker to provide the obtained output information.
According to another aspect of the present disclosure, an apparatus is provided. The apparatus comprises: at least one memory for storing instructions; and at least one processor configured to execute the stored instructions to: receive, from another device, data for identifying a context corresponding to the other device based on an event for outputting information occurring at the other device; input the received data to a first model trained by an artificial intelligence algorithm, and obtain information about a person located in the vicinity of the other device based on the received data input to the first model; input the obtained information on the person and the information on the event to a second model trained by an artificial intelligence algorithm; obtain output information corresponding to the event based on the obtained information on the person and the information on the event being input to the second model; and control transmission of the obtained output information to the other device.
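The server-side aspect above can be pictured as a single request/response exchange between the apparatus and the other device. The JSON message format used here is invented for illustration; the disclosure does not specify one:

```python
import json
from typing import Callable, List

def serve_request(
    message: str,
    first_model: Callable[[object], List[str]],
    second_model: Callable[[List[str], dict], str],
) -> str:
    """Handle one message from the other device and return the reply.

    The message is assumed to carry the raw context data gathered at the
    other device together with a description of the event.
    """
    request = json.loads(message)
    # Infer who is near the other device from the received context data.
    people = first_model(request["context_data"])
    # Obtain output information suited to this event and this audience.
    output_info = second_model(people, request["event"])
    # Only the resulting output information is transmitted back.
    return json.dumps({"output_info": output_info})
```

Keeping the models on the apparatus means the device only ships raw context data out and receives finished output information back, matching the division of labor in this aspect.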
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable recording medium having a program recorded thereon, the program being executable by a computer to perform the method.
Drawings
The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram illustrating the use of an electronic device that provides output information of an event according to a context, according to an embodiment;
FIG. 2 is a diagram illustrating a system including an electronic device and a server, according to an embodiment;
FIG. 3A is a block diagram of an electronic device according to an embodiment;
FIG. 3B is a block diagram of a detailed configuration of an electronic device according to an embodiment;
FIGS. 4, 5A, and 5B are diagrams provided to explain an example of obtaining control commands related to an alarm event according to a context, according to various embodiments;
FIGS. 6, 7A, and 7B are diagrams provided to explain an example of providing output information of an alarm event according to a context, according to another embodiment;
FIGS. 8, 9A, and 9B are diagrams provided to explain an example of providing user request information according to a context, according to another embodiment;
FIG. 10 is a flowchart illustrating a method for controlling an electronic device according to an embodiment;
FIG. 11 is a flowchart illustrating a method of providing output information of an alarm event according to a context by an electronic device through an artificial intelligence model, according to another embodiment;
FIG. 12 is a block diagram of a configuration of an apparatus for learning and using an Artificial Intelligence (AI) model according to an embodiment;
FIGS. 13A and 13B are block diagrams of specific configurations of a learning section and a determination section according to various embodiments;
FIGS. 14 and 15 are flowcharts of network systems using artificial intelligence models, according to various embodiments; and
FIG. 16 is a flowchart provided to explain a method of providing output information of an event according to a context by an electronic device, according to an embodiment.
Detailed Description
The foregoing and/or other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which like reference characters designate like elements throughout. However, it is to be understood that the present disclosure is not limited to certain embodiments described herein, but includes various modifications, equivalents, and/or alternatives to the embodiments of the disclosure.
In the description, the terms "have", "may have", "include" or "may include" indicate that there are corresponding features (for example, numerical values, functions, operations, or constituent elements such as components), but do not preclude the presence of additional features.
In the description, the expressions "A and/or B", "A or B", "at least one of A and B", "at least one of A or B", "one or more of A and B" and "one or more of A or B" may include all possible combinations of the items listed together. For example, the term "A and/or B" or "at least one of A and B" may denote: (1) at least one A; (2) at least one B; or (3) both at least one A and at least one B.
As used herein, the expressions "1", "2", "first" or "second" may modify various elements regardless of their order and/or importance and are used herein to distinguish one element from another (unless explicitly stated otherwise) without otherwise limiting the corresponding elements.
If it is described that a particular element (e.g., a first element) is "operably or communicatively coupled," "operably or communicatively coupled" or "connected" to another element (e.g., a second element), it may be understood that the particular element may be connected to the other element directly or through another element (e.g., a third element). Meanwhile, when it is described that one element (e.g., a first element) is "directly coupled" or "directly connected" to another element (e.g., a second element), it is understood that there is no element (e.g., a third element) between the element and the other element.
In the description, the term "configured to" or "set to" may be used interchangeably with, for example, "adapted to," "having … capability," "designed to," "manufactured to," or "capable of," depending on the circumstances and/or context. The term "configured to" or "set to" does not necessarily mean "specifically designed to" at the hardware level. In some cases, the term "device configured to …" may refer to a "device" that is "capable of" doing something together with another device or component. For example, the phrase "processor configured to perform A, B, and C" may refer to a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a general-purpose processor (e.g., a Central Processing Unit (CPU) or an application processor) that may perform the corresponding operations by executing one or more software programs stored in a memory device.
An electronic device according to various embodiments may include, for example, at least one of a smart phone, a tablet Personal Computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP), an MP3 player, a multimedia player, a medical device, a camera, a wearable device, and so forth. The wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, an anklet, a necklace, glasses, a contact lens, or a Head Mounted Device (HMD)), a fabric- or clothing-embedded type (e.g., electronic clothing), a body attachment type (e.g., a skin pad or a tattoo), or a bio-implantable circuit. In some embodiments, the electronic device may include, for example, at least one of a television, a Digital Video Disc (DVD) player, an optical recording medium player (e.g., a Blu-ray disc player), an audio processing device, a smart appliance, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., SAMSUNG HOMESYNC, APPLE TV, or GOOGLE TV), a game console (e.g., MICROSOFT XBOX, SONY PLAYSTATION), an electronic dictionary, an electronic key, a camcorder, an electronic photo frame, and the like.
However, it is to be understood that various other embodiments may not be so limited. For example, according to one or more other embodiments, the electronic device may include at least one of various medical devices (e.g., various portable medical measurement devices (such as a blood glucose monitor, a heart rate monitor, a blood pressure measurement device, or a body temperature measurement device), a Magnetic Resonance Angiography (MRA) device, a Magnetic Resonance Imaging (MRI) device, a Computed Tomography (CT) device, a camera device, or an ultrasound device), a navigation device, a Global Navigation Satellite System (GNSS) device, an Event Data Recorder (EDR), a Flight Data Recorder (FDR), a vehicle infotainment device, a marine electronic device (e.g., a marine navigation device or a gyrocompass), an avionic device, a security device, a vehicle head unit, an industrial or home robot, an unmanned aerial vehicle, an Automated Teller Machine (ATM) of a financial institution, a Point Of Sale (POS) device of a store, an Internet of Things device (e.g., a bulb, a sensor, a sprinkler, a valve, a lock, a fire alarm, a temperature controller, a street light, a toaster, sporting goods, a hot water tank, a heater, or a boiler), and the like.
Further, the term "user" may refer to a person using the electronic device or a device using the electronic device (e.g., an Artificial Intelligence (AI) electronic device).
Fig. 1 is a diagram illustrating the use of an electronic device that provides output information of an event 10 according to a context according to an embodiment.
First, the electronic device may receive a signal from an external source for sensing an event (e.g., an alarm event) for providing information. For example, as shown in part (a) of fig. 1, the electronic device may receive a signal from an external source for sensing an event 10 indicating that an email for booking concert tickets has been received. In addition to the email reception event, the event may be implemented as various events such as a text message reception event, a call reception event, an information request reception event, a Social Network Service (SNS) reception or notification event, a chat service reception or notification event, a push notification event, and the like.
When receiving a signal for sensing an event, the
The
The
The
Furthermore, the
For example, in the case where the information related to the event includes information related to a concert ticket reservation and the context information includes information related to a user present in the vicinity of the
As another example, the
The
The
In the above-described embodiment, the
In addition, the
The
The first model and/or the second model referred to in the above embodiments may be a determination model trained based on an artificial intelligence algorithm, for example, a neural network-based model. The trained first model and the trained second model may be designed to mimic the structure of the human brain on a computer and may include a plurality of network nodes that have weight values and simulate the neurons of a human neural network. Each of the plurality of network nodes may form a connection relationship such that the neurons exchange signals through synapses, simulating the synaptic activity of neurons. Additionally, the trained first model and/or the trained second model may, for example, include a neural network model or a deep learning model developed from a neural network model. A plurality of network nodes in the deep learning model may be located at different depths (or layers) from each other and may exchange data according to a convolution connection relationship. For example, the trained first model and the trained second model may include a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), a Bidirectional Recurrent Deep Neural Network (BRDNN), and the like, but the present disclosure is not limited thereto.
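As a toy illustration of the "network nodes having weight values" described above (not the disclosed models themselves), a single forward pass through a two-layer network can be written out directly:

```python
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass of a tiny two-layer network.

    The hidden units play the role of the 'network nodes'; the trained
    state of the model is exactly the weight values w1, b1, w2, b2.
    """
    # Hidden layer: each node weighs its inputs and applies a nonlinearity.
    hidden = [
        math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
        for row, b in zip(w1, b1)
    ]
    # Output node: e.g., a score for "is the main user nearby alone?"
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))
```

Training (by backpropagation in a DNN, or through time in an RNN/BRDNN) adjusts these weight values; the forward pass itself stays the same shape.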
Further, the
For example, in a case where a predetermined user input (e.g., an icon touch corresponding to the personal assistant chat robot, a user voice including a predetermined word such as "BIXBY") is input, a button provided in the electronic device 100 (e.g., a button for running the artificial intelligence agent) is pressed, or an event is sensed, the artificial intelligence agent may be operated (or run). In addition, the artificial intelligence agent may transmit information related to the event and context information to an external server and provide output information of the event received from the external server.
The artificial intelligence agent may also be operated when (or based on) sensing a predetermined user input, a button provided in the electronic device 100 (e.g., a button for running the artificial intelligence agent) is pressed, or an event is sensed. Alternatively, the artificial intelligence agent may be in a pre-operational state prior to sensing a predetermined user input, prior to selecting a button provided in the
In an example embodiment, in case that the
Fig. 2 is a diagram illustrating a system including the
The
When an event is sensed, the
The
The
In addition, the
The
In addition, when feedback information is received from the
Fig. 3A is a block diagram of an
The communication interface 110 may communicate with an external device via various communication methods. In particular, the communication interface 110 may receive an alarm event from an external source. In addition, the communication interface 110 may transmit information related to an event and context information to the
The communication interface 110 may communicate with various types of external devices according to various communication methods. For example, the communication interface 110 (or communicator) may include at least one of a Wi-Fi chip, a bluetooth chip, and a wireless communication chip. The processor 150 may perform communication with an external server or various types of external devices by using the communication interface 110. In addition, the communication interface 110 may communicate with an external device through various communication chips such as a Near Field Communication (NFC) module and the like.
The display 120 may provide various screens. Specifically, the display 120 may display output information of an event. The display 120 may display the output information of an event in the form of a pop-up window. However, this is merely an example, and the output information of an alarm event may be displayed in a full screen mode or in a notification area or column of the screen.
The speaker 130 may include various audio output circuits and be configured to output various types of alarm sounds or voice messages in addition to various audio data on which an audio processor performs various processing operations, such as decoding, amplification, and noise filtering. In particular, the speaker 130 may output the output information of an event in audio form. A plurality of speakers 130 may be provided in a plurality of regions of the electronic device (e.g., an upper end region of a front surface of the electronic device, a lower side region of the electronic device, etc.).
Memory 140 may store instructions or data regarding at least one of the other elements of
For example, the memory 140 may store a program exclusive to Artificial Intelligence (AI). In this regard, the program exclusive to the AI may be a personalized program for providing various services to the
Processor 150 may be electrically connected to communication interface 110, display 120, speaker 130, and memory 140, and control the overall operation and functionality of
For example, the
Upon sensing an alarm event,
As described above, the
The processor 150 may obtain output information of the event obtained by the trained second model from the
Further, the processor 150 may obtain feedback information for the output information according to the user input and control the communication interface 110 to transmit the feedback information for the output information to the external server 200 (or directly to the second model, wherein the second model is stored in the electronic device 100). The second model may be retrained based on the feedback information for the output information, thereby providing improved functions of the external server 200 (or the electronic device 100) by improving the accuracy of the AI process or the model. In the case where information about another event and context information are input, the second model may obtain output information of the another event based on a relearning or retraining result. That is, the second model may be updated based on feedback information input by the user.
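The feedback loop described here can be pictured with a simplified stand-in for retraining. Real retraining would update the second model's weights; this sketch merely records, per audience, whether the user wanted the detailed or the brief output. The audience keys and feedback values are invented for the example:

```python
class FeedbackAdaptedPolicy:
    """Illustrative stand-in for the retrained second model."""

    def __init__(self):
        self._prefer_detail = {}  # frozenset of nearby people -> bool

    def choose(self, people, detail, brief):
        key = frozenset(people)
        # Default behavior: detailed output only when the main user is alone.
        default = key == frozenset({"main_user"})
        return detail if self._prefer_detail.get(key, default) else brief

    def feedback(self, people, wants_detail):
        # e.g., the user opened the full message right away (wants_detail=True)
        # or dismissed the notification immediately (wants_detail=False).
        self._prefer_detail[frozenset(people)] = wants_detail
```

After feedback is recorded, a later event with the same audience is answered according to the updated preference, mirroring how the retrained second model produces different output information for another event.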
The feedback information for the output information may include at least one of: information on the user's reaction to the output information, control command information for an event input by the user after the output information is output, and information found or searched by the user after the output information is output.
In the above-described embodiment, the context information is information related to users who are present around the electronic device 100.
Further, the processor 150 may input information related to the alarm event and context information to the artificial intelligence model and receive the obtained information related to the output method of the event from the artificial intelligence model.
Fig. 3B is a block diagram of a detailed configuration of the electronic device 100.
The sensor 160 may obtain sensing data for obtaining status information of the electronic device 100.
The input interface 170 may receive various user inputs and transmit the received user inputs to the processor 150. In particular, the input interface 170 may comprise at least one of a touch sensor, a (digital) pen sensor, a pressure sensor, a key, or a microphone. The touch sensor may use, for example, at least one of a capacitive method, a resistive method, an infrared method, and an ultrasonic method. The (digital) pen sensor may, for example, be part of a touch panel or comprise an additional sheet or layer for recognizing pen use. The key may, for example, comprise at least one of a physical button, an optical key, or a keypad. The microphone may be configured to receive user speech, and may be provided inside the electronic device 100.
For example, the input interface 170 may obtain an input signal according to a predetermined user touch for selecting an icon corresponding to a program exclusive to artificial intelligence, or a user input for selecting a button provided outside the electronic device 100.
The processor 150 (or controller) may control the overall operation of the electronic device 100.
The processor 150 may include a RAM 151, a ROM 152, a graphic processor 153, a main Central Processing Unit (CPU) 154, first to nth interfaces 155-1 to 155-n, and a bus 156. The RAM 151, the ROM 152, the graphic processor 153, the main CPU 154, and the first to nth interfaces 155-1 to 155-n may be interconnected by the bus 156.
Fig. 4, 5A, and 5B are diagrams provided to explain an example of obtaining control commands related to an alarm event according to a context, in accordance with various embodiments.
As shown in fig. 4, the
In operation S420, the
The
The
The
In operation S460, the
In operation S470, the
In operation S480, the
In operation S490, the
For example, in the case where an alarm event is received as shown in part (a) of fig. 5B, after the artificial intelligence model is updated by the feedback information, when information "only the primary user is present in the vehicle" is included in the context information, the
Fig. 6, 7A and 7B are diagrams provided to explain an example of providing output information of an alarm event according to a context, according to another embodiment.
As shown in fig. 6, the
In operation S620, the
In operation S630, the
In operation S640, the
The
In operation S660, the
In operation S670, the
In operation S680, the
In operation S690, the
For example, in the case where a mail reception event is received as shown in part (a) of fig. 7B after the artificial intelligence model is updated by the feedback information, when the information "only the primary user is present in the room" is included in the context information, the
In the above embodiments, the artificial intelligence model obtains output information for the alarm event. However, this is merely an example, and it should be understood that the artificial intelligence model may also (or alternatively) determine an output method of outputting information.
Fig. 8, 9A and 9B are diagrams provided to explain an example of providing user request information according to a context, according to another embodiment.
As shown in fig. 8, the
In operation S820, the
In operation S830, the
In operation S840, the
The
In operation S860, the
In operation S870, the
In operation S880, the
In operation S890, the
For example, in the case where a request command for searching for information is received after the artificial intelligence model is updated by the feedback information, when the context information includes information "schedule to be soon out" as shown in part (a) of fig. 9B, the
In the above-described embodiments, the artificial intelligence model obtains at least one of output information of the alarm event, the control command, and/or the user request information based on the information related to the alarm event (or the information related to the request command) and the context information. However, this is merely an example, and it should be understood that the artificial intelligence model may obtain at least one of output information of an alarm event, control commands, and user request information by using other information. In detail, the artificial intelligence model may obtain at least one of output information of an alarm event, a control command, and user request information based on user history information, user preference information, and the like. For example, where there is history information indicating that the primary user has booked concert tickets with his/her sister, or preference information indicating that the primary user prefers his/her sister to be present, the artificial intelligence model may output the entire content of the received e-mail.
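The preceding example can be reduced to a toy decision rule. This is only an illustrative sketch (the dictionary keys, the "trusted" set, and the message strings are assumptions, not from the source): full content is output only when everyone present is in the primary user's trusted circle, as might be learned from history or preference information.

```python
# Illustrative sketch: decide output detail from context plus a stored
# user profile (history/preference). All names here are invented.
def decide_output(event, context, profile):
    others = set(context.get("people", [])) - {"primary_user"}
    if others <= profile.get("trusted", set()):
        return event["body"]                     # full content
    return f"New {event['type']} received."      # summary only

full = decide_output(
    {"type": "email", "body": "Concert tickets with your sister confirmed"},
    {"people": ["primary_user", "sister"]},
    {"trusted": {"sister"}},
)
brief = decide_output(
    {"type": "email", "body": "Concert tickets with your sister confirmed"},
    {"people": ["primary_user", "guest"]},
    {"trusted": {"sister"}},
)
```

In the patent's scheme this rule would not be hand-written; the second model would learn it from history, preference, and feedback data.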
As described above, the
Fig. 10 is a flowchart illustrating a method for controlling an electronic device according to an embodiment.
In operation S1010, the
When a signal for sensing an alarm event is received in operations S1010-Y, the
The
In operation S1040, the
FIG. 11 is a flow diagram illustrating a method of providing output information of an alarm event according to context by an electronic device through an artificial intelligence model, according to another embodiment.
In operation S1110, the
In operation S1120, the
In operation S1130, the
In operation S1140, the
In operation S1150, the
In operation S1160, the
Fig. 12 is a block diagram of a configuration of an apparatus 1200 for learning and using an Artificial Intelligence (AI) model according to an embodiment.
Referring to fig. 12, a device 1200 (e.g., an electronic device or an external server) may include at least one of a learning part 1210 and a determination part 1220. The device 1200 of fig. 12 may correspond to the electronic device 100 or the external server 200 described above.
The learning section 1210 may generate or train a first model having criteria for obtaining context information by using the learning data and a second model having criteria for obtaining output information of the event by using the learning data. The learning section 1210 may generate an artificial intelligence model having a determination criterion by using the collected learning data.
For example, the learning part 1210 may generate, train, or update a first model to obtain context information around the electronic device 100.
As another example, the learning part 1210 may generate, train, or update the second model using information about the event and context information as learning data to update output information (or an output method) of the event.
The determining part 1220 may use predetermined data as input data for the trained first model and obtain context information around the electronic device 100.
For example, the determination part 1220 may obtain context information around the electronic device 100.
As another example, the determination section 1220 may use information about the event and contextual information as input data for a trained artificial intelligence model and obtain (or estimate or infer) output information for the event.
In one embodiment, the learning part 1210 and the determining part 1220 may be included in the external server 1200. However, this is merely an example, and it should be understood that at least one of the learning part 1210 and the determining part 1220 may be included in a different external device or in the electronic device 100.
In this case, the learning part 1210 and the determining part 1220 may be installed on one electronic device or on separate electronic devices. For example, one of the learning part 1210 and the determining part 1220 may be included in the electronic device 100, and the other may be included in the external server 200.
Fig. 13A is a block diagram of a learning portion 1210 and a determination portion 1220 according to one or more embodiments.
Referring to part (a) of fig. 13A, the learning part 1210 according to one or more embodiments may include a learning data obtaining part 1210-1 and a model learning part 1210-4. In addition, the learning part 1210 may further optionally include at least one of a learning data pre-processor 1210-2, a learning data selecting part 1210-3, and a model evaluating part 1210-5.
The learning data obtaining part 1210-1 may obtain learning data of the first model for obtaining context information. In one embodiment, the learning data obtaining part 1210-1 may obtain data obtained by a sensor provided in the electronic device 100 as learning data.
In addition, the learning data obtaining section 1210-1 may obtain learning data of a second model for obtaining output information of the event. In one embodiment, the learning data obtaining part 1210-1 may obtain information about an event, context information, and the like as the learning data. In addition, the learning data obtaining part 1210-1 may obtain user history information, user preference information, and the like as learning data to obtain output information of an event. The learning data may be data collected or tested by the learning portion 1210 or a manufacturer of the learning portion 1210.
The model learning section 1210-4 may train the first model using the learning data to establish criteria for obtaining context information. In addition, the model learning section 1210-4 may train a second model to establish criteria for obtaining output information for an event. For example, the model learning section 1210-4 may train at least one of the first model and the second model by supervised learning, using at least a part of the learning data as a criterion for obtaining the output information of the event. As another example, the model learning section 1210-4 may train at least one of the first model and the second model by unsupervised learning, in which the model learns from the learning data itself without specific supervision, to find a criterion for obtaining output information of an event. Further, the model learning section 1210-4 may train at least one of the first model and the second model by reinforcement learning, using, for example, feedback on whether the result of a determination based on learning is correct. Further, the model learning section 1210-4 may train at least one of the first model and the second model by using a learning algorithm including, for example, error back-propagation or gradient descent.
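The supervised-learning and gradient-descent path named above can be reduced to a minimal sketch: fit a one-parameter model y = w·x to labeled pairs by stepping w against the squared-error gradient (the one-layer case of error back-propagation). The learning rate, epoch count, and data are arbitrary illustration values, not from the source.

```python
# Minimal supervised-learning sketch: one weight, squared-error loss,
# plain gradient descent. Real first/second models would be far larger,
# but the update rule below is the same idea the passage names.
def train(pairs, lr=0.05, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2.0 * (w * x - y) * x   # d/dw of (w*x - y)**2
            w -= lr * grad                 # gradient-descent step
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # data generated by y = 2x
```

With consistent data the weight settles at the generating value; in the patent's setting the labels would instead be (context, correct output) pairs collected as learning data.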
In addition, the model learning portion 1210-4 may use the input data to learn criteria for selection of which learning data to use to obtain context information and/or criteria for selection of which learning data to use to obtain output information for an event.
If there are a plurality of pre-constructed artificial intelligence models, the model learning section 1210-4 may identify an artificial intelligence model having a high correlation between the input learning data and the basic learning data as an artificial intelligence model to be learned. In this case, the basic learning data may be pre-classified according to the type of the data, and the artificial intelligence model may be pre-established according to the type of the data. For example, the basic learning data may be pre-classified by various criteria, such as an area in which the learning data is generated, a time at which the learning data is generated, a size of the learning data, a category of the learning data, a creator of the learning data, a category of one or more objects in the learning data, and the like.
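The correlation-based selection among pre-built models described above can be sketched as a tag-overlap comparison. This is a hedged illustration only: the category tags, model names, and the choice of Jaccard similarity are all assumptions standing in for whatever correlation measure an implementation would use.

```python
# Hedged sketch: each pre-built model is profiled by the category tags of
# its basic learning data; the model whose profile overlaps the incoming
# learning data's tags most (Jaccard similarity) is chosen for training.
def pick_model(models, incoming_tags):
    def jaccard(tags):
        return len(tags & incoming_tags) / len(tags | incoming_tags)
    return max(models, key=lambda name: jaccard(models[name]))

models = {
    "speech": {"audio", "korean", "indoor"},
    "vision": {"image", "face", "indoor"},
}
chosen = pick_model(models, {"image", "face", "camera"})
```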
Once the artificial intelligence model is trained, the model learning part 1210-4 may store the trained artificial intelligence model. In this regard, the model learning portion 1210-4 may store the trained artificial intelligence model in a memory of the electronic device 100.
The learning part 1210 may further include a learning data pre-processor 1210-2 and a learning data selecting part 1210-3 to improve the determination result of the artificial intelligence model or to save the resources or time required to generate the artificial intelligence model.
The learning data pre-processor 1210-2 may pre-process the obtained data such that the obtained data may be used for learning to obtain context information and/or for learning to obtain output information for an event. The learning data pre-processor 1210-2 may process the obtained data into a predetermined format so that the model learning part 1210-4 may obtain output information of the event using the obtained data (e.g., to be compatible with, to adapt to, or to improve the processing of the model learning part 1210-4). For example, the learning data pre-processor 1210-2 may remove unnecessary text (e.g., interjections, exclamations, etc.) when the second model provides a response from the input information.
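A pre-processing pass of this kind might look as follows. This is an illustrative sketch only: the filler-word list and the output dict format are assumptions, standing in for whatever "predetermined format" the model learning part expects.

```python
import re

# Illustrative pre-processing pass (filler list and output format are
# assumptions): strip interjection/exclamation fillers, collapse
# whitespace, and emit every sample in one predetermined dict format.
FILLERS = re.compile(r"\b(wow|oh|um|uh)\b[!,.]?\s*", flags=re.IGNORECASE)

def preprocess(sample):
    text = FILLERS.sub("", sample).strip()
    text = re.sub(r"\s+", " ", text)
    return {"text": text, "length": len(text)}

clean = preprocess("Wow, the meeting moved to 3pm!")
```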
The learning data selection part 1210-3 may select data required or used for learning from the data obtained from the learning data obtaining part 1210-1 and/or the data preprocessed in the learning data preprocessor 1210-2. The selected learning data may be provided to the model learning part 1210-4. The learning data selection part 1210-3 may select learning data required or used for learning from the obtained or preprocessed data according to a preset selection criterion. The learning data selection part 1210-3 may also select learning data according to a preset selection criterion by learning through the model learning part 1210-4.
The learning portion 1210 may also include a model evaluation part 1210-5 (e.g., a model evaluator) to improve the determination result of the artificial intelligence model.
The model evaluation section 1210-5 may input evaluation data to the artificial intelligence model, and control the model learning section 1210-4 to learn again when a determination result output from the evaluation data does not satisfy a predetermined criterion. In this case, the evaluation data may be predefined data for evaluating the artificial intelligence model.
For example, if the number or ratio of evaluation data whose recognition result is inaccurate among the evaluation results of evaluation data of the trained artificial intelligence model exceeds a predetermined threshold, the model evaluation section 1210-5 may evaluate that the predetermined criterion is not satisfied.
On the other hand, in the case where there are a plurality of learned artificial intelligence models, the model evaluation part 1210-5 may evaluate whether each of the learned artificial intelligence models satisfies a predetermined criterion, and determine a model satisfying the predetermined criterion as the final artificial intelligence model. In this case, when there are a plurality of models satisfying the predetermined criterion, the model evaluation part 1210-5 may determine any one of them, or a preset number of models in descending order of evaluation score, as the final artificial intelligence model.
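The two evaluation rules above can be sketched together. The 20% error threshold below is an invented illustration value (the source only says "predetermined criterion"): a model passes only if its ratio of inaccurate evaluation results stays at or below the threshold, and among passing models the highest-scoring one becomes the final model.

```python
# Sketch of the evaluation logic (the 0.2 threshold is an assumption).
def passes(results, threshold=0.2):
    # results: list of booleans, True = correct on that evaluation sample
    return results.count(False) / len(results) <= threshold

def pick_final(candidates, threshold=0.2):
    # candidates: {name: (results, score)}; keep models that pass, then
    # choose the one with the highest evaluation score
    passing = {name: score for name, (results, score) in candidates.items()
               if passes(results, threshold)}
    return max(passing, key=passing.get) if passing else None

final = pick_final({
    "model_a": ([True] * 9 + [False], 0.91),
    "model_b": ([True] * 6 + [False] * 4, 0.99),  # too many errors
})
```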
Referring to part (b) of fig. 13A, the determination part 1220 according to one or more embodiments may include an input data obtaining part 1220-1 and a determination result providing part 1220-4.
In addition, the determination part 1220 may further selectively include at least one of an input data pre-processor 1220-2, an input data selection part 1220-3, and a model update part 1220-5.
The input data obtaining part 1220-1 may obtain data for obtaining context information or data required or used to obtain output information of an event. The determination result providing part 1220-4 may obtain context information by applying the input data obtained in the input data obtaining part 1220-1 as input values to the trained first model, and may obtain output information of the event by applying the input data obtained in the input data obtaining part 1220-1 as input values to the trained second model. The determination result providing part 1220-4 may apply data selected by the input data pre-processor 1220-2 and the input data selecting part 1220-3, which will be described below, as input values to the artificial intelligence model and obtain a determination result.
In one embodiment, the determination result providing part 1220-4 may apply the data obtained in the input data obtaining part 1220-1 to the trained first model and obtain context information around the electronic device 100.
In another embodiment, the determination result providing part 1220-4 may apply information about the event obtained in the input data obtaining part 1220-1 and context information to the trained second model and obtain output information of the event.
The determination part 1220 may further include an input data pre-processor 1220-2 and an input data selection part 1220-3 to improve the determination result of the artificial intelligence model or save resources or time for providing the determination result.
The input data pre-processor 1220-2 may pre-process the obtained data such that the obtained data may be used to obtain contextual information or output information for the event. The input data pre-processor 1220-2 may process the obtained data into a predefined format so that the determination result providing part 1220-4 may use the obtained data for obtaining context information or for obtaining output information of an event.
The input data selecting part 1220-3 may select data required or used for determination from the data acquired in the input data obtaining part 1220-1 and/or the data preprocessed in the input data preprocessor 1220-2. The selected data may be provided to the determination result providing part 1220-4. The input data selecting part 1220-3 may select some or all of the obtained or preprocessed data according to a preset selection criterion for determination. The input data selecting part 1220-3 may also select data according to a preset selection criterion through the learning of the model learning part 1210-4.
The model updating section 1220-5 may control the artificial intelligence model to be updated based on the evaluation of the determination result provided by the determination result providing section 1220-4. For example, the model updating section 1220-5 may provide the determination result provided by the determination result providing section 1220-4 to the model learning section 1210-4, thereby requesting the model learning section 1210-4 to further train or update the artificial intelligence model. In particular, the model update portion 1220-5 may retrain the artificial intelligence model based on the feedback information according to the user input. It should be appreciated that one or more of the components described above with reference to fig. 13A may be implemented as hardware (e.g., circuitry, processing cores, etc.) and/or software.
Fig. 13B is a diagram showing an example in which the electronic device A and the external server S are interlocked (or communicably connected) with each other to learn and determine data, according to an embodiment.
Referring to fig. 13B, the external server S may learn criteria for obtaining context information or output information of an event, and the electronic device A may obtain the context information or provide the output information of the event based on the learning result of the server S.
The model learning section 1210-4 of the server S may perform the function of the learning section 1210 shown in fig. 12. That is, the model learning part 1210-4 of the server S may learn criteria related to event information or context information to obtain output information of an event and how to obtain the output information of the event by using the information.
The determination result providing part 1220-4 of the electronic device A may obtain the output information of the event by applying the data selected by the input data selecting part 1220-3 to the artificial intelligence model generated by the server S. Alternatively, the determination result providing part 1220-4 of the electronic device A may receive the artificial intelligence model generated by the server S from the server S and obtain the output information of the event by using the received artificial intelligence model. It should be appreciated that one or more of the components described above with reference to fig. 13B may be implemented as hardware (e.g., circuitry, processing cores, etc.) and/or software.
Fig. 14 and 15 are flow diagrams of network systems using artificial intelligence models, in accordance with various embodiments.
In fig. 14 and 15, a network system using an artificial intelligence model may include a first element 1401, 1501 and a second element 1402, 1502.
The
An interface for transmitting and receiving data between the first element and the second element may be defined.
In addition, the
In fig. 14, a
In operation S1420, the
In operation S1430, the
In operation S1440, the
In operation S1450, the
The
In operation S1470, the
In operation S1480, the
In operation S1490, the
In fig. 15, in operation S1505, a first element 1501 may sense an event (e.g., determine occurrence of an event). The event may be an event for providing information, and the event may include various events such as a text message reception event, an e-mail reception event, a call reception event, an information request reception event, a push notification event, and the like.
In operation S1510, the first element 1501 may obtain data for sensing a context around the
In operation S1515, the first element 1501 may transmit data for sensing or determining a context to the second element.
In operation S1520, the second element 1502 may obtain information about a person located in a space where the electronic device 100 is located by using the first model.
In operation S1525, the second element 1502 may send the obtained context information (e.g., the obtained information about the person) to the first element 1501. In operation S1530, the first element 1501 may transmit information about an event and context information (e.g., information about a person) to the second element. When the first element 1501 transmits information about an event together with data for sensing a context in operation S1515, operations S1525 and S1530 may be omitted.
In operation S1535, the second element 1502 may obtain output information corresponding to the event by using the second model. In detail, the second element 1502 may input information about an event and context information (e.g., information about a person located in a space where the electronic device 100 is located) to the second model and obtain output information corresponding to the event.
In operation S1540, the second component 1502 may transmit output information of the event to the first component 1501.
In operation S1545, the first element 1501 may provide output information of the event. For example, first element 1501 can output information via at least one of a display, an audio output interface, a speaker, an LED, and the like.
In operation S1550, the first element 1501 may receive or determine feedback information according to a user input. The feedback information may be user reaction information regarding output information of the event, information regarding a user command input by the primary user after the output information of the event is provided, information found by the primary user after the output information of the event is output, and the like.
In operation S1555, the first element 1501 may transmit the input feedback information to the second element 1502.
In operation S1560, the second element 1502 may retrain the second model based on the input feedback information. Thus, the second element 1502 may reflect or take into account user feedback information and update the second model according to context.
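The S1505–S1560 exchange can be sketched as a toy round-trip between the two elements. All class, method, and field names below are invented for illustration; the two methods on the server side stand in for the trained first and second models, and `retrain` stands in for the feedback-driven update of operation S1560.

```python
# Toy end-to-end sketch of the S1505-S1560 exchange (all names invented):
# the first element (device) forwards sensed data and event info; the
# second element (server) runs both models and folds feedback back in.
class SecondElement:                          # server side
    def __init__(self):
        self.summarize_for_guests = True      # second-model "parameter"

    def context_from(self, sensed):           # stands in for the first model
        return {"people": sensed["tags"]}

    def output_for(self, event, ctx):         # stands in for the second model
        guests = set(ctx["people"]) - {"primary_user"}
        if guests and self.summarize_for_guests:
            return "New message."
        return event["body"]

    def retrain(self, negative_feedback):     # S1560: feedback-driven update
        if negative_feedback:
            self.summarize_for_guests = not self.summarize_for_guests

class FirstElement:                           # device side
    def __init__(self, server):
        self.server = server

    def handle(self, event, sensed):          # S1505-S1545 in one call
        ctx = self.server.context_from(sensed)
        return self.server.output_for(event, ctx)

server = SecondElement()
device = FirstElement(server)
out = device.handle({"body": "Budget attached"},
                    {"tags": ["primary_user", "guest"]})
```

After negative feedback is sent back (S1555) and the server retrains (S1560), the same event in the same context yields a different output.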
Fig. 16 is a flow diagram provided to explain a method of providing output information of an event according to a context by an electronic device according to an embodiment.
Referring to fig. 16, the
In operation S1620, the
In operation S1630, the
In operation S1640, the
In operation S1650, the
The above-described embodiments may be implemented as a software program comprising instructions stored on a machine-readable (e.g., computer-readable) storage medium. A machine is a device capable of calling stored instructions from a storage medium and operating according to the called instructions, and may include an electronic device (e.g., electronic device 100) according to the above-described embodiments. When an instruction is executed by a processor, the processor may perform a function corresponding to the instruction directly and/or by using other components under the control of the processor. The instruction may include code generated or executed by a compiler or interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" merely means that the storage medium does not include a signal but is tangible, and does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored in the storage medium.
According to an embodiment, the method according to the various embodiments described above may be provided as comprised in a computer program product. The computer program product may be used as a product for conducting transactions between a seller and a consumer. The computer program product may be distributed in the form of a machine-readable storage medium, such as a compact disc read only memory (CD-ROM), or online through an application STORE, such as a PLAY STORE. In the case of online distribution, at least a portion of the computer program product may be at least temporarily stored or temporarily generated in a server of a manufacturer, a server of an application store, and/or a storage medium such as a memory.
Each component (e.g., module or program) according to various embodiments may comprise a single entity or multiple entities, and some of the corresponding sub-components described above may be omitted, or another sub-component may be further added in various embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be combined to form a single entity that performs the same or similar functions as the corresponding elements performed before being combined. According to various embodiments, operations performed by a module, program, or other component may be executed sequentially, in parallel, iteratively, or heuristically, or at least some operations may be executed in a different order or omitted, or other operations may be added.