Method of providing text translation management data related to application and electronic device thereof

Document No.: 1132099 · Publication date: 2020-10-02 · Original language: Chinese

Note: This technology, "Method of providing text translation management data related to application and electronic device thereof," was designed and created on 2019-02-21 by 李始炯, 金范洙, 金宣廷, 金树完, 金在贤, 宋仁善, 李贤奭, and 崔智焕. Its main content includes the following abstract: Certain embodiments of the present disclosure relate to an apparatus and method for translating text included in an image by using an external electronic device in an electronic device. One method comprises the following steps: displaying a picture on a display, the picture including an object bearing text at a location within the picture; extracting the text; generating another text from the extracted text; and automatically overlaying the other text over the object in another picture on the display, the other picture including the object at another location within the other picture.

1. An electronic device, comprising:

a camera;

a display;

a transceiver;

a memory; and

one or more processors,

wherein the one or more processors are configured to:

display, through the display, a first image including one or more external objects, the first image being obtained by using the camera;

identify at least one external object, among the one or more external objects, corresponding to text during at least a portion of the time the first image is displayed;

transmit, through the transceiver, a partial image of the first image corresponding to the at least one external object to an external electronic device;

receive, through the transceiver, a corresponding text for the text from the external electronic device;

identify a motion of the electronic device or the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and

display the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.

2. The electronic device of claim 1, wherein the corresponding text is text of a first language recognized based on image recognition of the partial image, or text of a second language different from the first language, translated based on the text of the first language, and

wherein the one or more processors are configured to display the text of the first language or the text of the second language on the partial image through the display, based on the corresponding text.

3. The electronic device of claim 1, wherein the one or more processors are configured to:

receive region information corresponding to the text from the external electronic device, and set a region of interest in the first image based on the received region information;

detect a candidate region including pixels that are similar in terms of at least one of brightness or color, based at least on a comparison between pixels included in the region of interest or in a region adjacent to the region of interest; and

calibrate the region information based on the candidate region.

4. The electronic device of claim 3, wherein the one or more processors are configured to:

determine an additional attribute related to the corresponding text based on at least some regions of the first image corresponding to the calibrated region information; and

display text information on the at least one external object based on the additional attribute, and

wherein the additional attribute comprises at least one of a color, a size, or a font of the text, or a background color.

5. The electronic device of claim 1, wherein the one or more processors are configured to:

when a plurality of corresponding texts are received, compare reliabilities of the plurality of corresponding texts, and

display, on the at least one external object, one of the plurality of corresponding texts selected based on a result of the comparison.

6. The electronic device of claim 5, wherein the one or more processors are configured to detect the reliabilities of the plurality of corresponding texts based on positions of the plurality of corresponding texts in the partial image.

7. A method of an electronic device, comprising:

displaying, by a display, a first image obtained by using a camera of the electronic device;

transmitting at least one partial image and a whole image of the first image to an external electronic device through a transceiver of the electronic device;

when text information corresponding to the partial image or the whole image is received from the external electronic device through the transceiver, refining region information for displaying the text information based on the partial image or the whole image;

calibrating a position of the text information based on motion information of the electronic device or of at least one external object included in the first image; and

displaying the text information on a second image based on the refined region information and the position information.

8. The method of claim 7, wherein the text information comprises text corresponding to a first language recognized based on image recognition of the at least one partial image, or text corresponding to a second language different from the first language, translated based on the text corresponding to the first language.

9. The method of claim 7, further comprising:

determining whether to provide a translation service based on at least one of motion information of the electronic device or of at least one external object included in the first image, or a quality of the first image; and

transmitting the at least one partial image and the whole image of the first image to the external electronic device through the transceiver when it is determined that the translation service is to be provided.

10. The method of claim 7, further comprising:

extracting at least one partial image from the first image based on at least one of a history of using a translation service or distribution information of feature points included in the first image.

11. The method of claim 7, further comprising:

setting a region of interest in the partial image based on region information of the text information received from the external electronic device;

detecting a candidate region in the region of interest, the candidate region comprising at least one pixel that is similar in terms of at least one of a brightness or a color; and

refining the region information of the text information based on the candidate region.

12. The method of claim 11, further comprising:

determining an additional attribute related to the text information based on at least some regions of the first image corresponding to the refined region information of the text information; and

displaying the text information to overlap at least a portion of the second image based on the additional attribute,

wherein the additional attribute comprises at least one of a color, a size, or a font of the text, or a background color.

13. The method of claim 7, further comprising:

comparing reliabilities of a plurality of pieces of text information corresponding to at least some regions of the first image when the plurality of pieces of text information are received; and

selecting, based on a result of the comparison, one of the plurality of pieces of text information as the text information corresponding to the at least some regions.

14. The method of claim 13, further comprising:

detecting reliability of the text information based on a position of the text information in the partial image.

Technical Field

Certain embodiments of the present disclosure relate to an apparatus and method for providing a translation service of text included in an image obtained through a camera in an electronic device.

Background

With the enhancement of information and communication technologies and semiconductor technologies, various types of electronic devices are being developed into multimedia devices that provide various multimedia services. For example, the multimedia services may include at least one of a cellular phone service, a Voice over IP (VoIP) service, a messaging service, a broadcasting service, a wireless internet service, a camera service, an electronic payment service, or a media playback service.

The above information is presented merely as background information to aid in understanding the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.

Disclosure of Invention

Solution to the problem

The electronic device may provide various services by using a camera to enhance user convenience. For example, the electronic device may search for and output information about a product photographed by the camera. It may also provide a translation service for text included in images obtained by the camera.

The electronic device may translate text included in an image obtained through the camera into a language desired by the user, and may display the result. For example, the electronic device may recognize text included in a preview image obtained by the camera by performing Optical Character Recognition (OCR) on the preview image. The electronic device may translate the recognized text into the language desired by the user through a translation engine, and may render the translated text onto the text region of the preview image.
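The recognize-translate-render flow described above can be sketched minimally as follows. This is an illustrative assumption: the `recognize_text` and `translate_text` functions are hypothetical placeholders standing in for an OCR engine and a translation engine, neither of which is specified by this disclosure.

```python
def recognize_text(region_pixels):
    # Hypothetical OCR stub: a real engine would run character
    # recognition over the pixel data of the text region.
    return "Hello"

def translate_text(text, target_language):
    # Hypothetical translation-engine stub keyed by (text, language).
    dictionary = {("Hello", "ko"): "안녕하세요"}
    return dictionary.get((text, target_language), text)

def render_translation(text_region, target_language):
    """Recognize the text in a preview-image region and return the
    string to render back over that region, with its bounding box."""
    source_text = recognize_text(text_region["pixels"])
    translated = translate_text(source_text, target_language)
    return {"box": text_region["box"], "text": translated}

# Example: one text region of the preview image, translated to Korean.
region = {"box": (10, 20, 120, 40), "pixels": b"..."}
overlay = render_translation(region, "ko")
```

Rendering then amounts to drawing `overlay["text"]` inside `overlay["box"]` on the preview surface.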

When the electronic device runs the OCR and translation engines on the device itself, the performance of the translation service may be limited. To enhance that performance, the electronic device may download a database and a translation engine from an external device and store them in the electronic device, but this may consume a large amount of the electronic device's memory.

Certain embodiments of the present disclosure provide an apparatus and method for providing a translation service of text included in an image obtained through a camera by using an external electronic device in an electronic device.

According to some embodiments of the present disclosure, an electronic device includes a camera, a display, a transceiver, a memory, and a processor, wherein the processor is configured to: display, through the display, a first image including one or more external objects, the first image being obtained by using the camera; identify at least one external object, among the one or more external objects, corresponding to text during at least a portion of the time the first image is displayed; transmit, through the transceiver, a partial image of the first image corresponding to the at least one external object to an external electronic device; receive, through the transceiver, a corresponding text for the text from the external electronic device; identify a motion of the electronic device or the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and display the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.
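The motion-compensation step above can be illustrated with a short sketch. It assumes, purely for illustration, that the device records a per-frame displacement vector for the tracked object while the partial image is out for recognition; the disclosure does not prescribe this representation.

```python
def compensate_overlay(box, motion_samples):
    """Shift a text-overlay box by the total motion accumulated while
    the partial image was being processed by the external device.

    box: (x, y, width, height) of the text region in the first image.
    motion_samples: per-frame (dx, dy) displacements of the object.
    """
    total_dx = sum(dx for dx, _ in motion_samples)
    total_dy = sum(dy for _, dy in motion_samples)
    x, y, w, h = box
    # The compensated box is where the object now sits in the second image.
    return (x + total_dx, y + total_dy, w, h)

# The object drifted right and down over three frames in flight.
new_box = compensate_overlay((100, 50, 80, 20), [(3, 1), (2, 2), (1, 0)])
# new_box == (106, 53, 80, 20)
```

With no motion samples, the box is returned unchanged, so the overlay lands where the text was first detected.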

According to some embodiments of the present disclosure, a method of operating an electronic device includes: displaying a first image on a display of the electronic device, the first image being obtained by using a camera operatively connected to the electronic device and including one or more external objects; identifying at least one external object, among the one or more external objects, corresponding to text during at least a portion of the time the first image is displayed; transmitting a partial image of the first image corresponding to the at least one external object to an external electronic device; receiving a corresponding text for the text from the external electronic device; identifying a motion of the electronic device or the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and displaying the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.

According to some embodiments of the present disclosure, an electronic device includes a camera, a display, a transceiver, a memory, and a processor, wherein the processor is configured to: display, through the display, a first image obtained by using the camera; transmit, through the transceiver, at least one partial image and a whole image of the first image to an external electronic device; when text information corresponding to the partial image or the whole image is received from the external electronic device through the transceiver, refine region information for displaying the text information based on the partial image or the whole image; calibrate a position of the text information based on motion information of the electronic device or of at least one external object included in the first image; and display the text information on a second image based on the refined region information and the position information.
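The candidate-region detection used to refine the region information might, for example, be implemented as a region-growing pass over pixel brightness. The sketch below is one hypothetical approach under stated assumptions (a grayscale image as a list of rows, a fixed brightness tolerance), not the method the disclosure mandates.

```python
def grow_candidate_region(image, seed, tolerance=10):
    """4-connected region growing from `seed` (x, y), collecting pixels
    whose brightness lies within `tolerance` of the seed pixel."""
    h, w = len(image), len(image[0])
    seed_value = image[seed[1]][seed[0]]
    stack, region = [seed], set()
    while stack:
        x, y = stack.pop()
        # Skip already-visited or out-of-bounds coordinates.
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue
        # Skip pixels too different in brightness from the seed.
        if abs(image[y][x] - seed_value) > tolerance:
            continue
        region.add((x, y))
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region

# Toy grayscale image: a bright 2x2 text background on a dark field.
image = [
    [200, 200, 40],
    [200, 205, 40],
    [40, 40, 40],
]
candidate = grow_candidate_region(image, (0, 0))
# candidate covers the four bright pixels
```

The bounding box of the returned pixel set could then serve as the refined region information for placing the text overlay.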

According to some embodiments, there is provided a method for annotating a picture, the method comprising: displaying a picture including a text-bearing object at a location within the picture; extracting the text; and displaying another text on the object in another picture that includes the object at another location within the other picture.

Drawings

The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings in which:

FIG. 1 is a block diagram of an electronic device for managing data related to applications in a network environment, in accordance with certain embodiments of the present disclosure;

FIG. 2 is a block diagram illustrating a camera according to some embodiments of the present disclosure;

FIG. 3 is a flow diagram for providing translation services in an electronic device using an external electronic device, in accordance with certain embodiments of the present disclosure;

FIG. 4 is a flow diagram for selectively sending images for translation services in an electronic device to an external electronic device, in accordance with certain embodiments of the present disclosure;

FIG. 5 is a flow diagram for extracting a partial image of an image obtained by a camera in an electronic device, according to some embodiments of the present disclosure;

FIG. 6 is a view illustrating a configuration of an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 7 is a flow diagram for displaying translated text received from an external electronic device in an electronic device, in accordance with certain embodiments of the present disclosure;

FIG. 8A is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 8B is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 8C is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 8D is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 8E is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 8F is a view illustrating a screen of a translation service for an image obtained by a camera, according to some embodiments of the present disclosure;

FIG. 9 is a flow diagram for setting a display position of translated text in an electronic device, according to some embodiments of the present disclosure;

FIG. 10A is a view illustrating a configuration for setting a display position of translated text in an external electronic device, according to some embodiments of the present disclosure;

FIG. 10B is a view illustrating a configuration for setting a display position of translated text in an external electronic device, according to some embodiments of the present disclosure;

FIG. 10C is a view illustrating a configuration for setting a display position of translated text in an external electronic device, according to some embodiments of the present disclosure; and

FIG. 11 is a flow diagram for displaying translated text corresponding to a partial image in an electronic device, according to some embodiments of the present disclosure.

Detailed Description

Certain embodiments of the present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the disclosure in unnecessary detail. Further, terms used herein are defined according to the functions of the present disclosure. Accordingly, these terms may vary according to the intention and usage of the user or operator. That is, the terms used herein must be understood based on the description made herein.

FIG. 1 is a block diagram illustrating an electronic device 101 for managing data related to applications in a network environment 100, in accordance with some embodiments. Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), or with an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 101 may communicate with the electronic device 104 via the server 108. According to an embodiment, the electronic device 101 may include a processor 120, a memory 130, an input device 150, a sound output device 155, a display device 160 (e.g., a touchscreen display), an audio module 170, a sensor module 176 (e.g., a motion sensor), an interface 177, a haptic module 179, a camera 180, a power management module 188, a battery 189, a communication module 190 (e.g., wireless communication circuitry that may include a transmitter/receiver (transceiver), a modulator/demodulator (MODEM), and an oscillator, among others), a Subscriber Identity Module (SIM) 196, or an antenna module 197. In some embodiments, at least one of the components (e.g., the display device 160 or the camera 180) may be omitted from the electronic device 101, or one or more other components may be added to the electronic device 101. In some embodiments, some of the components may be implemented as a single integrated circuit. For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented to be embedded in the display device 160 (e.g., a display). The term "transceiver" may refer to a single transmitter/receiver, or to a group of transmitters and receivers.

The processor 120 may run, for example, software (e.g., the program 140) to control at least one other component (e.g., a hardware component or a software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or calculations. The term "processor," although used in the singular, should be understood to mean one or more processors. According to one embodiment, as at least part of the data processing or calculation, processor 120 may load commands or data received from another component (e.g., sensor module 176 or communication module 190) into volatile memory 132, process the commands or data stored in volatile memory 132, and store the resulting data in non-volatile memory 134. According to an embodiment, the processor 120 may include a main processor 121 (e.g., a Central Processing Unit (CPU) or an Application Processor (AP)) and an auxiliary processor 123 (e.g., a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a sensor hub processor, or a Communication Processor (CP)) that is operatively independent of or in conjunction with the main processor 121. Additionally or alternatively, the auxiliary processor 123 may be adapted to consume less power than the main processor 121, or be adapted specifically for a specified function. The auxiliary processor 123 may be implemented separately from the main processor 121 or as part of the main processor 121.

The auxiliary processor 123 may control at least some of the functions or states associated with at least one of the components of the electronic device 101 (e.g., the display device 160, the sensor module 176, or the communication module 190), instead of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active state (e.g., running an application). According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera 180 or the communication module 190) that is functionally related to the auxiliary processor 123.

The memory 130 may store various data used by at least one component of the electronic device 101 (e.g., the processor 120 or the sensor module 176). The various data may include, for example, software (e.g., program 140) and input data or output data for commands associated therewith. The memory 130 may include volatile memory 132 or non-volatile memory 134.

The program 140 may be stored in the memory 130 as software, and the program 140 may include, for example, an Operating System (OS)142, middleware 144, or an application 146.

The input device 150 may receive commands or data from outside of the electronic device 101 (e.g., a user) to be used by other components of the electronic device 101, such as the processor 120. The input device 150 may include, for example, a microphone, a mouse, or a keyboard.

The sound output device 155 may output a sound signal to the outside of the electronic device 101. The sound output device 155 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a record, and the receiver may be used for incoming calls. Depending on the embodiment, the receiver may be implemented separate from the speaker, or as part of the speaker.

The display device 160 may visually provide information to the exterior of the electronic device 101 (e.g., to a user). The display device 160 may include, for example, a display, a holographic device, or a projector, and control circuitry for controlling a respective one of the display, holographic device, and projector. According to embodiments, the display device 160 may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of a force caused by a touch.

The audio module 170 may convert sound into an electrical signal and vice versa. According to embodiments, the audio module 170 may obtain sound via the input device 150 or output sound via the sound output device 155 or a headset of an external electronic device (e.g., the electronic device 102) directly (e.g., wired) connected or wirelessly connected with the electronic device 101.

The sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an environmental state (e.g., state of a user) external to the electronic device 101 and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyroscope sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an Infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.

The interface 177 may support one or more particular protocols to be used to directly (e.g., wired) or wirelessly connect the electronic device 101 with an external electronic device (e.g., the electronic device 102). According to an embodiment, the interface 177 may include, for example, a high-definition multimedia interface (HDMI), a Universal Serial Bus (USB) interface, a Secure Digital (SD) card interface, or an audio interface.

The connection end 178 may include a connector via which the electronic device 101 may be physically connected with an external electronic device (e.g., the electronic device 102). According to an embodiment, the connection end 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).

The haptic module 179 may convert the electrical signal into a mechanical stimulus (e.g., vibration or motion) or an electrical stimulus that may be recognized by the user via his sense of touch or kinesthesia. According to an embodiment, the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulator.

The camera 180 may capture still images or moving images. According to an embodiment, the camera 180 may include one or more lenses, an image sensor, an image signal processor, or a flash.

The power management module 188 may manage power to the electronic device 101. According to an embodiment, the power management module 188 may be implemented as at least part of a Power Management Integrated Circuit (PMIC), for example.

The battery 189 may power at least one component of the electronic device 101. According to an embodiment, the battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.

The communication module 190 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and performing communication via the established communication channel. The communication module 190 may include one or more communication processors capable of operating independently of the processor 120 (e.g., an Application Processor (AP)) and supporting direct (e.g., wired) communication or wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a Global Navigation Satellite System (GNSS) communication module) or a wired communication module 194 (e.g., a Local Area Network (LAN) communication module or a Power Line Communication (PLC) module). A respective one of these communication modules may communicate with external electronic devices via a first network 198 (e.g., a short-range communication network such as bluetooth, wireless fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or a second network 199 (e.g., a long-range communication network such as a cellular network, the internet, or a computer network (e.g., a LAN or Wide Area Network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) that are separate from one another. The wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, such as the first network 198 or the second network 199, using subscriber information, such as an International Mobile Subscriber Identity (IMSI), stored in the subscriber identity module 196.

The antenna module 197 may transmit signals or power to, or receive signals or power from, the outside of the electronic device 101 (e.g., an external electronic device). According to an embodiment, the antenna module 197 may include a plurality of antennas, and at least one antenna suitable for a communication scheme used in a communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas by, for example, the communication module 190 (e.g., the wireless communication module 192). Signals or power may then be transmitted or received between the communication module 190 and the external electronic device via the selected at least one antenna.

At least some of the above components may be interconnected and may communicate signals (e.g., commands or data) between them via an inter-peripheral communication scheme (e.g., a bus, General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), or Mobile Industry Processor Interface (MIPI)).

According to an embodiment, commands or data may be sent or received between the electronic device 101 and the external electronic device 104 via the server 108 connected with the second network 199. Each of the electronic device 102 and the electronic device 104 may be the same type of device as the electronic device 101 or a different type of device from the electronic device 101. According to embodiments, all or some of the operations to be performed at the electronic device 101 may be performed at one or more of the external electronic device 102, the external electronic device 104, or the server 108. For example, if the electronic device 101 should automatically perform a function or service or should perform a function or service in response to a request from a user or another device, the electronic device 101 may request the one or more external electronic devices to perform at least part of the function or service instead of or in addition to performing the function or service. The one or more external electronic devices that received the request may perform the requested at least part of the functions or services or perform another function or another service related to the request and transmit the result of the execution to the electronic device 101. The electronic device 101 may provide the result as at least a partial reply to the request with or without further processing of the result. To this end, for example, cloud computing technology, distributed computing technology, or client-server computing technology may be used.
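The local-versus-remote execution decision described above can be illustrated with a toy client-server sketch. The class and function names below are invented for illustration only; the disclosure names no such API.

```python
class RemoteDevice:
    """Toy stand-in for an external electronic device or server that can
    execute some functions on behalf of the requesting device."""
    def __init__(self, supported):
        self.supported = supported

    def request(self, name, payload):
        # Perform the requested function if supported, else decline.
        if name in self.supported:
            return ("remote", name, payload)
        return None

def perform(name, payload, local_supported, remote_devices):
    """Run a function locally when possible; otherwise ask external
    devices to perform it and relay the first successful result."""
    if name in local_supported:
        return ("local", name, payload)
    for device in remote_devices:
        result = device.request(name, payload)
        if result is not None:
            return result
    raise RuntimeError("no device can perform " + name)

# A server that offers translation; the device itself only offers capture.
server = RemoteDevice({"translate"})
local_result = perform("capture", b"frame", {"capture"}, [server])
remote_result = perform("translate", b"frame", {"capture"}, [server])
```

Here `local_result` is produced on-device, while `remote_result` is relayed from the server, mirroring the cloud/client-server offloading the paragraph describes.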

An electronic device according to some embodiments may be one of various types of electronic devices. The electronic device may comprise, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to the embodiments of the present disclosure, the electronic devices are not limited to those described above.

It should be understood that certain embodiments of the present disclosure and the terms used therein are not intended to limit the technical features set forth herein to specific embodiments, but include various changes, equivalents, or alternatives to the respective embodiments. For the description of the figures, like reference numerals may be used to refer to like or related elements. It will be understood that a noun in the singular corresponding to a term may include one or more things unless the relevant context clearly dictates otherwise. As used herein, each of the phrases such as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C" may include all possible combinations of the items listed together in the respective phrase. As used herein, terms such as "1st" and "2nd" or "first" and "second" may be used simply to distinguish one element from another element, and do not limit the elements in other respects (e.g., importance or order). It will be understood that if an element (e.g., a first element) is referred to as being "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), with or without the term "operatively" or "communicatively," the first element may be connected to the second element directly (e.g., by wire), wirelessly, or via a third element.

As used herein, the term "module" may include units implemented in hardware, software, or firmware, and may be used interchangeably with other terms (e.g., "logic," "logic block," "portion," or "circuitry"). A module may be a single integrated component adapted to perform one or more functions or a minimal unit or portion of the single integrated component. For example, according to an embodiment, the modules may be implemented in the form of Application Specific Integrated Circuits (ASICs).

Certain embodiments set forth herein may be implemented as software (e.g., program 140) comprising one or more instructions stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., electronic device 101). For example, a processor (e.g., processor 120) of the machine (e.g., electronic device 101) may invoke and execute at least one of the one or more instructions stored in the storage medium, with or without the use of one or more other components under its control. This enables the machine to be operated to perform at least one function according to the invoked at least one instruction. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave); the term does not distinguish between data being semi-permanently stored in the storage medium and data being temporarily stored in the storage medium.

According to embodiments, methods according to certain embodiments of the present disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium, such as a compact disc read only memory (CD-ROM), or may be distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or may be distributed (e.g., downloaded or uploaded) directly between two user devices (e.g., smartphones). If distributed online, at least part of the computer program product may be temporarily generated, or at least part of the computer program product may be at least temporarily stored in a machine-readable storage medium, such as a memory of a manufacturer's server, a server of an application store, or a forwarding server.

According to some embodiments, each of the above components (e.g., modules or programs) may comprise a single entity or multiple entities. According to certain embodiments, one or more of the above components may be omitted, or one or more other components may be added. Alternatively or additionally, multiple components (e.g., modules or programs) may be integrated into a single component. In such a case, according to some embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as the corresponding one of the plurality of components performed the one or more functions prior to integration. Operations performed by a module, program, or another component may, according to some embodiments, be performed sequentially, in parallel, repeatedly, or in a heuristic manner, or one or more of the operations may be performed in a different order or omitted, or one or more other operations may be added.

Fig. 2 is a block diagram 200 illustrating the camera 180 according to some embodiments. Referring to fig. 2, the camera 180 may include a lens assembly 210, a flash 220, an image sensor 230, an image stabilizer 240, a memory 250 (e.g., a buffer memory), or an image signal processor 260. Lens assembly 210 may collect light emitted or reflected from objects of an image to be captured. The lens assembly 210 may include one or more lenses. According to an embodiment, the camera 180 may include a plurality of lens assemblies 210. In this case, the camera 180 may form a dual camera, a 360-degree camera, or a spherical camera, for example. Some of the plurality of lens assemblies 210 may have the same lens properties (e.g., angle of view, focal length, auto-focus, f-number, or optical zoom), or at least one lens assembly may have one or more lens properties that are different from the lens properties of another lens assembly. Lens assembly 210 may include, for example, a wide-angle lens or a telephoto lens.

The flash 220 may emit light, wherein the emitted light is used to enhance light reflected from an object. According to an embodiment, the flash 220 may include one or more Light Emitting Diodes (LEDs) (e.g., Red Green Blue (RGB) LEDs, white LEDs, Infrared (IR) LEDs, or Ultraviolet (UV) LEDs) or a xenon lamp. The image sensor 230 may acquire an image corresponding to an object by converting light emitted or reflected from the object and transmitted through the lens assembly 210 into an electrical signal. According to an embodiment, the image sensor 230 may include one image sensor selected from image sensors having different attributes (e.g., an RGB sensor, a Black and White (BW) sensor, an IR sensor, or a UV sensor), a plurality of image sensors having the same attribute, or a plurality of image sensors having different attributes. Each image sensor included in the image sensor 230 may be implemented using, for example, a Charge Coupled Device (CCD) sensor or a Complementary Metal Oxide Semiconductor (CMOS) sensor.

The image stabilizer 240 may move the image sensor 230 or at least one lens included in the lens assembly 210 in a particular direction or control an operational property of the image sensor 230 (e.g., adjust a readout timing) in response to movement of the camera 180 or the electronic device 101 including the camera 180. This allows compensating for at least part of a negative effect (e.g., image blur) of the movement on an image being captured. According to an embodiment, the image stabilizer 240 may sense such movement of the camera 180 or the electronic device 101 using a gyro sensor (not shown) or an acceleration sensor (not shown) disposed inside or outside the camera 180. According to an embodiment, the image stabilizer 240 may be implemented as, for example, an optical image stabilizer.

The memory 250 may at least temporarily store at least a portion of an image acquired via the image sensor 230 for a subsequent image processing task. For example, if image capture is delayed due to shutter lag or multiple images are quickly captured, the acquired raw image (e.g., a Bayer-pattern image or a high-resolution image) may be stored in the memory 250, and its corresponding copy image (e.g., a low-resolution image) may be previewed via the display device 160. Thereafter, if a specified condition is met (e.g., by user input or system command), at least part of the raw image stored in the memory 250 may be retrieved and processed by, for example, the image signal processor 260. According to embodiments, the memory 250 may be configured as at least a portion of the memory 130, or the memory 250 may be configured as a separate memory that operates independently of the memory 130.

The image signal processor 260 may perform one or more image processes on the image acquired via the image sensor 230 or the image stored in the memory 250. The one or more image processes may include, for example, depth map generation, three-dimensional (3D) modeling, panorama generation, feature point extraction, image synthesis, or image compensation (e.g., noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening, or softening). Additionally or alternatively, the image signal processor 260 may perform control (e.g., exposure time control or readout timing control) on at least one of the components (e.g., the image sensor 230) included in the camera 180. The image processed by image signal processor 260 may be stored back to memory 250 for further processing or may be provided to an external component external to camera 180 (e.g., memory 130, display device 160, electronic device 102, electronic device 104, or server 108). According to an embodiment, the image signal processor 260 may be configured as at least a portion of the processor 120, or the image signal processor 260 may be configured as a separate processor operating independently of the processor 120. If the image signal processor 260 is configured as a processor separate from the processor 120, at least one image processed by the image signal processor 260 may be displayed as it is by the processor 120 via the display device 160, or may be displayed after being further processed.

According to an embodiment, the electronic device 101 may include multiple cameras 180 having different attributes or functions. In this case, at least one camera 180 of the plurality of cameras 180 may form a wide-angle camera, for example, and at least another camera 180 of the plurality of cameras 180 may form a telephoto camera. Similarly, at least one camera 180 of the plurality of cameras 180 may form a front-facing camera, for example, and at least another camera 180 of the plurality of cameras 180 may form a rear-facing camera.

According to an embodiment, the processor 120 may control the communication module 190 to transmit an image (preview image) obtained through the camera 180 to an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) for a translation service. For example, the processor 120 may determine whether to transmit the image obtained by the camera 180 based on at least one of a quality of the image or motion information obtained by the sensor module 176. For example, when the sharpness (e.g., blur state) of the image obtained by the camera 180 is less than or equal to a reference value, the processor 120 may determine that there is a limitation on the translation of the image. That is, when the sharpness of the image is less than or equal to the reference value, the processor 120 may determine not to transmit the image to the external electronic device. For example, when the number of feature points of the image obtained by the camera 180 is less than a reference number, or when the feature points are spread apart by more than a reference distance, the processor 120 may determine that there is a limitation on the translation of the image. The reference distance may include a maximum distance between feature points that can form text. For example, when the motion of the electronic device 101 falls outside a reference range based on the motion information obtained by the sensor module 176, the processor 120 may determine that there is a limitation on the translation of the image obtained by the camera 180.
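The gating just described (a sharpness threshold, a feature-point count and spread check, and a device-motion range) can be sketched as follows. This is a minimal illustration; the threshold names and values are hypothetical assumptions, not taken from the disclosure:

```python
# Hypothetical reference values for the checks described above.
SHARPNESS_MIN = 100.0      # reference sharpness (e.g., a Laplacian-variance score)
MIN_FEATURES = 8           # reference number of feature points
MAX_FEATURE_SPREAD = 50.0  # reference distance between feature points (pixels)
MOTION_MAX = 1.0           # reference range for device-motion magnitude

def max_pairwise_distance(points):
    """Largest Euclidean distance between any two feature points."""
    best = 0.0
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            best = max(best, ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    return best

def should_request_translation(sharpness, feature_points, motion_magnitude):
    """Return True only if none of the limitations described apply."""
    if sharpness < SHARPNESS_MIN:            # image too blurry
        return False
    if len(feature_points) < MIN_FEATURES:   # too few feature points to form text
        return False
    if max_pairwise_distance(feature_points) > MAX_FEATURE_SPREAD:
        return False                         # points too spread out to be text
    if motion_magnitude > MOTION_MAX:        # device moving too much
        return False
    return True
```

When any check fails, the image is simply not transmitted, which matches the disclosure's behavior of declining the translation request rather than sending a low-quality frame.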
For example, when it is determined that an image obtained through the camera 180 is to be transmitted to the external electronic device, the processor 120 may control the communication module 190 to transmit at least one partial image and the entire image corresponding to the image to the external electronic device. For example, the processor 120 may extract the at least one partial image based on at least one of a history of using the translation service or a distribution of feature points of the image. The history of using the translation service may be represented as at least one text region probability model, which may include region information about areas of the display device 160 where text of an external object has been captured and displayed for the translation service in the electronic device 101. For example, the processor 120 may control the communication module 190 to transmit the at least one partial image and the entire image corresponding to the image to different external electronic devices.

According to an embodiment, the processor 120 may calibrate a display area of text translated into a different language, received from an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) through the communication module 190. For example, the processor 120 may receive, through the communication module 190, text included in an image (a partial image or the entire image) transmitted to the external electronic device, together with region information of the text (e.g., a position, size, and length of the text). The processor 120 may set a region of interest (ROI) corresponding to the text in the image transmitted to the external electronic device based on the region information of the text. The processor 120 may set pixels having the same brightness or color attribute in the region of interest as a text candidate region. The processor 120 may set the text display area by removing, from the text candidate region, outliers whose features do not fit the text. For example, an outlier may include at least one pixel located on a boundary of the region of interest in the text candidate region. For example, the text may include text recognized by an optical character recognition method or text translated by a translation engine.
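As a rough illustration of the ROI and candidate-region steps, the sketch below treats the image as a 2-D grid of brightness values, groups pixels close to the region's median brightness as the text candidate region, and drops candidates lying on the ROI boundary as outliers. The median-based grouping and the tolerance parameter are assumptions made for illustration; the disclosure does not specify the grouping criterion:

```python
def text_display_area(image, region, tolerance=10):
    """Hypothetical sketch: 'image' is a 2-D list of brightness values and
    'region' is (x, y, w, h) text region info received from the external
    device. Pixels whose brightness is close to the region's median form
    the text candidate region; candidates on the ROI boundary are treated
    as outliers and removed. Returns the calibrated (x, y, w, h) area."""
    x, y, w, h = region
    values = sorted(image[r][c] for r in range(y, y + h) for c in range(x, x + w))
    median = values[len(values) // 2]
    candidates = [
        (r, c)
        for r in range(y, y + h)
        for c in range(x, x + w)
        if abs(image[r][c] - median) <= tolerance
    ]
    # Remove outliers located on the boundary of the region of interest.
    inliers = [
        (r, c) for (r, c) in candidates
        if y < r < y + h - 1 and x < c < x + w - 1
    ]
    if not inliers:
        return None
    rows = [r for r, _ in inliers]
    cols = [c for _, c in inliers]
    return (min(cols), min(rows), max(cols) - min(cols) + 1, max(rows) - min(rows) + 1)
```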

According to an embodiment, the processor 120 may determine additional properties of the text through the display area of the text to display the text translated into a different language received from the external electronic device on the corresponding area. For example, the additional attribute of the text may include at least one of a color of the text, a background color, or a size or font of the text. For example, the processor 120 may identify attributes of brightness or color of pixels included in a display region of text, and may distinguish the text region from a background region. The processor 120 may set at least one of a color, a size, or a font of the text translated into the different language based on at least one of a size of the text region or a color attribute of pixels included in the text region. The processor 120 may set a color (background color) of a region (background) other than the text in the display region of the text based on the color attribute of the pixel included in the background region. For example, the color of a region other than text may be set based on the average value of the color attributes of pixels included in the background region. For example, the color of a region other than text may be set based on the color attribute most distributed among the color attributes of the pixels included in the background region.
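The two background-color strategies mentioned above (the average of the background pixels' color attributes, or the most distributed color among them) might be sketched as follows; the function name and RGB-tuple representation are illustrative assumptions:

```python
from collections import Counter

def background_color(background_pixels, mode="average"):
    """Pick a fill color for the area behind translated text, either as
    the per-channel average of the background pixels' (R, G, B) values or
    as the most frequent (most distributed) color among them."""
    if mode == "average":
        n = len(background_pixels)
        return tuple(sum(p[i] for p in background_pixels) // n for i in range(3))
    # mode == "mode": the most common color attribute in the background region
    return Counter(background_pixels).most_common(1)[0][0]
```

Either result could then be used as the background behind the translated text so it blends with the surrounding image.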

According to an embodiment, processor 120 may calibrate a display location of text received from an external electronic device (e.g., electronic device 102, electronic device 104, or server 108) that is translated into a different language based on motion information of electronic device 101 detected by sensor module 176. For example, the processor 120 may continuously collect motion information of the electronic device 101 through the sensor module 176 from the time an image (preview image) is obtained through the camera 180. The processor 120 may detect a difference between an image transmitted to the external electronic device and an image received from the external electronic device displaying text translated into a different language based on the motion information of the electronic device 101. The processor 120 may calibrate the display position of the text translated into the different language based on the difference between the images.
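A minimal sketch of this position calibration, assuming the motion samples collected by the sensor have already been converted to per-frame on-screen displacements (an assumption; a real implementation must map raw sensor readings to pixel offsets):

```python
def accumulate_displacement(motion_samples):
    """Sum per-frame (dx, dy) displacements collected between sending the
    image and receiving the translated text."""
    dx = sum(s[0] for s in motion_samples)
    dy = sum(s[1] for s in motion_samples)
    return (dx, dy)

def calibrated_text_position(position, displacement):
    """'position' is the (x, y) reported for the text in the transmitted
    image; 'displacement' is the accumulated on-screen shift. The names
    are illustrative, not from the disclosure."""
    (x, y), (dx, dy) = position, displacement
    return (x + dx, y + dy)
```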

According to an embodiment, the processor 120 may refine the results of the translation service based on the text being translated into a different language corresponding to a plurality of images (e.g., partial images or whole images) received from an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) through the communication module 190. For example, when receiving text translated into a different language corresponding to one of the images transmitted to the external electronic device, the processor 120 may determine whether text corresponding to another image exists in an area for displaying the text translated into the different language. When there is no text corresponding to another image in the corresponding region, the processor 120 may render text received from the external electronic device that is translated into a different language for display in the corresponding region. When there is text corresponding to another image in the corresponding region, the processor 120 may select text having relatively high reliability as text to be displayed in the corresponding region from the text corresponding to the other image and text translated into a different language received from the external electronic device. For example, the reliability of the text may be calculated based on at least one of a location of the text in the image or an accuracy of the text.
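One way to sketch this selection step: the overlap test follows the description, while the reliability scoring (OCR accuracy minus a penalty for text sitting at the image border) is an illustrative assumption about how "location of the text in the image or accuracy of the text" might be combined:

```python
def pick_text(candidates, display_region):
    """Among translated-text candidates whose regions overlap the display
    region, keep the one with the highest (assumed) reliability score.
    Each candidate is {"text": ..., "region": (x, y, w, h), "accuracy": ...}."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def reliability(c):
        # Assumed score: OCR accuracy, minus a penalty when the text sits
        # at the image border (the position-based term).
        x, y, _, _ = c["region"]
        border_penalty = 0.2 if (x == 0 or y == 0) else 0.0
        return c["accuracy"] - border_penalty

    in_region = [c for c in candidates if overlaps(c["region"], display_region)]
    if not in_region:
        return None
    return max(in_region, key=reliability)["text"]
```

When no candidate from another image overlaps, `pick_text` returns None and the newly received translated text can be displayed directly, as the paragraph describes.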

Fig. 3 is a flow diagram 300 for providing translation services by using an external electronic device in an electronic device, in accordance with certain embodiments of the present disclosure. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).

Referring to fig. 3, in operation 301, an electronic device (e.g., processor 120) may display a first image including at least one external object obtained by a camera (e.g., camera 180 of fig. 1) on a display device (e.g., display device 160 of fig. 1). For example, the first image may include a preview image obtained by the camera 180. One of the external objects may carry text.

In operation 303, an electronic device (e.g., the processor 120) may transmit at least one partial image of the first image to an external electronic device. For example, the processor 120 may determine whether to transmit the preview image based on at least one of a sharpness (e.g., blur state) of the preview image, a number of feature points of the preview image, or motion information obtained by the sensor module 176. When it is determined that the preview image is to be transmitted to the external electronic device, the processor 120 may control at least one partial image and the entire image of the preview image to be transmitted to the external electronic device through the communication module 190. For example, the partial image may be extracted from the preview image based on at least one of a history of using the translation service (e.g., a text region probability model) or a distribution of feature points of the image. For example, the processor 120 may control the at least one partial image and the entire image of the preview image to be transmitted to different external electronic devices through the communication module 190. For example, the processor 120 may control transmission of at least one image (e.g., at least one of a partial image and the entire image) and language information for translating text included in the image to an external electronic device through the communication module 190. In some embodiments, a partial image may be extracted so that text can be extracted as one of the objects.

In operation 305, an electronic device (e.g., the processor 120) may identify a motion of the electronic device or an external object included in the first image. For example, the processor 120 may collect, by the sensor module 176, motion information of the electronic device occurring when transmitting an image for translating text included in the image and receiving text corresponding to the image.

In operation 307, the electronic device (e.g., the processor 120) may determine whether text (text information) corresponding to the partial image transmitted to the external electronic device is received. For example, the text corresponding to the partial image may include text in a second language (e.g., Korean) that is a translation of text in a first language (e.g., English) extracted from the image. For example, the processor 120 may receive the text corresponding to the partial image and region information of the text in the corresponding image (e.g., a position, size, and length of the text). However, the present disclosure is not limited to language translation. In some embodiments, the received text may be a synonym of the text. In other embodiments, the received text may be an update of the text. For example, if the text is time-varying information, the received text may include an update to the information.

When the text corresponding to the partial image transmitted to the external electronic device is not received (no in operation 307), the electronic device (e.g., the processor 120) may continuously recognize the motion of the electronic device or the external object included in the first image in operation 305.

When receiving text corresponding to the partial image transmitted to the external electronic device (yes in operation 307), the electronic device (e.g., the processor 120) may compensate for a position of the text corresponding to the partial image based on motion information of the electronic device or an external object included in the first image and may display the text on a display device (e.g., the display device 160 of fig. 1) in operation 309.

For example, the processor 120 may detect a position change of the first image displayed on the display device 160 or the external object included in the first image, which occurs between the time when the first image is obtained and the current time, based on the motion information of the electronic device or the external object included in the first image. The processor 120 may compensate for a position of text corresponding to the partial image based on a change in position of the first image or an external object included in the first image, and may control display of the text through the display device 160 to cover at least one external object.

The detection of a change in position of the external object from the first image to the second image may be performed in various ways. In some embodiments, the camera may encode the first image and the second image using MPEG. The motion information may be determined by examining the motion vectors between the first image and the second image. In one embodiment, the electronic device may create the motion vectors by encoding the first image as an I-frame according to an MPEG standard and encoding the second image as a P-frame that depends on data of the first image. The motion vectors of nearby objects may then be used to determine the motion.
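A toy version of the block matching that underlies MPEG motion vectors is sketched below. Real encoders do this per macroblock over much larger search windows; this single-block, sum-of-absolute-differences version is only illustrative:

```python
def block_motion(prev, curr, block, search=2):
    """Find the (dx, dy) within a small search window for which a block of
    the previous frame best matches the current frame, using the sum of
    absolute differences (SAD). Frames are 2-D lists of pixel values;
    'block' is (bx, by, bw, bh) in the previous frame."""
    bx, by, bw, bh = block

    def sad(dx, dy):
        total = 0
        for r in range(bh):
            for c in range(bw):
                total += abs(prev[by + r][bx + c] - curr[by + dy + r][bx + dx + c])
        return total

    best, best_cost = (0, 0), sad(0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Only evaluate offsets that keep the block inside the frame.
            if 0 <= by + dy and by + dy + bh <= len(curr) and \
               0 <= bx + dx and bx + dx + bw <= len(curr[0]):
                cost = sad(dx, dy)
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best
```

The returned (dx, dy) plays the role of the motion vector used above to estimate how far the text-bearing object has shifted between the two images.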

Fig. 4 is a flow diagram 400 for selectively sending images for translation services in an electronic device to an external electronic device, in accordance with certain embodiments of the present disclosure. The following description may be about an operation of transmitting at least one partial image of the first image obtained by the camera 180 to the external electronic device in operation 303 of fig. 3. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).

Referring to fig. 4, when a first image (e.g., a preview image) obtained by a camera (e.g., camera 180 of fig. 1) is displayed on a display device (e.g., display device 160 of fig. 1) (e.g., operation 301 of fig. 3), an electronic device (e.g., processor 120) may identify a quality of the first image in operation 401. For example, the quality of the first image may include at least one of a sharpness (e.g., blur state) of the first image, a number of feature points included in the first image, or a distribution of feature points included in the first image.

In operation 403, the electronic device (e.g., the processor 120) may identify a motion of the electronic device or at least one external object included in the first image. For example, the processor 120 may collect motion information of the electronic device 101 or at least one external object included in the first image from the time when the first image is obtained by the camera 180. For example, the motion of the electronic device 101 may be obtained by the sensor module 176 (e.g., acceleration sensor, gravity sensor).

In operation 405, the electronic device (e.g., the processor 120) may determine whether to request translation of the first image based on the quality of the first image and motion information of the electronic device or at least one external object included in the first image. For example, when the resolution (e.g., blur state) of the first image is higher than or equal to a reference value (i.e., good), the processor 120 may determine to provide a translation service for the first image. As described above, although the present embodiment uses translation, the present disclosure is not limited to translation. For example, the processor 120 may determine to provide a translation service of the first image when the number of feature points of the first image is greater than a reference number or the feature points are concentrated on a certain area. For example, when the motion of the electronic device 101 obtained by the sensor module 176 falls within the reference range, the processor 120 may determine to provide a translation service of the first image. For example, when the motion of the preview image obtained by the camera 180 falls within the reference range, the processor 120 may determine to provide a translation service of the first image. For example, the motion of the preview image may include a motion of at least one object included in the preview image. The processor 120 may detect motion of an object included in the preview image by comparing successive preview images (e.g., pixels forming an image) obtained by the camera 180.

When it is determined that translation of the first image is not requested (e.g., no in operation 405), the electronic device (e.g., the processor 120) may impose a restriction on the transmission of the first image. For example, when it is determined that translation of the first image is not requested, the processor 120 may control the display device 160 to display a guide message indicating that there is a limitation on the translation service. For example, the guide message may include the reason the translation service is restricted.

When it is determined that translation of the first image is requested (e.g., yes in operation 405), the electronic device (e.g., the processor 120) may transmit at least one partial image and the whole image of the first image to at least one external electronic device. For example, the processor 120 may extract at least one partial image from the first image based on at least one of a history of using the translation service (e.g., a text region probability model) or a distribution of feature points in the first image. The processor 120 may transmit the extracted at least one partial image and the whole image to at least one external electronic device through the communication module 190.

Fig. 5 is a flow diagram 500 for extracting a partial image of an image obtained by a camera in an electronic device, according to some embodiments of the present disclosure. Fig. 6 illustrates a configuration 600 of images obtained by a camera, according to some embodiments of the present disclosure. The following description may be about the operation of transmitting an image to an external electronic device in operation 407 of fig. 4. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120). Object 610 is a text-bearing object. The electronic device may take a picture 620 of the object bearing the text and extract the partial images 660 and 662 containing the text.

Referring to fig. 5, when a translation service (e.g., operation 405 of fig. 4) is provided for a first image (e.g., a preview image) obtained by a camera (e.g., the camera 180 of fig. 1), the electronic device (e.g., the processor 120) may identify a probabilistic model of at least one text region corresponding to the history of using the translation service in operation 501. For example, the probabilistic model of the text region corresponding to the history of using the translation service may include region information about areas of the display device 160 on which text of external objects was previously photographed and displayed for the translation service. The probabilistic model of the text region corresponding to the history of using the translation service may be refined as the translation service is provided, and may include at least one region.

In operation 503, the electronic device (e.g., the processor 120) may identify a probabilistic model of a text region corresponding to image analysis information of the first image obtained by the camera (e.g., the camera 180 of fig. 1). For example, the probabilistic model of the text region corresponding to the image analysis information of the first image may include at least one piece of region information set based on the distribution of feature points extracted from the first image. For example, the probabilistic model of the text region corresponding to the image analysis information of the first image may include information about at least one region within the first image on which a number of feature points greater than or equal to a reference number is concentrated.

At operation 505, the electronic device (e.g., the processor 120) may set at least one text region of the first image. The at least one text region of the first image may be set based on a probabilistic model of the text region. The probabilistic model may correspond to at least one of the history of using the translation service or the image analysis information of the first image. For example, the processor 120 may control the preview image 620, obtained through the camera 180 with respect to the external object 610, to be displayed through the display device 160, as shown in fig. 6. When the probabilistic model of the text region corresponding to the history of using the translation service is located at the center of the display device 160, the processor 120 may set at least a portion of the center of the preview image 620 as a text region of the first image. The processor 120 may also set at least a portion of the center of the preview image 620 as a text region of the first image when the feature points are concentrated at the center of the preview image 620 by the text "apple" and "banana" within the preview image 620.

The processor 120 may set, as a text region of the first image, a region where one of the probabilistic models of text regions corresponding to the history of using the translation service at least partially overlaps a region on which the feature points are concentrated. For example, the text regions of the first image may include at least some regions estimated to contain text within the first image. For example, the preview image 620 may include an image 630 of the external object 610, a language 650 of the text included in the image, and a language 652 into which the text is to be translated. The display device 160 may display a service list 640 related to the preview image 620 on at least some areas, as shown in fig. 6. The service list 640 may include a menu (e.g., "text") 642 for selecting the translation service.
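The overlap test described above, which keeps history-model regions that coincide with feature-dense areas, can be sketched as follows. The (x, y, w, h) rectangle representation and the function names are assumptions for illustration:

```python
def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles, or None if disjoint."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def text_regions(history_regions, feature_regions):
    """Keep history-model regions that at least partially overlap
    a region on which feature points are concentrated."""
    out = []
    for h in history_regions:
        for f in feature_regions:
            if intersect(h, f) is not None:
                out.append(h)
                break
    return out
```

In the fig. 6 example, a centered history region would survive this filter because the text "apple" and "banana" concentrates feature points at the center of the preview image.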

In operation 507, the electronic device (e.g., the processor 120) may extract at least one partial image corresponding to the at least one text region of the first image. For example, the processor 120 may extract the partial images 660 and 662 corresponding to the respective text regions of the preview image 620 of fig. 6. For example, the processor 120 may generate partial images 660 and 662 corresponding to respective text regions that are not contiguous within the preview image 620. In some embodiments, the partial images 660 and 662 may be selected to contain primarily the text and surrounding pixels of the image. For example, OCR may be used to identify text in the image, and the partial images 660 and 662 may be selected to extract the portions of the image that primarily include the text and surrounding pixels.
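The extraction of partial images around estimated text regions can be sketched as a crop with a small margin of surrounding pixels. The margin value and the row-of-pixels image representation are illustrative assumptions:

```python
def crop_with_margin(image, box, margin=2):
    """Crop a text region from an image, keeping a few surrounding pixels.

    image: list of rows, each row a list of pixel values.
    box: (x, y, w, h) text region; margin: illustrative padding in pixels.
    The crop is clamped to the image bounds.
    """
    x, y, w, h = box
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(len(image[0]), x + w + margin)
    y1 = min(len(image), y + h + margin)
    return [row[x0:x1] for row in image[y0:y1]]

# A 6x6 synthetic image; crop a 2x2 region with a 1-pixel margin.
image = [[r * 10 + c for c in range(6)] for r in range(6)]
cropped = crop_with_margin(image, (2, 2, 2, 2), margin=1)
```

Because the crop is clamped to the image bounds, a region near the edge of the preview image may lose part of its margin, which is one way a partial image can end up clipping text, as in partial image 662.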

In some cases, a partial image (e.g., partial image 662) may not properly isolate the text. For example, the partial image 662 includes the text "apple" and a portion of the text "banana", where the bottom portion of "banana" is clipped.

In operation 509, the electronic device (e.g., the processor 120) may transmit at least one partial image and a whole image of the first image to at least one external electronic device. For example, the processor 120 may control the partial images 660 and 662 extracted from the preview image 620 of fig. 6 to be transmitted to at least one external electronic device through the communication module 190. For example, the processor 120 may control the first partial image 660, the second partial image 662, and the entire image 620 to be transmitted to different external electronic devices through the communication module 190.

Fig. 7 is a flow diagram 700 for displaying, in an electronic device, translated text (or otherwise corresponding text) received from an external electronic device, in accordance with some embodiments of the present disclosure. Figs. 8A, 8B, 8C, 8E, and 8F illustrate screens 800 for a translation service for images obtained by a camera, according to some embodiments of the present disclosure. Fig. 8D illustrates a screen 800 of the translation service in which the object has moved, according to some embodiments of the present disclosure. The following description may be about the operation of displaying text corresponding to the partial image in operation 307 of fig. 3. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).

Referring to fig. 7, when text corresponding to an image is received from an external electronic device (e.g., operation 307 of fig. 3), the electronic device (e.g., the processor 120) may calibrate the text received from the external electronic device in operation 701. For example, when text corresponding to an image is received from the external electronic device, the processor 120 may identify region information (e.g., the position, length, and size of the text) of the text extracted by the external electronic device. The processor 120 may refine the region information of the text based on a text display region estimated from the image. For example, the image may be any one of the plurality of images (the at least one partial image and the whole image) related to the first image that were transmitted to the external electronic device for the translation service of the first image.

In operation 703, the electronic device (e.g., the processor 120) may determine whether there is text corresponding to another image received from the external electronic device. For example, the processor 120 may determine whether text corresponding to another of the at least one partial image and the whole image transmitted to the external electronic device was received at a previous time. For example, the other image may be any image, among the plurality of images (the at least one partial image and the whole image) related to the first image and transmitted to the external electronic device for the translation service, other than the image whose text was just received.

When there is text corresponding to another image received from the external electronic device at a previous time (e.g., yes in operation 703), the electronic device (e.g., the processor 120) may refine a result of the translation service received from the external electronic device at the previous time based on the text corresponding to the image in operation 705. For example, when text corresponding to another image received at a previous time exists at the same position as that of text corresponding to an image received from an external electronic device, the processor 120 may select text to be displayed at the corresponding position based on the reliability of each text. For example, the reliability of the text may be determined based on the location of the text or the accuracy of the text in an image (partial image or whole image) that is sent to the external electronic device to extract the text. For example, the accuracy of the text may indicate the accuracy as to whether the text received from the external electronic device may be interpreted as text in a particular language. For example, when there is a difference in position between text corresponding to an image received from an external electronic device and text corresponding to another image received at a previous time, the processor 120 may add the text corresponding to the image to the result of the translation service received from the external electronic device.
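The refinement step described above, where overlapping results from different images are resolved by reliability, can be sketched as a merge keyed by a coarse display position. The dictionary representation is an assumption for illustration:

```python
def merge_results(current, previous):
    """Merge two translation results for the same preview image.

    current/previous: dicts mapping a coarse display position to a
    (text, reliability) pair. When both results cover the same position,
    keep the higher-reliability text; positions unique to either result
    are simply added.
    """
    merged = dict(previous)
    for pos, (text, rel) in current.items():
        if pos not in merged or rel > merged[pos][1]:
            merged[pos] = (text, rel)
    return merged

# Fig. 8A/8B example: the clipped "banana" was first misread ("버터"),
# then replaced by the more reliable result from the first partial image.
previous = {(0, 0): ("사과", 0.8), (0, 1): ("버터", 0.4)}
current = {(0, 1): ("바나나", 0.9)}
merged = merge_results(current, previous)
```

Texts at positions that do not overlap an earlier result are added to the displayed result rather than replacing anything, matching the last sentence above.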

When there is no text corresponding to another image received from the external electronic device at a previous time (e.g., no in operation 703), or when the result of the translation service is completed (e.g., operation 705), the electronic device (e.g., the processor 120) may calibrate a display position of the text received from the external electronic device based on motion information of the electronic device or an external object included in the first image in operation 707. For example, the processor 120 may detect a difference between a first image transmitted to the external electronic device for a translation service and a second image obtained through the camera 180 while displaying text received from the external electronic device based on motion information of the electronic device or an external object included in the first image. The processor 120 may calibrate a display position of text received from the external electronic device based on a difference between the first image and the second image. For example, motion information of the electronic device or an external object included in the first image may be continuously collected from the time when the first image for the translation service is obtained.

In some embodiments, the first image may comprise a live preview image provided by the camera and displayed on the display before a still image is captured. The second image may comprise the live image provided by the camera when the corresponding text is received. During the time when the image is sent to the external device and the corresponding text is received, the object may have moved, or the camera may have moved. Thus, the object bearing the text may be located at a different position in the second image. To place the received corresponding text on the object bearing the text, the motion is therefore compensated from the original position of the text in the first image.
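The motion compensation described above amounts to shifting the text anchor by the object's displacement between the frame that was sent for translation and the frame shown now. A minimal sketch, with coordinate tuples as an assumed representation:

```python
def compensated_position(original_pos, first_frame_pos, second_frame_pos):
    """Shift a text anchor by the object's displacement between frames.

    original_pos: (x, y) position of the text in the first image.
    first_frame_pos / second_frame_pos: (x, y) position of the tracked
    object in the first and second (current) frames.
    """
    dx = second_frame_pos[0] - first_frame_pos[0]
    dy = second_frame_pos[1] - first_frame_pos[1]
    return (original_pos[0] + dx, original_pos[1] + dy)

# The object moved 10 px right and 5 px up while translation was pending,
# so the overlay position moves with it.
print(compensated_position((10, 20), (100, 100), (110, 95)))  # (20, 15)
```

In the disclosure, the displacement would come from the continuously collected motion information of the electronic device or the external object, not from a single pair of coordinates.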

In operation 709, the electronic device (e.g., the processor 120) may determine a text color and a background color for displaying the text received from the external electronic device based on color information of a text display area for displaying the text received from the external electronic device within the first image.

For example, note that in the second image, the colors of the pixels forming the object bearing the text may have changed. For example, if the object has moved, the illumination may have changed, resulting in different pixel colors.
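Operation 709 can be sketched as sampling the text display area and deriving a contrasting text color. The luminance formula is the common Rec. 601 weighting, and the black-or-white text fallback is an illustrative assumption rather than the disclosure's exact method:

```python
def overlay_colors(region_pixels):
    """Pick background and text colors for overlaying translated text.

    region_pixels: iterable of (r, g, b) pixels sampled from the text
    display area. Background is the mean region color; text is black or
    white, whichever contrasts more with that background.
    """
    n = rs = gs = bs = 0
    for r, g, b in region_pixels:
        rs += r
        gs += g
        bs += b
        n += 1
    bg = (rs // n, gs // n, bs // n)
    # Rec. 601 luma weighting of the background color.
    luminance = 0.299 * bg[0] + 0.587 * bg[1] + 0.114 * bg[2]
    text = (0, 0, 0) if luminance > 127 else (255, 255, 255)
    return bg, text
```

Sampling from the current (second) image rather than the first image accounts for the illumination changes noted above.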

In operation 711, the electronic device (e.g., the processor 120) may superimpose the text received from the external electronic device on at least a portion of the preview image based on the calibrated display position of the text, the text color, and the background color. For example, when text corresponding to the second partial image 662 of fig. 6 and translated into a different language is received at a first time, the processor 120 may display, through the display device 160, the translated text "사과" (apple in Korean) and "버터" (butter in Korean) 820 on the area corresponding to the corresponding text in the preview image 810, as shown in fig. 8A. When text corresponding to the first partial image 660 of fig. 6 and translated into a different language is received at a second time, the processor 120 may compare the reliability of the text corresponding to the first partial image 660 with the reliability of the text corresponding to the second partial image 662, and may select the text corresponding to the first partial image 660 ("바나나" (banana in Korean)). For example, when selecting the text corresponding to the first partial image 660, the processor 120 may display, through the display device 160, the translated text "사과" (apple in Korean) and "바나나" (banana in Korean) 830 on the area corresponding to the corresponding text in the preview image 810, as shown in fig. 8B. When text corresponding to the whole image 620 of fig. 6 and translated into a different language is received at a third time, the processor 120 may display, through the display device 160, the translated text "사과" (apple in Korean), "바나나" (banana in Korean), and "당근" (carrot in Korean) 840 on the area corresponding to the corresponding text in the preview image 810, as shown in fig. 8C. For example, as shown in fig. 8D, when the position of the external object included in the first image obtained through the camera 180 is changed (850), the processor 120 may display, through the display device 160, the translated text "사과" (apple in Korean), "바나나" (banana in Korean), and "당근" (carrot in Korean) 860 on a display position calibrated based on the motion information of the electronic device 101 or the external object included in the first image, as shown in fig. 8E.

According to an embodiment, the electronic device 101 (e.g., the processor 120) may set a language for translating the text extracted from the first image. For example, when an input (e.g., a touch input) is detected on the language 652 for translation displayed on the preview image 620 of fig. 6, the processor 120 may control to display a list 870 of languages available for translation through the display device 160, as shown in fig. 8F. The processor 120 may set a language selected from the list of available languages 870 as the language for translating the text extracted from the first image. As noted above, the present disclosure is not limited to language translation. In some embodiments, instead of selecting from the language list 870, the user may select whether the corresponding text is a synonym, updated information, or any other kind of correspondence.

According to an embodiment, the electronic device 101 (e.g., the processor 120) may recognize text included in the image through a first external electronic device (e.g., a first server) and may translate the text included in the image into text of a different language through a second external electronic device (e.g., a second server). For example, the processor 120 may control at least one partial image and a whole image of the preview image to be transmitted to the first external electronic device through the communication module 190. When receiving text extracted from the image and corresponding to the first language from the first external electronic device, the processor 120 may control transmission of the text corresponding to the first language to the second external electronic device through the communication module 190. As in operations 705 through 711, the processor 120 may control displaying, through the display device 160, the text translated into the second language received from the second external electronic device by compensating for a motion difference of the external object included in the first image. For example, the processor 120 may calibrate text corresponding to a first language while text corresponding to the first language is transmitted to a second external electronic device and translated into text corresponding to a second language (operation 701). As noted above, the present disclosure is not limited to language translation, and may include various other relationships, such as synonyms and updates, to name a few.
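The two-server pipeline described above can be sketched as two chained calls. The callables standing in for the first external electronic device (text recognition) and the second (translation) are assumptions standing in for the network requests made through the communication module:

```python
def translate_via_servers(image, ocr_server, translation_server):
    """Two-stage pipeline: the first server extracts text from the image,
    the second translates each extracted text. Both callables are
    placeholders for network calls to the external electronic devices."""
    extracted = ocr_server(image)                      # first language texts
    return [translation_server(t) for t in extracted]  # second language texts

# Illustrative stand-ins for the two servers.
ocr = lambda img: ["apple", "banana"]
translate = lambda t: {"apple": "사과", "banana": "바나나"}[t]
result = translate_via_servers(None, ocr, translate)
```

Splitting the work across two servers means the position calibration of operation 701 can run while the second request is still pending, as the paragraph above notes.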

FIG. 9 is a flow chart 900 for setting a display position of translated text or corresponding text in an electronic device according to some embodiments of the present disclosure. Fig. 10A illustrates a configuration 1000 for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure, fig. 10B illustrates a configuration 1000 for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure, and fig. 10C illustrates a configuration 1000 for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure. The following description may be about an operation of calibrating text received from an external electronic device in operation 701 of fig. 7. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).

Referring to fig. 9, when text corresponding to an image is received from an external electronic device (e.g., yes in operation 307 of fig. 3), the electronic device (e.g., the processor 120) may set at least a portion of the image corresponding to the text received from the external electronic device as a region of interest in operation 901. For example, the processor 120 may identify region information of the text received from the external electronic device through the communication module 190. For example, the region information of the text may include at least one of the position, size, and length of the text extracted from the image. The processor 120 may set at least a portion of the image corresponding to the text received from the external electronic device as a region of interest based on the region information of the text. For example, the processor 120 may set at least a portion of the partial image 1010 as a region of interest 1020, as shown in fig. 10A. For example, the region of interest may be set to be larger or smaller than the region indicated by the region information of the text received from the external electronic device.

In operation 903, the electronic device (e.g., the processor 120) may identify candidate regions for displaying text within the region of interest. For example, as shown in fig. 10B, the processor 120 may set pixels having the same brightness or color attribute in the region of interest 1020 within the partial image 1010 as the text candidate regions 1030 and 1032. For example, the processor 120 may set the text candidate regions 1030 and 1032 by an algorithm that finds regions which remain stable over a range of binarization thresholds (e.g., Maximally Stable Extremal Regions (MSER)), starting from pixels that are relatively brighter or darker than their surroundings within the region of interest 1020 of fig. 10B.
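A full MSER implementation is beyond a short sketch, but the underlying idea, grouping pixels that stand out from their surroundings, can be illustrated in one dimension. The contrast threshold and minimum run length are illustrative assumptions:

```python
def candidate_runs(row, background, min_contrast=50, min_len=2):
    """Find horizontal runs of pixels that stand out from the background.

    A one-dimensional stand-in for MSER-style candidate detection:
    consecutive pixels whose brightness differs from the background by at
    least min_contrast form a run; runs shorter than min_len are dropped.
    Returns a list of (start, end) index pairs, end exclusive.
    """
    runs, start = [], None
    for i, p in enumerate(row):
        if abs(p - background) >= min_contrast:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(row) - start >= min_len:
        runs.append((start, len(row)))
    return runs

# Dark glyph pixels (20) on a light background (200) form one run;
# a lone outlier pixel is too short to count.
row = [200, 200, 20, 20, 20, 200, 200, 30, 200]
print(candidate_runs(row, background=200))  # [(2, 5)]
```

A real MSER detector works on two-dimensional connected components over many thresholds; this sketch only conveys the "stable, high-contrast group of pixels" intuition.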

In operation 905, the electronic device (e.g., the processor 120) may remove outlier regions that lie too far from the text candidate regions. For example, the processor 120 may remove outliers that are unnecessary for enclosing the text based on at least one of the position, size, or length of the text candidate regions. For example, as shown in fig. 10B, the processor 120 may identify the second text candidate region 1032, located on the border of the region of interest 1020, as an outlier and may remove it from the text candidate regions.

In operation 907, the electronic device (e.g., the processor 120) may set the text candidate region from which the outliers have been removed as the text display region for displaying text within the image. For example, when the second text candidate region 1032 located on the boundary of the region of interest 1020 is identified as an outlier and removed, the processor 120 may set the first text candidate region 1030 as the text display region 1040 within the partial image 1010, as shown in fig. 10C. For example, the text display area may indicate the area where text in a first language extracted from the image is translated into a second language and displayed, or otherwise replaced.
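The outlier removal of operations 905 and 907, dropping candidate regions that touch the border of the region of interest, can be sketched as follows. The (x, y, w, h) box representation is an assumption:

```python
def drop_border_candidates(candidates, roi):
    """Remove candidate boxes that touch the border of the region of
    interest, treating them as outliers (e.g., clipped text); the
    surviving boxes become text display regions."""
    rx, ry, rw, rh = roi
    kept = []
    for (x, y, w, h) in candidates:
        touches_border = (x <= rx or y <= ry or
                          x + w >= rx + rw or y + h >= ry + rh)
        if not touches_border:
            kept.append((x, y, w, h))
    return kept

# One interior candidate survives; two border-touching ones are removed,
# mirroring how region 1032 is dropped and region 1030 is kept.
roi = (0, 0, 100, 100)
candidates = [(10, 10, 20, 10), (0, 50, 20, 10), (90, 90, 10, 10)]
print(drop_border_candidates(candidates, roi))  # [(10, 10, 20, 10)]
```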

FIG. 11 is a flow diagram 1100 for displaying translated text corresponding to a partial image in an electronic device, according to some embodiments of the present disclosure. The following description may be of operations to complete the results of the translation service in operations 703 and 705 of fig. 7. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).

Referring to fig. 11, when calibrating text corresponding to an image (e.g., a partial image or the whole image) received from an external electronic device, the electronic device (e.g., the processor 120) may detect the reliability of the corresponding text in operation 1101. For example, the processor 120 may detect the reliability of the corresponding text based on at least one of the accuracy of the text or the position of the text in the image (e.g., the partial image or the whole image) received from the external electronic device. For example, in the case of the first partial image 660 of fig. 6, since the text "banana" in the first partial image 660 is closer to the center of the first partial image 660, the processor 120 may determine that the text has relatively high reliability. For example, in the case of the second partial image 662 of fig. 6, the processor 120 may detect the reliability of each of the texts "apple" and "banana" based on the position of the text in the second partial image 662. In this case, based on the positions of the texts "apple" and "banana" in the second partial image 662, it may be determined that the reliability of the text "apple" is higher than that of the text "banana". For example, the accuracy of the text may be determined based on whether the text maps to text in a particular language.
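The position-based component of the reliability described above can be sketched as a score that decreases as the text box moves away from the image center. The normalization is an illustrative assumption:

```python
def center_reliability(text_box, image_size):
    """Position-based reliability in [0, 1] for extracted text.

    text_box: (x, y, w, h) of the text within the image it was
    extracted from; image_size: (width, height) of that image.
    Text centered in its image scores 1.0; text on the image boundary
    (likely clipped, like "banana" in partial image 662) scores near 0.
    """
    x, y, w, h = text_box
    iw, ih = image_size
    cx, cy = x + w / 2, y + h / 2
    dx = abs(cx - iw / 2) / (iw / 2)   # normalized horizontal offset
    dy = abs(cy - ih / 2) / (ih / 2)   # normalized vertical offset
    return 1.0 - min(1.0, max(dx, dy))
```

A production implementation would combine this with the accuracy term (whether the text maps to a valid word in the recognized language) mentioned above.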

In operation 1103, the electronic device (e.g., the processor 120) may determine whether there is another text received from the external electronic device at a previous time in an area for displaying the text received from the external electronic device in the first image (preview image). For example, the processor 120 may determine whether there is text corresponding to another image (e.g., a partial image or a whole image) received from the external electronic device at a previous time. When there is text corresponding to another image received from the external electronic device at a previous time, the processor 120 may determine whether a display area of the text corresponding to the other image in the first image (preview image) overlaps at least a portion of the display area of the text of which reliability was detected in operation 1101. For example, when there is no text corresponding to another image received from the external electronic device at a previous time, or when a display area of the text of the other image does not overlap at least a portion of the display area of the text in the first image, the processor 120 may determine that there is no other text in the area for displaying the text received from the external electronic device in the first image. For example, when a display area of text in the first image corresponding to another image received from the external electronic device at a previous time overlaps at least a portion of the display area of the text, the processor 120 may determine that another text exists in the area for displaying the text received from the external electronic device in the first image.

When there is no text corresponding to another image in the area for displaying text received from the external electronic device in the first image (preview image) (e.g., no in operation 1103), the electronic device (e.g., the processor 120) may display the text received from the external electronic device on that area in operation 1105. For example, the processor 120 may render an image to display text translated into a different language, received from the external electronic device, on the text display area in the preview image set through operations 901 to 907. For example, when text (translated into text in a different language) corresponding to the second partial image 662 of fig. 6 is received at the first time, the processor 120 may render the preview image to display the translated text "사과" (apple in Korean) and "버터" (butter in Korean) 820 on the display area of the text corresponding to the second partial image 662 in the preview image 810, as shown in fig. 8A.

When there is text corresponding to another image in the area for displaying text received from the external electronic device in the first image (preview image) (e.g., yes in operation 1103), the electronic device (e.g., the processor 120) may determine whether the reliability of the text received from the external electronic device is higher than the reliability of the other text in operation 1107. For example, when text corresponding to the first partial image 660 of fig. 6 (translated into text in a different language) is received at the second time, the processor 120 may compare the reliability of the text corresponding to the first partial image 660 ("바나나" (banana in Korean)) with the reliability of the text corresponding to the second partial image 662 ("버터" (butter in Korean)). For example, since the text corresponding to the first partial image 660 ("바나나" (banana in Korean)) is located at the center of the first partial image 660, it may have relatively higher reliability than the text corresponding to the second partial image 662 ("버터" (butter in Korean)), which is located on the boundary of the second partial image 662.

When the reliability of the text received from the external electronic device is higher than the reliability of the text corresponding to the other image (e.g., yes in operation 1107), the electronic device (e.g., the processor 120) may refine the text of the text display area to the text received from the external electronic device in operation 1109. For example, when the text corresponding to the first partial image 660 ("바나나" (banana in Korean)) is determined to be more reliable, the processor 120 may render the preview image so that the text "버터" (butter in Korean) displayed through the display device 160 as shown in fig. 8A is refined to "바나나" (banana in Korean), as shown in fig. 8B, and may display the refined text.

When the reliability of the text received from the external electronic device is lower than or equal to the reliability of the other text (e.g., no in operation 1107), the electronic device (e.g., the processor 120) may keep the text corresponding to the other image displayed on the text display area of the first image (preview image).

According to an embodiment, the electronic device 101 (e.g., the processor 120) may maintain the text displayed on the first display region of the preview image when the text of the first display region received at the first time is the same as the text of the first display region received at the second time. For example, when the text of the first display region received at the first time is the same as the text of the first display region received at the second time, operations 1107 to 1109 may be omitted.

According to an embodiment, the electronic device 101 (e.g., the processor 120) may continuously collect motion information of the electronic device 101 or an external object included in the first image from the time when the first image is obtained by the camera 180. The electronic device 101 may end the translation service before the translation service is completed based on the motion information of the electronic device 101 or the external object included in the first image. For example, when the motion of the electronic device 101 or an external object included in the first image falls outside of the reference range, the processor 120 may determine that it is not possible to display text translated into a different language. Thus, the processor 120 may stop the translation service.

The electronic device and the operating method thereof according to some embodiments transmit an image obtained through a camera to an external electronic device (e.g., a server), calibrate a display position of a translated text received from the external electronic device, or compensate for a difference between an image transmitted to the external electronic device (e.g., the server) for a translation service and an image displayed on a display, and display the translated text. Accordingly, the text translated by the external electronic device can be smoothly displayed on the image obtained by the camera.

The electronic device and the operating method thereof according to some embodiments selectively transmit an image for a translation service to an external electronic device (e.g., a server) based on at least one of quality of an image obtained through a camera (e.g., distribution of feature points) or motion information of the electronic device. Therefore, the number of translations using the external electronic device can be reduced, and thus the consumption of network resources can be reduced.

The electronic device and the operation method according to some embodiments transmit at least one partial image and a whole image corresponding to an image obtained by a camera to an external electronic device (e.g., a server). Accordingly, it is possible to reduce the translation delay caused by computation at the external electronic device and by transmission over the network.

Certain embodiments of the present disclosure may be implemented by software including instructions stored in a machine (e.g., computer) readable storage medium (e.g., the memory 130 of FIG. 1). The machine is a device capable of retrieving stored instructions from a storage medium and operating in accordance with the retrieved instructions, and may include an electronic device 101, 102, or 104 or a server 108. When the instructions are executed by a processor (e.g., the processor 120), the processor may perform the functions corresponding to the instructions directly, or may use other elements under the control of the processor to perform the functions corresponding to the instructions. The instructions may include code generated or executed by a compiler or interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" simply means that the storage medium is tangible and does not include signals, regardless of whether data is semi-permanently or temporarily stored in the storage medium.

Methods according to certain embodiments disclosed herein may be included and provided in a computer program product. The computer program product may be used as a product for conducting a transaction between a seller and a buyer. For example, the computer program product may be a downloadable application or computer program for a transaction between a seller and a purchaser. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or may be distributed online via an application store (e.g., PlayStore™). If distributed online, at least a portion of the computer program product may be temporarily generated or at least temporarily stored in a machine-readable storage medium, such as a memory of a relay server, a server of an application store, or a manufacturer server.

Each element (e.g., module or program) according to some embodiments may comprise a single entity or multiple entities, and in some embodiments some of the above elements may be omitted, or other sub-elements may be added. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into a single element, and the integrated element may still perform the functions performed by each corresponding element in the same or a similar manner as before the corresponding element was integrated. Operations performed by a module, programming module, or other element according to some embodiments may run sequentially, in parallel, repeatedly, or in a heuristic manner. At least some of the operations may be performed according to another order, may be omitted, or may include other operations as well.

The present disclosure has been described with reference to various example embodiments thereof. It will be understood by those skilled in the art that the present disclosure may be embodied in modified forms without departing from the essential characteristics thereof. The disclosed embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. The scope of the present disclosure is defined not by the detailed description but by the appended claims, and all differences within the scope will be construed as being included in the present disclosure.
