Method of providing text translation management data related to application and electronic device thereof
Abstract: This technology, "Method of providing text translation management data related to application and electronic device thereof", was created on 2019-02-21 by 李始炯, 金范洙, 金宣廷, 金树完, 金在贤, 宋仁善, 李贤奭, and 崔智焕. Certain embodiments of the present disclosure relate to an apparatus and method for translating text included in an image by using an external electronic device in an electronic device. One method comprises the following steps: displaying a picture on a display, the picture including an object bearing text at a location within the picture; extracting the text; generating another text from the extracted text; and automatically overlaying the other text over the object in another picture on the display, the other picture including the object at another location within the other picture.
1. An electronic device, comprising:
a camera;
a display;
a transceiver;
a memory; and
one or more processors configured to execute a program stored in the memory,
wherein the one or more processors are configured to:
display, by the display, a first image including one or more external objects obtained by using the camera;
identify at least one external object corresponding to text among the one or more external objects during at least a portion of a time when the first image is displayed;
transmit, by the transceiver, a partial image of the first image corresponding to the at least one external object to an external electronic device;
receive, by the transceiver, corresponding text corresponding to the text from the external electronic device;
identify a motion of the electronic device or of the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and
display the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.
2. The electronic device of claim 1, wherein the corresponding text comprises text of a first language recognized based on image recognition of the partial image, or text of a second language different from the first language translated from the text of the first language, and
wherein the one or more processors are configured to display, through the display, the text of the first language or the text of the second language on the partial image.
3. The electronic device of claim 1, wherein the one or more processors are configured to:
receive region information corresponding to the text from the external electronic device and set a region of interest in the first image based on the received region information;
detect a candidate region including pixels that are similar in terms of at least one of brightness or color, based on at least a comparison between pixels included in the region of interest or a region adjacent to the region of interest; and
calibrate the region information based on the candidate region.
4. The electronic device of claim 3, wherein the one or more processors are configured to:
determine an additional attribute related to the corresponding text based on at least some regions of the first image corresponding to the calibrated region information; and
display text information on the at least one external object based on the additional attribute, and
wherein the additional attribute comprises at least one of a color, a size, or a font of the text, or a background color.
5. The electronic device of claim 1, wherein the one or more processors are configured to:
when a plurality of corresponding texts are received, compare reliabilities of the plurality of corresponding texts, and
display, on the at least one external object, any one of the plurality of corresponding texts selected based on a result of the comparison.
6. The electronic device of claim 5, wherein the one or more processors are configured to detect the reliabilities of the plurality of corresponding texts based on locations of the plurality of corresponding texts in the partial image.
7. A method of an electronic device, comprising:
displaying, by a display, a first image obtained by using a camera of the electronic device;
transmitting at least one partial image and a whole image of the first image to an external electronic device through a transceiver of the electronic device;
when text information corresponding to the partial image or the whole image is received from the external electronic device through the transceiver, refining a region for displaying the text information based on the partial image or the whole image;
calibrating a position of the text information based on motion information of the electronic device or of at least one external object included in the first image; and
displaying the text information on a second image based on the refined region information and the position information.
8. The method of claim 7, wherein the text information comprises text of a first language recognized based on image recognition of the at least one partial image, or text of a second language different from the first language translated from the text of the first language.
9. The method of claim 7, further comprising:
determining whether to provide a translation service based on at least one of motion information of the electronic device or of at least one external object included in the first image, or a quality of the first image; and
transmitting the at least one partial image and the whole image of the first image to the external electronic device through the transceiver when it is determined that the translation service is to be provided.
10. The method of claim 7, further comprising:
extracting the at least one partial image from the first image based on at least one of a history of using a translation service or distribution information of feature points included in the first image.
11. The method of claim 7, further comprising:
setting a region of interest in the partial image based on region information of the text information received from the external electronic device;
detecting a candidate region in the region of interest, the candidate region comprising at least one pixel that is similar in terms of at least one of a brightness or a color; and
refining the region information of the text information based on the candidate region.
12. The method of claim 11, further comprising:
determining an additional attribute related to the text information based on at least some regions of the first image corresponding to the refined region information of the text information; and
displaying the text information to overlap at least a portion of the second image based on the additional attribute,
wherein the additional attribute comprises at least one of a color, a size, or a font of the text, or a background color.
13. The method of claim 7, further comprising:
when a plurality of pieces of text information corresponding to at least some regions of the first image are received, comparing reliabilities of the plurality of pieces of text information; and
selecting, based on a result of the comparison, any one of the plurality of pieces of text information as the text information corresponding to the at least some regions.
14. The method of claim 13, further comprising:
detecting the reliability of the text information based on a position of the text information in the partial image.
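Claims 3 and 11 above detect a candidate region of pixels that are similar in brightness or color around a region of interest, and then refine the text's region information from it. A minimal sketch of one way such a detection could work is a 4-connected flood fill over a grayscale image; the function names, the `tolerance` threshold, and the use of a single seed pixel are illustrative assumptions, not the claimed implementation:

```python
from collections import deque

def candidate_region(pixels, seed, tolerance=10):
    """Flood-fill from `seed`, collecting 4-connected pixels whose
    brightness is within `tolerance` of the seed pixel's brightness."""
    h, w = len(pixels), len(pixels[0])
    sy, sx = seed
    base = pixels[sy][sx]
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(pixels[ny][nx] - base) <= tolerance):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return region

def refine_bounds(region):
    """Tight bounding box (top, left, bottom, right) around the region,
    standing in for the 'refined region information' of the claims."""
    ys = [y for y, _ in region]
    xs = [x for _, x in region]
    return min(ys), min(xs), max(ys), max(xs)
```

In practice the seed would come from the region information received from the external electronic device, and the similarity test could weigh both brightness and color as the claims allow.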
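Claims 5, 6, 13, and 14 above select among several recognized texts by comparing reliabilities, with claim 6 tying reliability to each text's location in the partial image. The claims leave the scoring open; one purely illustrative proxy is centrality within the partial image (names and metric below are assumptions):

```python
def pick_most_reliable(candidates, image_size):
    """Among several recognized texts for the same object, keep the one
    whose centre position lies closest to the centre of the partial image.

    candidates: list of (text, (x, y)) centre positions.
    image_size: (width, height) of the partial image.
    """
    cx, cy = image_size[0] / 2, image_size[1] / 2

    def score(item):
        _, (x, y) = item
        # Squared distance from the image centre: smaller = more central,
        # treated here as more reliable.
        return (x - cx) ** 2 + (y - cy) ** 2

    return min(candidates, key=score)[0]
```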
Technical Field
Certain embodiments of the present disclosure relate to an apparatus and method for providing a translation service of text included in an image obtained through a camera in an electronic device.
Background
With the development of information and communication technologies and semiconductor technologies, various types of electronic devices are being developed into multimedia devices that provide various multimedia services. For example, the multimedia service may include at least one of a cellular phone service, a Voice Over IP (VOIP) service, a messaging service, a broadcasting service, a wireless internet service, a camera, an electronic payment, or a media playback.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Disclosure of Invention
Solution to the problem
The electronic device may provide various services by using a camera to enhance user convenience. For example, the electronic device may search for and output information about a product photographed by the camera. It may also provide a translation service for text included in images obtained by the camera.
The electronic device may translate text included in an image obtained through the camera into a language desired by the user, and may display the result. For example, the electronic device may recognize text included in a preview image obtained by the camera through Optical Character Recognition (OCR) on the preview image. The electronic device may translate the recognized text into the desired language through a translation engine, and may render the translated text into the text region of the preview image.
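The preview-translation flow described above (recognize text, translate it, render it over the text region) can be sketched as a small pipeline. The `ocr` and `translate` callables below stand in for the device's OCR engine and translation engine; both, along with the box format, are hypothetical illustrations rather than the disclosed implementation:

```python
def translate_preview(frame, ocr, translate, target_lang="en"):
    """Run OCR on a preview frame, translate each recognized snippet,
    and return (box, translated_text) pairs to render over the frame.

    ocr(frame)            -> iterable of (box, text), box = (x, y, w, h)
    translate(text, lang) -> translated string
    Both callables are injected stand-ins for real engines.
    """
    overlays = []
    for box, text in ocr(frame):
        translated = translate(text, target_lang)
        overlays.append((box, translated))
    return overlays
```

A renderer would then draw each translated string over its box in the preview, matching the original text's color, size, and background where possible.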
When the electronic device runs the OCR and translation engines itself, the performance of the translation service may be limited. To enhance performance, the electronic device may download a database and a translation engine from an external device for translation of the text and store them locally, but this may consume a large amount of the electronic device's memory.
Certain embodiments of the present disclosure provide an apparatus and method for providing a translation service of text included in an image obtained through a camera by using an external electronic device in an electronic device.
According to some embodiments of the present disclosure, an electronic device includes a camera, a display, a transceiver, a memory, and a processor, wherein the processor is configured to: display, by the display, a first image including one or more external objects obtained by using the camera; identify at least one external object corresponding to text among the one or more external objects during at least a portion of a time when the first image is displayed; transmit, by the transceiver, a partial image of the first image corresponding to the at least one external object to an external electronic device; receive, by the transceiver, corresponding text corresponding to the text from the external electronic device; identify a motion of the electronic device or of the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and display the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.
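While the partial image is away at the external electronic device, the device or the object may move, so the embodiment compensates for that motion before overlaying the corresponding text. A minimal sketch, assuming per-frame (dx, dy) displacements are available from a motion sensor or feature tracking during the round trip (the function name and input format are illustrative):

```python
def compensate(text_anchor, frame_motion):
    """Shift the text anchor by the motion accumulated while the partial
    image was being recognized and translated remotely.

    text_anchor:  (x, y) position of the text in the first image.
    frame_motion: iterable of per-frame (dx, dy) displacements observed
                  between sending the partial image and receiving the
                  corresponding text.
    Returns the (x, y) at which to overlay the text in the second image.
    """
    x, y = text_anchor
    for dx, dy in frame_motion:
        x += dx
        y += dy
    return x, y
```

A real implementation would likely also handle rotation and scale, and clamp the result to the visible frame.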
According to some embodiments of the present disclosure, a method of operating an electronic device includes: displaying a first image on a display of the electronic device, the first image obtained using a camera operatively connected to the electronic device and including one or more external objects; identifying at least one external object corresponding to text among the one or more external objects during at least a portion of a time when the first image is displayed; transmitting a partial image of the first image corresponding to the at least one external object to an external electronic device; receiving corresponding text corresponding to the text from the external electronic device; identifying a motion of the electronic device or of the at least one external object generated during a time when the partial image is transmitted and the corresponding text is received from the external electronic device; and displaying the corresponding text on the at least one external object by compensating for the motion while displaying a second image including the at least one external object through the display.
According to some embodiments of the present disclosure, an electronic device includes a camera, a display, a transceiver, a memory, and a processor, wherein the processor is configured to: display, by the display, a first image obtained by using the camera; transmit, by the transceiver, at least one partial image and a whole image of the first image to an external electronic device; when text information corresponding to the partial image or the whole image is received from the external electronic device through the transceiver, refine a region for displaying the text information based on the partial image or the whole image; calibrate a position of the text information based on motion information of the electronic device or of at least one external object included in the first image; and display the text information on a second image based on the refined region information and the position information.
According to some embodiments, there is a method for annotating a picture, the method comprising: displaying a picture including a text-bearing object at a location within the picture; extracting the text; generating another text from the extracted text; and displaying the other text on the object in another picture that includes the object at another location within the other picture.
Drawings
The above and other aspects, features and advantages of certain embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram of an electronic device for managing data related to applications in a network environment, in accordance with certain embodiments of the present disclosure;
FIG. 2 is a block diagram illustrating a camera according to some embodiments of the present disclosure;
FIG. 3 is a flow diagram for providing translation services in an electronic device using an external electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 4 is a flow diagram for selectively sending images for translation services in an electronic device to an external electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 5 is a flow chart for extracting a partial image of an image obtained by a camera in an electronic device according to some embodiments of the present disclosure;
FIG. 6 is a view showing a configuration of an image obtained by a camera according to some embodiments of the present disclosure;
FIG. 7 is a flow diagram for displaying translated text received from an external electronic device in an electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 8A is a view showing a screen of a translation service for images obtained by a camera, according to some embodiments of the present disclosure;
FIG. 8B is a view of a screen showing a translation service for images obtained by a camera, in accordance with certain embodiments of the present disclosure;
FIG. 8C is a view showing a screen of a translation service for an image obtained by a camera according to some embodiments of the present disclosure;
FIG. 8D is a view of a screen showing a translation service for images obtained by a camera, in accordance with certain embodiments of the present disclosure;
FIG. 8E is a view of a screen showing a translation service for images obtained by a camera, in accordance with certain embodiments of the present disclosure;
FIG. 8F is a view of a screen showing a translation service for images obtained by a camera, in accordance with certain embodiments of the present disclosure;
FIG. 9 is a flow diagram for setting a display position of translated text in an electronic device, according to some embodiments of the present disclosure;
FIG. 10A is a view showing a configuration for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure;
FIG. 10B is a view showing a configuration for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure;
FIG. 10C is a view showing a configuration for setting a display position of translated text in an external electronic device according to some embodiments of the present disclosure; and
FIG. 11 is a flow diagram for displaying translated text corresponding to a partial image in an electronic device, according to some embodiments of the present disclosure.
Detailed Description
Certain embodiments of the present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail, since they would obscure the disclosure with unnecessary detail. Further, the terms used herein are defined in consideration of the functions of the present disclosure, and may vary according to the intention or practice of the user or operator. Therefore, the terms used herein should be understood based on the descriptions made herein.
FIG. 1 is a block diagram illustrating an electronic device 101 for managing data related to applications in a network environment 100, in accordance with some embodiments. Referring to FIG. 1, the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network) or with an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 101 may communicate with the electronic device 104 via the server 108. According to an embodiment, the electronic device 101 may include a processor 120, a memory 130, an input device 150, a sound output device 155, a display device 160 (touchscreen display), an audio module 170, a sensor module 176 (motion sensor), an interface 177, a haptic module 179, a camera 180, a power management module 188, a battery 189, a communication module 190 (wireless communication circuitry that may include a transmitter/receiver (transceiver), a modulator/demodulator (MODEM), and an oscillator, among others), a Subscriber Identity Module (SIM) 196, or an antenna module 197. In some embodiments, at least one of the components (e.g., the display device 160 or the camera 180) may be omitted from the electronic device 101, or one or more other components may be added to the electronic device 101. In some embodiments, some of the components may be implemented as a single integrated circuit. For example, the sensor module 176 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented to be embedded in the display device 160 (e.g., a display). The term "transceiver" may refer to a single transmitter/receiver, or a group of transmitters and receivers.
The processor 120 may run, for example, software (e.g., the program 140) to control at least one other component (e.g., a hardware component or a software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or calculations. The term "processor," although used in the singular, should be understood to mean one or more processors. According to one embodiment, as at least part of the data processing or calculation, processor 120 may load commands or data received from another component (e.g., sensor module 176 or communication module 190) into volatile memory 132, process the commands or data stored in volatile memory 132, and store the resulting data in non-volatile memory 134. According to an embodiment, the processor 120 may include a main processor 121 (e.g., a Central Processing Unit (CPU) or an Application Processor (AP)) and an auxiliary processor 123 (e.g., a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a sensor hub processor, or a Communication Processor (CP)) that is operatively independent of or in conjunction with the main processor 121. Additionally or alternatively, the auxiliary processor 123 may be adapted to consume less power than the main processor 121, or be adapted specifically for a specified function. The auxiliary processor 123 may be implemented separately from the main processor 121 or as part of the main processor 121.
The auxiliary processor 123 may control at least some of the functions or states associated with at least one of the components of the electronic device 101 (e.g., the display device 160, the sensor module 176, or the communication module 190) when the main processor 121 is in an inactive (e.g., sleep) state, or the auxiliary processor 123 may control at least some of the functions or states associated with at least one of the components of the electronic device 101 (e.g., the display device 160, the sensor module 176, or the communication module 190) with the main processor 121 when the main processor 121 is in an active state (e.g., running an application). According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera 180 or the communication module 190) that is functionally related to the auxiliary processor 123.
The memory 130 may store various data used by at least one component of the electronic device 101 (e.g., the processor 120 or the sensor module 176). The various data may include, for example, software (e.g., program 140) and input data or output data for commands associated therewith. The memory 130 may include volatile memory 132 or non-volatile memory 134.
The program 140 may be stored in the memory 130 as software, and the program 140 may include, for example, an Operating System (OS)142, middleware 144, or an application 146.
The input device 150 may receive commands or data from outside of the electronic device 101 (e.g., a user) to be used by other components of the electronic device 101, such as the processor 120. The input device 150 may include, for example, a microphone, a mouse, or a keyboard.
The sound output device 155 may output a sound signal to the outside of the electronic device 101. The sound output device 155 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a record, and the receiver may be used for incoming calls. Depending on the embodiment, the receiver may be implemented separate from the speaker, or as part of the speaker.
Display device 160 may visually provide information to the exterior of electronic device 101 (e.g., a user). The display device 160 may include, for example, a display, a holographic device, or a projector, and control circuitry for controlling a respective one of the display, holographic device, and projector. According to embodiments, the display device 160 may include touch circuitry adapted to detect a touch or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of a force caused by a touch.
The audio module 170 may convert sound into an electrical signal and vice versa. According to embodiments, the audio module 170 may obtain sound via the input device 150 or output sound via the sound output device 155 or a headset of an external electronic device (e.g., the electronic device 102) directly (e.g., wired) connected or wirelessly connected with the electronic device 101.
The sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an environmental state (e.g., state of a user) external to the electronic device 101 and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 176 may include, for example, a gesture sensor, a gyroscope sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an Infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The interface 177 may support one or more particular protocols to be used to directly (e.g., wired) or wirelessly connect the electronic device 101 with an external electronic device (e.g., the electronic device 102). According to an embodiment, the interface 177 may include, for example, a high-definition multimedia interface (HDMI), a Universal Serial Bus (USB) interface, a Secure Digital (SD) card interface, or an audio interface.
The connection end 178 may include a connector via which the electronic device 101 may be physically connected with an external electronic device (e.g., the electronic device 102). According to an embodiment, the connection end 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 179 may convert the electrical signal into a mechanical stimulus (e.g., vibration or motion) or an electrical stimulus that may be recognized by the user via his sense of touch or kinesthesia. According to an embodiment, the haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulator.
The camera 180 may capture still images or moving images. According to an embodiment, the camera 180 may include one or more lenses, an image sensor, an image signal processor, or a flash.
The power management module 188 may manage power to the electronic device 101. According to an embodiment, the power management module 188 may be implemented as at least part of a Power Management Integrated Circuit (PMIC), for example.
The battery 189 may power at least one component of the electronic device 101. According to an embodiment, the battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
The communication module 190 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and performing communication via the established communication channel. The communication module 190 may include one or more communication processors capable of operating independently of the processor 120 (e.g., an Application Processor (AP)) and supporting direct (e.g., wired) communication or wireless communication. According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a Global Navigation Satellite System (GNSS) communication module) or a wired communication module 194 (e.g., a Local Area Network (LAN) communication module or a Power Line Communication (PLC) module). A respective one of these communication modules may communicate with external electronic devices via a first network 198 (e.g., a short-range communication network such as bluetooth, wireless fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or a second network 199 (e.g., a long-range communication network such as a cellular network, the internet, or a computer network (e.g., a LAN or Wide Area Network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) that are separate from one another. The wireless communication module 192 may identify and authenticate the electronic device 101 in a communication network, such as the first network 198 or the second network 199, using subscriber information, such as an International Mobile Subscriber Identity (IMSI), stored in the subscriber identity module 196.
The antenna module 197 may transmit signals or power to or receive signals or power from outside of the electronic device 101 (e.g., an external electronic device). According to an embodiment, the antenna module 197 may include one or more antennas, and thus, at least one antenna suitable for a communication scheme used in a communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas by, for example, the communication module 190 (e.g., the wireless communication module 192). Signals or power may then be transmitted or received between the communication module 190 and the external electronic device via the selected at least one antenna.
At least some of the above-described components may be mutually connected and may communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), or Mobile Industry Processor Interface (MIPI)).
According to an embodiment, commands or data may be sent or received between the electronic device 101 and the external electronic device 104 via the server 108 connected with the second network 199. Each of the electronic device 102 and the electronic device 104 may be the same type of device as the electronic device 101 or a different type of device from the electronic device 101. According to embodiments, all or some of the operations to be performed at the electronic device 101 may be performed at one or more of the external electronic device 102, the external electronic device 104, or the server 108. For example, if the electronic device 101 should automatically perform a function or service or should perform a function or service in response to a request from a user or another device, the electronic device 101 may request the one or more external electronic devices to perform at least part of the function or service instead of or in addition to performing the function or service. The one or more external electronic devices that received the request may perform the requested at least part of the functions or services or perform another function or another service related to the request and transmit the result of the execution to the electronic device 101. The electronic device 101 may provide the result as at least a partial reply to the request with or without further processing of the result. To this end, for example, cloud computing technology, distributed computing technology, or client-server computing technology may be used.
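The offloading pattern described above (perform a function locally, or request one or more external electronic devices to perform at least part of it and use the transmitted result) can be sketched as a local call with a remote-first fallback. Here `remote` is a hypothetical callable wrapping the network round trip; the names and the bare-except-style error handling are illustrative only:

```python
def run_with_offload(task, payload, remote=None):
    """Prefer executing `task` on an external electronic device; fall back
    to executing it locally when no remote is available or the request fails.

    task:    local callable implementing the function or service.
    payload: input to the function (e.g., a partial image).
    remote:  optional callable remote(task, payload) wrapping the
             network request to the external device.
    """
    if remote is not None:
        try:
            return remote(task, payload)
        except Exception:
            # e.g. timeout or connection error: fall through to local
            pass
    return task(payload)
```

The electronic device may also post-process the remote result before providing it as the reply, as the passage notes.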
An electronic device according to some embodiments may be one of various types of electronic devices. The electronic device may comprise, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to the embodiments of the present disclosure, the electronic devices are not limited to those described above.
It should be understood that certain embodiments of the present disclosure and the terms used therein are not intended to limit the technical features set forth herein to particular embodiments, and include various changes, equivalents, or alternatives for the corresponding embodiments. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C" may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd," or "first" and "second" may be used to simply distinguish one element from another, and do not limit the elements in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively," as "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used herein, the term "module" may include units implemented in hardware, software, or firmware, and may be used interchangeably with other terms (e.g., "logic," "logic block," "portion," or "circuitry"). A module may be a single integrated component adapted to perform one or more functions or a minimal unit or portion of the single integrated component. For example, according to an embodiment, the modules may be implemented in the form of Application Specific Integrated Circuits (ASICs).
Certain embodiments set forth herein may be implemented as software (e.g., the program 140) comprising one or more instructions stored in a storage medium (e.g., the internal memory 136 or the external memory 138) that is readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke and execute at least one of the one or more instructions stored in the storage medium, with or without the use of one or more other components. This enables the machine to be operated to perform at least one function in accordance with the at least one invoked instruction. The one or more instructions may include code generated by a compiler or code capable of being executed by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave), but the term does not distinguish between data being semi-permanently stored in the storage medium and data being temporarily stored in the storage medium.
According to embodiments, methods according to certain embodiments of the present disclosure may be included and provided in a computer program product. The computer program product may be used as a product for conducting a transaction between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium, such as a compact disc read only memory (CD-ROM), or may be distributed (e.g., downloaded or uploaded) online via an application store (e.g., a Play store), or may be distributed (e.g., downloaded or uploaded) directly between two user devices (e.g., smartphones). At least part of the computer program product may be temporarily generated if it is published online, or at least part of the computer program product may be at least temporarily stored in a machine readable storage medium, such as a memory of a manufacturer's server, a server of an application store, or a forwarding server.
According to some embodiments, each of the above components (e.g., modules or programs) may comprise a single entity or multiple entities. According to certain embodiments, one or more of the above components may be omitted, or one or more other components may be added. Alternatively or additionally, multiple components (e.g., modules or programs) may be integrated into a single component. In such a case, according to some embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as the corresponding one of the plurality of components performed the one or more functions prior to integration. Operations performed by a module, program, or another component may, according to some embodiments, be performed sequentially, in parallel, repeatedly, or in a heuristic manner, or one or more of the operations may be performed in a different order or omitted, or one or more other operations may be added.
Fig. 2 is a block diagram 200 illustrating the camera 180 according to some embodiments. Referring to fig. 2, the camera 180 may include a
The flash 220 may emit light that is used to enhance light reflected from an object. According to an embodiment, the flash 220 may include one or more Light Emitting Diodes (LEDs) (e.g., Red Green Blue (RGB) LEDs, white LEDs, Infrared (IR) LEDs, or Ultraviolet (UV) LEDs) or a xenon lamp. The image sensor 230 may acquire an image corresponding to an object by converting light emitted or reflected from the object and transmitted through the
The image stabilizer 240 may move the image sensor 230 or at least one lens included in the
The memory 250 may at least temporarily store at least a portion of the image acquired via the image sensor 230 for a subsequent image processing task. For example, if multiple images are captured in quick succession or image capture is delayed due to shutter lag, the acquired raw images (e.g., Bayer-pattern images or high-resolution images) may be stored in the memory 250, and their corresponding copy images (e.g., low-resolution images) may be previewed via the display device 160. Then, if a specified condition is satisfied (e.g., by a user input or a system command), at least a portion of the raw image stored in the memory 250 may be retrieved and processed by, for example, the
The
According to an embodiment, the electronic device 101 may include multiple cameras 180 having different attributes or functions. In this case, at least one camera 180 of the plurality of cameras 180 may form a wide-angle camera, for example, and at least another camera 180 of the plurality of cameras 180 may form a telephoto camera. Similarly, at least one camera 180 of the plurality of cameras 180 may form a front-facing camera, for example, and at least another camera 180 of the plurality of cameras 180 may form a rear-facing camera.
According to an embodiment, the processor 120 may control the communication module 190 to transmit an image (preview image) obtained through the camera 180 to an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) for a translation service. For example, the processor 120 may determine whether to transmit the image obtained by the camera 180 based on at least one of the quality of the image or motion information obtained by the sensor module 176. For example, when the sharpness (e.g., blur state) of the image obtained by the camera 180 is less than or equal to a reference value, the processor 120 may determine that there is a limitation on the translation of the image. That is, when the sharpness of the image is less than or equal to the reference value, the processor 120 may determine not to transmit the image to the external electronic device. For example, when the number of feature points of the image obtained by the camera 180 is less than a reference number, or when the feature points are spread over more than a reference distance, the processor 120 may determine that there is a limitation on the translation of the image. The reference distance may include a maximum distance between feature points that can form text. For example, when the motion of the electronic device 101 falls outside a reference range based on the motion information obtained by the sensor module 176, the processor 120 may determine that there is a limitation on the translation of the image obtained by the camera 180.
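As a rough sketch, the transmission decision described above can be expressed as a single predicate over image sharpness, feature-point distribution, and device motion. All names and threshold values below are hypothetical illustrations, not identifiers from the disclosure:

```python
def should_transmit(sharpness, feature_points, motion_magnitude,
                    min_sharpness=0.5, min_features=10,
                    max_feature_spread=100.0, max_motion=1.0):
    """Decide whether a preview frame is worth sending for translation."""
    # A blurry frame cannot be reliably recognized.
    if sharpness < min_sharpness:
        return False
    # Too few feature points suggests no text is present.
    if len(feature_points) < min_features:
        return False
    # Feature points spread wider than any plausible text block.
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    if max(max(xs) - min(xs), max(ys) - min(ys)) > max_feature_spread:
        return False
    # Excessive device motion (reported by the sensor module) blurs capture.
    return motion_magnitude <= max_motion
```

Each failed check corresponds to one of the "limitation on translation" cases described above.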
For example, when it is determined that an image obtained through the camera 180 is to be transmitted to the external electronic device, the processor 120 may control the communication module 190 to transmit at least one partial image and a whole image corresponding to the image to the external electronic device. For example, the processor 120 may extract the at least one partial image based on at least one of a history of using the translation service or a distribution of feature points of the image. The history of using the translation service may be represented as at least one text-region probability model, and may include region information about areas of the display device 160 on which text of an external object has previously been photographed and displayed for the translation service in the electronic device 101. For example, the processor 120 may control the communication module 190 to transmit the at least one partial image and the whole image corresponding to the image to different external electronic devices.
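One way to realize the partial-image selection above is to keep the historical text regions as candidate crops and retain only those that overlap a cluster of feature points. The function below is a hypothetical sketch; the region tuple layout and the min_points threshold are assumptions, not part of the disclosure:

```python
def select_partial_regions(prior_regions, feature_points, min_points=5):
    """Pick historical text regions (x, y, w, h) containing enough features.

    prior_regions approximates the text-region probability model derived
    from the translation-service usage history.
    """
    selected = []
    for (x, y, w, h) in prior_regions:
        # Count feature points falling inside this candidate region.
        count = sum(1 for (px, py) in feature_points
                    if x <= px < x + w and y <= py < y + h)
        if count >= min_points:
            selected.append((x, y, w, h))
    return selected
```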
According to an embodiment, the processor 120 may calibrate the display area of text translated into a different language and received, through the communication module 190, from an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108). For example, the processor 120 may receive, through the communication module 190, text included in an image (a partial image or a whole image) transmitted to the external electronic device, together with region information of the text (e.g., the position, size, and length of the text). The processor 120 may set a region of interest (ROI) corresponding to the text in the image transmitted to the external electronic device based on the region information of the text. The processor 120 may set pixels having the same brightness or color attribute in the region of interest as a text candidate region. The processor 120 may set the text display area by removing outliers, i.e., features that do not fit the text, from the text candidate region. For example, an outlier may include at least one pixel located on a boundary of the region of interest in the text candidate region. For example, the text may include text recognized by an optical character recognition method or text translated by a translation engine.
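The candidate-pixel and outlier-removal steps can be sketched as follows, assuming a grayscale ROI; treating the most common brightness as the shared attribute and dropping boundary pixels are simplifications of the described procedure, not the disclosed implementation:

```python
from collections import Counter

def text_display_area(roi, tolerance=10):
    """Return (row, col) pixels forming the text display area of an ROI.

    roi is a 2D list of grayscale values. Pixels close to the dominant
    brightness form the text candidate region; candidates sitting on the
    ROI boundary are removed as outliers.
    """
    h, w = len(roi), len(roi[0])
    dominant, _ = Counter(v for row in roi for v in row).most_common(1)[0]
    candidates = {(r, c) for r in range(h) for c in range(w)
                  if abs(roi[r][c] - dominant) <= tolerance}
    # Outlier removal: discard candidate pixels on the ROI boundary.
    return {(r, c) for (r, c) in candidates
            if 0 < r < h - 1 and 0 < c < w - 1}
```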
According to an embodiment, the processor 120 may determine additional attributes of the text from its display area in order to display the text translated into a different language, received from the external electronic device, on the corresponding area. For example, the additional attributes of the text may include at least one of the color of the text, the background color, or the size or font of the text. For example, the processor 120 may identify brightness or color attributes of pixels included in the display region of the text, and may distinguish the text region from the background region. The processor 120 may set at least one of the color, size, or font of the text translated into the different language based on at least one of the size of the text region or the color attributes of pixels included in the text region. The processor 120 may set the color of the region other than the text in the display region (the background color) based on the color attributes of pixels included in the background region. For example, the background color may be set based on the average of the color attributes of pixels included in the background region. For example, the background color may be set based on the most common color attribute among the pixels included in the background region.
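The averaging rule for the background color might look like this; the flat pixel list and index-based mask are illustrative assumptions:

```python
def background_color(pixels, text_mask):
    """Average RGB of pixels outside the text mask (the background).

    pixels is a list of (r, g, b) tuples; text_mask is the set of indices
    belonging to the text region.
    """
    background = [p for i, p in enumerate(pixels) if i not in text_mask]
    n = len(background)
    # Integer-average each channel over the background pixels.
    return tuple(sum(p[k] for p in background) // n for k in range(3))
```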
According to an embodiment, the processor 120 may calibrate the display location of text translated into a different language and received from an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), based on motion information of the electronic device 101 detected by the sensor module 176. For example, the processor 120 may continuously collect motion information of the electronic device 101 through the sensor module 176 from the time an image (preview image) is obtained through the camera 180. The processor 120 may detect a difference between the image transmitted to the external electronic device and the image displayed when the text translated into the different language is received from the external electronic device, based on the motion information of the electronic device 101. The processor 120 may calibrate the display position of the text translated into the different language based on the difference between the images.
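Calibrating the display position then amounts to shifting the text anchor by the displacement accumulated during the round trip. A minimal sketch, assuming motion samples are already expressed as per-frame pixel offsets:

```python
def calibrated_position(original_pos, motion_samples):
    """Shift a text anchor by the displacement accumulated in flight.

    motion_samples is a sequence of (dx, dy) pixel offsets collected by
    the sensor module while the translation request was pending.
    """
    dx = sum(m[0] for m in motion_samples)
    dy = sum(m[1] for m in motion_samples)
    return (original_pos[0] + dx, original_pos[1] + dy)
```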
According to an embodiment, the processor 120 may refine the results of the translation service based on texts translated into a different language corresponding to a plurality of images (e.g., partial images or a whole image) received from an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108) through the communication module 190. For example, when receiving text translated into a different language corresponding to one of the images transmitted to the external electronic device, the processor 120 may determine whether text corresponding to another image already exists in the area for displaying the translated text. When there is no text corresponding to another image in the corresponding region, the processor 120 may render the translated text received from the external electronic device for display in the corresponding region. When there is text corresponding to another image in the corresponding region, the processor 120 may select the text having relatively high reliability, from among the text corresponding to the other image and the translated text received from the external electronic device, as the text to be displayed in the corresponding region. For example, the reliability of the text may be calculated based on at least one of the position of the text in the image or the accuracy of the text.
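The refinement rule, keeping whichever candidate text for a display region is more reliable, can be sketched as a small merge step. The dictionary layout and names are hypothetical:

```python
def merge_translation(region_texts, region, new_text, new_reliability):
    """Keep the more reliable candidate text for a display region.

    region_texts maps a region key to a (text, reliability) pair; the
    entry is replaced only when the new candidate is more reliable.
    """
    current = region_texts.get(region)
    if current is None or new_reliability > current[1]:
        region_texts[region] = (new_text, new_reliability)
    return region_texts[region][0]
```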
Fig. 3 is a flow diagram 300 for providing translation services by using an external electronic device in an electronic device, in accordance with certain embodiments of the present disclosure. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120).
Referring to fig. 3, in
In
In
In
When the text corresponding to the partial image transmitted to the external electronic device is not received (no in operation 307), the electronic device (e.g., the processor 120) may continuously recognize the motion of the electronic device or the external object included in the first image in
When receiving text corresponding to the partial image transmitted to the external electronic device (yes in operation 307), the electronic device (e.g., the processor 120) may compensate for a position of the text corresponding to the partial image based on motion information of the electronic device or an external object included in the first image and may display the text on a display device (e.g., the display device 160 of fig. 1) in
For example, the processor 120 may detect a position change of the first image displayed on the display device 160 or the external object included in the first image, which occurs between the time when the first image is obtained and the current time, based on the motion information of the electronic device or the external object included in the first image. The processor 120 may compensate for a position of text corresponding to the partial image based on a change in position of the first image or an external object included in the first image, and may control display of the text through the display device 160 to cover at least one external object.
The detection of a change in position of the external object from the first image to the second image may be determined in various ways. In some embodiments, the camera may encode the first image and the second image using MPEG. The motion information may be determined by examining the first image, the second image, and the motion vectors between the first image and the second image. In one embodiment, the electronic device may create the motion vectors by encoding the first image as an I-frame according to the MPEG standard and encoding the second image as a P-frame that depends on data of the first image. The motion vectors of nearby objects may be used to determine the motion.
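In the spirit of MPEG P-frame motion estimation, a block's motion vector can be found by an exhaustive sum-of-absolute-differences (SAD) search. This toy version, with nested lists standing in for a real encoder, is only illustrative:

```python
def motion_vector(prev, curr, block, search=2):
    """Estimate the (dy, dx) shift of a block between two frames.

    prev and curr are 2D lists of pixel values; block is (top, left,
    height, width) in prev. The offset minimizing the SAD is returned.
    """
    by, bx, bh, bw = block

    def sad(dy, dx):
        # Sum of absolute differences for the block at the given offset.
        return sum(abs(prev[by + r][bx + c] - curr[by + dy + r][bx + dx + c])
                   for r in range(bh) for c in range(bw))

    # Only consider offsets that keep the block inside the current frame.
    candidates = [(dy, dx)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= by + dy and by + dy + bh <= len(curr)
                  and 0 <= bx + dx and bx + dx + bw <= len(curr[0])]
    return min(candidates, key=lambda v: sad(*v))
```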
FIG. 4 is a flow diagram 400 for selectively sending images for translation services in an electronic device to an external electronic device, in accordance with certain embodiments of the present disclosure. The following description may be about an operation of transmitting at least one partial image of the first image obtained by the camera 180 to the external electronic device in
Referring to fig. 4, when a first image (e.g., a preview image) obtained by a camera (e.g., camera 180 of fig. 1) is displayed on a display device (e.g., display device 160 of fig. 1) (e.g.,
In operation 403, the electronic device (e.g., the processor 120) may identify a motion of the electronic device or at least one external object included in the first image. For example, the processor 120 may collect motion information of the electronic device 101 or at least one external object included in the first image from the time when the first image is obtained by the camera 180. For example, the motion of the electronic device 101 may be obtained by the sensor module 176 (e.g., acceleration sensor, gravity sensor).
In operation 405, the electronic device (e.g., the processor 120) may determine whether to request translation of the first image based on the quality of the first image and motion information of the electronic device or at least one external object included in the first image. For example, when the sharpness (e.g., blur state) of the first image is higher than or equal to a reference value (i.e., good), the processor 120 may determine to provide a translation service for the first image. As described above, although the present embodiment uses translation, the present disclosure is not limited to translation. For example, the processor 120 may determine to provide a translation service for the first image when the number of feature points of the first image is greater than a reference number or when the feature points are concentrated in a certain area. For example, when the motion of the electronic device 101 obtained by the sensor module 176 falls within a reference range, the processor 120 may determine to provide a translation service for the first image. For example, when the motion of the preview image obtained by the camera 180 falls within a reference range, the processor 120 may determine to provide a translation service for the first image. For example, the motion of the preview image may include a motion of at least one object included in the preview image. The processor 120 may detect the motion of an object included in the preview image by comparing successive preview images (e.g., the pixels forming each image) obtained by the camera 180.
When it is determined that translation of the first image is not requested (e.g., no in operation 405), the electronic device (e.g., the processor 120) may impose a restriction on the transmission of the first image. For example, when it is determined that translation of the first image is not requested, the processor 120 may control the display device 160 to display a guide message indicating that there is a limitation on the translation service. For example, the guide message may include the reason the translation service is restricted.
When it is determined that translation of the first image is requested (e.g., yes in operation 405), the electronic device (e.g., the processor 120) may transmit at least one partial image and a whole image of the first image to at least one external electronic device. For example, the processor 120 may extract at least one partial image from the first image based on at least one of a history of using the translation service (e.g., a text-region probability model) or a distribution of feature points of the first image. The processor 120 may transmit the at least one partial image and the whole image extracted from the first image to at least one external electronic device through the communication module 190.
Fig. 5 is a flow diagram 500 for extracting a partial image of an image obtained by a camera in an electronic device, according to some embodiments of the present disclosure. Fig. 6 illustrates a configuration 600 of images obtained by a camera, according to some embodiments of the present disclosure. The following description may be about an operation of transmitting an image to an external electronic device in operation 407 of fig. 4. In the following description, the electronic device may include the electronic device 101 of fig. 1 or at least a portion of the electronic device 101 (e.g., the processor 120). Object 610 is a text-bearing object. The electronic device may take a picture 620 of the object bearing the text and extract partial images 660 and 662 containing the text.
Referring to fig. 5, when a translation service (e.g., operation 405 of fig. 4) of a first image (e.g., a preview image) obtained by a camera (e.g., camera 180 of fig. 1) is provided, an electronic device (e.g., processor 120) may identify a probabilistic model of at least one text region corresponding to a history of using the translation service in
In
At
The processor 120 may set a region corresponding to at least one of the plurality of probabilistic models of the text region corresponding to the history of using the translation service, which overlaps with a region on which the feature points are at least partially concentrated, as the text region of the first image. For example, the text regions of the first image may include at least some regions estimated to have text within the first image. For example, the preview image 620 may include an image 630 of the external object 610, a language 650 of text included in the image, and a language 652 for translation of the text. The display device 160 may display a service list 640 related to the preview image 620 on at least some areas, as shown in fig. 6. The service list 640 may include a menu (e.g., "text") 642 for selecting a translation service.
In
In some cases, the partial images (e.g., the partial image 662) may not properly isolate the text. For example, the partial image 662 includes the text "apple" and a portion of the text "banana", where the bottom portion of "banana" is clipped.
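A simple containment test makes the clipping problem explicit: a text bounding box that is not fully inside the crop (like "banana" in the partial image 662) is at risk of being cut. The rectangle convention below is an assumption for illustration:

```python
def fully_contains(crop, text_box):
    """True if a text bounding box lies entirely inside a crop.

    Both rectangles are (x, y, width, height) tuples.
    """
    cx, cy, cw, ch = crop
    tx, ty, tw, th = text_box
    # All four edges of the text box must be inside the crop.
    return (cx <= tx and cy <= ty and
            tx + tw <= cx + cw and ty + th <= cy + ch)
```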
In
Fig. 7 is a flow diagram 700 for displaying, in an electronic device, translated text (or other corresponding text) that may be received from an external electronic device, in accordance with some embodiments of the present disclosure. Fig. 8A illustrates a
Referring to fig. 7, when text corresponding to an image is received from an external electronic device (e.g.,
In
When there is text corresponding to another image received from the external electronic device at a previous time (e.g., yes in operation 703), the electronic device (e.g., the processor 120) may refine a result of the translation service received from the external electronic device at the previous time based on the text corresponding to the image in
When there is no text corresponding to another image received from the external electronic device at a previous time (e.g., no in operation 703), or when the result of the translation service is completed (e.g., operation 705), the electronic device (e.g., the processor 120) may calibrate a display position of the text received from the external electronic device based on motion information of the electronic device or an external object included in the first image in
In some embodiments, the first image may comprise a live image provided by the camera and displayed on the display before the camera has captured a still image. The second image may comprise the live image provided by the camera when the corresponding text is received. During the time when the partial image is sent to the external device and the corresponding text is received, the object may have moved, or the camera may have moved. Thus, the object bearing the text will be located at a different position in the second image. Therefore, in order to place the received corresponding text on the object bearing the text, the motion is compensated relative to the original position of the text in the first image.
In
For example, note that in the second image, the colors of the pixels forming the object bearing the text may have changed. For example, if the object has moved, the illumination may have changed, resulting in different pixel colors.
In
According to an embodiment, the electronic device 101 (e.g., the processor 120) may set a language for translating the text extracted from the first image. For example, when an input (e.g., a touch input) on the language 652 for translating text displayed on the preview image 620 of fig. 6 is detected, the processor 120 may control the display device 160 to display an available language list 870 for translation, as shown in fig. 8F. The processor 120 may set a language selected from the available language list 870 as the language for translating the text extracted from the first image. As noted above, the present disclosure is not limited to language translation. In some embodiments, instead of the language list 870, the user may select whether the corresponding text is a synonym, updated information, or any other kind of correspondence.
According to an embodiment, the electronic device 101 (e.g., the processor 120) may recognize text included in the image through a first external electronic device (e.g., a first server) and may translate the text included in the image into text of a different language through a second external electronic device (e.g., a second server). For example, the processor 120 may control at least one partial image and a whole image of the preview image to be transmitted to the first external electronic device through the communication module 190. When receiving text extracted from the image and corresponding to the first language from the first external electronic device, the processor 120 may control transmission of the text corresponding to the first language to the second external electronic device through the communication module 190. As in
FIG. 9 is a
Referring to fig. 9, when text corresponding to an image is received from an external electronic device (e.g., yes in
In
In
In
FIG. 11 is a flow diagram 1100 for displaying translated text corresponding to a partial image in an electronic device, according to some embodiments of the present disclosure. The following description may be of operations to complete the results of the translation service in
Referring to fig. 11, when calibrating text corresponding to an image (e.g., a partial image or a whole image) received from an external electronic device, the electronic device (e.g., the processor 120) may detect the reliability of the corresponding text in operation 1101. For example, the processor 120 may detect the reliability of the corresponding text based on at least one of the accuracy of the text or the position of the text in the image (e.g., the partial image or the whole image) received from the external electronic device. For example, in the case of the first partial image 660 of fig. 6, since the text "banana" in the first partial image 660 is close to the center of the first partial image 660, the processor 120 may determine that the text has relatively high reliability. For example, in the case of the second partial image 662 of fig. 6, the processor 120 may detect the reliability of each of the texts "apple" and "banana" based on the position of each text in the second partial image 662. In this case, based on the positions of the texts "apple" and "banana" in the second partial image 662, it may be determined that the reliability of the text "apple" is higher than that of the text "banana". For example, the accuracy of the text may be determined based on whether the text maps to a word in a particular language.
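A position-based reliability score of the kind described above could weight text by its distance from the image center, so that centered text (such as "banana" in the partial image 660) outranks text near an edge. The normalization below is a hypothetical choice, not the disclosed formula:

```python
def text_reliability(text_box, image_size):
    """Score a text box higher the closer its center is to the image center.

    text_box is (x, y, width, height); image_size is (width, height).
    Returns a value in roughly (0.29, 1.0], where 1.0 means dead center.
    """
    tx, ty, tw, th = text_box
    iw, ih = image_size
    cx, cy = tx + tw / 2, ty + th / 2
    # Normalized distance from the image center: 0 at center, ~0.71 at corners.
    d = (((cx - iw / 2) / iw) ** 2 + ((cy - ih / 2) / ih) ** 2) ** 0.5
    return 1.0 - d
```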
In operation 1103, the electronic device (e.g., the processor 120) may determine whether text corresponding to another image is already displayed in the area of the first image (preview image) reserved for the text received from the external electronic device.
When there is no text corresponding to another image in the area of the first image (preview image) for displaying the text received from the external electronic device (e.g., no in operation 1103), the electronic device (e.g., the processor 120) may display the text received from the external electronic device on that area in operation 1105. For example, the processor 120 may render the image to display the text, translated into a different language and received from the external electronic device, on the text display area in the preview image set through operations 901 to 907. For example, when text (translated into text of a different language) corresponding to the second partial image 662 of FIG. 6 is received at a first time, the processor 120 may render the preview image to display the translated text on the display area corresponding to the second partial image 662 in the preview image.
When there is text corresponding to another image in the area of the first image (preview image) for displaying the text received from the external electronic device (e.g., yes in operation 1103), the electronic device (e.g., the processor 120) may determine whether the reliability of the text received from the external electronic device is higher than the reliability of the other text in operation 1107.
When the reliability of the text received from the external electronic device is higher than the reliability of the text corresponding to the other image (e.g., yes in operation 1107), the electronic device (e.g., the processor 120) may refine the text of the text display area to the text received from the external electronic device in operation 1109. For example, when determining the text corresponding to the first partial image 660 (e.g., "바나나", bananas in Korean), the processor 120 may render the preview image to display the refined text on the text display area of the first partial image 660.
When the reliability of the text received from the external electronic device is lower than or equal to the reliability of the other text (e.g., no in operation 1107), the electronic device (e.g., the processor 120) may retain the text corresponding to the other image displayed on the text display area of the first image (preview image).
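Operations 1103 through 1109 amount to keeping whichever text has the higher reliability for a given display area; a minimal sketch, with illustrative names:

```python
def update_text_display_area(current_text, current_reliability,
                             new_text, new_reliability):
    """Return the (text, reliability) pair to keep for a display area.

    - Area empty (operation 1103: no)    -> display the new text (1105).
    - New text more reliable (1107: yes) -> refine the area (1109).
    - Otherwise (1107: no)               -> hold the existing text.
    """
    if current_text is None:
        return new_text, new_reliability
    if new_reliability > current_reliability:
        return new_text, new_reliability
    return current_text, current_reliability
```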
According to an embodiment, the electronic device 101 (e.g., the processor 120) may maintain the text displayed on the first display region of the preview image when the text of the first display region received at a first time is the same as the text of the first display region received at a second time. For example, when the two texts are the same, the processor 120 may keep the previously rendered text without re-rendering the first display region.
According to an embodiment, the electronic device 101 (e.g., the processor 120) may continuously collect motion information of the electronic device 101 or an external object included in the first image from the time when the first image is obtained by the camera 180. The electronic device 101 may end the translation service before the translation service is completed based on the motion information of the electronic device 101 or the external object included in the first image. For example, when the motion of the electronic device 101 or an external object included in the first image falls outside of the reference range, the processor 120 may determine that it is not possible to display text translated into a different language. Thus, the processor 120 may stop the translation service.
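Ending the service when motion leaves the reference range might look like the following sketch; representing motion as scalar samples against a single threshold is a simplifying assumption:

```python
def translation_feasible(motion_samples, reference_limit):
    """Return False (i.e., stop the translation service) as soon as any
    collected motion sample of the device or external object falls outside
    the assumed reference limit."""
    return all(abs(sample) <= reference_limit for sample in motion_samples)
```

In practice the samples would come from a motion sensor or from tracking the external object across frames, collected continuously from the time the first image is obtained.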
The electronic device and the operating method thereof according to some embodiments transmit an image obtained through a camera to an external electronic device (e.g., a server), calibrate a display position of a translated text received from the external electronic device, or compensate for a difference between an image transmitted to the external electronic device (e.g., the server) for a translation service and an image displayed on a display, and display the translated text. Accordingly, the text translated by the external electronic device can be smoothly displayed on the image obtained by the camera.
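Compensating the overlay position for the motion measured during the server round trip can be sketched as accumulating per-frame offsets; this is a simplification, since real compensation would also account for rotation and scale:

```python
def compensated_overlay_position(original_position, motion_offsets):
    """Shift the text display position by the translational motion
    accumulated while the partial image was round-tripping to the
    external electronic device (server)."""
    x, y = original_position
    for dx, dy in motion_offsets:
        x += dx
        y += dy
    return x, y
```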
The electronic device and the operating method thereof according to some embodiments selectively transmit an image for a translation service to an external electronic device (e.g., a server) based on at least one of quality of an image obtained through a camera (e.g., distribution of feature points) or motion information of the electronic device. Therefore, the number of translations using the external electronic device can be reduced, and thus the consumption of network resources can be reduced.
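The selective-transmission decision can be sketched as a simple gate on feature-point count (a proxy for image quality) and device motion; both thresholds and the use of a count rather than a full distribution measure are assumptions:

```python
def should_transmit_for_translation(feature_points, motion_magnitude,
                                    min_feature_points=10, max_motion=0.5):
    """Transmit the image to the server only when it contains enough
    feature points and the device is sufficiently still, so that blurry
    or redundant frames do not consume network resources."""
    return (len(feature_points) >= min_feature_points
            and motion_magnitude <= max_motion)
```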
The electronic device and the operating method according to some embodiments transmit at least one partial image and a whole image corresponding to an image obtained by a camera to an external electronic device (e.g., a server). Accordingly, it is possible to reduce translation-service delay caused by computation at the external electronic device and by the network.
Certain embodiments of the present disclosure may be implemented by software including instructions stored in a machine-readable (e.g., computer-readable) storage medium (e.g., the memory 130 of FIG. 1). The machine is a device capable of retrieving stored instructions from the storage medium and operating according to the retrieved instructions, and may include the electronic device 101, 102, or 104 or the server 108. When the instructions are executed by a processor (e.g., the processor 120), the processor may perform the functions corresponding to the instructions directly, or may use other elements under the control of the processor to perform them. The instructions may include code generated or executed by a compiler or interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term "non-transitory" simply means that the storage medium is tangible and does not include signals, regardless of whether data is stored semi-permanently or temporarily in the storage medium.
Methods according to certain embodiments disclosed herein may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. For example, the computer program product may be a downloadable application or computer program traded between a seller and a purchaser. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), or may be distributed online via an application store (e.g., Play Store™). If distributed online, at least a portion of the computer program product may be temporarily generated or at least temporarily stored in a machine-readable storage medium, such as a memory of a relay server, a server of an application store, or a manufacturer server.
Each element (e.g., module or program) according to some embodiments may comprise a single entity or multiple entities, and in some embodiments some of the above elements may be omitted, or other sub-elements may be added. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into a single element, and the integrated element may still perform the functions performed by each corresponding element in the same or a similar manner as before the corresponding element was integrated. Operations performed by a module, programming module, or other element according to some embodiments may run sequentially, in parallel, repeatedly, or in a heuristic manner. At least some of the operations may be performed according to another order, may be omitted, or may include other operations as well.
The present disclosure has been described with reference to various example embodiments thereof. It will be understood by those skilled in the art that the present disclosure may be embodied in modified forms without departing from the essential characteristics thereof. The disclosed embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. The scope of the present disclosure is defined not by the detailed description but by the appended claims, and all differences within the scope will be construed as being included in the present disclosure.