Image processing method and device

Document No. 193397, published 2021-11-02

Note: this invention, "Image processing method and device" (图像处理方法及装置), was created by 王俊贤 and filed on 2021-07-28. The application discloses an image processing method and device, belonging to the technical field of information processing. The method comprises the following steps: receiving a first input of a user, wherein the first input is used to select a target image and to input target information; in response to the first input, associating the target information with a target object to generate a target file; and sending the target file to a second terminal; wherein the target image comprises the target object; the target information comprises at least one of text information or voice information; and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

1. An image processing method, comprising:

receiving a first input of a user, wherein the first input is used to select a target image and to input target information;

in response to the first input, associating the target information with a target object to generate a target file;

sending the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information; and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

2. The image processing method according to claim 1, wherein, in a case where the target information comprises text information, the associating the target information with a target object comprises:

selecting target text from the text information and marking the target object in the target image;

and associating the target text with the marked target object.

3. The image processing method according to claim 2, wherein the marking the target object comprises: changing a display mode of the target object in the target image.

4. The image processing method according to claim 3, wherein the associating the target text with the marked target object comprises:

in a case where the target text is selected, displaying the marked target object in the target image.

5. The image processing method according to claim 1, wherein, in a case where the target information comprises voice information, the associating the target information with the target object comprises:

receiving a second input of a user that triggers voice recognition and image recognition;

in response to the second input, extracting keywords from the voice information, and determining the target object from the target image according to the keywords;

and associating the keywords with the target object.

6. The image processing method according to claim 5, wherein the associating the keyword with the target object comprises:

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

wherein the key image is an image in which the target object corresponding to the keyword is marked.

7. An image processing method, comprising:

receiving a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and displaying the image information.

8. The image processing method according to claim 7, wherein, in a case where the target information comprises the text information, the displaying the image information comprises:

displaying the text information;

receiving a third input of a user selecting target text in the text information;

displaying the marked target object in the target image in response to the third input;

wherein the target text corresponds to the target object, and the target text is the text used by the first terminal to mark the target object.

9. The image processing method according to claim 7, wherein, in a case where the target information comprises the voice information, the displaying the image information comprises:

receiving a fourth input of a user selecting playback of the voice information;

in response to the fourth input, playing the voice information with the image information displayed;

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

wherein the key image is an image in which the target object corresponding to the keyword is marked.

10. An image processing apparatus, comprising:

a receiving module, configured to receive a first input of a user, wherein the first input is used to select a target image and to input target information;

an association module, configured to associate, in response to the first input, the target information with a target object to generate a target file;

a sending module, configured to send the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

11. An image processing apparatus, comprising:

a receiving module, configured to receive a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and a display module, configured to display the image information.

Technical Field

The application belongs to the technical field of information processing, and particularly relates to an image processing method and device.

Background

When a person wants to share or describe a picture with others without meeting face to face, communication software requires sending the picture together with descriptive text or voice, so that the receiver can understand why the picture was sent and what the sender wants to express.

In this scenario, because the picture and the text or voice resources are sent separately, the visual content is decoupled from the text or voice. It is therefore difficult to describe the picture as easily as in a face-to-face spoken conversation, and it is likewise difficult for the receiver to understand the received information.

Disclosure of Invention

The embodiments of the present application aim to provide an image processing method and an image processing apparatus, which can solve the technical problems of low efficiency and poor precision of picture description in non-face-to-face picture sharing scenarios.

In a first aspect, an embodiment of the present application provides an image processing method, including:

receiving a first input of a user, wherein the first input is used to select a target image and to input target information;

in response to the first input, associating the target information with a target object to generate a target file;

sending the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

In a second aspect, an embodiment of the present application provides an image processing method, including:

receiving a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and displaying the image information.

In a third aspect, an embodiment of the present application provides an image processing apparatus, including:

a receiving module, configured to receive a first input of a user, wherein the first input is used to select a target image and to input target information;

an association module, configured to associate, in response to the first input, the target information with a target object to generate a target file;

a sending module, configured to send the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:

a receiving module, configured to receive a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and a display module, configured to display the image information.

In a fifth aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect or the second aspect.

In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first or second aspect.

In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect or the second aspect.

According to the image processing method and apparatus provided in the embodiments of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, so that the receiver can understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.

Drawings

Fig. 1 is a first schematic flowchart of an image processing method according to an embodiment of the present application;

Fig. 2 is a first schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 3 is a second schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 4 is a third schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 5 is a fourth schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 6 is a fifth schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 7 is a sixth schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 8 is a seventh schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 9 is an eighth schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 10 is a ninth schematic diagram of associating target information with a target object according to an embodiment of the present application;

Fig. 11 is a second schematic flowchart of an image processing method according to an embodiment of the present application;

Fig. 12 is a schematic diagram of displaying a target file according to an embodiment of the present application;

Fig. 13 is a first schematic structural diagram of an image processing apparatus according to an embodiment of the present application;

Fig. 14 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present application;

Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;

Fig. 16 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.

The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, and not necessarily to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one class, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.

The following describes in detail an image processing method and an image processing apparatus provided in the embodiments of the present application with reference to the accompanying drawings.

Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. Referring to fig. 1, an embodiment of the present application provides an image processing method, which may include:

step 110, receiving a first input of a user, wherein the first input is used to select a target image and to input target information;

step 120, in response to the first input, associating the target information with the target object to generate a target file;

step 130, sending the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display the image information obtained by associating the target information with the target object.

It should be noted that the execution subject of the image processing method provided in this embodiment of the present application may be a first terminal. The first terminal may be an intelligent electronic device, such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).

While using, for example, a chat app, the user may put the app into a particular mode. In this mode, the user can select a target picture and input target information.

In step 110, the first terminal may receive a first input of a user for selecting a target image and inputting target information.

As shown in fig. 2, the target image may be an image including 5 cartoon cat characters, and the target object may be at least one of the 5 cartoon cat characters included in the image.

The target information is information corresponding to the target object. For example, the target information may be text information, such as "I like this cat the best"; or voice information, such as "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest".

In step 120, the first terminal may associate the target information with the target object in response to the first input, thereby generating a target file.

For example, in a case where the target information is the text information "I like this cat the best", the first terminal may associate "this cat" with one of the 5 cartoon cat characters. In a case where the target information is the voice information "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest", the first terminal may associate {lower left corner, cat} with the cartoon cat character in the lower left corner of the target image, and associate {middle, cat} with the cartoon cat character in the middle of the target image.

After associating the target information with the target object, the first terminal may generate a target file according to the association result.

In step 130, the first terminal sends the target file to the second terminal, so that the second terminal can parse the target file and display the image information obtained by associating the target information with the target object.

It should be noted that the second terminal may be an intelligent electronic device, such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).

The second terminal and the first terminal may be the same type of terminal; for example, both may be mobile phones. They may also be different types of terminals; for example, the second terminal may be a PC and the first terminal may be a mobile phone.

According to the image processing method provided in this embodiment of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, making it easier for the receiver to understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.
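At the core of the method is a record that ties one piece of target information to one region of the target image. As a minimal sketch of what such an association record could look like, written in Python with field names that are illustrative assumptions rather than anything defined by the application:

```python
# A minimal sketch of an association record linking target information to
# the region of a target object; all field names are illustrative
# assumptions, not defined by the application.
from dataclasses import dataclass

@dataclass
class Association:
    info_kind: str                     # "text" or "voice"
    info_ref: str                      # the target text, or a keyword from the voice
    region: tuple[int, int, int, int]  # target object's bounding box (left, top, right, bottom)
    mark_style: str                    # e.g. "circle", "enlarge", "highlight", "vibrate"
```

A target file would then bundle the target image together with a list of such records, which is what the second terminal parses and displays.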

In one embodiment, in a case where the target information includes text information, associating the target information with the target object may include:

selecting target text from the text information and marking the target object in the target image;

and associating the target text with the marked target object.

As shown in fig. 2, before associating the target information with the target object, the user may enter text information, such as "I like this cat the best", in a text input box.

Thereafter, the first terminal may receive the above input from the user, which includes two operation instructions:

Operation instruction 1: selecting the target text, e.g., "this cat", from the text information entered by the user, as shown in fig. 3. Operation instruction 2: marking the target object in the target image; for example, when the "this cat" expressed by the user refers to the cartoon cat character in the middle, the user marks that character in the target image, as shown in fig. 4.

Optionally, the first terminal may record the user's marking operation on the target object, for example a two-finger zoom-in gesture, along with related information such as the duration of the operation. This operation information is included in the target file, so that after receiving the target file the second terminal can parse it and display the associated image information accordingly.

Marking the target object may include: changing the display mode of the target object in the target image.

For example, the user may operate on the cartoon cat character in the middle of the target image so that it is displayed in the target image in an enlarged, highlighted, vibrating, or circled manner, among others. The embodiment of the present application does not limit the specific manner of marking.
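As an illustration of the "circled" display mode, a sketch using the Pillow imaging library might look as follows; the rectangular region and the red outline are assumptions made for the example, not requirements of the application:

```python
# A sketch of the "circled" marking style, assuming the Pillow library and
# a rectangular bounding box for the target object; the other styles
# (enlarge, highlight, vibrate) would be implemented analogously.
from PIL import Image, ImageDraw

def mark_by_circling(image_path: str, region: tuple[int, int, int, int]) -> Image.Image:
    """Return a copy of the target image with the target object's region circled."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.ellipse(region, outline="red", width=4)  # region = (left, top, right, bottom)
    return img
```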

The first terminal may then, in response to the user's input, associate the target text "this cat" with the marked cartoon cat character in the middle of the target image.

Associating the target text with the marked target object may mean that, when the target text is selected, the marked target object is displayed in the target image.

As shown in fig. 5, when the user selects the target text "this cat", the first terminal may display the cartoon cat character in the middle of the target image in at least one of an enlarged, highlighted, vibrating, or circled manner.

After the user completes the operation and the first terminal associates the target text with the target object, the first terminal may synthesize the operation information, the text information (including the target text), and the image information (including the target object corresponding to the target text) into a target file in a predetermined format.

The predetermined format may be, for example, PPT, GIF, etc.
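As a sketch of how such a target file might be assembled, the example below uses a hypothetical ZIP-with-manifest layout rather than the PPT or GIF formats named above; the file layout and field names are assumptions made for illustration:

```python
# A hypothetical target-file container: the target image plus a JSON
# manifest of the text and its associations, packed into a ZIP archive.
# The layout and field names are illustrative, not the format of the patent.
import json
import zipfile

def build_target_file(out_path: str, image_path: str, text: str, associations: list) -> None:
    """associations: list of dicts such as
    {"target_text": "this cat", "region": [l, t, r, b], "mark_style": "circle"}."""
    manifest = {"text": text, "associations": associations}
    with zipfile.ZipFile(out_path, "w") as zf:
        zf.write(image_path, arcname="target_image.png")
        zf.writestr("manifest.json", json.dumps(manifest, ensure_ascii=False))
```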

According to the image processing method provided in this embodiment of the present application, the target text is associated with the target object according to the user's input, so the association result reflects the user's intention, which can further improve the efficiency and precision of picture description.

In an embodiment, the image processing method provided in the embodiment of the present application may further include:

and highlighting the target characters in the target file.

It can be understood that highlighting the target text in the target file allows the user of the second terminal to quickly locate the target text in the text information and select it to determine the target object, which further speeds up understanding of why the picture was sent and what the sender wants to express.

In one embodiment, in a case where the target information includes voice information, associating the target information with the target object may include:

receiving a second input of a user that triggers voice recognition and image recognition;

in response to the second input, extracting keywords from the voice information, and determining a target object from the target image according to the keywords; and associating the keywords with the target object.

In step 110, the first terminal may receive the voice information for the target object input by the user and save it, as shown in fig. 6.

The second input may be an operation instruction to drag the voice information into the target image, as shown in fig. 7. That is, after the user drags the voice information into the target image, the first terminal triggers the image recognition function and the voice recognition function in response to the input.

Through the image recognition function, the first terminal can identify each candidate target object in the target image: the cartoon cat characters in the lower left corner, the lower right corner, the middle, the upper left corner, and the upper right corner.

Through the voice recognition function, the first terminal may determine a keyword related to the target object included in the voice information.

For example, when the voice information is "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest", the first terminal may extract the keywords "lower left corner", "cat", "middle", and "cat" related to the target objects.

Since multiple keywords are extracted from the voice information, the first terminal may combine them using a semantic recognition algorithm into: {lower left corner, cat} and {middle, cat}.

Then, according to {lower left corner, cat} and {middle, cat}, the first terminal may take the cartoon cat character in the lower left corner of the target image and the cartoon cat character in the middle of the target image as target objects, respectively.
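A minimal sketch of this pairing step is shown below; since the application does not specify the semantic recognition algorithm, the location vocabulary and the shape of the recognizer output are assumptions made for illustration:

```python
# A sketch of combining extracted keywords into (location, object) pairs and
# resolving each pair against the objects found by image recognition. The
# location vocabulary and the detected-object mapping are illustrative.
LOCATION_WORDS = {"lower left corner", "lower right corner", "middle",
                  "upper left corner", "upper right corner"}

def pair_keywords(keywords: list[str]) -> list[tuple[str, str]]:
    """["lower left corner", "cat", "middle", "cat"] ->
    [("lower left corner", "cat"), ("middle", "cat")]"""
    pairs, pending_location = [], None
    for word in keywords:
        if word in LOCATION_WORDS:
            pending_location = word
        elif pending_location is not None:
            pairs.append((pending_location, word))
            pending_location = None
    return pairs

def resolve_targets(pairs, detected):
    """detected: dict mapping (location, label) -> bounding box of that object."""
    return {pair: detected[pair] for pair in pairs if pair in detected}
```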

The first terminal then associates the keywords with the target objects.

Associating the keyword with the target object may include:

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

where the key image is an image in which the target object corresponding to the keyword is marked.

The first terminal marks the target objects; for example, the cartoon cat characters in the lower left corner and in the middle of the target image are circled, and the resulting key images are stored, as shown in fig. 8.

Then, the first terminal plays the voice information. When the words "lower left corner" in "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest" are played, the first terminal displays the marked cartoon cat character in the lower left corner of the target image, until the word "middle" in the same sentence is played, as shown in fig. 9.

When the word "middle" is played, the first terminal displays the marked cartoon cat character in the middle of the target image until playback of the voice information finishes, as shown in fig. 10.

After the keywords have been associated with the target objects on the first terminal, the first terminal may synthesize the voice information (including the keywords) and the image information (including the target objects corresponding to the keywords) into a target file in a video format or a GIF format accompanied by the voice information.
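The timing rule described above (each key image is shown from its keyword's playback time until the next keyword, or until the audio ends) can be sketched as building a display timeline. The keyword timestamps are assumed to be supplied by the speech recognizer; the structure below is illustrative:

```python
# A sketch of turning keyword timestamps into a display timeline. Each
# marked key image is shown from its keyword's playback time until the
# next keyword, or until the audio ends. Timestamps are in seconds and
# assumed to come from the speech recognizer.
def build_timeline(keyword_times: list[tuple[float, str]], audio_duration: float) -> list[dict]:
    """keyword_times: time-ordered (playback_time, key_image_path) pairs."""
    timeline = []
    for i, (start, image) in enumerate(keyword_times):
        end = keyword_times[i + 1][0] if i + 1 < len(keyword_times) else audio_duration
        timeline.append({"start": start, "end": end, "image": image})
    return timeline
```

For the example above, build_timeline([(1.2, "cat_lower_left.png"), (3.5, "cat_middle.png")], 5.0) would show the first key image from 1.2 s to 3.5 s and the second from 3.5 s to the end of the audio; the timestamps here are, of course, made up.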

According to the image processing method provided in this embodiment of the present application, the keywords in the voice information are associated with the target objects according to the user's input, so the association result reflects the user's intention, which can further improve the efficiency and precision of picture description.

Fig. 11 is a second flowchart of an image processing method according to the embodiment of the present application. Referring to fig. 11, an embodiment of the present application further provides an image processing method, which may include:

step 1110, receiving a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and step 1120, displaying the image information.

It should be noted that the execution subject of the image processing method provided in this embodiment of the present application may be a second terminal. The second terminal may be an intelligent electronic device, such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).

In step 1110, the second terminal may receive the target file sent by the first terminal through, for example, a chat app. The target file is generated by the first terminal as follows:

the first terminal may receive a first input of a user for selecting a target image and inputting target information.

As shown in fig. 2, the target image may be an image including 5 cartoon cat characters, and the target object may be at least one of the 5 cartoon cat characters included in the image.

The target information is information corresponding to the target object. For example, the target information may be text information, such as "I like this cat the best"; or voice information, such as "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest".

The first terminal may associate the target information with the target object in response to the first input, thereby generating a target file.

For example, in a case where the target information is the text information "I like this cat the best", the first terminal may associate "this cat" with one of the 5 cartoon cat characters. In a case where the target information is the voice information "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest", the first terminal may associate {lower left corner, cat} with the cartoon cat character in the lower left corner of the target image, and associate {middle, cat} with the cartoon cat character in the middle of the target image.

After associating the target information with the target object, the first terminal may generate a target file according to the association result and send the target file to the second terminal through, for example, a chat app.

In step 1120, after receiving the target file sent by the first terminal, the second terminal parses the target file and displays the image information obtained by associating the target information with the target object.
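For the hypothetical ZIP-with-manifest layout sketched in the sender-side example earlier, the receiving side would parse the target file along these lines (again, the layout is an illustrative assumption, not the format of the patent):

```python
# A receiver-side sketch matching the hypothetical ZIP-with-manifest
# layout used in the sender-side example.
import json
import zipfile

def parse_target_file(path: str) -> tuple[bytes, dict]:
    """Return the raw target image bytes and the parsed manifest."""
    with zipfile.ZipFile(path) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        image_bytes = zf.read("target_image.png")
    return image_bytes, manifest
```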


The second terminal and the first terminal may be the same type of terminal; for example, both may be mobile phones. They may also be different types of terminals; for example, the second terminal may be a PC and the first terminal may be a mobile phone.

According to the image processing method provided in this embodiment of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, making it easier for the receiver to understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.

In one embodiment, in the case that the target information includes text information, step 1120 may include:

displaying the text information;

receiving a third input of the user selecting target text in the text information;

displaying the marked target object in the target image in response to the third input;

wherein the target text corresponds to the target object, and the target text is the text used by the first terminal to mark the target object.

As shown in fig. 5, when the user selects the highlighted target text "this cat", the second terminal may, in response to the input, display the marked cartoon cat character in the middle of the target image.

The target text "this cat" corresponds to the cartoon cat character in the middle of the target image.

According to the image processing method provided in this embodiment of the present application, the target object associated with the target text is displayed when the user selects the target text, which can further improve the efficiency and precision of picture description.

In one embodiment, after the second terminal receives the target file sent by the first terminal, it may parse the target file to determine the target text associated with the target object, and highlight the target text.

Alternatively, in a case where the first terminal has already highlighted the target text, the second terminal may directly display it highlighted.

It can be understood that highlighting the target text allows the user of the second terminal to quickly locate the target text in the text information and select it to determine the target object, which further speeds up understanding of why the picture was sent and what the sender wants to express.

In one embodiment, where the target information comprises voice information, step 1120 may comprise:

receiving a fourth input of the user selecting playback of the voice information;

in response to the fourth input, playing the voice information with the image information displayed;

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

where the key image is an image in which the target object corresponding to the keyword is marked.

As shown in fig. 12, a play button may be provided on the cover of the target file. The user may tap the play button to make the second terminal start playing the voice information.

Upon receiving the input, the second terminal plays the voice information with the image information displayed. When the words "lower left corner" in "I like the cat in the lower left corner the most; the cat in the middle is the smallest and cutest" are played, the second terminal displays the marked cartoon cat character in the lower left corner of the target image, until the word "middle" in the same sentence is played, as shown in fig. 9.

When the word "middle" is played, the second terminal displays the marked cartoon cat character in the middle of the target image until playback of the voice information finishes, as shown in fig. 10.

According to the image processing method provided in this embodiment of the present application, the keywords in the voice information are associated with the target objects, so the association result reflects the user's intention, which can further improve the efficiency and precision of picture description.

It should be noted that, in the image processing method provided in this embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in this embodiment of the present application is described below taking an image processing apparatus executing the image processing method as an example.

Fig. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. Referring to fig. 13, an embodiment of the present application provides an image processing apparatus, which may include:

a receiving module 1310, configured to receive a first input of a user, wherein the first input is used to select a target image and to input target information;

an association module 1320, configured to, in response to the first input, associate the target information with a target object to generate a target file;

a sending module 1330, configured to send the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

According to the image processing apparatus provided in this embodiment of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, making it easier for the receiver to understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.

In an embodiment, in a case that the target information includes text information, the associating module 1320 is specifically configured to:

selecting target text from the text information and marking the target object in the target image;

and associating the target text with the marked target object.

In one embodiment, the association module 1320 is specifically configured to:

changing the display mode of the target object in the target image.

In one embodiment, the association module 1320 is specifically configured to:

displaying, in a case where the target text is selected, the marked target object in the target image.

In an embodiment, in a case that the target information includes voice information, the associating module 1320 is specifically configured to:

receiving a second input of a user that triggers voice recognition and image recognition;

in response to the second input, extracting keywords from the voice information, and determining the target object from the target image according to the keywords;

and associating the keywords with the target object.

In one embodiment, the association module 1320 is specifically configured to:

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

where the key image is an image in which the target object corresponding to the keyword is marked.

Fig. 14 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present application. Referring to fig. 14, an embodiment of the present application provides an image processing apparatus, which may include:

a receiving module 1410, configured to receive a target file sent by a first terminal, wherein the target file comprises image information generated after the first terminal associates target information with a target object in a target image, the target information is information for the target object, and the target information comprises at least one of text information or voice information;

and a display module 1420, configured to display the image information.

According to the image processing apparatus provided in this embodiment of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, making it easier for the receiver to understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.

In an embodiment, in a case that the target information includes the text information, the display module 1420 is specifically configured to:

displaying the text information;

receiving a third input of the user selecting target text in the text information;

displaying the marked target object in the target image in response to the third input;

wherein the target text corresponds to the target object, and the target text is the text used by the first terminal to mark the target object.

In an embodiment, in a case that the target information includes the voice information, the display module 1420 is specifically configured to:

receiving a fourth input of the user selecting playback of the voice information;

in response to the fourth input, playing the voice information with the image information displayed;

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

where the key image is an image in which the target object corresponding to the keyword is marked.

The image processing apparatus in the embodiment of the present application may be a standalone device, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this regard.

The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.

The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to 12, and is not described herein again to avoid repetition.

Optionally, as shown in fig. 15, an embodiment of the present application further provides an electronic device 1500, including a processor 1501, a memory 1502, and a program or instructions stored in the memory 1502 and executable on the processor 1501. When executed by the processor 1501, the program or instructions implement the processes of the image processing method embodiments above and can achieve the same technical effects; details are not repeated here to avoid repetition.

It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.

Fig. 16 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.

The electronic device 1600 includes, but is not limited to: radio frequency unit 1601, network module 1602, audio output unit 1603, input unit 1604, sensor 1605, display unit 1606, user input unit 1607, interface unit 1608, memory 1609, and processor 1610.

Those skilled in the art will appreciate that the electronic device 1600 may further include a power supply (e.g., a battery) for supplying power to the various components, which may be logically coupled to the processor 1610 via a power management system so as to manage charging, discharging, and power consumption. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not repeated here.

Wherein the input unit 1604 is configured to receive a first input of a user, the first input being used to select a target image and input target information;

a processor 1610, configured to associate the target information with the target object in response to the first input to generate a target file;

the radio frequency unit 1601 is configured to send the target file to a second terminal;

wherein the target image comprises the target object; the target information corresponds to the target object, and the target information comprises at least one of text information or voice information;

and the target file is used by the second terminal to display image information in which the target information is associated with the target object.

According to the electronic device provided in this embodiment of the present application, an image and the text or voice resources describing it are associated to generate a target file. This combines the visual content with the text or voice, making it easier for the receiver to understand why the image was sent and what the sender wants to express, which improves the efficiency and precision of picture description in non-face-to-face picture sharing scenarios and effectively improves the user experience.

Optionally, the processor 1610 is further configured to:

selecting target text from the text information and marking the target object in the target image;

and associating the target text with the marked target object.

Optionally, the processor 1610 is specifically configured to change a display manner of the target object in the target image.

Optionally, the processor 1610 is specifically configured to display the marked target object in the target image when the target text is selected.

Optionally, the input unit 1604 is further configured to receive a second input of a user that triggers voice recognition and image recognition;

the processor 1610 is further configured to:

in response to the second input, extracting keywords from the voice information, and determining the target object from the target image according to the keywords;

and associating the keywords with the target object.

Optionally, the processor 1610 is specifically configured to:

while the voice information is being played, displaying a key image from the playback time corresponding to a keyword in the voice information, and stopping display of the key image when playback of the voice information finishes or the next keyword is reached;

where the key image is an image in which the target object corresponding to the keyword is marked.

It should be understood that, in the embodiment of the present application, the input unit 1604 may include a graphics processing unit (GPU) 16041 and a microphone 16042; the graphics processor 16041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1606 may include a display panel 16061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1607 includes a touch panel 16071, also referred to as a touch screen, and other input devices 16072. The touch panel 16071 may include two parts: a touch detection device and a touch controller. The other input devices 16072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1609 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1610 may integrate an application processor and a modem processor, where the application processor primarily handles the operating system, user interfaces, and applications, and the modem processor primarily handles wireless communication. It can be appreciated that the modem processor may not be integrated into the processor 1610.

The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.

The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.

The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.

It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.

It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.

While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
