Image operation method and device and nonvolatile storage medium

Document No.: 681911  Publication date: 2021-04-30

Note: This technology, "Image operation method and device and nonvolatile storage medium", was created by 张哲维 and 刘波 on 2019-10-28. Abstract: The application discloses an image operation method and device and a nonvolatile storage medium. The method includes: displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, the second image is used for displaying the state of the target object in a second sub-area of the target area, the acquisition times of the first image and the second image are different, and the first sub-area and the second sub-area jointly form the target area; receiving an operation instruction for the first image; and performing a corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image. The method and the device solve the technical problem that the user experience is impaired because images to be compared cannot be operated synchronously.

1. A method of manipulating an image, comprising:

displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying a state of a target object in a first sub-area of a target area, the second image is used for displaying a state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different;

receiving an operation instruction for the first image;

and performing a corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image.

2. The method of claim 1, wherein synchronizing the operation performed on the first image to the second image comprises:

determining a first operation position at which the first image is operated on in the display interface;

determining a second operation position corresponding to the first operation position in the second image based on the first operation position;

and performing, at the second operation position, the same operation as that performed on the first image.

3. The method of claim 1,

receiving an operation instruction for the first image comprises: displaying, in the display interface, a control for operating the first image and the second image; receiving a selection instruction for the first image in the display interface, and receiving a trigger instruction for the control; and determining the operation instruction for the first image according to the trigger instruction;

performing the corresponding operation on the first image according to the operation instruction comprises: determining the operation corresponding to the trigger instruction; and performing the operation corresponding to the trigger instruction on the first image selected according to the selection instruction.

4. The method of claim 1,

receiving an operation instruction for the first image comprises: receiving a hardware combination instruction from an instruction input device; and selecting the first image according to the hardware combination instruction and generating the operation instruction for the first image.

5. The method of claim 1, wherein a division identifier is disposed between the first region and the second region.

6. The method of claim 5, wherein the division identifier comprises a dividing line for dividing the display interface into the first region and the second region; and the method further comprises: highlighting the dividing line when the dividing line is selected.

7. The method of claim 5, wherein the division identifier is movable, and the sizes of the first region and the second region change with the movement of the division identifier.

8. The method of claim 1, wherein the first sub-area and the second sub-area are two different areas that together form the target area; or the first sub-area and the second sub-area are the same sub-area in the target area.

9. A method of manipulating an image, comprising:

displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying a state of a target object in a first sub-area of a target area, the second image is used for displaying a state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different;

receiving an operation instruction for any one of the first image and the second image;

and performing a corresponding operation on that image according to the operation instruction, and synchronizing the operation performed on that image to the other image.

10. An image manipulation apparatus, comprising:

a display module, configured to display a first image in a first area of a display interface and display a second image in a second area of the display interface, wherein the first image is used for displaying a state of a target object in a first sub-area of a target area, the second image is used for displaying a state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different;

a receiving module, configured to receive an operation instruction for the first image;

and an operation module, configured to perform a corresponding operation on the first image according to the operation instruction and to synchronize the operation performed on the first image to the second image.

11. The apparatus according to claim 10, wherein a division identifier is disposed between the first area and the second area, the division identifier is movable, and the sizes of the first area and the second area change with the movement of the division identifier.

12. A non-volatile storage medium, comprising a stored program, wherein, when running, the program controls a device on which the storage medium is located to perform the image operation method according to any one of claims 1 to 8.

Technical Field

The present application relates to the field of image display, and in particular, to an image operation method and apparatus, and a non-volatile storage medium.

Background

In existing image comparison display technology, a target area is compared by opening two separate programs, each displaying one of the images. Moreover, when an operation (for example, zoomed-in display) is applied to the compared images, each image must be operated on separately, which degrades the user experience.

In view of the above problems, no effective solution has been proposed.

Disclosure of Invention

The embodiments of the present application provide an image operation method and device and a nonvolatile storage medium, so as to at least solve the technical problem that the user experience is impaired because images to be compared cannot be operated synchronously.

According to one aspect of the embodiments of the present application, an image operation method is provided, including: displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, and the second image is used for displaying the state of the target object in a second sub-area of the target area; receiving an operation instruction for the first image; and performing a corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image.

Optionally, synchronizing the operation performed on the first image to the second image includes: determining a first operation position at which the first image is operated on in the display interface; determining, based on the first operation position, a second operation position in the second image corresponding to the first operation position; and performing, at the second operation position, the same operation as that performed on the first image.

Optionally, receiving an operation instruction for the first image includes: displaying, in the display interface, a control for operating the first image and the second image; receiving a selection instruction for the first image in the display interface, and receiving a trigger instruction for the control; and determining the operation instruction for the first image according to the trigger instruction. Performing the corresponding operation on the first image according to the operation instruction then includes: determining the operation corresponding to the trigger instruction; and performing that operation on the first image selected according to the selection instruction.

Optionally, receiving an operation instruction for the first image includes: receiving a hardware combination instruction from an instruction input device; and selecting the first image according to the hardware combination instruction and generating the operation instruction for the first image.

optionally, a division identifier is disposed between the first region and the second region.

Optionally, the division identifier includes a dividing line for dividing the display interface into the first region and the second region; and the method further includes: highlighting the dividing line when the dividing line is selected.

Optionally, the division identifier is movable, and the sizes of the first area and the second area change with the movement of the division identifier.

Optionally, the first sub-area and the second sub-area are two different areas, and the first sub-area and the second sub-area jointly form the target area; alternatively, the first sub-area and the second sub-area are the same sub-area in the target area.

According to another aspect of the embodiments of the present application, an image operation method is provided, including: displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, the second image is used for displaying the state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different; receiving an operation instruction for either one of the first image and the second image; and performing a corresponding operation on that image according to the operation instruction, and synchronizing the operation to the other image.

According to another aspect of the embodiments of the present application, an image operation apparatus is provided, including: a display module, configured to display a first image in a first area of a display interface and display a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, the second image is used for displaying the state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different; a receiving module, configured to receive an operation instruction for the first image; and an operation module, configured to perform a corresponding operation on the first image according to the operation instruction and to synchronize the operation performed on the first image to the second image.

Optionally, a division identifier is disposed between the first area and the second area, the division identifier is movable, and the sizes of the first area and the second area change with the movement of the division identifier.

According to still another aspect of the embodiments of the present application, a non-volatile storage medium is provided, including a stored program, wherein, when running, the program controls a device on which the storage medium is located to perform the above image operation method.

In the embodiments of the present application, when a first image displayed in a first area of a display interface is operated on, the operation is synchronized to a second image displayed in a second area, wherein the acquisition times of the first image and the second image are different, and the first sub-area corresponding to the first image and the second sub-area corresponding to the second image jointly form a target area. In this way, the images to be compared can be operated synchronously, which improves the user experience.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

FIG. 1 is a schematic flow chart diagram of a method of operating an image according to an embodiment of the present application;

FIG. 2 is a schematic structural diagram of an image manipulation device according to an embodiment of the present application;

FIG. 3 is a schematic diagram of an image presentation interface according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of an image operation method according to an embodiment of the present application.

Detailed Description

To enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

According to an embodiment of the present application, a method embodiment of an image operation method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the order described here.

Fig. 1 is a schematic flow chart of an image operation method according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps:

step S102, displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in the first sub-area of the target area, the second image is used for displaying the state of the target object in the second sub-area of the target area, and the acquisition time of the first image is different from that of the second image;

for example, where the target area is a field area, the first and second images show the status of the crop in the first and second sub-areas, respectively, including but not limited to: growth status, e.g., plant height, leaf color, etc.

Specifically, the display interface may be an interface displayed on a mobile device such as a smart phone (including Android and iOS phones), a tablet computer such as an iPad, a palmtop computer, or a notebook computer, or an interface displayed on a terminal device such as a personal computer.

In some embodiments of the present application, the first sub-area and the second sub-area are two different areas that together form the target area. For example, in the display interface shown in fig. 3, the left side is the first sub-area, captured in May 2019, and the right side is the second sub-area, captured in June 2019. The first sub-area and the second sub-area make up the whole target area; the dividing line in the middle separates the two sub-areas in space, while the images corresponding to them differ in acquisition time. That the first sub-area and the second sub-area jointly form the target area means that they can be spliced into the target area in space, so that the boundaries of the first image and the second image coincide on the two sides of the dividing line in the display interface.

In other embodiments of the present application, the first sub-area and the second sub-area may also be the same area in the target area. In this way, images of the same region at different time points can be contrasted and displayed.

In some embodiments of the present application, the displayed images may be determined as follows: taking the dividing line as the dividing boundary, determining first contour information of the first area and second contour information of the second area; and extracting a first target image from the first image based on the first contour information, and a second target image from the second image based on the second contour information.
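The extraction step above can be sketched as follows. This is an illustrative assumption, not the patent's actual implementation: the function name is hypothetical, the dividing line is taken to be vertical, and the "contour information" reduces to a pixel-column boundary on NumPy arrays.

```python
import numpy as np

def extract_targets(first_image, second_image, divide_x):
    """Treat the vertical dividing line at column `divide_x` as the
    boundary: keep the columns left of it from the first image and the
    columns from it onward from the second image."""
    assert first_image.shape == second_image.shape
    first_target = first_image[:, :divide_x]    # first sub-area
    second_target = second_image[:, divide_x:]  # second sub-area
    return first_target, second_target

# Two 4x6 single-channel "images" of the same target area,
# captured at different times.
img_may = np.zeros((4, 6), dtype=np.uint8)
img_june = np.full((4, 6), 255, dtype=np.uint8)
left, right = extract_targets(img_may, img_june, divide_x=2)
print(left.shape, right.shape)  # (4, 2) (4, 4)
```

Displayed side by side, the two crops splice into one view of the target area, with the seam at the dividing line.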

Step S104, receiving an operation instruction aiming at the first image;

Optionally, the operation instruction may be triggered by a corresponding control in the display interface, such as a zoom-in/zoom-out control, a rotation control, an editing control, a selection control, a frame-selection control, a translation control, and the like.

And step S106, performing corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image.

In some embodiments of the present application, the operation performed on the first image may be synchronized to the second image as follows: determining a first operation position at which the first image is operated on in the display interface; determining, based on the first operation position, a second operation position in the second image corresponding to the first operation position; and performing, at the second operation position, the same operation as that performed on the first image.

The first and second operation positions may be expressed as pixel coordinates, in which case the content of the image can be modified by editing or frame-selecting the image at those coordinates. Alternatively, a position may be represented by a control in the interface. In that case, determining the first operation position when the first image is operated on in the display interface includes: determining the first control used when operating on the first image, and taking that control as the first operation position; determining the operation instruction corresponding to the first control; and performing the corresponding operation on the second image based on that operation instruction, so that the second image is operated on synchronously.
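The former, pixel-coordinate case can be sketched as follows. The sketch assumes, for illustration only, that both regions display their images at the same scale, so the mapping simply preserves the offset from each region's origin; the function name and coordinate convention are hypothetical.

```python
def map_position(first_pos, first_origin, second_origin):
    """Map an operation position in the first region to the corresponding
    position in the second region by preserving the (dx, dy) offset
    relative to each region's top-left origin."""
    dx = first_pos[0] - first_origin[0]
    dy = first_pos[1] - first_origin[1]
    return (second_origin[0] + dx, second_origin[1] + dy)

# A click at (130, 80) inside the first region (origin (100, 50))
# maps to the same relative point in the second region (origin (400, 50)).
print(map_position((130, 80), (100, 50), (400, 50)))  # (430, 80)
```

If the two regions were displayed at different zoom levels, the offset would additionally need to be scaled by the ratio of the zoom factors.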

For the latter case, in some embodiments of the present application, the operation instruction for the first image may be received as follows: displaying, in the display interface, a control for operating the first image and the second image; receiving a selection instruction for the first image in the display interface, and receiving a trigger instruction for the control; and determining the operation instruction for the first image according to the trigger instruction. Performing the corresponding operation on the first image according to the operation instruction then includes: determining the operation corresponding to the trigger instruction; and performing that operation on the first image selected according to the selection instruction.

In some embodiments of the present application, the operation instruction for the first image may also be received as follows: receiving a hardware combination instruction from an instruction input device; and selecting the first image according to the hardware combination instruction and generating the operation instruction for the first image. The hardware combination instruction includes, but is not limited to, a key combination instruction of the device.
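The hardware-combination path can be sketched as a lookup table from key combinations to operations. The specific combinations and operation names below are hypothetical, not taken from the source:

```python
# Hypothetical hotkey table mapping hardware combinations to operations.
HOTKEYS = {
    frozenset({"ctrl", "+"}): "zoom_in",
    frozenset({"ctrl", "-"}): "zoom_out",
    frozenset({"ctrl", "r"}): "rotate",
}

def resolve(keys):
    """Return the operation bound to the pressed combination, or None."""
    return HOTKEYS.get(frozenset(keys))

print(resolve(["ctrl", "r"]))  # rotate
print(resolve(["alt", "x"]))   # None
```

Using `frozenset` makes the lookup order-independent, so "Ctrl then R" and "R then Ctrl" resolve to the same operation.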

Optionally, a division identifier is disposed between the first region and the second region. The division identifier includes a dividing line for dividing the display interface into the first region and the second region. The division identifier is movable, and the sizes of the first area and the second area change with its movement. Specifically: receiving a moving instruction for the dividing line; and, in response to the moving instruction, moving the dividing line to adjust the sizes of the first region and the second region.

To give the user clearer operation feedback, the dividing line may be highlighted when it is selected, for example by rendering it bold or blinking.

In some embodiments of the present application, when the dividing line is selected, a pattern indicating the operational state of the dividing line may also be generated, where the pattern indicates that the dividing line can be moved.

For example, when the mouse focus is moved to the dividing line by clicking in the display interface, the operable pattern is displayed at the clicked or selected position, and the dividing line is thickened or highlighted to enhance the visual effect and indicate that the line segment is selected.

When the display areas are adjusted using the dividing line, the following process may be performed: detecting the moving direction of the dividing line in response to the moving instruction; when the moving direction indicates movement toward the first area, increasing the display area of the second area and reducing the display area of the first area; and when the moving direction indicates movement toward the second area, increasing the display area of the first area and reducing the display area of the second area. It should be noted that the display areas of the two regions change by the same amount.
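The adjustment rule above (one region grows by exactly the amount the other shrinks, so the total is conserved) can be sketched as follows; the vertical-line and pixel-width assumptions, and the function name, are illustrative only:

```python
def move_dividing_line(divide_x, delta, total_width, min_width=1):
    """Move a vertical dividing line by `delta` pixels (positive = toward
    the second region) and return the new line position plus the widths
    of the two regions; their sum never changes."""
    # Clamp so neither region collapses below min_width.
    new_x = max(min_width, min(total_width - min_width, divide_x + delta))
    first_width = new_x                 # grows when the line moves right
    second_width = total_width - new_x  # shrinks by the same amount
    return new_x, first_width, second_width

# Moving the line 3 px toward the second region on a 10-px-wide interface:
print(move_dividing_line(5, 3, 10))  # (8, 8, 2)
```

The clamp also gives a natural stopping point: however far the user drags, each region keeps at least `min_width` pixels on screen.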

For example, in the display interface shown in fig. 3, the middle solid line represents the movable dividing line, the upper arrow represents its moving direction (toward the second sub-region), and the right dotted line represents the position of the dividing line after the move. Since the dividing line moves toward the second sub-region, the display area of the first sub-region increases and the display area of the second sub-region decreases; the opposite direction works the same way. In this manner, the sizes of the first region and the second region can be adjusted by moving the dividing line.

As described above, operation controls such as a zoom-in/zoom-out control, a rotation control, an editing control, a selection control, a frame-selection control, and a translation control may be provided on the visual interface; when one of the pictures is operated on, the same operation is automatically applied to the other picture. The picture operations can also be triggered directly via mouse-and-keyboard hotkey combinations, without clicking the controls. Specifically, when the system receives a synchronous operation instruction, it identifies the operation position in the first picture and then performs and displays the same operation at the corresponding position in the second picture, so that the synchronization is visible to the user.
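The end-to-end linkage just described can be sketched with a small dispatcher that records an operation on the selected picture and replays it on the other; the class name, operation names, and the identity position mapping are illustrative assumptions, not the patent's implementation:

```python
class LinkedViewer:
    """Apply an operation to one of two pictures and replay it on the
    other, mirroring the synchronous behaviour described above."""

    def __init__(self):
        self.history = {"first": [], "second": []}

    def apply(self, picture, operation, position):
        other = "second" if picture == "first" else "first"
        self.history[picture].append((operation, position))
        # Replay the same operation at the corresponding position of the
        # other picture (identity mapping assumed for simplicity).
        self.history[other].append((operation, position))

viewer = LinkedViewer()
viewer.apply("first", "zoom_in", (120, 80))
print(viewer.history["second"])  # [('zoom_in', (120, 80))]
```

In a real interface, `apply` would be wired to both the on-screen controls and the hotkey handler, and the replayed position would go through the region-to-region coordinate mapping rather than being copied verbatim.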

Fig. 2 is a schematic structural diagram of an image operating device according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:

the display module 20 is configured to display a first image in a first area of a display interface, and display a second image in a second area of the display interface, where the first image is used to display a state of a target object in the first sub-area of the target area, the second image is used to display a state of the target object in the second sub-area of the target area, and acquisition times of the first image and the second image are different;

a receiving module 22, configured to receive an operation instruction for the first image;

and the operation module 24 is configured to perform corresponding operation on the first image according to the operation instruction, and synchronize the operation performed on the first image to the second image.

Optionally, a division identifier is disposed between the first area and the second area, the division identifier is movable, and the sizes of the first area and the second area change with the movement of the division identifier.

The first sub-area and the second sub-area are two different areas, and the first sub-area and the second sub-area jointly form a target area; or the first sub-area and the second sub-area are the same sub-area in the target area.

In some embodiments of the present application, the operation module 24 is further configured to determine a first operation position when the display interface operates on the first image; determining a second operation position corresponding to the first operation position in the second image based on the first operation position; and executing the same operation as the first image at the second operation position.

The receiving module 22 is further configured to display a control for operating the first image and the second image in the display interface; receiving a selection instruction of the first image in the display interface, and receiving a trigger instruction of the control; determining an operation instruction aiming at the first image according to the trigger instruction; the operation module 24 is further configured to determine, according to the trigger instruction, an operation corresponding to the trigger instruction; and carrying out operation corresponding to the trigger instruction on the first image selected according to the selection instruction.

It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 2, and details are not described here again.

An embodiment of the present application further provides an image operation method. As shown in fig. 4, the method includes:

step S402, displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in the first sub-area of the target area, the second image is used for displaying the state of the target object in the second sub-area of the target area, and the acquisition time of the first image is different from that of the second image;

step S404, receiving an operation instruction aiming at any one of the first image and the second image;

step S406, performing corresponding operation on any one image according to the operation instruction, and synchronizing the operation performed on any one image to other images.

It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 to 3 for a preferred implementation of the embodiment shown in fig. 4, and details are not repeated here.

The embodiments of the present application further provide a non-volatile storage medium. The storage medium includes a stored program which, when running, controls a device on which the storage medium is located to perform the above image operation method; for example, it may store instructions for implementing the following functions: displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, the second image is used for displaying the state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different; receiving an operation instruction for the first image; and performing a corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image.

Optionally, the nonvolatile storage medium is further configured to store program instructions for implementing the following functions: determining a first operation position at which the first image is operated on in the display interface; determining, based on the first operation position, a second operation position in the second image corresponding to the first operation position; and performing, at the second operation position, the same operation as that performed on the first image.

The embodiments of the present application further provide a processor configured to run the program instructions included in the storage medium; when running, the program controls a device on which the storage medium is located to perform the above image operation method, for example by executing the following operation instructions: displaying a first image in a first area of a display interface, and displaying a second image in a second area of the display interface, wherein the first image is used for displaying the state of a target object in a first sub-area of a target area, the second image is used for displaying the state of the target object in a second sub-area of the target area, and the acquisition times of the first image and the second image are different; receiving an operation instruction for the first image; and performing a corresponding operation on the first image according to the operation instruction, and synchronizing the operation performed on the first image to the second image.

The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part of it that contributes over the prior art, or the technical solution as a whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
